Assessing automaticity in audiovisual integration of speech
Poster
Salvador Soto-Faraco
Departament de Psicologia Bàsica, Universitat de Barcelona
Agnès Alsius
Departament de Psicologia Bàsica, Universitat de Barcelona
Jordi Navarra
Departament de Psicologia Bàsica, Universitat de Barcelona
Ruth Campbell
Department of Human Communication Science, University College London

Abstract ID Number: 138
Full text: Not available
Last modified: May 20, 2003

Abstract
The McGurk effect is usually presented as an example of fast, automatic audiovisual integration. We report a series of experiments designed to assess these claims directly. First, we used a syllabic version of the speeded classification paradigm, in which response latencies to the first (target) syllable of a word are slowed by irrelevant variation in the second (irrelevant) syllable. This interference effect is often interpreted as an inability to filter out the irrelevant stimulus dimension. We managed to produce (Experiment 1) and to eliminate (Experiment 2) syllabic interference solely by means of 'illusory' (McGurk) audiovisual stimuli, suggesting that audiovisual integration occurs prior to attentional selection in this paradigm. A second paradigm manipulated concurrent visual attentional load in a simple speech identification task in which visual ('date') and auditory ('bate') inputs conflicted (giving rise to a McGurk percept, 'date'). We found an increase in the proportion of purely auditory percepts when attentional load was high (Experiment 3), suggesting that audiovisual speech integration may not be completely independent of attention. This result cannot be attributed simply to an effect of attention on output (repeating back the words), because attention had no effect on participants' responses when the video track was degraded (Experiment 4). One interpretation of this pattern of results is that audiovisual integration proceeds automatically whenever attentional resources are available, but fails if (visual) attentional resources are exhausted.