Integrating audiovisual speech information: an fMRI study
Poster
Jeffery Jones
ATR International -- Human Information Science Laboratories
Daniel Callan
ATR International -- Human Information Science Laboratories

Abstract ID Number: 71
Full text: Not available
Last modified: May 20, 2003

Abstract
Although temporal and spatial coincidence are necessary for integrating auditory and visual information about nonspeech events, the integration of audiovisual speech information occurs despite considerable temporal and spatial discrepancies, suggesting that distinct neural mechanisms may be involved. During two fMRI experiments, subjects saw and heard a speaker and then identified the consonants produced. In the first experiment, congruent audiovisual stimuli were presented with the acoustics either synchronous with video of the speaker or delayed by 250 ms. When audiovisual stimuli were synchronous, more extensive enhanced bilateral activity was found in the superior temporal gyrus and sulcus than when the sound was delayed. Conversely, more activity was observed in the right premotor cortex and inferior parietal lobule when the acoustics were delayed. In the second experiment, both congruent and incongruent audiovisual stimuli were presented in synchrony or +/- 400 ms out of phase. Regression analysis showed a relationship between acoustically influenced consonant perception and increased levels of activation in visual cortex. Moreover, more activation in the posterior parietal cortex was observed when stimuli were incongruent. Together, the results of these studies suggest a complex interaction between unimodal and polymodal speech processing regions. Funded by TAO of Japan.