4th Annual Meeting of the International Multisensory Research Forum

Humans integrate auditory and visual information in a statistically optimal fashion
Single Paper Presentation

Ladan Shams
Department of Psychology, UCLA

Whee Ky Ma
Division of Biology, California Institute of Technology

Graeme Smith
Division of Biology, California Institute of Technology

Abstract ID Number: 94
Full text: Not available
Last modified: May 20, 2003

Abstract
Temporally coincident signals in different sensory modalities do not always originate from the same source and thus should not, and do not, always get integrated. Previous models of cross-modal interaction, however, have focused exclusively on conditions in which signals from the different modalities are fused, and cannot account for conditions in which the signals are not integrated. We developed a new model that does not assume mandatory integration. The model uses Bayesian inference (i.e., an ideal observer) to infer the causes of the various sensory signals. We used the sound-induced flash illusion (a single flash accompanied by two auditory beeps is perceived as two flashes) as a testbed for the model. The model's predictions fit the data well, both in conditions where the auditory and visual signals are integrated and in conditions where they are not. These results indicate that human performance is highly consistent with that of an ideal observer, implying that the brain combines auditory and visual signals using a mechanism similar to Bayesian inference within a framework similar to the proposed model. Humans therefore seem to integrate auditory and visual signals in a statistically optimal fashion.
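
To make the inference concrete, below is a minimal Python sketch of an ideal observer of the kind described, applied to the sound-induced flash illusion. All numbers (the joint prior over event counts and the likelihoods of the noisy visual and auditory counts) are illustrative assumptions, not the values fitted in the study; the point is only that a joint prior with off-diagonal mass lets the observer integrate or not, depending on the evidence.

# A minimal ideal-observer sketch (Python). The hypotheses are the true numbers
# of visual events Zv and auditory events Za (here, 1 or 2 each).
events = [1, 2]

# Hypothetical joint prior P(Zv, Za): mass on the diagonal favors a common
# cause, but the off-diagonal mass means integration is not mandatory.
prior = {(1, 1): 0.4, (2, 2): 0.4, (1, 2): 0.1, (2, 1): 0.1}

# Hypothetical likelihoods P(x | Z) of the noisy sensory counts: audition is
# assumed more reliable than vision for counting brief events.
p_visual = {1: {1: 0.7, 2: 0.3}, 2: {1: 0.3, 2: 0.7}}  # P(xv | Zv)
p_audio = {1: {1: 0.9, 2: 0.1}, 2: {1: 0.1, 2: 0.9}}   # P(xa | Za)

def posterior(xv, xa):
    """Posterior P(Zv, Za | xv, xa) by Bayes' rule over all event-count pairs."""
    unnorm = {(zv, za): p_visual[zv][xv] * p_audio[za][xa] * prior[(zv, za)]
              for zv in events for za in events}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# One flash accompanied by two beeps: the posterior peaks at (Zv=2, Za=2),
# i.e., the observer integrates and "sees" the illusory second flash.
for (zv, za), p in sorted(posterior(xv=1, xa=2).items()):
    print(f"P(Zv={zv}, Za={za} | xv=1, xa=2) = {p:.3f}")

With these illustrative numbers, a single flash paired with two beeps yields the highest posterior on two visual and two auditory events, reproducing the illusory second flash; lowering the assumed auditory reliability shifts the posterior back toward the veridical single flash, i.e., the signals are not integrated.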

