4th Annual Meeting of the International Multisensory Research Forum

Cross-modal interaction between vision and audition: Role of semantic and spatial processes
Poster

Amir Ashkenazi
John B. Pierce Laboratory and Yale School of Medicine

Yoav Arieh
John B. Pierce Laboratory and Yale School of Medicine

Lawrence Marks
John B. Pierce Laboratory and Yale School of Medicine

     Abstract ID Number: 111
     Full text: Not available
     Last modified: March 31, 2003

Abstract
People are faster at classifying lights as bright or as dim when the lights are accompanied, respectively, by a high-frequency or a low-frequency tone (Marks, 1987). The existence of this cross-modal interaction implies that the auditory and visual systems share information at some level of processing. Two experiments tested whether auditory-visual cross-talk occurs at a semantic level or a spatial level of representation. In Experiment 1, we imposed a secondary task with semantic load onto the classification task, reasoning that the magnitude of the cross-modal interaction should thereby decline if its origin is semantic. In Experiment 2, we manipulated the spatial location of the keys that the subjects pressed to make their classifications, reasoning that the magnitude (or direction) of the interaction should thereby change if its origin is spatial. The auditory-visual interaction proved resistant to both manipulations, however, suggesting that its origin might be neither semantic nor spatial.
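
To make the dependent measure concrete, the sketch below (not part of the paper) shows how the magnitude of such a cross-modal congruency effect is commonly quantified from reaction-time data: the mean difference between incongruent trials (e.g., bright light with low tone) and congruent trials (e.g., bright light with high tone). All data values and names here are hypothetical, for illustration only.

    import statistics

    # Hypothetical reaction times in ms for classifying a light as bright or dim.
    # Congruent trials pair bright+high or dim+low tones; incongruent trials
    # pair bright+low or dim+high (after Marks, 1987).
    congruent_rts = [412, 398, 405, 420, 391, 408]
    incongruent_rts = [455, 441, 462, 449, 470, 444]

    def congruency_effect(congruent, incongruent):
        """Mean RT difference (incongruent - congruent), in ms.
        A positive value indexes the cross-modal interaction; the experiments
        ask whether a secondary semantic load (Exp. 1) or remapped response
        keys (Exp. 2) would shrink or reverse this value."""
        return statistics.mean(incongruent) - statistics.mean(congruent)

    print(f"Congruency effect: {congruency_effect(congruent_rts, incongruent_rts):.1f} ms")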

