Visual-auditory spatial integration: Is the whole different from the sum of the parts?
IMASSA/Département de Sciences Cognitives, 91223 Brétigny sur Orge Cedex, France/ Université Paris 5 La Sorbonne
EPF Ecole d'ingénieur/ Sceaux/ France
IMASSA/Département de Sciences Cognitives, 91223 Brétigny sur Orge Cedex, France/ Université Paris 8
Abstract ID Number: 112
Last modified: May 20, 2003
Visual-auditory integration was investigated in a two-dimensional localization task within an 80° by 60° perceptive field. The purpose of the experiment was to determine the respective contribution of each sensory modality to bimodal perception. Three randomized presentation conditions were considered: visual alone, auditory alone, and combined visual-auditory. The visual stimulus was a spot of light and the auditory stimulus was a broadband noise burst delivered through 35 loudspeakers arranged in a 7 by 5 matrix. To measure localization performance, observers pointed a visual cursor toward the perceived target location. Spatial accuracy was assessed: the distance from the target to the designated location, together with the dispersion and orientation of the response distributions, was collected. The data were used to evaluate the integrative processing involved. Results in the VA condition were consistent with the Maximum Likelihood Estimation model in azimuth, while showing no superiority over a competitive model in elevation. The results are in agreement with a differential role of audition in VA localization according to the direction of the spatial source. The Central Nervous System seems to combine information from vision and audition in a way that depends on stimulus orientation. This supports the view that the weights of vision and audition vary according to the relative "appropriateness" of the two information sources.
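The Maximum Likelihood Estimation model mentioned above predicts that the bimodal estimate is a reliability-weighted average of the unimodal estimates, with each cue weighted by the inverse of its variance. A minimal sketch of that rule follows; the function name and the numeric values are illustrative assumptions, not data from the study:

```python
def mle_combine(est_v, var_v, est_a, var_a):
    """Combine a visual and an auditory location estimate (in degrees)
    by reliability weighting: each cue's weight is the inverse of its
    variance, normalized so the weights sum to 1."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    est_va = w_v * est_v + w_a * est_a
    # The MLE-combined variance is lower than either unimodal variance.
    var_va = (var_v * var_a) / (var_v + var_a)
    return est_va, var_va

# Hypothetical example: vision is the more reliable cue (smaller
# variance), so the bimodal estimate is pulled toward the visual one.
est, var = mle_combine(est_v=10.0, var_v=1.0, est_a=14.0, var_a=4.0)
# est -> 10.8 (closer to the visual estimate), var -> 0.8 (< both cues)
```

Under this model, the differential role of audition in azimuth versus elevation would correspond to direction-dependent changes in the auditory variance, and hence in the auditory weight.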