Optimal integration of spatiotemporal information across vision and touch

Hannah Helbig, Max Planck Institute for Biological Cybernetics, Tuebingen

Abstract
The brain integrates spatial (e.g., size, location) as well as temporal (e.g., event perception) information across different sensory modalities (e.g., Ernst & Banks, 2002; Bresciani et al., 2006) in a statistically optimal manner to obtain the most reliable percept. That is, the variance of the multisensory perceptual estimate, and hence the just-noticeable difference (JND), is maximally reduced. Here we asked whether this also holds for spatiotemporal information encoded by different sensory modalities.
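Under the maximum-likelihood integration scheme of Ernst & Banks (2002), the optimal bimodal estimate is a reliability-weighted average of the unimodal estimates; the symbols below (S for an estimated length, sigma for unimodal noise) are illustrative notation only, not quantities reported in this abstract:

\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad w_V = \frac{\sigma_H^2}{\sigma_V^2 + \sigma_H^2}, \qquad w_H = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_H^2},

so that the variance of the combined estimate, \sigma_{VH}^2 = \sigma_V^2 \sigma_H^2 / (\sigma_V^2 + \sigma_H^2), is never larger than the smaller of the two unimodal variances.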
To study this question, observers were presented visually with a dot moving along a line. In the haptic condition, the observer's finger was passively moved along the line by a robotic device. Observers had to discriminate the lengths of two lines presented in a two-interval forced-choice (2-IFC) task, either visually alone, haptically alone, or bimodally. To judge the length of a line, spatial information (the position of the moving dot or finger) had to be accumulated over time.
Bimodal discrimination performance (JND) was significantly better than performance in either unimodal task and did not differ from the prediction of an optimal integration model. This result indicates that observers adopt an optimal strategy for integrating spatial information accumulated over time.
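Since the JND is proportional to the standard deviation of the underlying estimate, the optimal-integration prediction against which the bimodal performance was presumably compared takes the standard form

JND_{VH} = \sqrt{\frac{JND_V^2 \, JND_H^2}{JND_V^2 + JND_H^2}},

which is always at most the smaller of the two unimodal JNDs.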

