Learning to combine arbitrary signals from vision and touch
Single Paper Presentation
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Graduate School for Neural and Behavioural Sciences, Tübingen, Germany.
Abstract ID Number: 109
Last modified: May 20, 2003
When different perceptual signals of the same physical property are integrated (e.g., the size of an object, which can be seen and felt), they form a more reliable sensory estimate. This, however, implies that the sensory system already knows which signals belong together and how they are related. In a Bayesian model of cue integration this prior knowledge can be made explicit. Here, we examine whether the relationship between two arbitrary sensory signals from vision and touch can be learned from their statistical co-occurrence such that they become integrated. In the Bayesian model this corresponds to a change in the prior distribution over the stimuli. To this end, we trained subjects with stimuli that are usually uncorrelated in the world: the luminance of an object (visual signal) and its stiffness (haptic signal). In the training phase we presented only combinations of these signals that were highly correlated. Before and after training we measured discrimination performance with stimulus distributions that were either congruent with the correlation during training or incongruent; the incongruent stimuli came from a distribution anti-correlated relative to training. If subjects are sensitive to the correlation between the signals, we expect a change in their prior knowledge about which combinations of stimuli they usually encounter, and accordingly a change in their discrimination performance between pre- and post-test. We found a significant interaction between the two factors pre/post-test and congruent/incongruent: after training, discrimination thresholds for the incongruent stimuli were increased relative to the thresholds for congruent stimuli, suggesting that subjects learned to combine the two signals effectively.
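The reliability benefit from integrating two signals, mentioned in the first sentence, follows the standard maximum-likelihood combination rule: each cue is weighted by its inverse variance, and the combined variance is never larger than that of the more reliable cue. The sketch below illustrates this rule only; the variances are made-up numbers, not data from this study.

```python
def integrate(mu_v, var_v, mu_h, var_h):
    """Maximum-likelihood integration of a visual and a haptic estimate.

    Each cue is a Gaussian estimate (mean, variance); weights are
    proportional to reliability, i.e. inverse variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)  # visual weight
    w_h = 1.0 - w_v                                    # haptic weight
    mu = w_v * mu_v + w_h * mu_h                       # fused estimate
    var = (var_v * var_h) / (var_v + var_h)            # <= min(var_v, var_h)
    return mu, var

# Equally reliable cues: mean is the average, variance is halved.
mu, var = integrate(10.0, 1.0, 12.0, 1.0)   # -> (11.0, 0.5)
```

When the cues differ in reliability, the fused mean is pulled toward the more reliable cue, which is why a learned prior linking two previously unrelated signals can shift discrimination thresholds.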