Combining sensory cues for spatial orientation: Assessing the contribution of different modalities in the facilitation of mental rotations

Alexandre Lehmann, LPPA CNRS

Abstract
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. Previous studies have shown that the cognitive cost of mental rotations is reduced when the viewpoint change results from the observer's motion rather than from motion of the spatial layout, an advantage attributed to automatic spatial updating mechanisms engaged during self-motion. Nevertheless, little is known about how this process is triggered, and in particular about how sensory cues combine to facilitate mental rotations. We developed a high-end virtual reality setup that, for the first time, allowed us to dissociate, across a series of experiments, each modality potentially stimulated during a viewpoint change. First, we validated this setup by replicating the classical advantage found for a moving observer. Second, we found that enhancing the possibilities for spatial binding, by displaying the table during its rotation, was not sufficient to significantly reduce the mental rotation cost. Third, we found that mental rotations are not significantly improved when a single modality (vision or body) is stimulated during the observer's motion, whereas they are when two modalities are combined (body & vision or body & sound). These results are discussed in terms of a sensory-independent triggering of spatial updating during self-motion, with non-linear effects when sensory modalities are co-activated.
