Abstract
Positions of objects shift on our retinas every time we move, creating an ambiguity: a change in retinal input can be attributed to a change in either the object's position or our own. To disambiguate the sensory input, the brain must account for the amplitude and direction of self-motion and update object positions accordingly. Spatial updating has mostly been investigated in a discrete fashion, with participants asked to compare pre- and post-movement target positions. Little is therefore known about the dynamics of spatial updating during the intervening motion, which may depend on the available sensory signals. The otolith organs are the vestibular sensors of linear acceleration. Because the strength and quality of their signal depend on the dynamics of the motion, we hypothesized that the quality of updating would be affected accordingly. To test this hypothesis, we used an apparent-motion illusion during whole-body passive translation. While participants were moved with a bell-shaped velocity profile in complete darkness, two dots were briefly and successively flashed, one above and one below a body-fixed fixation target, inducing the perception of a single moving dot. The illusion could be presented at the time of peak acceleration, peak velocity, or peak deceleration of the body motion, and participants were asked to report its orientation relative to vertical. Individual updating gains showed an underestimation of displacement regardless of the tested phase of the translation. Furthermore, the updating gain was higher at peak acceleration than at peak velocity and peak deceleration. This pattern was observed systematically, regardless of whether the time interval or the traveled distance between the presentations of the two stimuli was matched. Our results provide a dynamic characterization of spatial updating during body motion, unveiling an asymmetry in how acceleration and deceleration signals are incorporated in the underlying computations.
Meeting abstract presented at VSS 2018
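For concreteness, below is a minimal sketch of the paradigm's quantitative elements. The abstract specifies a "bell-shaped velocity profile" but not its functional form, so a minimum-jerk profile is assumed here; likewise, the gain geometry (perceived inter-flash displacement recovered from the reported tilt of the apparent-motion path, divided by the physical displacement) is our hypothetical reconstruction, not a formula given in the abstract. All numeric values are invented for illustration.

```python
# Illustrative sketch only; the functional form and all parameters are
# assumptions, not taken from the abstract.
import numpy as np

def minimum_jerk_velocity(t, D, T):
    """Bell-shaped velocity of a minimum-jerk translation covering
    distance D in duration T (assumed form)."""
    s = t / T
    return (30 * D / T) * (s**2 - 2 * s**3 + s**4)

D, T = 0.30, 2.0               # hypothetical displacement (m) and duration (s)
t = np.linspace(0.0, T, 10001)
v = minimum_jerk_velocity(t, D, T)
a = np.gradient(v, t)          # numerical acceleration

t_peak_acc = t[np.argmax(a)]   # probe phase 1: peak acceleration
t_peak_vel = t[np.argmax(v)]   # probe phase 2: peak velocity (T/2)
t_peak_dec = t[np.argmin(a)]   # probe phase 3: peak deceleration

def updating_gain(theta_deg, h, t1, t2):
    """Hypothetical gain: perceived inter-flash displacement, recovered
    from the reported tilt theta of the apparent-motion path relative to
    vertical (dots vertically separated by h), divided by the distance
    actually traveled between the two flashes at times t1 and t2."""
    mask = (t >= t1) & (t <= t2)
    dt = t[1] - t[0]
    d_actual = np.sum(v[mask]) * dt                   # physical displacement
    d_perceived = h * np.tan(np.radians(theta_deg))   # from tilt geometry
    return d_perceived / d_actual

# Example: a 5 deg reported tilt with dots 0.1 m apart, flashed 100 ms
# apart around peak velocity.
g = updating_gain(5.0, 0.10, t_peak_vel - 0.05, t_peak_vel + 0.05)
print(f"updating gain = {g:.2f} (g < 1 means underestimated displacement)")
```

Under this reconstruction, a gain of 1 would correspond to perfect updating of the traveled distance during the inter-flash interval, and the reported g < 1 to the underestimation described in the abstract.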