Abstract
When an observer is moving while watching a moving target, the same retinal speed can correspond to vastly different physical velocities. When the observer moves in the same direction as the target, the retinal speed is partially cancelled; when the observer moves in the opposite direction, it is increased. Observers must therefore obtain an accurate estimate of their own velocity and add it to (or subtract it from) the retinal speed elicited by the target to obtain an accurate estimate of the object's velocity. When self-motion is experienced only visually, this compensation is likely to be incomplete, leading to biases in judgments of object motion during visual self-motion (Hypothesis 1). Furthermore, such added compensatory computations should decrease precision (Hypothesis 2). To test these hypotheses, we presented two motion intervals in a 3D virtual environment: one in which a single target moved linearly to the left or to the right in the fronto-parallel plane, and one consisting of a cloud of smaller targets travelling in the same direction. The single target moved at one of two constant speeds (6.6 or 8 m/s, 6 m from the observer), while the speed of the cloud was determined by a PEST staircase. While observing the single moving target, participants were moved visually in the same direction as the target, moved in the opposite direction, or remained static. Participants were then asked to judge which of the two motions was faster. In support of Hypothesis 1, we found differences in accuracy between the static, congruent, and incongruent conditions: target motion during congruent self-motion was judged as slower than in the static condition, whereas target motion during incongruent self-motion was judged as faster, indicating inadequate compensation for the observer's motion. Furthermore, in support of Hypothesis 2, we found that self-motion during target-motion observation decreased precision compared to the static condition. This has implications for everyday situations such as estimating pedestrians' behavior while driving a car.
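As a minimal sketch of the compensation described above (the notation is introduced here for illustration and is not taken from the original report): if $v_{\text{ret}}$ is the target's retinal velocity and $\hat{v}_{\text{self}}$ the observer's estimate of their own velocity, an unbiased object-velocity estimate requires
$$
\hat{v}_{\text{obj}} = v_{\text{ret}} + \hat{v}_{\text{self}},
$$
and incomplete compensation can be described as a gain $0 \le g < 1$ applied to $\hat{v}_{\text{self}}$, giving $\hat{v}_{\text{obj}} = v_{\text{ret}} + g\,\hat{v}_{\text{self}}$. Under this assumption, congruent self-motion yields underestimation of target speed and incongruent self-motion yields overestimation, consistent with the biases reported under Hypothesis 1.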