Abstract
Introduction: Detecting a moving object among an array of static objects is effortless for a stationary observer: the moving object has a unique feature, motion, and so “pops out” for a simple motion filter. For a moving observer, detection is more difficult, because self-motion itself produces motion across the whole scene, and a simple motion filter no longer works. This leaves two possibilities: either moving observers are blind to object motion, or the visual system removes the flow components common to the whole scene, thereby “stabilising” it and isolating the object’s motion. We examined the ability to detect object motion during simulated self-motion.

Methods: Nine or 25 cubes were placed within a 0.26 × 0.26 × 0.5 m volume, rendered on a computer monitor, and viewed through stereo glasses. In 50% of trials the viewpoint was transformed so that the rendered display corresponded to an observer translating laterally at 3 cm/s while counter-rotating the head to keep the centre of the array straight ahead; in the other 50% the scene was rendered from a stationary viewpoint. A single cube began to move laterally within the array at 1 cm/s, and the observer indicated whether it was moving left or right. In 50% of trials the scene was displayed from a Cyclopean view (no disparity); in the other 50% the cubes were seen with binocular disparity. Reaction times (RTs) were recorded.

Results: With the moving viewpoint, RTs were significantly longer and accuracy was low when the scene contained no disparity-defined depth. When disparity-defined depth was present, object motion was detected with ease from both the static and the moving viewpoint. With the moving viewpoint, RTs were shorter with 25 elements than with 9.

Conclusions: Observers can detect object motion during self-motion. This ability requires detecting element motion that is inconsistent with the movement of the viewpoint, and it is compatible with the hypothesis that the primary role of optic-flow processing is the updating and stabilisation of perceptual space.
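
The “stabilisation” account sketched in the Introduction can be illustrated with a minimal flow-parsing computation: estimate the flow component common to all elements (attributed to self-motion), subtract it, and flag any element whose residual motion stands out. The median-based estimate of the common component and the speed threshold below are illustrative assumptions, not part of the study.

```python
import numpy as np

def parse_flow(flow_vectors, threshold=0.5):
    """Flow-parsing sketch: remove the flow component shared by all elements
    (attributed to self-motion) and flag elements whose residual motion
    exceeds a threshold (attributed to independent object motion).

    flow_vectors: (N, 2) array of per-element image velocities (deg/s).
    threshold: residual speed (deg/s) counting as object motion; both the
    median estimator and the threshold are illustrative choices.
    """
    flow = np.asarray(flow_vectors, dtype=float)
    common = np.median(flow, axis=0)           # robust estimate of self-motion flow
    residual = flow - common                   # scene-relative ("stabilised") motion
    speeds = np.linalg.norm(residual, axis=1)
    return np.flatnonzero(speeds > threshold)  # indices of independently moving elements
```

For example, with eight elements sharing a rightward flow of 1 deg/s and a ninth moving at 2 deg/s, only the ninth survives the subtraction and is flagged.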
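
The moving-viewpoint condition in Methods, lateral translation at 3 cm/s combined with a compensatory head counter-rotation that keeps the array centre straight ahead, amounts to the following camera update. The coordinate frame and the eye-to-array-centre distance are assumptions for illustration, not values reported in the study.

```python
import numpy as np

LATERAL_SPEED = 0.03    # observer translation, m/s (3 cm/s, from Methods)
CENTRE_DISTANCE = 0.5   # eye-to-array-centre distance, m (assumed for illustration)

def moving_viewpoint(t):
    """Eye position and yaw angle (radians) at time t for the moving-viewpoint
    condition: the eye translates laterally while the head counter-rotates so
    the centre of the cube array stays straight ahead."""
    x = LATERAL_SPEED * t                  # lateral displacement of the eye
    eye = np.array([x, 0.0, 0.0])          # +z is the initial line of sight
    yaw = np.arctan2(-x, CENTRE_DISTANCE)  # rotate gaze back onto the array centre
    return eye, yaw
```

Rendering the cube array from this eye pose reproduces the self-motion flow field; the stationary-viewpoint condition corresponds to holding the eye fixed at the origin with zero yaw.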