Abstract
A powerful cue to the 3D layout of the world is the differential image motion that results from observer movement (motion parallax). Previously (VSS 2010) we measured its role in segmentation and depth perception from shear motion. Here we extend these experiments to the equally important case of dynamic occlusion, which contains both compression-expansion and accretion-deletion cues. Observers performed lateral head translations while an electromagnetic tracker recorded head position. Stimuli consisted of random dots whose horizontal displacements were synced to head movement by a scale factor (“syncing gain”) proportional to depth, and were modulated by periodic velocity envelopes to generate dynamic occlusion. Segmentation performance was assessed by measuring discrimination thresholds for envelope orientation. This task included two conditions: one in which stimuli were synced to head motion, and another in which previously recorded stimulus motions were “played back”. In the depth-ordering task, subjects reported whether the half-cycle left or right of the centre of the screen appeared nearer. We compared conditions in which accretion-deletion occurred in an ecologically correct or incorrect relationship with the motion parallax, or was absent. Depth ordering showed robust performance across a wider range of syncing gains than shear. In the cue-conflict condition, reported depth was consistent with motion parallax at low syncing gains but with accretion-deletion at high syncing gains. Segmentation showed similar results for head motion and playback, and for correct and incorrect accretion-deletion, with performance similar to or slightly better than that obtained with shear. These results demonstrate that dynamic occlusion is a more powerful cue than shear for extracting depth and segmentation information from motion parallax. They also suggest that motion parallax more effectively signals small depth differences within an object, whereas accretion-deletion provides more information about larger depth differences between separate objects, or between an object and its background.
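To make the syncing-gain stimulus logic concrete, the following minimal Python sketch yokes dot displacement to tracked head position, scaled by a gain proportional to simulated depth and modulated by a periodic envelope. It is illustrative only, not the authors' implementation: the square-wave envelope shape, function name, parameters, and units are all our own assumptions.

    # Minimal illustrative sketch of syncing-gain stimulus logic (assumptions,
    # not the authors' code). With fixation at the screen plane, a positive
    # gain simulates points farther than the screen (moving with the head)
    # and a negative gain simulates nearer points (moving against it).
    import numpy as np

    def dot_displacement(x, head_x, gain_near, gain_far, period):
        """Horizontal displacement for dots at screen positions x (array),
        given the current head position head_x (same, hypothetical units).
        A square-wave envelope alternates between two syncing gains,
        simulating alternating near/far strips; dots at the moving strip
        boundaries would be accreted or deleted (dynamic occlusion)."""
        # Alternate strips every half period of the envelope.
        phase = np.floor(2.0 * np.asarray(x) / period).astype(int) % 2
        gain = np.where(phase == 0, gain_near, gain_far)
        return gain * head_x  # displacement proportional to head translation

    # Example: dots across a 40 cm display, head displaced 2 cm rightward.
    x = np.linspace(-20.0, 20.0, 5)
    dx = dot_displacement(x, head_x=2.0, gain_near=-0.1, gain_far=0.1, period=20.0)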
Supported by NSERC grant OGP0001978 to C.B.