Abstract
Motion parallax, i.e., differential retinal image motion resulting from movement of the observer, provides an important visual cue for both segmentation and depth perception. Previously we examined its role in segmentation (VSS 2009); here we additionally explore its contribution to depth perception.
Subjects performed lateral head translations while an electromagnetic tracker recorded head position. Stimuli consisted of random dots on a black background, whose horizontal displacements were yoked proportionally to head motion by a scale factor (gain), and were modulated across vertical position using square-wave or sine-wave envelopes to generate shearing motion.
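For concreteness, a minimal sketch of how such a stimulus update could be computed, assuming hypothetical names (gain, head_dx for the lateral head displacement since the last frame, dot coordinates x and y, and an envelope spatial frequency); this is an illustration of the described method, not the authors' code:

    import numpy as np

    def envelope(y, spatial_freq, waveform="sine"):
        # Modulation envelope over vertical dot position y.
        phase = 2 * np.pi * spatial_freq * y
        if waveform == "sine":
            return np.sin(phase)
        # Square wave: sign of the underlying sinusoid.
        return np.sign(np.sin(phase))

    def update_dots(x, y, head_dx, gain, spatial_freq, waveform):
        # Shift each dot horizontally in proportion to head motion,
        # scaled by the envelope at its vertical position, producing
        # shearing motion across the display.
        return x + gain * head_dx * envelope(y, spatial_freq, waveform)

With a sine-wave envelope the shear varies smoothly with height; with a square-wave envelope, adjacent horizontal bands move rigidly in opposite directions, as if one band occludes the other.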
Subjects performed three tasks: depth ordering, depth magnitude, and segmentation. In the depth-ordering task they performed a two-alternative forced-choice (2AFC) judgment, reporting whether the half-cycle above or below the centre of the screen appeared nearer. Depth magnitude estimates were obtained by matching the perceived depth to that of a texture-mapped 3D surface of similar shape rendered in perspective view. Segmentation performance was assessed by measuring discrimination thresholds for envelope orientation. This task included two conditions: one in which stimulus motion was synced to the head motion, and another in which previously recorded stimulus motions were "played back" to a stationary observer.
For square-wave modulation, good depth-ordering performance was obtained only at low gain values, whereas sine-wave modulation yielded unambiguous depth across a broader range of gains. In the depth magnitude task, subjects matched proportionately greater depths for larger gain values. In the segmentation task, orientation discrimination showed surprisingly similar thresholds for the head-motion and playback conditions.
These results suggest that the ecological range of depths over which motion parallax supports segmentation is very wide, whereas the range supporting good depth perception is quite limited. The dependence of depth ordering on modulation waveform suggests that motion parallax is more useful for depth differences within a single object than between occluding objects.
Supported by NSERC grant OGP0001978 to C.B.