Abstract
In everyday life, we perceive depth relationships in a scene seemingly effortlessly and almost instantaneously. However, past experimental studies of motion parallax and structure from motion have reported that integration times of 600-1000 msec are required for the perception of depth or 3D structure. Here we re-examined the temporal characteristics of depth discrimination from motion parallax using random-dot textured surfaces. Relative shearing motions of the textures were synchronized to the observer's head movements to portray a surface slanted about a horizontal axis. The dot displacements were produced under two different rendering schemes: orthographic and perspective. Perspective rendering differs from orthographic rendering in that it includes additional cues, namely variation of dot speed with distance, lateral speed gradients across the display, and small vertical displacements. No pictorial depth cues or variation of dot size with distance were available, so the task was impossible without observer movement. Three observers performed a 2AFC depth discrimination task in which they reported the perceived direction of slant, with stimulus presentation durations ranging from 62.5 to 4000 msec. The stimuli were presented on a computer screen in a 28-degree-diameter circular window at a viewing distance of 57 cm. We found that (1) performance was better with perspective than with orthographic rendering at all presentation durations yielding above-chance performance; (2) observers were able to discriminate depth at durations as short as 125 msec; and (3) performance for both types of rendering was relatively constant for durations over 500 msec but dropped at shorter durations. Somewhat surprisingly, the integration of dynamic perspective cues does not appear to require additional processing time. Depth from motion parallax can occur much more rapidly than previously thought, consistent with the apparent swiftness of depth perception experienced in everyday life.
Meeting abstract presented at VSS 2016
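As an illustration of the two rendering schemes described above, the following minimal Python sketch (not the authors' stimulus code) places random dots on a plane slanted about the horizontal axis and computes their screen positions for a laterally displaced eye. The slant angle, dot count, and orthographic shear gain are assumed values for illustration; only the 57 cm viewing distance and the 28-degree window are taken from the abstract.

```python
import numpy as np

# Illustrative sketch only; parameters marked "assumed" are not from the study.
VIEW_DIST = 57.0   # cm, screen distance (from the abstract)
SLANT_DEG = 30.0   # assumed slant about the horizontal axis
N_DOTS = 500       # assumed dot count
R = VIEW_DIST * np.tan(np.radians(14.0))  # ~14 cm: radius of the 28-deg window

rng = np.random.default_rng(0)
x = rng.uniform(-R, R, N_DOTS)                       # dot positions on the surface (cm)
y = rng.uniform(-R, R, N_DOTS)
z = VIEW_DIST + y * np.tan(np.radians(SLANT_DEG))    # depth varies with elevation

def orthographic(head_x):
    """Shear-only rendering: horizontal displacement proportional to relative
    depth and lateral head position; no speed change with distance, no lateral
    gradient, no vertical motion."""
    dx = -head_x * (z - VIEW_DIST) / VIEW_DIST       # assumed shear gain
    return x + dx, y

def perspective(head_x):
    """Projection from the displaced eye position: adds dot-speed variation
    with distance, lateral speed gradients, and small vertical displacements."""
    sx = head_x + (x - head_x) * VIEW_DIST / z
    sy = y * VIEW_DIST / z
    return sx, sy

# Example: screen coordinates for a 3 cm leftward head displacement.
ox, oy = orthographic(-3.0)
px, py = perspective(-3.0)
```

Comparing the two outputs shows that the orthographic scheme carries only the relative shear signal, while the perspective scheme additionally changes dot speeds with depth and introduces the small vertical displacements mentioned in the abstract.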