Abstract
Rogers and Graham (1979) showed that motion parallax can support depth percepts comparable to those generated by stereopsis. However, several experiments with physical stimuli have shown that when both cues are present in depth-estimation tasks, observers appear to rely on stereoscopic information. One explanation is that these tasks bias observers towards using stereopsis. Here we devised a novel segmentation task, one that should benefit from relative motion, to evaluate cue integration. Observers viewed two superimposed, frontoparallel, horizontal wavy lines in a virtual reality headset. On each side of the pair of lines, a probe was aligned with the end of one of the curves; observers indicated whether the two probes were coincident with the same curve. We generated two levels of complexity by manipulating the curvature. Using the method of constant stimuli, we varied the depth separation between the two curves from 0 to 2.4 cm and assessed performance with stereopsis alone, motion parallax alone (monocular viewing), or both cues present. On monocular trials, observers moved their heads laterally by 6 cm. As anticipated, accuracy was lowest when there was no depth offset between the two curves. The low-complexity condition was generally easy, and performance was the same across conditions. In the high-complexity condition, however, performance was near chance when there was no depth separation and gradually improved as the depth offset increased. Accuracy was similar in the stereopsis-only and combined conditions but significantly poorer when only motion parallax was available; this was true even when the range of head motion was doubled. In sum, despite using a task that should benefit from relative motion, and despite the two depth cues providing consistent information, our results echo previous studies in showing an apparent lack of integration of motion parallax and stereopsis.