Abstract
Human observers can use vergence eye posture to set the distance of one target equal to that of another (Wright 1951, Enright 1991), to set a target's distance to half or double that of another (Brenner & van Damme 1997), and to estimate the depth between two targets as a fraction of the distance to the farther target (Foley & Richards 1972). Relative retinal disparity, on the other hand, is the basis for fine stereoscopic judgments: observers show better stereoacuity for simultaneously presented targets than for sequentially presented ones (Westheimer 1979, Enright 1991). Do humans sensibly combine relative disparity information from extra-retinal and retinal sources? Observers made depth judgments for stereoscopically defined intervals of 0 to 10 cm (crossed and uncrossed) relative to a visual target at 45 cm. The endpoints of the depth interval were defined by luminous dot targets (20 arcmin in diameter) shown in a dark room on a modified Wheatstone stereoscope. There were three classes of stimuli. In the SIMultaneous condition, both targets were visible for 1.5 sec. In the ALTernating condition, each target was visible twice, in alternation, for 0.5 sec at a time. In the fast simultaneous (FSIM) condition, both targets were visible for 3 CRT refresh frames (40 ms at 75 Hz). At large relative disparities (greater than about 40 arcmin), depth estimates in SIM resembled those in ALT and deviated from those in FSIM. At small disparities (less than 10 arcmin), SIM and FSIM yielded better precision than ALT; this finding is consistent with Brenner & van Damme's measurement of a 10 arcmin standard deviation for nulling the depth between sequentially viewed targets. At intermediate disparities, precision was greater for SIM than for either FSIM or ALT. We suggest that the visual system can sensibly combine measurements of relative disparity obtained from the two sources, and that vergence change per se contributes to depth perception even for relative disparities less than 40 arcmin.
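To relate the 0 to 10 cm depth intervals at the 45 cm reference distance to the disparity regimes quoted above (below 10, around 10 to 40, and above 40 arcmin), the standard small-angle geometry can be applied. The sketch below is illustrative only; the 6.5 cm interocular distance and the function name relative_disparity_arcmin are assumptions for the example, not values from the study.

import math

def relative_disparity_arcmin(delta_d_cm, D_cm=45.0, I_cm=6.5):
    # Small-angle approximation: the relative disparity between a target at
    # distance D and one at D + delta_d is about
    # I * delta_d / (D * (D + delta_d)) radians.
    disparity_rad = I_cm * delta_d_cm / (D_cm * (D_cm + delta_d_cm))
    return math.degrees(disparity_rad) * 60.0  # radians -> arcmin

# Largest uncrossed interval used (10 cm beyond the 45 cm reference):
print(relative_disparity_arcmin(10.0))  # roughly 90 arcmin, well into the >40 arcmin regime
# A 1 cm interval, by contrast, falls near the small-disparity regime:
print(relative_disparity_arcmin(1.0))   # roughly 10 arcmin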