Brittney Hartle, Laurie Wilcox; Scaling stereoscopic depth through reaching. Journal of Vision 2021;21(9):1896. doi: https://doi.org/10.1167/jov.21.9.1896.
© ARVO (1962-2015); The Authors (2016-present)
Depth estimation from stereopsis is biased under many viewing scenarios and for a range of estimation methods, particularly for virtual stimuli. These distortions are often attributed to misestimates of viewing distance that result in incorrect scaling of binocular disparity. The majority of research on depth scaling has considered only visual cues to distance. However, we do not just look at the world; we interact with objects, and in this way may have access to proprioceptive cues to distance. There is evidence that stereopsis aids actions such as prehension; is the reverse also true? We assessed the impact of proprioceptive distance information from arm’s reach on stereopsis using a ring game that is contingent on accurate absolute distance perception. Observers used hand controllers, guided by their index finger, to move rings onto a peg in a virtual environment. They completed the task as quickly as possible while avoiding touching the rings to the peg (errors were signalled via controller vibration). After each block of 5 trials, observers were given feedback regarding their completion time and accuracy. To evaluate the impact of this proprioceptive experience, we assessed depth magnitude estimation before and after completion of the ring task. Observers were asked to estimate the depth between a rectangle and a reference frame located at the same distance as the peg in an otherwise blank field. We found that depth estimation accuracy and scaling improved with experience. Importantly, in a follow-up experiment we found that this improvement was contingent on performing the reach. Consistent with the assumption that observers underestimate absolute distance, we found that most ring-placement errors were due to underreaches. We conclude that the improvement in depth estimation seen here reflects a cross-modal calibration of visual space that is underappreciated, but potentially important for everyday interactions.