Athena Buckthought, Ahmad Yoonessi, Curtis L. Baker; Dynamic perspective cues enhance depth perception from motion parallax. Journal of Vision 2017;17(1):10. doi: 10.1167/17.1.10.
Motion parallax, the perception of depth resulting from an observer's self-movement, has almost always been studied with random dot textures in simplified orthographic rendering. Here we examine depth from motion parallax in more naturalistic conditions using textures with an overall 1/f spectrum and dynamic perspective rendering. We compared depth perception for orthographic and perspective rendering, using textures composed of two types of elements: random dots and Gabor micropatterns. Relative texture motion (shearing) with square wave corrugation patterns was synchronized to horizontal head movement. Four observers performed a two-alternative forced choice depth ordering task with monocular viewing, in which they reported which part of the texture appeared in front of the other. For both textures, depth perception was better with dynamic perspective than with orthographic rendering, particularly at larger depths. Depth ordering performance with naturalistic 1/f textures was slightly lower than with the random dots; however, with depth-related size scaling of the micropatterns, performance was comparable to that with random dots. We also examined the effects of removing each of the three cues that distinguish dynamic perspective from orthographic rendering: (a) small vertical displacements, (b) lateral gradients of speed across the corrugations, and (c) speed differences in rendered near versus far surfaces. Removal of any of the three cues impaired performance. In conclusion, depth ordering performance is enhanced by all of the dynamic perspective cues but not by using more naturalistic 1/f textures.
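The geometric distinction between the two rendering modes can be illustrated with a minimal projection sketch. Under dynamic perspective, a laterally translating eye that keeps fixating a point on the screen produces exactly the three cues listed above: vertical displacements for off-axis points, a lateral speed gradient, and faster image motion for near than for far surfaces. Under orthographic rendering, the image shift is simply proportional to relative depth, with none of those cues. This is an illustrative sketch only; all parameter names, values, and the fixation-rotation model are assumptions for demonstration, not the paper's actual stimulus code.

```python
import math

def perspective_project(X, Y, Z, eye_x, screen_dist=1.0, f=1.0):
    """Dynamic perspective: project a 3-D point for an eye translated
    laterally to eye_x while rotated to keep fixating (0, 0, screen_dist).
    Parameter names and values are illustrative, not from the paper."""
    theta = -math.atan2(eye_x, screen_dist)  # gaze rotation toward fixation
    s, c = math.sin(theta), math.cos(theta)
    x, z = X - eye_x, Z                      # translate into eye coordinates
    xc = c * x - s * z                       # rotate about the vertical axis
    zc = s * x + c * z
    return f * xc / zc, f * Y / zc           # pinhole projection

def orthographic_shear(X, Y, Z, eye_x, screen_dist=1.0, gain=1.0):
    """Conventional orthographic motion-parallax rendering: the image shift
    is proportional to relative depth, with no vertical component and equal
    speeds for surfaces equally near and far of the fixation plane."""
    return X + gain * eye_x * (screen_dist - Z), Y
```

Tracking a near point (e.g. Z = 0.8) and a far point (Z = 1.2) while `eye_x` moves from 0 to 0.1 shows that, under perspective, the near point shifts more than the far one and an off-axis point (Y not 0) also shifts vertically; under the orthographic shear, the two shifts have equal magnitude and no vertical component, which is why removing each perspective cue can be studied independently.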