Max Kinateder, Emily Cooper; Using Visual Snapshots to Estimate Egocentric Orientation in Natural Environments. Journal of Vision 2018;18(10):513. doi: https://doi.org/10.1167/18.10.513.
Accurate real-time estimates of one's orientation are essential for moving about an environment. During movement, several dynamic sensory cues contribute to estimates of heading orientation, such as vestibular signals and optic flow. We examined the ability of observers to estimate their orientation from the static information present in their local view of a scene: a "visual snapshot" taken from a specific position and orientation within a 3D environment. On the one hand, in most complex, natural environments, such local views uniquely specify the observer's position and orientation, suggesting that this task should be straightforward. On the other hand, utilizing information from local views alone (e.g., landmarks, geometry) likely hinges on accessing an accurate 3D internal representation and discriminating between views. Using a head-mounted display, participants were immersed in an indoor environment, which recreated views of a real room at all orientations from a fixed location. On each trial, participants were shown two snapshots taken from two randomly selected orientations. Their task was to turn as if they were orienting themselves from one view to the other. We examined the effects of prior exposure to the scene and of online visual feedback. Even with modest prior exposure, participants rapidly learned to perform the task with or without continuous visual feedback while turning. Without visual feedback, however, their accuracy and precision were substantially lower. Our results suggest that observers can infer coarse but reasonable heading orientations from static visual information alone. We will further study how the accuracy and precision of egocentric orientation are influenced by room clutter, prior experience, and restricted visual field.
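The abstract does not describe the analysis pipeline, but accuracy and precision for a turning task like this are typically quantified with circular statistics on the signed angular error between the target orientation and the participant's final orientation. The sketch below is a hypothetical illustration of that idea, not the authors' method: all function names are our own, and the circular-SD formula assumes the standard mean-resultant-length definition.

```python
import math

def signed_angle_error(target_deg, response_deg):
    """Signed angular difference (response - target), wrapped to (-180, 180].

    Wrapping matters: a response of 10 deg to a 350 deg target is a
    20 deg error, not a 340 deg one.
    """
    diff = (response_deg - target_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

def circular_stats(errors_deg):
    """Circular mean (a measure of accuracy, i.e. constant bias) and
    circular standard deviation (a measure of precision) of angular errors.
    """
    n = len(errors_deg)
    sin_mean = sum(math.sin(math.radians(e)) for e in errors_deg) / n
    cos_mean = sum(math.cos(math.radians(e)) for e in errors_deg) / n
    mean_deg = math.degrees(math.atan2(sin_mean, cos_mean))
    r = math.hypot(sin_mean, cos_mean)  # mean resultant length in [0, 1]
    sd_deg = math.degrees(math.sqrt(-2.0 * math.log(r)))  # circular SD
    return mean_deg, sd_deg

# Example: symmetric errors around zero give near-zero bias.
errors = [signed_angle_error(t, r) for t, r in
          [(350, 10), (10, 350), (90, 110), (90, 70)]]
bias, spread = circular_stats(errors)
```

Under this kind of analysis, the feedback effect reported in the abstract would appear as a larger circular SD (and possibly a larger bias) in the no-feedback condition.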
Meeting abstract presented at VSS 2018