Abstract
Eye-tracking studies offer substantial insight into cognition, revealing which visual features viewers prioritize as they construct a sense of place in an environment. Yet traditional eye-tracking paradigms overlook one key feature of real-world experience: everyday visual environments are actively explored. We gain rich information about a place by shifting our eyes, turning our heads, and moving our bodies. In this study, we sought to understand how active exploration impacts the way that humans encode the rich information available in a real-world scene.
To test this, we exploited recent developments in immersive Virtual Reality (iVR) and custom in-headset eye-tracking to monitor participants’ (N=18) gaze while they naturally explored real-world 360° photospheres via head turns and eye movements (Active Condition). In half of the trials, photospheres were passively displayed to participants while they were head-fixed (Passive Condition), enabling quantitative, in-depth comparisons of gaze behavior and attentional deployment as participants encoded novel real-world environments during active exploration vs. passive viewing.
We report that active viewing impacts all aspects of gaze behavior, including 1) low-level oculomotor movements and 2) how individuals allocate their attention. Relative to fixations made in the Passive Condition, active fixations were shorter (p < 0.001), more frequent (p < 0.001), less center-biased (p < 0.001), and more entropic (p < 0.001). Furthermore, gaze behavior was overwhelmingly more guided by semantic than by low-level visual features in active, as compared with passive, viewing (ANOVA interaction: p < 0.001). Taken together, our results demonstrate that active viewing influences nearly every aspect of gaze behavior, from how we move our eyes to what we choose to attend to.