Abstract
The past few decades have seen rapid growth in research guided by the core principle of Natural Systems Analysis (NSA; Geisler, 2008): that the computations employed by the visual system are the product of evolutionary optimization for the sensory evidence (i.e., images) and the tasks critical for survival. A core tenet of NSA posits that a deep understanding of these systems requires knowledge of the properties of the visual environment in which they operate. Prior studies have typically analyzed sets of narrow field-of-view static photographs that were not selected to reflect everyday visual experience. Critically, the absence of fixation data for these images precludes assessment of the actual images that land on the retina under real-world conditions. Thus, the degree to which these images faithfully represent real-world visual experience is unclear. Here, we detail the systematic collection and analysis of the Retinal Image Statistics (RIS) experienced during everyday behavior. Twenty-four subjects walked around the MIT campus as naturally as possible while a mobile eye-tracker and a supplementary wide field-of-view, high-resolution camera recorded the surrounding visual environment and gaze position. The fixation data were used to compute the actual retinal images subjects experienced. Additionally, we dissociated head/body motion from eye movements by computing and controlling for global optical flow across successive frames. Machine learning algorithms allowed us to reliably identify individual subjects from the spatiotemporal statistics of head/body/eye movements (direction, magnitude, and frequency) and the RIS of fixated regions. Further, we found that the magnitudes of head and eye movements during real-world vision call into question the validity of laboratory-based paradigms that rely on fixed heads and centrally presented images.
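The abstract does not specify how the global (head/body) motion component was estimated, but one minimal sketch of the idea is phase correlation between successive frames: the dominant whole-frame translation approximates the optical flow induced by head/body motion, which can then be subtracted from measured gaze displacement to isolate eye movements. Everything below, including the toy frames, is illustrative rather than the authors' actual pipeline.

```python
import numpy as np

def global_shift(prev, curr):
    """Estimate the dominant whole-frame translation (dy, dx) from
    frame `prev` to frame `curr` via phase correlation -- a simple
    stand-in for the global optical flow induced by head/body motion
    (the study's exact method is not given in the abstract)."""
    R = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    R /= np.abs(R) + 1e-12           # keep phase only, avoid div-by-zero
    corr = np.fft.ifft2(R).real      # correlation surface; peak = shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    h, w = prev.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

# toy example: a synthetic frame circularly shifted by (3, 5) pixels
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, 5), axis=(0, 1))
print(global_shift(a, b))  # recovers the (3, 5) "head/body" shift
```

In practice, gaze displacement minus this global component gives a rough estimate of the eye-in-head movement between frames; real footage would require windowing and a robust flow estimator rather than a single circular-shift model.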
We conclude by discussing new approaches in machine and human vision research that are made possible by this framework and our expanding database of dynamic real-world retinal images.
Meeting abstract presented at VSS 2016