Vision Sciences Society Annual Meeting Abstract | September 2016
Retinal Image Statistics During Real-World Visual Experience
Author Affiliations
  • Matthew Peterson
    Department of Brain and Cognitive Sciences, MIT
  • Jing Lin
    Department of Brain and Cognitive Sciences, MIT
  • Nancy Kanwisher
    Department of Brain and Cognitive Sciences, MIT
Journal of Vision September 2016, Vol.16, 242. doi:https://doi.org/10.1167/16.12.242

      Matthew Peterson, Jing Lin, Nancy Kanwisher; Retinal Image Statistics During Real-World Visual Experience. Journal of Vision 2016;16(12):242. https://doi.org/10.1167/16.12.242.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The past few decades have seen rapid growth in Natural Systems Analysis (NSA; Geisler, 2008), which holds that the computations employed by the visual system are the product of evolutionary optimization for the sensory evidence (i.e., images) and the tasks critical to survival. A core tenet of NSA is that a deep understanding of these systems requires knowledge of the visual environment in which they operate. Prior studies have typically analyzed sets of static, narrow field-of-view photographs that were not selected to reflect everyday visual experience. Critically, the absence of fixation data for these images precludes assessing the actual images that land on the retina under real-world conditions. Thus, the degree to which these images faithfully represent real-world visual experience is unclear. Here, we detail the systematic collection and analysis of the Retinal Image Statistics (RIS) experienced during everyday behavior. Twenty-four subjects walked around the MIT campus as naturally as possible while a mobile eye-tracker and a supplementary wide field-of-view, high-resolution camera recorded the surrounding visual environment and gaze position. The fixation data were used to compute the actual retinal images subjects experienced. Additionally, we dissociated head/body motion from eye movements by computing and controlling for global optical flow across successive frames. Machine learning algorithms allowed us to reliably identify individual subjects from the spatiotemporal statistics of head/body/eye movements (direction, magnitude, and frequency) and the RIS of fixated regions. Further, we found that the magnitudes of head and eye movements during real-world vision raise concerns about the validity of laboratory paradigms that use a fixed head and centrally presented images. We conclude by discussing new approaches in machine and human vision research that are made possible by this framework and our expanding database of dynamic real-world retinal images.
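
The central step of computing the actual retinal images subjects experienced from fixation data can be illustrated with a gaze-centered crop. The sketch below is a minimal illustration, not the authors' pipeline: it assumes gaze has already been calibrated into the scene camera's pixel coordinates, and the `retinal_crop` name, patch radius, and zero-padding strategy are all assumptions made for demonstration.

```python
# Minimal sketch: extract a gaze-centered ("retinal") patch from a scene-camera
# frame. Assumes gaze is already expressed in the frame's pixel coordinates;
# the patch radius and zero-padding are illustrative choices, not details
# given in the abstract.
import numpy as np

def retinal_crop(frame: np.ndarray, gaze_xy: tuple, radius: int = 256) -> np.ndarray:
    """Return a (2*radius, 2*radius) patch centered on the gaze point.

    Regions falling outside the frame are zero-padded so every fixation
    yields a patch of identical size.
    """
    h, w = frame.shape[:2]
    gx, gy = int(round(gaze_xy[0])), int(round(gaze_xy[1]))

    # Allocate a zero-filled output so off-frame gaze still yields a patch.
    patch = np.zeros((2 * radius, 2 * radius) + frame.shape[2:], dtype=frame.dtype)

    # Intersection of the desired crop with the frame bounds.
    x0, x1 = max(gx - radius, 0), min(gx + radius, w)
    y0, y1 = max(gy - radius, 0), min(gy + radius, h)

    # Destination offsets within the padded patch.
    dx0, dy0 = x0 - (gx - radius), y0 - (gy - radius)
    patch[dy0:dy0 + (y1 - y0), dx0:dx0 + (x1 - x0)] = frame[y0:y1, x0:x1]
    return patch
```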
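The dissociation of head/body motion from eye movements "by computing and controlling for global optical flow" could look roughly like the following sketch. It rests on assumptions: OpenCV's Farneback dense flow stands in for whatever estimator was actually used, and summarizing global motion by the median of the flow field is one common robust choice, not a detail reported in the abstract.

```python
# Sketch of the head/eye dissociation step. The median of the dense flow field
# serves as a robust estimate of global (head/body-induced) image motion;
# subtracting it from the raw gaze displacement leaves an approximation of the
# eye-in-head movement.
import cv2
import numpy as np

def eye_movement_component(prev_gray: np.ndarray, curr_gray: np.ndarray,
                           gaze_prev: np.ndarray, gaze_curr: np.ndarray) -> np.ndarray:
    # Dense per-pixel flow between successive frames: (dx, dy) at each pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    # Robust global-motion estimate: median flow over the whole frame.
    global_motion = np.median(flow.reshape(-1, 2), axis=0)

    # Raw gaze displacement minus global motion: approximately the rotation
    # of the eye itself, with head/body translation factored out.
    return (gaze_curr - gaze_prev) - global_motion
```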
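Finally, the subject-identification result admits many implementations, and the abstract names no particular algorithm. As a purely hypothetical stand-in, a cross-validated random forest over per-recording movement statistics might look like this, where `X` holds feature vectors (e.g., direction, magnitude, and frequency statistics of head/body/eye movements) and `y` holds subject identities.

```python
# Illustrative only: the classifier, features, and cross-validation scheme
# are assumptions, not the authors' reported method.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def subject_id_accuracy(X, y, folds: int = 5) -> float:
    """Mean cross-validated accuracy of identifying subjects from movement statistics."""
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    return cross_val_score(clf, X, y, cv=folds).mean()
```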

Meeting abstract presented at VSS 2016
