Vision Sciences Society Annual Meeting Abstract  |   September 2019
The “A Day in the Life” Project: A Preliminary Report
Author Affiliations & Notes
  • Jenny Hamer
    University of California San Diego
  • Celene Gonzalez
    California State University San Bernardino
  • Garrison W Cottrell
    University of California San Diego
Journal of Vision September 2019, Vol.19, 60c. doi:https://doi.org/10.1167/19.10.60c
Abstract

The goal of this research project is to create a model of the human visual system with anatomical and experiential constraints. The anatomical constraints implemented in the model so far include a foveated retina, the log-polar transform between the retina and V1, and the bifurcation between central and peripheral pathways in the visual system. The experiential constraint consists of a realistic training set that models human visual experience. The dataset most often used for training deep networks is ImageNet, a highly unrealistic dataset of 1.2M images of 1,000 categories. The categories are a rather Borgesian set, including (among more common ones) ‘abacus’, ‘lens cap’, ‘whiptail lizard’, ‘ptarmigan’, ‘abaya’, ‘viaduct’, ‘maypole’, ‘monastery’, and 120 dog breeds. Any network trained on these categories becomes a dog expert, which is true of only a small subset of the human population. The goal of the “Day in the Life” project is to collect a more realistic dataset of what humans observe and fixate upon in daily life. Using a wearable eye-tracker with an Intel Realsense scene camera that provides depth information, we are recording data from subjects as they go about their day. We then use a deep network to segment and label the objects that are fixated. The goal is to develop a training set that is faithful to the distribution of what individuals actually look at in terms of frequency, dwell time, and distance. Training a visual system model with this data should result in representations that more closely mimic those developed in visual cortex. This data should also be useful in vision science, as frequency, probably the most important variable in psycholinguistics, has not typically been manipulated in human visual processing experiments for lack of norms. Here we report some initial results from this project.
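The abstract does not specify how the retina-to-V1 log-polar transform is implemented in the model; as an illustration only, the mapping can be sketched as a resampling of an image onto a log-spaced radial grid centered on the fixation point, so that foveal pixels are oversampled and peripheral pixels are compressed. The function name, output resolution, and nearest-neighbor sampling below are all assumptions, not the authors' pipeline.

```python
import numpy as np

def log_polar_transform(img, center=None, out_shape=(64, 64)):
    """Resample a 2-D image onto a log-polar grid around a fixation point.

    Rows of the output index log-radius (dense near the fovea, sparse in
    the periphery); columns index angle. This is a minimal nearest-neighbor
    sketch of the retina-to-V1 mapping, not the authors' implementation.
    """
    h, w = img.shape
    if center is None:
        center = (h / 2.0, w / 2.0)
    cy, cx = center
    n_r, n_theta = out_shape
    # Largest radius needed to reach the farthest image corner.
    max_r = np.hypot(max(cy, h - cy), max(cx, w - cx))
    # Log-spaced radii: exponentially growing spacing away from the center.
    radii = np.exp(np.linspace(0.0, np.log(max_r + 1.0), n_r)) - 1.0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # Convert each (log-radius, angle) sample back to Cartesian pixel indices.
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]
```

In this sketch, moving the fixation `center` (e.g., to each tracked gaze point from the eye-tracker) re-centers the foveal magnification, which is what makes fixation data essential for building such a training set.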

Acknowledgement: NSF grant SMA-1640681 