September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Comparing novel object learning in humans, models, and monkeys
Author Affiliations & Notes
  • Michael J Lee
    Department of Brain and Cognitive Sciences, MIT
  • James J DiCarlo
    Department of Brain and Cognitive Sciences, MIT
    McGovern Institute for Brain Research, MIT
Journal of Vision September 2019, Vol.19, 114b. doi:https://doi.org/10.1167/19.10.114b
Abstract

Humans readily learn to identify novel objects, and it has been hypothesized that plasticity in visual cortex supports this behavior. Contributing to this view are reports of experience-driven changes in the properties of neurons at many levels of visual cortex, from V1 to inferotemporal cortex (IT). Here, we ask whether object learning might instead be explained by a simple model in which a static set of IT-like visual features is followed by a perceptron learner. Specifically, we measured human (268 subjects; 170,000+ trials) and nonhuman primate (NHP; 2 subjects, 300,000+ trials) behavior across a battery of 29 visuomotor association tasks, each requiring the subject to learn to discriminate between a pair of synthetically generated, never-before-seen 3D objects (58 distinct objects). Objects were rendered at varying scales, positions, and rotations; superimposed on naturalistic backgrounds; and presented for 200 msec. We then approximated the visual system’s IT response to each image using models of ventral stream processing (i.e., specific deep neural networks trained on ImageNet categorization), and we applied a reward-based perceptron learner to the static set of features produced at the penultimate layer of each model. We report that our model is sufficient to explain both human and NHP rates of learning on these tasks. Additionally, we show that humans, NHPs, and this model share the same pattern of performance over objects, but that NHPs reach criterion performance ~10× more slowly than humans (human t = 139, NHP t = 1149), suggesting that humans have similar but more rapid learning mechanisms than their NHP cousins in this domain. Taken together, these results suggest the possibility that object learning is mediated by plasticity in a small population of “readout” neurons that learn and execute weighted sums of activity across an upstream sensory population representation (IT) that is largely stable.
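The model class described above can be sketched in a few lines. The following is an illustrative toy, not the authors' implementation: the "IT-like" features are synthetic stand-ins (fixed class means plus per-image noise, mimicking varied renders of two novel objects), and the feature dimension, learning rate, and error-driven update rule are all assumptions. Only the readout weights are plastic; the feature representation stays frozen, as in the abstract's hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 512   # stand-in for a DNN penultimate layer (assumed size)
N_TRIALS = 2000

# Synthetic "IT-like" features for two novel objects: a fixed mean vector
# per object, with per-image noise standing in for varied renders.
mu_a = rng.normal(0, 1, N_FEATURES)
mu_b = rng.normal(0, 1, N_FEATURES)

w = np.zeros(N_FEATURES)  # plastic "readout" weights; features are frozen

def trial(w, lr=0.01):
    """One two-alternative trial: present an image, choose, receive reward."""
    label = int(rng.integers(0, 2))          # 0 = object A, 1 = object B
    mu = mu_a if label == 0 else mu_b
    x = mu + rng.normal(0, 2.0, N_FEATURES)  # noisy render of the object
    choice = int(w @ x > 0)                  # perceptron readout
    reward = 1.0 if choice == label else 0.0
    # Reward-modulated (error-driven) perceptron update: adjust weights
    # toward the signed target only when feedback signals an error.
    target = 1.0 if label == 1 else -1.0
    if reward == 0.0:
        w = w + lr * target * x
    return w, reward

rewards = []
for _ in range(N_TRIALS):
    w, r = trial(w)
    rewards.append(r)

early = float(np.mean(rewards[:200]))
late = float(np.mean(rewards[-200:]))
print(f"accuracy, first 200 trials: {early:.2f}; last 200 trials: {late:.2f}")
```

Accuracy starts near chance (the zero-initialized readout always picks one object) and rises as the readout weights align with the difference between the two objects' feature means, giving a simple learning curve analogous to the per-task curves measured in the study.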
