Vision Sciences Society Annual Meeting Abstract  |  December 2022
Volume 22, Issue 14  |  Open Access
Representation of object motion in the macaque ventral visual stream
Author Affiliations & Notes
  • Kohitij Kar
    Massachusetts Institute of Technology
  • Lynn K. A. Sörensen
    University of Amsterdam
  • James J. DiCarlo
    Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements: This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216, and by Simons Foundation grant SCGB-542965 (J.J.D.).
Journal of Vision December 2022, Vol. 22, 4283. doi: https://doi.org/10.1167/jov.22.14.4283
Citation: Kohitij Kar, Lynn K. A. Sörensen, James J. DiCarlo; Representation of object motion in the macaque ventral visual stream. Journal of Vision 2022;22(14):4283. https://doi.org/10.1167/jov.22.14.4283.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Primates seamlessly integrate dynamic visual information about moving objects to navigate their daily activities. However, we currently lack a neurally mechanistic understanding of how the brain supports the joint representation of object identity and position across time, leading to a unified percept of a moving object. Building on previous reports of behaviorally explicit object identity (Majaj et al., 2015; Kar et al., 2019) and object position information (Hong et al., 2016) in the macaque inferior temporal (IT) cortex, here we explicitly tested whether object velocities can be approximated from distributed IT population activity. We repeatedly presented 600 movies (each 300 ms long), containing one of ten objects moving in one of eight directions at varying speeds, to monkeys (n=3) that passively fixated a central dot. We simultaneously measured large-scale neural activity (using chronic multielectrode arrays) from areas V4 (155 sites), IT (212 sites), and ventrolateral prefrontal cortex (vlPFC; 78 sites) across these monkeys. First, we observed that a nonlinear temporal integration model could dynamically transform V4, IT, and vlPFC population activity into object velocity readouts. Interestingly, however, unlike the V4- and vlPFC-based decodes, object velocity could also be decoded linearly from instantaneous (~10 ms) IT population activity (peaking ~300 ms after movie onset), indicating the presence of a precomputed velocity signal in the IT population activity pattern. Consistent with previous studies, the corresponding object identity decodes from IT significantly preceded (by ~150 ms) these motion signals. In addition, we observed that IT-like layers of two-stream convolutional neural network models (of action recognition) also support simultaneous readouts of object identity and velocity, establishing these models as good baseline hypotheses for primate object motion processing. These results challenge the common functional segregation of primate visual processing into ventral (“what”) and dorsal (“where”) pathways and motivate the development of integrated (dorsal+ventral) models to study dynamic scene perception.
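As a concrete illustration of the linear readout described above, the following is a minimal, hypothetical sketch (Python with scikit-learn) of a cross-validated linear velocity decoder applied to one instantaneous time bin of population activity. It is not the authors' analysis pipeline: the synthetic data, array shapes, variable names, and the choice of ridge regression are all assumptions made for illustration.

    # Minimal sketch (assumed, not the authors' pipeline): linear decoding of
    # object velocity from a single ~10 ms bin of population activity.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)

    # Placeholder data: 600 movies x 212 IT sites (counts taken from the abstract).
    # In a real analysis, X would hold spike rates in one ~10 ms bin
    # (e.g., ~300 ms after movie onset), averaged over repeated presentations.
    n_movies, n_sites = 600, 212
    X = rng.standard_normal((n_movies, n_sites))

    # Placeholder ground truth: one of eight motion directions at varying speeds,
    # expressed as a 2D velocity vector (vx, vy) per movie.
    directions = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
    theta = rng.choice(directions, size=n_movies)
    speed = rng.uniform(2.0, 10.0, size=n_movies)   # deg/s, arbitrary range
    y = np.stack([speed * np.cos(theta), speed * np.sin(theta)], axis=1)

    # One cross-validated ridge regression per velocity component, so the
    # readout is always evaluated on held-out movies.
    decoder = RidgeCV(alphas=np.logspace(-3, 3, 13))
    y_hat = np.column_stack(
        [cross_val_predict(decoder, X, y[:, j], cv=10) for j in range(2)]
    )

    # Report held-out prediction accuracy per component (near 0 for random X,
    # by construction; real IT data would be the interesting case).
    for j, name in enumerate(("vx", "vy")):
        r = np.corrcoef(y[:, j], y_hat[:, j])[0, 1]
        print(f"{name}: held-out r = {r:.2f}")

A nonlinear temporal-integration readout of the kind the abstract contrasts with this one would, under the same assumptions, replace the single-bin X with a sequence of time bins passed through a temporal model (for example, a small recurrent network) before the regression step.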
