December 2022, Volume 22, Issue 14 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Long-term recordings from area V4 neurons and an accurately-predicting deep convolutional energy model reveal spatial, chromatic and temporal tuning properties under naturalistic conditions
Author Affiliations & Notes
  • Michele Winter
    University of California, Berkeley
  • Tom Dupré la Tour
    University of California, Berkeley
  • Michael Eickenberg
    Flatiron Institute
  • Michael Oliver
    Numerai
  • Jack Gallant
    University of California, Berkeley
  • Footnotes
    Acknowledgements  NIH NEI R01 EY012241-05, NIH NEI R01 EY019684-01A1, ONR 60744755-114407-UCB, ONR N00014-15-1-2861, NIH National Eye Institute Training Grant T32EY007043
Journal of Vision December 2022, Vol.22, 4363. doi:https://doi.org/10.1167/jov.22.14.4363
Abstract

Area V4 is an intermediate processing stage of the ventral visual stream. V4 neurons are selective for color and for shape features of intermediate complexity (e.g., curved edge elements and non-Cartesian gratings). However, current computational models of V4 neurons cannot predict more than a small fraction of the response variance observed under naturalistic conditions. To overcome this limitation, we performed long-term, large-scale neurophysiological recordings of V4 neurons during stimulation with full-color nature videos. This produced a data set of unprecedented size, consisting of up to 7 hours of 60 Hz video data recorded from single V4 neurons. We then developed a biologically plausible deep convolutional energy model and fit the model separately to each of the V4 neurons in the sample. The fit models achieved high prediction performance on a withheld test set. Each model was used to synthesize a predicted optimal pattern (POP) video, the stimulus predicted to elicit the maximal response of the corresponding neuron. These POPs were then analyzed to recover the spatial, chromatic and temporal tuning properties of the V4 population. The POPs recapitulate previous findings from V4 and also reveal new V4 tuning properties. For example, in the spatial domain V4 neurons differ in their tuning for low versus high frequencies, radial versus concentric gratings, texture versus contour, and contour curvature. In the color domain V4 neurons differ in their selectivity for monochromatic versus color patterns, and for blue-yellow, green-magenta and red-cyan patterns. Finally, in the time domain V4 responses range from fast phasic (peak 33-50 ms after stimulus onset) through slow phasic (peak 67-83 ms after stimulus onset) to sustained patterns. In sum, the deep convolutional energy model accurately predicts V4 responses under naturalistic conditions and provides a means to more fully understand and interpret the role of V4 in perception.
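Two techniques named in the abstract can be illustrated concretely. An "energy model" in the vision-science sense computes a phase-invariant response by summing the squared outputs of a quadrature pair of filters, and a predicted optimal pattern is obtained by optimizing the input to maximize the model's predicted response. The sketch below shows both ideas for a single Gabor quadrature pair; it is a minimal illustrative toy, not the authors' fitted deep convolutional network, and the filter parameters, step size, and unit-norm stimulus constraint are assumptions chosen for clarity.

```python
import numpy as np

def gabor_quadrature_pair(size, freq, theta):
    """Even- and odd-phase Gabor filters at orientation theta (radians)."""
    ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = xs * np.cos(theta) + ys * np.sin(theta)   # coordinate along the grating
    env = np.exp(-(xs**2 + ys**2) / (2 * (size / 4) ** 2))  # Gaussian envelope
    even = env * np.cos(2 * np.pi * freq * xr)
    odd = env * np.sin(2 * np.pi * freq * xr)
    return even, odd

def energy_response(image, even, odd):
    """Phase-invariant 'energy': sum of squared quadrature-pair outputs."""
    return np.sum(image * even) ** 2 + np.sum(image * odd) ** 2

def synthesize_pop(even, odd, steps=200, lr=0.1, seed=0):
    """Gradient ascent on the input image to maximize the energy response,
    holding stimulus norm fixed (a simple stand-in for POP synthesis)."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(even.shape) * 0.01
    for _ in range(steps):
        # Analytic gradient of (I.e)^2 + (I.o)^2 with respect to I
        grad = 2 * np.sum(img * even) * even + 2 * np.sum(img * odd) * odd
        img += lr * grad
        img /= np.linalg.norm(img)  # project back onto the unit sphere
    return img
```

Under the norm constraint this ascent behaves like power iteration, so the synthesized pattern converges toward the stimulus direction the model responds to most strongly; in the actual study the same optimization principle is applied to a deep network fit to each neuron, over video rather than a single frame.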
