August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Color and Shape Contingency Representations in Rhesus Macaques
Author Affiliations & Notes
  • Spencer Loggia
    National Eye Institute
  • Stuart Duffield
    National Eye Institute
  • Kurt Braunlich
    National Eye Institute
    National Institute of Mental Health
  • James Cavanaugh
    National Eye Institute
  • Bevil Conway
    National Eye Institute
  • Footnotes
    Acknowledgements  NIH Intramural Research Program
Journal of Vision August 2023, Vol.23, 5793. doi:https://doi.org/10.1167/jov.23.9.5793

      Spencer Loggia, Stuart Duffield, Kurt Braunlich, James Cavanaugh, Bevil Conway; Color and Shape Contingency Representations in Rhesus Macaques. Journal of Vision 2023;23(9):5793. https://doi.org/10.1167/jov.23.9.5793.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Color and shape together are key visual features for object identification. However, the brain structures and computations that support shape-color contingency are not well understood. Prior functional imaging experiments and behavioral observations provide clues: fMRI work has uncovered both joint and separable representations of colors and shapes, and behavioral results have shown that objects can take on powerful color associations. Here we develop a paradigm to connect the fMRI work with the behavioral work. We trained macaques on two separate alternative forced-choice (AFC) tasks. In the first, the ‘train task’, they were rewarded for choosing the correct shape or color of a colored-shape stimulus. In the second, the ‘probe task’, they were rewarded for correctly matching uncolored shapes with colors, and for matching non-color-associated shapes. The set of colors and color-associated shapes was the same in the two tasks. Reinforcement learning models of subject behavior show significantly higher learning rates for shape-to-color than for color-to-shape associations (p<.01), and chance performance on the probe task after mastery of the train task (p<.001). We then collected fMRI data in one animal using a block-design version of the same task. These data reveal that color-associated shapes preferentially activate color-biased regions in V4 and along inferior temporal cortex, as well as areas in prefrontal cortex, while shapes not paired with colors do not. Linear models trained to decode achromatic color-associated shapes could also cross-decode the associated color stimuli in many of the previously identified color-biased regions, with accuracy up to four times chance. These results begin to reveal brain-wide networks across the ventral visual stream and prefrontal cortex that support color-shape association learning.
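The cross-decoding logic described above can be illustrated with a minimal sketch: a linear (nearest-centroid) decoder is trained on response patterns evoked by achromatic color-associated shapes and then tested on patterns evoked by the associated colors. Above-chance transfer indicates a shared shape-color representation. The data below are entirely synthetic (each class is a random latent pattern, with shape and color trials as noisy copies of it); class names, dimensions, and noise levels are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Hypothetical cross-decoding sketch with synthetic "voxel" patterns.
# Shape and color trials of a class share one latent pattern, standing
# in for a learned shape-color contingency.
import random

random.seed(0)
N_VOX, N_TRIALS = 50, 20
CLASSES = ["red-assoc", "green-assoc", "blue-assoc"]  # illustrative labels

# One latent pattern per class.
latent = {c: [random.gauss(0, 1) for _ in range(N_VOX)] for c in CLASSES}

def trials(noise):
    """Noisy copies of each class's latent pattern."""
    return [(c, [v + random.gauss(0, noise) for v in latent[c]])
            for c in CLASSES for _ in range(N_TRIALS)]

shape_trials = trials(noise=0.5)  # achromatic shape presentations
color_trials = trials(noise=0.5)  # associated color presentations

# "Train" the decoder: per-class mean (centroid) of the shape patterns.
centroid = {c: [0.0] * N_VOX for c in CLASSES}
for c, x in shape_trials:
    centroid[c] = [m + xi / N_TRIALS for m, xi in zip(centroid[c], x)]

def predict(x):
    """Assign the class whose centroid is nearest (squared distance)."""
    return min(CLASSES, key=lambda c: sum((xi - mi) ** 2
                                          for xi, mi in zip(x, centroid[c])))

# Cross-decode: classify color trials with the shape-trained decoder.
acc = sum(predict(x) == c for c, x in color_trials) / len(color_trials)
print(f"cross-decoding accuracy: {acc:.2f} (chance = {1/len(CLASSES):.2f})")
```

Because the synthetic shape and color patterns share a latent structure, the shape-trained decoder transfers to color trials well above the 1/3 chance level, which is the signature the abstract's analysis looks for in color-biased cortex.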
