September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Conjunctive representation of colors and shapes in human occipitotemporal and posterior parietal cortices
Author Affiliations & Notes
  • Benjamin Swinchoski
    Yale University
  • JohnMark Taylor
    Columbia University
  • Yaoda Xu
    Yale University
  • Footnotes
Acknowledgements: This project is supported by NIH grant 1R01EY030854 to Y.X. J.T. is supported by NIH grant 1F32EY033654.
Journal of Vision September 2024, Vol.24, 1260. doi:https://doi.org/10.1167/jov.24.10.1260
Citation: Benjamin Swinchoski, JohnMark Taylor, Yaoda Xu; Conjunctive representation of colors and shapes in human occipitotemporal and posterior parietal cortices. Journal of Vision 2024;24(10):1260. https://doi.org/10.1167/jov.24.10.1260.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

How does the human brain jointly represent color and shape? The traditional view holds that color and form are represented by separate visual areas and bound together via selective attention. In contrast, a recent study using simple artificial shape stimuli and an orthogonal luminance-change task found that color and form were largely jointly encoded in the same brain regions (including regions defined by their univariate response to color or shape), albeit in an independent manner: a classifier trained to discriminate shapes in one color could cross-decode the same shapes in a different color. The present study aims to understand how attention may impact feature representation when complex, real-world object shapes are encoded. We used three shapes (generated from side-view silhouettes of cars, helicopters, and ships) and three colors (red, green, and blue, equated in luminance and saturation). We obtained fMRI response patterns from 12 human participants as they viewed blocks of images, each block containing exemplars of the same object and color with slight variations in shape and hue. In different fMRI runs, participants attended to shape, color, or both features, and responded to repetitions in the attended feature dimension(s). Unlike the earlier study, which examined simple shape features with an orthogonal task, we found a drop in cross-color shape decoding relative to within-color shape decoding across occipitotemporal and posterior parietal cortices, regardless of the feature attended. These results indicate that nonlinear conjunctive coding of shape and color exists across the human ventral and dorsal visual regions when attention is directed towards real-world object features.
