Vision Sciences Society Annual Meeting Abstract  |  December 2022
Volume 22, Issue 14  |  Open Access
Dynamic neural representations reveal flexible feature use during scene categorization
Author Affiliations & Notes
  • Michelle Greene
    Bates College
  • Bruce Hansen
    Colgate University
Acknowledgements: James S. McDonnell Foundation grant (220020430) to BCH; National Science Foundation grant (1736394) to BCH and MRG.
Journal of Vision December 2022, Vol. 22, 4103. doi: https://doi.org/10.1167/jov.22.14.4103

      Michelle Greene, Bruce Hansen; Dynamic neural representations reveal flexible feature use during scene categorization. Journal of Vision 2022;22(14):4103. https://doi.org/10.1167/jov.22.14.4103.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

A fundamental goal of vision science is to map the representational states that transform ambient light arrays into perceived environments and events imbued with semantic meaning. Previous work has demonstrated that neural representations are associated with low-level visual features early in visual processing and resemble higher-level features later (Greene & Hansen, 2020). The goal of the current study was to assess the flexibility of feature use. Experiment 1 assessed feature preference in scene categorization using a variant of the triplet similarity task (Hebart et al., 2020). Observers were presented with three images and asked to select the least similar image. We created structured triplets in which one pair of images was similar with respect to scene affordances but dissimilar with respect to objects and texture, a second pair was similar with respect to objects, and the third pair was similar with respect to texture. This design allowed us to assess which feature is most critical for scene similarity when observers are forced to choose among competing features. We found that observers were twice as likely to choose affordance-based similarity and less likely to choose texture-based similarity. Do observers then use affordances to accomplish scene categorization? In Experiment 2, observers performed a scene categorization task while 64-channel EEG was recorded. Eight scene categories served as targets, and in different blocks, distractors were chosen to be similar to each target with respect to either affordances or texture. If affordances are used for categorization, observers would need to fall back on alternative features in the affordance blocks, where affordances were no longer diagnostic of category. We found that observers were slower and less accurate when categorizing scenes in the affordance blocks. More strikingly, whole-brain EEG decoding revealed that neural representations of scene categories emerged ~50 ms later in the affordance blocks, suggesting that the brain preferentially uses affordances over texture for categorization.
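
To illustrate how feature preference can be read out from such structured triplets, the sketch below tallies which shared feature "survives" each odd-one-out choice: selecting an image as least similar implies the remaining pair was judged most similar, so each response votes for the feature that pair shares. This is a minimal sketch, not the authors' analysis code; the trial layout (a 'choice' index plus a mapping from each excluded image to the feature shared by the remaining pair) is a hypothetical convention adopted for illustration.

    from collections import Counter

    # Each structured triplet pairs its three images along competing feature
    # dimensions: one pair shares affordances, one shares objects, one shares
    # texture. Choosing image i as "least similar" leaves the pair that
    # excludes i, so the choice reveals which shared feature the observer
    # weighted most heavily.

    def feature_preference(trials):
        """trials: list of dicts with 'choice' (index of the image picked as
        least similar) and 'pair_feature' (dict mapping each image index to
        the feature shared by the pair that excludes it). Hypothetical layout.
        Returns the proportion of choices favoring each feature."""
        counts = Counter(t["pair_feature"][t["choice"]] for t in trials)
        total = sum(counts.values())
        return {feature: n / total for feature, n in counts.items()}

    # Toy usage with two trials:
    trials = [
        {"choice": 2, "pair_feature": {0: "object", 1: "texture", 2: "affordance"}},
        {"choice": 0, "pair_feature": {0: "affordance", 1: "texture", 2: "object"}},
    ]
    print(feature_preference(trials))  # {'affordance': 1.0}: both choices favored affordances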

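The time-resolved decoding result can be sketched in a similar spirit. Below is a minimal NumPy/scikit-learn example of whole-brain decoding over time: a classifier is cross-validated independently at each time sample of the epoched EEG, and onset latency is taken as the first sustained above-chance run. The data shapes, the logistic-regression classifier, and the onset criterion are illustrative stand-ins, not the authors' pipeline; only the chance level of 1/8 follows directly from the eight scene categories in the abstract.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def decode_timecourse(X, y, cv=5):
        """X: epoched EEG of shape (n_trials, n_channels, n_times);
        y: scene-category label per trial (8 categories here).
        Returns mean cross-validated decoding accuracy at each time sample."""
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return np.array([
            cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
            for t in range(X.shape[-1])
        ])

    def onset_latency(scores, times, chance=1 / 8, margin=0.05, n_consecutive=5):
        """First time at which accuracy stays above chance + margin for
        n_consecutive samples -- a crude stand-in for the statistical
        onset tests typically used to compare conditions."""
        above = scores > chance + margin
        for i in range(len(above) - n_consecutive + 1):
            if above[i:i + n_consecutive].all():
                return times[i]
        return None

    # Running decode_timecourse separately on affordance-block and
    # texture-block epochs and differencing the two onset latencies
    # corresponds to the ~50 ms shift reported in the abstract, e.g.:
    # onset_latency(decode_timecourse(X_affordance, y_aff), times) \
    #     - onset_latency(decode_timecourse(X_texture, y_tex), times)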