Vision Sciences Society Annual Meeting Abstract  |  December 2022
Volume 22, Issue 14  |  Open Access
Shared and distinct representations of visual regularities across levels of abstraction
Author Affiliations & Notes
  • Brynn E. Sherman
    Yale University
  • Ayman Aljishi
    Yale University
  • Kathryn N. Graves
    Yale University
  • Imran H. Quraishi
    Yale University
  • Adithya Sivaraju
    Yale University
  • Eyiyemisi C. Damisah
    Yale University
  • Nicholas B. Turk-Browne
    Yale University
  • Footnotes
    Acknowledgements  NSF GRFP; NIH R01 MH069456; CIFAR
Journal of Vision December 2022, Vol. 22, 3410. https://doi.org/10.1167/jov.22.14.3410
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

We regularly encounter the same objects, people, and places in predictable spatial and temporal configurations (e.g., a sequence of streets and landmarks on a daily commute). The human brain is highly attuned to this structure, as evidenced by the rapid extraction of simple regularities in studies of statistical learning. However, real-world regularities are more complex, often clouded by idiosyncrasies across repetitions (e.g., variable traffic patterns, music playing, and weather), requiring generalization to uncover higher-order structure. Little is known about how the brain extracts and represents structure at various levels of abstraction. To address this question, we recorded intracranial EEG from epilepsy patients while they viewed a stream of scenes containing different levels of regularities. In the exemplar-level condition, the same exact photographs were presented in repeating pairs (e.g., scene B always followed scene A). In the category-level condition, scene photographs were trial-unique but categories were paired (e.g., a mountain always followed a beach). We used a technique known as frequency tagging (which capitalizes on neural entrainment to periodic stimuli) to detect statistical learning in each condition, relative to a random control condition. Throughout the brain, we found robust frequency tagging not only to individual stimuli but also to learned pairs in both the exemplar-level and category-level conditions. The category-level result suggests that the brain abstracts over trial-level variance online to represent higher-order regularities. We next tested whether the same neural sites were involved in learning both exemplar- and category-level structure or whether these two levels of learning recruit distinct neural mechanisms. Although there was reliable overlap in the electrodes exhibiting both kinds of learning, there were also electrodes that represented only category-level or only exemplar-level regularities. Together, these findings provide initial insight into how the brain supports flexible and robust statistical learning across the visual hierarchy.
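
To make the frequency-tagging logic concrete, the sketch below simulates the analysis in Python. The presentation rate, sampling rate, and SNR measure are illustrative assumptions, not parameters reported in the abstract: if individual scenes appear at a fixed rate (assumed here to be 3 Hz), learned pairs define a periodicity at half that rate (1.5 Hz), and neural entrainment shows up as a spectral peak at the pair frequency that exceeds the surrounding noise floor.

import numpy as np

# Hypothetical parameters: the abstract does not report presentation
# rates, so these values are assumed for illustration only.
FS = 1000.0            # sampling rate in Hz (assumed)
ITEM_HZ = 3.0          # one scene every ~333 ms (assumed)
PAIR_HZ = ITEM_HZ / 2  # a learned pair spans two consecutive items

def frequency_tag_snr(eeg, fs, target_hz, n_neighbors=10):
    """Power at a target frequency relative to neighboring bins.

    Frequency-tagging logic: if the brain entrains to a periodicity
    in the stimulus stream, spectral power at that exact frequency
    should exceed the local noise floor, estimated here as the mean
    power of nearby frequency bins.
    """
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    target_bin = np.argmin(np.abs(freqs - target_hz))
    # Take n_neighbors bins on each side, skipping the bins directly
    # adjacent to the target to avoid spectral leakage from the peak.
    side = np.r_[target_bin - n_neighbors - 1 : target_bin - 1,
                 target_bin + 2 : target_bin + n_neighbors + 2]
    return power[target_bin] / power[side].mean()

# Toy usage: 60 s of noise plus a weak oscillation at the pair rate,
# standing in for an electrode that has learned the pair structure.
t = np.arange(0, 60, 1 / FS)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(t.size) + 0.2 * np.sin(2 * np.pi * PAIR_HZ * t)

print(f"SNR at item rate ({ITEM_HZ} Hz): {frequency_tag_snr(eeg, FS, ITEM_HZ):.2f}")
print(f"SNR at pair rate ({PAIR_HZ} Hz): {frequency_tag_snr(eeg, FS, PAIR_HZ):.2f}")

Normalizing the target bin against neighboring bins, rather than using absolute power, controls for the 1/f shape of the EEG spectrum; a pair-rate peak in the learning conditions but not in a random control stream is the signature of statistical learning under this approach.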
