Vision Sciences Society Annual Meeting Abstract | September 2024
The representational dynamics of visual expectations in the brain
Author Affiliations & Notes
  • Laurent Caplette
    Yale University
  • Tetsu Kurumisawa
    Yale University
  • Helen Borges
    Yale University
  • Jose Cortes-Briones
    Yale University
    Veterans Affairs Connecticut Healthcare System
    Connecticut Mental Health Center
  • Nicholas B. Turk-Browne
    Yale University
  • Footnotes
    Acknowledgements: Funding from NSF CCF 1839308 and an NSERC Postdoctoral Fellowship
Journal of Vision September 2024, Vol.24, 1362. doi:https://doi.org/10.1167/jov.24.10.1362
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Visual perception is modulated by expectations resulting from prior knowledge. Despite significant progress in recent decades, the neural mechanisms underlying this phenomenon remain unclear. Notably, the features in which expectations of real-world objects are represented in the brain are largely unknown: are expected objects represented as detailed images, with both low- and high-level features, or only in terms of some features? Which features play a part in the modulation of sensory processing once an object is seen? In this study, participants were shown cues followed by object images. Eight cues were associated with eight object images, with 58% cue validity; these associations were not explicitly learned. Participants had to categorize objects as animate or inanimate while their brain activity was recorded using magnetoencephalography (MEG). We used representational similarity analysis and a convolutional neural network to assess the features in which expected and perceived objects were represented during the task. Perceived objects were represented first in low-level features on posterior sensors and then in high-level features on anterior sensors. Over the same period, expected objects were represented in high-level features on anterior sensors. Interestingly, a low-level representation of expected objects was observed during cue presentation, prior to object onset (starting around 300 ms after cue onset). These results suggest that expected objects are represented with both low- and high-level features, but that only high-level features play a role in the integration of expectations with sensory information. The fact that this high-level representation was visible only on anterior sensors throughout object processing suggests that this integration occurs in high-level brain areas. The precise loci of these phenomena will be investigated further using source-level analyses.
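
As a rough illustration of the analysis approach named in the abstract (representational similarity analysis against CNN-derived feature models), the sketch below correlates MEG sensor-pattern dissimilarities at each timepoint with dissimilarities computed from an early and a late layer of a convolutional network, standing in for low- and high-level features. All names, array shapes, and the random placeholder data are assumptions for illustration only, not the authors' actual pipeline.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_sensors, n_times = 8, 64, 120

# Placeholder data: trial-averaged MEG sensor patterns per object and timepoint,
# and activations of a hypothetical early and late CNN layer for the same objects.
meg = rng.normal(size=(n_objects, n_sensors, n_times))
cnn_early = rng.normal(size=(n_objects, 256))   # stands in for low-level features
cnn_late = rng.normal(size=(n_objects, 512))    # stands in for high-level features

def rdm(patterns):
    # Representational dissimilarity vector: correlation distance between
    # every pair of object patterns (upper triangle of the RDM).
    return pdist(patterns, metric="correlation")

model_rdms = {"low-level": rdm(cnn_early), "high-level": rdm(cnn_late)}

# Correlate each model RDM with the MEG RDM at every timepoint.
timecourses = {name: np.empty(n_times) for name in model_rdms}
for t in range(n_times):
    meg_rdm = rdm(meg[:, :, t])
    for name, model_rdm in model_rdms.items():
        rho, _ = spearmanr(meg_rdm, model_rdm)
        timecourses[name][t] = rho

# Peaks in timecourses["low-level"] versus timecourses["high-level"] indicate
# when each feature type is reflected in the sensor-level signal; restricting
# the sensor dimension to posterior or anterior channels would give the
# spatial contrast described in the abstract.

In practice, such timecourses would be computed per participant and tested against chance with appropriate permutation statistics; this sketch shows only the core RDM comparison.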
