Vision Sciences Society Annual Meeting Abstract  |  September 2024
Open Access
Context as a scaffold and details as bricks: Narrative understanding and updating information
Author Affiliations & Notes
  • Jayoon Choi
    Sungkyunkwan University
  • Seongyun Kim
    Center for Neuroscience and Imaging Research, Institute for Basic Science
  • Minjae Jo
    Sungkyunkwan University
  • Min-Suk Kang
    Sungkyunkwan University
    Center for Neuroscience and Imaging Research, Institute for Basic Science
  • Footnotes
    Acknowledgements  This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2022R1A2C2007363).
Journal of Vision September 2024, Vol. 24, 369. doi: https://doi.org/10.1167/jov.24.10.369
Citation: Jayoon Choi, Seongyun Kim, Minjae Jo, Min-Suk Kang; Context as a scaffold and details as bricks: Narrative understanding and updating information. Journal of Vision 2024;24(10):369. https://doi.org/10.1167/jov.24.10.369.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

When individuals perceive the real world, they actively maintain a representation of the current event, known as an event model, and update it as they take in new information. We investigated how the brain supports the maintenance and modification of the event model while participants comprehended the narratives of four short audiovisual clips in an fMRI scanner. In the first session, participants watched only the visual track of the four clips, with the sound removed (visual encoding). In the second session, participants listened only to the sound extracted from the same clips (auditory encoding) and were instructed to integrate the new auditory information with the visual stimuli from the first session. After completing the narrative comprehension task, participants were surveyed outside the scanner about their experience with the tasks. Across all stories, they rated the second encoding and recall as easier than the first. To identify brain regions showing a common neural response across participants, we computed the inter-subject correlation of BOLD responses separately for the visual and auditory encoding conditions. Across all stories, neural responses in the temporoparietal junction (TPJ) were similar across participants. More importantly, to identify regions maintaining event-model information, we calculated intra-subject correlations between the BOLD responses of the visual and auditory encoding conditions within each participant. We found positive correlations for most stories in the TPJ and posterior cingulate cortex (PCC), indicating that regions within the default mode network (DMN) play a key role not only in story integration but also in updating event models. In summary, participants constructed a robust event model during auditory encoding, aided by the model formed during visual encoding. Together, the neural results suggest that maintaining the necessary information in the TPJ is instrumental in forming a richer event model.
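For readers unfamiliar with the two correlation measures, the sketch below illustrates them on regional BOLD time courses. The array shapes, the leave-one-out variant of inter-subject correlation, and the simulated data are illustrative assumptions for exposition only, not the authors' exact analysis pipeline.

```python
import numpy as np

def inter_subject_correlation(bold):
    """Leave-one-out inter-subject correlation (ISC).

    bold: array of shape (n_subjects, n_timepoints) holding one region's
    BOLD time course (e.g., TPJ) for one encoding condition. Returns one
    ISC value per subject: the Pearson correlation between that subject's
    time course and the mean time course of all other subjects.
    """
    n = bold.shape[0]
    isc = np.empty(n)
    for s in range(n):
        others = np.delete(bold, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(bold[s], others)[0, 1]
    return isc

def intra_subject_correlation(visual, auditory):
    """Within-subject correlation between two encoding conditions.

    visual, auditory: arrays of shape (n_subjects, n_timepoints) for the
    same region and story, recorded during visual and auditory encoding.
    A positive value suggests the region carries shared (maintained)
    event-model information across the two conditions.
    """
    return np.array([np.corrcoef(v, a)[0, 1]
                     for v, a in zip(visual, auditory)])

# Toy usage with simulated data (20 subjects, 300 timepoints):
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)                  # story-driven signal
visual = shared + 0.8 * rng.standard_normal((20, 300))
auditory = shared + 0.8 * rng.standard_normal((20, 300))
print(inter_subject_correlation(visual).mean())            # common response across subjects
print(intra_subject_correlation(visual, auditory).mean())  # maintained across conditions
```

In this framing, high ISC within a condition identifies regions driven consistently by the stimulus across people, while a positive intra-subject correlation between conditions identifies regions whose responses carry over from visual to auditory encoding within the same person.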
