Abstract
When individuals perceive the real world, they actively maintain and update a representation of the current situation, known as an event model. This event model is updated as new information is taken in and processed. We investigated how the brain supports the maintenance and modification of the event model while participants comprehended narratives from four short audiovisual clips in an fMRI scanner. In the first session, participants watched only the visual track of the four clips, with the sound removed (visual encoding). In the second session, participants listened to only the audio track extracted from the same clips (auditory encoding) and were instructed to integrate this new auditory information with the visual stimuli from the previous session. After completing the narrative comprehension task, participants were surveyed outside the scanner about their experience with the tasks. The survey indicated that, across all stories, the second encoding and recall were easier than the first. To identify brain regions showing a common neural response across participants, we computed inter-subject correlations of BOLD responses for the visual and auditory encoding conditions separately. Across all stories, neural responses in the temporoparietal junction (TPJ) were similar across participants. More importantly, to identify regions that maintain event-model information, we computed intra-subject correlations between the BOLD responses of the visual and auditory encoding conditions within each participant. We found positive correlations for most stories in the TPJ and the posterior cingulate cortex (PCC), indicating that regions within the default mode network (DMN) play a key role not only in story integration but also in updating event models. In summary, participants constructed a robust event model during auditory encoding, aided by the event model formed during visual encoding.
Together, these neural results suggest that the maintenance of relevant information in the TPJ is instrumental in forming a richer event model.