Abstract
In this fMRI study, participants watched the same short video several times in separate scan runs. We used a novel cognitive task design that afforded direct, time-locked comparisons between perception and imagery-based memory recall of the same information, in contrast to other studies of audiovisual episodic memory recall in which the memory probe took place after the movie, and at a different pace. In some runs of our paradigm, participants saw and heard the full movie (audio + video, A+V); in other runs, they saw only the video and were instructed to imagine the audio from memory (0+V); and in still other runs, they heard only the audio and were instructed to imagine the video from memory (A+0). Using the DeLINEATE toolbox (http://delineate.it), we trained subject-specific deep-learning models on data from a sensory cortex (visual + auditory) region of interest to discriminate whether two fMRI volumes from different runs represented the same point in time or two different points in time. These models performed well above chance at this task. Although classification was, as expected, best when comparing one full-movie (A+V) run to another, and video-only (0+V) runs tended to classify better than audio-only (A+0) runs, performance was high for all run-type pairings. Critically, this included comparisons between 0+V and A+0 runs, which shared no common sensory information. In fact, classification was higher on data drawn from one 0+V run and one A+0 run than on data drawn from two different A+0 runs. We believe this technique provides a powerful new way to assess reinstatement of brain activity patterns in sensory regions during recall of complex, naturalistic information.
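To make the pairwise classification scheme concrete, the sketch below builds "same time point" versus "different time point" volume pairs from two runs and scores a simple classifier on them. This is not the authors' DeLINEATE pipeline: the synthetic data, the pair-construction function, and the logistic-regression stand-in for the subject-specific deep-learning models are all illustrative assumptions.

```python
# Minimal sketch of the same-time vs. different-time pairing idea.
# NOT the DeLINEATE implementation; all names and the classifier choice
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for two ROI-masked runs of the same movie under different
# run types (e.g., 0+V and A+0): shape (n_timepoints, n_voxels).
n_time, n_vox = 200, 500
run_x = rng.standard_normal((n_time, n_vox))
run_y = run_x + rng.standard_normal((n_time, n_vox))  # shared signal + noise

def make_pairs(run_x, run_y, n_diff_per_t=1):
    """Concatenate volume pairs; label 1 = same time point, 0 = different."""
    feats, labels = [], []
    for t in range(len(run_x)):
        feats.append(np.concatenate([run_x[t], run_y[t]]))  # matched pair
        labels.append(1)
        for _ in range(n_diff_per_t):
            # pick a different, mismatched time point from the other run
            u = (t + rng.integers(1, len(run_y))) % len(run_y)
            feats.append(np.concatenate([run_x[t], run_y[u]]))
            labels.append(0)
    return np.array(feats), np.array(labels)

X, y = make_pairs(run_x, run_y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("same/different time-point accuracy:", clf.score(X_te, y_te))
```

In the study itself, the two runs in each pairing came from different run types, so above-chance pairing accuracy between 0+V and A+0 runs reflects shared, time-locked representational content rather than shared sensory input.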
Acknowledgement: Supported by NSF/EPSCoR grant #1632849 to MRJ and colleagues.