Abstract
When taking a detour on the way home, does the visual system treat the front of your house differently than when you take the normal route? In the study of vision, representations are believed to reflect the physical attributes of the current stimulus. In contrast, in the study of memory, representations contain not only current stimulus features, but also a moving window of recent experience (temporal context). Here we test whether representations in high-level visual cortex contain temporal context information. In a jittered event-related fMRI design, eighteen observers viewed a series of scenes presented one at a time while making orthogonal indoor/outdoor judgments. Several scenes were repeated once over the course of each scanning run, but, unbeknownst to observers, some repetitions were preceded by the same two scenes in the trial sequence as when they were initially encountered (repeated context) and others by two novel scenes (novel context). To measure the effect of context, we examined repetition attenuation in the parahippocampal place area (PPA). If the PPA learns temporal context in one shot, then scenes repeated in their original context will be more similar to the stored representation and will elicit greater attenuation. As a baseline, scenes repeated in novel contexts elicited significant but weak attenuation compared to when they were novel, though only in right PPA. In contrast, scenes repeated in repeated contexts elicited robust attenuation in bilateral PPA compared to when they were novel, and this effect was significantly stronger than for novel-context repeats. A control study with eight new observers demonstrated that this context-dependent attenuation does not reflect carryover from repeated contexts per se, since novel scenes presented in repeated contexts elicited no attenuation. Although temporal context features most prominently in theories of episodic memory, our findings suggest that it plays a broader role in determining the content of perceptual representations.