Abstract
Recently, there has been much debate about whether scene memory is detailed or abstract. Previous work has demonstrated that visual long-term memory (VLTM) can store detailed information about object appearance and information about scene-level viewpoint. However, it is not known whether these two types of information are integrated within episodic representations of scenes as they were viewed. In the current experiments, participants studied a series of pictures, some of which were different viewpoints of the same larger scene. The viewpoints overlapped, so some of the objects were visible in both viewpoints. However, one object's visual appearance was manipulated across the viewpoints (e.g., Viewpoint 1/Object A and Viewpoint 2/Object A′). Long-term memory for the studied scenes was tested using 2-alternative forced-choice (2AFC) tests. On some 2AFC trials, distractor scenes depicted novel conjunctions of previously studied object appearance and scene viewpoint information. For example, if an observer studied a scene depicting Viewpoint 1 with Object A and another scene depicting Viewpoint 2 with Object A′, then on the 2AFC test they would have to choose between Viewpoint 1 with Object A and Viewpoint 1 with Object A′ (i.e., a novel conjunction of viewpoint and object appearance). In three experiments using both incidental and intentional encoding instructions, participants were unable to perform above chance on 2AFC tests that required discriminating between previously viewed and novel conjunctions of object appearance and viewpoint information (Experiments 1a, 1b, and 2). However, performance was better when object appearance (Experiments 1a, 1b, and 2) or scene viewpoint (Experiment 3) alone was sufficient to succeed on the 2AFC test. These results replicate previous work demonstrating good memory for object appearance or viewpoint. However, the current results suggest that object appearance and scene viewpoint are not episodically integrated in VLTM.
Thus, picture memory seems to be detailed but fragmented.