Abstract
After memorizing photographs of scenes, viewers tend to remember having seen beyond the edges of the views ("boundary extension": BE). During normal scene perception, however, we do not try to memorize layout: we attend to everyday tasks. Is BE elicited when attention is diverted from the scene? Photographs (n=48) were presented for 750 ms each. Superimposed on each were red digits (2s and 5s). Observers (Os; N=108) performed a search task (SEARCH-ONLY), a BE task (MEMORY-ONLY), or both (SEARCH-MEMORY). In the search task, Os counted the 5s (0, 1, or 2 per picture). In the BE task, Os rated test pictures on a 5-point scale (from "a lot closer" through "same" to "a lot wider-angle"). In the SEARCH-MEMORY condition, Os performed both tasks, giving the search task priority.
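As a minimal sketch of the trial structure just described, the following Python fragment generates a search-task trial list. The exposure duration, the 48 photographs, and the target counts (0, 1, or 2 fives) come from the abstract; the number of digits per picture and all names are illustrative assumptions.

```python
import random

# Hypothetical trial-list generator for the visual search task: each of
# 48 photographs carries superimposed red digits, of which 0, 1, or 2 are
# the target "5" and the rest are the distractor "2". The total number of
# digits per picture (N_DIGITS) is an assumption; the abstract specifies
# only the target counts and the 750-ms exposure.
N_PHOTOS = 48
N_DIGITS = 6          # assumed digits per photograph
EXPOSURE_MS = 750     # presentation duration from the abstract

def make_trials(seed=0):
    rng = random.Random(seed)
    trials = []
    for photo in range(N_PHOTOS):
        n_targets = rng.choice([0, 1, 2])      # number of 5s to count
        digits = ["5"] * n_targets + ["2"] * (N_DIGITS - n_targets)
        rng.shuffle(digits)                    # scatter targets among distractors
        trials.append({"photo": photo,
                       "digits": digits,
                       "correct_count": n_targets,
                       "exposure_ms": EXPOSURE_MS})
    rng.shuffle(trials)                        # randomize picture order
    return trials
```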
Search accuracy was well above the 33% chance level and did not differ significantly between the SEARCH-ONLY (58.6%) and SEARCH-MEMORY (55.0%) conditions. Mean ratings revealed significant BE in both memory conditions. Interestingly, BE was greater when attention was divided (SEARCH-MEMORY). Was BE constrained by focused attention, or did MEMORY-ONLY Os adopt a deliberate encoding strategy (e.g., "tree 5 mm from the edge")? To thwart any such strategy, a replication was run (N=72) in which the BE task was presented as an unexpected, incidental test at the end of a 12-trial series. Results were the same. These data show that layout extrapolation occurs automatically in scene representation; indeed, extrapolation was greater when attention was diverted to search. Thus BE is available to facilitate the integration of views during scene perception, even when the observer's attention is devoted to other tasks.
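For illustration, a minimal Python sketch of how the two key statistics might be computed: a one-sample t-test of coded boundary ratings against the "same" midpoint, and a binomial test of search accuracy against the 33% chance level. The sign convention for coding, the example values, and all names are assumptions; only the 5-point scale, the chance level, and the test logic follow the abstract.

```python
from scipy import stats

# Assumed coding of the 5-point boundary-rating scale: 0 = "same",
# negative = test picture judged closer than the remembered view (the
# signature of boundary extension), positive = judged wider-angle.
SCALE = {"a lot closer": -2, "a little closer": -1, "same": 0,
         "a little wider": 1, "a lot wider": 2}

def boundary_extension_test(ratings):
    """One-sample t-test of mean coded ratings against the 'same' midpoint.

    A significantly negative mean indicates boundary extension."""
    scores = [SCALE[r] for r in ratings]
    res = stats.ttest_1samp(scores, popmean=0.0)
    return sum(scores) / len(scores), res.statistic, res.pvalue

def search_above_chance(n_correct, n_trials):
    """Binomial test of search accuracy against the 1/3 chance level
    implied by the three response options (0, 1, or 2 targets)."""
    return stats.binomtest(n_correct, n_trials, p=1/3,
                           alternative="greater").pvalue

# Fabricated example values, for illustration only:
mean_rating, t, p = boundary_extension_test(
    ["a little closer", "same", "a little closer", "a lot closer", "same"])
p_search = search_above_chance(n_correct=633, n_trials=1080)
```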