Abstract
Memory for dynamic scenes is better than for static versions (Matthews et al., PB&R, 2007), and this may be due to increased gaze guidance by visual salience (Smith & Mital, JoV, 2013), which increases the chance of reinstating the same scanpath during recognition as during encoding. Replaying only the fixated parts of a static scene during recognition (via a moving window) has been shown to improve memory performance compared with replaying different locations (Foulsham & Kingstone, JEP:G, 2013). However, it is not known whether this effect is enhanced in more naturalistic dynamic scenes. Across two experiments, old/new recognition memory for dynamic and static scenes was tested under three conditions: 1) Full scene; 2) Own, a 6.2° moving window yoked to the participant's raw gaze location from encoding; or 3) Other, a moving window following the scanpath from a different scene. In Experiment 1, memory was tested for 192 three-second clips either immediately after study or after a delay of one or two weeks. Analyses of accuracy and d' showed a significant decrease in performance with delay (immediate > one week = two weeks) and significantly better memory in the Full condition than in either moving-window condition. However, Own was significantly better than Other, confirming the fixation-dependent recognition effect and extending it to dynamic scenes. Surprisingly, there were no differences between static and dynamic scenes. This was confirmed in a second experiment using an immediate test and a blocked design in which the Full condition preceded either moving-window condition. These results indicate that scene memory privileges fixated locations but that peripheral details are also important. Whether it is the fixated scene content or the scanpath that cues memory is not currently known. Future studies must attempt to dissociate these factors and investigate the conditions under which memory for dynamic scenes is superior to that for static scenes.
Meeting abstract presented at VSS 2016