Abstract
Human memory for scenes is known to be remarkably good, yet little is known about the mechanisms observers use to encode and represent visual scenes in long-term memory. The goal of this research is to evaluate the role of visual image complexity in memory for real-world scenes. In a first study, participants rated the complexity of an initial pool of 1000 images of indoor scenes. In a second study, 60 pairs of scenes representing three levels of complexity (low, medium, and high) were selected from the initial pool, with the constraint that the two images in each pair had the same mean complexity, the same complexity variance, the same basic-level category, and the same spatial layout. In a learning phase, participants viewed 60 pictures (1 second per picture). In the testing phase, they saw 30 old pictures and 30 new pictures, each new picture (e.g., the second image of a pair) corresponding to a matched scene (e.g., the first image of the pair) shown during the learning phase. Results indicate that scenes of medium complexity yielded the best performance (d′ ≈ 1.2). Performance for scenes of high and low complexity was lower (d′ < 0.8), although hit and false alarm rates were equal across those conditions. This suggests that scene memory is not a linear function of the quantity of information an image contains.
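For reference, the sensitivity index d′ reported above is computed in signal detection theory as z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal Python sketch, using hypothetical rates for illustration (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 75% hits and 30% false alarms give d' of about 1.2
print(round(d_prime(0.75, 0.30), 2))
```

In practice, hit and false-alarm rates of 0 or 1 are usually corrected (e.g., by a 1/(2N) adjustment) before applying the inverse CDF, since z is undefined at those extremes.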
This research was funded by an NIMH grant (1R03MH068322-1).