Zoya Bylinskii, Phillip Isola, Antonio Torralba, Aude Oliva; Quantifying Context Effects on Image Memorability. Journal of Vision 2015;15(12):82. doi: https://doi.org/10.1167/15.12.82.
Why do some images stick in our minds while others fade away? Recent work suggests that this is partially due to intrinsic differences in image content (Isola 2011, Bainbridge 2013, Borkin 2013). However, the context in which an image appears can also affect its memorability. Previous studies have found that distinct images interfere less with other images in memory and are thus better remembered (Standing 1973, Hunt 2006, Konkle 2010). Yet these effects have not previously been rigorously quantified on large-scale sets of complex, natural stimuli. Our contribution is to quantify image distinctiveness and predict memory performance using information-theoretic measures on a large collection of scene images. We measured the memory performance of both online (Amazon Mechanical Turk) and in-lab participants on an image recognition game (using the protocol of Isola 2011). We systematically varied the image context for over 1,754 images (from 21 indoor and outdoor scene categories) by presenting each image either with other images from the same scene category or with images from different scene categories. We used state-of-the-art computer vision features to quantify the distinctiveness of images relative to other images in the same experimental context, and we correlated image distinctiveness with memorability. We show that by changing an image's context, we can change its distinctiveness and predict the resulting effects on memorability: images that are distinct with respect to one context may no longer be distinct with respect to another. We find that images that are not clear exemplars of their scene category suffer the largest drop in memory performance when combined with images of other categories. Moreover, more diverse image contexts lead to better memory performance overall. Our quantitative approach can inform applications on how to make visual material more memorable.
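The abstract does not specify the exact distinctiveness measure, so as an illustrative sketch only: one common way to operationalize "distinctiveness relative to an experimental context" is the mean feature-space distance from an image to the other images shown in the same context. The feature representation and distance metric below (raw vectors, Euclidean distance) are assumptions for illustration, not the authors' method.

```python
import numpy as np

def distinctiveness(features):
    """Distinctiveness of each image = mean Euclidean distance to all
    other images in the same experimental context.

    features: (n_images, n_dims) array of visual feature vectors
    (the study uses state-of-the-art computer vision features; this
    particular measure is an illustrative assumption).
    Returns an (n_images,) array; higher = more distinct in context.
    """
    # Pairwise Euclidean distances between all images in the context
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    n = len(features)
    # Average over the other n-1 images (self-distance is zero)
    return dists.sum(axis=1) / (n - 1)

# Toy context: 5 images with 3-dim features; image 0 is an outlier,
# so it should come out as the most distinct in this context.
rng = np.random.default_rng(0)
feats = rng.normal(0.0, 0.1, size=(5, 3))
feats[0] += 2.0
scores = distinctiveness(feats)
print(scores.argmax())  # → 0
```

Under this framing, moving an image into a different context changes the set of neighbors the distances are computed against, so the same image can receive a different distinctiveness score, matching the abstract's point that distinctiveness is context-relative.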
Meeting abstract presented at VSS 2015