Abstract
When glancing at a magazine, a website, or a book, we are continually exposed to photographs. Despite this overflow of visual information, humans are remarkably good at remembering thousands of pictures along with some of their visual details. But are all images created equal? Here, we examined whether images have intrinsic features that make them consistently remembered. We ran a Visual Memory Game on Amazon Mechanical Turk: participants (n = 272) viewed a sequence of images and indicated whenever they noticed a repeat. We measured image memorability as the probability that an image will be remembered after a single view. We found inter-subject consistency in our game, indicating that the memorability of a photograph is a stable property that is largely shared across different viewers (Spearman's r = 0.53). Given this consistency, we modeled the contribution of a set of global image descriptors to image memorability and trained a predictor based on these descriptors. We additionally annotated our images with object and scene labels and modeled how each of these labels contributed to image memorability. We found that different image features have consistently different impacts on memorability: low-level image features, such as color and contrast, correlate weakly with memorability (r = 0.02–0.18), whereas multivariate global descriptors and object contents make more robust predictions (r = 0.33–0.41). We also examined false memories, cases in which a participant believes they have seen a picture that was in fact never shown, and modeled which image features tend to give rise to these errors. This work demonstrates that memorability is a stable property of an image and that computational techniques can reveal which image features drive this consistency.
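The two measurements named above can be sketched in code. This is an illustrative sketch, not the paper's implementation: it assumes memorability is the fraction of participants who correctly detected an image's repeat (a hit rate), and that inter-subject consistency is the Spearman rank correlation between memorability scores computed from two independent halves of the participant pool. All function names and the toy hit/miss counts are hypothetical.

```python
def memorability(hits, misses):
    """Probability an image is remembered after a single view (hit rate)."""
    return hits / (hits + misses)

def rankdata(xs):
    """1-based ranks; tied values share their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Toy (hits, misses) counts for four images, from two participant halves.
half1 = [memorability(h, m) for h, m in [(9, 1), (5, 5), (2, 8), (7, 3)]]
half2 = [memorability(h, m) for h, m in [(8, 2), (6, 4), (1, 9), (7, 3)]]
rho = spearman(half1, half2)  # high rho = consistent memorability ranking
```

Ranking the same images by scores from disjoint viewer groups and correlating the ranks is a standard split-half consistency check; a rho well above zero indicates that memorability is shared across viewers rather than idiosyncratic.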
Funded by NSF CAREER awards to A.O. (0546262) and A.T. (0747120), as well as Google research awards to A.O. and A.T.