Vision Sciences Society Annual Meeting Abstract  |  September 2018
Journal of Vision, Volume 18, Issue 10  |  Open Access
Visual memorability in the absence of semantic content
Author Affiliations
  • Qi Lin
    Department of Psychology, Yale University
  • Sami Yousif
    Department of Psychology, Yale University
  • Brian Scholl
    Department of Psychology, Yale University
  • Marvin Chun
    Department of Psychology, Yale University; Interdepartmental Neuroscience Program, Yale School of Medicine
Journal of Vision September 2018, Vol.18, 1302. doi:https://doi.org/10.1167/18.10.1302
Abstract

What makes an image memorable? Recent work has characterized an intrinsic property of images, memorability, which predicts the likelihood of an image being remembered across observers (Isola et al., 2011; Bainbridge et al., 2013). Memorable images frequently contain objects and humans, raising the question of whether there is memorability in the absence of semantic content. Here, we describe visual memorability: memorability that is driven not by semantic content but by low-level visual features per se. Participants viewed a sequence of natural scene images (sampled from Isola et al., 2014) and responded whenever they saw an image that they had seen previously during the task. Replicating previous findings, memorability was reliable across individuals, and these memorability scores were significantly correlated with those from the original study. To eliminate semantic content, we then transformed the original natural scene images using transformations such as phase-scrambling or texture-scrambling, and tested their memorability using the same paradigm in independent samples. Unsurprisingly, transformed images were significantly less memorable than the original meaningful images. Critically, however, we still found reliable memorability for both types of scrambling. That is, certain images were more likely to be remembered across observers even when they contained little to no semantic content. Interestingly, memorability scores for intact, phase-scrambled, and texture-scrambled images were unrelated: an image that was memorable once transformed was not necessarily memorable in its intact form, and vice versa. Furthermore, when we used a computer vision model previously trained to predict memorability (Khosla et al., 2015), its predictions for the scrambled images did not predict the memorability of those scrambled images, although they did predict the memorability of the original images, suggesting that scrambling preserves the low-level features that the model uses to predict memorability. Thus, our results expand prior work and suggest that there is a pure visual memorability that operates independently of semantic content.
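
To make the stimulus manipulation concrete, below is a minimal sketch of phase-scrambling, one of the two scrambling transformations described above, written in Python with NumPy. This is an illustrative reconstruction under common assumptions about the technique, not the authors' actual stimulus-generation code; the function name, parameters, and placeholder image are hypothetical.

    # Illustrative sketch of phase-scrambling a grayscale image (values in [0, 1]).
    # Not the authors' code; a common implementation of the general technique.
    import numpy as np

    def phase_scramble(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        """Randomize the Fourier phase of an image while preserving its amplitude spectrum."""
        fft = np.fft.fft2(image)
        amplitude = np.abs(fft)
        # Adding random phase offsets destroys recognizable structure (and thus
        # semantic content) while keeping low-level spectral statistics intact.
        random_phase = rng.uniform(-np.pi, np.pi, size=image.shape)
        scrambled = amplitude * np.exp(1j * (np.angle(fft) + random_phase))
        result = np.real(np.fft.ifft2(scrambled))
        # Rescale back to [0, 1] for display.
        return (result - result.min()) / (result.max() - result.min())

    rng = np.random.default_rng(0)
    image = rng.random((256, 256))   # placeholder for a natural scene image
    scrambled = phase_scramble(image, rng)

Because only the phase is perturbed, the scrambled image retains the original's amplitude spectrum, which is one reason such transformations are used to remove semantic content while sparing low-level visual features.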

Meeting abstract presented at VSS 2018
