Vision Sciences Society Annual Meeting Abstract  |   December 2022
The representational geometry of images and concepts in perception and memory
Author Affiliations
  • Adva Shoham, Tel Aviv University
  • Idan Grosbard, Tel Aviv University
  • Yoav Ger, Tel Aviv University
  • Shira Kossovsky, Tel Aviv University
  • Tal Barnahor, Tel Aviv University
  • Galit Yovel, Tel Aviv University
Journal of Vision December 2022, Vol.22, 3889. doi:https://doi.org/10.1167/jov.22.14.3889
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Recognition depends on matching a perceptual representation of a stimulus to its representation in memory. Nevertheless, the representation in memory differs from the perceptual representation in several ways. First, the representation in memory is more abstract, enabling generalization across different appearances of familiar categories. Second, it is not purely perceptual but is associated with conceptual information. In the current study we therefore asked how similar the representations of the same stimuli are in perception and memory. To examine the perceptual representations, participants who were familiar or unfamiliar with the identities rated the visual similarity of face images. To examine the visual representations in memory, participants were presented with the names of the same familiar faces and rated their visual similarity. To examine the contribution of conceptual information, participants who were familiar with the same identities rated their semantic similarity. Results show that semantic information contributed to the visual representation of faces more in memory than in perception, whereas visual information contributed more in perception than in memory. Furthermore, the representations of a deep neural network (DNN) that learns to classify faces based on their visual appearance (VGGface) were more correlated with human representations in perception than in memory, whereas the representations of a multi-modal DNN that learns the association between text and images (CLIP) were more correlated with human representations in memory than in perception. Interestingly, human visual representations in perception, but not in memory, were better predicted by a combination of both types of DNNs than by either alone. We conclude that conceptual and perceptual information contribute differently to the representations of categories in perception and memory. This may account for the different types of errors that are typically made in perception and memory tasks.
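The model-human comparisons described above follow the general logic of representational similarity analysis (RSA): pairwise (dis)similarities from each source are assembled into vectors and correlated. The sketch below illustrates that logic in Python under stated assumptions; the embeddings and ratings are random placeholders, and names such as vgg_embeddings and clip_embeddings are hypothetical, not the authors' actual pipeline.

    # Minimal RSA sketch. All data here are random stand-ins; in practice the
    # embeddings would come from VGGface/CLIP and the dissimilarities from
    # human similarity ratings (perception or memory task).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_identities = 20

    # Hypothetical per-identity embeddings from a vision-only DNN (VGGface)
    # and a multimodal text-image DNN (CLIP).
    vgg_embeddings = rng.normal(size=(n_identities, 512))
    clip_embeddings = rng.normal(size=(n_identities, 512))

    # Hypothetical human dissimilarity ratings for all identity pairs.
    n_pairs = n_identities * (n_identities - 1) // 2
    human_dissimilarity = rng.uniform(size=n_pairs)

    # Representational dissimilarity vectors: cosine distance over all pairs.
    vgg_rdm = pdist(vgg_embeddings, metric="cosine")
    clip_rdm = pdist(clip_embeddings, metric="cosine")

    # Correlate each model's geometry with the human geometry; rank
    # correlation is conventional in RSA because rating scales are ordinal.
    rho_vgg, _ = spearmanr(vgg_rdm, human_dissimilarity)
    rho_clip, _ = spearmanr(clip_rdm, human_dissimilarity)
    print(f"VGGface vs. human: rho = {rho_vgg:.3f}")
    print(f"CLIP vs. human:    rho = {rho_clip:.3f}")

    # One way to ask whether the two models jointly predict human judgments
    # better than either alone: fit both RDMs together in a linear model.
    X = np.column_stack([vgg_rdm, clip_rdm, np.ones(n_pairs)])
    beta, *_ = np.linalg.lstsq(X, human_dissimilarity, rcond=None)
    rho_joint, _ = spearmanr(X @ beta, human_dissimilarity)
    print(f"Joint model vs. human: rho = {rho_joint:.3f}")

In the abstract's terms, a higher correlation for VGGface with the perception ratings, a higher correlation for CLIP with the memory ratings, and a joint-model gain only for perception would correspond to the reported pattern.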
