Vision Sciences Society Annual Meeting Abstract  |  August 2023
Journal of Vision, Volume 23, Issue 9
Open Access
Reversed contributions of visual and semantic information to the representations of familiar faces in perception and memory
Author Affiliations & Notes
  • Adva Shoham
    Tel Aviv University
  • Idan Daniel Grosbard
    Tel Aviv University
  • Yuval Navon
    Tel Aviv University
  • Galit Yovel
    Tel Aviv University
  • Footnotes
    Acknowledgements  The study was supported by ISF grant 971/21
Journal of Vision August 2023, Vol.23, 5331. doi:https://doi.org/10.1167/jov.23.9.5331

      Adva Shoham, Idan Daniel Grosbard, Yuval Navon, Galit Yovel; Reversed contributions of visual and semantic information to the representations of familiar faces in perception and memory. Journal of Vision 2023;23(9):5331. https://doi.org/10.1167/jov.23.9.5331.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Familiar faces can be described by their visual features and their biographical information. This information can be retrieved from their images or, in their absence, by recalling it from memory based on their names. But what are the relative contributions of visual and semantic information to mental representations in perception and memory, and in what order are they retrieved? These questions are hard to answer because visual and semantic information are intermixed in human mental representations. Here we addressed them in two studies. In Study 1, participants rated the visual similarity of familiar faces either from their pictures (perception) or by recalling their visual appearance from memory based on their names (memory). To disentangle the contributions of visual and semantic information, we used visual and semantic deep neural networks (DNNs) as predictors of human representations in perception and memory. A face-trained DNN (VGG-16) was used to measure the representational geometry of visual information from the identities' images, and a natural language processing (NLP) DNN was used to measure the representational geometry of semantic information from the Wikipedia descriptions of the famous identities. We found a larger contribution of visual than semantic information to human representations in perception, but the reverse pattern in memory. In Study 2, participants made speeded visual (e.g., hair color) or semantic (e.g., occupation) decisions about familiar faces based on their images (perception) or their names (memory). Reaction times were faster for visual than for semantic decisions in the perception condition, whereas the reverse was true in the memory condition. Taken together, our studies demonstrate reversed contributions and retrieval orders of visual and semantic information in mental representations of familiar faces in perception and memory. Our approach can be used to ask the same questions about other categories, including objects and scenes, as well as voices and sounds.
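
The analysis logic described for Study 1, comparing a human similarity structure against the representational geometries of a visual DNN and a semantic (NLP) DNN, can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the embeddings are random placeholders standing in for VGG-16 face features and NLP features of Wikipedia text, and the array sizes, cosine dissimilarity metric, and Spearman correlation are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_identities = 20

# Hypothetical embeddings, one row per familiar identity.
# In practice these would come from a face-trained VGG-16 (images)
# and an NLP model applied to each identity's Wikipedia description.
visual_emb = rng.normal(size=(n_identities, 512))
semantic_emb = rng.normal(size=(n_identities, 300))

# Pairwise dissimilarities (1 - cosine similarity) define each DNN's
# representational geometry over the same set of identities.
visual_rdm = pdist(visual_emb, metric="cosine")
semantic_rdm = pdist(semantic_emb, metric="cosine")

# Human dissimilarities, e.g., derived from pairwise similarity ratings
# collected in the perception or memory condition (random here).
human_rdm = pdist(rng.normal(size=(n_identities, 10)), metric="cosine")

# Relative contribution of each predictor: correlate the human geometry
# with the visual and semantic DNN geometries and compare the two.
r_visual, _ = spearmanr(human_rdm, visual_rdm)
r_semantic, _ = spearmanr(human_rdm, semantic_rdm)
print(f"visual: {r_visual:.2f}, semantic: {r_semantic:.2f}")

Under this framing, a larger visual than semantic correlation in the perception condition, and the reverse in the memory condition, would correspond to the pattern reported in the abstract.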
