September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
Representations of retrieved face information in visual cortex
Author Affiliations
  • Sue-Hyun Lee
    Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health
  • Brandon Levy
    Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health
  • Chris Baker
    Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health
Journal of Vision September 2015, Vol.15, 94. doi:https://doi.org/10.1167/15.12.94
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Despite the high similarity of human faces, we can easily recognize and discriminate dozens of faces from memory and retrieve how people look. Here, we asked how retrieved face information is represented in cortex. To address this question, we performed an event-related functional magnetic resonance imaging (fMRI) experiment comprising separate perception, learning, and retrieval sessions. During the perception session, inside the scanner, participants were presented with fixed pairings of six auditory cues (pseudowords) and face images (e.g., ‘greds’-man1, ‘drige’-man2), and six auditory cues and shoe images. During the learning session, on a separate day outside the scanner, participants were trained for about one hour to memorize the pseudoword-image associations. Finally, one day after the learning session, participants were scanned and instructed to retrieve each image in response to its paired pseudoword cue. To test the veracity of the retrieved visual information, participants performed forced-choice tests after the retrieval scan session; every participant performed well (> 95% correct). We focused on the patterns of response in face-selective and object-selective cortical areas. Using multivoxel pattern analyses, we found 1) that face-selective and object-selective areas showed category-specific (faces vs. shoes) patterns during both retrieval and perception, 2) that neither face-selective nor object-selective areas showed patterns specific to individual faces or shoes during perception, but 3) that face-selective areas showed patterns of response specific to individual faces during retrieval. Taken together, these results suggest that retrieval of face information generates more discriminative neural responses for individual faces than those evoked by perception of the very same faces.
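The logic of the multivoxel pattern analyses can be illustrated with a toy correlation-based sketch: a condition's pattern is "specific" when same-condition voxel patterns correlate more strongly across scan runs than different-condition patterns do. This is synthetic data and a generic split-half correlation approach, not the authors' actual pipeline; the variable names and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100

# Hypothetical "true" voxel patterns for two individual faces.
base = {cond: rng.normal(size=n_voxels) for cond in ("face1", "face2")}

# Simulated measured patterns: two independent runs, each pattern = true pattern + noise.
runs = {
    (run, cond): base[cond] + rng.normal(scale=0.5, size=n_voxels)
    for run in (1, 2)
    for cond in ("face1", "face2")
}

def pattern_corr(a, b):
    """Pearson correlation between two voxel response patterns."""
    return np.corrcoef(a, b)[0, 1]

# Within-condition: same face compared across runs.
within = np.mean([pattern_corr(runs[(1, c)], runs[(2, c)]) for c in ("face1", "face2")])

# Between-condition: different faces compared across runs.
between = np.mean([
    pattern_corr(runs[(1, "face1")], runs[(2, "face2")]),
    pattern_corr(runs[(1, "face2")], runs[(2, "face1")]),
])

# Pattern specificity for individual faces: within > between.
print(f"within = {within:.2f}, between = {between:.2f}")
```

Under this scheme, a region carrying individual-face information yields a reliably higher within-condition than between-condition correlation, which is the sense in which the face-selective areas here showed face-specific patterns during retrieval.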

Meeting abstract presented at VSS 2015
