Abstract
Despite the high similarity of human faces, we can easily recognize and discriminate dozens of faces from memory and can retrieve how people look. Here, we asked how retrieved face information is represented in cortex. To address this question, we performed an event-related functional magnetic resonance imaging (fMRI) experiment comprising separate perception, learning, and retrieval sessions. During the perception session, inside the scanner, participants were presented with fixed pairings of six auditory cues (pseudowords) with face images (e.g., 'greds'-man1, 'drige'-man2) and six auditory cues with shoe images. During the learning session, on a separate day outside the scanner, participants were trained for about one hour to memorize the pseudoword-image associations. Finally, one day after the learning session, participants were scanned while instructed to retrieve each image in response to its paired pseudoword cue. To verify the retrieved visual information, participants completed forced-choice tests after the retrieval scan session; every participant performed well (>95% correct). We focused on the patterns of response in face-selective and object-selective cortical areas. Using multivoxel pattern analyses, we found (1) that face-selective and object-selective areas showed category-specific (faces vs. shoes) patterns during both retrieval and perception, (2) that neither face- nor object-selective areas showed patterns specific to individual faces or shoes during perception, but (3) that face-selective areas showed patterns of response specific to individual faces during retrieval. Taken together, these results suggest that retrieval of face information generates more discriminative neural responses for individual faces than those evoked by perception of the very same faces.
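The abstract reports multivoxel pattern analyses but does not specify the classifier or cross-validation procedure. The sketch below is only a hypothetical illustration of the general approach (here, category decoding of faces vs. shoes from ROI voxel patterns), assuming a linear SVM with leave-one-run-out cross-validation; the design parameters (n_runs, n_trials_per_run, n_voxels) and the synthetic data are illustrative and not taken from the study.

```python
# Minimal MVPA sketch: decode stimulus category (face vs. shoe) from voxel patterns.
# Assumes a linear SVM and leave-one-run-out cross-validation; data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, n_trials_per_run, n_voxels = 6, 12, 200  # hypothetical design

# Synthetic single-trial patterns from a face-selective ROI:
# label 0 = face trials, label 1 = shoe trials.
X = rng.normal(size=(n_runs * n_trials_per_run, n_voxels))
y = np.tile(np.repeat([0, 1], n_trials_per_run // 2), n_runs)
runs = np.repeat(np.arange(n_runs), n_trials_per_run)

# Inject a small category-specific signal so decoding is above chance.
X[y == 0, :50] += 0.3

# Train and test across runs: each fold holds out one run entirely.
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Category decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

The same scheme could in principle be applied to identity-level decoding (individual faces rather than categories) by relabeling trials by identity, which is the contrast the abstract reports between perception and retrieval.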
Meeting abstract presented at VSS 2015