Abstract
Faces carry multiple signals: some are stable across time, such as an individual's identity, whereas others are subject to change, such as expression. These stable and changeable attributes appear to have opposing computational demands: a mechanism for discriminating identity would need to distill information that remains constant in a face, whereas a mechanism for discriminating expression would need to be sensitive to change. Although this intuition aligns with most neural models of face perception, it remains a matter of intense debate. The main goal of our research is to determine the extent to which the signals conveyed by facial identity and expression are processed by independent mechanisms. To begin addressing this question, we designed an experiment to determine whether rhesus monkeys (Macaca mulatta) can extract both identity and expression cues from face stimuli. Using a two-alternative forced-choice delayed match-to-sample task, we tested four subjects across two task conditions. The stimuli in both conditions were identical; the only difference was whether the task was to match identity or expression. We found that subjects selected expression more often than identity on expression-discrimination trials, and identity more often than expression on identity trials. These results provide the first clear indication that monkeys can extract multiple signals from faces.
Meeting abstract presented at VSS 2017