Abstract
People identify a human face more accurately following adaptation to a synthetically created “anti-face” with “opposite” features (Leopold et al., 2001). Previous experiments have shown that face adaptation survives two-dimensional scaling and shifts in retinal position, placing the locus of the effect in high-level visual areas, beyond those with strict retinotopic organization. In this study, we first adapted observers to three-quarter profile views of anti-faces and tested with frontal views of anti-caricatures. We found that opponent-based face adaptation survives this change in three-dimensional viewpoint. This indicates that face adaptation taps face-encoding mechanisms that operate across view change. To examine the nature of the visual information underlying view-transferable face adaptation, we used opponent-based facial identity adaptation, in combination with stimuli created by a three-dimensional morphing program that operates on laser scans of human heads (Blanz & Vetter, 1999). By adapting and testing with faces that varied from the average only in their three-dimensional shape or surface reflectance, we show that the shape and surface reflectance information in faces can be adapted selectively. In a final experiment, we show that both shape and reflectance adaptation transfer across viewpoint. These findings indicate that the neural representation of faces includes both shape and reflectance information in a form that generalizes across changes in three-dimensional viewpoint.