Abstract
Introduction: Learning algorithms trained on natural image statistics can generate oriented receptive fields similar to those of V1 neurons. Here we ask whether an analogous approach can be taken to study the basis of face viewpoint representations. This extends our previous work showing that deviations from head symmetry can be used to discriminate among head orientations near the frontal view.
Methods: Male and female faces were digitized at 16 points around the perimeter of the head. Each head was digitized at 9 horizontal rotations, from −40° to +40° in 10° steps. For each of these rotations, three vertical rotations were digitized (front, 24° up, and 24° down), yielding a total of 27 views of each head. The 27 views of all heads were then submitted to a principal component (PC) analysis. Psychophysical experiments were conducted to determine whether observers could correctly discriminate head orientation from the head outline alone.
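As a minimal sketch of this kind of analysis, the following assumes each head outline is stored as 16 digitized (x, y) points flattened to a 32-element vector; the array shapes, placeholder data, and use of NumPy are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Assumed layout: n_heads heads, each seen in 27 views (9 horizontal x 3
# vertical rotations), each view a flattened 16-point (x, y) outline.
n_heads, n_views, n_points = 10, 27, 16
rng = np.random.default_rng(0)
outlines = rng.normal(size=(n_heads * n_views, n_points * 2))  # placeholder data

# Center the data and compute principal components via SVD.
X = outlines - outlines.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Proportion of variance explained by each component.
var_explained = S**2 / np.sum(S**2)
print(var_explained[:3])   # variance accounted for by the first three PCs

# Score (weighting) of each head-view sample on each PC.
scores = X @ Vt.T          # rows: samples; columns: PCs
```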
Results: The PC analysis showed that three components accounted for 97% of the variance. PC1 (56%) was positively weighted for rightward rotations and negatively weighted for leftward rotations, with negligible weighting for front and up/down views. PC2 (29%) was heavily weighted for front views and did not discriminate between leftward and rightward rotations, functioning as an estimator of bilateral symmetry. PC3 (12%) was positively weighted for upward views and negatively weighted for downward views, with negligible weighting for horizontal rotation. Psychophysical results showed that observers could accurately estimate head orientation from head outlines alone.
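One way such weighting patterns could be read off, extending the sketch above (and assuming its variables plus a view ordering of 9 horizontal angles with 3 vertical views each, which is itself an assumption):

```python
# Horizontal rotation label for each sample, in degrees.
horiz = np.tile(np.repeat(np.arange(-40, 41, 10), 3), n_heads)

# Mean PC1 score per horizontal angle; a PC1 like the one reported would show
# negative means for leftward angles, ~0 near 0 deg, positive for rightward.
pc1_by_angle = {int(a): scores[horiz == a, 0].mean() for a in np.unique(horiz)}
print(pc1_by_angle)
```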
Conclusions: Principal components can be learned readily by Hebbian neural networks. We therefore hypothesize that neural representations in face-selective areas will reflect the small number of PCs that are theoretically necessary for representing head rotation. Comparisons with neurophysiology appear to support this hypothesis.
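As a minimal sketch of how a Hebbian network can extract PCs, the following uses Sanger's generalized Hebbian algorithm (Sanger, 1989), a standard construction for this purpose; the network size, learning rate, and placeholder data are illustrative assumptions, not necessarily the model the authors have in mind.

```python
import numpy as np

def sanger_update(W, x, lr=0.01):
    """One Hebbian update; rows of W converge toward the leading PCs."""
    y = W @ x  # outputs of the linear units
    # Hebbian term minus a lower-triangular decorrelating feedback term.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 32))   # placeholder outline vectors
data -= data.mean(axis=0)            # Hebbian PCA assumes centered inputs

W = rng.normal(scale=0.1, size=(3, 32))  # 3 units -> 3 leading PCs
for epoch in range(20):
    for x in data:
        W = sanger_update(W, x)
```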
Supported by NIH Grant EY002158 & NSERC Grant #OP0007551