Abstract
How efficiently do we combine information across facial features when recognizing a face? Previous studies have suggested that the perception of a face is not simply the result of an independent analysis of individual facial features, but instead involves a coding of the relationships among features that enhances our ability to recognize a face. Using a psychophysical summation-at-threshold technique, we tested whether an observer's ability to recognize a face is better than what one would expect from their ability to recognize the individual facial features in isolation. Specifically, we measured contrast sensitivity for identifying left eyes, right eyes, noses, and mouths of human faces in isolation as well as in combination. Following Nandy and Tjan¹, we computed an integration index Φ from these sensitivities, defined as Φ = S²(left eye + right eye + nose + mouth) / [S²(left eye) + S²(right eye) + S²(nose) + S²(mouth)], where S is contrast sensitivity. An index of 1 indicates optimal integration of information across features (i.e., observers extract as much information from each feature shown in combination as they do from that feature shown in isolation). An index < 1 indicates sub-optimal integration (i.e., combining features prevents observers from using all of the information they could use when the features were shown in isolation). An index > 1 indicates super-optimal integration (i.e., combining features allows observers to use more of the available information than they could use when the features were shown in isolation). Surprisingly, we find that most observers integrate facial information sub-optimally, in a manner more consistent with a model that bases its decisions on the single best feature.

¹Nandy AS & Tjan BS (2008). Journal of Vision, 8(13):3, 1-20.
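As an illustration of the computation described above, the following is a minimal Python sketch of the integration index. The sensitivity values are hypothetical placeholders for exposition, not data from the study.

    # Minimal sketch of the integration index Phi described in the abstract.
    # All sensitivity values below are hypothetical placeholders.

    def integration_index(s_combined, s_isolated):
        """Phi = S^2(all features together) / sum of S^2 for each feature alone."""
        return s_combined ** 2 / sum(s ** 2 for s in s_isolated)

    # Hypothetical contrast sensitivities for each feature shown in isolation.
    s_isolated = {"left eye": 12.0, "right eye": 11.0, "nose": 8.0, "mouth": 9.0}

    # Hypothetical contrast sensitivity when all four features are shown together.
    s_combined = 14.0

    phi = integration_index(s_combined, s_isolated.values())
    print(f"Phi = {phi:.2f}")  # Phi < 1: sub-optimal; Phi = 1: optimal; Phi > 1: super-optimal

With these placeholder values, Φ is well below 1, i.e., the pattern of sub-optimal integration the abstract reports for most observers.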
This research was funded by National Institutes of Health Grants EY019265 to J.M.G. and EY016093 and EY017707 to B.S.T.