Abstract
The human face usually provides reliable information for determining a person’s unique identity. We tested identification when faces provided misleading information about identity due to illumination and expression variability. "Misleading" information was defined by the performance of a fusion of face recognition algorithms from a recent international competition. Two kinds of stimulus pairs were selected: highly similar images of different people and highly dissimilar images of the same person. In four experiments, humans judged whether the pairs (n = 100) showed the "same person" or "different people". Three versions of the image pairs were tested. In Experiment 1, participants saw the original images, which included the face, neck, and shoulders of each person. In Experiment 2, the original images were cropped to include only the face. In Experiment 3, the original images were edited to remove the face, leaving the hair, neck, and shoulders. Participants matched identity accurately with the original images (d’ = 1.5, se = 0.15), but performed at chance with the face alone (d’ = 0.36, se = 0.15). Performance with the hair, neck, and shoulders was virtually identical to performance with the complete images (d’ = 1.5, se = 0.09). This indicates that body information accounts for the accurate matching of the original, complete images. An item analysis revealed that identification without the face was more accurate than identification with the complete image for 50 percent of the stimuli, indicating that the presence of a face can actually interfere with the use of the body for identification in suboptimal viewing conditions. Experiment 4 replicated Experiment 1, but participants also rated their use of 16 internal (e.g., eye shape) and external (e.g., neck) features. Participants uniformly reported greater reliance on internal than on external features, suggesting limited conscious access to their identification strategy.
Meeting abstract presented at VSS 2012
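Note on the reported sensitivity values: the abstract does not spell out the computation, but d’ is conventionally the signal-detection sensitivity index for a same/different judgment task,

    d’ = z(H) − z(FA),

where H is the hit rate (proportion of "same person" responses to same-person pairs), FA is the false-alarm rate (proportion of "same person" responses to different-person pairs), and z is the inverse of the standard normal cumulative distribution. This conventional definition is assumed here; the exact computation used by the authors is not stated in the abstract.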