Abstract
Several models of face recognition have used data from neuroimaging and neurological studies to argue that the perception of facial expression and the perception of gaze direction are mediated by the same neural system, and therefore that the two dimensions cannot be processed independently. In three experiments, we tested this claim using Garner's selective-attention task, which provides a measure of the degree of interdependence between a given pair of stimulus dimensions. In our experiments, participants made speeded classifications of either facial expression or gaze direction while the other dimension either remained constant (Baseline) or varied randomly across trials (Filtering). Slower performance in Filtering than in Baseline indicates that the two dimensions cannot be perceived independently and suggests a common locus of processing. In Exp. 1, upright photographs depicting two facial expressions (happy or angry) and two gaze directions (looking towards or away from the observer) were presented. Performance in Filtering was slower than in Baseline for both expression and gaze, suggesting that these dimensions cannot be processed independently. In Exp. 2, the same pattern of results was obtained even when gaze was always directed away from the observer (to the left or right). In Exp. 3, we inverted the faces to dissociate face-dependent from face-independent effects. Inversion had a striking effect on selective attention to expression, which was now unaffected by irrelevant variations in gaze; in contrast, inversion had no effect on the pattern of selective attention to gaze. These results suggest that the failure of selective attention to expression, but not to gaze, in upright faces reflects face-related processing. Although these results are consistent with the idea that the perception of expression and the perception of gaze direction are mediated by the same underlying neural system, they also reveal important differences in the way the two dimensions are processed.
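The logic of the Garner comparison can be made concrete with a small analysis sketch. The snippet below is a minimal illustration, not the authors' analysis code; the input file, data frame layout, and column names (participant, judged_dim, condition, rt_ms, correct) are hypothetical. It computes Garner interference as the difference between mean correct reaction times in the Filtering and Baseline conditions, separately for each participant and judged dimension; values reliably above zero indicate a failure of selective attention to that dimension.

```python
import pandas as pd

# Hypothetical per-trial data: one row per trial with columns
#   participant, judged_dim ("expression" or "gaze"),
#   condition ("baseline" or "filtering"), rt_ms, correct (0/1)
trials = pd.read_csv("garner_trials.csv")  # hypothetical file name

# Restrict to correct responses, as is typical for RT analyses
correct = trials[trials["correct"] == 1]

# Mean RT per participant, judged dimension, and condition
mean_rt = (correct
           .groupby(["participant", "judged_dim", "condition"])["rt_ms"]
           .mean()
           .unstack("condition"))

# Garner interference: Filtering minus Baseline.
# Positive values mean the irrelevant dimension could not be ignored,
# i.e., a failure of selective attention to the judged dimension.
mean_rt["garner_interference"] = mean_rt["filtering"] - mean_rt["baseline"]

# Summarize interference separately for expression and gaze judgments
print(mean_rt.groupby("judged_dim")["garner_interference"].describe())
```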