Abstract
In daily life, we must interpret others' facial expressions within a given situation, but in the lab, participants are often asked to assign a label to isolated expressions. We showed participants the same set of emotional facial expressions in two different tasks to investigate whether performance on these tasks draws on different styles of visual processing. Participants (N = 38) viewed slides, each comprising one facial expression and three emotional scenes (Figure 1), presented on a Tobii eye-tracker. Twenty-four posers each contributed one expression to the slides (happiness, anger, fear, or disgust), and scene triads were built from 12 emotion-related scenes. In Matching trials, participants viewed each slide and verbally indicated which scene matched the expression. In Labeling trials, participants viewed each slide again and provided a label for each expression. We examined three areas of interest (AOIs): eyes, mouth, and nose/central face. Our dependent variables (DVs) were the number of times and the length of time that participants looked at each AOI; to control for participants' longer looking times when labeling expressions, we calculated each DV as a percentage of overall looking at the face. Task x AOI interactions (ps < .001) showed that participants looked longer and more often at the nose/central face during Matching trials than during Labeling trials (ps < .001). Conversely, participants looked longer and more often at the eyes during Labeling trials than during Matching trials (ps < .001). This pattern held for all emotions except happiness (Figure 2), as shown by Task x AOI x Emotion interactions (ps < .001). These data show that participants' allocation of visual attention varied with the task presented. Tasks in which participants must assign labels to expressions may overestimate the importance of the eye region while underestimating the importance of the nose/central face region, which appears to be important when participants match expressions to situations.
Meeting abstract presented at VSS 2014