Abstract
We often rely on the facial expressions of others to determine our response to events they can see but we cannot (e.g., a dog barking behind you likely poses no threat if the person facing you is smiling). Recent advances in personal computing make it convenient to study this kind of social monitoring in new ways. Specifically, the video camera atop the screen makes it possible to record the dynamic expressions of participants who are otherwise engaged in the tasks studied by cognitive scientists. These video clips of facial expressions can then be used as stimuli in their own right to study social monitoring processes and abilities.
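To make the recording setup concrete, here is a minimal sketch of capturing a webcam clip during one timed trial. It assumes Python with OpenCV, which is our assumption (the abstract does not name any capture software), and the output file name is hypothetical.

    import time
    import cv2

    cap = cv2.VideoCapture(0)                    # the camera atop the screen
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # some webcams report 0; assume 30
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter("participant01_trial01.mp4",   # hypothetical file name
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    t_end = time.time() + 3.0                    # record for one 3-sec trial window
    while time.time() < t_end:
        ok, frame = cap.read()                   # grab one frame from the webcam
        if not ok:
            break
        writer.write(frame)                      # one frame of the expression clip

    cap.release()
    writer.release()

In practice the capture loop would run alongside the stimulus presentation and be timestamped against trial onsets, but the grab-and-write pattern above is the core of it.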
In our lab we began by selecting 80 photos from the International Affective Picture System (IAPS; Lang et al., 2005) that varied in valence (negative vs. positive) and arousal (low vs. high). In Part 1 the 80 images were presented in a random order for 3 sec each, with participants viewing the complete set three times. The first time, no mention was made of facial expressions; participants were told the camera would record where they were looking while they categorized the images as negative or positive. The second time, they were asked to deliberately make expressions that would convey the emotional tone of the picture to someone else. The third time, they were asked to make expressions that would mislead someone about the picture. A sketch of this three-pass protocol follows.
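The sketch below illustrates the presentation schedule in Python using PsychoPy, which is our assumption (the abstract does not name the presentation software); the image paths and instruction wordings are placeholders.

    import random
    from psychopy import visual, core, event

    IMAGE_PATHS = [f"iaps/{i:02d}.jpg" for i in range(80)]    # placeholder paths
    INSTRUCTIONS = [
        "Categorize each picture as negative or positive.",    # pass 1: spontaneous
        "Make an expression conveying the picture's emotion.",  # pass 2: deliberate
        "Make an expression that misleads about the picture.",  # pass 3: faked
    ]

    win = visual.Window(fullscr=True, color="grey")

    for instruction in INSTRUCTIONS:
        visual.TextStim(win, text=instruction).draw()
        win.flip()
        event.waitKeys()                          # participant starts the pass

        # Each pass shows the complete set in a fresh random order
        for path in random.sample(IMAGE_PATHS, len(IMAGE_PATHS)):
            visual.ImageStim(win, image=path).draw()
            win.flip()
            core.wait(3.0)                        # each picture shown for 3 sec

    win.close()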
In Part 2 the video clips from these three phases were used to answer several questions about social monitoring: Which IAPS pictures reliably elicit spontaneous expressions that convey emotions to viewers? Does reading someone's facial expression improve with training through feedback? How easy is it to discriminate genuine from faked expressions? Answers to these and other questions will be presented in a discussion of how to harness this new technology in the study of social-cognitive perception.