Abstract
Ensemble perception has been shown for a wide range of stimuli, including orientation, motion, and size (Watamaniuk et al., 1992; Dakin et al., 1997; Ariely, 2001; Chong and Treisman, 2003), faces (Haberman and Whitney, 2007) and biological motion (Sweeny, Haroz and Whitney, 2012), yet little is known about the influence of ensemble coding on eye movements. In the present study, subjects were shown an array of 24 faces with similar emotional expressions (drawn from a larger set of 147 morphs spanning happy, sad and angry), either in a random arrangement or spatially organized around the mean (such that adjacent faces had similar expressions). Subjects' eyes were tracked (EyeLink, 1000 Hz) during 1.5 s of free viewing, after which subjects were asked to report the mean emotion of the array. We used a regression model to fit subjects' response errors on the basis of each fixated face. For the randomly arranged displays, the model explained a significant proportion of the variance in subjects' response errors; the perceived ensemble expression was well predicted by a running average of the fixated faces. For the organized displays, by contrast, the model accounted for a substantially smaller proportion of the variance. An additional regression model, which fitted subjects' response errors on the organized trials based on the fixated face that was closest to the mean, was also not significant. The results indicate that perceived ensemble expression is based on the particular faces that are fixated only when the faces are randomly arranged, not when they are spatially organized.
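To make the analysis concrete, the sketch below illustrates the kind of regression described above; it is not the authors' actual code, and all variable names and the simulated trial data are hypothetical. It regresses each trial's response error on the deviation of the running average of the fixated faces from the true array mean, and reports the slope and variance explained.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical simulated data: one entry per trial.
# fixated[t]  : expression values (morph units) of the faces fixated on trial t
# true_mean[t]: mean expression of the full 24-face array on trial t
# reported[t] : subject's reported mean expression on trial t
rng = np.random.default_rng(0)
n_trials = 200
true_mean = rng.uniform(-20.0, 20.0, n_trials)
fixated = [true_mean[t] + rng.normal(0.0, 10.0, rng.integers(3, 8))
           for t in range(n_trials)]
reported = np.array([f.mean() for f in fixated]) + rng.normal(0.0, 5.0, n_trials)

# Predictor: how far the running average of the fixated faces ends up from the
# true array mean. Outcome: the subject's response error on that trial.
running_avg = np.array([f.mean() for f in fixated])
predictor = (running_avg - true_mean).reshape(-1, 1)
response_error = reported - true_mean

model = LinearRegression().fit(predictor, response_error)
r_squared = model.score(predictor, response_error)
print(f"slope = {model.coef_[0]:.2f}, R^2 = {r_squared:.2f}")
```

On this logic, a large R^2 (as reported for the randomly arranged displays) means the fixated faces carry the ensemble judgment, whereas a small R^2 (as for the organized displays) means the reported mean is not well explained by the fixated sample.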
Meeting abstract presented at VSS 2013