Abstract
Classification of facial expressions is thought to require differential allocation of attention: attending to some features and feature regions while limiting attention to others. There is some evidence that adult observers classify expressions using both configural and featural information; however, research documenting which information in the human face leads to successful recognition is incomplete. We applied the Bubbles masking technique (Gosselin & Schyns, 2000) to a dichotomous forced-choice facial expression classification task. Stimuli consisted of six individual faces, each posing six different facial expressions plus neutral. While viewing each face, participants chose one of two prompts presented below the stimulus to classify the image as exhibiting an emotion or its negation (e.g., "happy" or "not happy"). To determine the regions used for classification, the images were masked using Gaussian windows (bubbles), and the degree of obstruction was adjusted adaptively to maintain 75% classification accuracy. The classic Gosselin and Schyns task was adapted for future application to preschool children by reducing the number of trials per test and increasing the number of observers tested. Classification images were calculated for each facial expression, both across and within face identity. Results will be presented as a comparison of the regions diagnostic for human observers to the diagnostic regions used by an ideal observer (following Susskind, 2007). Regions showing an increase relative to the ideal observer will indicate regions of heightened influence on human observers' interpretation of the expression, whereas regions showing a decrease will reflect areas of reduced influence on human observers' decision-making. These results will direct future explorations as we begin to manipulate expression intensity and test 5- to 7-year-old children as well as adults.
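The masking procedure described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes grayscale face images stored as NumPy arrays, and the function names (`bubbles_mask`, `mask_stimulus`, `update_n_bubbles`) are hypothetical. The staircase shown is one common way to target 75% accuracy (a weighted up/down rule whose equilibrium satisfies p·1 = (1−p)·3, i.e. p = 0.75); the original Bubbles work adjusted the number of bubbles by a different adaptive rule.

```python
import numpy as np


def bubbles_mask(shape, centers, sigma):
    """Sum of 2-D Gaussian windows ("bubbles"), clipped to [0, 1]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=float)
    for cy, cx in centers:
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)


def mask_stimulus(image, n_bubbles, sigma, rng, background=0.5):
    """Reveal the face only through n randomly placed bubbles;
    everywhere else the image fades to a uniform background level."""
    centers = np.column_stack([
        rng.integers(0, image.shape[0], n_bubbles),
        rng.integers(0, image.shape[1], n_bubbles),
    ])
    m = bubbles_mask(image.shape, centers, sigma)
    return m * image + (1.0 - m) * background


def update_n_bubbles(n_bubbles, correct):
    """Weighted staircase (an assumption, not the original rule): remove one
    bubble after a correct response, add three after an error, so accuracy
    converges near 75%. Never drop below one bubble."""
    return max(1, n_bubbles - 1 if correct else n_bubbles + 3)
```

Fewer bubbles reveal less of the face, so decreasing the count after correct responses makes the task harder; averaging the masks from correct trials then yields the classification image showing which regions were diagnostic.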