Abstract
Decoding facial expressions of emotion is a crucial ability for successful social interaction. However, little is known about the specific contribution of each cerebral hemisphere to the visual mechanisms underlying successful facial expression categorization. Here, we investigated interhemispheric differences in visual information use during the recognition of four basic facial expressions (anger, fear, disgust, and happiness), that is, in the type and quantity of information required for efficient categorization. The present study used the Bubbles technique (Gosselin & Schyns, 2001) to test whether visual strategies in facial expression categorization differ between hemispheres. Sparse versions of emotional faces were created by sampling facial information at random spatial locations within five non-overlapping spatial frequency bands. Average accuracy was maintained at 62.5% (halfway between chance and perfect performance) by adjusting the number of bubbles on a trial-by-trial basis with QUEST (Watson & Pelli, 1983). Fifteen participants (3 men; Mage = 23.13; SD = 3.04) each categorized 2,200 sparse stimuli presented either in central or in peripheral vision (2.5° of visual angle from the central fixation cross). Overall classification images (summed across the five spatial frequency bands), showing which information in the stimuli correlated with participants' accuracy, were constructed separately for each emotion and presentation location by performing a multiple linear regression of accuracy on bubble locations. A pixel test was applied to each classification image to determine statistical significance (Zcrit = 3.36, p < 0.05, corrected for multiple comparisons). Our results indicate that the left and right hemispheres used different facial regions in the categorization of fearful and angry faces, but not of happy and disgusted faces.
These findings suggest that the two hemispheres successfully use different visual information when processing some of the basic emotions expressed by faces.
Meeting abstract presented at VSS 2014