Abstract
Previous studies using human face stimuli have revealed differences in their detectability and saliency when certain emotions are exhibited. Some of these studies have suggested that different spatial frequency bands are important for the recognition and classification of those emotions; however, there is no universal agreement on which subsets of spatial frequencies, if any, are vital for emotional face classification. Many of these studies have been limited by the number of emotions examined or by their methods, for example, pitting one emotional state directly against another. In the present study we employed stimuli that contained twenty-four different emotional states from the McGill Face Database (affectionate, alarmed, amused, baffled, comforting, contented, convinced, depressed, entertained, fantasizing, fearful, flirtatious, friendly, hateful, hostile, joking, panicked, playful, puzzled, reflective, relaxed, satisfied, terrified and threatening). These faces were presented, randomly interleaved, either in their entirety or after the removal of their low or high spatial frequency content, in a novel, subjective, emotion classification task. The task required observers to "point-and-click" the location within a 2-dimensional emotion space whose axes represented perceived arousal level vs. valence (pleasant vs. unpleasant); no semantic processing of emotion names was required. Using spatial statistics, we compared the 2-dimensional distributions per emotion and per frequency condition. The analysis indicated that different emotions were consistently classified across the entire space independent of spatial frequency content, both within and between observers. Hence, based on these data we conclude that both high and low spatial frequencies can be utilized when identifying and classifying the emotional state of a face.
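The low- and high-pass manipulation of the face stimuli could be sketched as follows. This is a minimal illustration only: the abstract does not report the filter type, cutoff frequency, or display resolution, so the ideal (sharp-edged) Fourier filter, the function name, and the parameter values below are all assumptions.

```python
import numpy as np

def split_spatial_frequencies(img, cutoff_cpd, px_per_deg):
    """Split a grayscale image into low- and high-spatial-frequency
    components with an ideal filter in the Fourier domain.

    img: 2-D array of pixel intensities.
    cutoff_cpd: cutoff in cycles per degree (hypothetical value).
    px_per_deg: display resolution in pixels per degree of visual angle
                (hypothetical value).
    """
    h, w = img.shape
    # fftfreq gives cycles per pixel; scale to cycles per degree.
    fy = np.fft.fftfreq(h) * px_per_deg
    fx = np.fft.fftfreq(w) * px_per_deg
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)

    F = np.fft.fft2(img)
    low_mask = radius <= cutoff_cpd          # keep low SF content
    low = np.real(np.fft.ifft2(F * low_mask))
    high = np.real(np.fft.ifft2(F * ~low_mask))
    return low, high

# Because the two masks partition the frequency plane, the two
# components sum back to the original image.
face = np.random.rand(64, 64)                # stand-in for a face stimulus
low, high = split_spatial_frequencies(face, cutoff_cpd=8.0, px_per_deg=32.0)
```

Presenting `low` alone removes the high spatial frequency content and `high` alone removes the low, corresponding to the two filtered conditions; the unfiltered `face` corresponds to the intact condition.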
Meeting abstract presented at VSS 2017