Sylvain Roy, Cynthia Roy, Zakia Hammal, Daniel Fiset, Caroline Blais, Boutheina Jemel, Frédéric Gosselin; The use of spatio-temporal information in decoding facial expression of emotions. Journal of Vision 2008;8(6):707. doi: 10.1167/8.6.707.
Facial expressions of emotions guide adaptive behaviors by communicating information that can be used to rapidly infer the thoughts and feelings of others. This information has been partially characterized using static images (e.g., the mouth in low spatial frequencies for happiness, the eyes in high spatial frequencies for fear; Smith et al., 2005), but relatively little is known about the contribution of facial movement (but see Cunningham, Kleiner & Bülthoff, 2005). Thirty participants viewed 5,000 sparse versions of 80 static emotional faces expressing the six basic emotions from the STOIC database (Roy et al., 2007), and thirty others viewed the 5,000 sparse versions of their dynamic counterparts. Observers were required to categorize facial expressions as fearful, happy, sad, surprised, disgusted, or angry. More specifically, the sparse static stimuli sampled facial information at random locations in five one-octave SF bands (Gosselin & Schyns, 2001), and the sparse dynamic stimuli randomly sampled space and time (Vinette, Gosselin & Schyns, 2004). Online calibration of sampling density maintained an overall accuracy of 75%. We performed multiple linear regressions of accuracy on sample locations (in space-time for dynamic stimuli) to reveal the effective use of information for every emotion in the static and dynamic conditions. Our results with static stimuli essentially corroborate the findings of Smith et al. (2005), and our preliminary results with dynamic stimuli extend them by providing original data on the spatio-temporal characteristics of facial expression recognition: dynamic facial expressions appear to communicate unique spatio-temporal cues that may differentially contribute to recognition behavior.
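The analysis pipeline described above (random Gaussian-aperture sampling in the spirit of the Bubbles technique, followed by regressing trial-by-trial accuracy on the sampled locations to obtain a classification image) can be sketched in a toy simulation. Everything below is a simplified illustration, not the authors' actual code: the image size, bubble count, the hypothetical "diagnostic" mouth region, and the simulated observer are all assumptions, and the classification image is computed as a per-pixel least-squares slope rather than the full multiple regression reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32        # toy image size (the real stimuli were full face images)
n_trials = 2000   # number of sparse stimuli (the study used 5,000 per group)
n_bubbles = 10    # sampling density (calibrated online in the actual study)
sigma = 2.0       # standard deviation of each Gaussian aperture, in pixels

def bubbles_mask(n_bubbles, shape, sigma, rng):
    """Sum of Gaussian apertures at random locations, clipped to [0, 1]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy = rng.integers(0, shape[0])
        cx = rng.integers(0, shape[1])
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

# Hypothetical diagnostic region: assume the mouth area drives recognition
# (as Smith et al., 2005 report for happiness in low spatial frequencies).
diagnostic = np.zeros((H, W))
diagnostic[22:28, 10:22] = 1.0

masks = np.empty((n_trials, H * W))
correct = np.empty(n_trials)
for t in range(n_trials):
    m = bubbles_mask(n_bubbles, (H, W), sigma, rng)
    masks[t] = m.ravel()
    # Simulated observer: the more diagnostic area revealed, the more likely
    # a correct categorization; 1/6 is chance level for six emotions.
    revealed = (m * diagnostic).sum() / diagnostic.sum()
    p = 1 / 6 + (1 - 1 / 6) * min(1.0, 2.0 * revealed)
    correct[t] = rng.random() < p

# Per-pixel least-squares slope of accuracy on revealed intensity: pixels
# whose exposure predicts correct responses get large positive weights.
mc = masks - masks.mean(axis=0)
cc = correct - correct.mean()
cimage = (mc.T @ cc / n_trials).reshape(H, W)

inside = cimage[diagnostic.astype(bool)].mean()
outside = cimage[~diagnostic.astype(bool)].mean()
print(inside > outside)
```

In this simulation the recovered classification image peaks over the assumed mouth region, mirroring how the regression in the study localizes the facial information that is effectively used for each emotion; the dynamic condition adds a time axis to the sampled locations but the logic is the same.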