Abstract
Temporal cues such as affective vocal tones and scenes are well known to alter people’s perceived category of facial expressions. However, it is unclear whether typical adults differ in their susceptibility to temporal cues when classifying facial expressions. To examine this, we asked twenty-four participants, aged 18 to 25 years, to classify a series of dynamic facial expressions that gradually unfolded from neutral to happy or sarcastic smiles. In two experimental conditions, facial expressions were temporally preceded by affective contexts: audiovisual clips depicting a happy or an angry scenario. In a third (“no-context”) condition, the preceding contexts carried no affective information (i.e., visual and auditory noise). First, compared with the no-context condition, classifications were more accurate (p < 0.001) when the affective contexts predicted the impending facial expressions (i.e., happy clips paired with happy smiles and angry clips paired with sarcastic smiles), suggesting facilitation from predictive contexts. A Pearson’s correlation revealed that the less accurate participants were at classifying facial expressions without an affective context, the greater the magnitude of this facilitation (i.e., the increase in accuracy; p < 0.001). Second, compared with the no-context condition, classifications were less accurate (p = 0.010) when the affective contexts were misleading (i.e., happy and angry clips paired with sarcastic and happy smiles, respectively), suggesting impairment from misleading contexts. There was no significant correlation between participants’ accuracy at classifying facial expressions without an affective context and the magnitude of this impairment (i.e., the decrease in accuracy).
Our findings suggest that people who are poorer at classifying facial expressions presented without contextual information may be more influenced by predictive, but not misleading, temporal cues when such information is available. Individual differences therefore appear to modulate susceptibility to contextual cues when classifying facial expressions.