Vision Sciences Society Annual Meeting Abstract  |  September 2021, Volume 21, Issue 9  |  Open Access
Individual differences in susceptibility to temporal contextual cues during classification of facial expressions
Author Affiliations
  • Ahamed Miflah Hussain Ismail
    University of Nottingham Malaysia
  • Kinenoita Irwantoro
    University of Nottingham Malaysia
  • Nathali Nimsha Nilakshi Lennon
    University of Nottingham Malaysia
Journal of Vision September 2021, Vol.21, 2548. doi:

Temporal cues such as affective vocal tones and scenes are known to alter the perceived category of facial expressions. However, it is unclear whether typical adults differ in their susceptibility to temporal cues when classifying facial expressions. To examine this, we asked twenty-four participants, aged between 18 and 25 years, to classify a series of dynamic facial expressions that gradually unfolded from neutral to happy or sarcastic smiles. In two experimental conditions, facial expressions were temporally preceded by affective contexts: audiovisual clips depicting a happy or an angry scenario. The preceding contexts in a third condition (“no-context”) carried no affective information (i.e., visual and auditory noise). First, compared to the no-context condition, classifications were more accurate (p < 0.001) when the affective contexts predicted the impending facial expressions (i.e., happy clips paired with happy smiles and angry clips paired with sarcastic smiles), suggesting facilitation from predictive contexts. A Pearson’s correlation revealed that the less accurate participants were at classifying facial expressions with no affective context, the greater the magnitude of this facilitation (i.e., the increase in accuracy; p < 0.001). Second, compared to the no-context condition, classifications were less accurate (p = 0.010) when the affective contexts were misleading (i.e., happy and angry clips paired with sarcastic and happy smiles, respectively), suggesting impairment from misleading contexts. There was no significant correlation between participants’ accuracy in classifying facial expressions without an affective context and the magnitude of these impairments (i.e., the decrease in accuracy).
Our findings suggest that people who are poorer at classifying facial expressions appearing without contextual information may be relatively more influenced by predictive, but not misleading, temporal cues when contextual information is available. Therefore, individual differences seem to modulate susceptibility to contextual cues when classifying facial expressions.

