September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Mapping Dynamic Conversational Facial Expressions Across Cultures
Author Affiliations
  • Chaona Chen
    School of Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
  • Oliver Garrod
    Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
  • Philippe Schyns
    School of Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
    Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
  • Rachael Jack
    School of Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
    Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
Journal of Vision August 2017, Vol.17, 834. https://doi.org/10.1167/17.10.834
Abstract

Conversational facial expressions are the most pervasive form of facial expression in real social contexts (e.g., Rozin & Cohen, 2003) and are used to manipulate the flow of conversation – for example, showing encouragement can extend an interaction, whereas showing doubt can re-route or terminate it. Although conversational facial expressions play a central role in human-human interaction (e.g., Bavelas & Chovil, 2000; Chovil, 1991) and human-robot interaction (e.g., Cassell, 2000), comparatively little is known about their face movement patterns, or whether these patterns are similar across cultures (but see also Ekman, 1979; Nusseck, Cunningham, Wallraven, & Bülthoff, 2008). Here, we address this knowledge gap by modelling 50+ dynamic conversational facial expressions in two cultures (54 Western, 58 East Asian observers) using a facial expression generator (Yu, Garrod, & Schyns, 2012), reverse correlation (Ahumada & Lovell, 1971), and subjective perception (see also Gill, Garrod, Jack, & Schyns, 2014; R. E. Jack, Garrod, & Schyns, 2014; R. E. Jack, Garrod, Yu, Caldara, & Schyns, 2012). Cross-cultural comparison of the resulting dynamic facial expression models showed clear cultural similarities in facial expressions such as contented, offended, and sorry, which correspond to culturally common facial expressions of emotion (see R. Jack, Sun, Delis, Garrod, & Schyns, 2016). In contrast, facial expressions such as doubtful, sympathetic, and indecisive showed culture-specific accents. Together, our results enhance knowledge of conversational facial expressions and anticipate their application in informing the design of culturally aware digital economy technologies, such as social robots (e.g., Foster et al., 2012) and virtual humans (e.g., Swartout et al., 2006), to support the evolving communication needs of modern society.
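The reverse-correlation logic referenced above (Ahumada & Lovell, 1971) can be illustrated with a minimal sketch: on each trial a random combination of facial action units (AUs) is animated, the observer categorizes the resulting expression, and the AU patterns are then aggregated per label to estimate which movements drive each percept. The generator, AU count, trial numbers, and labels below are hypothetical placeholders for illustration only, not the actual platform of Yu, Garrod, & Schyns (2012) or the study's stimuli.

import numpy as np

rng = np.random.default_rng(0)

N_AUS = 42          # number of facial action units in the (hypothetical) generator
N_TRIALS = 2400     # random stimuli shown to one observer
LABELS = ["contented", "offended", "sorry", "doubtful", "other"]

# Stimuli: each trial activates a random sparse subset of AUs with random amplitudes in [0, 1].
stimuli = rng.random((N_TRIALS, N_AUS)) * (rng.random((N_TRIALS, N_AUS)) < 0.25)

# Observer responses: simulated at random here; in the real task these are the
# observer's categorizations of each animated face.
responses = rng.choice(LABELS, size=N_TRIALS)

# Reverse correlation: average the AU patterns that elicited each label, relative
# to the overall mean, to estimate the AUs diagnostic of that percept.
baseline = stimuli.mean(axis=0)
models = {
    label: stimuli[responses == label].mean(axis=0) - baseline
    for label in LABELS
    if np.any(responses == label)
}

# AUs with the largest positive weights are those most associated with the label.
for label, weights in models.items():
    top = np.argsort(weights)[::-1][:3]
    print(label, "-> top AUs:", top.tolist())

Under these assumptions, cross-cultural comparison would then amount to comparing the per-label AU weight vectors estimated from each observer group, for example by correlating them between Western and East Asian observers.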

Meeting abstract presented at VSS 2017
