Abstract
With the advent of the digital economy and increasing globalization and cultural integration, cross-cultural social communication is becoming more frequent, and the mutual understanding of mental states (e.g., confusion, boredom) is a key social skill. One of the most powerful tools in social communication is the face, which can flexibly create a broad spectrum of dynamic facial expressions. Yet systematic cultural differences in face signalling and decoding (see Jack, 2013 for a review) present a challenge to the evolving communication needs of modern society (e.g., designing culturally aware digital avatars and companion robots that can adaptively recognize and produce both culture-specific and universal face signals). Understanding which face signals support accurate communication across cultures, and which produce confusion, therefore remains a central question. To address this question, we used a 4D Generative Face Grammar (GFG; Yu et al., 2012) with reverse correlation (Ahumada & Lovell, 1972) to model the dynamic facial expressions of four mental states – ‘thinking,’ ‘interested,’ ‘bored’ and ‘confused’ – in 15 Western Caucasian (WC) and 15 East Asian (EA) observers (see Figure S1, Panel A; see also Jack et al., 2012, 2014; Gill et al., 2014). Cross-cultural comparison of the dynamic models revealed, for each mental state, clear commonalities in Action Unit (AU) patterns (see Figure S1, Panel B, Common Signals) and cultural specificities (Culture-specific Signals). To illustrate, in ‘confused,’ Cheek Raiser/Lip Stretcher are culturally common, whereas Upper Lip Raiser is WC-specific and Jaw Drop is EA-specific. Similarly, in ‘thinking,’ the Chin Raiser is culturally common, whereas the Dimpler is WC-specific and, in contrast, Brow Lowerer/Nostril Compressor are EA-specific. Together, our data provide a common face-signalling basis for cross-cultural social communication and identify confusing face signals, with implications for the digital economy (e.g., algorithms designed to automatically detect face signals; Vinciarelli et al., 2009).
Meeting abstract presented at VSS 2015
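
For readers unfamiliar with the reverse-correlation logic referenced above, the sketch below illustrates the general idea under simplifying assumptions: random Action Unit (AU) combinations are presented on each trial, the observer categorizes each animation, and AUs that occur more often than chance on trials attracting a given categorization are treated as diagnostic of that mental state. The trial format, function names, and simulated data are hypothetical illustrations only and do not reproduce the GFG pipeline or the dynamic (temporal) modelling used in the study.

```python
# Minimal sketch of reverse-correlation aggregation over Action Unit (AU) patterns.
# All names and the trial format are illustrative assumptions, not the authors' code:
# each trial pairs a random AU subset with the observer's categorization response.
from collections import defaultdict
import random

AUS = ["Cheek Raiser", "Lip Stretcher", "Upper Lip Raiser", "Jaw Drop",
       "Chin Raiser", "Dimpler", "Brow Lowerer", "Nostril Compressor"]
STATES = ["thinking", "interested", "bored", "confused", "other"]

def simulate_trials(n_trials=1000, seed=0):
    """Generate hypothetical trials: a random AU subset plus a random response."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        aus = {au for au in AUS if rng.random() < 0.5}  # random AU combination
        response = rng.choice(STATES)                    # observer's categorization
        trials.append((aus, response))
    return trials

def diagnostic_aus(trials, state):
    """Rank AUs by how much more often they appear on trials categorized as `state`
    than across all trials (a simple proxy for reverse-correlation weighting)."""
    base = defaultdict(int)   # AU occurrences over all trials
    hit = defaultdict(int)    # AU occurrences on trials labelled `state`
    n_state = sum(1 for _, r in trials if r == state)
    for aus, response in trials:
        for au in aus:
            base[au] += 1
            if response == state:
                hit[au] += 1
    scores = {au: hit[au] / max(n_state, 1) - base[au] / len(trials) for au in AUS}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    trials = simulate_trials()
    for au, score in diagnostic_aus(trials, "confused")[:3]:
        print(f"{au}: {score:+.3f}")
```

With simulated random responses the scores hover around zero; with real observer data, positive scores would flag the AUs most diagnostic of a given mental state, which could then be compared across WC and EA observer groups.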