Several studies have examined the strategies developed by the visual system to perform this task (e.g., Blais, Fiset, Roy, Saumure, & Gosselin, 2017; Blais, Roy, Fiset, Arguin, & Gosselin, 2012; Dailey et al., 2010; Elfenbein, Beaupré, Lévesque, & Hess, 2007; Fiset et al., 2017; Smith, Cottrell, Gosselin, & Schyns, 2005; Smith & Merlusca, 2014; Sullivan, Ruffman, & Hutton, 2007; Thibault, Levesque, Gosselin, & Hess, 2012). This research has mainly focused on posed facial expressions, that is, expressions exhibited on request. This body of research has uncovered the use of specific visual features for the recognition of each basic facial expression: for instance, the eyes for fear (Adolphs et al., 2005; Smith et al., 2005), the mouth for happiness (Dunlap, 1927; Smith et al., 2005), and the eyebrows, forehead, and eyes for sadness (Eisenbarth & Alpers, 2011; Smith et al., 2005). The mouth has also been shown to be the most useful area for discriminating all the posed basic expressions from one another (Blais et al., 2012; Duncan et al., 2017). However, few studies have assessed how spontaneous expressions are actually decoded. Here we define spontaneous expressions as natural ones that are displayed by an individual without another person requesting such a display (for a similar definition, see Matsumoto, Olide, Schug, Willingham, & Callan, 2009). Previous studies investigating the decoding of these expressions have mostly focused on verifying whether individuals agree on which label to assign to a specific spontaneous expression, and they have shown a lower level of agreement than for posed expressions (for a review, see Kayyal & Russell, 2013).