Finally, our findings revealed a very distinct trajectory for fear, supporting previous evidence that this expression has a special status within the framework of FER (
Richoz et al., 2015;
Rodger et al., 2015). We observed a dynamic advantage for fear only at very low signal intensities; it peaked rapidly before converting into a static advantage. This initial dynamic advantage could reflect the increased saliency elicited by the wide and rapid opening of the eyes when only a very low signal is available to observers (see
Liu et al., 2022). Our data also revealed a static advantage for recognizing fear between 23% and 100% of signal. Although counterintuitive at first sight, this static advantage could be explained by the time course of the diagnostic information conveyed by the expression of fear. Using Bayesian classifiers,
Jack et al. (2014) revealed that fear and surprise share similar muscular activations (upper lid raise, jaw drop) in their early signaling dynamics, leading to systematic confusion between these two emotion categories. The critical diagnostic information (eyebrow raiser;
Jack et al., 2014) that enables accurate discrimination of the two expressions becomes fully available only later in the signaling dynamics. Static expressions of fear, which display the fully evolved late signaling dynamics for 1 second, are maximally informative and could thus be advantageous for the categorization of this expression (see also
Richoz et al., 2018b). Furthermore, given its unique evolutionary significance (i.e., signaling danger), the decoding of fear might recruit additional brain regions or faster neural pathways (e.g., via the amygdala) that could bypass the presumably longer processing trajectory of dynamic faces (e.g.,
Adolphs, 2008;
Furl, Henson, Friston, & Calder, 2013). For instance,
Furl et al. (2013) showed that the amygdala plays a critical role in decoding static and dynamic fearful expressions by recruiting distinct brain areas in a context-sensitive fashion (form or motion) to enhance and optimize their processing. With dynamic faces, the amygdala targets the superior temporal sulcus and V5, both involved in encoding motion information (e.g.,
Pitcher et al., 2011;
Schultz & Pilz, 2009), whereas with static expressions it selectively targets the fusiform face area, a region dedicated to the processing of facial identity (e.g.,
Haxby, Hoffman, & Gobbini, 2000) and static facial expressions (e.g.,
Ganel, Valyear, Goshen-Gottstein, & Goodale, 2005). These findings suggest that the amygdala guides and controls how socially salient information is visually encoded by modulating its connections to dorsal and ventral brain regions.
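The confusability account discussed above (shared early action units, late-arriving diagnostic action unit) can be sketched with a toy naive Bayes classifier. All likelihood values below are invented placeholders for illustration, not the estimates reported by Jack et al. (2014), and the assignment of the diagnostic action unit is deliberately simplified:

```python
# Toy naive Bayes sketch of the diagnostic-information account:
# fear and surprise share early action units (AUs), so posteriors are
# ambiguous until the late diagnostic AU becomes visible.
# All probabilities are hypothetical, for illustration only.

# P(AU is active | expression), hypothetical values:
LIKELIHOODS = {
    "fear":     {"upper_lid_raise": 0.9, "jaw_drop": 0.8, "eyebrow_raise": 0.9},
    "surprise": {"upper_lid_raise": 0.9, "jaw_drop": 0.8, "eyebrow_raise": 0.1},
}

def posterior(observed_aus, priors=None):
    """Posterior over the two expressions given the AUs seen so far."""
    priors = priors or {"fear": 0.5, "surprise": 0.5}
    scores = {}
    for expr, prior in priors.items():
        p = prior
        for au in observed_aus:
            p *= LIKELIHOODS[expr][au]  # naive (conditional independence) assumption
        scores[expr] = p
    total = sum(scores.values())
    return {expr: p / total for expr, p in scores.items()}

# Early signaling dynamics: only the shared AUs are visible,
# so the classifier is maximally confused (posteriors of ~0.5 each).
early = posterior(["upper_lid_raise", "jaw_drop"])

# Late signaling dynamics: the diagnostic AU is now available
# and the posterior shifts decisively toward one category.
late = posterior(["upper_lid_raise", "jaw_drop", "eyebrow_raise"])
```

Under these toy numbers, the early posterior is split evenly between the two categories, whereas adding the diagnostic action unit drives the posterior strongly toward fear, mirroring why fully evolved (static) fear displays would be maximally informative.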