Other models propose that different brain regions (especially the superior temporal sulcus, amygdala, insula, basal ganglia, and orbitofrontal cortex) house the computations of distinct categories of facial expressions of emotion (Allison, Puce, & McCarthy, 2000; Calder, Lawrence, & Young, 2001; Haxby, Hoffman, & Gobbini, 2000; Phan, Wager, Taylor, & Liberzon, 2002; Phillips et al., 1997; Sprengelmeyer, Rausch, Eysel, & Przuntek, 1998). These results support the categorical model (Ekman, 1992; Izard, 2009), which posits a discrete set of emotion categories, each with its own consistent and differential pattern of cortical activation. A major claim is that the amygdala plays a central role in fear (Adolphs, 2002; Calder et al., 2001; Davis, 1992; Rotshtein et al., 2010), although some studies have also found amygdala activation during the recognition of happiness (Killgore & Yurgelun-Todd, 2004), and others have argued that the amygdala is involved in attention and decision making rather than in the processing of specific emotion categories (Adolphs, 2008). It thus remains unclear whether these activation patterns are unique to each emotion, and which brain regions compute which facial expressions of emotion (Batty & Taylor, 2003). With such limited specificity, these models predict only that facial expressions computed in visual areas earlier in the visual processing hierarchy (Riesenhuber & Poggio, 2000) require shorter exposure times. In addition, if every subject requires approximately the same exposure time to recognize each of the emotion categories studied here, this would suggest that these categories are basic elements with similar neural representations in all of us (Ekman, 1992; Izard, 1992). This view is usually known as the universality hypothesis of emotions.