There is strong evidence for sensitivity to certain image properties (such as orientation and spatial frequency) early in visual processing (Hubel & Wiesel,
1968). The early stages appear to detect only simple elongated features such as edges and lines, rather than isolated points or arbitrarily complex structures. There is considerable evidence bearing on the issue of which spatial frequencies convey different forms of facial information (for a review, see Ruiz-Soler & Beltran,
2006). In particular, low spatial frequencies (2–8 cycles per face) are thought to support holistic/configurational properties of the face (Goffaux, Hault, Michel, Vuong, & Rossion,
2005; Goffaux & Rossion,
2006) and only crude emotional information (Schyns & Oliva,
1999), while higher spatial frequencies (8–16 cycles per face) are thought to convey identity (Costen, Parker, & Craw,
1996; Gold, Bennett, & Sekuler,
1999) and more detailed expression (Norman & Ehrlich,
1987; Schyns & Oliva,
1999). Identity and facial expression are generally thought to be processed through a pathway from V1 to the fusiform area (Haxby et al., 2001; Kanwisher, McDermott, & Chun, 1997) that relies on higher spatial frequencies, with the exception of fearful facial expressions, which are thought to be supported by a direct subcortical projection to the amygdala (LeDoux, 1996) using lower spatial frequencies (Vuilleumier, Armony, Driver, & Dolan,
2003). The latter view is not uncontroversial, since (a) high spatial frequency information around the eyes may contribute to the perception of fearful expressions (Smith, Cottrell, Gosselin, & Schyns,
2005) and (b) the amygdala is responsive to a range of non-fearful emotions (Winston, O'Doherty, & Dolan,
2003).
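To make the spatial-frequency bands concrete, the sketch below (not taken from any of the cited studies; the function name, the NumPy-based implementation, and the assumption that the face spans the full image are illustrative choices) band-pass filters a grayscale face image into the low (2-8 cycles per face) and high (8-16 cycles per face) ranges discussed above.

```python
# Illustrative sketch: isolate the low and high spatial-frequency bands of a
# face image by ring-shaped band-pass filtering in the Fourier domain.
# Assumes the face fills the frame, so cycles per image ~= cycles per face.
import numpy as np

def bandpass(image, low_cpf, high_cpf):
    """Keep only spatial frequencies between low_cpf and high_cpf (cycles per face)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequencies in cycles per image
    fx = np.fft.fftfreq(w) * w          # horizontal frequencies in cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpf) & (radius <= high_cpf)
    spectrum = np.fft.fft2(image)       # filter in the frequency domain
    return np.real(np.fft.ifft2(spectrum * mask))

# Hypothetical usage, where `face` is a 2-D grayscale array:
# low_sf  = bandpass(face, 2, 8)    # holistic/configural band
# high_sf = bandpass(face, 8, 16)   # identity / fine-expression band
```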