Face-specific neurons have been found in both monkey and human single-cell recordings (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005; Wang et al., 2014). The processing of faces is distributed over several brain areas (Barraclough & Perrett, 2011), in which different populations of neurons encode facial identities, viewpoints, and emotional expressions (Gothard et al., 2007; Hasselmo et al., 1989). Decoding of fMRI activity patterns during working memory maintenance suggests that memory representations can be found in visual (Christophel & Haynes, 2014; Harrison & Tong, 2009; Serences, Ester, Vogel, & Awh, 2009), parietal, and frontal areas, depending on the memorized visual stimuli, the memory task (Lee, Kravitz, & Baker, 2013), and the level of abstractness of the representation (Christophel, Klink, Spitzer, Roelfsema, & Haynes, 2017). Facial identity can be decoded from activity patterns in temporal (Kriegeskorte, Formisano, Sorger, & Goebel, 2007; Natu et al., 2010) and frontal (Guntupalli, Wheeler, & Gobbini, 2017) brain areas, and facial expression from occipital and temporal areas (Liang et al., 2017). Memorized faces can also be reconstructed on the basis of activation patterns in the angular gyrus of the parietal cortex (Lee & Kuhl, 2016). Taken together, one interpretation of these studies is of a processing hierarchy in which simple visual features can be decoded from primary visual cortex, whereas decoding more complex stimuli requires signals from association cortex. Our results showing asymmetric competition for memory resources between gratings and faces are consistent with this view: neural populations in primary visual cortex could encode orientation information (from both faces and gratings), while populations in face-specific regions higher in the processing hierarchy encode facial expression.
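To make concrete what "decoding from activity patterns" means in the studies cited above, the sketch below illustrates the general multivariate pattern analysis logic on simulated data: a linear classifier is trained to predict the stimulus condition from trial-wise voxel patterns, and above-chance cross-validated accuracy indicates that condition information is present in the patterns. The dimensions, signal strength, and classifier choice here are illustrative assumptions, not parameters taken from any of the cited experiments.

```python
# Illustrative sketch (not from the cited studies): multivariate pattern
# decoding on simulated fMRI-like data. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200  # assumed sizes, chosen for illustration

# Simulate voxel activity for two conditions (e.g., two memorized
# orientations), separated by a weak condition-specific pattern.
labels = rng.integers(0, 2, n_trials)            # condition (0/1) per trial
signal = rng.normal(0, 1, n_voxels)              # condition-specific pattern
patterns = rng.normal(0, 1, (n_trials, n_voxels)) + 0.3 * np.outer(labels, signal)

# Cross-validated decoding accuracy; accuracy reliably above chance (0.5)
# indicates the condition is linearly readable from the activity patterns.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Cross-validation is the key safeguard in this logic: the classifier is always tested on trials it was not trained on, so above-chance accuracy reflects genuine condition information in the patterns rather than overfitting to noise.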