Abstract
Spatial frequency (SF) is a key factor in object and face perception. Faces are recognized best when the peak SF is 10 cycles/face. However, little work has analyzed the dependence of facial expression recognition on SF. Low SF (LSF) fearful faces produce greater right fusiform activation than their neutral counterparts, even when masked by a high SF (HSF) face (Winston et al. 2003. Current Biology 13:1824–1829). Another study suggests that the fusiform gyrus is activated preferentially by HSF information, whereas the amygdala prefers LSF information. These findings suggest automatic processing of emotion via the magnocellular pathway, separate from slower facial identity processing via the parvocellular pathway (Vuilleumier et al. 2003. Nature Neuroscience 6:624–631). Using synthetic faces, the current study shows that both LSF (3.3 cycles/face) and HSF (30 cycles/face) information can be used to recognize emotion. In a 2AFC paradigm, subjects discriminated happy, angry, fearful or sad faces from neutral ones. These results were then compared to impairments in facial identity processing and to previous findings on peripheral and inverted presentation of faces. LSF-defined facial expressions are harder to recognize than their HSF equivalents. Fear, sadness and happiness show greater decrements than anger when SF is lowered. LSF emotion recognition is impaired more than facial identity recognition, suggesting that facial expression recognition depends more on an optimal spatial frequency range than facial identity recognition does.
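The SF manipulation above (3.3 vs. 30 cycles/face) can be sketched as radial band-pass filtering in the Fourier domain, with frequency expressed in cycles per image width. This is a minimal illustration only: the log-Gaussian filter shape, the 2-octave bandwidth, and the function name `bandlimit_face` are assumptions for the sketch, not the study's actual stimulus-generation method.

```python
import numpy as np

def bandlimit_face(img, peak_cpf, bandwidth_octaves=2.0):
    """Band-limit a square face image around peak_cpf cycles/face.

    Applies an illustrative log-Gaussian radial filter in the Fourier
    domain; peak_cpf = 3.3 approximates an LSF face, 30 an HSF face.
    """
    n = img.shape[0]
    f = np.fft.fftfreq(n) * n                 # frequency axis in cycles/face
    radius = np.hypot(f[:, None], f[None, :]) # radial SF of each component
    radius[0, 0] = 1e-9                       # avoid log(0) at the DC term
    sigma = bandwidth_octaves / 2.0
    gain = np.exp(-np.log2(radius / peak_cpf) ** 2 / (2.0 * sigma ** 2))
    gain[0, 0] = 1.0                          # preserve mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

# Hypothetical usage on a face image `face` (square numpy array):
#   lsf_face = bandlimit_face(face, 3.3)
#   hsf_face = bandlimit_face(face, 30.0)
```

The cycles/face unit scales with the stimulus rather than the display, which is why the same cutoff describes the face regardless of viewing distance.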
NIH Grant #EY002158 to HRW