Article | September 2011 | Volume 11, Issue 10
The horizontal tuning of face perception relies on the processing of intermediate and high spatial frequencies
Journal of Vision, September 2011, Vol. 11(10), 1. doi:10.1167/11.10.1
Valerie Goffaux, Jaap van Zon, Christine Schiltz; The horizontal tuning of face perception relies on the processing of intermediate and high spatial frequencies. Journal of Vision 2011;11(10):1. doi:10.1167/11.10.1.

Abstract

It was recently shown that expert face perception relies on the extraction of horizontally oriented visual cues. Picture-plane inversion was found to eliminate this horizontal tuning, suggesting that it contributes to the specificity of face processing. The present experiments sought to determine the spatial frequency (SF) scales supporting the horizontal tuning of face perception. Participants were instructed to match upright and inverted faces that were filtered both in the frequency and orientation domains. Faces in a pair contained horizontal or vertical ranges of information in low, middle, or high SF (LSF, MSF, or HSF). Our findings confirm that upright (but not inverted) face perception is tuned to horizontal orientation. Horizontal tuning was strongest in the MSF range, weaker in the HSF range, and absent in the LSF range. Moreover, face inversion selectively disrupted the ability to process horizontal information in the MSF and HSF ranges. This finding was replicated even when task difficulty was equated across orientation and SF conditions at upright orientation. Our findings suggest that upright face perception is tuned to horizontally oriented face information carried by intermediate and high SF bands. They further indicate that inversion alters the sampling of face information in both the orientation and SF domains.

Introduction
Faces are core social stimuli that the human brain processes effortlessly and to the finest level of precision. The visual machinery enables humans to extract face identity and to monitor modulations of social intentions and emotional states online, despite constant changes in appearance. 
Face processing, though robust in most everyday situations, is severely hampered when the face is turned upside down (Yin, 1969). Inversion in the picture plane damages the processing of various aspects of face information (identity, emotion, gaze processing, e.g., Calder, Young, Keane, & Dean, 2000; Jenkins & Langton, 2003; Valentine, 1988). Although it affects the processing of most objects, inversion impairs face perception so dramatically (e.g., Robbins & McKone, 2007) that the face inversion effect (FIE) is considered to mark the existence of visual mechanisms uniquely engaged for faces (Diamond & Carey, 1986; Leder & Carbon, 2006; Robbins & McKone, 2007). 
Extensive research has therefore been dedicated to clarifying the cause of the FIE. It suggested that the disproportionate effect of inversion on face perception is due to the disruption of interactive, so-called holistic, face processing (Farah, Tanaka, & Drain, 1995; Rhodes, Brake, & Atkinson, 1993; Sergent, 1984; Tanaka & Farah, 1993; Young, Hellawell, & Hay, 1987). Interactive processing refers to the observation that, when a face is viewed in canonical upright position, its local elements (e.g., the features) are not encoded independently of each other; rather, the perception of each element is influenced by the properties (shape, surface, and position) of the surrounding elements (Sergent, 1984). Once the face is inverted, interactive processing is strongly attenuated, as features are found to be encoded more independently of each other (Bartlett & Searcy, 1993; Tanaka & Farah, 1993; Thompson, 1980). Its dramatic vulnerability to inversion suggested that face interactive processing is what makes face perception unique. However, evidence indicates that face inversion also impairs the processing of local feature shape (for a review, see McKone & Yovel, 2009). 
So far, most face perception studies have focused on the aspects of face processing (e.g., interactive versus featural processing) that are disrupted by inversion. Yet, the basic characteristics of the visual information underlying face perception and its vulnerability to planar inversion are still unclear. In other words, it remains unclear how high-level face representations are elaborated from the information provided by low-level visual regions like V1, where neurons are known to decompose retinal input along the dimensions of spatial frequency and orientation (e.g., De Valois, Albrecht, & Thorell, 1982; Hubel & Wiesel, 1968). 
To address this important topic, previous works aimed to determine the SF bands driving face perception. They showed that low SFs (below 8 cycles per face, cpf) serve the interactive processing of face features, while high SFs (above 24 cpf) allow for the local encoding of individual feature cues (e.g., Goffaux, 2009, though see Cheung, Richler, Palmeri, & Gauthier, 2008). Intermediate SFs (between 8 and 16 cpf) are thought to optimally support the extraction of face identity (Costen, Parker, & Craw, 1996; Gold, Bennett, & Sekuler, 1999; Näsänen, 1999). Face inversion was, however, found to disrupt the processing of low, intermediate, and high SFs to a comparable extent (Boutet, Collin, & Faubert, 2003; Gaspar, Sekuler, & Bennett, 2008; Goffaux, 2008; Willenbockel et al., 2010). The absence of FIE modulation across SF suggested that the same primary visual information is extracted from upright and inverted faces and is simply less efficiently encoded/integrated in the latter case (see also Sekuler, Gaspar, Gold, & Bennett, 2004; Yovel & Kanwisher, 2004). 
Recent evidence indicates that not only the SF but also the orientation content of the face stimulus significantly influences face perception and the emergence of the FIE. First, it was shown that, compared to vertical information, the encoding of face identity is most efficient when face input is restricted to horizontal bands of face information (Dakin & Watt, 2009; Goffaux & Dakin, 2010). Furthermore, the FIE and interactive face processing emerge when horizontal, but not vertical, information is available in the face stimulus (Goffaux & Dakin, 2010). The tuning of face-specific processing to horizontal orientations presumably relates to this orientation band revealing the vertical arrangement of inner facial features (Goffaux & Dakin, 2010; Goffaux & Rossion, 2007). 
The above evidence shows that the visual information at the core of the FIE and face perception, in general, relies on the presence of specific SFs and orientations in the visual input. So far, the contribution of SF and orientation to face perception has been explored separately. However, V1 neurons encode SF and orientation jointly and psychophysical work with grating stimuli indicates that orientation tuning depends on the SF content of the input image (Burr & Wijesunda, 1984; Phillips & Wilson, 1984). In order to determine the visual information at the core of the FIE, we sought to explore the joint contribution of these primary visual dimensions to upright and inverted face perception. This approach also enabled us to determine the spatial frequency scales supporting the horizontal tuning of face perception. 
Methods
Subjects
Fifty-three psychology students (Maastricht University, age range: 18–25) gained course credits in exchange for their participation in one of the experiments. Thirty-five subjects participated in the main experiment and eighteen in the control experiment. All subjects provided written informed consent prior to participation. They were naive to the purpose of the experiments and reported normal or corrected-to-normal vision. The experimental protocol was approved by the Faculty Ethics Committee. 
Stimuli
Stimuli were pictures of ten male and ten female faces posing in a front view with a neutral expression. The mean luminance value was subtracted from each image. Filtered stimuli were generated by Fast Fourier transforming the original image in MATLAB 7.0.1 and multiplying the Fourier energy by SF and orientation filters centered on 0° or 90° and on 4, 16, or 64 cycles per image (Figure 1). The bandwidths in orientation (14°) and SF (1 octave) were chosen to match the band-pass properties of V1 neurons (Blakemore & Campbell, 1969; Wilson & Bergen, 1979). Note that this filtering procedure leaves the phase structure of the image untouched and only alters the distribution of Fourier energy across SF and orientation. After the inverse Fourier transform, the luminance and contrast of each image were adjusted to match the average values of the image set before Fourier transform. An egg-shaped mask was superimposed on all stimuli to remove any cue to face identity that may lie outside of the face area. Inverted stimuli were generated by vertically flipping each image. 
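The filtering pipeline described above can be sketched as follows. This is a Python/NumPy analogue of the MATLAB procedure, not the authors' code; the Gaussian filter profiles and the treatment of the stated bandwidths as full widths at half maximum are illustrative assumptions (the paper specifies only the center frequencies, orientations, and bandwidths).

```python
import numpy as np

def filter_face(img, center_cpi, orientation_deg,
                sf_bw_octaves=1.0, ori_bw_deg=14.0):
    """Keep only a selective SF-orientation band of an image.

    Assumptions (not specified in the paper): Gaussian filter profiles,
    with the stated bandwidths treated as full widths at half maximum.
    """
    h, w = img.shape
    # Frequency coordinates in cycles per image
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    radius = np.hypot(fx, fy)
    angle = np.degrees(np.arctan2(fy, fx))

    # Log-Gaussian SF filter centered on `center_cpi`, 1-octave FWHM
    sigma_sf = sf_bw_octaves / 2.355
    logr = np.log2(np.maximum(radius, 1e-9))
    sf_filt = np.exp(-(logr - np.log2(center_cpi)) ** 2 / (2 * sigma_sf ** 2))
    sf_filt[radius == 0] = 0.0            # drop DC (mean luminance)

    # Orientation filter: wrapped Gaussian; angles theta and theta + 180
    # carry the same orientation, so distances are wrapped into [-90, 90)
    sigma_ori = ori_bw_deg / 2.355
    d = (angle - orientation_deg + 90.0) % 180.0 - 90.0
    ori_filt = np.exp(-d ** 2 / (2 * sigma_ori ** 2))

    # Scale Fourier energy only; the phase spectrum is untouched
    spectrum = np.fft.fft2(img - img.mean())
    return np.fft.ifft2(spectrum * sf_filt * ori_filt).real
```

For example, `filter_face(face, 16, 90)` keeps the band around 16 cycles/image at one orientation. Note that `orientation_deg` here refers to the angle of the component in the Fourier plane, which is orthogonal to the stripe orientation in the image; which convention the original filters used is not stated in the paper.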
Figure 1
 
Stimuli. Faces were filtered to restrict information in selective SF–orientation bands. In order to equate performance across orientation by SF conditions at upright planar orientation, Gaussian white noise was superimposed on the face stimulus in horizontal MSF and horizontal HSF conditions in the control experiment. Presentation duration was also adjusted to this aim (see Methods section).
In the control experiment, Gaussian white noise was superimposed on horizontal MSF and horizontal HSF face stimuli (Figure 1, right part). The root-mean-square contrasts of the face and the noise were equal, resulting in a signal-to-noise ratio of 1. 
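The noise manipulation can be sketched in the same spirit. Again, this is a NumPy illustration rather than the original code; the function name and the simple additive mixing are assumptions.

```python
import numpy as np

def add_matched_noise(face, rng=None):
    """Superimpose Gaussian white noise whose root-mean-square contrast
    equals that of the face signal, giving a signal-to-noise ratio of 1."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(face.shape)
    noise *= face.std() / noise.std()   # equate RMS contrast with the face
    return face + noise
```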
Stimuli were displayed against a gray background on an LCD screen using E-prime 1.1 (screen resolution: 1024 × 768, refresh rate: 60 Hz). Viewed at 60 cm, they subtended a visual angle of 6.2 × 8.8°. 
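The reported visual angles follow from image size and viewing distance; as a sanity check, the centimeter sizes below are back-computed from the reported angles and are not stated in the paper.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (in degrees) subtended by a stimulus of the given
    size viewed at the given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At 60 cm, the reported 6.2 x 8.8 deg corresponds to an on-screen image
# of roughly 6.5 x 9.2 cm (back-computed, approximate).
```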
Procedure
Participants were instructed to match faces presented one after the other, in pairs. A trial started with a central cross. After 500 ms, the cross was replaced by a face. This face lasted for 500 ms in the main experiment. In the control experiment, its duration varied across orientation by SF conditions (1200-ms duration for vertically filtered faces, 300 ms for horizontal LSF faces, and 100 ms for horizontal MSF and horizontal HSF faces). After a 400-ms blank, the second face appeared and participants had to decide whether it was the same as or different from the first face. It remained on screen until the participant's response (maximum of 3000 ms). On every trial, the position of the first face was randomly jittered in the xy plane by 20 pixels. The second face was presented at screen center. Within a trial, faces were of the same planar orientation and content condition, but these factors varied randomly from one trial to the next. Participants were invited to rest every 20 trials, and they then also received written feedback about their accuracy (percent correct). Prior to the experiment, participants were trained on the task, first with unfiltered faces and then with 20 filtered faces (yielding 20 upright and 20 inverted trials in each training part). The stimuli presented during training were randomly sampled from the set of stimuli used in the experiment. 
We decided to combine manipulations of duration and noise in our control experiment because pilot studies indicated that it was the most efficient way to equate performance across stimulus conditions at upright orientation while keeping viewing conditions in a conventional range. The drawback of this procedure is that we cannot determine exactly how noise and duration manipulations interacted with task performance. 
There were 20 trials per condition in the main experiment and 40 trials per condition in the control experiment. There were 24 within-subject conditions: planar orientation (upright, inverted), orientation content (horizontal, vertical), SF content (LSF, MSF, HSF), and similarity (same, different). Planar orientation and orientation content conditions were presented in randomly interleaved mini-blocks of 10 trials. SF and similarity conditions varied randomly on the trial level. 
Data analyses
Sensitivity (d′) was computed from the hit and false-alarm rates of each individual subject, following the log-linear approach (Stanislaw & Todorov, 1999). In the main experiment, two subjects were excluded from the analyses because they performed at chance level in at least one of the experimental conditions. The d′ values were submitted to a 2 × 2 × 3 repeated-measures ANOVA with planar orientation (upright, inverted), orientation content (horizontal, vertical), and SF content (LSF, MSF, HSF) as within-subject factors. Conditions were compared pairwise using post-hoc Bonferroni tests. 
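The log-linear approach referenced above adds 0.5 to each hit and false-alarm count and 1 to each trial total before converting the rates to z-scores, which keeps d′ finite even at perfect or zero rates. A minimal sketch (the function name is ours):

```python
from statistics import NormalDist

def dprime_loglinear(hits, n_signal, false_alarms, n_noise):
    """Sensitivity d' from raw counts using the log-linear correction
    (Stanislaw & Todorov, 1999)."""
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

With 20 "same" and 20 "different" trials per condition, as in the main experiment, perfect performance yields a finite d′ near 4 rather than infinity.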
In the control experiment, we used Bonferroni-corrected tests to compare upright to inverted and horizontal to vertical sensitivity to test our a priori assumptions derived from the findings of the main experiment. 
Effect size was computed using partial eta squared (Rosnow & Rosenthal, 1996). 
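Partial eta squared can be computed from sums of squares or, equivalently, recovered from a reported F ratio and its degrees of freedom; the second form reproduces the effect sizes reported in the Results (e.g., F(1,33) = 67.4 gives 0.67). A sketch:

```python
def partial_eta_squared(ss_effect, ss_error):
    """Proportion of variance attributable to an effect, excluding
    variance explained by the other effects in the design."""
    return ss_effect / (ss_effect + ss_error)

def partial_eta_squared_from_f(f_value, df_effect, df_error):
    """Equivalent form using F = (SS_effect/df_effect) / (SS_error/df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)
```

For example, `partial_eta_squared_from_f(67.4, 1, 33)` rounds to 0.67 and `partial_eta_squared_from_f(35.4, 2, 66)` to 0.52, matching the reported main effects of planar orientation and SF content.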
Results
The 2 × 2 × 3 ANOVA revealed significant main effects of planar orientation, SF content, and orientation content (planar orientation: F(1,33) = 67.4, p < 0.0001, partial eta squared: 0.67; SF content: F(2,66) = 35.4, p < 0.0001, partial eta squared: 0.52; orientation content: F(1,33) = 36, p < 0.0001, partial eta squared: 0.52). These main effects were moderated by a significant two-way interaction between planar orientation and orientation content (F(1,33) = 12.7, p < 0.001, partial eta squared: 0.28) and a significant three-way interaction between planar orientation, orientation content, and SF (F(2,66) = 3.1, p < 0.05, partial eta squared: 0.09). 
Post-hoc Bonferroni comparisons were used to compare horizontal and vertical conditions pairwise across planar orientation and SF. This was done to investigate the potential influence of SF content on the horizontal advantage for face processing. When faces were upright, sensitivity was significantly better for horizontally filtered than vertically filtered faces in MSF and HSF (p < 0.00001, partial eta squared: 0.51 and p < 0.014, partial eta squared: 0.27, respectively) but not in LSF (p = 0.2, partial eta squared: 0.15; see Figure 2b, left part). The absence of a horizontal advantage in LSF cannot be accounted for by a floor effect, as performance was clearly above chance level in the upright vertical LSF and horizontal LSF conditions (Figure 2a). When faces were inverted, there was no significant advantage for processing horizontal over vertical face information in any of the SF ranges (ps = 1, partial etas squared < 0.12; see Figure 2b, left part). 
Figure 2
 
(a) Average sensitivity data are shown for the different SF–orientation bands tested. Error bars represent mean square of errors (MSE). (b) Plots depict the size of orientation content effect (horizontal versus vertical) and inversion effect (upright versus inverted) expressed in partial eta squared. (c) Average sensitivity in the control experiment in which performance was equated across SF by orientation content conditions at upright orientation.
Bonferroni post-hoc tests were also used to compare upright and inverted conditions pairwise across SF and orientation bands of face information. This was done to address whether face inversion disrupted the processing of specific SF and/or orientation ranges or whether all ranges were comparably affected. When processing vertical bands of face information, the FIE was absent in each tested SF band (vertical LSF: p = 1, partial eta squared: 0.18; vertical MSF: p = 1, partial eta squared: 0.03; vertical HSF: p = 0.34, partial eta squared: 0.2; see Figure 2b, right part). When processing horizontal face information, the FIE was significant in the MSF (horizontal MSF: p < 0.0001, partial eta squared: 0.53) and HSF (horizontal HSF: p < 0.0003, partial eta squared: 0.4) bands but not in the LSF band (horizontal LSF: p = 0.2, partial eta squared: 0.25; see Figure 2b, right part). Effect sizes indicate that the FIE for horizontally filtered faces was more robust in the MSF range (accounting for 53% of sensitivity variance) than in the HSF range (accounting for 39% of variance). 
The finding that the FIE is most robust when processing horizontally oriented MSF and HSF face information suggests that these ranges of primary visual information mainly drive the horizontal tuning of upright face perception. This is confirmed by the observation that the advantage for processing horizontal over vertical information is significant in MSF and HSF ranges only. 
Alternatively, the FIE may have been larger for horizontally than vertically filtered faces due to differences in task difficulty. In the horizontal conditions, the better sensitivity at upright orientation may have left more room for the FIE to emerge than in the vertical conditions. We addressed this issue in a control experiment where task difficulty was equated across horizontal and vertical conditions at upright orientation (ps > 0.08; see Figure 2c). In these circumstances, we still observed the FIE only for horizontally filtered faces, in both MSF and HSF (FIE in horizontal LSF: p = 1, partial eta squared: 0.24; FIE in horizontal MSF: p < 0.005, partial eta squared: 0.63; FIE in horizontal HSF: p < 0.013, partial eta squared: 0.54; FIE in vertical LSF: p = 1, partial eta squared: 0.008; FIE in vertical MSF: p = 1, partial eta squared: 0.19; FIE in vertical HSF: p = 1, partial eta squared: 0.02). Again, the size of the FIE was larger in the horizontal MSF than in the horizontal HSF band. 
Discussion
The present study investigated how expert face processing, as indexed by the FIE, is characterized by the primary visual information domains of orientation and SF. We also sought to determine the spatial scales supporting the horizontal tuning of face perception. Participants matched pairs of upright and inverted faces that were filtered both in the frequency and orientation domains. Faces in a pair contained horizontal or vertical ranges of information in low, middle, or high SF (LSF, MSF, or HSF). In the main experiment, we report the largest horizontal (over vertical) processing advantage and FIE in MSF, next in HSF, whereas no orientation tuning or FIE was observed in LSF faces. As mentioned earlier, MSF and HSF presumably transmit face cues relevant for face identification and local feature analysis, respectively (e.g., Goffaux, 2009; Näsänen, 1999). Our results thus indicate that the well-documented contribution of MSF to face individuation may relate to the extraction of horizontally oriented visual cues. Horizontal bands reveal the arrangement of facial features along the vertical axis, an aspect that was shown to vary substantially across individual faces (Goffaux & Dakin, 2010) and to be largely disrupted by face inversion (Goffaux & Rossion, 2007). In a recent paper (Goffaux, 2008), we showed that the processing of vertical feature arrangement at upright orientation was also best carried by MSF. The horizontal tuning observed in HSF may reflect the fact that local feature shape is carried by horizontal orientations. 
These findings are in line with recent image analyses conducted by Keil (2008, 2009). This author computed the whitened responses (obtained by flattening the 1/f trend of the SF amplitude spectrum) of Gabor filters to a large set of face images. He observed higher response amplitudes in a range of MSF close to the one tested here (between 10 and 15 cpf; Keil, 2008). Keil (2009) further showed that filter response maxima at MSF mainly occurred at horizontal orientations and that this was driven by the physical structure of the inner facial features (mainly the eyes and mouth but also the nose). These findings suggest that visual processing has adapted to match the statistical properties of face images. However, the physical properties of face images alone cannot account for the present results, as the tuning to horizontal MSF and HSF cues observed at upright orientation was eliminated by inversion. Past (Goffaux & Dakin, 2010) and present evidence suggests that, by conveying the shape of the inner facial features and their vertical arrangement, the horizontal MSF and HSF bands of face information carry the most relevant cues for the high-level processing of face identity. 
In a control experiment, we further showed that the FIE selectively observed in horizontal MSF and HSF replicates when sensitivity is equated across orientation by SF conditions at upright orientation, discarding accounts in terms of task difficulty. The control experiment also confirms previous evidence that horizontal cues are more resistant to alterations in face appearance than vertical cues (Goffaux & Dakin, 2010) as we had to add noise and dramatically reduce the exposure duration of horizontal stimuli in order to equate sensitivity across horizontal and vertical conditions. 
In LSF, there was evidence neither for upright horizontal tuning nor for significant FIE. This is at odds with the studies that separately demonstrated the importance of horizontally oriented cues and LSF scales for the emergence of face-specific computations, such as interactive processing (Flevaris, Robertson, & Bentin, 2008; Goffaux, 2009; Goffaux & Dakin, 2010; Goffaux, Gauthier, & Rossion, 2003; Goffaux & Rossion, 2006; Halit, de Haan, Schyns, & Johnson, 2006). We therefore expected LSF to contribute to the horizontal tuning of face perception. There are several potential accounts for the absence of horizontal tuning in LSF. First, it may indicate that there is no orientation tuning for the processing of LSF face information, even when contrasting orientation ranges separated by 90° as done here. Accordingly, psychophysical evidence based on the use of grating stimuli indicated that orientation tuning is broad at coarse spatial scales (more than 60° at spatial scales comparable to those contained in our LSF condition) and sharpens with increasing SF (Burr & Wijesunda, 1984; Phillips & Wilson, 1984; see also Ferster & Miller, 2000; Troyer, Krukowski, Priebe, & Miller, 1998, though see Mazer, Vinje, McDermott, Schiller, & Gallant, 2002). Second, it may be that other orientations, untested here, drive interactive face processing in LSF. The previous evidence of horizontal orientation contribution to interactive processing (Goffaux & Dakin, 2010) may actually be limited to MSF and HSF ranges. Interactive processing was indeed shown to be attenuated compared to LSF but still present in the middle and high SFs (Goffaux & Rossion, 2006). Future studies should investigate FIE and interactive feature processing more systematically in the orientation domain in order to derive a more complete picture of the orientation tuning of face-specific processing. 
A third, related, possibility is that the FIE observed in horizontal orientation band of face information does not only reflect the disrupted interactivity of face processing but also the impaired processing of local feature shape cues, which has also been shown to suffer from face inversion (e.g., McKone & Yovel, 2009; Rhodes, Hayward, & Winkler, 2006) and is best conveyed by horizontal orientations (see Figure 1; Keil, 2009). 
More generally, our results indicate that the FIE observed in horizontal bands of face information is mainly due to the disrupted processing of intermediate SFs and, to a lesser extent, of high SFs. The face information extracted at upright orientation thus seems to differ from that extracted at inverted orientation. This is an important finding since the exact causes of the FIE have been extensively investigated over the last decades (since the seminal finding by Yin, 1969) and are still hotly debated (see McKone & Yovel, 2009; Riesenhuber & Wolff, 2009; Rossion, 2008; Yovel, 2009). Inversion was shown to qualitatively alter the way faces are perceived by selectively disrupting the interactive, so-called holistic, processing of features while preserving the processing of their local properties (Farah et al., 1995; Rhodes et al., 1993; Sergent, 1984; Tanaka & Farah, 1993; Young et al., 1987). The qualitative view has been challenged by evidence that, in some circumstances (McKone & Yovel, 2009), inversion disrupts the processing of interactive and local properties of facial features equally (Riesenhuber, Jarudi, Gilad, & Sinha, 2004; Sekuler et al., 2004; Yovel & Kanwisher, 2004). This suggested that inversion affects face perception quantitatively, by generally reducing its signal-to-noise ratio (a view already proposed by Valentine, 1988). Previous evidence that the FIE is of equal magnitude across SFs (Boutet et al., 2003; Gaspar et al., 2008; Goffaux, 2008; Willenbockel et al., 2010) further supported the quantitative view of the FIE. However, our results indicate that these previous findings were due to the use of stimuli that were unrestricted in the orientation domain. 
When faces were inverted in these studies, the disrupted processing of intermediate to high spatial frequency face information conveyed by horizontal orientation bands may have been compensated by information provided by the vertical orientation bands that are less vulnerable to face inversion, thus masking the effect of SF on FIE magnitude. 
Faces convey a wealth of fundamental social cues, such as an individual's social intentions (via gaze direction) and emotional states (via expression). Here, we neglected these core social aspects, as we used a discrimination task with faces displaying fixed gaze and a neutral expression. Future studies will indicate whether these other aspects of face information show an SF–orientation tuning similar to the one observed here or whether orientation and SF are flexibly sampled depending on the processing goal (Morrison & Schyns, 2001). 
Our finding that face perception is constrained by primary dimensions of the visual input does not imply that its specificity originates early in visual processing, i.e., in V1. On the contrary, the fact that the horizontal tuning of expert face processing is disrupted by planar inversion unequivocally indicates that it cannot merely be accounted for by the physical properties of the face image (as documented by Keil, 2008, 2009). Rather, we suggest that observer-dependent biases in the orientation and SF sampling of face information significantly contribute to the uniqueness of face perception. Considering primary dimensions of visual information thus yields improved insight into the visual information underlying high-level expert face perception (Watt & Dakin, 2010). 
Acknowledgments
We thank Steven C. Dakin for providing image filtering codes as well as Dietmar Hestermann and Sanne ten Oever for their help during data acquisition. 
Commercial relationships: none. 
Corresponding author: Valérie Goffaux. 
Email: Valerie.Goffaux@maastrichtuniversity.nl. 
Address: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands. 
References
Bartlett, J. C., & Searcy, J. (1993). Inversion and configuration of faces. Cognitive Psychology, 25, 281–316.
Blakemore, C., & Campbell, F. W. (1969). On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images. The Journal of Physiology, 203, 237–260.
Boutet, I., Collin, C., & Faubert, J. (2003). Configural face encoding and spatial frequency information. Perception & Psychophysics, 65, 1078–1093.
Burr, D. C., & Wijesunda, S. (1984). Orientation discrimination depends on spatial frequency. Vision Research, 31, 1449–1452.
Calder, A. J., Young, A. W., Keane, J., & Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance, 26, 527–551.
Cheung, O. S., Richler, J. J., Palmeri, T. J., & Gauthier, I. (2008). Revisiting the role of spatial frequencies in the holistic processing of faces. Journal of Experimental Psychology: Human Perception and Performance, 34, 1327–1336.
Costen, N. P., Parker, D. M., & Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception & Psychophysics, 58, 602–612.
Dakin, S. C., & Watt, R. J. (2009). Biological "bar codes" in human faces. Journal of Vision, 9(4):2, 1–10, http://www.journalofvision.org/content/9/4/2, doi:10.1167/9.4.2.
De Valois, R. L., Albrecht, D. G., & Thorell, L. G. (1982). Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22, 545–559.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.
Farah, M. J., Tanaka, J. W., & Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21, 628–634.
Ferster, D., & Miller, K. D. (2000). Neural mechanisms of orientation selectivity in the visual cortex. Annual Review of Neuroscience, 23, 441–471.
Flevaris, A. V., Robertson, L. C., & Bentin, S. (2008). Using spatial frequency scales for processing face features and face configuration: An ERP analysis. Brain Research, 15, 100–109.
Gaspar, C., Sekuler, A. B., & Bennett, P. J. (2008). Spatial frequency tuning of upright and inverted face identification. Vision Research, 48, 2817–2826.
Goffaux, V. (2008). The horizontal and vertical relations in upright faces are transmitted by different spatial frequency ranges. Acta Psychologica, 128, 119–126.
Goffaux, V. (2009). Spatial interactions in upright and inverted faces: Re-exploration of spatial scale influence. Vision Research, 49, 774–781.
Goffaux, V., & Dakin, S. C. (2010). Horizontal information drives the behavioral signatures of face processing. Frontiers in Psychology, 1, 1–14.
Goffaux, V., Gauthier, I., & Rossion, B. (2003). Spatial scale contribution to early visual differences between face and object processing. Cognitive Brain Research, 16, 416–424.
Goffaux, V., & Rossion, B. (2006). Faces are "spatial": Holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception and Performance, 32, 1023–1039.
Goffaux, V., & Rossion, B. (2007). Face inversion disproportionately impairs the perception of vertical but not horizontal relations between features. Journal of Experimental Psychology: Human Perception and Performance, 33, 995–1002.
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999). Identification of band-pass filtered letters and faces by human and ideal observers. Vision Research, 39, 3537–3560.
Halit, H., de Haan, M., Schyns, P. G., & Johnson, M. H. (2006). Is high-spatial frequency information used in the early stages of face detection? Brain Research, 30, 154–161.
Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195, 215–243.
Jenkins, J., & Langton, S. R. (2003). Configural processing in the perception of eye-gaze direction. Perception, 32, 1181–1188.
Keil, M. S. (2008). Does face image statistics predict a preferred spatial frequency for human face processing? Proceedings of the Royal Society of London B: Biological Sciences, 275, 2095–2100.
Keil, M. S. (2009). "I look in your eyes, honey": Internal face features induce spatial frequency preference for human face processing. PLoS Computational Biology, 5, e1000329.
Leder, H., & Carbon, C. C. (2006). Face-specific configural processing of relational information. British Journal of Psychology, 97, 19–29.
Mazer, J. A., Vinje, W. E., McDermott, J., Schiller, P. H., & Gallant, J. L. (2002). Spatial frequency and orientation tuning dynamics in area V1. Proceedings of the National Academy of Sciences of the United States of America, 99, 1645–1650.
McKone, E., & Yovel, G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin & Review, 16, 778–797.
Morrison, D. J., & Schyns, P. G. (2001). Usage of spatial scales for the categorization of faces, objects, and scenes. Psychonomic Bulletin & Review, 8, 454–469.
Näsänen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision Research, 39, 3824–3833.
Phillips G. C. Wilson H. R. (1984). Orientation bandwidths of spatial mechanisms measured by masking. Journal of the Optical Society of America A, 1, 226–232. [PubMed] [CrossRef]
Rhodes G. Brake S. Atkinson A. P. (1993). What's lost in inverted faces? Cognition, 47, 25–57. [PubMed] [CrossRef] [PubMed]
Rhodes G. Hayward W. G. Winkler C. (2006). Expert face coding: Configural and component coding of own-race and other-race faces. Psychonomic Bulletin & Review, 13, 499–505. [PubMed] [CrossRef] [PubMed]
Riesenhuber M. Jarudi I. Gilad S. Sinha P. (2004). Face processing in humans is compatible with a simple shape-based model of vision. Proceedings of the Royal Society B: Biological Sciences, 271(Suppl. 6), S448–S450. [PubMed] [CrossRef]
Riesenhuber M. Wolff B. S. (2009). Task effects, performance levels, features, configurations, and holistic face processing: A reply to Rossion. Acta Psychologica, 132, 286–292. [PubMed] [CrossRef] [PubMed]
Robbins R. McKone E. (2007). No face-like processing for objects-of-expertise in three behavioural tasks. Cognition, 103, 34–79. [PubMed] [CrossRef] [PubMed]
Rosnow R. L. Rosenthal R. (1996). Computing contrasts, effect sizes, and counternulls on other people's published data: General procedures for research consumers. Psychological Methods, 1, 331–340. [CrossRef]
Rossion B. (2008). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 128, 274–289. [PubMed] [CrossRef] [PubMed]
Sekuler A. B. Gaspar C. M. Gold J. M. Bennett P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14, 391–396. [PubMed] [CrossRef] [PubMed]
Sergent J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242. [PubMed] [CrossRef] [PubMed]
Stanislaw H. Todorov N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31, 137–149. [PubMed] [CrossRef]
Tanaka J. W. Farah M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46, 225–245. [PubMed] [CrossRef] [PubMed]
Thompson P. (1980). Margaret Thatcher—A new illusion. Perception, 9, 483–484. [PubMed] [CrossRef] [PubMed]
Troyer T. W. Krukowski A. E. Priebe N. J. Miller K. D. (1998). Contrast-invariant orientation tuning in cat visual cortex: Thalamocortical input tuning and correlation-based intracortical connectivity. Journal of Neuroscience, 18, 5908–5927. [PubMed] [PubMed]
Valentine T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471–491. [PubMed] [CrossRef] [PubMed]
Watt R. J. Dakin S. C. (2010). The utility of image descriptions in the initial stages of vision: A case study of printed text. British Journal of Psychology, 101, 1–26. [PubMed] [CrossRef] [PubMed]
Willenbockel V. Fiset D. Chauvin A. Blais C. Arguin M. Tanaka J. W. et al. (2010). Does face inversion change spatial frequency tuning? Journal of Experimental Psychology: Human Perception and Performance, 36, 122–135. [PubMed] [CrossRef] [PubMed]
Wilson H. R. Bergen J. R. (1979). A four mechanism model for threshold spatial vision. Vision Research, 19, 19–32. [PubMed] [CrossRef] [PubMed]
Yin R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145. [CrossRef]
Young A. M. Hellawell D. Hay D. C. (1987). Configural information in face perception. Perception, 10, 747–759. [PubMed] [CrossRef]
Yovel G. (2009). The shape of facial features and the spacing among them generate similar inversion effects: A reply to Rossion (2008). Acta Psychologica, 132, 293–299. [PubMed] [CrossRef] [PubMed]
Yovel G. Kanwisher N. (2004). Face perception: Domain specific, not process specific. Neuron, 44, 889–898. [PubMed] [PubMed]
Figure 1
 
Stimuli. Faces were filtered to restrict information to selective SF–orientation bands. To equate performance across the orientation-by-SF conditions at upright planar orientation, Gaussian white noise was superimposed on the face stimuli in the horizontal MSF and horizontal HSF conditions of the control experiment; presentation duration was adjusted for the same purpose (see Methods section).
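The filtering described in the caption above — restricting a face image to one spatial-frequency band and one orientation band — can be illustrated with a Fourier-domain mask. This is a minimal sketch, not the authors' exact pipeline: the hard SF cutoffs, the Gaussian orientation window, and its 20° bandwidth are assumptions for illustration.

```python
import numpy as np

def sf_orientation_filter(img, sf_band, ori_deg, ori_bw_deg=20.0):
    """Keep Fourier energy inside a spatial-frequency band (cycles/pixel)
    and near one orientation (Gaussian-weighted around ori_deg).
    Illustrative sketch only; parameters are assumed, not the article's."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequency axis
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequency axis
    radius = np.sqrt(fx**2 + fy**2)          # SF of each Fourier component
    # Orientation of each component's wave vector, folded into 0-180 deg.
    # Note: horizontal image structure (e.g., brow/mouth lines) carries
    # energy along the vertical frequency axis, i.e., ori_deg = 90 here.
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0

    lo, hi = sf_band
    sf_mask = (radius >= lo) & (radius <= hi)            # hard band-pass
    d = np.minimum(np.abs(angle - ori_deg), 180.0 - np.abs(angle - ori_deg))
    ori_mask = np.exp(-(d**2) / (2 * ori_bw_deg**2))     # orientation window

    spec = np.fft.fft2(img)
    return np.fft.ifft2(spec * sf_mask * ori_mask).real
```

For example, a horizontally striped grating survives the filter at `ori_deg=90` but is almost entirely removed at `ori_deg=0`, mirroring the horizontal versus vertical stimulus conditions.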
Figure 2
 
(a) Average sensitivity for each SF–orientation band tested. Error bars represent mean square of errors (MSE). (b) Size of the orientation-content effect (horizontal versus vertical) and the inversion effect (upright versus inverted), expressed as partial eta squared. (c) Average sensitivity in the control experiment, in which performance was equated across SF-by-orientation conditions at upright orientation.
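The "sensitivity" plotted above is a signal-detection measure; for a same/different matching task it is typically d′ = z(hit rate) − z(false-alarm rate), following the conventions in Stanislaw and Todorov (1999), which the article cites. A minimal sketch, with a log-linear correction (an assumption here, one of several standard corrections) to keep rates away from 0 and 1:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate) from trial counts.
    The log-linear correction (add 0.5 to each count) prevents the
    infinite z-scores that raw rates of 0 or 1 would produce."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance (hit rate equal to false-alarm rate) gives d′ = 0, and higher discrimination accuracy gives larger positive values.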