June 2017
Volume 17, Issue 6
Open Access
Personal familiarity enhances sensitivity to horizontal structure during processing of face identity
Author Affiliations
  • Matthew V. Pachai
    Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
    Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada
    matt.pachai@epfl.ch
  • Allison B. Sekuler
    Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada
  • Patrick J. Bennett
    Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Canada
  • Philippe G. Schyns
    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
  • Meike Ramon
    Department of Psychology, Visual and Social Neuroscience, University of Fribourg, Fribourg, Switzerland
    meike.ramon@gmail.com
Journal of Vision June 2017, Vol.17, 5. doi:https://doi.org/10.1167/17.6.5
Abstract

What makes identification of familiar faces seemingly effortless? Recent studies using unfamiliar face stimuli suggest that selective processing of information conveyed by horizontally oriented spatial frequency components supports accurate performance in a variety of tasks involving matching of facial identity. Here, we studied upright and inverted face discrimination using stimuli with which observers were either unfamiliar or personally familiar (i.e., friends and colleagues). Our results reveal increased sensitivity to horizontal spatial frequency structure in personally familiar faces, further implicating the selective processing of this information in the face processing expertise exhibited by human observers throughout their daily lives.

Introduction
Humans encounter and process the information conveyed by many familiar faces on a daily basis. For typically developed individuals, familiar face detection, discrimination, and identification appear effortless despite the large variations introduced to the retinal image by differences in lighting, viewpoint, and expression (Bruce et al., 1999; Burton, White, & McNeill, 2010; Burton, Wilson, Cowan, & Bruce, 1999; Megreya & Burton, 2006; Ramon, 2015b). Importantly, although such variations leave personally familiar face recognition largely unaffected, they can dramatically impair unfamiliar face processing (Hill & Bruce, 1996; O'Toole, Edelman, & Bulthoff, 1998), and continue to pose a challenge for automatic face recognition systems (White, Dunn, Schmid, & Kemp, 2015). 
Several studies suggest that familiarity enhances the perception of upright faces. For example, personal familiarity with faces expedites the processing of social cues such as gaze direction and head angle (Visconti di Oleggio Castello, Guntupalli, Yang, & Gobbini, 2014). Findings from visual search paradigms indicate that detection of personally familiar versus unfamiliar faces requires reduced attention (Tong & Nakayama, 1999), and familiar face processing can operate even in the absence of awareness, as demonstrated using continuous flash suppression (Gobbini et al., 2013). Moreover, in categorization tasks, familiar face judgments are faster than unfamiliar face judgments. This advantage is reliably observed irrespective of whether observers are asked to indicate their decision manually (Caharel, Ramon, & Rossion, 2014; Ramon, Caharel, & Rossion, 2011), or using a saccadic response (Visconti di Oleggio Castello & Gobbini, 2015). However, the underlying basis of these advantages remains largely unclear. The limited investigations of this issue to date suggest that personal familiarity increases the range of spatial frequencies used for face identity matching (Watier & Collin, 2009), while more recent studies report that, in face discrimination tasks, familiarity can be accompanied by increased sensitivity to the configuration of facial features (Ramon, 2015b) and differential processing of vertically arranged facial structure (Ramon, 2015a). 
Although more work is required to understand the nature of familiar face processing, face expertise in general has been examined in detail over the past several decades. Typically, such studies quantify expertise by measuring the effect of picture-plane inversion on face processing. Inverted faces are interesting because, although the actual power spectrum information available to the observer is identical to that in an upright face, observers' performance is dramatically impaired, suggesting a difference in processing efficiency resulting from expertise with upright exemplars (Gold, Bennett, & Sekuler, 1999; Sekuler, Gaspar, Gold, & Bennett, 2004). Indeed, that face processing is reliably slower and less accurate following inversion is a robustly reported result termed the face inversion effect (FIE; Yin, 1969). The FIE has been documented across several different paradigms, particularly unfamiliar face identity matching (Carey & Diamond, 1977; Rossion, 2009; Rossion & Gauthier, 2002; Valentine, 1988; Van Belle, De Graef, Verfaillie, Rossion, & Lefevre, 2010; Vizioli, Foreman, Rousselet, & Caldara, 2010). Furthermore, evidence from both healthy and brain-damaged individuals suggests a direct relationship between the magnitude of the FIE and face processing ability: The FIE is reduced or absent in individuals suffering from developmental or acquired prosopagnosia (Avidan, Tanzer, & Behrmann, 2011; Busigny & Rossion, 2010; Russell, Chatterjee, & Nakayama, 2012), and in healthy subjects its magnitude is correlated with individuals' face processing abilities (Russell, Duchaine, & Nakayama, 2009; Pachai, Sekuler, & Bennett, 2013). 
Whether the FIE is caused by qualitative or quantitative changes in face processing is still debated (Farah, Tanaka, & Drain, 1995; Gaspar, Sekuler, & Bennett, 2008; Sekuler et al., 2004; Van Belle et al., 2010; Willenbockel et al., 2010). However, recent psychophysical studies quantifying the information utilized for identity processing in upright and inverted unfamiliar faces have demonstrated a marked difference in the processing of horizontal, relative to vertical, spatial frequency components (Goffaux & Dakin, 2010; Goffaux & Greenwood, 2016; Pachai et al., 2013). In fact, this horizontal structure appears to be a particularly diagnostic cue for processing facial identity (Dakin & Watt, 2009), and sensitivity to horizontal structure is correlated with individual differences in both upright identity processing and the magnitude of the FIE (Pachai et al., 2013). 
In the present study we explored, for the first time, the relationship between the processing differences underlying the FIE and those associated with personal familiarity. In addition, we explored whether familiarity-dependent processing differences in horizontal selectivity are specific to the typically viewed face orientation, or if they generalize to inverted exemplars. To this end, we tested observers in a 1-of-10 face identity matching task, using both upright and inverted face stimuli, in which the available orientation information (i.e., horizontal and vertical structure) was parametrically manipulated. Moreover, our face stimuli were derived from individuals who were either personally familiar (i.e., depicted their colleagues) or unfamiliar to the participants. Given that horizontal selectivity, as well as personal familiarity, is associated with accurate upright face identification, we anticipated greater horizontal selectivity in observers personally familiar with the face stimuli. Further, because greater horizontal selectivity is associated with a larger FIE, we anticipated that personal familiarity would increase the magnitude of the FIE. 
Methods
Observers
Two groups of observers completed the experiment. The experimental group (n = 18, mean age = 35 years, range = 25–48) comprised members of the Institute of Neuroscience and School of Psychology at University of Glasgow, UK, who were personally familiar with the face identities used as stimuli in the experiment. For each familiar observer, we recruited an observer of comparable age and sex (n = 18, mean age = 35 years, range = 25–49) from the Department of Psychology, Neuroscience & Behaviour at McMaster University, Canada. These observers were unfamiliar with the face identities in the experiment, and served as the control group. All observers had normal or corrected-to-normal vision, provided written consent prior to beginning the experiment, and were financially compensated for their participation. 
Apparatus
The experimental group was tested at the University of Glasgow and the control group at McMaster University. At the University of Glasgow, the stimuli were presented on a flat screen CRT monitor with a resolution of 1280 × 1024 pixels, screen size of 40 × 30 cm, frame rate of 100 Hz, and average luminance of approximately 17 cd/m². At McMaster University the stimuli were presented on a flat screen CRT monitor with a resolution of 1280 × 1024 pixels, screen size of 32 × 24 cm, frame rate of 100 Hz, and average luminance of approximately 21 cd/m². At both locations, the stimuli were presented using MATLAB and the Psychophysics and Video Toolboxes (Brainard, 1997; Pelli, 1997). To match stimulus size in terms of the most relevant factor, degrees of visual angle, stimuli were viewed at a distance of 85 cm at the University of Glasgow and 68 cm at McMaster University. 
Stimuli
Face stimuli were drawn from a large database of three-dimensional (3-D) face models developed at the Institute of Neuroscience and School of Psychology at University of Glasgow. Twenty 3-D models were selected from this database; three images were rendered per identity (slightly left-facing, front-facing, slightly right-facing) using 3-D Studio Max software, and each image was converted to grayscale. The final stimuli were cropped to exclude the hair and ears, and centered in a 512 × 512 pixel array that subtended 5.4° visual angle. 
Across trials, we manipulated the orientation information available to observers by filtering the stimuli in the Fourier domain. Specifically, we selectively retained frequency components from the target face using sharp-edged orientation filters centered on 0° (horizontal) or 90° (vertical) with one of 12 bandwidths ranging from 15° to 180° in 15° steps. Note that 90° is the largest bandwidth at which the horizontal and vertical filters isolated independent frequency components, and 180° filters passed all frequency components, resulting in unfiltered faces. The stimuli were adjusted to a root-mean-squared contrast of 0.2 after filtering (see Figure 1 for a demonstration of the filtered stimuli). 
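For readers wishing to reproduce this manipulation, the filtering can be sketched as follows. This is a minimal NumPy reconstruction, not the original MATLAB code; the function names and the choice to retain the DC component are our own assumptions. Note that horizontal image structure lies along the vertical meridian of the Fourier plane, hence the 90° offset in the mask.

```python
import numpy as np

def orientation_filter(size, center_deg, bandwidth_deg):
    """Sharp-edged orientation filter in the Fourier domain.

    center_deg gives the image-domain orientation to retain (0 = horizontal
    structure, 90 = vertical). Horizontal image structure lies along the
    vertical meridian of the Fourier plane, hence the 90-deg offset below.
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0   # component orientation, [0, 180)
    target = (center_deg + 90.0) % 180.0
    dist = np.abs(theta - target)
    dist = np.minimum(dist, 180.0 - dist)            # distance on the orientation circle
    mask = dist <= bandwidth_deg / 2.0
    mask[0, 0] = True                                # always keep mean luminance (DC)
    return mask

def filter_image(img, center_deg, bandwidth_deg, rms_target=0.2):
    """Retain one orientation band, then renormalize to a fixed RMS contrast."""
    mean_lum = img.mean()
    mask = orientation_filter(img.shape[0], center_deg, bandwidth_deg)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img - mean_lum) * mask))
    rms = filtered.std()                             # RMS contrast of the zero-mean image
    if rms > 0:
        filtered *= rms_target / rms
    return filtered + mean_lum
```

With a bandwidth of 180° the mask passes every component, so the operation reduces to contrast normalization of the intact face, consistent with the unfiltered condition described above.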
Figure 1
 
Examples of stimulus material. Example stimuli filtered to retain horizontal (top) or vertical (bottom) spatial frequency components with bandwidths ranging from 30° to 180° in 30° steps. Inset with each face is a representation of the sharp-edged filter applied in the Fourier domain, where white indicates retained spatial frequency components and black indicates removed components. Note that, in such Fourier representations, horizontally oriented (i.e., 0°) spatial frequency components are represented along the vertical meridian.
Procedure
As displayed in Figure 2, each trial began with a fixation cross presented at the center of the screen for 1000 ms, followed by a 100-ms blank screen. The target was presented for 250 ms, followed by a 100-ms blank screen and a response screen containing 10 unfiltered alternatives. One of these alternatives was always the target face, and the other alternatives were selected randomly from the remaining 19 identities with the constraint that the response screen always contained five male faces and five female faces. To discourage image matching, target faces were always facing left or right (randomly selected), and response screen faces were always front-facing. Observers selected their response using a mouse click with no time constraint, and feedback was provided using 600- and 200-Hz tones for correct and incorrect responses, respectively. 
Figure 2
 
Experimental procedure. Displayed here is the temporal structure of a trial; target faces were always looking to the left or the right, while faces presented on the response screen were always front-facing.
Design
The experiment comprised two sessions, and the orientation of target and response screen faces (i.e., upright or inverted) varied across sessions. The order of sessions was counterbalanced and matched across groups. Each session included 480 randomly intermixed trials (20 Identities × 2 Filter Orientations × 12 Bandwidths) with a short, self-timed break after 240 trials. Each session took approximately 45 min to complete. Before completing both sessions, observers were asked to quantify their familiarity with the 20 identities; a list of 20 names was presented, and observers rated their familiarity on a scale of 1–5, where a rating of 1 indicated “I don't know this individual” and 5 indicated “I am personally familiar with this individual.” 
Data analysis
For each observer, we fitted four psychometric functions (2 Face Orientations × 2 Filter Orientations) relating proportion correct to filter bandwidth. These functions were computed using generalized linear models with a probit link function, where the lower asymptote was set to 10% and the upper asymptote was a free parameter. These models provided a good fit to the data for all observers, so for subsequent analyses, proportion correct was extracted from these fitted functions. All statistical analyses were conducted using R (R Development Core Team, 2016). 
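The shape of these fits can be illustrated with a fixed 10% lower asymptote (chance in the 1-of-10 task) and a free upper asymptote. The sketch below uses nonlinear least squares in SciPy as a stand-in for the maximum-likelihood GLM fit described above; the data values and starting parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(bw, mu, sigma, upper):
    """Probit-shaped psychometric function: fixed 10% lower asymptote
    (chance in a 1-of-10 task) and a free upper asymptote."""
    return 0.10 + (upper - 0.10) * norm.cdf(bw, loc=mu, scale=sigma)

bandwidths = np.arange(15, 181, 15)  # 15-180 deg in 15-deg steps
# hypothetical proportion correct for one observer in one condition
pc = np.array([0.12, 0.20, 0.35, 0.48, 0.60, 0.68,
               0.72, 0.74, 0.75, 0.76, 0.76, 0.77])

params, _ = curve_fit(psychometric, bandwidths, pc,
                      p0=[60.0, 30.0, 0.75],
                      bounds=([0.0, 1.0, 0.10], [180.0, 180.0, 1.0]))
mu, sigma, upper = params
# fitted values stand in for raw proportion correct in the later analyses
fitted = psychometric(bandwidths, mu, sigma, upper)
```

Reading proportion correct from the fitted curve rather than the raw data smooths over trial-level measurement noise, which is the rationale given above for the composite analyses that follow.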
Results
Familiarity ratings
Of the 18 observers recruited in the UK, one reported a mean familiarity of 2.3 out of a possible 5. Due to this low self-reported familiarity, and before viewing their data, this observer and the corresponding control were excluded from subsequent analyses. For the remaining 17 observers, mean familiarity was rated as 4.3 (range: 3.6–5.0). In the control group, 10 observers rated their mean familiarity as 1, indicating that they did not know any of the face identities. The remaining seven observers rated their mean familiarity as 1.1, indicating familiarity with exactly one of the 20 face identities. The difference in familiarity between the two groups was highly significant, t(17) = 23.07, p < 0.0001. Based on these results, we proceeded to analyze face identification accuracy and horizontal selectivity with confidence in our familiarity manipulation. 
Face identification accuracy
Mean proportion correct for each condition is plotted in Figure 3. We submitted proportion correct for the 180° filter condition to a 2 (Familiarity) × 2 (Face Orientation) analysis of variance (ANOVA) with familiarity as a between-subjects factor and face orientation as a within-subjects factor. Recall that 180° filters produce intact, unfiltered faces, so this comparison quantifies the effect of familiarity on overall identity matching accuracy. This analysis revealed a significant main effect of familiarity, F(1, 32) = 14.87, p = 0.0005, a significant main effect of face orientation, F(1, 32) = 9.32, p = 0.0045, and a nonsignificant Familiarity × Face Orientation interaction, F(1, 32) = 0.02, p = 0.8947. These results reflect lower accuracy for unfamiliar observers than familiar observers, lower accuracy for inverted faces than upright faces, and no significant difference in the face inversion effect between unfamiliar observers and familiar observers. 
Figure 3
 
Mean performance. Proportion correct on the 1-of-10 face identity matching task plotted as a function of filter bandwidth for each face and filter orientation in (A) personally familiar and (B) unfamiliar observers. The largest bandwidth is 90°, indicated by the vertical dotted line, and is the bandwidth at which the horizontal and vertical filters passed independent subsets of the total orientation information. Note also that a filter bandwidth of 180° passes all orientation information and results in unfiltered faces. Solid and dashed lines indicate best-fitting psychometric functions fit to the data using probit generalized linear models with the upper asymptote as a free parameter. Error bars represent ±1 SEM.
Horizontal selectivity
Next we examined the effect of familiarity and face orientation on the selective processing of horizontal structure. To begin, we conducted a 2 (Familiarity) × 2 (Face Orientation) × 2 (Filter Orientation) × 6 (Filter Bandwidth) ANOVA on the raw data from 15°–90°. The bandwidth range of 15°–90° was chosen because 90° is the largest bandwidth at which horizontal and vertical filters isolate independent subsets of the total orientation information. The results of this ANOVA are summarized in Table 1. In brief, all main effects and interactions were significant except the Filter Orientation × Filter Bandwidth and Familiarity × Filter Orientation × Filter Bandwidth interactions. 
Table 1
 
Omnibus analysis of variance (ANOVA) summary. Notes: Degrees of freedom, F values, and p values resulting from a 2 (Familiarity Group) × 2 (Face Orientation) × 2 (Filter Orientation) × 6 (Filter Bandwidth) ANOVA conducted on raw proportion correct from 15°–90° bandwidth. ori = orientation; filt = filter; bw = bandwidth; df = degrees of freedom.
Given the abundance of significant effects in the omnibus ANOVA, we simplified our subsequent analyses by computing a composite measure of orientation sensitivity across bandwidth. Specifically, we defined orientation sensitivity as the mean fitted proportion correct from 15°–90°, computed separately for horizontal and vertical filters. For this analysis we chose to utilize the proportion correct predicted from psychometric curves fit to the entire bandwidth range. This procedure reduced the effect of measurement noise, although analyses of the raw proportion correct values yielded qualitatively similar results. For ease of communication, we term the resulting measure orientation sensitivity, as it quantifies overall sensitivity to information conveyed by horizontally and vertically oriented filters irrespective of bandwidth. Orientation sensitivity, plotted in Figure 4, was submitted to a 2 (Familiarity) × 2 (Face Orientation) × 2 (Filter Orientation) ANOVA with face and filter orientation as within-subjects factors and familiarity as a between-subjects factor. This analysis revealed a significant main effect of familiarity, F(1, 32) = 30.82, p < 0.0001; a significant main effect of face orientation, F(1, 32) = 214.67, p < 0.0001; and a significant main effect of filter orientation, F(1, 32) = 132.08, p < 0.0001. These main effects were qualified by significant interactions of Familiarity × Face Orientation, F(1, 32) = 40.57, p < 0.0001; Familiarity × Filter Orientation, F(1, 32) = 13.13, p < 0.001; Face Orientation × Filter Orientation, F(1, 32) = 205.4, p < 0.0001; and Familiarity × Face Orientation × Filter Orientation, F(1, 32) = 38.22, p < 0.0001. 
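Concretely, the composite measure amounts to averaging each observer's fitted values over the 15°–90° range, separately per filter orientation. A minimal sketch, using hypothetical fitted values for one observer and face orientation:

```python
import numpy as np

bandwidths = np.arange(15, 181, 15)
# hypothetical fitted proportion correct for one observer and face orientation,
# as produced by the psychometric fits described under Data analysis
fitted_horizontal = np.array([0.30, 0.45, 0.55, 0.62, 0.66, 0.69,
                              0.71, 0.72, 0.73, 0.73, 0.74, 0.74])
fitted_vertical = np.array([0.12, 0.15, 0.20, 0.26, 0.33, 0.40,
                            0.47, 0.54, 0.60, 0.65, 0.69, 0.72])

independent = bandwidths <= 90   # range where the two filters pass non-overlapping components
sensitivity_h = fitted_horizontal[independent].mean()
sensitivity_v = fitted_vertical[independent].mean()
```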
Figure 4
 
Orientation sensitivity. Boxplots of mean proportion correct from 15° to 90° extracted from the best-fitting psychometric functions fit to the individual data for (A) personally familiar and (B) unfamiliar observers. The central line in each box represents the median, whereas the upper and lower bounds represent the 75th and 25th percentiles and the open circles represent outliers falling beyond this range. The dotted horizontal line represents chance performance.
Having observed several significant interactions in the previous ANOVA, we next analyzed separately the orientation sensitivity for upright and inverted faces. For upright faces, a 2 (Familiarity) × 2 (Filter Orientation) ANOVA revealed significant main effects of familiarity, F(1, 32) = 42.7, p < 0.0001, and filter orientation, F(1, 32) = 220.13, p < 0.0001, as well as a significant familiarity × filter orientation interaction, F(1, 32) = 29.14, p < 0.0001. This result reflects higher overall accuracy for familiar observers, higher accuracy for horizontal filters, and a larger effect of filter orientation in familiar observers. For inverted faces, the same ANOVA revealed a significant main effect of familiarity, F(1, 32) = 6.88, p = 0.0132, a significant main effect of filter orientation, F(1, 32) = 11.63, p = 0.0018, and a nonsignificant Familiarity × Filter Orientation interaction, F(1, 32) = 0.003, p = 0.95. This result reflects higher accuracy for horizontal filters and familiar observers, with no effect of group on horizontal selectivity. 
Finally, because the orientation sensitivity plotted in Figure 4 appears to approach chance performance (10%) in some conditions, potentially obscuring our ability to detect certain effects, we computed a series of one-tailed t tests evaluating whether orientation sensitivity significantly exceeds chance levels in each face and filter orientation condition. These tests were highly significant in each case, all ts(16) ≥ 5.0, all ps < 0.0001, eliminating this potential concern. 
The results from the ANOVAs suggest that personal familiarity results in greater upright face identification performance by selectively improving the use of information conveyed by horizontal structure in the Fourier domain. To visualize this result more directly, we computed a measure of horizontal selectivity, subtracting proportion correct with vertical filters from that with horizontal filters at each bandwidth. This measure is plotted in Figure 5. We submitted these data to a 2 (Familiarity) × 2 (Face Orientation) × 6 (Filter Bandwidth) ANOVA using the data in the 15° to 90° range. This analysis revealed significant main effects of familiarity, F(1, 32) = 10.39, p = 0.0029, and face orientation, F(1, 32) = 99.42, p < 0.0001, with no main effect of filter bandwidth, F(5, 160) = 1.45, p = 0.2078. These main effects were qualified by significant interactions of Familiarity × Face Orientation, F(1, 32) = 34.27, p < 0.0001; Face Orientation × Filter Bandwidth, F(5, 160) = 4.08, p = 0.0017; and Familiarity × Face Orientation × Filter Bandwidth, F(5, 160) = 3.95, p = 0.0021. The Familiarity × Filter Bandwidth interaction was not significant, F(5, 160) = 1.66, p = 0.1465. These results suggest that horizontal selectivity was greater for upright, personally familiar faces relative to all other conditions tested. 
Further inspection of Figure 5 suggests that horizontal selectivity reaches its peak earlier for upright faces than inverted faces, particularly in the familiar group. To quantify these patterns, we evaluated the linear and quadratic trends of horizontal selectivity as a function of filter bandwidth from 15°–90°. In the familiar group, the linear trend was not significant for either face orientation (upright: t(16) = 1.30, p = 0.2126; inverted: t(16) = 1.89, p = 0.0772), while the quadratic trend was significant for upright, t(16) = −6.57, p < 0.0001, but not inverted, t(16) = 0.99, p = 0.3364, faces. In the unfamiliar group, the linear trend was significant for upright, t(16) = 6.69, p < 0.0001, but not inverted, t(16) = 0.38, p = 0.7082, faces, which was also true of the quadratic trend (upright: t(16) = −2.52, p = 0.0229; inverted: t(16) = −0.70, p = 0.4948). Taken together, these results reveal that horizontal selectivity remains unchanged from 15°–90° for inverted faces in both groups. Conversely, horizontal selectivity for upright faces reaches a plateau almost immediately in the familiar group, with no linear increase across the 15°–90° bandwidth range, while the unfamiliar group demonstrates a later peak combined with a linear increase across this range. 
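These trend tests amount to orthogonal polynomial contrasts over the six bandwidth levels, scored per observer and then tested against zero. A sketch under simulated selectivity data; the effect sizes and noise level are illustrative only.

```python
import numpy as np
from scipy.stats import ttest_1samp

bandwidths = np.arange(15, 91, 15)          # six levels, 15-90 deg
centered = bandwidths - bandwidths.mean()

# orthogonal linear and quadratic contrast weights; because the centered
# levels are symmetric, the quadratic term is orthogonal to the linear one
linear = centered / np.sqrt((centered ** 2).sum())
quad = centered ** 2 - (centered ** 2).mean()
quad = quad / np.sqrt((quad ** 2).sum())

# hypothetical horizontal-selectivity matrix: 17 observers x 6 bandwidths,
# with a built-in linear increase across bandwidth
rng = np.random.default_rng(0)
sel = 0.10 + 0.002 * centered + rng.normal(0.0, 0.05, size=(17, 6))

# one contrast score per observer, then one-sample t tests against zero
linear_test = ttest_1samp(sel @ linear, popmean=0.0)
quad_test = ttest_1samp(sel @ quad, popmean=0.0)
```

Because the simulated data contain a linear but no quadratic component, the linear contrast is expected to be significant here while the quadratic is not, mirroring the logic of the upright/unfamiliar pattern described above.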
Figure 5
 
Horizontal selectivity. Horizontal selectivity is defined as the difference in proportion correct between horizontally and vertically filtered faces at each bandwidth, and is plotted separately for each face orientation in (A) personally familiar and (B) unfamiliar observers. Error bars represent ± 1 SEM.
Discussion
In recent years, several studies have revealed the importance of structure conveyed by horizontally oriented spatial frequency components for face processing (Dakin & Watt, 2009; Goffaux & Dakin, 2010; Goffaux & Greenwood, 2016; Pachai et al., 2013). This information plays a crucial role for face-related tasks throughout the lifespan, with preferential processing of horizontal structure demonstrated as early as 3 months of age (de Heering et al., 2016), and through older age (Goffaux, Poncin, & Schiltz, 2015; Pachai, Corrow, Bennett, Barton, & Sekuler, 2015). 
The present study is the first to investigate whether personal familiarity is associated with a selective increase in processing the diagnostic information conveyed by the horizontal band of familiar faces. In line with our expectations, we found increased horizontal selectivity for upright, familiar faces. This finding adds to an increasing body of evidence demonstrating the importance of horizontally oriented spatial frequency components for processing of facial identity. 
Given previous results demonstrating that horizontal structure is a highly diagnostic source of information in the face (Dakin & Watt, 2009; Goffaux & Greenwood, 2016), the selective processing of which is correlated with face discrimination performance (Pachai et al., 2013), the present findings may shed light on the nature of the advantages associated with personal familiarity. Specifically, phenomena such as more robust processing of facial identity (Bruce et al., 1999; Burton et al., 2010; Burton et al., 1999; Megreya & Burton, 2006), as well as facilitated detection (Gobbini et al., 2013; Tong & Nakayama, 1999; Visconti di Oleggio Castello & Gobbini, 2015) and categorization (Ramon et al., 2011) of personally familiar faces may result from more efficient processing of the horizontal spatial frequency band in these stimuli. Our results are also in line with increased sensitivity to vertically arranged components for familiar faces (Ramon, 2015b), as this information is conveyed by the horizontal band of spatial frequency components. Readers can verify this by inspection of Figure 1, which reveals how the configuration of features such as the eyes, eyebrows, and mouth is captured more clearly by the horizontal band than the vertical. 
Moreover, it recently has been shown that the availability of horizontal structure underlies the face-specific N170 response (Hashemi, Pachai, Bennett, & Sekuler, 2014; Jacques, Schiltz, & Goffaux, 2014), and may drive the responses of face-preferential brain regions (Goffaux, Duecker, Hausfeld, Schiltz, & Goebel, 2016; Taubert, Goffaux, Van Belle, Vanduffel, & Vogels, 2016). This suggests that this framework for understanding face processing may to some extent unify both behavioral and neurophysiological results in the field. Indeed, previous studies have demonstrated familiarity-dependent modulation of the N170 (Caharel, Courtay, Bernard, Lalonde, & Rebai, 2005; Caharel, Fiori, Bernard, Lalonde, & Rebai, 2006; Kloth et al., 2006; Wild-Wall, Dimigen, & Sommer, 2008), as well as differential neural processing within regions of the face processing network (Gobbini, Leibenluft, Santiago, & Haxby, 2004; Leibenluft, Gobbini, Harrison, & Haxby, 2004; Ramon, Vizioli, Liu-Shuang, & Rossion, 2015). 
It is important to note that our findings do not offer insights into how specifically this sensitivity to horizontal structure develops. However, ideal observer analyses demonstrate that this orientation band carries relatively more diagnostic information for identity discrimination (e.g. Pachai et al., 2013). It is therefore optimal for the human visual system to make preferential use of this information when performing identity-related tasks, and unsurprising that efficient use of this information predicts face identification performance, while less efficient use of this information predicts the magnitude of the deficit resulting from picture-plane inversion (Pachai et al., 2013). We show that the increased expertise resulting from personal familiarity improves this horizontal selectivity, which has been observed in infants as young as 3 months of age (de Heering et al., 2016). 
The present results represent a step toward characterizing the importance of horizontal structure for everyday face processing tasks, but several questions in this domain remain unanswered. For example, it is largely unclear how sensitivity to horizontal and vertical structure relates to processing of identity in neuropsychological populations (Ramon, Busigny, Gosselin, & Rossion, 2015; Richoz, Jack, Garrod, Schyns, & Caldara, 2015; see Pachai et al., 2015 for recent evidence relating to developmental prosopagnosia), or during the processing of faces with little to no exposure (i.e., other-race or other-age faces; de Heering, de Liedekerke, Deboni, & Rossion, 2010; de Heering & Rossion, 2008; Jack & Schyns, 2015; Kuefner, Macchi Cassia, Picozzi, & Bricolo, 2008). Likewise, structure-dependent sensitivity has never been investigated with experimentally learned identities, be it based on individual images (e.g., Schyns, Bonnar, & Gosselin, 2002), or a number of different viewpoints (i.e., rotation in depth; e.g., Hill, Schyns, & Akamatsu, 1997). We believe this approach holds promise to understand these and other face perception phenomena. 
Acknowledgments
This work was supported by the Natural Sciences and Engineering Research Council of Canada and the Belgian National Foundation for Scientific Research. P. G. S. received support from the Wellcome Trust (UK; 107802) and MURI/EPSRC (USA, UK; 172046-01). This publication was supported by the Council of the University of Fribourg (Hochschulrat der Universität Freiburg). The authors express their gratitude to Donna Waxman for her help testing participants at McMaster, and to all the participants for their cooperation. 
Commercial relationships: none. 
Corresponding authors: Matthew V. Pachai; Meike Ramon. 
Address: Ecole Polytechnique Fédérale de Lausanne, Laboratory of Psychophysics, Brain Mind Institute, Lausanne, Switzerland; University of Fribourg, Department of Psychology, Fribourg, Switzerland. 
References
Avidan, G., Tanzer, M.,& Behrmann, M. (2011). Impaired holistic processing in congenital prosopagnosia. Neuropsychologia, 49 (9), 2541–2552, doi:10.1016/j.neuropsychologia.2011.05.002.
Balas, B. J., Schmidt, J.,& Saville, A. (2015). A face detection bias for horizontal orientations develops in middle childhood. Frontiers in Psychology, 6, 772, doi:10.3389/fpsyg.2015.00772.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Bruce, V., Henderson, Z., Greenwood, K., Hancock, P. J. B., Burton, A. M.,& Miller, P. (1999). Verification of face identities from images captured on video. Journal of Experimental Psychology: Applied, 5 (4), 339–360.
Burton, A. M., White, D.,& McNeill, A. (2010). The Glasgow Face Matching Test. Behavior Research Methods, 42 (1), 286–291, doi:10.3758/BRM.42.1.286.
Burton, A. M., Wilson, S., Cowan, M.,& Bruce, V. (1999). Face recognition in poor-quality video: Evidence from security surveillance. Psychological Science, 10, 243–248.
Busigny, T.,& Rossion, B. (2010). Acquired prosopagnosia abolishes the face inversion effect. Cortex, 46 (8), 965–981, doi:10.1016/j.cortex.2009.07.004.
Caharel, S., Courtay, N., Bernard, C., Lalonde, R.,& Rebai, M. (2005). Familiarity and emotional expression influence an early stage of face processing: An electrophysiological study. Brain and Cognition, 59, 96–100, doi:10.1016/j.bandc.2005.05.005.
Caharel, S., Fiori, N., Bernard, C., Lalonde, R.,& Rebai, M. (2006). The effects of inversion and eye displacements of familiar and unknown faces on early and late-stage ERPs. International Journal of Psychophysiology, 62, 141–151.
Caharel, S., Ramon, M.,& Rossion, B. (2014). Face familiarity decisions take 200 msec in the human brain: Electrophysiological evidence from a go/no-go speeded task. Journal of Cognitive Neuroscience, 26 (1), 81–95, doi:10.1162/jocn_a_00451.
Carey, S.,& Diamond, R. (1977). From piecemeal to configurational representation of faces. Science, 195 (4275), 312–314.
Dakin, S. C.,& Watt, R. J. (2009). Biological “bar codes” in human faces. Journal of Vision, 9 (4): 2, 1–10, doi:10.1167/9.4.2. [PubMed] [Article]
de Heering, A., de Liedekerke, C., Deboni, M.,& Rossion, B. (2010). The role of experience during childhood in shaping the other-race effect. Developmental Science, 13 (1), 181–187, doi:10.1111/j.1467-7687.2009.00876.x.
de Heering, A., Goffaux, V., Dollion, N., Godard, O., Durand, K.,& Baudouin, J.-Y. (2016). Three-month-old infants' sensitivity to horizontal information within faces. Developmental Psychobiology, 58, 536–542.
de Heering, A.,& Rossion, B. (2008). Prolonged visual experience in adulthood modulates holistic face perception. PLoS One, 3 (5), e2317, doi:10.1371/journal.pone.0002317.
Farah, M. J., Tanaka, J. W.,& Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception & Performance, 21 (3), 628–634.
Gaspar, C., Sekuler, A. B.,& Bennett, P. J. (2008). Spatial frequency tuning of upright and inverted face identification. Vision Research, 48 (28), 2817–2826, doi:10.1016/j.visres.2008.09.015.
Gobbini, M. I., Gors, J. D., Halchenko, Y. O., Rogers, C., Guntupalli, J. S., Hughes, H.,& Cipolli, C. (2013). Prioritized detection of personally familiar faces. PLoS One, 8 (6), e66620, doi:10.1371/journal.pone.0066620.
Gobbini, M. I., Leibenluft, E., Santiago, N.,& Haxby, J. V. (2004). Social and emotional attachment in the neural representation of faces. NeuroImage, 22, 1628–1635, doi:10.1016/j.neuroimage.2004.03.049.
Goffaux, V.,& Dakin, S. C. (2010). Horizontal information drives the behavioral signatures of face processing. Frontiers in Psychology, 1, 143, doi:10.3389/fpsyg.2010.00143.
Goffaux, V., Duecker, F., Hausfeld, L., Schiltz, C.,& Goebel, R. (2016). Horizontal tuning for faces originates in high-level Fusiform Face Area. Neuropsychologia, 81, 1–11, doi:10.1016/j.neuropsychologia.2015.12.004.
Goffaux, V.,& Greenwood, J. A. (2016). The orientation selectivity of face identification. Scientific Reports, 6, 34204, doi: 10.1038/srep34204.
Goffaux, V., Poncin, A.,& Schiltz, C. (2015). Selectivity of face perception to horizontal information over lifespan (from 6 to 74 years old). PLoS One, 10 (9), e0138812, doi:10.1371/journal.pone.0138812.
Gold, J., Bennett, P. J.,& Sekuler, A. B. (1999). Signal but not noise changes with perceptual learning. Nature, 402 (6758), 176–178, doi:10.1038/46027.
Hashemi, A., Pachai, M. V., Bennett, P. J.,& Sekuler, A. B. (2014). The N170 is driven by the presence of horizontal facial structure. Journal of Vision, 14 (10): 130, doi:10.1167/14.10.130. [Abstract]
Hill, H.,& Bruce, V. (1996). Effects of lighting on the perception of facial surfaces. Journal of Experimental Psychology: Human Perception & Performance, 22 (4), 986–1004.
Hill, H., Schyns, P. G.,& Akamatsu, S. U. (1997). Information and viewpoint dependence in face recognition. Cognition, 62 (2), 201–222.
Jack, R. E.,& Schyns, P. G. (2015). The human face as a dynamic tool for social communication. Current Biology, 25 (14), R621–R634, doi:10.1016/j.cub.2015.05.052.
Jacques, C., Schiltz, C.,& Goffaux, V. (2014). Face perception is tuned to horizontal orientation in the N170 time window. Journal of Vision, 14 (2): 5, 1–18, doi:10.1167/14.2.5. [PubMed] [Article]
Kloth, N., Dobel, C., Schweinberger, S. R., Zwitserlood, P., Bolte, J.,& Junghofer, M. (2006). Effects of personal familiarity on early neuromagnetic correlates of face perception. European Journal of Neuroscience, 24, 3317–3321, doi:10.1111/j.1460-9568.2006.05211.x.
Kuefner, D., Macchi Cassia, V., Picozzi, M.,& Bricolo, E. (2008). Do all kids look alike? Evidence for an other-age effect in adults. Journal of Experimental Psychology: Human Perception & Performance, 34 (4), 811–817, doi:10.1037/0096-1523.34.4.811.
Leibenluft, E., Gobbini, M. I., Harrison, T.,& Haxby, J. V. (2004). Mothers' neural activation in response to pictures of their children and other children. Biological Psychiatry, 56, 225–232, doi:10.1016/j.biopsych.2004.05.017.
Megreya, A. M.,& Burton, A. M. (2006). Unfamiliar faces are not faces: Evidence from a matching task. Memory & Cognition, 34 (4), 865–876.
O'Toole, A. J., Edelman, S.,& Bulthoff, H. H. (1998). Stimulus-specific effects in face recognition over changes in viewpoint. Vision Research, 38 (15–16), 2351–2363.
Pachai, M. V., Corrow, S., Bennett, P. J., Barton, J. J.,& Sekuler, A. B. (2015). Sensitivity to horizontal structure and face identification in developmental prosopagnosia and healthy aging. Perception ECVP Abstract Supplement, 44, 97–98.
Pachai, M. V., Sekuler, A. B.,& Bennett, P. J. (2013). Sensitivity to information conveyed by horizontal contours is correlated with face identification accuracy. Frontiers in Psychology, 4, 74, doi:10.3389/fpsyg.2013.00074.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
R Development Core Team. (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing. Available from: https://www.r-project.org/
Ramon, M. (2015a). Differential processing of vertical interfeature relations due to real-life experience with personally familiar faces. Perception, 44 (4), 368–382.
Ramon, M. (2015b). Perception of global facial geometry is modulated through experience. PeerJ, 3, e850, doi:10.7717/peerj.850.
Ramon, M., Busigny, T., Gosselin, F.,& Rossion, B. (2015). All new kids on the block? Personally familiar face processing in a case of pure prosopagnosia following brain damage. Journal of Vision, 15 (12): 1205, doi:10.1167/15.12.1205. [Abstract]
Ramon, M., Caharel, S.,& Rossion, B. (2011). The speed of recognition of personally familiar faces. Perception, 40 (4), 437–449.
Ramon, M., Vizioli, L., Liu-Shuang, J.,& Rossion, B. (2015). Neural microgenesis of personally familiar face recognition. Proceedings of the National Academy of Sciences, USA, 112 (35), E4835–E4844, doi:10.1073/pnas.1414929112.
Richoz, A. R., Jack, R. E., Garrod, O. G., Schyns, P. G.,& Caldara, R. (2015). Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression. Cortex, 65, 50–64, doi:10.1016/j.cortex.2014.11.015.
Rossion, B. (2009). Distinguishing the cause and consequence of face inversion: The perceptual field hypothesis. Acta Psychologica (Amsterdam), 132 (3), 300–312, doi:10.1016/j.actpsy.2009.08.002.
Rossion, B.,& Gauthier, I. (2002). How does the brain process upright and inverted faces? Behavioral and Cognitive Neuroscience Reviews, 1 (1), 63–75.
Russell, R., Chatterjee, G.,& Nakayama, K. (2012). Developmental prosopagnosia and super-recognition: No special role for surface reflectance processing. Neuropsychologia, 50 (2), 334–340, doi:10.1016/j.neuropsychologia.2011.12.004.
Russell, R., Duchaine, B.,& Nakayama, K. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16 (2), 252–257, doi:10.3758/PBR.16.2.252.
Schyns, P. G., Bonnar, L.,& Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13 (5), 402–409.
Sekuler, A. B., Gaspar, C. M., Gold, J. M.,& Bennett, P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14 (5), 391–396, doi:10.1016/j.cub.2004.02.028.
Taubert, J., Goffaux, V., Van Belle, G., Vanduffel, W.,& Vogels, R. (2016). The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex. Scientific Reports, 6, 21189, doi:10.1038/srep21189.
Tong, F.,& Nakayama, K. (1999). Robust representations for faces: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 25 (4), 1016–1035.
Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79 (Pt 4), 471–491.
Van Belle, G., De Graef, P., Verfaillie, K., Rossion, B.,& Lefevre, P. (2010). Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation. Journal of Vision, 10 (5): 10, 1–13, doi:10.1167/10.5.10. [PubMed] [Article]
Visconti di Oleggio Castello, M.,& Gobbini, M. I. (2015). Familiar face detection in 180 ms. PLoS One, 10 (8), e0136548, doi:10.1371/journal.pone.0136548.
Visconti di Oleggio Castello, M., Guntupalli, J. S., Yang, H.,& Gobbini, M. I. (2014). Facilitated detection of social cues conveyed by familiar faces. Frontiers in Human Neuroscience, 8, 678, doi:10.3389/fnhum.2014.00678.
Vizioli, L., Foreman, K., Rousselet, G. A.,& Caldara, R. (2010). Inverting faces elicits sensitivity to race on the N170 component: A cross-cultural study. Journal of Vision, 10 (1): 15, 1–23, doi:10.1167/10.1.15. [PubMed] [Article]
Watier, N. N.,& Collin, C. A. (2009). Effects of familiarity on spatial frequency thresholds for face matching. Perception, 38 (10), 1497–1507.
White, D., Dunn, J. D., Schmid, A. C.,& Kemp, R. I. (2015). Error rates in users of automatic face recognition software. PLoS One, 10 (10), e0139827, doi:10.1371/journal.pone.0139827.
Wild-Wall, N., Dimigen, O.,& Sommer, W. (2008). Interaction of facial expressions and familiarity: ERP evidence. Biological Psychology, 77 (2), 138–149, doi:10.1016/j.biopsycho.2007.10.001.
Willenbockel, V., Fiset, D., Chauvin, A., Blais, C., Arguin, M., Tanaka, J. W.,… Gosselin, F. (2010). Does face inversion change spatial frequency tuning? Journal of Experimental Psychology: Human Perception & Performance, 36 (1), 122–135, doi:10.1037/a0016465.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Figure 1
 
Examples of stimulus material. Example stimuli filtered to retain horizontal (top) or vertical (bottom) spatial frequency components with bandwidths ranging from 30° to 180° in 30° steps. Inset with each face is a representation of the sharp-edged filter applied in the Fourier domain, where white indicates retained spatial frequency components and black indicates removed components. Note that, in such Fourier representations, horizontally oriented (i.e., 0°) spatial frequency components are represented along the vertical meridian.
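The sharp-edged orientation filtering described in the Figure 1 caption can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, image size, and exact angle conventions are assumptions, but the logic follows the caption (horizontal image structure lies along the vertical meridian of the Fourier plane, and a filter of a given bandwidth retains all components within that angular range).

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg):
    """Retain spatial-frequency components within bandwidth_deg of
    center_deg (0 = horizontal image structure) using a sharp-edged
    orientation filter applied in the Fourier domain."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequency axis
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequency axis
    # Horizontal image structure lies along the vertical meridian of
    # the Fourier plane, so measure each component's angle from fy.
    angle = np.degrees(np.arctan2(fx, fy))
    # Fold angles into [0, 90]; the filter is symmetric about the origin.
    dist = np.abs((angle - center_deg + 90) % 180 - 90)
    mask = (dist <= bandwidth_deg / 2).astype(float)
    mask[0, 0] = 1.0  # always keep the DC component (mean luminance)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

With this convention, a 180° bandwidth passes every component and reproduces the unfiltered image, matching the note in the Figure 3 caption.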
Figure 2
 
Experimental procedure. Displayed here is the temporal structure of a trial; target faces were always looking to the left or the right, while faces presented on the response screen were always front-facing.
Figure 3
 
Mean performance. Proportion correct on the 1-of-10 face identity matching task plotted as a function of filter bandwidth for each face and filter orientation in (A) personally familiar and (B) unfamiliar observers. The largest bandwidth is 90°, indicated by the vertical dotted line, and is the bandwidth at which the horizontal and vertical filters passed independent subsets of the total orientation information. Note also that a filter bandwidth of 180° passes all orientation information and results in unfiltered faces. Solid and dashed lines indicate best-fitting psychometric functions fit to the data using probit generalized linear models with the upper asymptote as a free parameter. Error bars represent ±1 SEM.
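A psychometric function of the kind described in the Figure 3 caption, with chance at 0.1 (1-of-10 task) and the upper asymptote free, can be fit as in the sketch below. The paper used probit generalized linear models; this illustration instead uses least-squares fitting of a probit-shaped curve, and the data values are hypothetical, so treat it as a sketch of the model form rather than the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(bw, mu, sigma, upper, chance=0.1):
    """Probit-shaped psychometric function with a free upper
    asymptote; chance is 0.1 for a 1-of-10 matching task."""
    return chance + (upper - chance) * norm.cdf((bw - mu) / sigma)

# Hypothetical proportion correct at the six filter bandwidths (deg).
bandwidths = np.array([15, 30, 45, 60, 75, 90], dtype=float)
pc = np.array([0.15, 0.25, 0.45, 0.62, 0.71, 0.75])

params, _ = curve_fit(psychometric, bandwidths, pc,
                      p0=[45.0, 15.0, 0.8],
                      bounds=([0, 1, 0.1], [180, 100, 1.0]))
mu, sigma, upper = params
```

The free upper asymptote lets the fit accommodate observers whose performance plateaus below ceiling even with all orientation information present.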
Figure 4
 
Orientation sensitivity. Boxplots of mean proportion correct from 15° to 90° extracted from the best-fitting psychometric functions fit to the individual data for (A) personally familiar and (B) unfamiliar observers. The central line in each box represents the median, whereas the upper and lower bounds represent the 75th and 25th percentiles and the open circles represent outliers falling beyond this range. The dotted horizontal line represents chance performance.
Figure 5
 
Horizontal selectivity. Horizontal selectivity is defined as the difference in proportion correct between horizontally and vertically filtered faces at each bandwidth, and is plotted separately for each face orientation in (A) personally familiar and (B) unfamiliar observers. Error bars represent ± 1 SEM.
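The horizontal-selectivity measure defined in the Figure 5 caption (horizontal minus vertical proportion correct at each bandwidth, averaged over observers, with ±1 SEM error bars) amounts to the following computation. The observer count and proportion-correct values here are hypothetical placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_observers, n_bandwidths = 12, 6

# Hypothetical per-observer proportion correct (observers x bandwidths)
# for horizontally and vertically filtered faces.
pc_h = np.clip(0.50 + 0.05 * rng.standard_normal((n_observers, n_bandwidths)), 0, 1)
pc_v = np.clip(0.30 + 0.05 * rng.standard_normal((n_observers, n_bandwidths)), 0, 1)

diff = pc_h - pc_v                       # per-observer horizontal selectivity
selectivity = diff.mean(axis=0)          # group mean at each bandwidth
sem = diff.std(axis=0, ddof=1) / np.sqrt(n_observers)  # +/- 1 SEM
```

Computing the difference within each observer before averaging keeps the SEM appropriate for a within-subject comparison of the two filter orientations.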
Table 1
 
Omnibus analysis of variance (ANOVA) summary. Notes: Degrees of freedom, F values, and p values resulting from a 2 (Familiarity Group) × 2 (Face Orientation) × 2 (Filter Orientation) × 6 (Filter Bandwidth) ANOVA conducted on raw proportion correct from 15°–90° bandwidth. ori = orientation; filt = filter; bw = bandwidth; df = degrees of freedom.