Open Access
Article | December 2017
Orientations for the successful categorization of facial expressions and their link with facial features
Justin Duncan, Frédéric Gosselin, Charlène Cobarro, Gabrielle Dugas, Caroline Blais, Daniel Fiset
Journal of Vision, December 2017, Vol. 17(14):7. https://doi.org/10.1167/17.14.7
Abstract

Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic—i.e., task relevant—orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions—surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

Introduction
Complex states of mind such as emotions can be inferred simply by looking at other people's faces. Thus, the human face can be seen as a tool for nonverbal communication (e.g., Haxby, Hoffman, & Gobbini, 2000; Jack & Schyns, 2015), and the skill to competently process this visual information is likely an important one for successful social interactions. The mechanisms underlying this task have been studied for almost a century already (e.g., Dunlap, 1927). However, it is only recently that the low-level visual properties underlying this skill have started to garner attention. 
In human early visual cortices, the variations in luminance that make up complex stimuli are processed by discrete channels, sensitive to specific spatial frequency and orientation values (for review, see De Valois & De Valois, 1990). Their various combinations play the crucial role of giving visual signals their form and shape—which can then be interpreted by higher visual cortices as features, faces, and expressions. Specific features, for instance, need to be revealed in distinct spatial frequency bands for optimal face identification (e.g., Butler, Blais, Gosselin, Bub, & Fiset, 2010; Gosselin & Schyns, 2001; Schyns, Bonnar, & Gosselin, 2002) and expression categorization (e.g., Smith, Gosselin, Cottrell, & Schyns, 2005). Face identification has also been shown to rely on a specific range of spatial frequencies (e.g., Gaspar, Sekuler, & Bennett, 2008; Gold, Bennett, & Sekuler, 1999; Näsänen, 1999; Royer et al., 2017; Willenbockel, Fiset, et al., 2010). Relatedly, it has been shown that the spatial frequency spectrum is a good predictor of the distance at which facial expressions are better recognized (Smith & Schyns, 2009). 
More recently, studies in the field of face perception have begun to investigate the orientation spectrum of visual signals and have revealed its importance for the visual system. More specifically, research has demonstrated that horizontal information is especially critical for accurate face detection (Balas, Schmidt, & Saville, 2015) and identification (Dakin & Watt, 2009; Goffaux & Dakin, 2010; Pachai, Sekuler, & Bennett, 2013). These studies have also shown that sensitivity to horizontal information increases with familiarity (Pachai, Sekuler, Bennett, Schyns, & Ramon, 2017) and strongly correlates with face identification ability (Pachai et al., 2013). 
The case for horizontal information was also made stronger by the fact that the face inversion effect—that is, a disproportionate decline in face identification accuracy when faces are presented upside-down (Yin, 1969)—is mainly associated with a reduction in sensitivity to horizontal information (Goffaux & Dakin, 2010; Goffaux & Greenwood, 2016; Pachai et al., 2013). It is also reinforced by findings that have linked this information with face-selective neuroimaging markers. The electrophysiological N170 face-selective component (for review, see Eimer, 2011; Rossion, 2014), for instance, appears to be linked with the processing of horizontal information in faces. Indeed, this component typically shows a distinct increase in amplitude upon perception of inverted faces, and this “N170 face inversion effect” is dampened when the phase of horizontal information is randomized (Jacques, Schiltz, & Goffaux, 2014). In addition, the functionally defined “fusiform face area” (FFA; Kanwisher, McDermott, & Chun, 1997), suggested to be the cortical source of the N170 (Sadeh, Podlipsky, Zhdanov, & Yovel, 2010), was also reported to exhibit horizontal selectivity for faces (Goffaux, Hausfeld, Schiltz, & Goebel, 2016). 
At the moment, very little is known about the visual system's reliance on the orientation spectrum during facial expression recognition. As is the case for face identification, however, the discrimination between happy and sad facial expressions has been shown to rely disproportionately on horizontal information (Balas & Huynh, 2015; Huynh & Balas, 2014). Interestingly, vertical information was also shown to be useful for this task, but only when emotions are expressed with an open mouth. These crucial findings (Huynh & Balas, 2014; Balas & Huynh, 2015) are the first to suggest that horizontal information plays an important role in the categorization of facial expressions, but they are also limited in two ways. First, it has been established that diagnostic—that is, task-relevant—information varies as a function of task demands (e.g., Gosselin & Schyns, 2001; Schyns et al., 2002). It has also been shown that diagnostic information for a given expression changes as a function of the expression against which it is compared (e.g., afraid vs. happy, or afraid vs. angry), and of the number of expressions (e.g., two, three, or seven) against which it is compared (Smith & Merlusca, 2014). It is therefore possible that these results pertaining to horizontal—and vertical—information are specific to the "happy versus sad" discrimination paradigm and not indicative of the processes underlying the categorization of other facial expressions. The second limitation is linked with the fact that performance was only compared for horizontal and vertical information. Unlike facial identity, expressions are more heterogeneous and dynamic by nature, and thus involve considerable feature shape differences at their apex. This could have an effect on the distribution of energy along the orientation spectrum which is not accounted for by the measured subset of orientations. In our opinion, these limitations warrant an unconstrained investigation of the entire orientation spectrum for all the facial expressions. 
It is generally proposed that there are six basic emotion categories: anger, disgust, fear, happiness, sadness, and surprise (Ekman & Friesen, 1975; Izard, 1971; but also see Jack, Sun, Delis, Garrod, & Schyns, 2016). The distribution of diagnostic cues varies considerably across facial expressions (Eisenbarth & Alpers, 2011; Fiset et al., 2017; Jack, Garrod, & Schyns, 2014; Smith et al., 2005; Smith & Merlusca, 2014; Smith & Schyns, 2009; Wang, Friel, Gosselin, & Schyns, 2011). Upper facial features, for instance, are particularly diagnostic of the expressions of fear, sadness, and anger, whereas lower features are more diagnostic of the expressions of surprise, disgust, happiness, and of neutrality. Despite these differences, however, evidence suggests that the mouth is used to a greater extent than the eyes when categorizing facial expressions (Blais, Roy, Fiset, Arguin, & Gosselin, 2012; Calvo, Fernández-Martín, & Nummenmaa, 2014; see also Blais et al., 2017; Peterson & Eckstein, 2012). To the best of our knowledge, the link between facial features and orientations has never been investigated before. Because orientation is a global image property, it is impossible to know with certainty which facial features are used in a given orientation band. There are, however, some informative clues in this regard. Face identification, for instance, is usually explained in terms of utilization of the eye region (Butler et al., 2010; Caldara et al., 2005; Gosselin & Schyns, 2001; Royer, Blais, Déry, & Fiset, 2016; Schyns et al., 2002; Sekuler, Gaspar, Gold, & Bennett, 2004), a region that is particularly rich in horizontal mid-to-high spatial frequency content (Keil, 2009). This is of particular interest, given that the processing of facial horizontal information was shown to be largely supported by this range of spatial frequencies (Goffaux, van Zon, & Schiltz, 2011). Thus, we could reasonably expect that processing of horizontal information and of the eye region are intimately linked—at the very least in a face identification task. The question, however, was never directly investigated for any type of face processing task and thus remains the product of speculation. 
The present research had two main objectives. First, we wanted to thoroughly examine the role of orientations in the processing of facial expressions, accounting for all the basic emotion categories and neutrality. Because we did not want to limit our exploration to a subset of the orientation spectrum, we developed orientation bubbles. Like bubbles (Gosselin & Schyns, 2001; Schyns et al., 2002) and spatial frequency bubbles (Willenbockel et al., 2010), orientation bubbles is a data-driven procedure which randomly samples the dimension of interest—here, the orientation spectrum—over a number of trials to quantify its use by the visual system. 
Second, we wanted to explore the link between utilization of the orientation spectrum and of local facial cues during expression categorization. Because this bridging attempt is completely novel, we opted to investigate the question with two separate tasks that were completed by the same participants in an interleaved fashion: one with orientation bubbles, and the other with location bubbles. On its own, each task allowed us to replicate previous findings, and the orientation bubbles task also allowed us to expand upon the existing literature. We were then able to correlate individual orientation and local profiles to reveal the link between orientation diagnosticity and facial feature diagnosticity. 
Below, we report the results of three experiments. In Experiment 1, we assessed the validity of orientation bubbles using a simple plaid detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic orientations for each of the basic facial expressions and neutrality in a categorization task. Experiment 3 was identical to Experiment 2, except that stimuli were instead randomly sampled with location bubbles (see experiment 1 in Gosselin & Schyns, 2001). Critically, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. To anticipate the results, we found horizontal information to be highly diagnostic for neutrality and all the basic facial expressions except surprise. We also found that individual differences in horizontal tuning strongly correlate with the aptitude with which the categorization of expressions is carried out. Finally, we show that horizontal tuning is best predicted by diagnosticity of the eye region. 
Experiment 1
The main purpose of this experiment was to test whether orientation bubbles can successfully reveal the precise orientation content that is diagnostic of a task. To this end, we employed a detection task similar to the one used by Willenbockel, Fiset, et al. (2010, experiment 1) for an analogous purpose. A plaid—the sum of two sinusoidal gratings with orthogonal orientations—was randomly filtered in the orientation domain with orientation bubbles, and subjects were asked to indicate whether the stimulus was present or absent. We then applied a classification image analysis (Eckstein & Ahumada, 2002; Gosselin & Schyns, 2004), aiming to retrieve the plaid's embedded orientation signals from orientation bubbles data. 
Methods
Participants
Ten subjects were recruited at the University of Québec in Outaouais (UQO) and received a sum equivalent to $12 per hour for their participation. All had normal or corrected-to-normal visual acuity. This experiment was approved by the Research Ethics Committee at the University of Québec in Outaouais and was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). 
Apparatus
The experiment was conducted on Apple Mac Mini computers (Intel i7 2.6 GHz processor) using custom programs written in MATLAB (MathWorks, Natick, MA), with functions from the Image Processing Toolbox and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Stimuli were displayed on a 23-in. Samsung LCD monitor with evenly distributed luminance levels. The average stimulus luminance was equal to that of the uniform gray background (107.2 cd/m2). Screen resolution was set to 1920 × 1080, and the refresh rate was 100 Hz. Participants sat in a dark room, and a chin rest was used to ensure that they maintained a viewing distance of 57 cm. 
Procedure
Participants were instructed to perform a plaid detection task. The original plaid, a 256 × 256 pixel array subtending 6.8° of visual angle, was constructed by summing two sinusoidal gratings with a spatial frequency of 27.2 cycles/image, or four cycles/°: one with an orientation of −90° (horizontal), and the other with an orientation of −180° (vertical; see Figure 1A). The phase of both gratings was randomized on each trial. On target-present trials (probability of 50%), the plaid was randomly filtered in the orientation domain with orientation bubbles (see below) and embedded in Gaussian white noise. On target-absent trials, only Gaussian white noise was presented. 
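For concreteness, the plaid construction can be sketched as follows in Python/NumPy. This is an illustration only: the mapping of the −90° and −180° labels onto horizontal and vertical gratings, and the [−1, 1] intensity range, are assumptions (the original stimuli were generated in MATLAB).

```python
import numpy as np

size = 256                    # stimulus width and height, in pixels
cycles_per_image = 27.2       # 4 cycles/deg over 6.8 deg of visual angle
y, x = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
phase_1, phase_2 = np.random.rand(2) * 2 * np.pi   # phases randomized on each trial

# Sum of two orthogonal sinusoidal gratings, rescaled to [-1, 1]
grating_1 = np.sin(2 * np.pi * cycles_per_image * y / size + phase_1)
grating_2 = np.sin(2 * np.pi * cycles_per_image * x / size + phase_2)
plaid = (grating_1 + grating_2) / 2
```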
Figure 1
Illustration of orientation bubbles filtering. A plaid (A) is converted to its Fourier spectrum with the Fast Fourier Transform (FFT) algorithm, and its quadrants are shifted (B). An orientation sampling vector (C) is created by summing ten pairs of von Mises orientation samples (orientation bubbles). Then, the orientation sampling matrix (D) is created by applying the orientation sampling vector to an orientation matrix. Orientation filtering is carried out by dot multiplying (.*) the orientation sampling matrix and the shifted plaid Fourier spectrum. The experimental stimulus is then reconstructed by inverse FFT (IFFT), and Gaussian white noise is added (E).
A trial began with the presentation of a fixation cross (450 ms), which was followed by the stimulus (850 ms). The screen then went blank and remained as such until the subject responded using the appropriate keys on the keyboard. Subjects first performed 10 practice trials, and then completed three blocks of 100 trials each. 
Orientation bubbles
At the beginning of a trial, the orientation content of a stimulus—here, a plaid—was randomly sampled using custom code (available at http://lpvs-uqo.ca/wp-content/uploads/2017/06/orbs.zip) and functions from the Image Processing Toolbox in MATLAB. The procedure, described below, is illustrated in Figure 1. 
First, a target image (Figure 1A) was run through the Fast Fourier Transform (FFT) algorithm to generate its Fourier spectrum (Figure 1B). Second, an orientation sampling vector was created (Figure 1C). It consisted of ten pairs of von Mises orientation samples, or orientation bubbles. The von Mises is a circular function analogous to the wrapped normal distribution and ranges from −180° to +180°. It has two parameters: μ, which designates the orientation (in degrees) at which the distribution peaks, and κ, which determines the width (in degrees) of the distribution. One bubble comprised two von Mises functions, one with parameters μi and κ, and the second with parameters μi + 180° and κ—the second von Mises ensures identical sampling of the two symmetrical quadrants of the FFT. The μi parameters, with i = 1 to 10, were randomly drawn with replacement from a rectangular distribution of all orientations, whereas the κ parameter was always equal to 45.51 (full width at half maximum [FWHM] = 20°), which corresponds approximately to the estimated width of the orientation channels in face identification (Dakin & Watt, 2009). The sampling proportions of the orientation sampling vector were capped at 1 (they can exceed 1 when two or more bubbles fall close to each other). Third, an orientation sampling matrix of dimension 256 × 256 was created (Figure 1D) by applying the orientation sampling vector to an orientation matrix. This orientation matrix was equal to tan−1[(y − 127)/(x − 127)], with x and y corresponding, respectively, to the column and the row of the orientation matrix. Fourth, and finally, the orientation sampling matrix was dot-multiplied with the image Fourier spectrum, and the resulting Fourier spectrum was inverse Fast Fourier Transformed (Figure 1E). Gaussian white noise was added to the filtered stimulus to maintain performance at 75% correct responses. The appropriate noise level was estimated on a trial-by-trial basis using QUEST (Watson & Pelli, 1983). 
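The authors' MATLAB implementation is available at the URL above; the following Python/NumPy sketch mirrors the described logic for illustration only. The unit-peak von Mises normalization and the exact pixel-center offset are assumptions.

```python
import numpy as np

def orientation_sampling_vector(n_bubbles=10, kappa=45.51, n_orients=360):
    """Sum of von Mises bubbles (each paired with its 180-deg twin), capped at 1."""
    theta = np.deg2rad(np.arange(n_orients) - 180)            # -180 ... +179 deg
    vector = np.zeros(n_orients)
    for mu in np.random.uniform(-180, 180, n_bubbles):         # random bubble centers
        for m in (mu, mu + 180):                                # twin for symmetric quadrants
            vector += np.exp(kappa * (np.cos(theta - np.deg2rad(m)) - 1))  # peak value 1
    return np.minimum(vector, 1.0)

def orientation_filter(image, samp_vec):
    """Apply an orientation sampling vector to an image via its Fourier spectrum."""
    size = image.shape[0]
    fy, fx = np.meshgrid(np.arange(size) - size // 2,
                         np.arange(size) - size // 2, indexing='ij')
    orient = np.rad2deg(np.arctan2(fy, fx))                    # orientation of each frequency
    idx = np.clip(np.round(orient).astype(int) + 180, 0, samp_vec.size - 1)
    samp_matrix = samp_vec[idx]                                 # Figure 1D analogue
    spectrum = np.fft.fftshift(np.fft.fft2(image))              # Figure 1B analogue
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * samp_matrix)))
```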
Results and discussion
To find out which parts of the orientation spectrum were associated with accuracy, we first performed for each subject what amounts to a multiple linear regression analysis of orientation sampling vectors (independent variable) on response accuracy scores (dependent variable). The logic here is that the more the information revealed by orientation bubbles matches observer representations, the greater the probability of a correct response. The analysis was conducted on “plaid present” trials, and carried out by calculating a weighted sum of the orientation sampling vectors, allocating positive weights to filters associated with correct responses and negative weights to filters associated with incorrect responses. To give equal weight to correct and incorrect trials, accuracy scores were transformed into z scores across the appropriate subset of trials. The outcome was a series of 10 vectors of spatially correlated regression coefficients—henceforth called classification vectors, or CVs—quantifying the strength of association between orientations and accurate detection of the plaid. Every CV was then individually z scored with the mean and standard deviation of the null hypothesis; the parameters of which were estimated by simulating 100 CVs. Each simulated CV was generated with a weighted sum of orientation sampling vectors, using instead random permutations of z-scored accuracies from the “plaid present” trials subset. 
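In code, this weighted-sum analysis and its permutation null could look like the sketch below. Whether the null mean and standard deviation are pooled across orientations or computed per orientation is not specified in the text, so pooling is an assumption here.

```python
import numpy as np

def classification_vector(samp_vectors, correct, n_perm=100):
    """
    samp_vectors: (n_trials, n_orients) orientation sampling vectors ("plaid present" trials).
    correct:      (n_trials,) 1 for correct responses, 0 for incorrect.
    Returns the classification vector, z-scored against a permutation null.
    """
    acc = np.asarray(correct, dtype=float)
    z_acc = (acc - acc.mean()) / acc.std()        # equal weight to correct and incorrect trials
    cv = z_acc @ samp_vectors                     # weighted sum over trials

    # Null CVs: same weighted sum with randomly permuted accuracy weights
    null = np.stack([np.random.permutation(z_acc) @ samp_vectors for _ in range(n_perm)])
    return (cv - null.mean()) / null.std()        # pooled null mean/SD (assumption)
```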
To retrieve the plaid's diagnostic information, a group CV was created by first summing individually z-scored CVs across subjects, and then dividing the outcome by √n, where n is the sample size. A pixel test (Chauvin, Worsley, Schyns, Arguin, & Gosselin, 2005) was used to determine the statistical threshold (Zcrit = 2.49, p < 0.05; one-tailed). The pixel test applies a statistical correction for multiple observations, while also taking into account the spatial correlation that results from the 1D orientation bubble size. 
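A group-level sketch under the same assumptions follows, with the pixel-test threshold taken as the value reported above rather than recomputed from Chauvin et al. (2005).

```python
import numpy as np

def group_classification_vector(z_cvs):
    """z_cvs: (n_subjects, n_orients) individually z-scored classification vectors."""
    return z_cvs.sum(axis=0) / np.sqrt(z_cvs.shape[0])

# Placeholder data: 10 subjects, 360 orientation bins
z_cvs = np.random.randn(10, 360)
diagnostic = group_classification_vector(z_cvs) > 2.49   # Zcrit from the pixel test
```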
Results for half of the symmetric orientation spectrum are shown in Figure 2, which plots the z-scored regression coefficients (black line) and the significance threshold (gray dotted line) along the orientation spectrum. These coefficients represent the strength of the correlation between orientation and performance. As expected, two significant peaks emerge near the −180° vertical axis (Zmax = 13.26) and the −90° horizontal axis (Zmax = 11.15). 
Figure 2
Experiment 1 group classification vector. Orientation bubbles accurately revealed the diagnostic information of the plaid, with significant peaks emerging at −0.62° (vertical axis) and −88.14° (horizontal axis), Zcrit = 2.49, p < 0.05.
We used a 50% "area orientation measure" (AOM; analogous to the fractional area technique used to estimate component latencies in electrophysiological studies) to estimate peak positions. This method was chosen because it is less sensitive to the shape of tuning curves (for a similar application of the procedure, see Tadros, Dupuis-Roy, Fiset, Arguin, & Gosselin, 2013). The AOM estimates of the vertical and horizontal peaks were −0.62° (20.4° bandwidth) and −88.64° (18.3° bandwidth), respectively. Neither the vertical peak, t(9) = 0.75, CI 95 = [−3.66°, 1.83°], p > 0.05, nor the horizontal peak, t(9) = 0.04, CI 95 = [−4.04°, 3.9°], p > 0.05, differed significantly from its reference value (−180° and −90°). 
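As an illustration, a fractional-area peak estimate might be computed as follows. The window boundaries around each peak and the exclusion of negative coefficients are assumptions of this sketch.

```python
import numpy as np

def aom_peak(orients, cv, lo, hi, fraction=0.5):
    """Orientation at which the cumulative area under the CV, within [lo, hi],
    reaches the given fraction of the total area in that window."""
    win = (orients >= lo) & (orients <= hi)
    area = np.clip(cv[win], 0, None)              # ignore negative coefficients (assumption)
    cum = np.cumsum(area) / area.sum()
    return orients[win][np.searchsorted(cum, fraction)]

# e.g., aom_peak(orients, group_cv, -110, -70) for the horizontal peak
```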
Critically, no other part of the orientation spectrum correlated with task responses. We are thus confident that orientation bubbles can effectively recover the diagnostic orientations for a task. 
Experiment 2: Orientation bubbles
Experiment 2 was designed to reveal the diagnostic orientation content for the successful categorization of the basic facial expressions (anger, disgust, fear, happiness, sadness, surprise) and of neutral expressions. Importantly, the blocks of Experiments 2 and 3 were interleaved within subjects. Fifty percent of participants began with a block from Experiment 2, and the other fifty percent began with a block from Experiment 3. 
Methods
Participants
Forty subjects participated in Experiments 2 and 3. They received a sum equivalent to $12 per hour for their participation. All had normal or corrected-to-normal visual acuity. This experiment was approved by the Research Ethics Committee at the University of Québec in Outaouais and was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). 
Apparatus
In this experiment, only the monitor and viewing distance differed from Experiment 1. The monitor was a 24-in. BenQ LCD monitor with a refresh rate of 120 Hz and evenly distributed luminance levels. Screen resolution was 1920 × 1080. Participants sat in a dark room, and a chin rest was used to ensure that they maintained a viewing distance of 65 cm. 
Stimuli
Seventy grayscale pictures of faces from the Karolinska Directed Emotional Faces (KDEF) database (Lundqvist, Flykt, & Öhman, 1998) were used: 10 identities (five female, five male), each depicting the six basic facial expressions and neutrality. Faces were spatially aligned on the positions of the main internal facial features—eyes, nose, mouth—using translation, rotation, and scaling. 
Images were downscaled to a resolution of 256 × 256 pixels, and a gray oval that blended with the background (66.33 cd/m2) was applied to each face in order to hide the facial contour and external features. Faces spanned 4.3° of visual angle horizontally (6.1° vertically). The spatial frequency spectra and luminance histograms of the images were equalized with the SHINE toolbox (Willenbockel, Sadr, et al., 2010) to minimize the influence of low-level variance across stimuli on observer responses, and thus better capture the contribution of internal representations. 
Procedure
Before the experimental tasks started, participants were given a maximum of 20 min to familiarize themselves with the stimuli. Then, they completed several training blocks (140 trials each) in which they were required to reach a performance criterion of 95% correct categorization for each facial expression individually. These training blocks were meant to prepare the participants for both the orientation bubbles (Experiment 2) and location bubbles (Experiment 3) tasks. 
A training trial began with a fixation cross (500 ms) located in the middle of the screen, followed by a face stimulus that remained on the screen until the correct response was given. Participants responded by pressing one of the seven assigned keys on the computer's keyboard—that is, one key per expression category. If an error was made, the correct expression label appeared 1° of visual angle below the face, and participants were instructed to re-examine the stimulus and input the correct answer—trial response was still considered incorrect. Input of the correct answer automatically initiated the next practice trial or ended the block if all trials had been completed. Participants completed as many practice blocks as was needed to reach the performance criterion (M = 3.46, SD = 2.01), and then moved on to the experimental tasks. 
Participants completed a total of twenty-four experimental blocks, each comprising 140 trials. They started with either a block of orientation bubbles (Experiment 2) or location bubbles (Experiment 3), and subsequently alternated between one block of each task. Experiments 2 and 3 thus each comprised a total of 1,680 experimental trials, with 240 trials per facial expression. 
An experimental trial began with a fixation cross (500 ms) in the center of the screen. It was immediately followed by a face stimulus (150 ms) filtered with orientation bubbles (examples can be seen in Figure 3), after which the screen went blank until participants responded using one of the seven assigned keyboard keys—i.e., one key per expression. Task difficulty was controlled by adjusting the RMS contrast of the orientation-filtered faces in order to maintain the criterion performance of 57.14% correct responses—that is, halfway between 100% (perfect) and 14.3% (chance) correct responses. The appropriate RMS contrast level of filtered face stimuli was estimated on a trial-by-trial basis using QUEST (Watson & Pelli, 1983), and dithering was applied to reduce aliasing (Allard & Faubert, 2008). Image contrast was modulated for overall performance—instead of independently for each expression—because not all expressions are equally easy to categorize, and we did not want the contrast level to act as a cue to the correct response. 
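On each trial, the QUEST-estimated threshold would determine the target contrast. A minimal sketch of the contrast adjustment, treating RMS contrast simply as the standard deviation of normalized luminance around the background level (an assumption of this illustration), is shown below.

```python
import numpy as np

def set_rms_contrast(image, target_rms, background=0.5):
    """Rescale a [0, 1] luminance image around the background to a target RMS contrast."""
    centered = image - image.mean()
    return background + centered * (target_rms / centered.std())
```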
Figure 3
Examples of orientation bubbles filtered stimuli (left column), along with the corresponding orientation sampling matrices (right column), as applied in Experiment 2. Image [BF01AFS] from the KDEF recreated with the copyright holder's permission.
Results and discussion
The first two experimental blocks were discarded for the analysis. Participants needed an average RMS contrast of 0.0134 (SD = 0.009) to respond correctly on 57.14% of trials, and the overall average response time on correct trials was 1,238 ms (SD = 212 ms). Performance (percent correct) varied considerably across facial expressions: anger (M = 54.9%, SD = 13.8%), sadness (M = 61.9%, SD = 12.7%), disgust (M = 57%, SD = 12.5%), fear (M = 53.8%, SD = 12.1%), happiness (M = 91.3%, SD = 4.5%), surprise (M = 66.5%, SD = 14.6%), and neutrality (M = 62.7%, SD = 15.9%). Response times (milliseconds) on correct trials also varied between facial expressions: anger (M = 1,359, SD = 334), sadness (M = 1,286, SD = 238), disgust (M = 1,358, SD = 215), fear (M = 1,569, SD = 336), happiness (M = 910, SD = 249), surprise (M = 1,299, SD = 281), and neutrality (M = 1,180, SD = 249). 
To uncover which parts of the orientation spectrum were associated with accuracy, we first performed, for each subject and expression combination, what amounts to a multiple linear regression analysis of orientation sampling vectors on response accuracy scores. The analysis was carried out by calculating a weighted sum of orientation sampling vectors, allocating positive weights to filters associated with correct responses and negative weights to filters associated with incorrect responses. The weights in question were the accuracy scores from the appropriate subset of trials—angry trials for anger, and so forth—which were transformed into z scores. The outcome was thus a series of 40 × 7 classification vectors (CVs). That is, for every subject, seven CVs (one per expression) were created. Every CV was then individually z scored with the mean and standard deviation of the null hypothesis, the parameters of which were estimated by simulating 100 CVs with random permutations of z-scored accuracies from the appropriate subset of trials. 
To retrieve the diagnostic information for individual expressions, seven group CVs (one per expression) were obtained by first summing individually z-scored CVs within expression and across subjects, and then dividing the outcome by √n, where n is the number of subjects. To retrieve the diagnostic information for combined expressions, a pooled expressions CV was created by first summing the above group CVs, and then dividing the outcome by √e, where e is the number of expressions. A two-tailed pixel test (Chauvin et al., 2005) was used to determine the statistical threshold (Zcrit = 2.49, p < 0.05). 
Results for half of the symmetrical orientation spectrum are shown in Figure 4, which plots the z-scored regression coefficients (red line) and the significance thresholds (gray dotted lines) along the orientation spectrum, for each individual expression and for combined expressions. Figure 4 also shows expressions revealed through their respective diagnostic filters (bottom images). As can be seen, information bundled around the −90° horizontal axis is diagnostic for anger (Zmax = 4.72), disgust (Zmax = 6.59), fear (Zmax = 3.9), happiness (Zmax = 2.85), sadness (Zmax = 6.41), neutrality (Zmax = 7.83), and pooled expressions (Zmax = 11.67), all ps < 0.05. The only exception is surprise, for which information at the −157.5° oblique-vertical axis is diagnostic (Zmax = 3.17, p < 0.05). In addition to information around the horizontal axis, information around the −180° vertical axis was also marginally diagnostic for the correct categorization of fear (Zmax = 1.62, p < 0.1). It thus appears that, overall, facial expression categorization as a process is strongly supported by horizontal information. 
Figure 4
Experiment 2 group classification vectors. Each graph plots the z-scored coefficients that resulted from regressing orientation sampling vectors on performance scores (correct/incorrect), for human observers (black line) and the model observer (red line), along with the statistical threshold (dotted gray line), Zcrit = 2.49, p < 0.05. For the expression of fear (top-rightmost graph), the z-scored coefficients that resulted from regressing orientation sampling vectors on hits and false alarms are also plotted (green line). In the top left corner of each graph, a face depicting the appropriate expression is revealed through the diagnostic (bottom) and antidiagnostic (top) orientation band for this expression. In the top right corner of each graph, the Pearson correlation between observer and model orientation profiles can be seen. Images [BF01AFS - BF01ANS - BF01DIS - BF01HAS - BF01NES - BF01SAS - BF01SUS] from the KDEF recreated with the copyright holder's permission.
Moreover, there were also emotion categories for which there was information that negatively correlated with performance. This orientation information, when revealed, tends to systematically lead to incorrect responses (i.e., antidiagnostic information; see, for example, Roy, Fiset, Taschereau-Dumouchel, Gosselin, & Rainville, 2013). We have found antidiagnostic information for anger (Zmin = −2.52), disgust (Zmin = −3.35), fear (Zmin = −2.81), sadness (Zmin = −4.28) and neutrality (Zmin = −4.75), but not for happiness or surprise. Contrary to diagnostic information, which is largely bundled near the horizontal axis, antidiagnostic information is scattered along the rest of the orientation spectrum. Expressions revealed through their respective antidiagnostic filters can be observed in Figure 4 (top images). Antidiagnostic disgust looks like anger, and disgust was in fact miscategorized as anger on 18.6% of trials; antidiagnostic fear looks like surprise, and fear was in fact miscategorized as surprise on 19.3% of trials; finally, antidiagnostic sadness looks like disgust, and sadness was in fact miscategorized as disgust on 10.3% of trials. Although it is less obvious looking at antidiagnostic anger and neutrality, angry and neutral stimuli were both miscategorized as sadness on 11.2% and 12.2% of trials, respectively. 
To benchmark the information available across the orientation spectra of faces, we built a model observer (for details, see Blais et al., 2012; Smith et al., 2005) that was subjected to the orientation bubbles task with essentially the same experimental constraints as our human observers. Thus, the model performed the same number of trials, with the same orientation bubbles filters and the same performance criterion—i.e., 57.14% overall correct responses. On each trial, an orientation sampling matrix was created and applied to the trial stimulus and to each of the 70 possible face images. However, instead of modulating stimulus RMS contrast, we modulated the proportion of Gaussian white noise—estimated with QUEST (Watson & Pelli, 1983)—that was added to the filtered stimulus. Stimulus RMS contrast was thus held constant while noise RMS contrast was varied to control the model's performance (e.g., Blais et al., 2012; Smith et al., 2005). The model calculated the Pearson correlation between the noisy filtered stimulus and each of the filtered face images. In a winner-take-all fashion, the model's categorization response was the emotion expressed by the face image that correlated maximally with the noisy stimulus. Model CVs, which depicted the available information, were then generated and z scored using the exact same procedure as for observer CVs. Even though this model observer is very efficient, it is not the ideal observer. We chose to implement this particular model to allow direct comparison with Blais et al. (2012) and Smith et al. (2005). 
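A simplified sketch of this winner-take-all template matcher is given below; it reuses the hypothetical orientation_filter function from the Experiment 1 sketch and omits the QUEST-controlled noise.

```python
import numpy as np

def model_response(noisy_stimulus, face_images, labels, samp_vec):
    """
    Correlate the noisy, orientation-filtered stimulus with every candidate face
    filtered by the same sampling vector; respond with the best match's expression.
    face_images: (70, 256, 256) array; labels: length-70 list of expression names.
    """
    best_r, best_label = -np.inf, None
    for face, label in zip(face_images, labels):
        template = orientation_filter(face, samp_vec)   # same filtering as the stimulus
        r = np.corrcoef(template.ravel(), noisy_stimulus.ravel())[0, 1]
        if r > best_r:
            best_r, best_label = r, label
    return best_label
```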
As can be seen in Figure 4 (black lines), useful information (Zcrit = 2.49, p < 0.05; two-tailed) was exclusively concentrated on the −90° horizontal axis for anger (Zmax = 7.6), sadness (Zmax = 9.07), disgust (Zmax = 7.52), fear (Zmax = 8.31), happiness (Zmax = 6.75), surprise (Zmax = 7.98), neutrality (Zmax = 8.55), and for pooled expressions (Zmax = 21.04). As can be seen in Figure 4 (top right corners of orientation profiles), human strategies on average strongly correlated with the model profile (M = 0.74, SD = 0.44). The only notable difference was surprise, which negatively correlates with the available information (r = −0.34). 
This result is puzzling, and we were thus interested in elucidating why participants did not align themselves with the available information. First, we found a considerable degree of confusion between surprise and fear: Across all surprise-present trials (M = 66.5% correct responses), surprise was confused with fear 19.23% of the time (vs. 14.27% for the combined remaining expressions). We next verified if and how orientation influenced response patterns on surprise-present trials. To answer this question, we performed two classification vector analyses. For the first analysis, we summed orientation sampling vectors on surprise-present trials, using "surprise" (correct) and "fear" (incorrect) responses as weights. The result is that horizontal information appears to have consistently led to "fear" responses (Zmin = −2.19, p < 0.1), whereas oblique information led to "surprise" responses (Zmax = 3.06, p < 0.05). For the second analysis, we summed orientation sampling vectors on surprise-present trials, using "surprise" and "other" (i.e., anger, disgust, happiness, neutrality, or sadness) responses as weights. Strikingly, horizontal information appears to have led to "surprise" responses in this instance (Zmax = 1.43, p < 0.1), but oblique information did not (Zmax = 1.16, p > 0.1). Thus, it appears that subjects were able to categorize surprise as such when using horizontal information, but they were also highly prone to miscategorizing the expression as fearful. Ultimately, this results in a null correlation between horizontal information and performance when all surprise-present trials are taken into account (Figure 4). 
The case of fear is also an interesting one because vertical information marginally correlated with performance for human observers, but not for the model observer. We performed a secondary analysis to remove from correct “fear” responses variance that can be explained by an overall greater disposition to simply respond “fear,” irrespective of the displayed facial expression. We did so by calculating a weighted sum of orientation sampling vectors, similar to the procedure described above; only this time, the weights were hits (respond “fear” on fear-present trials) and false alarms (respond “fear” on fear-absent trials), transformed into z scores across the appropriate trials subset. The resulting classification vector is illustrated in Figure 4 (top rightmost graph, green line). As can be seen, a single peak emerged around the −90° horizontal axis. Thus, vertical information led to a similar probability of hits and false alarms, consistent with the hypothesis that this information creates a perceptual response bias toward fear. 
Finally, we verified whether expression categorization ability could be predicted from the utilization of horizontal information. We calculated this score by applying a 1D Gaussian filter (FWHM = 20°, sum equal to 1), centered on the −90° horizontal axis, to the z-scored individual classification vectors for pooled expressions. The Gaussian was centered on the horizontal axis because our model observer revealed this to be the most information-rich orientation band, supporting findings in the face processing literature (e.g., Pachai, Sekuler, & Bennett, 2013). The sum of each resulting product vector was thus a weighted average of horizontal information utilization, giving maximal weight to regression coefficients that fell squarely on the horizontal axis, and gradually decreasing weight as coefficients fell further away from this axis. We then correlated this measure of horizontal tuning with contrast sensitivity—the reciprocal of the contrast threshold—which is a direct measure of the amount of information that was needed to maintain 57.14% correct responses in the task. As can be seen in Figure 5, the two measures strongly correlated, r = 0.64, CI 95 = [0.43, 0.8], p < 0.001. This closely parallels previous results showing that face identification ability is linked with horizontal tuning (Pachai, Sekuler, & Bennett, 2013). 
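A sketch of how such a horizontal tuning score might be computed from a participant's pooled-expressions classification vector (the sampling of the orientation axis is an assumption of this illustration):

```python
import numpy as np

def horizontal_tuning(pooled_cv, orients, center=-90.0, fwhm=20.0):
    """Gaussian-weighted average (weights sum to 1) of the pooled CV around the horizontal axis."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))      # FWHM -> standard deviation
    w = np.exp(-0.5 * ((orients - center) / sigma) ** 2)
    return np.sum((w / w.sum()) * pooled_cv)

# One score per participant is then Pearson-correlated with contrast sensitivity,
# i.e., the reciprocal of the RMS contrast threshold.
```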
Figure 5
The association between Experiment 2 image contrast sensitivity and horizontal tuning, r = 0.64, p < 0.001.
Experiment 3: Location bubbles
The goal of Experiment 3 was to sample facial cues using location bubbles (experiment 1 in Gosselin & Schyns, 2001) in order to measure facial feature diagnosticity and correlate it with orientation profiles (as measured in Experiment 2). As already mentioned, the blocks of Experiments 2 and 3 were interleaved within subjects, and 50% of participants began with a block from Experiment 3 while the other 50% began with a block from Experiment 2. We first analyzed local bubbles data to reveal diagnostic face regions. We then correlated individual horizontal tuning with local diagnosticity profiles. 
Participants, apparatus, and stimuli: Same as in Experiment 2
Procedure
The procedure was the same as Experiment 2 except for two elements. First, instead of orientation bubbles, face stimuli were revealed through an opaque mask punctured by a number of randomly located Gaussian apertures (henceforth called the “bubbles mask”) with a FWHM of 39.96 pixels, or 0.95° of visual angle (examples can be seen in Figure 6; for more details, see Gosselin & Schyns, 2001, experiment 1). Second, task difficulty was controlled by adjusting the number of bubbles in order to maintain the criterion performance of 57.14%. The appropriate number of bubbles was estimated on a trial-by-trial basis, using QUEST (Watson & Pelli, 1983). 
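A minimal sketch of how such a bubbles mask could be generated is shown below; whether aperture centers were restricted to the face oval is not specified, so unconstrained placement is an assumption.

```python
import numpy as np

def bubbles_mask(n_bubbles, size=256, fwhm=39.96):
    """Opaque mask punctured by randomly located Gaussian apertures (location bubbles)."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    y, x = np.meshgrid(np.arange(size), np.arange(size), indexing='ij')
    mask = np.zeros((size, size))
    for cy, cx in np.random.randint(0, size, size=(n_bubbles, 2)):
        mask += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return np.minimum(mask, 1.0)    # cap overlapping apertures at full visibility

# A trial stimulus would then be face * mask + background * (1 - mask), with the
# number of bubbles adjusted by QUEST to hold accuracy at 57.14%.
```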
Figure 6
Examples of location bubbles filtered stimuli (left column), along with the corresponding bubbles mask (right column), as applied in Experiment 3. Image [BF01AFS] from the KDEF recreated with the copyright holder's permission.
Results and discussion
The first two experimental blocks were discarded for the analysis. Participants needed an average of 27.16 (SD = 13.08) bubbles to respond correctly on 57.14% of trials, and the average response time on correct trials was 1,238 ms (SD = 261 ms). Performance (percent correct) varied across expressions: anger (M = 49.4%, SD = 11.4%), sadness (M = 63%, SD = 10.4%), disgust (M = 57.5%, SD = 12.3%), fear (M = 50%, SD = 11.3%), happiness (M = 85.5%, SD = 7%), surprise (M = 58.2%, SD = 13.4%), and neutrality (M = 62.7%, SD = 16.9%). Response times (milliseconds) on correct trials also varied considerably between expressions: anger (M = 1,360, SD = 336), sadness (M = 1,278, SD = 274), disgust (M = 1,353, SD = 281), fear (M = 1,586, SD = 416), happiness (M = 878, SD = 207), surprise (M = 1,336, SD = 356), and neutrality (M = 1,172, SD = 292). 
To uncover which facial cues more often led to accurate responses, we performed, for each subject and each expression, the same procedure as in Experiment 2, but with bubbles masks instead of orientation sampling vectors. The outcome of this procedure was 40 × 7 planes of 256 × 256 spatially correlated regression coefficients (henceforth called classification images; Eckstein & Ahumada, 2002; Gosselin & Schyns, 2004). These reveal the association between image pixels and accurate categorization of the corresponding facial expression. Classification images were then individually z scored with the mean and standard deviation of the null hypothesis (100 simulated classification images). 
To retrieve the diagnostic local information for individual expressions, seven group classification images were obtained by first summing individually z-scored classification images within expression and across subjects, and then dividing the outcome by √n. For combined expressions, a pooled expressions classification image was created by summing the above group classification images and dividing the outcome by √e. A pixel test (Chauvin et al., 2005) was used to determine the statistical threshold (Zcrit = 3.4, p < 0.05; two-tailed). 
Results are shown in Figure 7, which overlays significant regression coefficients (colored pixels, p < 0.05) on grayscale face images. As can be seen, different facial features are linked with the categorization of the various facial expressions. For pooled expressions, both the eyes (Zmax = 6.39) and the mouth (Zmax = 10.9) significantly correlated with performance, and the difference between the two regions was marginally significant (Zdif = 3.19, p < 0.1). Thus, our results replicate the finding that the mouth is overall the most diagnostic area (Blais et al., 2012). 
To benchmark the information revealed by location bubbles, we built a model observer similar to our orientation bubbles model observer. It was thus subjected to the location bubbles task with essentially the same experimental constraints as human observers (for details, see Blais et al., 2012; Smith et al., 2005). On each trial, a bubbles mask was created and applied to the stimulus and to each of the 70 possible face images. Instead of modulating the number of bubbles, we modulated the proportion of Gaussian white noise—estimated with QUEST (Watson & Pelli, 1983)—that was added to the masked stimulus. The model calculated the Pearson correlation between the noisy masked stimulus and each of the masked face images. The model's response was the emotion expressed by the face image that correlated maximally with the noisy masked stimulus. Model classification images depicting the available information were then generated and z scored using the exact same procedure as for observer classification images. Usable facial information varied across expressions and, on average, the Pearson correlation between human and model observer profiles was strong (M = 0.72, SD = 0.11). For pooled expressions, available information was concentrated around the eyes (Zmax = 8.17) and the mouth (Zmax = 9.48), and the difference between the two was nonsignificant (Zdif = 0.93). 
Before investigating the link between the utilization of local information and of the orientation spectrum, we first looked at the link between our two task performance metrics—see Royer, Blais, Gosselin, Duncan, and Fiset (2015) for evidence showing that the amount of information revealed by bubbles is a good predictor of face processing abilities. As can be seen in Figure 8, individual differences in contrast sensitivity (Experiment 2) strongly correlated with differences in the number of bubbles (Experiment 3), r = −0.71, CI 95= [−0.84, −0.51], p < 0.001. This suggests that our two task manipulations tapped into a common perceptual mechanism for categorizing facial expressions. We thereafter looked at the correlation between utilization of orientation and of facial features. 
Figure 7
Experiment 3 group classification images. Areas depicted in color significantly correlated with task performance, Zcrit = 3.4, p < 0.05. Images [BF01AFS - BF01ANS - BF01DIS - BF01HAS - BF01NES - BF01SAS - BF01SUS] from the KDEF recreated with the copyright holder's permission.
Figure 8
The association between image contrast sensitivity (Experiment 2) and the number of bubbles (Experiment 3), r = −0.71, p < 0.001.
To look at the link between horizontal tuning and utilization of facial features, we performed a multiple linear regression analysis of facial feature diagnosticity (independent variable) on horizontal tuning as described in Experiment 2 (dependent variable). For facial features, diagnosticity scores were defined as the maximum z-scored regression coefficient that fell within a region of interest (ROI) of the smooth classification images. These were extracted for each subject in the six following discrete ROIs (illustrated in Figure 9): the eyebrow junction, eyebrows, eyes, nose, nasolabial folds, and mouth. We obtained a significant regression equation, F(6, 33) = 3.36, p < 0.05, with an R2 equal to 0.38. Interestingly, the eye region was the only significant predictor in this equation, t(39) = 3.8, p < 0.01 (all other features, p > 0.2). More specifically, the correlation between eye diagnosticity and utilization of horizontal information was r = 0.54, CI 95 = [0.27, 0.73], p < 0.001. Thus, it globally appears that individual differences in the utilization of horizontal information are intimately linked with differences in the utilization of the eye region. 
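A rough sketch of this analysis, under the assumption that each ROI is supplied as a boolean mask over the classification image:

```python
import numpy as np

def roi_diagnosticity(z_classification_image, roi_mask):
    """Maximum z-scored coefficient inside a region of interest (boolean mask)."""
    return z_classification_image[roi_mask].max()

def regress_rois_on_tuning(roi_scores, tuning):
    """
    Ordinary least squares: horizontal tuning (one value per participant) regressed
    on the six ROI diagnosticity scores. roi_scores: (n_subjects, 6); tuning: (n_subjects,).
    """
    X = np.column_stack([np.ones(len(tuning)), roi_scores])   # add intercept
    beta, *_ = np.linalg.lstsq(X, tuning, rcond=None)
    return beta                                                # intercept + six slopes
```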
Figure 9
Color coded regions of interest (ROI) used in the multiple linear regression analysis—comparing orientation and local diagnostic profiles—are overlaid on a grayscale picture of a face. Red = eyes; orange = eyebrows; yellow = eyebrow junction; green = nose; blue = nasolabial folds; purple = mouth. Image [BF01AFS] from the KDEF recreated with the copyright holder's permission.
General discussion
Our first objective with the present work was to explore the role played by the orientation spectrum in the categorization of the six basic facial expressions and of neutrality. We developed orientation bubbles, which allow the extraction of precise orientation profiles, and validated the procedure in Experiment 1 using a simple plaid detection task. We then proceeded to explore the role of orientations in the categorization of facial expressions. 
Previous work had already demonstrated that horizontal information plays a crucial role in the categorization of happy and sad facial expressions by contrasting performance with horizontal and vertical information (Balas & Huynh, 2015; Huynh & Balas, 2014). In Experiment 2, we addressed two limitations of this work by applying orientation bubbles to all the basic facial expressions and neutrality. Overall, we found a strong link between horizontal information and the successful categorization of neutrality and the basic expressions—except surprise. Our results thus replicated findings pertaining to the recognition of happy and sad expressions (Balas & Huynh, 2015; Huynh & Balas, 2014), and expanded upon them by uncovering the link between horizontal information and other expressions. Additionally, we found antidiagnostic—oblique-to-vertical—orientations for a handful of expressions. This means that some information, if relied upon too heavily by the visual system, systematically leads to incorrect categorization responses. 
Pearson correlations between human and model—horizontally tuned—profiles were on average quite strong, for both individual and pooled expressions. The only exception was surprise, for which the human strategy instead rested on information in the oblique-vertical axes, a strategy that correlated negatively with the model strategy. The results of our secondary analyses suggest that this is not because participants were incapable of using horizontal information to categorize surprise. Instead, they suggest that, on surprise-present trials, horizontal information systematically led to "fear" responses. In other words, had we not included fear among our expression categories, it is possible that subjects would have shown successful utilization of horizontal information to categorize surprise. We can only speculate as to why the presence of fear caused this confusion. One possibility is that surprise, as revealed through a horizontal filter, might be harder to dissociate from internal representations of fear. When surprise is instead revealed through its diagnostic oblique-vertical band, the rounded open mouth that is typical of this expression becomes more evident. Furthermore, when other expressions are revealed through surprise's diagnostic orientation bands, the teeth and nasolabial folds—which are not typically associated with surprise—emerge from the picture. Thus, it might be easier to untangle surprise from other internal representations when an expression is revealed through these oblique-vertical bands, even if this strategy is not well tuned to the available information—as revealed by the model observer. 
Our second objective was to provide the first empirical investigation of the link between the utilization of the orientation spectrum and that of local facial features. Experiments 2 and 3 were designed with this specific goal in mind. Orientation and location bubbles blocks were interleaved within subjects, such that we could analyze the data from the two tasks using an individual differences approach. We found that individual differences in the task performance metrics, contrast sensitivity (Experiment 2) and number of bubbles (Experiment 3), were strongly correlated. We also found that individual differences in utilization of horizontal information were best predicted by eye diagnosticity alone. No other feature was associated with horizontal tuning—not even the mouth. 
At first glance, this result might seem surprising, given the importance of the mouth for human observers carrying out this task (Blais et al., 2012; Calvo, Fernández-Martín, & Nummenmaa, 2014). Our location bubbles model observer, however, revealed that the eyes and the mouth convey information in about the same proportions, replicating previous results (Blais et al., 2012). Furthermore, as already mentioned, our orientation bubbles model observer revealed that horizontal orientations carry the most information for categorizing all the basic facial expressions. Thus, individuals who made better use of horizontal information used a strategy that was de facto better aligned with the available information, suggesting that they were in fact more efficient. A possible explanation of our results is therefore that this increase in horizontal processing was reflected in an increase in eye processing—the mouth being used by all observers irrespective of horizontal tuning. 
More specifically, the mechanism for this could lie in the processing of horizontal information in mid-to-high spatial frequencies. Indeed, recent findings indicate that the horizontal tuning of face processing mechanisms is best supported by this frequency range (Goffaux, van Zon, & Schiltz, 2011). Moreover, a study comparing Gabor filter responses to hundreds of pictures of human faces found that the eyes specifically contain more horizontal energy in these spatial frequencies (Keil, 2009). 
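As an informal illustration of the kind of measurement involved, and not a reproduction of Keil's (2009) pipeline, one could compare horizontal and vertical Gabor energy within an eye region at a single mid-to-high spatial frequency, roughly as follows; the kernel parameters, the spatial frequency value, and the ROI mask are placeholders.

# Illustrative sketch only (not the pipeline of Keil, 2009): compare horizontal
# versus vertical Gabor energy inside an eye ROI at one mid-to-high spatial frequency.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=4.0, size=31):
    # Complex Gabor kernel; frequency in cycles/pixel, theta in radians.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    carrier_axis = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * frequency * carrier_axis)

def roi_orientation_energy(image, roi_mask, frequency=0.12):
    # Mean Gabor magnitude within the ROI for horizontally versus vertically
    # oriented image structure (theta = pi/2 picks up horizontal bars).
    energies = {}
    for label, theta in (("horizontal", np.pi / 2), ("vertical", 0.0)):
        response = fftconvolve(image, gabor_kernel(frequency, theta), mode="same")
        energies[label] = np.abs(response)[roi_mask].mean()
    return energies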
Our results, along with those of Pachai and colleagues (2013) on face identification, show that individual ability levels in face identification and expression categorization are well predicted by horizontal tuning. Thus, it could be that the information pertinent to these tasks is processed by a common cerebral region. For instance, recent findings suggest that overlap between face identification and facial expression recognition might occur in regions such as the fusiform gyrus and the functionally defined fusiform face area (FFA; Kanwisher et al., 1997; see also, for review, Duchaine & Yovel, 2015). The FFA was shown to respond equally strongly to emotional and neutral faces (e.g., Winston, Vuilleumier, & Dolan, 2003), and evidence suggests that the FFA responds reliably to the eye region—and also to the mouth region—in faces expressing fear (Smith et al., 2008). Furthermore, the FFA was found to be the only region—compared with the primary visual cortex and the occipital face area—to respond selectively to the horizontal information of faces (Goffaux et al., 2016). Thus, the FFA could underpin the diagnosticity of horizontal information and of local features, for both facial identity and expressions. 
Relatedly, the N170 (see, for review, Eimer, 2011; Rossion, 2014), which is suggested to emerge from FFA activity (Sadeh et al., 2010), also appears to be sensitive to horizontal facial information (Jacques, Schiltz, & Goffaux, 2014). Additionally, this component has been likened to an eye detector (Rousselet, Ince, van Rijsbergen, & Schyns, 2014; Schyns, Jentzsch, Johnson, Schweinberger, & Gosselin, 2003; Smith, Gosselin, & Schyns, 2004), and to a diagnostic information integrator for facial expressions (Schyns, Petro, & Smith, 2007). By showing the link between horizontal tuning and eye utilization, our results could help bridge these findings, supporting the notion that they might be different sides of the same coin. 
Acknowledgments
We would like to thank two anonymous reviewers for insightful comments which helped us improve this article. This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) to Daniel Fiset and an Undergraduate Scholarship from NSERC to Gabrielle Dugas. 
Commercial relationships: none. 
Corresponding author: Daniel Fiset. 
Address: Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Gatineau, Qc, Canada. 
References
Allard, R., & Faubert, J. (2008). The noisy-bit method for digital displays: Converting a 256 luminance resolution into a continuous resolution. Behavior Research Methods, 40 (3), 735–743.
Balas, B. J., & Huynh, C. M. (2015). Face and body emotion recognition depend on different orientation sub-bands. Visual Cognition, 23 (6), 659–677.
Balas, B. J., Schmidt, J., & Saville, A. (2015). A face detection bias for horizontal orientations develops in middle childhood. Frontiers in Psychology, 6, 772.
Blais, C., Fiset, D., Roy, C., Saumure Régimbald, C., & Gosselin, F. (2017). Eye fixation patterns for categorizing static and dynamic facial expressions. Emotion, 17 (7), 1107–1119.
Blais, C., Roy, C., Fiset, D., Arguin, M., & Gosselin, F. (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50 (12), 2830–2838.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10 (4), 433–436.
Butler, S., Blais, C., Gosselin, F., Bub, D., & Fiset, D. (2010). Recognizing famous people. Attention, Perception, & Psychophysics, 72 (6), 1444–1449.
Caldara, R., Schyns, P., Mayer, E., Smith, M. L., Gosselin, F., & Rossion, B. (2005). Does prosopagnosia take the eyes out of face representations? Evidence for a defect in representing diagnostic facial information following brain damage. Journal of Cognitive Neuroscience, 17 (10), 1652–1666.
Calvo, M. G., Fernández-Martín, A., & Nummenmaa, L. (2014). Facial expression recognition in peripheral versus central vision: Role of the eyes and the mouth. Psychological Research, 78 (2), 180–195.
Chauvin, A., Worsley, K. J., Schyns, P. G., Arguin, M., & Gosselin, F. (2005). Accurate statistical tests for smooth classification images. Journal of Vision, 5 (9): 1, 659–667, doi:10.1167/5.9.1. [PubMed] [Article]
Dakin, S. C., & Watt, R. J. (2009). Biological “bar codes” in human faces. Journal of Vision, 9 (4): 2, 1–10, doi:10.1167/9.4.2. [PubMed] [Article]
De Valois, R. L., & De Valois, K. K. (1990). Spatial Vision. New York: Oxford University Press.
Duchaine, B., & Yovel, G. (2015). A revised neural framework for face processing. Annual Review of Vision Science, 1, 393–416.
Dunlap, K. (1927). The role of eye-muscles and mouth-muscles in the expression of the emotions. Genetic Psychology Monographs, 2 (3), 199–233.
Eckstein, M. P., & Ahumada, A. J., Jr. (2002). Classification images: A tool to analyze visual strategies. Journal of Vision, 2 (1): 1, doi:10.1167/2.1.i. [PubMed] [Article]
Eimer, M. (2011). The face-sensitive N170 component of the event-related brain potential. In A. J. Calder, G. Rhodes, M. H. Johnson, & J. V. Haxby (Eds.), The Oxford handbook of face perception (pp. 329–344). New York: Oxford University Press.
Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion, 11 (4), 860–865.
Ekman, P., & Friesen, W. V. (1975). Unmasking the face. Englewood Cliffs, NJ: Prentice Hall.
Fiset, D., Blais, C., Royer, J., Richoz, A.R., Dugas, G., & Caldara, R. (2017). Mapping the impairment in decoding static facial expression of emotions in prosopagnosia. Social Cognitive and Affective Neuroscience, 12 (8), 1334–1341.
Gaspar, C. M., Sekuler, A. B., & Bennett, P. J. (2008). Spatial frequency tuning of upright and inverted face identification. Vision Research, 48, 2817–2826.
Goffaux, V., & Dakin, S. C. (2010). Horizontal information drives the behavioral signatures of face processing. Frontiers in Psychology, 1, 143.
Goffaux, V., & Greenwood, J. A. (2016). The orientation selectivity of face identification. Scientific Reports, 6, 34204, doi: 10.1038/srep34204.
Goffaux, V., Hausfeld, L., Schiltz, C., & Goebel, R. (2016). Horizontal tuning for faces originates in high-level fusiform face area. Neuropsychologia, 81, 1–17, doi: 10.1016/j.neuropsychologia.2015.12.004.
Goffaux, V., van Zon, J., & Schiltz, C. (2011). The horizontal tuning of face perception relies on the processing of intermediate and high spatial frequencies. Journal of Vision, 11 (10): 1, 1–9, doi:10.1167/11.10.1. [PubMed] [Article]
Gold, J., Bennett, P. J., & Sekuler, A. B. (1999). Identification of band-pass filtered letters and faces by human and ideal observers. Vision Research, 39 (21), 3537–3560.
Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41 (17), 2261–2271.
Gosselin, F., & Schyns, P. G. (2004). A picture is worth thousands of trials: Rendering the use of visual information from spiking neurons to recognition. Cognitive Science, 28 (2), 141–146.
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4 (6), 223–233.
Huynh, C. M., & Balas, B. (2014). Emotion recognition (sometimes) depends on horizontal orientations. Attention, Perception, & Psychophysics, 76 (5), 1381–1392.
Izard, C. (1971). The face of emotion. New York, NY: Appleton-Century-Crofts.
Jack, R. E., Garrod, O. G. B., & Schyns, P. G. (2014). Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Current Biology, 24 (2), 187–192.
Jack, R. E., & Schyns, P. G. (2015). The human face as a dynamic tool for social communication. Current Biology, 25 (14), R621–R634.
Jack, R. E., Sun, W., Delis, I., Garrod, O. G. B., & Schyns, P. G. (2016). Four not six: Revealing culturally common facial expressions of emotion. Journal of Experimental Psychology: General, 145 (6), 708–730, doi: 10.1037/xge0000162.
Jacques, C., Schiltz, C., & Goffaux, V. (2014). Face perception is tuned to horizontal orientation in the N170 time window. Journal of Vision, 14 (2): 5, 1–18, doi:10.1167/14.2.5. [PubMed] [Article]
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17 (11), 4302–4311.
Keil, M. S. (2009). “I look in your eyes, honey”: Internal face features induce spatial frequency preference for human face processing. PLoS Computational Biology, 5 (3), e1000329.
Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces—KDEF [CD ROM]. Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, ISBN 91-630-7164-9.
Näsänen, R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision Research, 39 (23), 3824–3833.
Pachai, M. V., Sekuler, A. B., & Bennett, P. J. (2013). Sensitivity to information conveyed by horizontal contours is correlated with face identification accuracy. Frontiers in Psychology, 4, 74, doi:10.3389/fpsyg.2013.00074.
Pachai, M. V., Sekuler, A. B., Bennett, P. J., Schyns, P. G., & Ramon, M. (2017). Personal familiarity enhances sensitivity to horizontal structure during processing of face identity. Journal of Vision, 17 (6): 5, 1–11, doi:10.1167/17.6.5. [PubMed] [Article]
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences, USA, 109 (48), E3314–E3323.
Rossion, B. (2014). Understanding face perception by means of human electrophysiology. Trends in Cognitive Sciences, 18 (6), 310–318.
Rousselet, G. A., Ince, R. A., van Rijsbergen, N. J., & Schyns, P. G. (2014). Eye coding mechanisms in early human face event-related potentials. Journal of Vision, 14 (13): 7, 1–24, doi:10.1167/14.13.7. [PubMed] [Article]
Roy, C., Fiset, D., Taschereau-Dumouchel, V., Gosselin, F., & Rainville, P. (2013). A refined examination of the facial cues contributing to vicarious effects on self-pain and spinal responses. The Journal of Pain, 14, 1475–1484.
Royer, J., Blais, C., Gosselin, F., Duncan, J., & Fiset, D. (2015). When less is more: Impact of face processing ability on recognition of visually degraded faces. Journal of Experimental Psychology: Human Perception and Performance, 41 (5), 1179–1183.
Royer, J., Blais, C., Déry, K., & Fiset, D. (2016). For best results, use the eyes: Individual differences and diagnostic features in face recognition. Journal of Vision, 16 (12): 77, doi:10.1167/16.12.77. [Abstract]
Royer, J., Willenbockel, V., Gosselin, F., Blais, C., Leclerc, J., Lafortune, S., & Fiset, D. (2017). The influence of natural contour and face size on the spatial frequency tuning for identifying upright and inverted faces. Psychological Research, 81 (1), 13–23, doi: 10.1007/s00426-015-0740-3.
Sadeh, B., Podlipsky, I., Zhdanov, A., & Yovel, G. (2010). Event-related potential and functional MRI measures of face-selectivity are highly correlated: A simultaneous ERP-fMRI investigation. Human Brain Mapping, 31 (10), 1490–1501.
Schyns, P. G., Bonnar, L., & Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13 (5), 402–409.
Schyns, P. G., Jentzsch, I., Johnson, M., Schweinberger, S. R., & Gosselin, F. (2003). A principled method for determining the functionality of brain responses. Neuroreport, 14 (13), 1665–1669.
Schyns, P. G., Petro, L. S., & Smith, M. L. (2007). Dynamics of visual information integration in the brain for categorizing facial expressions. Current Biology, 17 (18), 1580–1585.
Sekuler, A., Gaspar, C. M., Gold, J. M., & Bennett, P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14 (5), 391–396.
Smith, F. W., Muckli, L., Brennan, D., Pernet, C., Smith, M. L., Belin, P., … Schyns, P. G. (2008). Classification images reveal the information sensitivity of brain voxels in fMRI. NeuroImage, 40 (4), 1643–1654.
Smith, F. W., & Schyns, P. (2009). Smile through your fear and sadness: Transmitting and identifying facial expressions over a range of viewing distances. Psychological Science, 20 (10), 1202–1208.
Smith, M. L., Gosselin, F., Cottrell, G. W., & Schyns, P. G. (2005). Transmitting and decoding facial expressions of emotion. Psychological Science, 16 (3), 184–189.
Smith, M. L., Gosselin, F., & Schyns, P. G. (2004). Receptive fields for flexible face categorizations. Psychological Science, 15 (11), 753–761.
Smith, M. L., & Merlusca, C. (2014). How task shapes the use of information during facial expression categorizations. Emotion, 14 (3), 478–487.
Tadros, K., Dupuis-Roy, N., Fiset, D., Arguin, M., & Gosselin, F. (2013). Reading laterally: The cerebral hemispheric use of spatial frequencies in visual word recognition. Journal of Vision, 13 (1): 4, 1–12, doi:10.1167/13.1.4. [PubMed] [Article]
Wang, H. F., Friel, N., Gosselin, F., & Schyns, P. G. (2011). Effective bubbles for visual categorization tasks. Vision Research, 51 (12), 1318–1323.
Watson, A. B., & Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33 (2), 113–120.
Willenbockel, V., Fiset, D., Chauvin, A., Blais, C., Arguin, M., Tanaka, J. W., … Gosselin, F. (2010). Does face inversion change spatial frequency tuning? Journal of Experimental Psychology: Human Perception and Performance, 36 (1), 122–135.
Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: the SHINE toolbox. Behavior Research Methods, 42 (3), 671–684.
Winston, J. S., Vuilleumier, P., & Dolan, R. J. (2003). Effects of low-spatial frequency components of fearful faces on fusiform cortex activity. Current Biology, 13 (20), 1824–1829.
Yin, R. K. (1969). Looking at upside down faces. Journal of Experimental Psychology, 81 (1), 141–145.
Figure 1
 
Illustration of orientation bubbles filtering. A plaid (A) is converted to its Fourier spectrum with the Fast Fourier Transform (FFT) algorithm, and its quadrants are shifted (B). An orientation sampling vector (C) is created by summing ten pairs of von Mises orientation samples (orientation bubbles). The orientation sampling matrix (D) is then created by applying the orientation sampling vector to an orientation matrix. Orientation filtering is carried out by dot multiplying (.*) the orientation sampling matrix and the shifted plaid Fourier spectrum. The experimental stimulus is then reconstructed by inverse FFT (IFFT), and Gaussian white noise is added (E).
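The filtering steps summarized above can be sketched in a few lines of code. The following illustration (Python with NumPy) is not the implementation used in the experiments: the bubble concentration (kappa) and the noise standard deviation are placeholders, and the pairing of bubbles across opposite Fourier quadrants is handled implicitly by folding orientations onto a 180-degree range.

# Minimal sketch of the orientation-bubbles filtering steps (A-E); illustrative
# parameter values only, not those of the study.
import numpy as np

def orientation_sampling_vector(n_bubbles=10, kappa=20.0, n_angles=180, rng=None):
    # Sum of von Mises "bubbles" over the 180-degree orientation space (step C).
    rng = np.random.default_rng() if rng is None else rng
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    vec = np.zeros(n_angles)
    for mu in rng.uniform(-np.pi / 2, np.pi / 2, size=n_bubbles):
        vec += np.exp(kappa * (np.cos(2 * (theta - mu)) - 1))  # von Mises bump centered at mu
    return np.clip(vec, 0, 1)

def apply_orientation_bubbles(image, noise_sd=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fftshift(np.fft.fft2(image))                  # steps A-B
    vec = orientation_sampling_vector(rng=rng)                      # step C
    h, w = image.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2, indexing="ij")
    ori = np.arctan2(fy, fx)                                        # orientation of each coefficient
    ori = np.mod(ori + np.pi / 2, np.pi) - np.pi / 2                # fold onto [-90, 90) degrees
    idx = np.clip(((ori + np.pi / 2) / np.pi * len(vec)).astype(int), 0, len(vec) - 1)
    sampling_matrix = vec[idx]                                      # step D
    sampling_matrix[h // 2, w // 2] = 1.0                           # preserve mean luminance (DC)
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * sampling_matrix)).real
    return filtered + rng.normal(0.0, noise_sd, image.shape)        # step E: add white noise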
Figure 2
 
Experiment 1 group classification vector. Orientation bubbles accurately revealed the diagnostic information of the plaid, with significant peaks emerging at −0.62° (vertical axis) and −88.14° (horizontal axis), Zcrit = 2.49, p < 0.05.
Figure 3
 
Examples of orientation bubbles filtered stimuli (left column), along with the corresponding orientation sampling matrices (right column), as applied in Experiment 2. Image [BF01AFS] from the KDEF recreated with the copyright holder's permission.
Figure 4
 
Experiment 2 group classification vectors. Each graph plots the z-scored coefficients that resulted from regressing orientation sampling vectors on performance scores (correct/incorrect), for human observers (black line) and the model observer (red line), along with the statistical threshold (dotted gray line), Zcrit = 2.49, p < 0.05. For the expression of fear (top-rightmost graph), the z-scored coefficients that resulted from regressing orientation sampling vectors on hits and false alarms are also plotted (green line). In the top left corner of each graph, a face depicting the appropriate expression is revealed through the diagnostic (bottom) and antidiagnostic (top) orientation band for this expression. In the top right corner of each graph, the Pearson correlation between observer and model orientation profiles can be seen. Images [BF01AFS - BF01ANS - BF01DIS - BF01HAS - BF01NES - BF01SAS - BF01SUS] from the KDEF recreated with the copyright holder's permission.
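For readers who want to see how such a classification vector could be assembled, a simplified sketch follows. The article's significance threshold (Zcrit) comes from the statistical method of Chauvin et al. (2005); the permutation null used below is a simplified stand-in for illustration only, and all variable names are hypothetical.

# Sketch of a classification vector: regress trial-wise orientation sampling
# vectors onto accuracy and z-score the coefficients against a permutation null.
import numpy as np

def classification_vector(sampling_vectors, correct, n_perm=1000, rng=None):
    # sampling_vectors: (n_trials, n_orientations) orientation sampling vectors
    # correct: (n_trials,) vector of 0/1 accuracy scores
    rng = np.random.default_rng() if rng is None else rng
    y = correct - correct.mean()                          # center accuracy
    X = sampling_vectors - sampling_vectors.mean(axis=0)  # center predictors
    coefs = X.T @ y / len(y)                              # one coefficient per orientation
    null = np.empty((n_perm, X.shape[1]))
    for i in range(n_perm):                               # permutation null distribution
        null[i] = X.T @ rng.permutation(y) / len(y)
    return (coefs - null.mean(axis=0)) / null.std(axis=0)  # z-scored classification vector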
Figure 5
 
The association between Experiment 2 image contrast sensitivity and horizontal tuning, r = 0.64, p < 0.001.
Figure 6
 
Examples of location bubbles filtered stimuli (left column), along with the corresponding bubbles mask (right column), as applied in Experiment 3. Image [BF01AFS] from the KDEF recreated with the copyright holder's permission.
Figure 7
 
Experiment 3 group classification images. Areas depicted in color significantly correlated with task performance, Zcrit = 3.4, p < 0.05. Images [BF01AFS - BF01ANS - BF01DIS - BF01HAS - BF01NES - BF01SAS - BF01SUS] from the KDEF recreated with the copyright holder's permission.
Figure 8
 
The association between image contrast sensitivity (Experiment 2) and the number of bubbles (Experiment 3), r = −0.71, p < 0.001.