**Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic—i.e., task relevant—orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions—surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.**

**Introduction**

**Experiment 1**

**Methods**

**Participants**

**Apparatus**

**Procedure**

**Orientation bubbles**

Second, an *orientation sampling vector* was created (Figure 1C). It consisted of ten pairs of Von Mises orientation samples, or *orientation bubbles*. The Von Mises is a circular function analogous to the wrapped normal distribution and ranges from −180° to +180°. It has two parameters: *μ*, which designates the orientation (in degrees) at which the distribution peaks, and *κ*, which determines the concentration of the distribution around that peak. Each bubble pair peaked at *μi* and *μi* + 180°. The *μi* parameters, with *i* = 1 to 10, were randomly drawn with replacement from a rectangular distribution of all orientations, whereas the *κ* parameter was held constant across samples.
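The sampling-vector construction can be sketched as follows. This is a toy version under stated assumptions: peak-normalized Von Mises profiles, transmission capped at 1, and an illustrative *κ* and 1° resolution (the paper's actual values are not given here).

```python
import numpy as np

def von_mises(theta_deg, mu_deg, kappa):
    """Von Mises profile over orientation, peak-normalized to 1."""
    theta = np.deg2rad(theta_deg - mu_deg)
    return np.exp(kappa * (np.cos(theta) - 1.0))

def orientation_sampling_vector(n_pairs=10, kappa=10.0, n_orient=360, rng=None):
    """Sum of `n_pairs` Von Mises pairs peaking at mu_i and mu_i + 180 deg.

    `kappa` and the 360-point resolution are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    thetas = np.linspace(-180.0, 180.0, n_orient, endpoint=False)
    vec = np.zeros(n_orient)
    # mu_i drawn with replacement from a rectangular (uniform) distribution
    for mu in rng.uniform(-180.0, 180.0, n_pairs):
        vec += von_mises(thetas, mu, kappa)
        vec += von_mises(thetas, mu + 180.0, kappa)
    return thetas, np.clip(vec, 0.0, 1.0)  # cap transmission at 1
```

Because every bubble comes with its 180° twin, the resulting vector is 180°-periodic, which keeps the subsequent Fourier filter symmetric.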

Third, an *orientation sampling matrix* of dimension 256 × 256 was created (Figure 1D) by applying the orientation sampling vector to an orientation matrix. This orientation matrix was equal to tan^{−1}[(*y* − 127)/(*x* − 127)], with *x* and *y* corresponding, respectively, to the column and the row of the orientation matrix. Fourth, and finally, the orientation sampling matrix was dot-multiplied with the image Fourier spectrum, and the resulting Fourier spectrum was inverse fast Fourier transformed (Figure 1E). Gaussian white noise was added to the filtered stimulus to maintain performance at 75% correct responses. The appropriate noise level was estimated on a trial-by-trial basis using QUEST (Watson & Pelli, 1983).
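A minimal sketch of the filtering step, assuming an fftshift-centered spectrum and linear interpolation of the sampling vector (the helper names are ours; `atan2` replaces the one-argument arctangent so that the μ/μ + 180° pairs map onto the spectrum's two symmetric halves):

```python
import numpy as np

def make_orientation_matrix(n=256):
    """Orientation (deg) of each Fourier coefficient after fftshift.
    Centering on n // 2 is a slight simplification of the paper's
    fixed 127 offset."""
    y, x = np.mgrid[0:n, 0:n].astype(float)
    c = n // 2  # DC component after fftshift
    return np.rad2deg(np.arctan2(y - c, x - c))

def apply_orientation_bubbles(img, thetas, sampling_vec):
    """Dot-multiply the image spectrum with the orientation sampling
    matrix and invert the FFT (a sketch, not the paper's exact code)."""
    orient = make_orientation_matrix(img.shape[0])
    # look up each coefficient's transmission in the sampling vector
    smat = np.interp(orient, thetas, sampling_vec, period=360.0)
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * smat)))
```

With an all-pass sampling vector (all ones) the routine returns the original image, which is a convenient sanity check on the shift and interpolation conventions.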

**Results and discussion**

A weighted sum of orientation sampling vectors was computed, using as weights accuracies transformed in *z* scores across the appropriate subset of trials. The outcome was a series of 10 vectors of spatially correlated regression coefficients—henceforth called classification vectors, or CVs—quantifying the strength of association between orientations and accurate detection of the plaid. Every CV was then individually *z* scored with the mean and standard deviation of the null hypothesis, the parameters of which were estimated by simulating 100 CVs. Each simulated CV was generated with a weighted sum of orientation sampling vectors, using instead random permutations of the *z*-scored accuracies from the "plaid present" trials subset.
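The first-level analysis can be sketched as below. Array shapes are hypothetical; `n_perm=100` matches the number of simulated CVs in the text.

```python
import numpy as np

def classification_vector(sampling_vecs, accuracies, n_perm=100, rng=None):
    """Weighted sum of per-trial orientation sampling vectors, with
    z-scored accuracies as weights, then z-scored against a permutation
    null (the weights reshuffled at random across trials)."""
    rng = np.random.default_rng(rng)
    z_acc = (accuracies - accuracies.mean()) / accuracies.std()
    cv = sampling_vecs.T @ z_acc  # one coefficient per orientation
    null = np.array([sampling_vecs.T @ rng.permutation(z_acc)
                     for _ in range(n_perm)])
    return (cv - null.mean(axis=0)) / null.std(axis=0)
```

Permuting the accuracy weights breaks any trial-by-trial link between sampled orientations and performance while preserving both marginal distributions, which is what makes the simulated CVs a valid null.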

A group CV was created by first summing the individual *z*-scored CVs across subjects, and then dividing the outcome by √*n*, where *n* is the sample size. A pixel test (Chauvin, Worsley, Schyns, Arguin, & Gosselin, 2005) was used to determine the statistical threshold (*Zcrit* = 2.49, *p* < 0.05; one-tailed). The pixel test applies a statistical correction for multiple observations, while also taking into account the spatial correlation that results from the 1D orientation bubble size.
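The √*n* division rests on the fact that a sum of *n* independent unit-variance z-scores has standard deviation √*n*, so dividing by √*n* keeps the group CV on a z scale under the null. A one-line sketch:

```python
import numpy as np

def group_cv(individual_z_cvs):
    """Combine z-scored CVs (subjects x orientations) into a group CV
    that remains z-distributed under the null hypothesis."""
    n = individual_z_cvs.shape[0]
    return individual_z_cvs.sum(axis=0) / np.sqrt(n)
```

The same logic underlies the later pooled-expressions CV, where the divisor is √*e* over expressions instead of √*n* over subjects.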

The group *z*-scored regression coefficients (black line) and the significance threshold (gray dotted line) are plotted along the orientation spectrum. These coefficients represent the strength of the correlation between orientation and performance. As expected, two significant peaks emerge near the −180° vertical axis (*Zmax* = 13.26) and the −90° horizontal axis (*Zmax* = 11.15). Neither the vertical, *t*(9) = 0.75, 95% CI [−3.66°, 1.83°], *p* > 0.05, nor the horizontal, *t*(9) = 0.04, 95% CI [−4.04°, 3.9°], *p* > 0.05, peak significantly differed from its respective reference value (−180°, −90°).

**Experiment 2: Orientation bubbles**

**Methods**

**Participants**

**Apparatus**

**Stimuli**

A mask was applied to the face in order to hide the facial contour and external features. Faces spanned 4.3° of visual angle horizontally (6.1° vertically). The spatial frequency spectra and luminance histograms of the images were equalized with the SHINE toolbox (Willenbockel, Sadr, et al., 2010) to minimize the influence of low-level variance across stimuli on observer responses, and thus better capture the contribution of internal representations.

**Procedure**

(*M* = 3.46, *SD* = 2.01), and then moved on to the experimental tasks.

**Results and discussion**

*SD* = 0.009) to respond correctly on 57.14% of trials, and the overall average response time on correct trials was 1,238 ms (*SD* = 212 ms). Performance (percent correct) varied considerably across facial expressions: anger (*M* = 54.9%, *SD* = 13.8%), sadness (*M* = 61.9%, *SD* = 12.7%), disgust (*M* = 57%, *SD* = 12.5%), fear (*M* = 53.8%, *SD* = 12.1%), happiness (*M* = 91.3%, *SD* = 4.5%), surprise (*M* = 66.5%, *SD* = 14.6%), and neutrality (*M* = 62.7%, *SD* = 15.9%). Response times (milliseconds) on correct trials also varied between facial expressions: anger (*M* = 1,359, *SD* = 334), sadness (*M* = 1,286, *SD* = 238), disgust (*M* = 1,358, *SD* = 215), fear (*M* = 1,569, *SD* = 336), happiness (*M* = 910, *SD* = 249), surprise (*M* = 1,299, *SD* = 281), and neutrality (*M* = 1,180, *SD* = 249).

As in Experiment 1, weighted sums of orientation sampling vectors were computed, using as weights accuracies transformed in *z* scores. The outcome was thus a series of 40 × 7 classification vectors (CVs). That is, for every subject, seven CVs (one per expression) were created. Every CV was then individually *z* scored with the mean and standard deviation of the null hypothesis, the parameters of which were estimated by simulating 100 CVs with random permutations of the *z*-scored accuracies from the appropriate subset of trials.

Group CVs were created by first summing the individual *z*-scored CVs within expression and across subjects, and then dividing the outcome by √*n*, where *n* is the number of subjects. To retrieve the diagnostic information for combined expressions, a pooled-expressions CV was created by first summing the above group CVs, and then dividing the outcome by √*e*, where *e* is the number of expressions. A two-tailed pixel test (Chauvin et al., 2005) was used to determine the statistical threshold (*Zcrit* = 2.49, *p* < 0.05).

Figure 4 shows the *z*-scored regression coefficients (red line) and the significance thresholds (gray dotted lines) along the orientation spectrum, for each individual expression and for combined expressions. Figure 4 also shows expressions revealed through their respective diagnostic filters (bottom images). As can be seen, information bundled around the −90° horizontal axis is diagnostic for anger (*Zmax* = 4.72), disgust (*Zmax* = 6.59), fear (*Zmax* = 3.9), happiness (*Zmax* = 2.85), sadness (*Zmax* = 6.41), neutrality (*Zmax* = 7.83), and pooled expressions (*Zmax* = 11.67), all *p*s < 0.05. The only exception is surprise, for which information at the −157.5° oblique-vertical axis is diagnostic (*Zmax* = 3.17, *p* < 0.05). In addition to information around the horizontal axis, information around the −180° vertical axis was also marginally diagnostic for the correct categorization of fear (*Zmax* = 1.62, *p* < 0.1). It thus appears that, overall, facial expression categorization is strongly supported by horizontal information.

Antidiagnostic information, i.e., information negatively correlated with accuracy, was also revealed for anger (*Zmin* = −2.52), disgust (*Zmin* = −3.35), fear (*Zmin* = −2.81), sadness (*Zmin* = −4.28), and neutrality (*Zmin* = −4.75), but not for happiness or surprise. Contrary to diagnostic information, which is largely bundled near the horizontal axis, antidiagnostic information is scattered along the rest of the orientation spectrum. Expressions revealed through their respective antidiagnostic filters can be observed in Figure 4 (top images). Antidiagnostic disgust looks like anger, and disgust was in fact miscategorized as anger on 18.6% of trials; antidiagnostic fear looks like surprise, and fear was in fact miscategorized as surprise on 19.3% of trials; finally, antidiagnostic sadness looks like disgust, and sadness was in fact miscategorized as disgust on 10.3% of trials. Although it is less obvious for antidiagnostic anger and neutrality, angry and neutral stimuli were miscategorized as sadness on 11.2% and 12.2% of trials, respectively.

The model observer's CVs were *z* scored using the exact same procedure as for the observer CVs. Even though this model observer is very efficient, it is not the ideal observer. We chose to implement this particular model to allow direct comparison with Blais et al. (2012) and Smith et al. (2005).

For the model observer, diagnostic information (*Zcrit* = 2.49, *p* < 0.05; two-tailed) was exclusively concentrated on the −90° horizontal axis for anger (*Zmax* = 7.6), sadness (*Zmax* = 9.07), disgust (*Zmax* = 7.52), fear (*Zmax* = 8.31), happiness (*Zmax* = 6.75), surprise (*Zmax* = 7.98), neutrality (*Zmax* = 8.55), and pooled expressions (*Zmax* = 21.04). As can be seen in Figure 4 (top right corners of orientation profiles), human strategies on average strongly correlated with the model profile (*M* = 0.74, *SD* = 0.44). The only notable difference was surprise, which negatively correlated with the available information (*r* = −0.34).

Although surprise was categorized correctly on most trials (*M* = 66.5% correct responses), it was confused with fear 19.23% of the time (vs. 14.27% for the combined remaining expressions). We next verified if and how orientation influenced response patterns on surprise-present trials. To answer this question, we performed two classification vector analyses. For the first analysis, we summed orientation sampling vectors on surprise-present trials, using "surprise" (correct) and "fear" (incorrect) responses as weights. The result is that horizontal information appears to have consistently led to "fear" responses (*Zmin* = −2.19, *p* < 0.1), whereas oblique information led to "surprise" responses (*Zmax* = 3.06, *p* < 0.05). For the second analysis, we summed orientation sampling vectors on surprise-present trials, using "surprise" and "other" (i.e., anger, disgust, happiness, neutrality, or sadness) responses as weights. Strikingly, horizontal information appears to have led to "surprise" responses in this instance (*Zmax* = 1.43, *p* < 0.1), but oblique information did not (*Zmax* = 1.16, *p* > 0.1). Thus, it appears that subjects were able to categorize surprise as such when using horizontal information, but they were also highly susceptible to incorrectly categorizing the expression as fearful. Ultimately, this results in a null correlation between horizontal information and performance when all surprise-present trials are taken into account (Figure 4).

*z* scores across the appropriate trials subset. The resulting classification vector is illustrated in Figure 4 (top rightmost graph, green line). As can be seen, a single peak emerged around the −90° horizontal axis. Thus, vertical information led to a similar probability of hits and false alarms, consistent with the hypothesis that this information creates a perceptual response bias toward fear.

Horizontal tuning was measured by applying a Gaussian weighting to the *z*-scored individual classification vectors for pooled expressions. The Gaussian was centered on the horizontal axis because our model observer revealed this to be the most information-rich orientation band, supporting findings in the face processing literature (e.g., Pachai, Sekuler, & Bennett, 2013). The sum of each resulting product vector was thus a weighted average of horizontal information utilization, giving maximal weight to regression coefficients that fell squarely on the horizontal axis, and gradually decreasing weight as coefficients fell further from this axis. We then correlated this measure of horizontal tuning with contrast sensitivity—the reciprocal of the contrast threshold—which is a direct measure of the amount of information that was needed to maintain 57.14% correct responses in the task. As can be seen in Figure 5, the two measures correlated strongly, *r* = 0.64, 95% CI [0.43, 0.8], *p* < 0.001. This closely parallels previous results, which have shown that facial identification ability is linked with horizontal tuning (Pachai, Sekuler, & Bennett, 2013).
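The horizontal-tuning index can be sketched as a Gaussian-weighted average of a pooled-expressions CV. The 20° standard deviation and the neglect of circular wrap-around are our simplifications, not values from the text:

```python
import numpy as np

def horizontal_tuning(z_cv, thetas_deg, center=-90.0, sigma=20.0):
    """Weighted average of z-scored CV coefficients, with maximal weight
    on the horizontal (-90 deg) axis and smoothly decreasing weight away
    from it (wrap-around at +/-180 deg is ignored for simplicity)."""
    w = np.exp(-0.5 * ((thetas_deg - center) / sigma) ** 2)
    return float(np.sum(w * z_cv) / np.sum(w))
```

A CV whose energy sits on the horizontal axis scores higher on this index than one with the same energy on the vertical axis, which is the property the correlation with contrast sensitivity relies on.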

**Experiment 3: Location bubbles**

**Participants, apparatus, and stimuli**: Same as in Experiment 2.

**Procedure**

**Results and discussion**

*SD* = 13.08) bubbles to respond correctly on 57.14% of trials, and the average response time on correct trials was 1,238 ms (*SD* = 261 ms). Performance (percent correct) varied across expressions: anger (*M* = 49.4%, *SD* = 11.4%), sadness (*M* = 63%, *SD* = 10.4%), disgust (*M* = 57.5%, *SD* = 12.3%), fear (*M* = 50%, *SD* = 11.3%), happiness (*M* = 85.5%, *SD* = 7%), surprise (*M* = 58.2%, *SD* = 13.4%), and neutrality (*M* = 62.7%, *SD* = 16.9%). Response times (milliseconds) on correct trials also varied considerably between expressions: anger (*M* = 1,360, *SD* = 336), sadness (*M* = 1,278, *SD* = 274), disgust (*M* = 1,353, *SD* = 281), fear (*M* = 1,586, *SD* = 416), happiness (*M* = 878, *SD* = 207), surprise (*M* = 1,336, *SD* = 356), and neutrality (*M* = 1,172, *SD* = 292).

Individual classification images were *z* scored with the mean and standard deviation of the null hypothesis (100 simulated classification images).

Group classification images were created by first summing the individual *z*-scored classification images within expression and across subjects, and then dividing the outcome by √*n*. For combined expressions, a pooled-expressions classification image was created by summing the above group classification images and dividing the outcome by √*e*. A pixel test (Chauvin et al., 2005) was used to determine the statistical threshold (*Zcrit* = 3.4, *p* < 0.05; two-tailed).

Diagnostic facial areas (*p* < 0.05) are displayed on grayscale face images. As can be seen, different facial features are linked with the categorization of the various facial expressions. For pooled expressions, both the eyes (*Zmax* = 6.39) and the mouth (*Zmax* = 10.9) significantly correlated with performance, and the difference between the two regions was marginally significant (*Zdif* = 3.19, *p* < 0.1). Thus, our results replicate the finding that the mouth is overall the most diagnostic area (Blais et al., 2012).

The model observer's classification images were *z* scored using the exact same procedure as for the observer classification images. Usable facial information varied across expressions and, on average, the Pearson correlation between human and model observer profiles was strong (*M* = 0.72, *SD* = 0.11). For pooled expressions, available information was concentrated around the eyes (*Zmax* = 8.17) and the mouth (*Zmax* = 9.48), and the difference between the two was nonsignificant (*Zdif* = 0.93).

Individual performance thresholds in the two experiments were strongly correlated, *r* = −0.71, 95% CI [−0.84, −0.51], *p* < 0.001. This suggests that our two task manipulations tapped into a common perceptual mechanism for categorizing facial expressions. We thereafter looked at the correlation between utilization of orientations and utilization of facial features.

For each subject, we computed the mean *z*-scored regression coefficient that fell within each region of interest (ROI) of the smooth classification images. These were extracted in the six following discrete ROIs (illustrated in Figure 9): the eyebrow junction, eyebrows, eyes, nose, nasolabial folds, and the mouth. We obtained a significant regression equation, *F*(6, 33) = 3.36, *p* < 0.05, with an *R*^{2} equal to 0.38. Interestingly, the eye region was the only significant predictor in this equation, *t*(39) = 3.8, *p* < 0.01 (all other features, *p*s > 0.2). More specifically, the correlation between eye diagnosticity and utilization of horizontal information was *r* = 0.54, 95% CI [0.27, 0.73], *p* < 0.001. Thus, it globally appears that individual differences in utilization of horizontal information are intimately linked with differences in utilization of the eye region.
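The feature-to-orientation link can be sketched as an ordinary least-squares regression of individual horizontal tuning on the six per-ROI mean coefficients. This is a generic OLS sketch with hypothetical data shapes, not the software used in the paper:

```python
import numpy as np

def roi_regression(tuning, roi_scores):
    """OLS of horizontal tuning (n subjects) on ROI diagnosticity
    (n x 6). Returns the coefficients (intercept first) and R^2."""
    X = np.column_stack([np.ones(len(tuning)), roi_scores])
    beta, *_ = np.linalg.lstsq(X, tuning, rcond=None)
    resid = tuning - X @ beta
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((tuning - tuning.mean()) ** 2)
    return beta, r2
```

If only one ROI (say, the eyes) drives tuning, its coefficient dominates the fit and *R*^{2} approaches 1, which mirrors the eyes-only-predictor pattern reported above.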

**General discussion**

*de facto* better aligned with the available information, suggesting that they were in fact more efficient. A possible explanation of our results is thus that this increase in horizontal processing was reflected in an increase in eye processing—the mouth being used by all observers irrespective of horizontal tuning.

**Acknowledgments**

**References**

*Behavior Research Methods*, 40 (3), 735–743.

*Visual Cognition*, 23 (6), 659–677.

*Frontiers in Psychology*, 6, 772.

*Emotion*, 17 (7), 1107–1119.

*Neuropsychologia*, 50 (12), 2830–2838.

*Spatial Vision*, 10 (4), 433–436.

*Attention, Perception and Psychophysics*, 72 (6), 1444–1449.

*Journal of Cognitive Neuroscience*, 17 (10), 1652–1666.

*Psychological Research*, 78 (2), 180–195.

*Journal of Vision*, 5 (9): 1, 659–667, doi:10.1167/5.9.1. [PubMed] [Article]

*Journal of Vision*, 9 (4): 2, 1–10, doi:10.1167/9.4.2. [PubMed] [Article]

*Spatial Vision*. New York: Oxford University Press.

*Annual Review of Vision Science*, 1, 393–416.

*Genetic Psychology Monographs*, 2 (3), 199–233.

*Journal of Vision*, 2 (1): 1, doi:10.1167/2.1.i. [PubMed] [Article]

*The Oxford handbook of face perception*(pp. 329–344). New York: Oxford University Press.

*Emotion*, 11 (4), 860–865.

*Unmasking the face*. Englewood Cliffs, NJ: Prentice Hall.

*Social Cognitive and Affective Neuroscience*, 12 (8), 1334–1341.

*Vision Research*, 48, 2817–2826.

*Frontiers in Psychology*, 1, 143.

*Scientific Reports*, 6, 34204, doi: 10.1038/srep34204.

*Neuropsychologia*, 81, 1–17, doi: 10.1016/j.neuropsychologia.2015.12.004.

*Journal of Vision*, 11 (10): 1, 1–9, doi:10.1167/11.10.1. [PubMed] [Article]

*Vision Research*, 39 (21), 3537–3560.

*Vision Research*, 41 (17), 2261–2271.

*Cognitive Science*, 28 (2), 141–146.

*Trends in Cognitive Sciences*, 4 (6), 223–233.

*Attention, Perception, & Psychophysics*, 76 (5), 1381–1392.

*The face of emotion*. New York, NY: Appleton-Century-Crofts.

*Current Biology*, 24 (2), 187–192.

*Current Biology*, 25 (14), R621–R634.

*Journal of Experimental Psychology: General*, 145 (6), 708–730, doi: 10.1037/xge0000162.

*Journal of Vision*, 14 (2): 5, 1–18, doi:10.1167/14.2.5. [PubMed] [Article]

*The Journal of Neuroscience*, 17 (11), 4302–4311.

*PLoS Computational Biology*, 5 (3), e1000329.

*Vision Research*, 39 (23), 3824–3833.

*Frontiers in Psychology*, 4, 74, doi:10.3389/fpsyg.2013.00074.

*Journal of Vision*, 17 (6): 5, 1–11, doi:10.1167/17.6.5. [PubMed] [Article]

*Spatial Vision*, 10 (4), 437–442.

*Proceedings of the National Academy of Sciences, USA*, 109 (48), E3314–E3323.

*Trends in Cognitive Sciences*, 18 (6), 310–318.

*Journal of Vision*, 14 (13): 7, 1–24, doi:10.1167/14.13.7. [PubMed] [Article]

*The Journal of Pain*, 14, 1475–1484.

*Journal of Experimental Psychology: Human Perception and Performance*, 41 (5), 1179–1183

*Journal of Vision*, 16 (12): 77, doi:10.1167/16.12.77. [Abstract]

*Psychological Research*, 81 (1), 13–23, doi: 10.1007/s00426-015-0740-3.

*Human Brain Mapping*, 31 (10), 1490–1501.

*Psychological Science*, 13 (5), 402–409.

*Neuroreport*, 14 (13), 1665–1669.

*Current Biology*, 17 (18), 1580–1585.

*Current Biology*, 14 (5), 391–396.

*NeuroImage*, 40 (4), 1643–1654.

*Psychological Science*, 20 (10), 1202–1208.

*Psychological Science*, 16 (3), 184–189.

*Psychological Science*, 15 (11), 753–761.

*Emotion*, 14 (3), 478–487.

*Journal of Vision*, 13 (1): 4, 1–12, doi:10.1167/13.1.4. [PubMed] [Article]

*Vision Research*, 51 (12), 1318–1323.

*Perception & Psychophysics*, 33 (2), 113–120.

*Journal of Experimental Psychology: Human Perception and Performance*, 36 (1), 122–135.

*Behavior Research Methods*, 42 (3), 671–684.

*Current Biology*, 13 (20), 1824–1829.

*Journal of Experimental Psychology*, 81 (1), 141–145.