Research Article | October 2009
Portraits made to measure: Manipulating social judgments about individuals with a statistical face model
Mirella Walker, Thomas Vetter
Journal of Vision October 2009, Vol. 9(11):12. https://doi.org/10.1167/9.11.12
Abstract

The social judgments people make on the basis of the facial appearance of strangers strongly affect their behavior in different contexts. However, almost nothing is known about the physical information underlying these judgments. In this article, we present a new technology (a) to quantify the information in faces that is used for social judgments and (b) to manipulate the image of a human face in a way that is almost imperceptible yet changes the personality traits ascribed to the depicted person. This method was developed in a high-dimensional face space by identifying vectors that capture maximum variability in judgments of personality traits. Our method of manipulating the salience of these vectors in faces was successfully transferred to novel photographs from an independent database. We evaluated this method by showing pairs of face photographs which differed only in the salience of one of six personality traits. Subjects were asked to decide which face was more extreme with respect to the trait in question. Results show that the image manipulation produced the intended attribution effect. All response accuracies were significantly above chance level. This approach to understanding and manipulating how a person is socially perceived could be useful in psychological research and could also be applied in advertising or the film industry.

Introduction
The facial appearance of individuals plays an important role in the social world. People readily attribute personality traits to unfamiliar individuals on the basis of their facial appearance (Bruce & Young, 1986). These inferences affect face memory (Bower & Karlin, 1974) and have a strong impact on social behavior in different domains. Inferences of competence made from portraits of political candidates after a 1-second exposure even predicted the outcome of elections (Todorov, Mandisodza, Goren, & Hall, 2005). The leadership ability perceived in the faces of CEOs was found to correlate with their companies' profits (Rule & Ambady, 2008). The likeability perceived in a face influences how well the face is recognized later, when it is presented together with other faces. In the applied context of eyewitness testimony, this human willingness to ascribe traits on the basis of individuals' facial appearance can have far-reaching consequences (Mueller, Heesacker, & Ross, 1984). The fact that people can make personality trait judgments from faces in a brief instant supports the hypothesis that trait inferences from faces are made via fast, unreflective judgment mechanisms (Chaiken & Trope, 1999).
Surprisingly little is known about the physical information underlying these social judgments. While some research has been done on how facial characteristics are related to perceptions of attractiveness (Potter & Corneille, 2008) and to categorization by gender (Huart, Corneille, & Becquart, 2005), ethnicity (Corneille, Huart, Becquart, & Brédart, 2004; MacLin & Malpass, 2001), and emotional states (Etcoff & Magee, 1992; Halberstadt & Niedenthal, 2001), the way people form personality trait judgments from faces has not yet been studied intensively. 
A first approach to investigating the functional basis of trait judgments from facial appearance has been undertaken recently (Oosterhof & Todorov, 2008; Todorov, Said, Engell, & Oosterhof, 2008). In a first step, the authors reduced the large variety of trait judgments that are spontaneously used to describe persons on the basis of their facial appearance to two dimensions that capture most of the variance of social judgments: valence and dominance. Subsequently, statistical models were developed to represent these dimensions. This was done by collecting judgments of valence and dominance for 300 randomly generated faces that were based on 3D laser scans of faces. Mean values for every face and dimension allowed vectors to be described in the face space that represent these dimensions and can be used to manipulate them in faces of the underlying database. This work is impressive because of its successful transformation of social judgments into a statistical face space. A disadvantage of the method is that the face space accounts only for variation in shape among faces, not in reflectance. Behavioral studies have shown, however, that reflectance information plays an important role in face perception and recognition tasks (see e.g., Hill, Bruce, & Akamatsu, 1995; O'Toole, Vetter, & Blanz, 1999; Yip & Sinha, 2002). In addition, the method of manipulating the two social dimensions was applied to randomly generated faces without facial hair or other cues such as make-up or accessories. The resulting faces therefore have a somewhat synthetic appearance.
A perception model based on real faces
In this article, we present a new approach to understanding which information in faces is used to make social judgments. This approach is based on a complex face space derived from a three-dimensional statistical model of 200 laser scans of real faces (O'Toole, Vetter, Troje, & Bülthoff, 1997; Troje & Bülthoff, 1998). Our aim is to build a social perception model of the 3D shape and reflectance of faces that enables us (a) to quantify and manipulate the way persons are socially perceived along six dimensions and (b) to apply this method to novel 2D photographs of faces with realistic-looking results.
The strategy we apply to measure the physical information underlying the perception of these social dimensions is simple and works backward from perception. We begin with a sample of real faces and gather judgments of various personality traits. Similar to earlier approaches to modeling facial attributes (Blanz & Vetter, 1999), we project these social dimensions into our face space in order to identify directions that capture these personality traits. To generate novel face images, we fit our face model to a face in a photograph, which results in a three-dimensional representation of the head. We then add the vectors relating to the different social dimensions and render the resulting head back into its original context (Blanz, Scherbaum, Vetter, & Seidel, 2004) (Figure 1). Further technical details are described in the Implemented face space section.
Figure 1

By enriching our morphable face model, which is based on 200 3D face scans, with personality trait judgments, we can locate the directions of maximum variability with respect to these traits. Any photograph of any human face can be analyzed by fitting our model to the face image. The resulting 3D head can be manipulated by adding or subtracting personality trait vectors and rendered back into the original photograph.
Experiment 1
Method
Participants
Five hundred thirty-eight subjects participated in this experiment. Six of them were randomly selected to win a CD. Two hundred thirty-five subjects participated in Experiment 1A; their ages ranged from 17 to 62 years, with an average of 25.2 years. Three hundred three subjects participated in Experiment 1B; their average age was 24.6 years, with ages ranging from 17 to 61 years.
Stimuli
In Studies 1A and 1B, we used images of faces without any additional information such as hair, beard, jewelry, etc. The images were produced by rendering information from 100 male and 100 female three-dimensional registered laser scans of faces. All faces were displayed in color, frontal view, with the same lighting and with neutral facial expressions. 
Procedure
Studies 1A and 1B were conducted via the Internet. Each subject was randomly assigned to one of eight conditions, each containing 25 of the 200 face stimuli.
In Experiment 1A, subjects were welcomed on the first Web page and were told that the study was about impression formation. They were asked to look at the images and to answer the questions quickly. On the second page they were shown the first face image and were asked to judge the person depicted according to aggressiveness, attractiveness, extroversion, likeability, risk seeking, social skills, and trustworthiness on a 7-point unipolar Likert scale. On the next page they were asked for ratings of masculinity/femininity of the respective person on a scale of the same type. These steps were then repeated with the other 24 face images. The faces were presented in random order. 
In Experiment 1B, subjects were welcomed on the first page and were told that the study was about gender categorization. They were asked to look at the images and to answer the questions rapidly. On the second page they were shown the first face image and were asked to decide whether the person was a man or a woman on a 5-point Likert scale. This was repeated with the other 24 face images. The images were presented in random order. 
On the last page subjects were thanked and were asked for their e-mail addresses in order to take part in the lottery. 
Analysis and results
Results show that subjects were willing to make the social judgments required, choosing the response option “I do not know” in only 2% of all cases. 
A mean was computed for each face and attribute. We used a principal components analysis (PCA) to find out whether subjects formed a global judgment reflected in the finer-grained personality trait judgments or whether they judged the different personality traits independently. The six personality traits (aggressiveness, extroversion, likeability, risk seeking, social skills, and trustworthiness) and three potential global factors (attractiveness, masculinity/femininity, and gender) were subjected to PCA. The suitability of the data for factor analysis was assessed before performing the PCA. Inspection of the correlation matrix revealed the presence of many coefficients of .3 and above. The Kaiser–Meyer–Olkin value was .78, thus exceeding the recommended value of .6 (Kaiser, 1970; Kaiser & Rice, 1974), and Bartlett's test of sphericity (Bartlett, 1954) reached statistical significance, supporting the factorability of the correlation matrix, χ2(36) = 2217.41, p < .001.
The PCA revealed the presence of three components with eigenvalues exceeding 1, which together explained 90.11% of the variance. Components 1, 2, and 3 contributed 52.1%, 24.6%, and 13.5% of the variance, respectively. An inspection of the scree plot (Cattell, 1966) revealed no clear break. The three-component solution was supported by the results of a parallel analysis, which showed three components with eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of the same size (9 × 200).
The component matrix shows that most of the items load quite strongly (above .4) on Components 1 and 2, whereas only two items load quite strongly on Component 3. The pattern matrix from the oblimin rotation reveals seven item loadings above .3 on Component 1, four item loadings above .3 on Component 2, and only two item loadings above .3 on Component 3. Therefore, a two-factor solution seemed appropriate.
The two-component solution explained a total of 77% of the variance, with Component 1 contributing 52% and Component 2 contributing 25%. To facilitate the interpretation of these two components, an oblimin rotation was performed. Both components showed a number of strong loadings, but certain variables loaded substantially on both components. There was a weak negative correlation between the two factors (r = −.19).
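For readers who wish to reproduce this type of analysis, the following Python sketch shows one way to run the factorability checks and the two-component oblimin solution. It is an illustration only, not the authors' code; the use of the factor_analyzer package and the random placeholder data are our assumptions.

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity, calculate_kmo)

    # Placeholder for the real data: 200 faces x 9 items (mean judgments per face).
    rng = np.random.default_rng(0)
    items = ["S", "L", "At", "E", "T", "R", "MF", "Ag", "G"]
    ratings = pd.DataFrame(rng.normal(size=(200, 9)), columns=items)

    # Factorability checks reported in the text.
    chi2, p = calculate_bartlett_sphericity(ratings)   # Bartlett's test of sphericity
    kmo_per_item, kmo_total = calculate_kmo(ratings)   # Kaiser-Meyer-Olkin value

    # Two-component principal-components solution with oblimin rotation (cf. Table 1).
    fa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="principal")
    fa.fit(ratings)
    pattern = fa.loadings_            # pattern matrix
    structure = fa.structure_         # structure matrix (defined for oblique rotations)
    communalities = fa.get_communalities()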
Inspection of the pattern matrix (Table 1) revealed that every item loaded strongly on one of the two factors, but three items loaded on both components: Extroversion had a very high loading on Component 1 (.77) and Component 2 (.51), trustworthiness had a very high loading on Component 1 (.76) and a high negative loading on Component 2 (−.44), and aggressiveness had a high loading on Component 2 (.76) and a high negative loading on Component 1 (−.36). Inspection of the structure matrix (Table 1) revealed five items with high loadings on both components: social skills (.92 on Component 1 and −.43 on Component 2), likeability (.92 on Component 1 and −.45 on Component 2), extroversion (.67 on Component 1 and .36 on Component 2), trustworthiness (.85 on Component 1 and −.59 on Component 2), and aggressiveness (−.50 on Component 1 and .82 on Component 2).
Table 1
 
Pattern and structure matrix for PCA with oblimin rotation of the two-factor solution for the items social skills (S), likeability (L), attractiveness (At), extroversion (E), trustworthiness (T), risk seeking (R), masculinity/femininity (MF), aggressiveness (Ag), and gender (G).
Item   Component 1 (social skills)   Component 2 (risk seeking)   Communalities
       Pattern    Structure          Pattern    Structure
S       .871       .920              −.262      −.427             .913
L       .863       .918              −.290      −.454             .924
At      .817       .801               .086      −.069             .649
E       .766       .670               .507       .361             .697
T       .764       .848              −.443      −.588             .907
R       .133      −.040               .912       .887             .803
MF      .065       .216              −.799      −.811             .662
Ag     −.356      −.499               .755       .823             .800
G       .050       .188              −.726      −.735             .543

Note: Major loadings for each item are in boldface.

Discussion
Results from the two-factor PCA show that attractiveness, one of the potential global factors for the finer-grained personality trait judgments, loads quite strongly on the first component, whereas the other two, masculinity/femininity and gender, load quite strongly on the second component. Inspection of the communalities column in Table 1 reveals that the potential global factors attractiveness, masculinity/femininity, and gender share only 54–66% of the total variance, whereas the six personality traits share 70–92% of the total variance. It can be concluded, therefore, that the potential global factors do not play an important role in the judgment process. The finer-grained personality trait judgments cannot be explained by a spontaneous global impression formed on the basis of either the attractiveness (beauty-is-good stereotype, see e.g., Eagly, Ashmore, Makhijani, & Longo, 1991), masculinity/femininity, or gender (e.g., Hoffman & Hurst, 1990) of the individual. Subjects make differentiated social judgments about individuals, even if the only information available is facial appearance. An explanation for the negligible role of the faces' gender and for the overall lack of gender stereotyping could be that subjects might not have categorized the faces by gender. Subjects rating the personality traits were not explicitly asked to make gender categorizations, and the absence of extra-facial features may be the reason why they did not do so spontaneously. Even though adult subjects are able to categorize adult faces by gender without non-facial gender cues (Cheng, O'Toole, & Abdi, 2001; Rossion, 2002; Wild et al., 2000), there is evidence that they do not categorize if they are not explicitly asked to do so (Martin & Macrae, 2007). Other candidates for global impressions such as pose, facial expression, or gaze direction were held constant in our face database and thus did not contribute variance to our data.
Implemented face space
Our method of manipulating face photographs so that the persons depicted are perceived as possessing different personality traits to a lesser or greater degree consists mainly of two techniques: a technique to quantify and model the perception of social dimensions in faces and an image processing technique to apply and transfer this modeling to novel photographs of faces.
Face modeling
The 3D morphable face model (Blanz & Vetter, 1999) was built on the basis of laser scans (Cyberware™) of 100 male and 100 female young adults' faces. These faces did not show any make-up, jewelry, or facial hair. Shape and reflectance were coded separately, resulting in approximately 70,000 vertices and approximately 70,000 color values per face.
To construct a face space, all faces were set in full correspondence with an average face, so that every face can be described in terms of its deviation from this average (Blanz & Vetter, 1999). A PCA was performed to estimate the statistics of head shape and reflectance of the given set of faces. This method allows a wide range of new faces to be synthesized by forming linear combinations of the faces on which the face space is built.
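As a rough illustration of this construction (a minimal sketch, not the original implementation; the array layout and names are assumptions), PCA over the stacked shape or reflectance vectors could look as follows in Python:

    import numpy as np

    def build_space(X, n_components):
        """PCA 'face space' over faces coded as deviations from the average.

        X: (n_faces, n_dims) array, one registered shape or reflectance vector per row.
        Returns the average face, the principal axes, and each face's coefficients.
        """
        mean = X.mean(axis=0)
        Xc = X - mean                              # deviation from the average face
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        axes = Vt[:n_components]                   # principal components
        coeffs = Xc @ axes.T                       # coordinates of each face
        return mean, axes, coeffs

    # A new face is a linear combination of the originals, synthesized as:
    # new_face = mean + new_coeffs @ axes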
This face space also allows for manipulating certain facial attributes such as gender or the fullness of the face. Every face in a set can be labeled with respect to an attribute (e.g., male/female). Weighted sums can be computed separately for shape and reflectance:  
$$\Delta S = \sum_{i=1}^{m} \mu_i \,(S_i - \bar{S}), \qquad \Delta R = \sum_{i=1}^{m} \mu_i \,(R_i - \bar{R}). \tag{1}$$
Here, μ_i is the attribute label of the i-th face, S̄ and R̄ denote the average shape and reflectance, and m is the number of faces. Any individual face can then be manipulated with respect to this attribute by adding or subtracting multiples of (ΔS, ΔR) (Blanz & Vetter, 1999). In the present investigation, we used this method to model the perception of personality traits on the basis of faces. First, the mean scores for all six personality traits collected in Experiment 1 (aggressiveness, extroversion, likeability, risk seeking, social skills, and trustworthiness) were rescaled from unipolar scales ranging from 1 to 7 to bipolar scales ranging from −1 to 1.
We then added the social information collected to every face representation in face space. In order to obtain the (Δ S, Δ R), we computed a regression analysis individually for each trait. This procedure located directions of the perception of different personality traits in the multidimensional face space with maximum variability relative to these traits (Blanz, 2000; Blanz & Vetter, 1999). These directions can be represented as vectors and can be used to manipulate the perception of personality traits in faces from the database they were built on. Since our aim is to transfer this manipulation of perception to any novel 2D face photograph, an image processing technique is also necessary. 
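Equation 1 can be implemented directly; a minimal sketch (our illustration, with the 1-to-7 rescaling made explicit) is given below. The per-trait regression step described above would refine this plain weighted sum; only Equation 1 is shown here.

    import numpy as np

    def trait_vector(faces, mean_ratings):
        """Weighted sum of Equation 1 for one trait.

        faces:        (m, d) array of shape vectors S_i (or reflectance vectors R_i).
        mean_ratings: (m,) mean trait judgments on the unipolar 1-7 scale.
        """
        mu = (mean_ratings - 4.0) / 3.0       # rescale 1..7 to the bipolar range -1..1
        mean_face = faces.mean(axis=0)        # the average face
        return (mu[:, None] * (faces - mean_face)).sum(axis=0)

    # Adding or subtracting multiples of the vector shifts the perceived trait:
    # manipulated = face + 0.125 * delta      # e.g., 1/8 of the vector, as in Experiment 2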
Image processing
In this section, we will describe the transfer of information represented in our 3D face space model to the 2D photograph of an arbitrary person. In a first step, we reconstruct the 3D shape and surface reflectance of the head depicted in the target 2D photograph utilizing the analysis-by-synthesis approach (Blanz & Vetter, 1999). Along with a set of rendering parameters, the 3D model coefficients are optimized so that they produce an image with maximum resemblance to the input image. This results in an estimation of the three-dimensional shape structure and the reflectance map (a so-called “texture map”) of the head from the 2D input photograph. Having mapped the face from the 2D photograph into the 3D face space, we are able to manipulate shape and reflectance separately, as described above, by adding and subtracting the personality trait vectors to the head model. In the final step, the slightly manipulated 3D head is rendered back into the original photograph using the rendering parameters estimated earlier (Blanz et al., 2004). This method uses state-of-the-art blending technologies so that no lines are visible between the novel, manipulated part of the face and the remaining part of the 2D image. The results are completely natural-looking photographs without any visible manipulation artifacts. The perceptual impact of the manipulations was investigated in our second experiment. 
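The whole transfer pipeline can be summarized as the following skeleton. All function names here (fit_morphable_model, render_into) are hypothetical placeholders for the fitting and rendering machinery cited above, not a real API:

    def manipulate_photograph(photo, model, delta_shape, delta_refl, weight):
        # 1. Analysis by synthesis: optimize model coefficients and rendering
        #    parameters until the synthesized image matches the input photograph.
        coeffs, rendering_params = fit_morphable_model(model, photo)  # hypothetical

        # 2. Manipulate shape and reflectance separately in face space.
        coeffs.shape += weight * delta_shape
        coeffs.reflectance += weight * delta_refl

        # 3. Render the manipulated 3D head back into the original photograph,
        #    reusing the estimated pose, lighting, and camera parameters.
        return render_into(photo, model, coeffs, rendering_params)   # hypothetical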
Experiment 2
Method
Participants
One hundred six subjects participated in this experiment. Three of them were randomly selected to win a CD. The average age was 25.3 with values ranging from 17 to 73 years. 
Stimuli
The images from Experiment 1 were used to obtain the first set of stimuli. Three male and three female faces were randomly selected. Twelve new versions were created for each face by applying 1/8 of the total length of the six personality trait vectors in the positive and the negative direction. The criterion for vector length was that the changes should be minimal but have the potential to affect social judgments. The corresponding shape and reflectance vectors were always applied in combination. This resulted in 36 pairs of face images. 
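To make the factorial structure explicit (six traits, two directions, 1/8 of the vector length), the twelve versions of one face could be generated as in this sketch; the data layout is our assumption:

    def make_versions(face, trait_vectors, fraction=0.125):
        """Twelve versions of one face: each trait vector applied in both directions.

        face:          dict with 'shape' and 'refl' coefficient arrays.
        trait_vectors: dict mapping trait name -> (delta_shape, delta_refl).
        """
        versions = {}
        for trait, (d_shape, d_refl) in trait_vectors.items():
            for sign in (+1, -1):
                versions[(trait, sign)] = {
                    "shape": face["shape"] + sign * fraction * d_shape,
                    "refl": face["refl"] + sign * fraction * d_refl,
                }
        return versions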
The second set of stimuli used color photographs from the Feret database (Phillips, Wechsler, Huang, & Rauss, 1998). Unlike the faces used to generate the vectors, these faces include extra-facial cues such as hairstyle, clothes, and jewelry and therefore have higher ecological validity. Three female and three male face images with neutral expression wearing no glasses, beards, or mustaches were randomly selected from all Caucasian database faces. Firstly, these images were pre-processed in order to render the background color of all six images comparable. They were then analyzed to obtain a 3D representation, which was manipulated in the same way as the faces from our own database. The resulting faces were rendered back into the original photographs. Figure 2 shows one of the six independent database faces and its variations resulting from manipulation in a positive direction. 
Figure 2
 
Examples of stimuli used to evaluate the six different personality trait vectors. By fitting our three-dimensional face model to an image from an independent database, we obtained a three-dimensional shape and a reflectance representation of the face. It could then be manipulated by adding different personality trait vectors and the resulting faces could be rendered back into the original photographs. Data concerning the six attributes were collected on 200 faces from another database and projected into our face space. A regression analysis revealed directions with maximum variability with respect to these traits.
Procedure
This experiment was conducted via the Internet. Subjects were randomly assigned to one of four conditions; stimulus type was a between-subjects factor. On each Web page, the subjects were shown one face pair derived from one identical face. One version was manipulated slightly towards a higher, the other towards a lesser degree of the same attribute. Conditions with the same type of stimulus differed only in the arrangement of the face pairs, which was counterbalanced. Six sets of six face pairs were presented in random order (attribute was a within-subjects factor). Each set consisted of one attribute manipulated in all of the six different original faces. These six faces appeared in random order in each block. Subjects were first welcomed and told that the study was about impression formation and that they would see pairs of similar faces. They were asked to answer a number of questions. They were then shown the first face pair and were asked questions of the following form: “Which face looks more aggressive?” “Which face looks more likeable?” Subjects could select one of three answers, e.g., “The face on the left looks more aggressive,” “The face on the right looks more aggressive,” or “I cannot decide which face looks more aggressive.” On the next page, the second face pair from a total of 36 pairs was presented. On the last page, subjects were thanked and asked for their e-mail addresses in order to take part in the lottery.
Analyses and results
The two types of stimuli were evaluated separately. Subjects chose the answer “I cannot decide which face looks more…” in only 6% of all cases for faces without extra-facial features and in 8% of all cases for faces with extra-facial features. 
To test our main hypothesis, which states that our method of manipulating faces can shift the social perception of a person, we calculated the percentage of correct ratings for every trait and subject. The evaluation results for the faces on which the vectors were built indicated the success of the graphic manipulation: the ratings were in agreement with the intended image manipulation in 87% of cases for aggressiveness, 92% for extroversion, 100% for likeability, 61% for risk seeking, 88% for social skills, and 97% for trustworthiness. All accuracies were significantly above chance level, as determined by six one-sample t-tests against a hypothetical mean, t_min(47) = 2.60, p_max = .006 (one-tailed; Figure 3A). Alpha levels were corrected for multiple tests (Jaccard & Wan, 1996). The eta-squared statistic (.13 to >.99) indicated moderate to large effect sizes.
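A minimal sketch of this accuracy test (our illustration; the chance level of .5 and the Bonferroni-style alpha correction are assumptions about the exact procedure):

    import numpy as np
    from scipy import stats

    def above_chance(accuracies, chance=0.5, n_tests=6, alpha=0.05):
        """One-tailed one-sample t-test of per-subject accuracies against chance."""
        t, p_two_sided = stats.ttest_1samp(accuracies, chance)
        p = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2   # one-tailed p-value
        return t, p, p < alpha / n_tests                        # corrected alpha

    # Example with fabricated per-subject accuracies for one trait:
    rng = np.random.default_rng(0)
    print(above_chance(rng.uniform(0.7, 1.0, size=48)))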
Figure 3

Accuracy in the perception of manipulated personality traits in face pairs (A) for faces the vectors were built on and (B) for faces from an independent database, as a function of personality trait. Subjects had three response options: “The left face looks more…,” “The right face looks more…,” and “I cannot decide which face looks more….” Bars represent 95% confidence intervals.
Evaluation results for the faces from the independent Feret database, which contained extra-facial cues and thus looked more natural, also showed that the image manipulation produced the intended effect on social attribution. The ratings agreed with the intended image manipulation in 82% of cases for aggressiveness, 91% for extroversion, 84% for likeability, 73% for risk seeking, 79% for social skills, and 86% for trustworthiness. All accuracies were significantly above chance level, as determined by six one-sample t-tests against a hypothetical mean, t_min(57) = 7.44, p_max < .001 (Figure 3B). Alpha levels were corrected for multiple tests (Jaccard & Wan, 1996). The eta-squared statistic (.49 to .92) indicated large effect sizes.
A mixed between–within-subjects analysis of variance assessing the impact of stimulus type and type of personality trait on judgment accuracy could not be conducted, since the assumption of homogeneity of intercorrelations was violated, F(21, 36888) = 6.37, p < .001.
Discussion
Results show that our method of quantifying the facial information used for social judgments and of subtly manipulating this information was successful, both for faces from the database the vectors were built on and for faces from an independent database. Subjects were able to identify the face image that had been transformed in the respective direction significantly above chance level in all experimental conditions.
Since the distractor faces were transformed in exactly the opposite direction, one could argue that we still lack evidence that every vector best triggers judgments of the corresponding personality trait. Experiment 3 is aimed at comparing the directions of the different personality trait vectors and testing whether the six vectors are precise enough to trigger the corresponding social judgments.
Experiment 3
Introduction
There is evidence that the different personality trait vectors are not totally independent of each other (Oosterhof & Todorov, 2008). This experiment is aimed at investigating whether the different personality trait vectors dissociate in face space, so that every vector influences the perception of the corresponding personality trait most strongly. This was tested in a two-alternative forced choice task with pairs of faces where different personality traits were enhanced. Subjects had to decide which of the two faces looked more extreme regarding one of the two dimensions manipulated in the two stimuli. 
First, however, we compare the correlations between the six vectors in face space with the correlations between the six personality trait judgments in order to see to what degree the physical and the psychological face spaces are related. Since the raw face data have to be organized in order to build a face space, certain assumptions have been made. In the present case, a correspondence algorithm mainly based on optic flow computations was used to register the faces in face space, relying on the curvature of the face shape and on texture. A comparison of the correlations between personality trait vectors and personality trait judgments helps to answer the question of whether this correspondence algorithm is sensitive to the same features as the human visual system.
Results show that the shape and reflectance vectors are correlated to different degrees for the six personality traits. The spectrum varies from weakly correlated dimensions (e.g., aggressiveness and extroversion) to highly correlated ones (e.g., social skills, trustworthiness, and likeability) (Table 2). The maximum difference between two corresponding cells in the correlation matrices for shape and reflectance vectors is E_max = .07.
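The comparison of the two correlation matrices can be expressed compactly; a sketch (our illustration, assuming the trait vectors are stored as rows of an array):

    import numpy as np

    def corr(vectors):
        """Pearson correlations between trait vectors stored as rows."""
        return np.corrcoef(np.asarray(vectors))

    def max_cell_difference(c1, c2):
        """Largest absolute difference between corresponding cells (E_max in the text)."""
        return np.max(np.abs(np.asarray(c1) - np.asarray(c2)))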
Table 2
 
(A) Correlations between the aggressiveness (A), extroversion (E), likeability (L), risk seeking (R), social skills (S), and trustworthiness (T) shape vector; and (B) correlations between the six different reflectance vectors.
        A      E      L      R      S      T
(A) Shape vectors
A       1     .11   −.77    .84   −.72   −.84
E              1     .34    .51    .37    .24
L                     1    −.44    .94    .94
R                            1    −.40   −.56
S                                   1     .92
T                                          1
(B) Reflectance vectors
A       1     .04   −.75    .81   −.67   −.82
E              1     .40    .47    .44    .26
L                     1    −.40    .93    .93
R                            1    −.33   −.63
S                                   1     .91
T                                          1
These correlations are reflected in the correlations between the different social judgments (Table 3). The maximum difference between two corresponding cells in the correlation matrices for vectors and social judgments is E_max = .11 for the shape vectors and E_max = .16 for the reflectance vectors. This indicates a high correspondence between the psychological and the physical face space, which means that these face spaces are organized on the basis of similar features.
Table 3
 
Correlations between the aggressiveness (A), extroversion (E), likeability (L), risk seeking (R), social skills (S), and trustworthiness (T) judgments from Experiment 1.
        A      E      L      R      S      T
A       1     .06   −.73    .81   −.66   −.78
E              1     .41    .49    .45    .30
L                     1    −.36    .93    .93
R                            1    −.29   −.47
S                                   1     .90
T                                          1
It is not surprising that some correlations between perceived personality traits are very high while others are not: It is hard to imagine that people find somebody likeable, but not trustworthy, or vice versa, whereas it is quite easy to imagine two extroverted persons, one of which appears much more trustworthy than the other. 
To better visualize the directions of the different vectors in face space, we compared the original face image from Figure 2 with each of the six manipulated images (Figure 4).
Figure 4

Differences between an original face photograph (00498_960627_fa.png) and the six images slightly manipulated in the positive direction of each personality trait vector used in the experiment. Images are compared per pixel: the darker a pixel, the larger the difference between the two face images at that position.
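The per-pixel comparison in Figure 4 is straightforward to reproduce; a sketch using PIL and NumPy (the inverted gray mapping, dark = large difference, follows the caption):

    import numpy as np
    from PIL import Image

    def difference_image(original_path, manipulated_path):
        """Per-pixel difference between two face images; darker = larger difference."""
        a = np.asarray(Image.open(original_path).convert("RGB"), dtype=float)
        b = np.asarray(Image.open(manipulated_path).convert("RGB"), dtype=float)
        diff = np.abs(a - b).mean(axis=2)                  # average over color channels
        scaled = 255 - 255 * diff / max(diff.max(), 1e-9)  # invert: dark marks change
        return Image.fromarray(scaled.astype(np.uint8))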
Two main points can be stated: (a) different regions in the face are relevant to different social judgments (e.g., the mouth to social skills, the eyebrows to extroversion), and (b) the shape of features (e.g., the corners of the mouth in extroversion) as well as their configuration (e.g., the position of the mouth in aggressiveness) are responsible for the perception of different personality traits. Although the differences between two images belonging to highly correlated dimensions are harder to describe than the differences between images belonging to weakly correlated ones, it is still possible to perceive them. Two highly correlated dimensions are, e.g., social skills and likeability. Comparing the corresponding images, one can see that whereas the corners of the mouth and the eye region change in a similar way when these two vectors are applied to a face, the social skills vector induces more change in the cheek and forehead regions and moves the upper lip contour, while the likeability vector causes more changes in the nose region.
In order to better visualize the different impacts of the shape and reflectance vectors on the face manipulations, we produced faces that were more exaggerated than the ones we had used in our experiments (Figure 5). From these faces, a third conclusion can be drawn: (c) Reflectance also plays a role in the perception of different personality traits (e.g., darker pixels in the eye and chin region in the extroverted than in the aggressive version, rosier cheeks in the trustworthy than in the likeable version).
Figure 5

These faces were manipulated by adding 1/4 of the total length of each personality trait vector.
Since some of the vectors are highly correlated, we conducted Experiment 3 to determine how difficult it is to distinguish the corresponding dimensions. 
Method
Participants
Fifty-nine subjects participated in this experiment. Three of them were randomly selected to win a CD. The average age was 26 years, with ages ranging from 17 to 51 years.
Stimuli
One stimulus person from the Feret database used in Experiment 2 was selected for this experiment.
Procedure
Experiment 3 was conducted via the Internet. Each subject was randomly assigned to one of two conditions differing in the arrangement of the stimulus pairs. Instructions were the same as in Experiment 2. On each Web page, subjects were shown one face pair derived from an identical original face. One version was slightly manipulated towards one of the six attributes (e.g., social skills), the other towards or away from another attribute, depending on whether the latter was correlated positively (trustworthiness) or negatively (aggressiveness) with the former. 
Since there are 15 combinations of pairs of attributes and since the question can be posed in two directions, there were 30 questions presented in six blocks with the same attribute, e.g., “Which face looks more extroverted?” Subjects responded using one of two answers (“The face on the left looks more extroverted,” “The face on the right looks more extroverted”). Each of the 30 face pairs was presented on one page. The blocks were presented in random order. On the last page, subjects were thanked and asked to leave their e-mail addresses in order to take part in the lottery. 
Analyses and results
Results show that, although the images presented together resembled each other much more than the ones in Experiment 2, ratings agreed with the intended image manipulation in 59% of all cases, which is significantly above chance level, as determined by a one-sample t-test against a hypothetical mean, M = .59, SD = .10; t(58) = 7.05, p < .001 (one-tailed). The eta-squared statistic (.46) indicated a large effect size.
Assessing every personality trait individually, results are as follows: Significantly above chance level, as determined by one-sample t-tests against a hypothetical mean, were the accuracies for likeability, M = .66, SD = .21; t(58) = 6.06, p < .001 (one-tailed); social skills, M = .64, SD = .22; t(58) = 4.96, p < .001 (one-tailed); extroversion, M = .60, SD = .21; t(58) = 3.72, p < .001 (one-tailed); and risk seeking, M = .57, SD = .20; t(58) = 2.58, p = .007 (one-tailed). The eta-squared statistic (.10 to .39) indicated moderate to large effect sizes. Accuracies for the remaining personality traits were numerically above chance level but not significantly so: aggressiveness, M = .51, SD = .21; t(58) = 0.18, p = .43 (one-tailed); and trustworthiness, M = .55, SD = .24; t(58) = 1.46, p = .07 (one-tailed). Alpha levels were corrected for multiple tests (Jaccard & Wan, 1996).
Detailed results for the individual pairs are shown in Table 4. Significantly above chance level, as determined by one-sample t-tests against a hypothetical mean, were the accuracies for the pairs aggressiveness vs. extroversion, risk seeking vs. likeability, extroversion vs. risk seeking, risk seeking vs. social skills, extroversion vs. likeability, and aggressiveness vs. likeability. The eta-squared statistic (.11 to .37) indicated moderate to large effect sizes. Accuracies for the following pairs were numerically above chance level but did not reach significance: risk seeking vs. trustworthiness, extroversion vs. trustworthiness, aggressiveness vs. social skills, trustworthiness vs. social skills, aggressiveness vs. risk seeking, extroversion vs. social skills, and likeability vs. trustworthiness. Accuracies for the pairs aggressiveness vs. trustworthiness and likeability vs. social skills were numerically below chance level.
Table 4
 
Every cell shows the percentage of ratings in agreement with the intended image manipulation for a combination of two dimensions.
        A      E      L      R      S      T
A             75*    61*    56     57     42
E                    62*    66*    54     60
L                           69*    45     51
R                                  66*    62
S                                         57
T

Note: *Significantly above chance level with corrected alpha levels for multiple tests (Jaccard & Wan, 1996).

Discussion
The face pairs presented in this experiment consisted of very similar stimuli, since the two vectors applied were no longer opposite, as in Experiment 2, although the vector length was the same. The results show that even under this condition most vectors describe the direction of the corresponding personality trait best. Comparing Table 3 with Table 4 reveals that five of the eight pairs with at most moderately correlated trait judgments (r ≤ .49) are discriminated significantly above chance level, whereas six of the seven pairs with highly correlated trait judgments (r ≥ .50) are not. Subjects' performance in discriminating between pairs of faces to which different personality trait vectors were applied thus seems to depend on the correlations between the respective trait judgments.
Results from this experiment show that we are able to manipulate, in a fine-grained way, how a person is socially perceived on the basis of a photograph. It can be assumed that results could be improved further if the vector lengths were increased. If the aim is to model the social perception of a target person as precisely as possible, it makes sense not to reduce the social dimensions to fewer uncorrelated ones, even if they are correlated (Oosterhof & Todorov, 2008).
Discussion
Even though judgments of personality traits based on the facial appearance of strangers seem subjective (e.g., likeability) and arbitrary (e.g., risk seeking), we have demonstrated in this paper that the interpersonal consensus in forming social judgments from facial appearance is high enough to find physical correlates of this information in our face space. It was possible to locate these judgments in our face space in the form of vectors with maximum variability with respect to each perceived personality trait and to manipulate the information in completely new faces to effect a change in the perception of particular personality attributes. This suggests that the perception of personality traits from faces is objectively quantifiable.
Earlier studies have shown that social judgments based exclusively on physical information are often not justified (Zebrowitz, Andreoletti, Collins, Lee, & Blumenthal, 1998; Zebrowitz, Hall, Murphy, & Rhodes, 2002; Zebrowitz, Voinescu, & Collins, 1996). They have therefore been labeled mere perceptual illusions (Bachmann & Nurmoja, 2006), originating in face overgeneralization effects (Zebrowitz, Fellous, Mignault, & Andreoletti, 2003). On this account, behavioral and physical characteristics which are strongly associated for members of one specific social group (e.g., roundish faces and dependency on others in babies) may be overgeneralized to the physical appearance of members of a different social group (baby-faced men), triggering the associated social judgment (dependence on others). Another effect might also play a role in the perception of personality traits derived from faces: Subjects tend to overgeneralize from emotional facial expressions to more stable personality traits (Montepare & Dobish, 2003). Although the faces in our database have neutral emotional expressions, the shape of the mouth played an important role in the perception of personality traits. These overgeneralization effects could explain why the consensus among different subjects is high enough to create personality trait vectors in our face space.
Enriching our statistical face model with different perceived personality traits makes it possible to synthesize new realistic-looking face images with clearly defined effects. New faces can be generated by adding or subtracting vectors with maximum variability with respect to different personality traits to or from independent faces. Subtle and hardly perceptible differences in faces are powerful enough to shift judgments of personality traits in the intended direction. Our method of modeling different personality traits helps us to understand social judgments made on the basis of facial appearance because it clearly defines directions in our face space that reflect specific social judgment scales.
The application of the perceived personality trait vectors is independent of the faces they were developed from. This means that any face photograph with a neutral facial expression, regardless of pose (Blanz et al., 2004; Romdhani & Vetter, 2005) and lighting, can be fitted with our 3D morphable face model. It is thus possible to manipulate any image of any human face so that the person depicted is perceived as more aggressive, likeable, socially skilled, etc. This method of manipulating faces has been refined to the point that the quality of the output images is practically as good as the quality of the input image (for a scheme of this procedure, see Figure 1).
Since shape and reflectance were manipulated together in our experiments, we did not learn much about their relative impact on social perception. This could be a topic for future research. 
The method presented in this paper may be useful, on the one hand, for researchers in the fields of social psychology and neuroscience, since it can generate clearly parameterized stimuli, and, on the other hand, for advertising and the film industry, since the manipulated faces trigger predictable impressions and look completely natural.
Acknowledgments
This research project is supported by the Swiss National Science Foundation through eikones—the NCCR Iconic Criticism. The authors are very grateful to Anita Lerch for her assistance. Portions of the research in this paper use the Feret database of facial images collected under the Feret program. 
Commercial relationships: none. 
Corresponding author: Mirella Walker. 
Address: Computer Science Department, Bernoullistrasse 16, 4056 Basel, Switzerland. 
References
Bachmann, T., & Nurmoja, M. (2006). Are there affordances of suggestibility in facial appearance? Journal of Nonverbal Behavior, 30, 87–92.
Bartlett, M. S. (1954). A note on multiplying factors for various chi-squared approximations. Journal of the Royal Statistical Society B, 16, 296–298.
Blanz, V. (2000). Automatische Rekonstruktion der dreidimensionalen Form von Gesichtern aus einem Einzelbild [Automatic reconstruction of the three-dimensional shape of faces from a single image]. Doctoral dissertation, Universität Tübingen.
Blanz, V., Scherbaum, K., Vetter, T., & Seidel, H.-P. (2004). Exchanging faces in images. Computer Graphics Forum, 23, 669–676.
Blanz, V., & Vetter, T. (1999). A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99) (pp. 187–194). New York: ACM Press.
Bower, G. H., & Karlin, M. B. (1974). Depth of processing pictures of faces and recognition memory. Journal of Experimental Psychology, 103, 751–757.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245–276.
Chaiken, S., & Trope, Y. (Eds.) (1999). Dual-process theories in social psychology. New York: The Guilford Press.
Cheng, Y. D., O'Toole, A. J., & Abdi, H. (2001). Classifying adults' and children's faces by sex: Computational investigations of subcategorical feature encoding. Cognitive Science, 25, 819–838.
Corneille, O., Huart, J., Becquart, E., & Brédart, S. (2004). When memory shifts toward more typical category exemplars: Accentuation effects in the recollection of ethnically ambiguous faces. Journal of Personality and Social Psychology, 86, 236–250.
Eagly, A. H., Ashmore, R. D., Makhijani, M. G., & Longo, L. C. (1991). What is beautiful is good, but…: A meta-analytic review of research on the physical attractiveness stereotype. Psychological Bulletin, 110, 109–128.
Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.
Halberstadt, J. B., & Niedenthal, P. M. (2001). Effects of emotion concepts on perceptual memory for emotional expressions. Journal of Personality and Social Psychology, 81, 587–598.
Hill, H., Bruce, V., & Akamatsu, S. (1995). Perceiving the sex and race of faces: The role of shape and colour. Proceedings of the Royal Society of London B: Biological Sciences, 261, 367–373.
Hoffman, C., & Hurst, N. (1990). Gender stereotypes: Perception or rationalization? Journal of Personality and Social Psychology, 58, 197–208.
Huart, J., Corneille, O., & Becquart, E. (2005). Face-based categorization, context-based categorization, and distortions in the recollection of gender ambiguous faces. Journal of Experimental Social Psychology, 41, 598–608.
Jaccard, J., & Wan, C. K. (1996). LISREL approaches to interaction effects in multiple regression. Thousand Oaks, CA: Sage Publications.
Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35, 401–415.
Kaiser, H. F., & Rice, J. (1974). Little jiffy, Mark IV. Educational and Psychological Measurement, 34, 111–117.
MacLin, O. H., & Malpass, R. S. (2001). Racial categorization of faces: The ambiguous race face effect. Psychology, Public Policy, and Law, 7, 98–118.
Martin, D., & Macrae, C. N. (2007). A face with a cue: Exploring the inevitability of person categorization. European Journal of Social Psychology, 37, 806–816.
Montepare, J. M., & Dobish, H. (2003). The contribution of emotion perceptions and their overgeneralizations to trait impressions. Journal of Nonverbal Behavior, 27, 237–254.
Mueller, J. H., Heesacker, M., & Ross, M. J. (1984). Likability of targets and distractors in facial recognition. American Journal of Psychology, 97, 235–247.
O'Toole, A. J., Vetter, T., & Blanz, V. (1999). Three-dimensional shape and two-dimensional surface reflectance contributions to face recognition: An application of three-dimensional morphing. Vision Research, 39, 3145–3155.
O'Toole, A. J., Vetter, T., Troje, N. F., & Bülthoff, H. H. (1997). Sex classification is better with three-dimensional head structure than with image intensity information. Perception, 26, 75–84.
Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences of the United States of America, 105, 11087–11092.
Phillips, P. J., Wechsler, H., Huang, J., & Rauss, P. J. (1998). The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing, 16, 295–306.
Potter, T., & Corneille, O. (2008). Locating attractiveness in the face space: Faces are more attractive when closer to their group prototype. Psychonomic Bulletin & Review, 15, 615–622.
Romdhani, S., & Vetter, T. (2005). Estimating 3D shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. 986–993).
Rossion, B. (2002). Is sex categorization from faces really parallel to face recognition? Visual Cognition, 9, 1003–1020.
Rule, N. O., & Ambady, N. (2008). The face of success: Inferences from chief executive officers' appearance predict company profits. Psychological Science, 19, 109–111.
Todorov, A., Mandisodza, A. N., Goren, A., & Hall, C. C. (2005). Inferences of competence from faces predict election outcomes. Science, 308, 1623–1626.
Todorov, A., Said, C. P., Engell, A. D., & Oosterhof, N. N. (2008). Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12, 455–460.
Troje, N. F., & Bülthoff, H. H. (1998). How is bilateral symmetry of human faces used for recognition of novel views? Vision Research, 38, 79–89.
Wild, H. A., Barrett, S. E., Spence, M. J., O'Toole, A. J., Cheng, Y. D., & Brooke, J. (2000). Recognition and sex categorization of adults' and children's faces: Examining performance in the absence of sex-stereotyped cues. Journal of Experimental Child Psychology, 77, 269–291.
Yip, A. W., & Sinha, P. (2002). Contribution of color to face recognition. Perception, 31, 995–1003.
Zebrowitz, L. A., Andreoletti, C., Collins, M. A., Lee, S. Y., & Blumenthal, J. (1998). Bright, bad, babyfaced boys: Appearance stereotypes do not always yield self-fulfilling prophecy effects. Journal of Personality and Social Psychology, 75, 1300–1320.
Zebrowitz, L. A., Fellous, J.-M., Mignault, A., & Andreoletti, C. (2003). Trait impressions as overgeneralized responses to adaptively significant facial qualities: Evidence from connectionist modeling. Personality and Social Psychology Review, 7, 194–215.
Zebrowitz, L. A., Hall, J. A., Murphy, N. A., & Rhodes, G. (2002). Looking smart and looking good: Facial cues to intelligence and their origins. Personality and Social Psychology Bulletin, 28, 238–249.
Zebrowitz, L. A., Voinescu, L., & Collins, M. A. (1996). “Wide-eyed” and “crooked-faced”: Determinants of perceived and real honesty across the life span. Personality and Social Psychology Bulletin, 22, 1258–1269.