Men's judgments of women's facial attractiveness from two- and three-dimensional images are similar
Author Affiliations
  • Cara C. Tigue
    Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
    [email protected]
  • Katarzyna Pisanski
    Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
    [email protected]
  • Jillian J. M. O'Connor
    Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
    [email protected]
  • Paul J. Fraccaro
    Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
    [email protected]
  • David R. Feinberg
    Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
    [email protected]
Journal of Vision, November 2012, Vol. 12, 3. https://doi.org/10.1167/12.12.3
Abstract
Although most research on human facial attractiveness has used front-facing two-dimensional (2D) images, our primary visual experience with faces is in three dimensions. Because face coding in the human visual system is viewpoint-specific, faces may be processed differently from different angles. Thus, results from perceptual studies using front-facing 2D facial images may not be generalizable to other viewpoints. We used rotating three-dimensional (3D) images of women's faces to test whether men's attractiveness ratings of women's faces from 2D and 3D images differed. We found a significant positive correlation between men's judgments of women's facial attractiveness from 2D and 3D images (r = 0.707), suggesting that attractiveness judgments from 2D images are valid and provide similar information about women's attractiveness as do 3D images. We also found that women's faces were rated significantly more attractive in 3D images than in 2D images. Our study verifies a novel method using 3D facial images, which may be important for future research on viewpoint-specific social perception. This method may also be valuable for the accurate measurement and assessment of facial characteristics such as averageness, identity, attractiveness, and emotional expression.

Introduction
The majority of research on human facial attractiveness has utilized two-dimensional (2D) facial images (for reviews: Little, Jones, & DeBruine, 2011; Rhodes, 2006; Thornhill & Gangestad, 1999). Results from prior work using 2D images provide evidence that invariant facial characteristics such as averageness (Langlois & Roggman, 1990; Valentine, Darling, & Donnelly, 2004), femininity (Perrett et al., 1998; Rhodes, Hickford, & Jeffery, 2000), and fluctuating asymmetry (Perrett et al., 1999; Rhodes, Proffitt, Grady, & Sumich, 1998) are important determinants of women's facial attractiveness. Variant facial characteristics such as skin condition (Jones, Little, Burt, & Perrett, 2004), skin coloration (Fink, Grammer, & Matts, 2006; Stephen et al., 2012), and gaze direction (Conway, Jones, DeBruine, & Little, 2008) can also influence perceptions of facial attractiveness. Most of the 2D images used in previous work have been front-facing, but in the real world, we experience faces from multiple viewing angles. Because the human visual system may process three-dimensional (3D) objects differently depending on the viewing angle, studies that use front-facing facial images may be limited. 
It has been debated whether object recognition is view-specific (Tarr & Bulthoff, 1995) or view-invariant (Biederman & Gerhardstein, 1993) in the human visual system. This debate has centered on understanding how the visual system recognizes the same object from different angles when that object projects different shapes onto the retina from different viewpoints (Hayward, 2003). Structural description models argue that object recognition does not depend on viewpoint because the visual system uses information from 3D structures to identify an object (Hayward, 2003). On the other hand, view-based models state that the visual system uses a 2D projection of an object from a specific viewpoint to identify it (Hayward, 2003). Although the debate over the so-called “viewpoint problem” has waned, the relative roles of 3D structural information and 2D view-based information in object recognition remain unclear. 
Although objects and faces are processed differently in the visual system, the neural mechanisms underlying object and face perception each show sensitivity to viewpoint. Prior work on the monkey visual system has demonstrated that there are neurons in the superior temporal sulcus that are sensitive to face view (Perrett et al., 1985) and that face coding is view-specific in macaques (Perrett et al., 1991; Perrett, Hietanen, Oram, & Benson, 1992). Similarly, studies on viewpoint aftereffects in humans suggest that humans also have neurons tuned to specific viewing angles (Fang & He, 2005). More specifically, face viewpoint aftereffects, or changes in the responses of neurons that code faces following habituation to a specific viewpoint, have been demonstrated for a variety of facial features (Chen, Yang, Wang, & Fang, 2010). Some researchers have interpreted the finding that face aftereffects transfer across viewpoints as evidence that face perception is viewpoint-invariant (Jiang, Blanz, & O'Toole, 2006). Others have pointed out that because there is only partial transfer of face aftereffects across viewpoints, face coding must be viewpoint-specific (Jeffery, Rhodes, & Busey, 2006, 2007). Recent work has provided stronger evidence for view-specific face processing by showing that aftereffects can be induced simultaneously in opposite directions, suggesting that aftereffects for different viewpoints are dissociable (Welling et al., 2009). 
Whether face processing in the human visual system is view-specific or view-invariant is crucial to research on facial attractiveness because perceptions of key determinants of facial attractiveness, such as averageness, femininity, emotion, and symmetry, may be affected by viewing angle. If face processing is view-specific, then results from facial attractiveness studies using front-facing 2D images may not be generalizable to other viewpoints. Indeed, some researchers have argued that 2D images are not ecologically valid stimuli for face attractiveness research because attractiveness ratings from static images did not correlate with ratings from dynamic stimuli (Lander, 2008; Penton-Voak & Chang, 2008; Rubenstein, 2005). For example, Rubenstein (2005) found that men's and women's ratings of women's facial attractiveness from videos were not significantly correlated with attractiveness ratings from static freeze-frame images. In addition, Lander (2008) found that attractiveness ratings of men's and women's moving and static faces were significantly positively correlated when rated by members of the opposite, but not the same, sex. Furthermore, Penton-Voak and Chang (2008) showed that dynamic men's, but not women's, faces were rated more attractive than were static faces. 
By contrast, other work indicates that 2D images are ecologically valid because attractiveness ratings are consistent across stimulus presentation formats and modalities. For example, Roberts et al. (2009) found strong positive correlations between attractiveness ratings from static and dynamic facial images of both men and women. Rhodes et al. (2011) also found a positive correlation between attractiveness ratings of men's faces from static images and videos. It has also been shown that women's preferences for men's vocal and facial masculinity in videos are positively correlated (O'Connor et al., 2012), replicating preferences observed in studies using still images. Furthermore, studies that have used profile views of the face have found that people judged emotional expressions (Matsumoto & Hwang, 2011) and symmetry and averageness (Valentine et al., 2004) equally well from front-facing and profile facial stimuli. Finally, Saxton, Burriss, Murray, Rowland, and Roberts (2009) found that attractiveness ratings of the face, body, and voice were each positively correlated with each other. Taken together, there is evidence both for and against the use of 2D images in research on facial attractiveness. 
Using 3D images that allow the face to be viewed from more than one angle may minimize confounds associated with front-facing 2D images. For example, camera angle and head tilt can influence perceptions of femininity and symmetry in 2D (Penton-Voak et al., 2001), but not 3D, images. Previous studies have used 3D imaging techniques to study different aspects of face perception, including attractiveness (Blanz, O'Toole, Vetter, & Wild, 2000; Caharel, Jiang, Blanz, & Rossion, 2009; O'Toole, Price, Vetter, Bartlett, & Blanz, 1999), but did not directly compare attractiveness ratings from 2D and 3D facial images. To test whether facial attractiveness judgments from 2D and 3D images differ, we presented men with both 2D and 3D images of women's faces. The 3D images contained information from 180° of the face (from ear to ear), while the 2D images contained information from only the front of the face. Given that facial attractiveness ratings are generally positively correlated across presentation formats and modalities, we predicted that men would rate women's facial attractiveness similarly in 2D and 3D images. 
Methods
Stimuli collection
We collected 2D and 3D facial images from 39 White women (mean age = 18.69 ± 1.00 years, range = 17–22 years) who received course credit or payment for participation. The 2D and 3D images were captured in random order. 
2D facial images
We captured a 2D color facial photograph of each participant using a Nikon D90 digital single-lens reflex camera (Nikon, Tokyo, Japan) with an AF Micro Nikkor 60 mm lens (Nikkor, Tokyo, Japan) under standardized lighting. We photographed each participant looking straight-on at the camera with neutral facial expression. Participants wore a headband to pull hair off of the face and removed glasses, makeup, and facial piercings. Images were captured in RAW format and exported to JPEG format using Nikon ViewNX version 1.1.1 (Nikon Corporation) software. 
3D facial images
We captured a 3D facial image of each participant with neutral expression using the 3dMDface System (3dMD LLC, Atlanta, GA) under standardized lighting. This system projects an invisible infrared speckled light pattern onto the face and uses four stereo cameras and two color cameras to generate the geometry and surface texture, respectively, of the face using a distance-calibrated stitching algorithm. It captures 180° facial images at a capture speed of < 1.5 ms and a geometry accuracy of < 200 μm. The system was calibrated daily prior to image acquisition. Three-dimensional images were captured in TriSpectives 3D drawing file (.TSB) format and converted to videos in Audio Video Interleaved (.AVI) format using 3dMDpatient software version 4.0 (3dMD LLC, Atlanta, GA). This method of 3D image capture has been used successfully in prior work (Aldridge, Boyadjiev, Capone, DeLeon, & Richtsmeier, 2005). 
Stimulus creation
To control for size of 2D images, we standardized interpupillary distance using PsychoMorph for Windows version 8.4.11 software (Tiddeman, Burt, & Perrett, 2001). We also used PsychoMorph to mask each 2D image to reduce visual cues outside of the face that could influence attractiveness ratings (Figure 1a). Two-dimensional images were presented at 1350 × 1800 pixels in size. 
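The study performed this step in PsychoMorph; purely as an illustration of the underlying operation, the following is a minimal Python sketch that rescales a photograph so the eyes sit a fixed number of pixels apart and then centers it on a canvas matching the 2D stimulus size. The target distance, file name, and eye coordinates are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of interpupillary-distance standardization (illustrative only;
# the study itself used PsychoMorph). Requires Pillow.
from PIL import Image

TARGET_IPD_PX = 300          # desired interpupillary distance in pixels (assumed value)
CANVAS_SIZE = (1350, 1800)   # output size matching the 2D stimuli described above

def standardize_ipd(path, left_eye, right_eye):
    """Rescale a face photo so the eyes end up TARGET_IPD_PX apart, then center it."""
    img = Image.open(path)
    ipd = ((right_eye[0] - left_eye[0]) ** 2 + (right_eye[1] - left_eye[1]) ** 2) ** 0.5
    scale = TARGET_IPD_PX / ipd
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", CANVAS_SIZE, "black")      # fixed-size background
    canvas.paste(resized, ((CANVAS_SIZE[0] - resized.width) // 2,
                           (CANVAS_SIZE[1] - resized.height) // 2))
    return canvas

# Hypothetical usage with made-up landmark positions:
# standardize_ipd("face_01.jpg", left_eye=(510, 820), right_eye=(840, 820)).save("face_01_std.jpg")
```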
Figure 1
Examples of (a) 2D facial image and (b) screen shot of 3D facial image of the same woman, which were used as stimuli.
We converted the .AVI videos of 3D images into QuickTime (.MOV) format with an H.264 video codec using Adobe Media Encoder CS5 version 5.0.1.0 (64-bit; Adobe Systems Incorporated, San Jose, CA). Each video was recorded so that the face started facing 90° to the left, rotated 180° around the y-axis toward the viewer to face 90° to the right, and then rotated back again. This sequence repeated twice, and each video lasted 10 seconds in total. Three-dimensional images were presented against a black background (Figure 1b) at 654 × 480 pixels in size. Similar methods of 3D face presentation have been used successfully in prior studies of facial attractiveness (O'Toole et al., 1999).
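To make the described motion explicit, the sketch below computes the yaw angle of the face at a given time in the 10-second video: starting at 90° left, sweeping to 90° right and back, repeated twice. A constant angular velocity is assumed; the paper does not state the exact rotation profile, so this is illustrative only.

```python
# Illustrative head-rotation profile: -90 deg (facing left) to +90 deg (facing right)
# and back, two full cycles over a 10-second video. Constant speed is an assumption.
def yaw_angle(t, duration=10.0, cycles=2):
    """Yaw in degrees at time t (seconds); -90 = facing left, +90 = facing right."""
    cycle_len = duration / cycles
    phase = (t % cycle_len) / cycle_len        # position within one left-right-left cycle
    if phase <= 0.5:
        return -90.0 + 360.0 * phase           # sweep from -90 deg to +90 deg
    return 90.0 - 360.0 * (phase - 0.5)        # sweep back from +90 deg to -90 deg

if __name__ == "__main__":
    for t in (0.0, 1.25, 2.5, 3.75, 5.0, 10.0):
        print(f"t = {t:5.2f} s  yaw = {yaw_angle(t):6.1f} deg")
```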
Raters
Raters were 31 self-reported heterosexual men (mean age = 18.74 ± 1.65 years, range = 17–26 years) who rated the women's 2D and 3D facial images for attractiveness. 
Rating procedure
Raters viewed each 2D and 3D facial image on a computer screen (30-inch Apple Cinema HD Display at 2560 × 1600 pixel resolution; Apple Inc., Cupertino, CA) and rated it for attractiveness on a seven-point scale (one = very unattractive, seven = very attractive). For the 3D images, participants double-clicked on each image to play the video on the computer screen. We instructed participants to wait for the video to stop completely before making their rating. Two-dimensional and 3D faces were presented in separate blocks, and presentation order was randomized both within and between blocks. 
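As an illustration of this presentation scheme, the sketch below builds a trial list in which 2D and 3D stimuli occupy separate blocks, with block order and within-block stimulus order both randomized. The function name and stimulus file names are hypothetical placeholders; the presentation software actually used is not specified here.

```python
# Illustrative sketch of block and trial randomization (not the study's software).
import random

def build_trial_list(n_faces=39, seed=None):
    """Return (block, stimulus) tuples: separate 2D and 3D blocks, both orders randomized."""
    rng = random.Random(seed)
    blocks = {
        "2D": [f"face_{i:02d}.jpg" for i in range(1, n_faces + 1)],   # hypothetical names
        "3D": [f"face_{i:02d}.mov" for i in range(1, n_faces + 1)],
    }
    block_order = list(blocks)
    rng.shuffle(block_order)                  # block order randomized between blocks
    trials = []
    for block in block_order:
        stimuli = blocks[block][:]
        rng.shuffle(stimuli)                  # stimulus order randomized within each block
        trials.extend((block, s) for s in stimuli)
    return trials

# Example: build_trial_list(seed=42) yields all trials of one block, then all of the other.
```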
Statistical analyses
We performed statistical analyses using SPSS 20 (IBM, Armonk, NY) with two-tailed probability estimates and α = 0.05. 
Results
For each stimulus identity, we averaged across all men's attractiveness ratings. Interrater reliability was excellent for the 2D (Cronbach's α = 0.952) and 3D images (Cronbach's α = 0.948). The distributions of attractiveness ratings from 2D and 3D images were not significantly different from normal (Shapiro-Wilk tests; 2D images: W(39) = 0.948, p = 0.068; 3D images: W(39) = 0.951, p = 0.086). Because the p-values of these tests were close to the alpha level of 0.05, we performed both parametric and nonparametric statistical tests. 
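The analyses were run in SPSS; purely as an illustration, the following Python sketch computes Cronbach's alpha (treating raters as items and faces as cases) and the Shapiro-Wilk tests on simulated placeholder ratings. The 31 × 39 matrices below are randomly generated stand-ins, not the study's data, so the printed values will not match the reported ones.

```python
# Illustrative reliability and normality checks on simulated placeholder data.
import numpy as np
from scipy import stats

def cronbach_alpha(ratings):
    """Cronbach's alpha; rows = raters (treated as items), columns = faces (cases)."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[0]                               # number of raters
    rater_variances = ratings.var(axis=1, ddof=1)      # each rater's variance across faces
    total_variance = ratings.sum(axis=0).var(ddof=1)   # variance of summed ratings per face
    return k / (k - 1) * (1 - rater_variances.sum() / total_variance)

# Simulated stand-in for the 31 raters x 39 faces matrices (ratings 1-7).
rng = np.random.default_rng(0)
ratings_2d = rng.integers(1, 8, size=(31, 39))
ratings_3d = rng.integers(1, 8, size=(31, 39))

mean_2d = ratings_2d.mean(axis=0)    # one averaged attractiveness rating per face
mean_3d = ratings_3d.mean(axis=0)

print("alpha (2D):", round(cronbach_alpha(ratings_2d), 3))
print("alpha (3D):", round(cronbach_alpha(ratings_3d), 3))
print("Shapiro-Wilk (2D):", stats.shapiro(mean_2d))
print("Shapiro-Wilk (3D):", stats.shapiro(mean_3d))
```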
To test whether men's attractiveness ratings of women's faces were related in 2D and 3D images, we performed a Pearson correlation. We found a significant positive relationship between attractiveness ratings in 2D and 3D images (r = 0.707, p < 0.001, N = 39; Figure 2). Controlling for the age of the women whose faces were used as stimuli did not change the statistical significance of this result (partial r(36) = 0.696, p < 0.001). Using nonparametric tests did not change the statistical significance of these correlations (Spearman's rho = 0.705, p < 0.001, N = 39; partial rho controlling for age = 0.692, p < 0.001, N = 39). We also repeated the above correlation analyses excluding the identity with the highest 2D and 3D attractiveness ratings. Excluding this data point did not change the statistical significance of the Pearson correlation (r = 0.684, p < 0.001, N = 38) or the Spearman correlation (rho = 0.683, p < 0.001, N = 38). 
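Again for illustration only, here is a Python sketch of the corresponding correlation analyses (Pearson, Spearman, and a partial correlation controlling for stimulus age via ordinary-least-squares residuals) on simulated placeholder vectors; the values it prints are not the study's results.

```python
# Illustrative correlation analyses on simulated placeholder data (N = 39 faces).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mean_2d = rng.uniform(1, 7, size=39)                 # hypothetical mean 2D ratings per face
mean_3d = mean_2d + rng.normal(0, 0.5, size=39)      # hypothetical mean 3D ratings per face
ages = rng.integers(17, 23, size=39).astype(float)   # hypothetical stimulus ages (years)

r, p = stats.pearsonr(mean_2d, mean_3d)              # Pearson correlation
rho, p_rho = stats.spearmanr(mean_2d, mean_3d)       # Spearman correlation

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z, via OLS residuals."""
    design = np.column_stack([np.ones_like(z), z])
    residuals = lambda v: v - design @ np.linalg.lstsq(design, v, rcond=None)[0]
    return stats.pearsonr(residuals(x), residuals(y))

r_age, p_age = partial_corr(mean_2d, mean_3d, ages)
print(f"r = {r:.3f} (p = {p:.4g}); rho = {rho:.3f}; partial r controlling for age = {r_age:.3f}")
```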
Figure 2
Men's attractiveness ratings of women's faces from 2D images were significantly positively correlated with their attractiveness ratings from 3D images (N = 39).
To test whether women's faces were rated more attractive in 3D than in 2D images, we performed a paired-samples t test. We found that women's faces were rated significantly more attractive from 3D images (M ± SEM = 2.61 ± 0.12) than from 2D images (M ± SEM = 2.43 ± 0.11; t(38) = −2.08, p = 0.045). 
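For completeness, a matching sketch of the paired-samples comparison, again on simulated placeholder per-face means rather than the study's ratings.

```python
# Illustrative paired-samples t test on simulated placeholder data (N = 39 faces).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mean_2d = rng.uniform(1.5, 3.5, size=39)               # hypothetical mean 2D ratings
mean_3d = mean_2d + rng.normal(0.18, 0.30, size=39)    # hypothetical 3D advantage

t, p = stats.ttest_rel(mean_2d, mean_3d)               # paired-samples t test, df = 38
print(f"2D: {mean_2d.mean():.2f} +/- {stats.sem(mean_2d):.2f}  "
      f"3D: {mean_3d.mean():.2f} +/- {stats.sem(mean_3d):.2f}  "
      f"t(38) = {t:.2f}, p = {p:.3f}")
```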
Discussion
We found that men's judgments of women's facial attractiveness from 2D and 3D images were correlated. This result is consistent with previous studies showing that facial attractiveness judgments from 2D images are valid and can be replicated across stimulus presentation formats (O'Connor et al., 2012; Rhodes et al., 2011; Roberts et al., 2009; Saxton et al., 2009). This study, along with that earlier work, provides converging evidence that 2D and 3D facial images contain similar information about women's attractiveness and that women's attractiveness can be judged accurately from 2D images. 
Our result suggests that 2D facial images from a single viewpoint and 3D images with 180° views contain similar information about women's attractiveness. That 2D and 3D faces were rated similarly in attractiveness when evaluated from either a single viewpoint (2D images) or several viewpoints (3D images) is not inconsistent with work showing that face coding is view-specific in both monkeys (Perrett et al., 1991, 1992) and humans (Jeffery et al., 2006, 2007). Rather, 2D images most likely contain enough information about the 3D structure of the face for attractiveness to be assessed accurately. 
We also found that women's faces were rated more attractive in 3D than in 2D images. Although 2D images likely contain enough visual information to assess facial attractiveness accurately, 3D images contain more information, which may cause them to be perceived as more attractive overall. It is also possible that raters gave higher ratings to the 3D images because the 3D images were presented as videos, whereas the 2D images were static photographs; raters may have found the moving 3D faces more attractive in general because moving images are more visually stimulating than static ones. Furthermore, it should be noted that we standardized interpupillary distance in the 2D images but not in the 3D images. Because we found a significant positive correlation between attractiveness ratings in 2D and 3D images despite this difference between the two image formats, the observed relationship is likely independent of image size. 
It remains to be tested whether perceptions of facial features such as averageness, sexual dimorphism, and symmetry differ between 2D and 3D images, and future studies should examine these differences directly. It is also possible that some of the key determinants of facial masculinity, such as protrusion of the brow ridge and angularity of the jaw, are easier to assess in 3D than in 2D facial images; for this reason, 3D facial images may be especially important to future research on facial masculinity. Three-dimensional images may also provide more accurate measurements of facial symmetry. Lateral head rotation, a potential confound in previous studies on facial symmetry using 2D images (Penton-Voak et al., 2001), does not influence symmetry measurements in 3D images. 
Our finding also demonstrates that using 3D images is as valid as using 2D images for research on facial attractiveness and face perception, because ratings of facial attractiveness from 3D and 2D images were correlated. By validating this novel method of stimulus presentation, we have introduced a potentially more generalizable way of evaluating perceptions of facial characteristics. This 3D approach could be used in future studies on the influence of gaze direction on attractiveness and other attributions, since gaze direction has been shown to influence perceptions of attractiveness (Conway et al., 2008; Jones, DeBruine, Little, Conway, & Feinberg, 2006). Three-dimensional facial images could also be useful to studies investigating view-specific face coding in humans (Jeffery et al., 2006, 2007) and could add to our understanding of how viewing angle influences perceptions of different facial features. 
Conclusions
In sum, we have shown that women's facial attractiveness is judged similarly from 2D and 3D images. Our data demonstrate that 2D and 3D facial images provide similar information about women's attractiveness and that 2D images are acceptable stimuli, despite the criticism they have received. Nevertheless, 3D images may be important to future work investigating aspects of face perception that are view-specific or that may be measured more accurately in 3D than in 2D. 
Acknowledgments
This research was supported by grants from the Canada Foundation for Innovation, Ontario's Ministry of Research and Innovation, and the Social Sciences and Humanities Research Council to David Feinberg. Cara Tigue, Katarzyna Pisanski, Jillian O'Connor, and Paul Fraccaro are funded by the Social Sciences and Humanities Research Council. We thank Kaan Alkin and Lisa Do Couto for their help in collecting data. We also thank Bernard Tiddeman, David Perrett, and Dengke Xio for the use of Psychomorph software. 
Commercial relationships: none. 
Corresponding author: David Feinberg. 
Address: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada. 
References
Aldridge K. Boyadjiev S. A. Capone G. T. DeLeon V. B. Richtsmeier J. T. (2005). Precision and error of three-dimensional phenotypic measures acquired from 3dMD photogrammetric images. American Journal of Medical Genetics Part A, 138A(3), 247–253.
Biederman I. Gerhardstein P. C. (1993). Recognizing depth-rotated objects: Evidence and conditions for 3-dimensional viewpoint invariance. Journal of Experimental Psychology-Human Perception and Performance, 19(6), 1162–1182.
Blanz V. O'Toole A. J. Vetter T. Wild H. A. (2000). On the other side of the mean: The perception of dissimilarity in human faces. Perception, 29(8), 885–891.
Caharel S. Jiang F. Blanz V. Rossion B. (2009). Recognizing an individual face: 3D shape contributes earlier than 2D surface reflectance information. Neuroimage, 47(4), 1809–1818.
Chen J. Yang H. Wang A. Fang F. (2010). Perceptual consequences of face viewpoint adaptation: Face viewpoint aftereffect, changes of differential sensitivity to face view, and their relationship. Journal of Vision, 10(3):12, 1–11, http://www.journalofvision.org/content/10/3/12, doi:10.1167/10.3.12.
Conway C. A. Jones B. C. DeBruine L. M. Little A. C. (2008). Evidence for adaptive design in human gaze preference. Proceedings of the Royal Society B-Biological Sciences, 275(1630), 63–69.
Fang F. He S. (2005). Viewer-centered object representation in the human visual system revealed by viewpoint aftereffects. Neuron, 45(5), 793–800.
Fink B. Grammer K. Matts P. J. (2006). Visible skin color distribution plays a role in the perception of age, attractiveness, and health in female faces. Evolution and Human Behavior, 27(6), 433–442.
Hayward W. G. (2003). After the viewpoint debate: Where next in object recognition? Trends in Cognitive Sciences, 7(10), 425–427.
Jeffery L. Rhodes G. Busey T. (2006). View-specific coding of face shape. Psychological Science, 17(6), 501–505.
Jeffery L. Rhodes G. Busey T. (2007). Broadly tuned, view-specific coding of face shape: Opposing figural aftereffects can be induced in different views. Vision Research, 47(24), 3070–3077.
Jiang F. Blanz V. O'Toole A. J. (2006). Probing the visual representation of faces with adaptation: A view from the other side of the mean. Psychological Science, 17(6), 493–500.
Jones B. C. DeBruine L. M. Little A. C. Conway C. A. Feinberg D. R. (2006). Integrating gaze direction and expression in preferences for attractive faces. Psychological Science, 17(7), 588–591.
Jones B. C. Little A. C. Burt D. M. Perrett D. I. (2004). When facial attractiveness is only skin deep. Perception, 33(5), 569–576.
Lander K. (2008). Relating visual and vocal attractiveness for moving and static faces. Animal Behaviour, 75, 817–822.
Langlois J. H. Roggman L. A. (1990). Attractive faces are only average. Psychological Science, 1(2), 115–121.
Little A. C. Jones B. C. DeBruine L. M. (2011). Facial attractiveness: Evolutionary based research. Philosophical Transactions of the Royal Society B-Biological Sciences, 366(1571), 1638–1659.
Matsumoto D. Hwang H. S. (2011). Judgments of facial expressions of emotion in profile. Emotion, 11(5), 1223–1229.
O'Connor J. J. M. Feinberg D. R. Fraccaro P. J. Borak D. J. Tigue C. C. Re D. E. (2012). Female preferences for male vocal and facial masculinity in videos. Ethology, 118(4), 321–330.
O'Toole A. J. Price T. Vetter T. Bartlett J. C. Blanz V. (1999). 3D shape and 2D surface textures of human faces: The role of "averages" in attractiveness and age. Image and Vision Computing, 18(1), 9–19.
Penton-Voak I. S. Chang H. Y. (2008). Attractiveness judgments of individuals vary across emotional expression and movement conditions. Journal of Evolutionary Psychology, 6(2), 89–100.
Penton-Voak I. S. Jones B. C. Little A. C. Baker S. Tiddeman B. Burt D. M. (2001). Symmetry, sexual dimorphism in facial proportions and male facial attractiveness. Proceedings of the Royal Society of London Series B-Biological Sciences, 268(1476), 1617–1623.
Perrett D. I. Burt D. M. Penton-Voak I. S. Lee K. J. Rowland D. A. Edwards R. (1999). Symmetry and human facial attractiveness. Evolution and Human Behavior, 20(5), 295–307.
Perrett D. I. Hietanen J. K. Oram M. W. Benson P. J. (1992). Organization and functions of cells responsive to faces in the temporal cortex. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences, 335(1273), 23–30.
Perrett D. I. Lee K. J. Penton-Voak I. Rowland D. Yoshikawa S. Burt D. M. (1998). Effects of sexual dimorphism on facial attractiveness. Nature, 394(6696), 884–887.
Perrett D. I. Oram M. W. Harries M. H. Bevan R. Hietanen J. K. Benson P. J. (1991). Viewer-centered and object-centered coding of heads in the macaque temporal cortex. Experimental Brain Research, 86(1), 159–173.
Perrett D. I. Smith P. A. J. Potter D. D. Mistlin A. J. Head A. S. Milner A. D. (1985). Visual cells in the temporal cortex sensitive to face view and gaze direction. Proceedings of the Royal Society of London Series B-Biological Sciences, 223(1232), 293–317.
Rhodes G. (2006). The evolutionary psychology of facial beauty. Annual Review of Psychology, 57, 199–226.
Rhodes G. Hickford C. Jeffery L. (2000). Sex-typicality and attractiveness: Are supermale and superfemale faces super-attractive? British Journal of Psychology, 91, 125–140.
Rhodes G. Lie H. C. Thevaraja N. Taylor L. Iredell N. Curran C. (2011). Facial attractiveness ratings from video-clips and static images tell the same story. PLOS One, 6(11), e26653.
Rhodes G. Proffitt F. Grady J. M. Sumich A. (1998). Facial symmetry and the perception of beauty. Psychonomic Bulletin & Review, 5(4), 659–669.
Roberts S. C. Saxton T. K. Murray A. K. Burriss R. P. Rowland H. M. Little A. C. (2009). Static and dynamic facial images cue similar attractiveness judgements. Ethology, 115(6), 588–595.
Rubenstein A. J. (2005). Variation in perceived attractiveness: Differences between dynamic and static faces. Psychological Science, 16(10), 759–762.
Saxton T. K. Burriss R. P. Murray A. K. Rowland H. M. Roberts S. C. (2009). Face, body and speech cues independently predict judgments of attractiveness. Journal of Evolutionary Psychology, 7(1), 23–35.
Stephen I. D. Scott I. M. L. Coetzee V. Pound N. Perrett D. I. Penton-Voak I. S. (2012). Cross-cultural effects of color, but not morphological masculinity, on perceived attractiveness of men's faces. Evolution and Human Behavior, 33(4), 260–267.
Tarr M. J. Bulthoff H. H. (1995). Is human object recognition better described by geon-structural-descriptions or by multiple-views? Comment on Biederman and Gerhardstein (1993). Journal of Experimental Psychology-Human Perception and Performance, 21(6), 1494–1505.
Thornhill R. Gangestad S. W. (1999). Facial attractiveness. Trends in Cognitive Sciences, 3(12), 452–460.
Tiddeman B. Burt D. M. Perrett D. (2001). Computer graphics in facial perception research. IEEE Computer Graphics and Applications, 21(5), 42–50.
Valentine T. Darling S. Donnelly M. (2004). Why are average faces attractive? The effect of view and averageness on the attractiveness of female faces. Psychonomic Bulletin & Review, 11(3), 482–487.
Welling L. L. M. Jones B. C. Bestelmeyer P. E. G. DeBruine L. M. Little A. C. Conway C. A. (2009). View-contingent aftereffects suggest joint coding of face shape and view. Perception, 38(1), 133–141.