September 2019
Volume 19, Issue 11
Open Access
Gazing into space: Systematic biases in determining another's fixation distance from gaze vergence in upright and inverted faces
Author Affiliations
  • Alysha T. T. Nguyen
    School of Psychology, University of New South Wales Sydney, Sydney, Australia
  • Colin W. G. Clifford
    School of Psychology, University of New South Wales Sydney, Sydney, Australia
    Colin.Clifford@unsw.edu.au
Journal of Vision September 2019, Vol.19, 5. doi:https://doi.org/10.1167/19.11.5
Abstract

The eyes of others play a crucial role in social interactions, providing information such as the focus of another's attention and their current thoughts and emotions. Although much research has focused on understanding how we perceive gaze direction, little has been done on gaze vergence, which can potentially yield information about the distance of another's fixation. Here, we presented participants with synthetic faces in a stereoscopically simulated 3-D environment to determine the absolute fixation distance at which they perceived a face to be gazing. The results showed an underestimation in fixation distance for downward-averted gaze and a limit in discrimination of gaze vergence beyond approximately 35 cm. For inverted faces, fixation distance for gaze vergence in the lower visual field (corresponding to the avatar's upward gaze) was underestimated, suggesting that our bias to underestimate others' fixation distance may rely on a viewer-centered, egocentric representation of interpersonal space.

Introduction
Our eyes are an essential part of vision, allowing us to actively explore and perceive the world around us. Yet they also serve an additional role—as a communicative cue to others in social interactions. Our eyes can indicate the focus of our attention and, in doing so, our current thoughts and intentions (Baron-Cohen, 1995). Understanding where someone is looking can thus inform us of objects in the environment of which we might not otherwise be aware, from dangerous predators lurking in the bushes to potential food sources or objects of interest. Eye gaze can also be used to initiate conversation with others, coordinate attention during joint action, and learn word-object associations during language acquisition in early development (Morales et al., 2000). 
Eye gaze can be decomposed into two components: gaze direction—that is, someone's line of sight—and gaze vergence—that is, the angle between the eyes. The vergence of the eyes indicates the distance of another person's fixation: The closer their object of fixation, the more converged their eyes. Conversely, the farther their object of fixation, the more parallel the eyes. So far, the vast majority of the gaze-perception literature has focused on investigating gaze direction (Mareschal, Otsuka, & Clifford, 2014; Otsuka, Mareschal, Calder, & Clifford, 2014), and little is known about the perception of gaze vergence (Nguyen, Palmer, Otsuka, & Clifford, 2018). However, both gaze direction and gaze vergence are necessary for understanding where someone is looking in three-dimensional space. By providing extra information about fixation depth, gaze vergence can help to refine a general sense of what direction another person is looking (e.g., to the left or to the right) to recover a precise point of fixation in three-dimensional space. 
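The geometry linking vergence to fixation distance can be sketched numerically. As an illustration (not part of the study's stimulus code), assuming symmetric fixation straight ahead and a nominal interpupillary distance of 6.3 cm, the vergence angle falls off sharply with distance:

```python
import math

def vergence_angle_deg(fixation_cm, ipd_cm=6.3):
    """Vergence angle (degrees) between the two lines of sight for a
    fixation point straight ahead at the given distance."""
    return math.degrees(2 * math.atan(ipd_cm / (2 * fixation_cm)))

# Near fixation requires strongly converged eyes; far fixation is
# close to parallel:
for d_cm in (10, 25, 50, 100):
    print(f"{d_cm:>4} cm -> {vergence_angle_deg(d_cm):.1f} deg")
```

This inverse-tangent relationship is what makes vergence an informative cue to depth at near distances and a progressively weaker one at far distances.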
We recently reported a systematic bias to perceive gaze as more convergent, especially when gaze is directed downward compared to upward (Nguyen et al., 2018). However, in that study, participants judged fixation distance on a relative scale by moving the mouse on a continuous slider that varied from “very close” to “very far.” The use of this relative scale to measure perceived fixation distance meant that it was unclear whether this bias represented an underestimation of fixation distance for downward gaze or an overestimation of fixation distance for upward gaze (or both). Furthermore, this asymmetry in vergence perception for downward compared to upward gaze may have been due to several factors: (a) There may be an inherent geometrical property of downward-averted eyes that leads to a perception of greater convergence. (b) We may be taking into account visual scene statistics, whereby objects on the ground tend to be closer than objects located at the same angle upward (Yang & Purves, 2003). Thus, if another person is gazing downward in our visual field, we may be more likely to judge their gaze to be fixating on closer distances (Nguyen et al., 2018). If this were the case, this would mean that an egocentric, viewer-centered perspective was employed in interpreting interpersonal space. (c) Alternatively, we may instead be adopting the other person's perspective and frame of reference when judging the fixation distance of their eyes. Although our perspective of upward and downward is identical to that of others in most situations, these two views can be in conflict when one person is at a different orientation from the other (Figure 1), such as when one is hanging upside-down from a tree branch or has tilted their head to the side. If there is an underestimation in fixation distance for the other person's downward gaze, this would mean that we are interpreting interpersonal space from the other person's frame of reference rather than our own. 
Figure 1
 
The four conditions, representing the viewer's and avatar's frames of reference. The top row corresponds to the viewer's sense of “up,” and the bottom row the viewer's sense of “down.” In contrast, the blue-bordered conditions correspond to the avatar's upward gaze, and the red-bordered conditions the avatar's downward gaze. The first column contains the stimuli shown in the upright-face condition, and the second contains the stimuli shown in the inverted-face condition.
In the present study, our first aim was to determine the absolute fixation distance at which participants perceive a face to be gazing. We also aimed to investigate whether the asymmetry in perceived fixation distance as a function of gaze direction was due to processing interpersonal space from our own frame of reference or that of others. We did this by presenting participants both with faces that were upright and with faces that were inverted. In order to accurately test the perception of gaze vergence and fixation distance, the experiment was conducted in a stereoscopically simulated three-dimensional environment. 
Methods
Participants
Participants were six experienced psychophysical observers recruited from the lab, including the two authors. The sample size was selected prior to data collection based on conventions in visual psychophysics and recent studies on gaze perception from this laboratory (Palmer & Clifford, 2017a, 2017b). This approach was used rather than a formal power analysis because of the novelty of the effect tested in the present study, which meant that a precise effect size could not be estimated. This experiment was approved by the University of New South Wales Human Research Ethics Advisory Panel C (Psychology). 
Design
The experiment had an 8 × 2 × 2 within-subject design with three independent variables: gaze vergence (fixation distances of 10, 15, 20, 25, 30, 35, 40, and 45 cm), vertical gaze direction (10° up and 10° down), and face orientation (upright and inverted). The dependent variable was perceived fixation distance, measured as participants' choice of one of 10 target spheres (corresponding to fixation distances of 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 cm). Each of the 32 conditions was repeated 16 times, resulting in a total of 512 trials. 
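The factorial structure above can be sanity-checked with a short enumeration (variable names here are illustrative):

```python
from itertools import product

distances_cm = [10, 15, 20, 25, 30, 35, 40, 45]  # gaze vergence levels
directions = ["10 deg up", "10 deg down"]        # vertical gaze direction
orientations = ["upright", "inverted"]           # face orientation
repetitions = 16

conditions = list(product(distances_cm, directions, orientations))
total_trials = len(conditions) * repetitions
print(len(conditions), total_trials)  # 32 conditions, 512 trials
```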
Apparatus and stimuli
Stimuli were 3-D synthetic faces presented on a True3Di stereoscopic monitor (Redrover, Seoul, Republic of Korea) in a darkened room. Participants viewed the monitor through glasses with linearly polarizing filters at a distance of 2 m from the screen, such that each eye received a different image from corresponding regions of visual space. 
The synthetic face models and textures were created first in FaceGen Modeller 3.5 (Singular Inversions, Toronto, Canada), then imported into the scene-based modeling program Blender (The Blender Foundation, Amsterdam, The Netherlands) for further manipulation. In Blender, gaze vergence and gaze direction were manipulated by setting the precise rotation of the eyes and the positions of the target objects in three-dimensional space. 
In order to create a realistic percept of depth, images were generated from the perspective of two virtual cameras in Blender, with the positions of the cameras corresponding to the positions of the participant's eyes relative to the monitor when viewing the stimuli. In this way, the intercamera distance matched each participant's interpupillary distance (i.e., the distance between their pupils). Participants' interpupillary distances were measured prior to beginning the experiment using an Essilor digital pupillometer (Essilor Instruments, Charenton-le-Pont, France), and participants were presented with the pregenerated set of images with an intercamera distance that most closely matched their interpupillary distance. As the average interpupillary distance in humans is approximately 63 mm (Fesharaki, Rezaei, Farrahi, Banihashem, & Jahanbakhshi, 2012), image sets with intercamera distances of 56, 58, 60, 62, 64, and 66 mm were generated. 
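Matching each participant to the closest pregenerated image set reduces to a nearest-neighbor lookup; a minimal sketch (the function name is assumed, not taken from the study's code):

```python
def closest_camera_set(ipd_mm, available=(56, 58, 60, 62, 64, 66)):
    """Return the pregenerated intercamera distance (mm) closest to a
    participant's measured interpupillary distance."""
    return min(available, key=lambda d: abs(d - ipd_mm))

print(closest_camera_set(65.5))  # -> 66
print(closest_camera_set(63.0))  # exact ties resolve to the first candidate, 62
```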
Additionally, as stereovision was necessary for the perception of depth in the three-dimensional stereoscopic monitor setup, participants were tested for adequate stereovision with an OPTEC2500 vision tester (Stereo Optical Company, Chicago, IL). In this test, participants viewed a screen with nine diamond shapes, with each diamond containing four target circles. One of the four targets had a retinal disparity and appeared to be the odd one out. The participants' task was to identify which of the four targets in each diamond was the odd one out. Adequate stereovision was defined as the ability to correctly detect 400 arcsec of retinal disparity. All participants had adequate stereovision. 
The inverted faces were created by rotating the upright face images by 180° in MATLAB (MathWorks, Natick, MA). 
Procedure
Participants viewed a series of 3-D computer-generated faces whose eyes opened to fixate on a point 10, 15, 20, 25, 30, 35, 40, or 45 cm in front of the face, at an angle of 10° above direct gaze or 10° below direct gaze. After 0.5 s, 10 spheres then appeared along the face's line of sight, corresponding to distances of 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 cm (Figure 2). The fifth and 10th spheres (25 and 50 cm) were colored red to enable participants to identify sphere locations more easily. Participants were asked to judge which of the 10 spheres the face was looking at by pressing a key on the keyboard (1, 2, 3, 4, 5, 6, 7, 8, 9, or 0), where 1 corresponded to the sphere closest to the avatar's face and 0 corresponded to the sphere farthest from the face. Upon response, the avatar's eyes closed for 0.5 s before opening for the next trial. The order of trials was randomized across participants. 
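The response mapping described above can be expressed as a small lookup table; a sketch under the assumption that keys map linearly onto sphere depths in 5-cm steps:

```python
# Keys 1-9 then 0, from the sphere nearest the avatar (5 cm) to the
# farthest (50 cm), in 5-cm steps.
keys = "1234567890"
sphere_cm = {key: 5 * (i + 1) for i, key in enumerate(keys)}

print(sphere_cm["1"], sphere_cm["5"], sphere_cm["0"])  # 5 25 50
```

On this mapping, the red marker spheres at 25 and 50 cm correspond to the '5' and '0' keys.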
Figure 2
 
The order of animation frames in each trial: (i) Eyes are closed. (ii) Eyes open and fixate at a distance of 10, 15, 20, 25, 30, 35, 40, or 45 cm. (iii) Target spheres appear. (iv) Spheres disappear and eyes close after participant's response.
The face orientation conditions (upright and inverted) were completed in two separate sessions and their order counterbalanced across participants. Each session lasted approximately 45 min, resulting in 1.5 hr of testing in total per participant. 
Results
An 8 × 2 × 2 repeated-measures analysis of variance was conducted for perceived fixation distance (measured as the mean depth of the chosen target sphere), with gaze vergence (fixation distances of 10, 15, 20, 25, 30, 35, 40, and 45 cm), vertical gaze direction (10° up or 10° down), and face orientation (upright or inverted) as factors. 
Participants' perceived fixation distances were graphed as a function of the avatar's gaze fixation distance (Figure 3). There was a significant main effect of gaze vergence on perceived fixation distance, F(7, 160) = 58.056, p < 0.001, with the depth of participants' chosen target sphere increasing with the gaze vergence of the avatar's face. There was a nonsignificant main effect of vertical gaze direction, F(1, 160) = 2.568, p = 0.111, and a nonsignificant main effect of face orientation, F(1, 160) = 3.572, p = 0.061. A curve estimation following up the main effect of gaze vergence revealed that both linear and quadratic trends were significant, p < 0.001. Adding a cubic term to the model did not significantly improve the fit of the data. 
Figure 3
 
Mean depth of participants' chosen target sphere as a function of the avatar's gaze fixation distance for (A) the upright-face condition and (B) the inverted-face condition. Example images of face stimuli are taken from the left virtual camera in Blender. The error bars represent the standard error of the mean for each fixation distance, averaged across participants.
Of the two-way interaction effects, only the Vertical gaze direction × Face orientation interaction was significant, F(1, 160) = 51.759, p < 0.001. The avatar's downward gaze in the upright-face condition and upward gaze in the inverted-face condition were on average perceived to be fixating on distances 26.1% closer than its upward gaze in the upright condition and downward gaze in the inverted condition. Further t tests were conducted on each term of the interaction, with results showing a significant difference for five of the six contrasts: DownUpright vs. DownInverted, t(47) = −7.113, p < 0.001; DownUpright vs. UpUpright, t(47) = −8.438, p < 0.001; DownUpright vs. UpInverted, t(47) = −7.087, p < 0.001; DownInverted vs. UpInverted, t(47) = 4.435, p < 0.001; and UpUpright vs. UpInverted, t(47) = 4.947, p < 0.001. These contrasts remained significant after a Bonferroni correction for multiple comparisons, with the p values falling below the adjusted critical threshold of 0.0083. The difference between DownInverted and UpUpright was not significant, t(47) = 0.410, p = 0.684. The Gaze vergence × Vertical gaze direction interaction, F(7, 160) = 0.105, p = 0.998, and Gaze vergence × Face orientation interaction, F(7, 160) = 0.238, p = 0.975, were nonsignificant. 
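The adjusted critical threshold quoted above follows directly from dividing the conventional alpha level by the number of contrasts tested:

```python
alpha = 0.05
n_contrasts = 6  # the six pairwise contrasts reported above
bonferroni_threshold = alpha / n_contrasts
print(round(bonferroni_threshold, 4))  # 0.0083
```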
The three-way Gaze vergence × Vertical gaze direction × Face orientation interaction was not significant, F(7, 160) = 0.442, p = 0.8741. Individual subject data are presented in Supplementary Figure S1. 
Discussion
The present study had two main aims. The first was to investigate the perception of absolute fixation distance from others' gaze vergence in downward-averted gaze compared to upward-averted gaze. This was investigated by asking participants to choose target spheres corresponding to the perceived fixation distance of a series of upright faces. Consistent with Nguyen et al. (2018), the results of the upright-face condition indicated that downward gaze was perceived to be fixating on closer distances compared to upward gaze. Additionally, by using participants' choice of target spheres as an absolute measure of the avatar's gaze fixation distance rather than a relative scale, we are able to conclude that this difference in perception for upward compared to downward gaze represents a systematic underestimation of gaze fixation distance for downward gaze. The results, as depicted in Figure 3A, also indicate that although our perception of fixation distance increases monotonically with the fixation distance of the avatar, our ability to discriminate gaze fixation distance from vergence declines markedly for distances beyond approximately 35 cm. We speculate that this reduced ability to discriminate gaze fixation distances beyond 35 cm may be related to the nonlinear relationship between gaze distance and eye vergence (i.e., the change in vergence angle between two fixation distances a fixed interval apart is much greater when both distances are close to the face than when both are far). This trend is mirrored in the inverted-face condition (Figure 3B). As face inversion has been shown to disrupt facial recognition (Yin, 1969) and integration of facial features (Young, Hellawell, & Hay, 1987), our results suggest that the processing of gaze vergence does not rely heavily on encoding the configuration of other facial features. 
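The nonlinearity invoked here can be illustrated with simple inverse-tangent geometry (a sketch assuming symmetric fixation straight ahead and a 6.3-cm interpupillary distance, values not taken from the study itself): a 5-cm step in fixation distance produces a far larger change in vergence angle near the face than beyond 35 cm.

```python
import math

def vergence_deg(d_cm, ipd_cm=6.3):
    # Symmetric vergence angle (degrees) for fixation straight ahead.
    return math.degrees(2 * math.atan(ipd_cm / (2 * d_cm)))

near_step = vergence_deg(10) - vergence_deg(15)  # 10 -> 15 cm
far_step = vergence_deg(40) - vergence_deg(45)   # 40 -> 45 cm
print(f"near step: {near_step:.2f} deg, far step: {far_step:.2f} deg")
```

Under these assumptions the near step is roughly an order of magnitude larger than the far step, consistent with the observed loss of discriminability at far fixation distances.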
The fact that the avatar's downward gaze in the inverted-face condition was not underestimated, as it was in the upright-face condition, also shows that the underestimation effect we found cannot be explained by an inherent geometrical property of the eyes. 
Our second aim was to determine whether the perception of fixation distance from others' gaze vergence involves processing interpersonal space from the other's frame of reference or from our own. We were able to disambiguate the participant's and the avatar's frames of reference by presenting participants with both upright and inverted faces (Figure 1). In this way, the participant's sense of “up” would be represented by the avatar's upward gaze in the upright condition and downward gaze in the inverted condition. Similarly, the participant's sense of “down” would be represented by the avatar's downward gaze in the upright condition and upward gaze in the inverted condition. As in the upright-face condition, the results for the inverted-face condition showed a significant difference in the perception of fixation distance between upward and downward gaze. However, whereas in the upright-face condition it was the fixation distance of the avatar's downward gaze that was underestimated, in the inverted-face condition it was instead the fixation distance of the avatar's upward gaze. The two conditions that lead to underestimation have in common that both correspond to gaze directed into the participant's lower visual field and their own sense of “down.” Thus, these results indicate that the perception of others' gaze fixation distance from vergence is influenced by the viewer's frame of reference rather than the other's. 
In particular, it is the viewer's sense of where the ground is that overrides that of the avatar, leading to a consistent underestimation of gaze fixation distance for gaze directed in the viewer's lower visual field. This underestimation for gaze located downward from the viewer's perspective is consistent with an explanation based on visual scene statistics (Nguyen et al., 2018), whereby objects on the ground are likely to be much closer to the viewer than objects located at the same angle upward, where there is no ground level to restrict potential distance (Yang & Purves, 2003). Additionally, these results highlight the special importance of the ground in the perception of distance in visual space, as proposed by Gibson's ground theory (Bian, Braunstein, & Andersen, 2005; Gibson, 1946/1958, 1950). Gibson proposed that the ground surface is both universal and crucial for determining the layout and distance of objects in visual scenes. The importance of the ground plane is further highlighted in studies reporting faster visual-search times on ground surfaces compared to ceiling surfaces (McCarley & He, 2000) and faster reaction times in change-detection tasks for objects on the ground plane (Ozkan & Braunstein, 2010). Our results similarly suggest that the ground plane plays an important role in the judgment of fixation distance from others' gaze vergence. 
Although we were able to distinguish between the avatar's and the viewer's frame of reference in the current experiment, we were unable to distinguish between the viewer's frame of reference and the environmental frame of reference. As the ground plane was always in the participants' lower visual field in the current experiment, it is unclear whether any underestimation in gaze fixation distance corresponds to the overriding influence of the ground plane or simply the influence of the viewer's subjective sense of downward. Research has found advantages in visual processing in the lower visual field (Karim & Kojima, 2010), and it is possible that these asymmetries in the visual field are responsible for the asymmetry in the perception of gaze fixation distance. A future experiment could attempt to disentangle the two by presenting face stimuli on the side, or by having participants tilt their heads to the side or upside down. 
Conclusions
In summary, the present study further explores the perception of gaze vergence by investigating the absolute fixation distance at which participants perceive an avatar's face to be gazing. The results indicated a limit in discrimination of gaze vergence beyond fixation distances of 35 cm, and a significant underestimation of gaze fixation distance when gaze was directed downward in the upright-face condition and upward in the inverted-face condition. These results provide support for a viewer-centered perspective on gaze perception, suggesting that we represent interpersonal space egocentrically when judging the gaze of others. 
Acknowledgments
This work was supported by Australian Research Council Discovery Project DP160102239 to CWGC. ATTN and CWGC designed the experiment. ATTN performed testing, data collection, data analysis, and interpretation under the supervision of CWGC. ATTN drafted the manuscript and CWGC provided edits. 
Commercial relationships: none. 
Corresponding author: Colin W. G. Clifford. 
Address: School of Psychology, University of New South Wales Sydney, Sydney, Australia. 
References
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Bian, Z., Braunstein, M. L., & Andersen, G. J. (2005). The ground dominance effect in the perception of 3-D layout. Perception & Psychophysics, 67 (5), 802–815.
Fesharaki, H., Rezaei, L., Farrahi, F., Banihashem, T., & Jahanbakhshi, A. (2012). Normal interpupillary distance values in an Iranian population. Journal of Ophthalmic and Vision Research, 7, 231–234.
Gibson, J. J. (1950). The perception of the visual world. Oxford, UK: Houghton-Mifflin.
Gibson, J. J. (1958). Perception of distance and space in the open air. In Beardslee D. C. & Wertheimer M. (Eds.), Readings in perception (pp. 415–431). Princeton, NJ: Van Nostrand. (Reprinted from Motion picture testing and research [Army Air Force Aviation Psychology Program Research Report No. 7], pp. 185–187, by J. J. Gibson, Ed., 1946, Washington, DC: United States Government Publishing Office)
Karim, A. R., & Kojima, H. (2010). The what and why of perceptual asymmetries in the visual domain. Advances in Cognitive Psychology, 6, 103–115.
Mareschal, I., Otsuka, Y., & Clifford, C. (2014). A generalized tendency toward direct gaze with uncertainty. Journal of Vision, 14 (12): 27, 1–9, https://doi.org/10.1167/14.12.27. [PubMed] [Article]
McCarley, J. S., & He, Z. J. (2000). Asymmetry in 3-D perceptual organization: Ground-like surface superior to ceiling-like surface. Perception & Psychophysics, 62, 540–549.
Morales, M., Mundy, P., Delgado, C. E., Yale, M., Neal, R., & Schwartz, H. K. (2000). Gaze following, temperament, and language development in 6-month-olds: A replication and extension. Infant Behavior and Development, 23 (2), 231–236.
Nguyen, A. T. T., Palmer, C. J., Otsuka, Y., & Clifford, C. W. (2018). Biases in perceiving gaze vergence. Journal of Experimental Psychology: General, 147 (8), 1125–1133.
Otsuka, Y., Mareschal, I., Calder, A., & Clifford, C. (2014). Dual-route model of the effect of head orientation on perceived gaze direction. Journal of Experimental Psychology: Human Perception and Performance, 40 (4), 1425–1439, https://doi.org/10.1037/a0036151.
Ozkan, K., & Braunstein, M. L. (2010). Background surface and horizon effects in the perception of relative size and distance. Visual Cognition, 18 (2), 229–254.
Palmer, C. J., & Clifford, C. W. (2017a). Functional mechanisms encoding others' direction of gaze in the human nervous system. Journal of Cognitive Neuroscience, 29 (10), 1725–1738.
Palmer, C. J., & Clifford, C. W. (2017b). Perceived object trajectory is influenced by others' tracking movements. Current Biology, 27 (14), 2169–2176.
Yang, Z., & Purves, D. (2003). A statistical explanation of visual space. Nature Neuroscience, 6 (6), 632–640.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81 (1), 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16 (6), 747–759.
Supplement 1