Article | April 2013
Gaze categorization under uncertainty: Psychophysics and modeling
Author Affiliations
  • Isabelle Mareschal
    School of Psychology, The University of Sydney, NSW, Australia
    Australian Centre of Excellence in Vision Science, The University of Sydney, NSW, Australia
    Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
    i.mareschal@qmul.ac.uk
  • Andrew J. Calder
    MRC Cognition and Brain Sciences Unit, Cambridge, UK
    andy.calder@mrc-cbu.cam.ac.uk
  • Mark R. Dadds
    Department of Psychology, The University of New South Wales, NSW, Australia
    m.dadds@unsw.edu.au
  • Colin W. G. Clifford
    School of Psychology, The University of Sydney, NSW, Australia
    Australian Centre of Excellence in Vision Science, The University of Sydney, NSW, Australia
    colin.clifford@sydney.edu.au
Journal of Vision April 2013, Vol.13, 18. doi:https://doi.org/10.1167/13.5.18
Abstract

The accurate perception of another person's gaze direction underlies most social interactions and provides important information about his or her future intentions. As a first step to measuring gaze perception, most experiments determine the range of gaze directions that observers judge as being direct: the cone of direct gaze. This measurement has revealed the flexibility of observers' perception of gaze and provides a useful benchmark against which to test clinical populations with abnormal gaze behavior. Here, we manipulated effective signal strength by adding noise to the eyes of synthetic face stimuli or removing face information. We sought to move beyond a descriptive account of gaze categorization by fitting a model to the data that relies on changing the uncertainty associated with an estimate of gaze direction as a function of the signal strength. This model accounts for all the data and provides useful insight into the visual processes underlying normal gaze perception.

Introduction
Human vision is an active process whereby the eyes saccade to and track objects of interest. Such eye movements serve to position the retinal image of the object of interest on the fovea, ensuring that it is processed by the highest-acuity visual mechanisms. To an observer, the direction of your gaze reveals where you are looking and hence what you are looking at. This might be an object of shared attention, or it might be the observer himself or herself. The direction of your gaze is thus a strong social signal to your intentions and future actions (Baron-Cohen, 1995). As such, understanding the mechanisms by which another's gaze is perceived and interpreted is an active area of interest in the burgeoning field of social neuroscience (for reviews, see Itier & Batty, 2009; Nummenmaa & Calder, 2009).
Early psychophysical reports suggest that observers can estimate the direction of gaze of a human subject with very high acuity (Gibson & Pick, 1963; Martin & Rovira, 1981) but that perceived gaze direction is influenced by head orientation (Anstis, Mayhew, & Morley, 1969) in a manner that does not simply reflect changes in the visible part of the eye (Langton, Honeyman, & Tessler, 2004; Todorovic, 2006). More recent accounts have provided different (although not conflicting) explanations for encoding gaze. For example, Ando (2004) suggests that gaze is estimated based on a local luminance analysis between the eye and the surrounding region, and Calder, Jenkins, Cassel, and Clifford (2008) propose a multichannel process, whereby the direction of gaze is determined by the relative activity of channels that encode direct and averted gaze. Recently, Gamer and Hecht (2007) proposed the concept of a cone of direct gaze: the range of directions judged as being direct. To measure this, these authors instructed observers to center or decenter the horizontal direction of the eyes in a virtual head and measured the boundaries between direct and averted gaze. Using the decentering technique (a method of limits), they calculated the cone of direct gaze, defined as the angular difference between the observer-defined leftward and rightward boundaries, to be roughly 8° to 9° in diameter, revealing that observers were quite liberal in their judgments. More recently, a procedure was developed that bypasses the potential for systematic measurement errors associated with Gamer and Hecht's method of limits (see Hock & Schöner, 2010). Ewbank, Jennings, and Calder (2009) and Stoyanova, Ewbank, and Calder (2010) asked observers to categorize the deviation of gaze in faces as being direct, averted to the left, or averted to the right. With this technique, they were able to reliably measure the influence of facial expression and vocal signals on the cone of direct gaze.
A clear strength of this latter technique as a behavioral measure of gaze perception is the ease with which it can be applied to children and clinical populations to chart the development of both normal and abnormal gaze processing. Vida and Maurer (2012) have used it to measure developmental changes in the ability to discriminate between direct and averted gaze along both the horizontal and vertical dimensions. They report that the horizontal cone of direct gaze is wider in children younger than 6 years than in adults, suggesting a later development for fine-grained sensitivity to gaze. This approach could also prove useful in studying children with autism, who show decreased sensitivity to direct gaze (Senju, Yaguchi, Tojo, & Hasegawa, 2003) and problems in discriminating small angles of gaze (0°, 2°, 4°, and 8° left/right; Campbell et al., 2006; see also Webster & Potter, 2008). As such, applying a gaze categorization methodology to clinical populations who display abnormal gaze behavior is useful for investigating the mechanisms underlying some of their psychopathologies (e.g., Gamer, Hecht, Seipp, & Hiller, 2011; Langdon, Corner, McLaren, Coltheart, & Ward, 2006; Ristic et al., 2005; Senju, Kikuchi, Hasegawa, Tojo, & Osanai, 2008). Gamer et al. (2011), for example, propose to use the width of the cone of direct gaze as a measure of social phobia. However, a shortcoming of the above methods is that they are agnostic as to the processes underlying gaze categorization; a formalized mechanistic approach to gaze categorization is currently lacking.
We developed a simple psychophysical model that accounts for gaze categorization using changes in observers' uncertainty (i.e., how unsure they are about the stimulus) that reflects changes in both the internal noise (of the observer) and the external noise (imposed on the stimulus). We measured categorization under different levels of uncertainty and used the model to determine not simply how an observer's performance declined (e.g., by adding noise to the stimulus) but specifically what accounted for the decline. For example, is it due to a narrowing of category boundaries for “averted leftward” and “averted rightward,” or to a noisy sensory representation of the stimulus, or some combination of the two? Being able to identify the source of error in observers' (or clinical populations') behavior is a critical first step in designing tools that can help overcome it. Note that the model, as presented, is agnostic to higher social cues that are known to influence or interact with judgments of gaze direction. For example, the emotional content of the face (e.g., Ewbank et al., 2009; Lobmaier, Tiddeman, & Perrett, 2008) and its perceived attractiveness (e.g., Kloth, Altmann, & Schweinberger, 2011) can influence the perception of direct gaze. Here, we sought to determine how the visual signal (on which higher cortical areas make judgments) is represented under conditions of high and low uncertainty. 
To develop our model, we measured the range of gaze directions that observers judge as direct and fit the data using conventional psychometric methods (i.e., logistic functions were fit to the leftward and rightward data). We then fit the same data with our psychophysical model based on early visual processes using one fewer free parameter. We verified that our model provided a good fit to the data and then examined the model performance by experimentally varying signal strength via the following two manipulations: (a) adding noise to the eye region and (b) removing the head to create eyes-only stimuli. It has been suggested that from an evolutionary point of view, increasing uncertainty should lead observers to report gaze more often as being direct because “the danger of a miss (being eyed up by a predator) is much greater than a few false alarms” (Langton et al., 2004, p. 756). Earlier experiments also report that the presence or absence of head context and head orientation influences gaze categorization (e.g., Anstis et al., 1969; Gibson & Pick, 1963; Jenkins & Langton, 2003). For example, the Wollaston effect (Wollaston, 1824) shows that when the head position and eye position are incongruent, gaze is generally biased in the direction of the head, such that its perceived direction falls between the two (Cline, 1967; Langton et al., 2004). We therefore determined whether adding noise to the eyes or removing the face modified the cone of direct gaze and, if so, what underpinned this change in the cone size. 
Methods
Observers
Two of the authors (I.M. and C.W.G.C.) and seven naïve observers (four male) served as subjects (mean age = 31.2 years; SD = 7.6 years). All wore optical correction as necessary. All experiments adhered to the Declaration of Helsinki guidelines. 
Apparatus and stimuli
A Dell XPS computer running Matlab™ (MathWorks Ltd) was used for stimulus generation, experiment control, and recording subjects' responses. The programs controlling the experiment incorporated elements of the PsychToolbox (Brainard, 1997). Stimuli were displayed on a Sony Trinitron 20SE monitor (1024 × 768 pixels, refresh rate: 75 Hz) driven by the computer's built-in NVIDIA GeForce GTS 240 graphics card. The display was calibrated using a photometer and linearized using look-up tables in software. At the viewing distance of 57 cm, one pixel subtended 2.2 arcmin. 
Stimuli
Face stimuli
Eight gray-scale faces with neutral expressions were created with Daz software (http://www.daz3d.com/). One of the female faces is shown in Figure 1a, for the five gaze deviations tested. The hair was cropped and the face was presented within a circular aperture in the middle of the monitor. The stimuli subtended on average 15.1° × 11.2° and were viewed at 57 cm in a dimly lit room. To control the direction of gaze, the original eyes in the faces were replaced using Gimp software by gray-scale eye stimuli created using Matlab. The deviation of each eye was independently controlled using Matlab procedures that gave us precision down to the nearest pixel for eye rotation along the horizontal axis. 
Figure 1
 
Sample stimuli used in experiments, under the three conditions tested. (a) In the faces condition, four female (only one shown here) and four male faces with neutral expressions were used. Negative gaze deviations represent gaze averted to the left of the subject, and positive deviations represent gaze averted to the right of the subject. (b) In the noisy faces condition, fractal noise was added to the eyes only (shown here for the same female face). (c) In the eyes-only condition, only the eye region of the same eight faces was presented. This was achieved by applying an elliptical raised cosine contrast envelope over each eye (same female face as in a).
Noisy faces
Fractal noise (1/f amplitude spectrum) was added to the eyes of the same faces. The noise was held constant at 6% r.m.s. contrast. The Michelson contrast between the pupil and sclera of the eyes was 7.5% for all observers except C.W.G.C., for whom it was 10%. Figure 1b shows the same female face with noise added to the 10% eye contrast stimuli.
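As an illustration of this type of stimulus noise, the following is a minimal sketch (in Python rather than the authors' Matlab; the patch size, seed, and mean-luminance normalization convention are assumptions) of how a 1/f-amplitude fractal noise patch with a specified r.m.s. contrast can be synthesized:

```python
import numpy as np

def fractal_noise(size=64, rms_contrast=0.06, seed=0):
    """Generate a 1/f-amplitude (fractal) noise patch at a given r.m.s. contrast.

    The returned image has mean luminance 1.0, so its standard deviation
    equals the r.m.s. contrast (an assumed normalization convention).
    """
    rng = np.random.default_rng(seed)
    # White Gaussian noise, taken to the frequency domain.
    spectrum = np.fft.fft2(rng.standard_normal((size, size)))
    # Radial spatial frequency of each FFT component; avoid dividing by zero at DC.
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0
    # Impose the 1/f amplitude spectrum and return to the spatial domain.
    noise = np.real(np.fft.ifft2(spectrum / f))
    noise -= noise.mean()
    # Scale so that SD / mean luminance equals the requested r.m.s. contrast.
    noise *= rms_contrast / noise.std()
    return 1.0 + noise

patch = fractal_noise()           # 6% r.m.s. contrast, as in the experiment
print(patch.mean(), patch.std())  # ~1.0 and ~0.06
```

Filtering white noise by 1/f in the frequency domain concentrates energy at low spatial frequencies, producing the cloudy, fractal texture added to the eye region.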
Eyes-only stimuli
Only the (noiseless) eyes of the same eight faces were used in this condition to examine cue combination (Figure 1c). The stimuli subtended on average 1.3° × 7.2° (the same size as when within the head context). 
Procedure
The observers' task was to indicate whether the direction of gaze in the three different conditions was averted to the left, direct, or averted to the right using key-presses "j," "k," and "l," respectively. Each stimulus was presented for 400 ms followed by a gray screen that lasted 600 ms, during which no response was recorded. The next trial was initiated only after a response was made following the 600-ms wait period. The female and male faces were tested separately, and the data were combined as there was no effect of face gender on performance (t(8) = 0.87, p > 0.1). Stimuli were presented using a method of constant stimuli with nine different directions of gaze selected from the set {−9°, −6°, −3°, −1°, 0°, 1°, 3°, 6°, 9°} or, when noise was added, {−20°, −6°, −3°, −1°, 0°, 1°, 3°, 6°, 20°} for those observers who failed to correctly identify the most extreme gaze directions more than 70% of the time in an initial practice run. Each direction of gaze was sampled 12 times in a run (16 times for observer I.M.). Observers performed a minimum of six runs, with equal presentation of the male and female faces. An example of the data compiled across runs is shown in Figure 2a for observer I.M.
Figure 2
 
Gaze categorization procedure and model fits. (a) Sample observer (I.M.) responses to different directions of gaze: averted to the left (blue diamonds), direct (pink squares), or averted to the right (red triangles). Data were averaged across all runs, and error bars are ±1 SEM. Faces show the leftmost and rightmost directions of gaze tested for this observer (±9°). (b) The standard logistic fit to the normalized data. (c) The psychophysical Gaussian model (scale increased for visibility) showing an observer's sensory representation of the gaze stimulus whose direction is indicated by the arrow on the x-axis. The likelihood of the observer responding “direct” to the direction of gaze indicated by the arrow corresponds to the area of the Gaussian in the gray region. The likelihood of the observer responding “left” corresponds to the area of the Gaussian in the white region, and the likelihood of responding “right” is zero. (d) Model fit to the data.
Logistic fit
Data from the six runs were compiled, and logistic functions were fitted to the proportions of "left" and "right" responses. A function for "direct" responses was calculated by subtracting the sum of the left and right responses from 1 (Figure 2b). These three functions were fitted as an ensemble using the Nelder-Mead simplex method (Nelder & Mead, 1965), implemented via Matlab's fminsearch function, to minimize the residual variance. The crossover points between the direct and left response functions (L1) and between the direct and right response functions (R1) are termed the categorical boundaries, and the separation between the two is taken as the cone of direct gaze.
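A minimal sketch of such an ensemble fit is given below (in Python with scipy rather than the authors' Matlab fminsearch; the response proportions and starting parameters are hypothetical). The left and right response curves are modeled as logistic functions, the direct curve is derived as 1 minus their sum, and Nelder-Mead minimizes the summed squared residuals over all three categories simultaneously:

```python
import numpy as np
from scipy.optimize import minimize

def logistic(x, x0, slope):
    """Logistic function rising from 0 to 1 around x0."""
    return 1.0 / (1.0 + np.exp(-(x - x0) / slope))

def ensemble_loss(params, gaze, p_left, p_direct, p_right):
    """Summed squared residuals of the left/direct/right ensemble."""
    left_x0, left_slope, right_x0, right_slope = params
    left_fit = 1.0 - logistic(gaze, left_x0, left_slope)  # falls with gaze angle
    right_fit = logistic(gaze, right_x0, right_slope)     # rises with gaze angle
    direct_fit = 1.0 - left_fit - right_fit               # whatever remains
    residuals = np.concatenate([left_fit - p_left,
                                direct_fit - p_direct,
                                right_fit - p_right])
    return np.sum(residuals ** 2)

# Hypothetical compiled response proportions for the nine gaze directions.
gaze = np.array([-9, -6, -3, -1, 0, 1, 3, 6, 9], dtype=float)
p_left = np.array([0.98, 0.90, 0.55, 0.15, 0.02, 0.01, 0.00, 0.00, 0.00])
p_right = p_left[::-1].copy()
p_direct = 1.0 - p_left - p_right

fit = minimize(ensemble_loss, x0=[-3.0, 1.0, 3.0, 1.0],
               args=(gaze, p_left, p_direct, p_right), method="Nelder-Mead")
# Where the opposite response curve is near zero, each logistic midpoint
# approximately coincides with the crossover (categorical) boundary L1 or R1.
print("cone of direct gaze ~", fit.x[2] - fit.x[0], "deg")
```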
Psychophysical model
To quantify gaze perception and the changes that occur when the signal is reduced, we fitted a psychophysical model to the data, illustrated in Figure 2c. We assumed that an observer's sensory representation of a given gaze direction has a Gaussian probability distribution centered on that actual direction, with a standard deviation that represents the uncertainty associated with the estimate. The probability of a direct response was taken as the area under the Gaussian that fell within the category boundaries (Figure 2c, shown in gray). As such, the model had three free parameters: 
  1. An estimate of the peak (P0), which corresponds to the midpoint between the category boundaries and represents the gaze direction most often judged to be direct.
  2. An estimate of the width of direct judgments (w), which corresponds to the distance between the categorical boundaries L1 and R1.
  3. An estimate of the standard deviation (σrep) of the observers' sensory representation of a gaze stimulus (Figure 2c). This represents the uncertainty associated with the estimate and reflects the noise (internal and external) affecting the observer's sensory representation.
The respective probabilities of perceiving a stimulus as leftward, direct, and rightward are thus given by:

$$p_{\mathrm{left}} = \int_{-\infty}^{P_0 - w/2} G(\theta_{\mathrm{stim}}, \sigma_{\mathrm{rep}})\,d\theta$$

$$p_{\mathrm{direct}} = \int_{P_0 - w/2}^{P_0 + w/2} G(\theta_{\mathrm{stim}}, \sigma_{\mathrm{rep}})\,d\theta$$

$$p_{\mathrm{right}} = \int_{P_0 + w/2}^{+\infty} G(\theta_{\mathrm{stim}}, \sigma_{\mathrm{rep}})\,d\theta$$

where G(θ, σ) is a Gaussian distribution of mean θ and standard deviation σ, and θstim is the actual direction of gaze of the stimulus.
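Because each of these integrals is a difference of normal cumulative distribution functions, the model probabilities and the three-parameter fit are straightforward to compute. Below is a minimal sketch of that computation (Python; the function and variable names are assumptions, not the authors' code, and the data are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def category_probs(gaze, peak, width, sigma_rep):
    """Model probabilities of "left," "direct," and "right" responses.

    The sensory representation of each gaze direction is Gaussian (mean =
    true direction, SD = sigma_rep); the response probabilities are the
    areas falling left of, between, and right of the category boundaries
    at peak ± width / 2.
    """
    lo, hi = peak - width / 2.0, peak + width / 2.0
    p_left = norm.cdf(lo, loc=gaze, scale=sigma_rep)
    p_right = 1.0 - norm.cdf(hi, loc=gaze, scale=sigma_rep)
    return p_left, 1.0 - p_left - p_right, p_right

def model_loss(params, gaze, data):
    """Summed squared error between the model and all three response curves."""
    return sum(np.sum((pred - obs) ** 2)
               for pred, obs in zip(category_probs(gaze, *params), data))

# Hypothetical data as in the logistic sketch above; starting guesses of
# peak 0 deg, 6 deg cone width, and 2 deg sensory noise.
gaze = np.array([-9, -6, -3, -1, 0, 1, 3, 6, 9], dtype=float)
p_left = np.array([0.98, 0.90, 0.55, 0.15, 0.02, 0.01, 0.00, 0.00, 0.00])
p_right = p_left[::-1].copy()
data = (p_left, 1.0 - p_left - p_right, p_right)
fit = minimize(model_loss, x0=[0.0, 6.0, 2.0], args=(gaze, data),
               method="Nelder-Mead")
peak, width, sigma_rep = fit.x
```

Note that this fit has three free parameters, against the four of the logistic ensemble (a midpoint and slope for each of the two fitted curves).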
Results
Figure 2a plots the unfitted data for one observer as a function of gaze direction in the noiseless condition and displays a typical range of direct responses. Figure 2b shows the data fit using the logistic functions. Figure 2c illustrates the model: The sensory representation of any given gaze direction has a Gaussian probability distribution whose standard deviation (σrep) is a parameter that is free to vary between observers and conditions. The likelihood of the observer responding “direct” is the area of the Gaussian within the category boundaries (shown in gray). Increasing the width of the Gaussian will increase the variability of the sensory representation, and responses will be less precise. 
Categorization data from nine observers are shown in Figure 3a for the noiseless condition and in Figure 3b for the noisy conditions. Solid lines are the logistic fits to the observers' proportion of leftward and rightward responses, and dashed lines are the model fits. There was very little difference between the two types of fit, revealing that the psychophysical model captures the data as well as the logistic fits but with one fewer free parameter (only observer A.A. in the noisy condition displayed a clear difference). Adding noise to the stimuli reduced the number of direct responses for a direct gaze deviation (the peak amplitude of the cone of direct gaze was reduced) for all observers except for R.M. We determined how good our model fits were to the data by calculating how much of the variance was accounted for by the model. In the noiseless condition, averaged across all observers, this was 98.5%; in the noisy condition, this was 90.1%. 
Figure 3
 
(a) Gaze categorization in the noiseless condition. Solid lines are logistic fits to the rightward and leftward data; dashed lines are the model fits to the rightward and leftward data. (b) Gaze categorization in the noisy condition. Solid and dashed lines same as in (a).
Estimates of the midpoints (peaks) between the categorical boundaries (Figure 4, left), widths (middle), and standard deviations of the noise (right) obtained from the model fits are plotted with 95% confidence intervals for the noisy and noiseless conditions. Adding noise to the eyes significantly increased the standard deviation of the Gaussian sensory representation (t(8) = 5.68, p < 0.001, right panel) but did not significantly alter the width (t(8) = 1.30, p > 0.2, middle) or peaks (t(8) = 0.02, p > 0.05, left panel). 
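Comparisons of this kind are paired t-tests across the nine observers' parameter estimates (df = 8). A minimal sketch with hypothetical per-observer values:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-observer sigma_rep estimates (deg) from the model fits.
sigma_noiseless = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0, 1.6])
sigma_noisy = np.array([3.9, 4.5, 3.2, 5.1, 4.0, 4.8, 3.6, 4.2, 3.4])

# Paired comparison across the nine observers gives df = n - 1 = 8,
# matching the t(8) values reported in the text.
t_stat, p_value = ttest_rel(sigma_noisy, sigma_noiseless)
print(f"t(8) = {t_stat:.2f}, p = {p_value:.4g}")
```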
Figure 4
 
Model fit estimates of peaks (left), widths (middle), and standard deviations (right) in the noiseless and noisy conditions. Solid lines are equality, and error bars are 95% confidence intervals.
It is worth noting that although the peak of the direct response curve was reduced when noise was added to the eyes, the total number of direct responses was not. We measured the area under the direct response curve in the noiseless and noisy conditions and found a nonsignificant trend toward a greater area in the noisy condition (Figure 5, left panel).
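The area measure can be obtained by trapezoidal integration of the direct response curve over the tested gaze deviations; a minimal sketch with hypothetical response proportions:

```python
import numpy as np
from scipy.integrate import trapezoid

# Hypothetical proportions of "direct" responses at each tested deviation.
gaze = np.array([-9, -6, -3, -1, 0, 1, 3, 6, 9], dtype=float)
direct_noiseless = np.array([0.02, 0.10, 0.45, 0.84, 0.96, 0.84, 0.45, 0.10, 0.02])
direct_noisy = np.array([0.10, 0.25, 0.50, 0.70, 0.78, 0.70, 0.50, 0.25, 0.10])

# Trapezoidal area under each curve (units: deg x proportion). A lower but
# broader curve can enclose as much area as a taller, narrower one.
print(trapezoid(direct_noiseless, gaze), trapezoid(direct_noisy, gaze))
```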
Figure 5
 
Area under the curve of “direct” responses. Noiseless faces against noisy faces (left) and noiseless faces against eyes-only (right). Solid line depicts equality.
Because the presence or absence of head context and head orientation has been suggested to affect perception of gaze direction (e.g., Anstis et al., 1969; Cline, 1967; Gibson & Pick, 1963; Jenkins & Langton, 2003; Langton et al., 2004; Stein & Sterzer, 2011), we sought to determine how this cue might influence performance by measuring gaze categorization with eyes-only stimuli and fitting the data with our model. We found that, for all observers, this led to fewer gaze directions being reported as direct relative to the noiseless faces condition (compare Figure 6 with Figure 3a). To quantify this, we measured the area under the curves for the direct responses and found that this area was significantly smaller in the eyes-only condition (t(8) = 4.12, p < 0.005; Figure 5, right panel). From the model fits, the width of the category boundaries significantly decreased (t(8) = 2.98, p < 0.05), but neither the peaks (t(8) = 1.28, p > 0.05) nor the standard deviation of the Gaussian distribution (t(8) = 0.45, p > 0.05) changed (Figure 7). This suggests that observers combine cues when they have access to both the direction of the head and the direction of the eyes. Because the head was always oriented toward the viewer, the net result of cue combination would be to increase the cone of direct eye gaze when the head was present. In the eyes-only condition, the model accounted for 97.7% of the variance, averaged across all observers.
Figure 6
 
Gaze categorization in the eyes-only condition. Solid lines are logistic fits; dashed lines are the psychophysical model.
Figure 7
 
Model fit estimates of peaks (left), widths (middle), and standard deviations (right) in the noiseless faces and eyes-only conditions. Solid lines are equality, and error bars are 95% confidence intervals.
Discussion
Research to date has provided no clear understanding of the processes underlying gaze categorization. To address this, we developed a psychophysical model of how the visual signal is encoded and fitted it to gaze categorization data under different conditions of signal strength. We find that the addition of noise to the stimulus leads to a decrease in the number of direct responses to a direct gaze stimulus but an increase in direct responses to nondirect gaze deviations. Using our model, we propose that this change in performance can be accounted for solely by a change in the standard deviation of the observers' sensory representation: As the signal decreases, the observers' uncertainty increases. The absence of a change in the widths of the category boundaries is not inconsistent with Gamer and Hecht (2007), who reported a shift in the centering only when uncertainty was manipulated by viewing distance. The absence of a change in the peaks (midpoints between the categorical boundaries) may simply reflect the fact that viewing distance affects a number of components (the stimulus's apparent contrast, uncertainty about the positions of the eyes, reduced acuity) and that the observers may have unwittingly shifted their criteria when all of these parameters were concordantly changed. It is also worth noting that the procedure Gamer and Hecht used is a method of limits (asking observers to shift the direction of the eyes until they no longer seemed direct) that has the potential for systematic measurement errors because it is prone to adaptation and hysteresis effects (e.g., Hock & Schöner, 2010). Finally, it is important to note that our model ignores higher-level social factors (e.g., perceived attractiveness, perceived dominance) that have been reported to influence judgments of gaze, although it could be extended in the future to include a modulatory role for these social cues, which could influence the sensory representation and/or the category boundaries.
We also find that when the direction of the head is removed as a cue (eyes-only stimuli), observers report fewer directions as being direct: The cone of direct gaze narrows. Because the heads were always facing forward, this cue is useful only when it is consistent with the (direct) gaze deviation. This is in agreement with earlier reports that observers combine information from the eyes and head direction in their estimate of gaze (Anstis et al., 1969; Cline, 1967; Gibson & Pick, 1963; Langton et al., 2004). However, it is worth noting that removing the heads may also alter some of the social influences on gaze perception, because the observer may no longer view the stimulus as a "face." Our model does not attempt to account for these potential changes in social cues but rather deals with the encoding of the visual signal. In our model, this difference in performance is accounted for by a change in the width of the category boundaries for eye gaze direction, consistent with the idea of cue combination rather than a change in uncertainty for the eyes-only condition. Perhaps surprisingly, the uncertainty in the sensory representation, as indexed by the standard deviation of the Gaussian likelihood function in the model, did not change in the eyes-only condition compared with the noiseless faces. One might have expected the addition of information about head direction in the noiseless condition to have reduced uncertainty when combined with the information from the eye region. However, this need not be the case if the standard deviation for faces is broader than for eyes-only or if the process of combining face and eye cues is itself noisy.
We have recently proposed a Bayesian framework for gaze perception along with empirical evidence for a prior tendency to report gaze as direct (Mareschal, Calder, & Clifford, 2013; see also Langton et al., 2004). The categorization experiments presented here cannot be used to extract a prior because a change in observers' performance could be due to the influence of the prior, a change in the subjective category boundaries between direct and averted gaze, or a combination of the two. Instead, we find that the simple model described here provides a good account of the effect of stimulus uncertainty on gaze categorization judgments.
In conclusion, our model has allowed us to examine the role of external and internal noise in determining performance. It has been reported that clinical populations with visual deficits such as amblyopia suffer from impaired detection of signals due to an increase in internal noise (e.g., Levi, Klein, & Chen, 2008). However, recent advances suggest that perceptual learning can improve performance on simple visual tasks through a reduction of internal noise and/or more efficient use of the stimulus (Huang, Lu, & Zhou, 2009; see Levi & Li, 2009, for a review). Being able to quantify the influence of noise on behavior in a comparative manner between normal and clinical populations could provide a first step in aiding or enhancing visual detection and discrimination. This may be particularly suited to clinical populations with behavioral psychopathologies associated with avoidance of the eyes, such as autism, social anxiety, and schizophrenia (e.g., Dadds, Jambrak, Pasalich, Hawes, & Brennan, 2011; Gamer et al., 2011; Neumann, Spezio, Piven, & Adolphs, 2006; Spezio, Adolphs, Hurley, & Piven, 2006). For example, it has been suggested that schizophrenic patients display an imbalance in dopamine that is required to optimize signal-to-noise ratios of local microcircuits (Winterer & Weinberger, 2004), a finding with testable repercussions for gaze categorization under different levels of noise. 
Acknowledgments
This work is supported by Australian Research Council Discovery Project DP120102589 to C.W.G.C. and A.J.C. C.W.G.C. is supported by an Australian Research Council Future Fellowship. A.J.C. is supported by the Medical Research Council, UK (grant ref. MC_US_A060_5PQ50). 
Commercial relationships: none. 
Corresponding author: Isabelle Mareschal. 
Email: i.mareschal@qmul.ac.uk; imareschal@gmail.com. 
Address: Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK. 
References
Ando S. (2004). Perception of gaze direction based on luminance ratio. Perception, 33, 1173–1184. [CrossRef] [PubMed]
Anstis S. M. Mayhew J. W. Morley T. (1969). The perception of where a face or television “portrait” is looking. American Journal of Psychology, 82, 474–489. [CrossRef] [PubMed]
Baron-Cohen S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Calder A. J. Jenkins R. Cassel A. Clifford C. W. G. (2008). Visual representation of eye gaze is coded by a nonopponent multichannel system. Journal of Experimental Psychology: General, 137, 244–261. [CrossRef]
Campbell R. Lawrence K. Mandy W. Mitra C. Jeyakuma L. Skuse D. (2006). Meanings in motion and faces: Developmental associations between the processing of intention from geometrical animations and gaze detection accuracy. Development and Psychopathology, 18, 99–118. [CrossRef] [PubMed]
Cline M. G. (1967). The perception of where a person is looking. American Journal of Psychology, 80, 41–50. [CrossRef] [PubMed]
Dadds M. R. Jambrak J. Pasalich D. Hawes D. J. Brennan J. (2011). Impaired attention to the eyes of attachment figures and the developmental origins of psychopathology. Journal of Child Psychology and Psychiatry, 52, 238–245. [CrossRef] [PubMed]
Ewbank M. P. Jennings C. Calder A. J. (2009). Why are you angry with me? Facial expressions of threat influence perception of gaze direction. Journal of Vision, 9 (12): 16, 1–7, http://www.journalofvision.org/content/9/12/16, doi:10.1167/9.12.16. [PubMed] [Article] [CrossRef] [PubMed]
Gamer M. Hecht H. (2007). Are you looking at me? Measuring the cone of gaze. Journal of Experimental Psychology: Human Perception and Performance, 33, 705–715. [CrossRef] [PubMed]
Gamer M. Hecht H. Seipp N. Hiller W. (2011). Who is looking at me? The cone of gaze widens in social phobia. Cognition and Emotion, 25, 756–764. [CrossRef] [PubMed]
Gibson J. J. Pick A. D. (1963). Perception of another person's looking behaviour. American Journal of Psychology, 76, 386–394. [CrossRef] [PubMed]
Hock H. S. Schöner G. (2010). Measuring perceptual hysteresis with the modified method of limits: Dynamics at the threshold. Seeing and Perceiving, 23, 173–195. [CrossRef] [PubMed]
Huang C. B. Lu Z. L. Zhou Y. (2009). Mechanisms underlying perceptual learning of contrast detection in adults with anisometropic amblyopia. Journal of Vision, 9 (11): 24, 1–14, http://www.journalofvision.org/content/9/11/24, doi:10.1167/9.11.24. [PubMed] [Article] [CrossRef] [PubMed]
Itier R. J. Batty M. (2009). Neural bases of eye and gaze processing: the core of social cognition. Neuroscience & Biobehavioral Reviews, 33, 843–863. [CrossRef]
Jenkins J. Langton S. R. H. (2003). Configural processing in the perception of eye-gaze direction. Perception, 32, 1181–1188. [CrossRef] [PubMed]
Kloth N. Altmann C. S. Schweinberger S. R. (2011). Facial attractiveness biases the perception of eye contact. Quarterly Journal of Experimental Psychology, 64, 1906–1918. [CrossRef]
Langdon R. Corner T. McLaren J. Coltheart M. Ward P. B. (2006). Attentional orienting triggered by gaze in schizophrenia. Neuropsychologia, 44, 417–429. [CrossRef] [PubMed]
Langton S. R. H. Honeyman H. Tessler E. (2004). The influence of head contour and nose angle on the perception of eye gaze direction. Perception & Psychophysics, 66, 752–771. [CrossRef] [PubMed]
Levi D. M. Klein S. A. Chen I. (2008). What limits performance in the amblyopic visual system: Seeing signals in noise with an amblyopic brain. Journal of Vision, 8 (4): 1, 1–23, http://www.journalofvision.org/content/8/4/1, doi:10.1167/8.4.1. [PubMed] [Article] [CrossRef]
Levi D. M. Li R. W. (2009). Perceptual learning as a potential treatment for amblyopia: a mini-review. Vision Research, 49, 2535–2549. [CrossRef]
Lobmaier J. S. Tiddeman B. P. Perrett D. I. (2008). Emotional expression modulates perceived gaze direction. Emotion, 8, 573–577. [CrossRef] [PubMed]
Mareschal I. Calder A. J. Clifford C. W. G. (2013). Humans have an expectation that gaze is directed toward them. Current Biology, in press, http://dx.doi.org/10.1016/j.cub.2013.03.030.
Martin W. W. Rovira M. L. (1981). An experimental analysis of discriminability and bias in eye-gaze judgment. Journal of Nonverbal Behavior, 5, 155–163. [CrossRef]
Nelder J. A. Mead R. (1965). A simplex method for function minimization. Computer Journal, 7, 308–313. [CrossRef]
Neumann D. Spezio M. L. Piven J. Adolphs R. (2006). Looking you in the mouth: Abnormal gaze in autism resulting from impaired top-down modulation of visual attention. Social Cognitive and Affective Neuroscience, 1, 194–202. [CrossRef] [PubMed]
Nummenmaa L. Calder A. J. (2009). Neural mechanisms of social attention. Trends in Cognitive Sciences, 13, 135–143. [CrossRef] [PubMed]
Ristic J. Mottron L. Friesen C. K. Iarocci G. Burack J. A. Kingstone A. (2005). Eyes are special but not for everyone: the case of autism. Cognitive Brain Research, 24, 715–718. [CrossRef] [PubMed]
Senju A. Kikuchi Y. Hasegawa T. Tojo Y. Osanai H. (2008). Is anyone looking at me? Direct gaze detection in children with and without autism. Brain and Cognition, 67, 127–139. [CrossRef] [PubMed]
Senju A. Yaguchi K. Tojo Y. Hasegawa T. (2003). Eye contact does not facilitate detection in children with autism. Cognition, 89, B43–B51. [CrossRef] [PubMed]
Spezio M. L. Adolphs R. Hurley R. S. E. Piven J. (2006). Abnormal use of facial information in high-functioning autism. Journal of Autism and Developmental Disorders, 37, 929–939. [CrossRef]
Stein T. Sterzer P. (2011). High-level face shape adaptation depends on visual awareness: evidence from continuous flash suppression. Journal of Vision, 11 (8): 5, 1–14, http://www.journalofvision.org/content/11/8/5, doi:10.1167/11.8.5. [PubMed] [Article] [CrossRef] [PubMed]
Stoyanova R. S. Ewbank M. P. Calder A. J. (2010). “You talking to me?”: Self-relevant auditory signals influence perception of gaze direction. Psychological Science, 21, 1765–1769. [CrossRef] [PubMed]
Todorovic D. (2006). Geometric basis of perception of gaze direction. Vision Research, 46, 3549–3562. [CrossRef] [PubMed]
Vida M. D. Maurer D. (2012). The development of fine-grained sensitivity to eye contact after 6 years of age. Journal of Experimental Child Psychology, 112, 243–256. [CrossRef] [PubMed]
Webster S. Potter D. D. (2008). Eye direction detection improves with development in autism. Journal of Autism and Developmental Disorders, 38, 1184–1186. [CrossRef] [PubMed]
Winterer G. Weinberger D. R. (2004). Genes, dopamine and cortical signal-to-noise ratio in schizophrenia. Trends in Neuroscience, 27, 683–690. [CrossRef]
Wollaston W. H. (1824). On the apparent direction of eyes in a portrait. Philosophical Transactions of the Royal Society of London, 114, 247–256. [CrossRef]