The optimal viewing position in face recognition
Janet H. Hsiao, Tina T. Liu
Journal of Vision, February 2012, Vol. 12(2), 22. doi:10.1167/12.2.22
Abstract

In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision.

Introduction
In the research on visual word recognition, the effect of optimal viewing position (OVP; O'Regan, Lévy-Schoen, Pynte, & Brugaillère, 1984) has been extensively studied. The OVP is the initial fixation location at which the best recognition performance is obtained. In English word recognition, the OVP has been shown to be to the left of the word center; this OVP effect has been argued to reflect an interplay of several variables (Brysbaert & Nazir, 2005), including:
1. Visual acuity, which drops dramatically from center to periphery. Accordingly, better recognition performance can be obtained when readers fixate the center letters than when they fixate the outer letters (e.g., O'Regan, 1981; O'Regan et al., 1984).
2. Information structure of the word stimuli: In English, word beginnings are usually more informative for identification than endings (e.g., Shillcock, Ellison, & Monaghan, 2000); thus, better recognition performance can be obtained when the initial fixation is directed to more informative parts of the words (e.g., Brysbaert, Vitu, & Schroyens, 1996; see also Farid & Grainger, 1996).
3. Perceptual learning (or reading direction): English is read from left to right, and thus letters are recognized in the right visual field (RVF) more often, resulting in better recognition performance in the RVF. In addition, it has been shown that presenting a visual stimulus repeatedly in one location of the visual field enhances discrimination sensitivity of the stimulus only in that particular location but not in a novel location due to low-level location-specific perceptual learning (e.g., Nazir & O'Regan, 1990). Thus, better recognition performance can be obtained when the initial fixation is directed to the location where readers fixate the most often during reading (i.e., the preferred landing position, PLP; Ducrot & Pynte, 2002; see also Rayner, 1979).
4. Hemispheric lateralization: The RVF has direct access to the left hemisphere (LH), where language processes are usually lateralized (Brysbaert & Nazir, 2005). In particular, Hunter, Brysbaert, and Knecht (2007) showed that the OVP in English word recognition could be modulated by individual differences in hemispheric dominance for language: Compared with people with LH language dominance, those with right hemisphere (RH) language dominance had their OVP shifted more to the right.
Thus, the phenomenon that the OVP in English word recognition is biased to the left of the word center can be accounted for by (2) information structure of the word stimuli, (3) perceptual learning, or (4) hemispheric asymmetry in word processing (Brysbaert & Nazir, 2005). Nevertheless, since all three factors predict a left-biased OVP, the relative contribution from each factor to the asymmetric OVP effect remains unclear.
Similar to words, faces are another type of visual stimulus to which we have frequent exposure, and we are all experts in face recognition because of early exposure and constant practice in daily life. It remains unclear whether the OVP effect can also be observed in face recognition and what factors contribute to the effect. Recent research has shown that people have a preference for looking at the left half-face when viewing faces (from the viewer's perspective). For example, Leonards and Scott-Samuel (2005) showed that participants' initial saccade was directed mostly to the left when viewing faces but not when viewing landscapes, fractals, or inverted faces. Hsiao and Cottrell (2008) showed that the PLP of the first fixation in face recognition was slightly to the left of the center. Butler et al. (2005) showed a preference for leftward saccades in the first fixation in a gender judgment task. Everdell, Marsh, Yurick, Munhall, and Paré (2007) showed that when viewing dynamic faces, participants fixated the left half-face more often than the right half-face, including their first fixation (see also Bindemann, Scheepers, & Burton, 2009; Mertens, Siegmund, & Grusser, 1993). In contrast, some studies showed that participants looked at the center of the face most often without any bias to either side, possibly due to differences in the stimuli and tasks used (e.g., Saether, Van Belle, Laeng, Brennen, & Øvervoll, 2009). In addition, it has been suggested that eye movement strategies used to extract information from faces may be shaped by culture and social experiences, but these differences do not modulate information use (Caldara, Zhou, & Miellet, 2010). To examine the temporal dynamics of effective use of information in face recognition, Vinette, Gosselin, and Schyns (2004) used the bubbles procedure (Gosselin & Schyns, 2001) and showed that the left eye was the earliest diagnostic feature used by the participants, consistent with most of the eye movement data. Thus, these data suggest that the best face recognition performance may be obtained when the initial fixation is directed to the left side of a face because of participants' use of information (which is related to factor 2, information structure of the stimuli) and perceptual learning (factor 3).
Recent research also suggests right hemisphere (RH) lateralization in face processing. For example, fMRI studies have shown that an area inside the fusiform gyrus (fusiform face area, FFA) responds selectively to faces (although some argue that FFA is an area for expertise in subordinate level visual processing instead of selective for faces, e.g., Tarr & Gauthier, 2000), with larger activation in the RH compared with the LH (e.g., Kanwisher, McDermott, & Chun, 1997). Note that Willems, Peelen, and Hagoort (2010) recently showed that this RH lateralization in face processing was observed in right-handers but not in left-handers. Neuropsychological data suggest a link between RH lesion and face recognition deficits (e.g., Meadows, 1974). Electrophysiological data also show that faces elicit larger Event-Related Potential (ERP) N170 than other types of objects, especially in the RH (e.g., Rossion, Joyce, Cottrell, & Tarr, 2003). Behaviorally, a left visual field (LVF)/RH advantage in processing upright faces was observed (e.g., H. D. Ellis & Shepherd, 1975; Leehey, Carey, Diamond, & Cahn, 1978; Levine & Koch-Weser, 1982; Young, 1984); this LVF/RH advantage in face recognition has been shown to be stronger for same-race faces than for other-race faces, suggesting that this effect depends on experience (Correll, Lemoine, & Ma, 2011; see also Turk, Handy, & Gazzaniga, 2005). In addition, a left side bias in face perception has been consistently reported: A chimeric face made from two left half-faces from the viewer's perspective is usually judged more similar to the original face than one made from two right half-faces (e.g., Brady, Campbell, & Flaherty, 2005; Gilbert & Bakan, 1973); this perceptual asymmetry has been argued to be an indicator of RH involvement in face processing (Burt & Perrett, 1997) and cannot be completely accounted for by participants' preference to look at the left half-face (although the effect may be enhanced in the trials in which participants make more left-half-face fixations; see Butler et al., 2005; Butler & Harvey, 2006).
Thus, according to Brysbaert and Nazir (2005), the factor of hemispheric asymmetry predicts that the OVP in face recognition will be to the right of the face center, so that more face input can be projected to the RH, which is superior in face processing. This prediction is in contrast to that from the factors of information use and perceptual learning, which predict the OVP to be slightly to the left of the center. Thus, the study of OVP in face recognition enables us to tease apart the influence from information use/perceptual learning and hemispheric asymmetry on the OVP effect in the recognition of visual categories. Since the factor of visual acuity (factor 1) is unlikely to show a visual field difference in recognition performance (i.e., the visual acuity curve has a symmetric profile), if the OVP in face recognition has a leftward bias, it will suggest greater influence from information use/perceptual learning; in contrast, a rightward bias will suggest greater influence from hemispheric asymmetry on the OVP phenomenon.
Experiment 1
Methods
Materials
The materials consisted of 80 (40 males and 40 females) grayscale front-view Asian face images with neutral expressions and no glasses; all faces were unfamiliar to the participants prior to the experiment. Each face was slightly resized to have an interpupil distance of 108 pixels and aligned according to the eye position. The image presented on the screen was 8 cm wide, and participants' viewing distance was approximately 60 cm; thus, each face subtended about 8° of visual angle horizontally and 10° of visual angle vertically, similar to the size of a real face viewed from 100 cm away, reflecting a natural distance during human interaction (Henderson, Williams, & Falk, 2005; Hsiao & Cottrell, 2008). At this size, approximately one eye of the face could be foveated at a time.
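As a check on the stated geometry, the visual angle subtended by the face follows directly from the image width and viewing distance. A minimal sketch (the 8 cm width and 60 cm distance are from the text; the 14 cm width assumed for a real face is illustrative):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle (in degrees) subtended by an object of a given
    size viewed from a given distance."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# Values from the Materials section.
print(visual_angle_deg(8, 60))    # ~7.6 deg: the ~8 deg face width at 60 cm
# A real face (assumed ~14 cm wide) viewed from 100 cm subtends a similar angle.
print(visual_angle_deg(14, 100))  # ~8.0 deg
```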
Participants
Thirty-two Asian volunteers (16 males) from the University of Hong Kong participated in the study for course credit or a small honorarium. The age range was 18–23 years. All participants reported normal or corrected-to-normal vision. They were all right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971).
Apparatus
Stimuli were displayed on a 22″ CRT monitor with a resolution of 1024 × 768 pixels and a 150-Hz frame rate. An EyeLink 1000 eye tracker (SR Research, Osgoode, Canada) was used to monitor participants' eye movements during the experiment. The tracking mode was pupil and corneal reflection with a sample rate of 2000 Hz. A chin rest was used to reduce head movements. The standard nine-point calibration procedure was administered at the beginning of the task; the procedure was repeated whenever the drift correction error was larger than 1° of visual angle during the experiment. EyeLink 1000 default settings for cognitive research were used in data acquisition: The acceleration threshold was 8000°/s² and the saccade velocity threshold was 30°/s.
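For illustration only, the velocity and acceleration thresholds quoted above can be read as the parameters of a standard velocity/acceleration-based saccade classifier. The sketch below is not the EyeLink parser itself, just a simplified version of the idea:

```python
import numpy as np

def flag_saccade_samples(gaze_deg, hz=2000, vel_thresh=30.0, acc_thresh=8000.0):
    """Mark gaze samples whose angular velocity (deg/s) or acceleration
    (deg/s^2) exceed the thresholds, mimicking a velocity/acceleration-based
    saccade detector. `gaze_deg` is a 1-D array of gaze positions in degrees
    sampled at `hz` Hz."""
    dt = 1.0 / hz
    vel = np.gradient(gaze_deg, dt)   # instantaneous velocity, deg/s
    acc = np.gradient(vel, dt)        # instantaneous acceleration, deg/s^2
    return (np.abs(vel) > vel_thresh) | (np.abs(acc) > acc_thresh)
```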
Design
The study had one independent variable: viewing position (left, near-left, center, near-right, or right; the distance between two adjacent positions was 2° of visual angle). The dependent variables were recognition performance (d′) and reaction time at each position. 
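For reference, d′ is computed from the z-transformed hit rate (studied faces correctly recognized) and false alarm rate (new faces incorrectly endorsed). A minimal sketch with hypothetical response counts:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false alarm rate), with a log-linear correction
    to avoid infinite z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one viewing position (8 old and 8 new faces).
print(d_prime(hits=6, misses=2, false_alarms=2, correct_rejections=6))
```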
Procedure
The experiment was divided into two blocks. In each block, participants were asked to remember 20 faces in the study phase and to recognize them among 20 new faces in the recognition phase. Different faces were used in the two blocks. Half of the stimuli were selected as targets and the other half as foils (new faces), and the target/foil images were swapped for half of the participants for counterbalancing. In order to counterbalance possible differences between the two sides of the faces, half of the participants were tested with mirror images of the original stimuli. The 80 stimuli were divided evenly into five subsets for the five viewing positions (16 faces for each position). The five subsets were counterbalanced through a partial Latin square design.
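One simple way to realize the counterbalancing described above is to rotate the assignment of the five stimulus subsets to the five viewing positions across participant groups. The cyclic scheme below is an illustrative sketch, not necessarily the exact design used in the study:

```python
positions = ["left", "near-left", "center", "near-right", "right"]
subsets = ["A", "B", "C", "D", "E"]  # five sets of 16 faces each

# Each of five participant groups receives a different cyclic rotation,
# so every subset appears at every viewing position equally often.
for group in range(len(subsets)):
    rotated = subsets[group:] + subsets[:group]
    assignment = ", ".join(f"{pos}={sub}" for pos, sub in zip(positions, rotated))
    print(f"group {group + 1}: {assignment}")
```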
Each trial started with a solid dot at the center of the screen. Participants were asked to fixate the dot accurately for drift correction. Then, the dot was replaced by a fixation cross, which stayed on screen for 500 ms or until the participant accurately fixated on it. During the study phase, the faces were presented either above or below the central fixation at random, one at a time, for 5 s; the bottom/top edge of the face was 3° of visual angle away from the fixation (cf. Hsiao & Cottrell, 2008). Faces were separated by a 1-s blank interval (Figure 1). Participants were allowed to move their eyes freely in the study phase. During the recognition phase, the center of the face stimulus was presented at −4°, −2°, 0°, 2°, or 4° horizontally from the center of the display (corresponding to the left, near-left, center, near-right, and right conditions, respectively; Figure 2). Participants were told to keep looking at the center during the stimulus presentation (500 ms). A gaze-contingent design was used to ensure that participants still fixated the same location on the face image if they failed to maintain central fixation (Hsiao & Cottrell, 2008). Participants were asked to judge whether they had seen the face in the study phase as quickly and as accurately as possible by pressing the "YES" and "NO" buttons on a response pad with both hands (only the fastest response from either hand was recorded); the mapping of the buttons was counterbalanced across participants.
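The presentation positions are specified in degrees of visual angle; converting them to screen coordinates requires the viewing distance and the display's pixel density. A minimal sketch (the pixel density value is an assumption, not reported in the paper):

```python
import math

VIEW_DIST_CM = 60.0   # viewing distance from the Materials section
PX_PER_CM = 30.0      # assumed pixel density of the display (not stated in the paper)

def eccentricity_to_px(ecc_deg: float) -> int:
    """Horizontal offset (in pixels) from screen center for a stimulus
    centered at a given eccentricity in degrees of visual angle."""
    offset_cm = VIEW_DIST_CM * math.tan(math.radians(ecc_deg))
    return round(offset_cm * PX_PER_CM)

# The five presentation positions in Experiment 1.
for ecc in (-4, -2, 0, 2, 4):
    print(f"{ecc} deg -> {eccentricity_to_px(ecc)} px")
```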
Figure 1. Experimental procedure of the study phases in Experiment 1.
Figure 2. Experimental procedure of the recognition phase in Experiment 1.
There was a practice session (remembering 6 faces in the study phase and subsequently recognizing them among 6 new faces) to familiarize participants with the task; feedback was given only after incorrect responses. No feedback was given during the experiment. Participants took a short break between blocks. 
Results
Repeated measures ANOVA was used for the analyses. The recognition performance measured by d′ showed a position effect, F(4, 120) = 3.450, p = 0.010: The best recognition performance was obtained when faces were presented at the near-right position (i.e., the initial fixation was to the left of the face center; Figure 3). Post hoc paired t tests revealed that the d′ for near-right was significantly better than left (t(31) = 2.510, p = 0.018), near-left (t(31) = 2.845, p = 0.008), and right (t(31) = 3.481, p = 0.002) but not center (t(31) = 0.848, n.s.). The d′ for center also differed significantly from right, t(31) = 2.240, p = 0.032. Thus, the OVP in face recognition appeared to be between the left and the center of a face, suggesting greater influence from information use/perceptual learning. Similar results were obtained in reaction times: There was a main effect of position, F(3.137, 120) = 2.817, p = 0.041 (after Greenhouse–Geisser correction). The fastest response was obtained for faces at center (M = 1254.09 ms), and it was not significantly different from near-right (t(31) = 0.757, p = 0.445).
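The analysis reported above, a one-way repeated measures ANOVA over viewing position followed by post hoc paired t tests, could be run along the following lines. This is a sketch assuming d′ scores in a long-format table; the file and column names are hypothetical:

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one d' value per participant and position,
# with columns "subject", "position", and "dprime".
df = pd.read_csv("exp1_dprime.csv")  # hypothetical file

# One-way repeated measures ANOVA on viewing position.
anova = AnovaRM(df, depvar="dprime", subject="subject", within=["position"]).fit()
print(anova)

# Post hoc paired t test, e.g., near-right vs. right.
wide = df.pivot(index="subject", columns="position", values="dprime")
t, p = ttest_rel(wide["near-right"], wide["right"])
print(t, p)
```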
Figure 3. Participants' recognition performance measured by d′ in each viewing position in Experiment 1.
Experiment 2
The results of Experiment 1 suggest greater influence from information use/perceptual learning on the OVP effect in face recognition when faces were presented near central vision. Here, we test the hypothesis that the influence from hemispheric asymmetry may start to emerge when faces are presented in peripheral vision, where the influence from perceptual learning becomes weaker (since we do not usually place faces in our peripheral vision when we try to recognize them).
Methods
Apparatus and stimuli
The apparatus was identical to that used in Experiment 1. The stimuli were 96 grayscale, front-view Asian face images (48 males and 48 females), similar to those used in Experiment 1.
Participants
Thirty-two Asian volunteers (16 males) from the University of Hong Kong participated in the study and received a small honorarium. They did not participate in Experiment 1. The age range was 18–28 years. All participants reported normal or corrected-to-normal vision. They were all right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971). 
Design and procedure
The design and procedure were similar to Experiment 1 except for the following changes. The experiment was divided into 6 blocks. In each block, participants were asked to remember 8 faces in the study phase and identify them among 8 new faces in the recognition phase. The experiment had two independent variables: distance (far vs. near) and visual field (LVF vs. RVF). During the recognition phase, the center of the face stimulus was presented at one of four positions: −8°, −4°, 4°, or 8° horizontally from the center of the display, corresponding to the far-LVF, near-LVF, near-RVF, and far-RVF conditions, respectively. Thus, the face images were divided evenly into four subsets for the four viewing positions (Figure 4), counterbalanced through a partial Latin square design. The dependent variable was the discrimination sensitivity measure d′. Compared with Experiment 1, here faces were presented farther away from the center on average. Thus, to avoid floor effects, the study phase was repeated so that each face was viewed twice, in different orders.
Figure 4. Experimental procedure of the recognition phase in Experiment 2.
Results
Repeated measures ANOVA was used for the analysis. The results showed a significant main effect of distance, F(1, 30) = 49.684, p < 0.01, with better recognition performance when faces were presented near the center than far from the center. However, there was no main effect of visual field. A significant interaction of distance by visual field was found, F(1, 30) = 4.377, p < 0.05 (Figure 5; see Footnote 1): When the faces were near the center, better performance was obtained when faces were presented in the RVF compared with the LVF (t(31) = 1.124, p = 0.269); in contrast, when faces were far from the center, the LVF superiority effect in face recognition started to emerge (t(31) = 1.612, p = 0.117; see Footnote 2). Consistent with Experiment 1, the best recognition performance was obtained in the near-RVF condition, when the left side of the face was close to the fixation. This result suggests greater influence from information use/perceptual learning in central vision and greater influence from hemispheric asymmetry in peripheral vision.
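The corresponding two-way (distance × visual field) repeated measures analysis, including the simple-effect comparisons within each distance, could be sketched similarly (again with hypothetical file and column names):

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data with columns:
# "subject", "distance" ("near"/"far"), "field" ("LVF"/"RVF"), "dprime".
df = pd.read_csv("exp2_dprime.csv")  # hypothetical file

# 2 x 2 repeated measures ANOVA: distance, visual field, and their interaction.
print(AnovaRM(df, depvar="dprime", subject="subject",
              within=["distance", "field"]).fit())

# Simple effect of visual field at each distance.
wide = df.pivot_table(index="subject", columns=["distance", "field"], values="dprime")
for dist in ("near", "far"):
    t, p = ttest_rel(wide[(dist, "LVF")], wide[(dist, "RVF")])
    print(dist, t, p)
```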
Figure 5. Participants' discrimination performance (d′) for faces shown at four positions: −8°, −4°, 4°, 8° horizontal to the center of the display.
The reaction time analysis showed a main effect of distance, F(1, 30) = 12.511, p = 0.001: The responses were faster when faces were near the center. Neither the main effect of visual field nor the interaction of distance by visual field reached significance.
Discussion
In the current study, we investigated the optimal viewing position (OVP) in face recognition and the factors that may influence the OVP effect. Brysbaert and Nazir (2005) proposed three factors that may account for the asymmetry in the OVP effect in English word recognition: information structure of the stimuli, perceptual learning, and hemispheric asymmetry. Recent face recognition research has shown that people prefer to look at the left side of faces (from the viewer's perspective; e.g., Butler et al., 2005; Hsiao & Cottrell, 2008). Studies using the Bubbles technique also revealed that the left eye is the earliest diagnostic feature people use in face processing (Vinette et al., 2004). Thus, the information use and perceptual learning factors predict the OVP in face recognition to be to the left of the face center. In contrast, the RH/LVF advantage in face processing observed in the literature predicts the OVP to be to the right of the face center, so that most of the face input can be projected to the RH.
In Experiment 1, we showed that the best recognition performance was obtained when the initial fixation was directed to the center of the left half-face (i.e., the near-right condition in Figure 2, when most of the face was in the RVF), suggesting greater influence from information use/perceptual learning when faces were presented close to central vision. In contrast, in Experiment 2, the interaction between distance and visual field suggested qualitatively different visual field effects when faces were presented close to the center or periphery: The LVF/RH advantage in face processing (in right-handers) started to emerge when faces were presented far from the center. This effect suggests greater influence from hemispheric asymmetry when faces were presented away from the center. 
In face recognition, the preferred landing position (PLP) of the first fixation when viewing faces is usually slightly to the left of the face center (e.g., Butler et al., 2005; Hsiao & Cottrell, 2008). Here, in both experiments, although the initial fixation might not have been directed to the PLP of the first fixation exactly, the best performance was always obtained when the initial fixation was closest to the PLP obtained in Hsiao and Cottrell (2008). This phenomenon is consistent with the perceptual learning literature that shows a gradual decline of improvement when the location of the test stimulus is moved away from the trained location (within a small region, since perceptual learning effects have generally been shown to be position specific; e.g., Crist, Kapadia, Westheimer, & Gilbert, 1997; see Gilbert, Sigman, & Crist, 2001, for a review). It remains unclear why the PLP of the first fixation in face recognition is slightly to the left of the center and why people prefer to use information from the left eye initially for face processing (Vinette et al., 2004). Hsiao and Cottrell argued that this effect might be because during learning to recognize faces, the left half-face is usually initially projected to the RH. It has been shown that the RH has an advantage in processing low spatial frequency information (e.g., Ivry & Robertson, 1999), which is important for face processing (e.g., Dailey & Cottrell, 1999; Whitman & Konarzewski-Nassau, 1997). Thus, the internal representation of the left half-face may be more informative for face processing, resulting in the preference for a leftward first saccade/information use. This speculation is consistent with the finding that the leftward perceptual asymmetry in chimeric face judgments was related to left saccades (Butler et al., 2005; note, however, that the perceptual asymmetry can still be elicited, although it becomes weaker, when eye movements are controlled; see Butler & Harvey, 2006). Alternatively, the leftward-biased OVP and PLP in face recognition, and the leftward-biased use of information in face recognition (Vinette et al., 2004), may also be due to a biologically based face asymmetry normally presented to the viewers in daily life (Hsiao & Cottrell, 2008); this speculation requires further examination.
The differential visual field effects observed in Experiment 2 when faces were presented near or far from the center have important implications for research on hemispheric asymmetry in visual cognition. It suggests that not all visual field differences observed in behavioral data imply fundamental processing differences between the two hemispheres. Consistent with this observation, a recent study showed that a visual field difference in processing two types of Chinese characters could be accounted for by a computational model that did not implement any processing difference between the two hemispheres; the visual field difference emerged naturally as a consequence of the fundamental structural differences in information between the two types of characters, suggesting influence from perceptual learning (Hsiao, 2011). The results here provide further support for the influence of perceptual learning in central vision in accounting for visual field difference effects. 
In divided visual field studies of word recognition, it has been suggested that since foveal representation (about the central 2° of the visual field) is bilaterally projected to both hemispheres (e.g., Huber, 1962; Stone, Leicester, & Sherman, 1973; see also Jordan & Paterson, 2009, for a review), to get reliable hemispheric asymmetry effects, visual stimuli have to be presented outside of the foveal region. In contrast, some have argued that foveal representation is split along the vertical midline with the two halves initially contralaterally projected to different hemispheres (e.g., Lavidor & Walsh, 2004; for a review, see A. W. Ellis & Brysbaert, 2010), and thus hemispheric asymmetry effects can still be obtained within the fovea. In the current study, our face image spanned about 8° of visual angle and, thus, was far beyond the foveal region when presented completely in one visual field (Experiment 2). Nevertheless, differential visual field effects were obtained here when the face stimuli were presented near or far from the center: The influence from perceptual learning/information use was dominant when the faces were presented within 8° of visual angle from the center, whereas the RH advantage in face recognition only started to emerge when the faces were presented 8° of visual angle away from the center. This effect suggests that the influence from perceptual learning/information use can go beyond the foveal region. Although the current study also suggests that the influence of perceptual learning/information use may decrease in peripheral vision, perceptual learning/information use is an important factor that should be taken into account when interpreting visual field difference effects regardless of whether the stimulus is presented within or outside of the fovea.
Although the current results showed weaker hemispheric lateralization effects when face stimuli were presented within 8° of visual angle from the center, research in visual word recognition has shown that the OVP effect for foveally presented word stimuli can be influenced by hemispheric lateralization in language processing: Participants with RH language dominance had their OVP shifted more to the right compared with those with LH language dominance (Brysbaert, 1994; Hunter et al., 2007). Thus, it is possible that hemispheric lateralization in face processing can also modulate the OVP effect when faces are presented in central vision. Willems et al. (2010) recently showed that the RH lateralization in face processing was observed in right-handers but not in left-handers; thus, the OVP in face recognition may shift more to the left in left-handers compared with right-handers due to the modulation of hemispheric lateralization. Future work will investigate possible modulation effects of hemispheric lateralization in the OVP effect in face recognition through examining the difference between left- and right-handers. 
In the current study, the size of the face images (8° of visual angle) was chosen to reflect the size of a real face with a 100-cm viewing distance, about the distance between two people during a normal conversation. Since the face image size was much bigger than the foveal region (usually the central 2–3° of visual angle), the influence from the hypothesized bilateral/split foveal representation has been assumed to be minimal (see, e.g., A. W. Ellis & Brysbaert, 2010; Jordan & Paterson, 2009); in other words, to examine whether the fovea is bilaterally or split and contralaterally represented, the test stimuli should be small enough to fit within the fovea. Although in real life, it is possible to see a face completely within foveal vision when the face is very far away, the current results suggest greater influence from perceptual learning/information use than from hemispheric lateralization in foveal face processing; thus, whether foveal representation is split or not may not have significant influence on face recognition behavior. 
Recent research suggests that people in different cultures may adopt different eye fixation strategies to extract visual information when processing faces; for example, Blais, Jack, Scheepers, Fiset, and Caldara (2008) showed that Caucasians had a scattered triangular eye fixation pattern (on the eyes and mouth) in face recognition, whereas Asians' eye fixations focused on the face center (the nose). Nevertheless, further studies suggest that the two cultural groups do not differ in information use regardless of their difference in eye fixation strategies in face processing (Caldara et al., 2010). Consistent with this finding, Kelly, Miellet, and Caldara (2010) showed that Asians' central fixation pattern was observed across different visual categories in addition to faces, suggesting that this eye fixation behavior is not related to the holistic processing specific to faces. Thus, although in the current study we recruited only Asian participants, we expect that similar results will be obtained in Caucasian participants; this prediction requires further examination. 
Conclusions
In conclusion, here we show that the OVP in face recognition is to the left of the center from the viewer's perspective, suggesting greater influence from perceptual learning/information use, i.e., people's preference of directing their first saccade to the left half-face when viewing faces or their preference of using the information from the left eye initially when processing faces. In contrast, the influence from hemispheric asymmetry in face processing emerges when faces are presented away from the center. This effect demonstrates differential influences from perceptual learning/information use and hemispheric asymmetry in central and peripheral vision in the recognition of visual stimuli. 
Acknowledgments
We are grateful to the Research Grants Council of Hong Kong (GRF Project Code: HKU 744509H, J. H. Hsiao, PI). We thank the editor, Professor Marc Brysbaert, and an anonymous referee for helpful comments.
Commercial relationships: none. 
Corresponding author: Janet H. Hsiao. 
Email: jhsiao@hku.hk. 
Address: Department of Psychology, University of Hong Kong, Pokfulam Road, Hong Kong. 
Footnotes
1. Note that different images, task designs, and difficulties were used in the two experiments; thus, although the left and right positions in Experiment 1 were the same as the near-LVF and near-RVF conditions in Experiment 2, the results should not be directly compared.
2. The distance effect (far vs. near) was stronger when the faces were presented in the RVF (t(31) = 7.802, p < 0.001) compared with the LVF (t(31) = 3.363, p < 0.05).
References
Bindemann M. Scheepers C. Burton A. M. (2009). Viewpoint and center of gravity affect eye movements to human faces. Journal of Vision, 9(2):7, 1–16, http://www.journalofvision.org/content/9/2/7, doi:10.1167/9.2.7. [PubMed] [Article] [CrossRef] [PubMed]
Blais C. Jack R. E. Scheepers C. Fiset D. Caldara R. (2008). Culture shapes how we look at faces. PLoS ONE, 3, e3022.
Brady N. Campbell M. Flaherty M. (2005). Perceptual asymmetries are preserved in memory for highly familiar faces of self and friend. Brain and Cognition, 58, 334–342. [CrossRef] [PubMed]
Brysbaert M. (1994). Interhemispheric transfer and the processing of foveally presented stimuli. Behavioural Brain Research, 64, 151–161. [CrossRef] [PubMed]
Brysbaert M. Nazir T. (2005). Visual constraints in written word recognition: Evidence from the optimal viewing-position effect. Journal of Research in Reading, 28, 216–228. [CrossRef]
Brysbaert M. Vitu F. Schroyens W. (1996). The right visual field advantage and the optimal viewing position effect: On the relation between foveal and parafoveal word recognition. Neuropsychology, 10, 385–395. [CrossRef]
Burt D. M. Perrett D. I. (1997). Perceptual asymmetries in judgments of facial attractiveness, age, gender, speech and expression. Neuropsychologia, 35, 685–693. [CrossRef] [PubMed]
Butler S. Gilchrist I. D. Burt D. M. Perrett D. I. Jones E. Harvey M. (2005). Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Neuropsychologia, 43, 52–59. [CrossRef] [PubMed]
Butler S. Harvey M. (2006). Perceptual biases in chimeric face processing: Eye-movement patterns cannot explain it all. Brain Research, 1124, 96–99. [CrossRef] [PubMed]
Caldara R. Zhou X. Miellet S. (2010). Putting culture under the ‘Spotlight’ reveals universal information use for face recognition. PLoS ONE, 5, e9708.
Correll J. Lemoine C. Ma D. S. (2011). Hemispheric asymmetry in cross-race face recognition. Journal of Experimental Social Psychology, 47, 1162–1166. [CrossRef]
Crist R. E. Kapadia M. Westheimer G. Gilbert C. D. (1997). Perceptual learning of spatial localization. Journal of Neurophysiology, 78, 2889–2894. [PubMed]
Dailey M. N. Cottrell G. W. (1999). Organization of face and object recognition in modular neural networks. Neural Networks, 12, 1053–1074. [CrossRef] [PubMed]
Ducrot S. Pynte J. (2002). What determines the eyes' landing position in words? Perception & Psychophysics, 64, 1130–1144. [CrossRef] [PubMed]
Ellis A. W. Brysbaert M. (2010). Split fovea theory and the role of the two cerebral hemispheres in reading: A review of the evidence. Neuropsychologia, 48, 353–365. [CrossRef] [PubMed]
Ellis H. D. Shepherd J. W. (1975). Recognition of upright and inverted faces presented in the left and right visual fields. Cortex, 11, 3–7. [CrossRef] [PubMed]
Everdell I. T. Marsh H. Yurick M. D. Munhall K. G. Paré M. (2007). Gaze behaviour in audiovisual speech perception: Asymmetrical distribution of face-directed fixations. Perception, 36, 1535–1545. [CrossRef] [PubMed]
Farid M. Grainger J. (1996). How initial fixation position influences visual word recognition: A comparison of French and Arabic. Brain and Language, 53, 351–368. [CrossRef] [PubMed]
Gilbert C. Bakan P. (1973). Visual asymmetry in perception of faces. Neuropsychologia, 11, 355–362. [CrossRef] [PubMed]
Gilbert C. D. Sigman M. Crist R. E. (2001). The neural basis of perceptual learning. Neuron, 31, 681–697. [CrossRef] [PubMed]
Gosselin F. Schyns P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271. [CrossRef] [PubMed]
Henderson J. M. Williams C. C. Falk R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33, 98–106. [CrossRef] [PubMed]
Hsiao J. H. (2011). Visual field differences can emerge purely from perceptual learning: Evidence from modeling Chinese character pronunciation. Brain & Language, 119, 89–98. [CrossRef]
Hsiao J. H. Cottrell G. W. (2008). Two fixations suffice in face recognition. Psychological Science, 19, 998–1006. [CrossRef]
Huber A. (1962). Homonymous hemianopia after occipital lobectomy. American Journal of Ophthalmology, 54, 623–629. [CrossRef] [PubMed]
Hunter Z. R. Brysbaert M. Knecht S. (2007). Foveal word reading requires interhemispheric communication. Journal of Cognitive Neuroscience, 19, 1373–1387. [CrossRef] [PubMed]
Ivry R. B. Robertson L. C. (1999). The two sides of perception. Cambridge, Massachusetts: MIT Press.
Jordan T. R. Paterson K. (2009). Re-evaluating split-fovea processing in visual word recognition: A critical assessment of recent research. Neuropsychologia, 47, 2341–2353. [CrossRef] [PubMed]
Kanwisher N. McDermott J. Chun M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311. [PubMed]
Kelly D. J. Miellet S. Caldara R. (2010). Culture shapes eye movements for visually homogeneous objects. Frontiers in Perception Science, 1, 6.
Lavidor M. Walsh V. (2004). The nature of foveal representation. Nature Reviews Neuroscience, 5, 729–735. [CrossRef] [PubMed]
Leehey S. Carey S. Diamond R. Cahn A. (1978). Upright and inverted faces: The right hemisphere knows the difference. Cortex, 14, 411–419. [CrossRef] [PubMed]
Leonards U. Scott-Samuel N. E. (2005). Idiosyncratic initiation of saccadic face exploration in humans. Vision Research, 45, 2677–2684. [CrossRef] [PubMed]
Levine S. C. Koch-Weser M. P. (1982). Right hemisphere superiority in the recognition of famous faces. Brain & Cognition, 1, 10–22. [CrossRef]
Meadows J. C. (1974). The anatomical basis of prosopagnosia. Journal of Neurology, Neurosurgery, & Psychiatry, 37, 489–501. [CrossRef]
Mertens I. Siegmund H. Grusser O. J. (1993). Gaze motor asymmetries in the perception of faces during a memory task. Neuropsychologia, 31, 989–998. [CrossRef] [PubMed]
Nazir T. A. O'Regan J. K. (1990). Some results on the translation invariance in the human visual system. Spatial Vision, 3, 81–100. [CrossRef]
Oldfield R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113. [CrossRef] [PubMed]
O'Regan J. K. (1981). The convenient viewing position hypothesis. In Fisher D. F. Monty R. A. Senders J. W. (Eds.), Eye movements, cognition, and visual perception (pp. 289–298). Hillsdale, NJ: Erlbaum.
O'Regan J. K. Lévy-Schoen A. Pynte J. Brugaillère B. (1984). Convenient fixation location within isolated words of different length and structure. Journal of Experimental Psychology: Human Perception and Performance, 10, 250–257. [CrossRef] [PubMed]
Rayner K. (1979). Eye guidance in reading: Fixation locations within words. Perception, 8, 21–30. [CrossRef] [PubMed]
Rossion B. Joyce C. A. Cottrell G. W. Tarr M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage, 20, 1609–1624. [CrossRef] [PubMed]
Saether L. Van Belle W. Laeng B. Brennen T. Øvervoll M. (2009). Anchoring gaze when categorizing faces' sex: Evidence from eye-tracking data. Vision Research, 49, 2870–2880. [CrossRef] [PubMed]
Shillcock R. Ellison T. M. Monaghan P. (2000). Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model. Psychological Review, 107, 824–851. [CrossRef] [PubMed]
Stone J. Leicester L. Sherman S. M. (1973). The naso-temporal division of the monkey's retina. Journal of Comparative Neurology, 150, 333–348. [CrossRef] [PubMed]
Tarr M. J. Gauthier I. (2000). FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. Nature Neuroscience, 3, 764–769. [CrossRef] [PubMed]
Turk D. J. Handy T. C. Gazzaniga M. S. (2005). “Can perceptual expertise account for the own-race bias in face recognition? A split-brain study”. Cognitive Neuropsychology, 22, 877–883. [CrossRef] [PubMed]
Vinette C. Gosselin F. Schyns P. G. (2004). Spatio-temporal dynamics of face recognition in a flash: It's in the eyes. Cognitive Science, 28, 289–301.
Whitman D. Konarzewski-Nassau S. (1997). Lateralized facial recognition: Spatial frequency and masking effects. Archives of Clinical Neuropsychology, 12, 428. [CrossRef]
Willems R. M. Peelen M. V. Hagoort P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719–1725. [CrossRef] [PubMed]
Young A. W. (1984). Right cerebral hemisphere superiority for recognizing the internal and external features of famous faces. British Journal of Psychology, 75, 161–169. [CrossRef] [PubMed]