Article  |  January 2012
Serial exploration of faces: Comparing vision and touch
Lisa Dopjans, Heinrich H. Bülthoff, Christian Wallraven
Journal of Vision January 2012, Vol. 12(1), 6. doi:10.1167/12.1.6
Abstract

Even though we can recognize faces by touch surprisingly well, haptic face recognition performance is still worse than for visual exploration. One possible reason for this performance difference is the use of different encoding strategies in the two modalities, namely, holistic encoding in vision versus serial encoding in haptics. Here, we tested this hypothesis by promoting serial encoding in vision, using a novel, gaze-restricted display that limited the effective field of view in vision to resemble that of haptic exploration. First, we compared haptic with gaze-restricted and unrestricted visual face recognition. Second, we used the face inversion paradigm to assess how encoding differences might affect processing strategies (featural vs. holistic). By promoting serial encoding in vision, we found equal face recognition performance in vision and haptics with a clear switch from holistic to featural processing, suggesting that performance differences in visual and haptic face recognition are due to modality-specific encoding strategies.

Introduction
The visual information provided by human faces is of strong ecological significance, for example, for communication, identification, and mate selection. Consequently, face processing has received a lot of attention in vision research, providing evidence for specific processing strategies that evolve with perceptual expertise. Not surprisingly, a considerable amount of vision research has been devoted to investigating the basis of such expertise, which has long been attributed to the use of configural as opposed to featural processing (processing of individual face parts). Three types of such configural processing have been defined (Maurer, Le Grand, & Mondloch, 2002): (i) sensitivity to first-order relations (recognizing the stimulus as a face), (ii) holistic processing (gluing the features together into a holistic whole or Gestalt), and (iii) sensitivity to second-order relations (perceiving inter-feature distances, for example, the actual (metric) distance between the two eyes). In contrast, featural processing refers to the processing of individual features free of the context of the face—hence, in featural processing, one eye or the mouth of a person is processed without regard to its position or configuration in the face. 
Regardless of the specific form of configural processing involved, one highly consistent finding is that face perception is orientation-specific, i.e., faces are processed more accurately when they are presented in the normal upright position than when they are inverted (for reviews, see Searcy & Bartlett, 1996; Valentine, 1988). The predominant explanation for this so-called “face inversion” effect, first demonstrated by Yin (1969), is that vertical inversion selectively impairs our ability to extract configural information from faces, while leaving featural processing largely intact (Leder & Bruce, 2000; Schwaninger, Wallraven, Cunningham, & Chiller-Glaus, 2006). In fact, using gaze-contingent stimulation, Van Belle, De Graef, Verfaillie, Rossion, and Lefèvre (2010) recently showed that the face inversion effect is, indeed, caused by a lack of configural processing, thus supporting the view that observers' expertise at upright face recognition is due to the ability to perceive an individual face as a whole. 
Interestingly, the inversion effect is not present in infants. In fact, a number of studies have shown that this pattern of results (i.e., a remarkable deficit in recognition of inverted faces but no such deficit for inverted images of non-face objects) takes many years to develop and is, therefore, seen as one of the hallmarks of visual face processing expertise (Carey & Diamond, 1977; Dahl, Wallraven, Bülthoff, & Logothetis, 2009; Hay & Cox, 2000; Maurer et al., 2002; Mondloch, Geldart, Maurer, & LeGrand, 2003; Pellicano & Rhodes, 2003; Schwarzer, 2000). 
Face processing, however, is not limited solely to vision. Indeed, it has recently been shown that humans are capable of identifying individual faces at levels well above chance using only their sense of touch, as demonstrated for the first time by Kilgour and Lederman (2002). Since then, other studies have confirmed this result using 3D face masks (Casey & Newell, 2007; Dopjans, Wallraven, & Bülthoff, 2009; Kilgour, deGelder, & Lederman, 2004; Kilgour & Lederman, 2006; Pietrini et al., 2004), raising the question of whether the visual and haptic modalities encode similar information and share the hallmarks of face processing, i.e., whether the use of expert face processing strategies, for example, is modality independent. 
Indeed, it has been shown that haptic processing of complex shapes can be as efficient as visual processing (Cooke, Jäkel, Wallraven, & Bülthoff, 2007; Gaissert, Wallraven, & Bülthoff, 2010; Norman, Norman, Clayton, Lianekhammy, & Zielke, 2004). The perceptual reconstruction of three-dimensional, novel shapes was found to follow a physically defined space remarkably well, indicating that haptic shape processing for these objects was at least on par with processing based on visual expertise (Cooke et al., 2007; Gaissert et al., 2010). As faces are a special class of complex objects that seem to require perceptual expertise in shape processing, one might, therefore, ask to what degree haptic processing of faces will also be similar to visual processing. 
A few studies have since investigated a haptic face inversion effect, with contradictory results. While Kilgour and Lederman (2006), for example, previously used a haptic face inversion paradigm to study orientation sensitivity of haptic face processing and found a strong inversion effect for faces, Baron (2008) failed to find such a haptic face inversion effect during haptic classification of facial expressions of emotion. Further research is, therefore, necessary to thoroughly understand orientation sensitivity of haptic face processing. 
In a previous study (Dopjans et al., 2009), we provided further evidence that both the haptic and visual systems have the capacity to process faces and that face-relevant information can be shared across sensory modalities, using a fully controlled stimulus set. Interestingly, we found this information transfer across modalities to be asymmetric and limited by haptic face processing. More specifically, if faces had been learned visually and tested visually, performance was high. If faces had been learned haptically and tested haptically, performance was overall lower. In the first cross-modal condition, in which faces had been learned haptically and tested visually, performance remained at the level of haptic learning, with no cost of switching modalities at test. In contrast, when faces were learned visually and then tested haptically, there was a clear drop in performance. This asymmetry was most likely due to a cost in transfer from vision to haptics with no such cost in haptics-to-vision transfer. Moreover, we found initial evidence that haptic face recognition performance was significantly improved when haptic memory was refreshed during the experiment, indicating that haptic exploration seems to impose a high memory load. 
We suggested that the observed asymmetric transfer may be due to differences in visual and haptic information processing. While visual face processing has been shown to involve configural processing, haptics might rely more on featural processing. The different processing strategies might, in turn, be introduced by qualitative differences in information encoding in haptics and vision. Earlier studies have shown, for example, that vision can process all aspects of an image in parallel, so that local facial features and their global configuration can be rapidly processed (Tanaka & Sengco, 1997). While faces are encoded holistically in vision (Maurer et al., 2002), haptic encoding is limited to serial exploration of an object, i.e., it is not holistic but involves a feature-by-feature analysis (Loomis, Klatzky, & Lederman, 1991; Loomis & Lederman, 1986) due to its narrow effective field of “view.” Therefore, haptic information of an object has to be integrated over time in order to take in the same amount of information. This may impose higher memory demands on haptic exploration of objects, which might also be a reason why haptic memory capacity appears to be more limited and variable than that of visual memory (Bliss & Hämäläinen, 2005; Dopjans et al., 2009). To date, however, relatively few studies have attempted to address the characteristics and functioning of people's memory for haptically perceived objects (e.g., see Walk & Pick, 1981, for an extensive early review; Knecht, Kunesch, & Schnitzler, 1996; Millar & Al-Attar, 2004)—as compared to the large number of studies that have addressed people's memory for visually presented objects (e.g., Alvarez & Cavanagh, 2004; Desimone, 1996; Luck & Vogel, 1997; Squire, 1992; Vogel, Woodman, & Luck, 2001). 
In addition, to our knowledge, no study has been published that has specifically addressed the question of whether and how the way in which haptic information is gathered affects the encoding and storage of that information in the brain (i.e., a question that is related to the serial vs. holistic manner of perceiving haptic stimuli) and, consequently, how it affects performance in a high-level cognitive task such as face recognition. If either of these factors, serial encoding and/or higher memory demands, limits performance, as seems likely, then touch must be at a disadvantage in such a task. 
Loomis et al. (1991), for example, showed that recognition performance across the visual and haptic sense can be equated in 2D picture recognition by reducing the visual window to the narrowness of the effective field of “view” in haptics. Since the near equivalence of tactual picture perception and narrow-field vision suggested that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view, the authors concluded that recognition performance when the field of view is restricted is, indeed, impeded by limitations in memory or in the integration process. 
In a recent study, Casey and Newell (2007) addressed the question of how encoding differences across the visual and haptic modalities might affect face recognition performance. The authors proposed to make encoding in vision and haptics more similar by limiting visual encoding to a feature-by-feature procedure, i.e., using scrambled faces to enforce serial encoding. More specifically, visual face images were divided into four parts comprising nose area, eye and brow area, mouth and chin area, and external features. Participants in the haptic group learned whole face stimuli, whereas participants in the visual group learned part-based face images (one face part at a time). During testing, both groups were presented with whole face stimuli in an old–new recognition paradigm with recognition conducted visually and haptically in each group, so that there were within-modal and cross-modal recognition trials. Interestingly, both groups had the same within-modal performance, and recognition performance also decreased similarly for both groups in the cross-modal condition. This decrease was the same when whole faces were learned in the visual condition. Based on these findings, the authors suggested that encoding differences did not account for differences in visual and haptic face recognition, as the same cost was incurred for both within-modal and cross-modal conditions, independent of encoding procedures. One major shortcoming of this study lies, however, in the visual encoding procedure: While using scrambled faces and presenting only one feature at a time certainly enforces perceptual integration, it does not resemble haptic encoding. More specifically, participants' exploratory procedures are severely restricted, as the order in which individual features are presented is determined by the experimenter rather than by the participant when freely exploring the face as in normal haptic exploration. More importantly, however, Loomis et al. (1991) reported a clear effect of visual field size on performance. Doubling the width of the field from resembling information gained through one fingertip to two already produced a substantial increment in visual recognition performance. Disassembling a face into four parts yields a much larger field size than is possible for haptic exploration and might, therefore, not resemble serial encoding in haptics. 
Following this line of inquiry, we here present results that directly test the effect of encoding differences on visual and haptic face recognition performance. This was achieved with a gaze-restricted display that constrained the visual system to sequential, self-directed exploration, promoting serial encoding in vision in much the same way that the haptic system encodes objects (Loomis et al., 1991). The gaze-restricted display limited the effective field of view in vision to resemble that of haptic exploration using two fingertips. For this, an aperture was moved over a face image by the participants, resembling serial, haptic exploratory procedures. Participants were given control of the movements of the aperture (by moving a mouse), such that they could control the information input continuously through time, which is also similar to haptic exploration. The only constraint of this gaze-restricted design was, therefore, that only one feature, determined by the observer him/herself, was available at any given time on a face. Thus, the specific aperture-viewing procedure used in these experiments allowed a fair comparison with haptic recognition. In a first series of experiments, we compared haptic, gaze-restricted, and unrestricted visual face recognition. Second, we tested how refreshing of memory would help recognition performance for haptic and gaze-restricted face recognition. Finally, we used the face inversion paradigm to assess how encoding differences might affect face processing strategies (featural vs. configural face information processing). 
Experiment 1
We assessed the effect of modality-specific encoding differences on visual and haptic face recognition by testing haptic (H), gaze-restricted visual (GRV), and unrestricted visual (UV) face recognition. 
Methods
Participants
Fifty-four experimentally naive university students (18 per condition, 27 men and 27 women, mean age of 23.7 years) were paid 8 Euros per hour to perform the respective experiment. All participants reported normal or corrected-to-normal vision and had no sensory impairment. All participants gave informed consent to the three experiments. Participants and participants' data were treated in accordance with the Declaration of Helsinki. 
Stimuli
Stimuli for the haptic and unrestricted visual face recognition experiments consisted of nineteen white plastic face masks. For this, the 3D models of 19 faces were taken from the MPI-Face Database (Troje & Bülthoff, 1996) and edited for printing using the graphics package 3D Studio Max (Autodesk). Three-dimensional face masks were printed with the use of an Eden 250 printer (Objet Geometries) and weighed about 138 ± 5 g each and measured 89 ± 5.5 mm wide, 120 ± 7.5 mm high, and 103.5 ± 5.5 mm deep (see Figure 1 for an example of the stimuli used).
Figure 1
 
(A) Experimental setup used for haptic and unrestricted visual face recognition. (B) Demonstration of the gaze-restricted display: The red circle indicates the size of the aperture. Only the part of the image inside the aperture was visible as indicated by the difference in brightness of the images inside and outside of the aperture. The aperture of 2° visual angle was moved over the frontal photograph of the face mask. (C) Example of a recorded trajectory during gaze-restricted face recognition.
Visual stimuli for the gaze-restricted face recognition experiments were generated and presented under Matlab 7.11 using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Each gaze-restricted stimulus was created using two images. The first was one of 19 photographs from a frontal view of the white plastic face masks previously used in haptic experiments. The faces spanned 14.7 ± 1.2° visual angle in the vertical plane and 9.1 ± 0.5° visual angle in the horizontal plane and were presented on a black background spanning 36.9° visual angle in the horizontal plane and 28.1° visual angle in the vertical plane. The visual angle values were selected to yield an image size roughly equivalent to that obtained when viewing the faces (which were, on average, 9 cm wide and 12 cm high) at arm's length (50 cm) in the haptic setup. The second was a black image that was superimposed on the photograph. These two images were blended into each other via a Gaussian weight mask (an aperture). The mask was centered at the center of gaze, allowing for a smooth transition between the two images and uncovering a window of 2° visual angle of the underlying photograph. Again, the visual aperture uncovered an area equivalent to two fingers at arm's length, reflecting the most commonly used exploratory procedure by participants in the haptic face recognition experiments (see Figure 1 for examples of the stimuli used for gaze-restricted visual face recognition). 
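The blending step described above can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' implementation (the original display was written in Matlab with the Psychophysics Toolbox); function and parameter names are ours.

```python
import numpy as np

def gaussian_aperture(photo, gaze_xy, sigma_px):
    """Blend a grayscale face photograph with a black background through a
    Gaussian weight mask centered on the current gaze/mouse position.

    Only the region under the aperture remains visible; the weights fall
    off smoothly toward zero, so the photograph fades into the black
    background, as in the gaze-restricted display.
    """
    h, w = photo.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    # Gaussian weights, 1.0 at the gaze position, -> 0 away from it.
    weights = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma_px ** 2))
    # The black image contributes zero everywhere, so blending reduces
    # to scaling the photograph by the weight mask.
    return photo * weights
```

In the actual experiment, sigma would be chosen so that the aperture uncovers 2° of visual angle at the given viewing distance, and the mask would be re-centered on every mouse sample.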
Procedures and experimental designs
Participants performed the experiment in only one condition. Learning phase, identification task, and testing phase were conducted in the same modality, i.e., either in the haptic, gaze-restricted visual, or unrestricted visual modalities. The general design of the experiments was the same to ensure comparability. 
In the haptic and unrestricted visual experiments, the faces were positioned on a platform that was placed horizontally, on top of a fixed table. All faces could be rigidly fixed to this platform and were always presented from a frontal view. Participants used a chin rest that was placed 30 cm away from the stand on which the objects were presented. An opaque curtain that could be slid back to reveal faces for the visual experiments separated the participants from the stand. During haptic exploration of the faces, an armrest was provided to prevent exhaustion. 
In the gaze-restricted experiments, participants were seated about 60 cm away from a computer screen (21-inch CRT) resting their chin on a chin rest and used a mouse to move a Gaussian window that uncovered 2° of the photograph of the 3D face. Participants were instructed not to move the mouse rapidly back and forth, as such a strategy would have produced an effective visual field much larger than intended, since very rapid scanning differs little from a simultaneous full display (Ikeda & Uchikawa, 1978) by virtue of screen and visual persistence. Since we recorded the trajectories, we were able to control for this confound. 
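Because the aperture trajectories were recorded, rapid back-and-forth scanning can be screened for offline, e.g., by computing each trial's mean aperture speed and flagging trials above some threshold. The following sketch is our own illustration of such a check, not the authors' analysis code; the sampling interval and any threshold are hypothetical.

```python
import numpy as np

def mean_speed(trajectory, dt):
    """Mean aperture speed (pixels/s) from a recorded (x, y) trajectory
    sampled every dt seconds.

    Trials whose mean (or peak) speed exceeds a chosen threshold could
    be flagged as rapid scanning and inspected or excluded.
    """
    traj = np.asarray(trajectory, dtype=float)
    # Euclidean distance covered between successive samples.
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    # Total path length divided by total elapsed time.
    return steps.sum() / (dt * (len(traj) - 1))
```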
Before performing the experiment in the haptic and gaze-restricted conditions, we presented one stimulus and asked the naive participants to explore it and to report what kind of object they were dealing with. Every participant identified the stimulus correctly as a face. Participants were then familiarized with three upright faces (out of 19 total) that were randomly chosen from six sets of three faces each. We labeled each face with a short first name. They were told to explore the face masks carefully and to learn their names because they would be asked to recognize those particular faces later. No further information was given about the nature of the following experiment during the familiarization. 
In the subsequent identification task, participants had to name each randomly presented face mask after exploration. Feedback was provided in that participants were told whether the face was recognized correctly or not. Each face mask had to be identified correctly twice before the experiment continued. 
The old–new recognition task immediately followed the identification task and consisted of 3 blocks of 19 trials, corresponding to 3 old (learned) and 16 new faces (each object was shown once per block). This asymmetric design was chosen because of time constraints for haptic learning with one single experiment containing multiple blocks already taking close to 2 h (including breaks). Face masks were shown one at a time in random order with an ISI of 10 s in which the faces were exchanged. Participants were asked to explore each face mask and to report whether it was one of the three faces they had learned (old) or not (new). Although exploration time was unrestricted, they were instructed to respond as quickly and accurately as possible by pressing an “old” or “new” labeled key on a keyboard with their left hand. Participants took about 10 min to complete a haptic or gaze-restricted test block and 4 min for an unrestricted visual test block. No feedback was provided for the old–new recognition task. 
Results
Responses were converted to standard d′ scores. The means and standard error for each block and condition are shown in Figure 2. The results were analyzed using a 3 × 3 ANOVA with between-participants factor Modality (haptics, gaze-restricted vision, and unrestricted vision) and within-participants factor Block (1, 2, and 3) to directly compare performance in the three modalities. We found a main effect for Modality (F(2, 51) = 23.40, p < 0.001), a main effect for Block (F(2, 102) = 7.13, p < 0.01), and a significant interaction of Modality × Block (F(4, 102) = 3.63, p < 0.01). This interaction indicates that performance changes differently across blocks depending on the underlying modality and/or condition. 
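For reference, d′ for an old–new recognition task is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch follows; the log-linear correction for extreme rates is a common convention we assume here, not necessarily the authors' exact procedure.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Standard d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (adding 0.5 to each cell) guards against
    infinite z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance (hit rate equal to false-alarm rate) yields d′ = 0; better-than-chance discrimination yields d′ > 0.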
Figure 2
 
Plots comparing face recognition performance across test blocks for (A) haptic (H-WO), (B) gaze-restricted (GRV-WO), and (C) unrestricted visual face recognition without refreshing memory and (D) averaged performance across test blocks for each condition. Data are measured in mean d′ ± 1 Standard Error of the Mean (SEM).
One-tailed t-tests showed that performance was above chance for each block and each condition (H—Block 1: t(17) = 4.23, p < 0.001; Block 2: t(17) = 4.02, p < 0.001; Block 3: t(17) = 4.53, p < 0.001; GRV—Block 1: t(17) = 5.4, p < 0.001; Block 2: t(17) = 3.62, p < 0.01; Block 3: t(17) = 3.47, p < 0.01; UV—Block 1: t(17) = 13.12, p < 0.001; Block 2: t(17) = 5.93, p < 0.001; Block 3: t(17) = 7.9, p < 0.001). Post-hoc tests of the three modalities using Tukey's HSD showed that the unrestricted visual condition was different from both the gaze-restricted and haptic conditions (both p < 0.001), whereas the gaze-restricted and haptic conditions did not differ overall (p = 0.88). Analyses of the within-participants contrast in repeated measures ANOVAs for each condition showed that performance had a significant linear decrease across blocks in the gaze-restricted (F(1,17) = 11.14, p < 0.01) and haptic (F(1,17) = 4.66, p < 0.05) conditions. By contrast, we found no significant decrease in the unrestricted visual condition, although the data were perhaps suggestive of a trend (F(1,17) = 3.15, p = 0.09). Taken together, these results suggest that visual face recognition accuracy was significantly reduced by promoting serial encoding. Performance in the gaze-restricted condition was also not significantly different from haptic face recognition accuracy. 
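A within-participants linear contrast across three blocks amounts to weighting each participant's block scores by (−1, 0, +1) and testing whether the mean contrast differs from zero; a negative mean contrast indicates a linear decrease. The sketch below is our reconstruction of that kind of analysis for illustration, not the authors' code (which would have used the F-statistic from a repeated measures ANOVA).

```python
import math
from statistics import mean, stdev

def linear_trend_t(block_scores):
    """One-sample t-statistic on per-participant linear contrasts.

    block_scores: list of (block1, block2, block3) d' tuples, one per
    participant. The linear contrast with weights (-1, 0, +1) reduces
    to block3 - block1. Requires at least two participants with
    non-identical contrasts (otherwise stdev is zero).
    """
    contrasts = [b3 - b1 for (b1, _b2, b3) in block_scores]
    n = len(contrasts)
    return mean(contrasts) / (stdev(contrasts) / math.sqrt(n))
```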
Discussion
We found a clear advantage for unrestricted visual face recognition over gaze-restricted visual and haptic face recognition with no significant difference between the latter. More specifically, face recognition performance across the visual and haptic sense seems to have been equated by reducing the visual window to the narrowness of the effective field of view in haptics. However, we previously reported a memory effect for haptic face recognition in a cross-modal transfer task where haptic face recognition performance was significantly improved by refreshing memory (Dopjans et al., 2009). Since we found haptic and gaze-restricted face recognition to be at the same level and to decrease across blocks, we suggested that haptic and gaze-restricted face recognition might be equally impeded by limitations in memory. One possible reason for this may be higher memory demands for serial encoding of faces as, for example, suggested by Loomis et al. (1991), and therefore, both conditions might be aided by refreshing memory. Additional support for this comes from the debriefing of the participants who mentioned that it was hard for them to retain the information about the learned faces throughout the three blocks of the experiment in the serial encoding conditions. 
Experiment 2
To investigate the effects of memory on haptic and gaze-restricted face recognition, we repeated the experiment using a “with refreshing memory” version of the experimental design introduced in Experiment 1. 
Methods
Participants
Thirty-six experimentally naive participants (18 per condition, 18 men and 18 women, mean age of 24.3 years; all participants were university students) were paid 8 Euros per hour to perform the respective experiment. All participants reported normal or corrected-to-normal vision and had no sensory impairment. 
Stimuli
The same stimuli were used as in Experiment 1. 
Procedures and experimental designs
Participants performed the experiment in only one condition. Learning phase, identification task, and testing phase were conducted in the same modality, i.e., either in the haptic or gaze-restricted visual modality. The general design of the experiments was the same to ensure comparability. 
The design was the same as in Experiment 1 except that memory was refreshed by repeated exposure to the three learned faces. That is, the identification task was conducted before each test block. 
This experiment, therefore, comprised two different conditions: haptic (H-W) and gaze-restricted visual (GRV-W) face recognition with refreshing memory. 
Results
Responses were converted to standard d′ scores and averaged across test blocks. The means and standard error for each condition are shown in Figure 3. One-tailed t-tests showed that performance was above chance for each condition (H-W: t(17) = 7.06, p < 0.001; GRV-W: t(17) = 9.02, p < 0.001). The results were further analyzed using two-tailed t-tests to assess the effect of refreshing memory on haptic and gaze-restricted face recognition performance. We found no significant difference between haptic and gaze-restricted performance (H-W vs. GRV-W: t(34) = −0.25, p = 0.8). Finally, we compared the results to those from Experiment 1 (haptic (H-WO) and gaze-restricted vision (GRV-WO) without refreshing memory) using a 2 × 2 ANOVA with between-participants factors Modality (haptics and gaze-restricted vision) and Memory (with and without refreshing memory) to test directly how face recognition performance was affected by refreshing memory in the two modalities. We found a significant main effect for Memory (F(1, 68) = 15.67, p < 0.001) but no significant interaction of Memory × Modality (F(1, 68) = 0.01, p = 0.93) and no significant effect for Modality (F(1, 68) = 0.23, p = 0.63). Taken together, we therefore found the same pattern of results for haptic and gaze-restricted visual face recognition. 
Figure 3
 
Plots comparing face recognition performance for haptic (H-W) and gaze-restricted (GRV-W) face recognition with refreshing memory. Data are measured in mean d′ ± 1 Standard Error of the Mean (SEM).
Furthermore, reaction time data demonstrated the sequential nature of restricted viewing, as RTs for gaze-restricted visual face recognition were significantly slower than for unrestricted vision and at the same level as for haptic face recognition (H-WO: 19.96 ± 0.80 s; GRV-WO: 16.60 ± 0.97 s; UV: 4.29 ± 0.35 s; H-W: 20.03 ± 0.92 s; GRV-W: 16.30 ± 0.84 s). It is important to note, however, that long response times as observed in this haptic and gaze-restricted recognition task are difficult to interpret and only allow for limited conclusions. The data here are, thus, reported only for completeness' sake. 
Discussion
As in Experiment 1, we found that haptic and gaze-restricted exploration resulted in the exact same recognition pattern. Compared to the previous results, memory refreshing led to a similar increase in performance in both modalities. Even though nearly all participants mentioned in the debriefing for Experiment 1 having trouble remembering the faces in the haptic and restricted visual conditions, there were no complaints about this in the unrestricted visual condition in Experiment 1 or for the memory-refreshed conditions in the present experiment. Potential factors responsible for this effect include higher memory demands of serial encoding during the encoding phase, limitations of long-term memory for retaining the serially encoded representations, as well as a larger interference of test faces with stored faces in serially encoded representations (compared to faces encoded in the unrestricted condition). Compared to the unrestricted condition, the blocks also took longer to complete, which might also contribute to the observed decay in performance in the two serial conditions—however, the decrease in performance was also visible for the fastest participants in the haptic and gaze-restricted conditions, whose exploration times were almost comparable to those of the slower participants in the unrestricted condition. Given that the pattern of performance was highly similar across the two serial encoding conditions, however, one might speculate that modality-independent higher memory demands for serial encoding of faces equally affect gaze-restricted visual and haptic face recognition performance. For a more direct comparison, another experiment with memory refresh in the unrestricted visual condition would need to be run. In addition, to tease apart the contribution of the different factors mentioned above, more specific experimental manipulations would need to be implemented. 
Taken together, we found the exact same pattern of results for haptic and visual face recognition performance using a gaze-restricted display. Given these modality-specific encoding differences, the question arises as to how they might affect face processing strategies. 
Experiment 3
We used the face inversion paradigm to assess the effect of encoding differences on face processing strategies (featural vs. configural) in haptic (H), gaze-restricted (GRV), and unrestricted visual (UV) face recognition, by comparing recognition performance for upright (−U) vs. inverted (−I) faces in each modality. 
Methods
Participants
Fifty-four experimentally naive participants (18 per modality, 26 men and 28 women, mean age of 25 years; all participants were university students) were paid 8 Euros per hour to perform the respective experiment. All participants reported normal or corrected-to-normal vision and had no sensory impairment. 
Stimuli
The same stimuli were used as in Experiment 1. 
Procedures and experimental designs
Participants performed the experiment in only one modality (unrestricted visual, haptic, or gaze-restricted visual). There was no switch in modality between learning and testing phase. The general design of all experiments was the same to ensure comparability. 
The same setups were used as described above for Experiment 1 for presenting upright as well as inverted (upside-down) faces. Furthermore, the learning and identification phases were conducted with upright faces as described above for Experiment 1. 
Using a “with refreshing memory” design in all modalities, the old–new recognition task immediately followed the identification task and consisted of 3 blocks of 19 trials each, comprising the 3 old (learned) and 16 new faces (each face was shown once per block). In Block 1, participants were presented with upright faces, while inverted (upside-down) faces were presented in the second and third test blocks. 
In the old–new recognition task, face masks were shown one at a time in random order with an ISI of 10 s in which the faces were exchanged. Participants were asked to explore each face mask and to report whether it was one of the three faces they had learned (old) or not (new). Although exploration time was unrestricted, they were instructed to respond as quickly and accurately as possible by pressing an “old” or “new” labeled key on a keyboard with their left hand. Participants took about 10 min to complete a haptic or gaze-restricted test block and 4 min for an unrestricted visual test block. No feedback was provided for the old–new recognition task. 
Participants performed the experiment in only one modality, i.e., participants in the haptic modality learned faces haptically, proceeded to the identification task in the haptic modality, and then performed the haptic old–new recognition task with upright (Block 1) and inverted (Blocks 2 and 3) faces. 
Results
Responses were converted to standard d′ scores and averaged across the inverted test blocks. The means and standard errors for each condition are shown in Figure 4. One-tailed t-tests were used to verify that performance was above chance for each modality and orientation (H-U: t(17) = 4.29, p < 0.001; H-I: t(17) = 4.22, p < 0.001; GRV-U: t(17) = 5.33, p < 0.001; GRV-I: t(17) = 5.43, p < 0.001; UV-U: t(17) = 9.13, p < 0.001; UV-I: t(17) = 3.38, p < 0.01). As a second step, we performed a 3 × 2 factorial ANOVA with Modality (haptic, gaze-restricted, unrestricted visual) as between-participants factor and Face Orientation (upright, inverted) as within-participants factor. While there was no significant main effect of Modality (F(2, 51) = 0.09, p = 0.91), we found a significant main effect of Face Orientation (F(1, 51) = 4.89, p < 0.05). Most importantly, however, we found a highly significant interaction between the two factors (F(2, 51) = 16.23, p < 0.001). Two-tailed t-tests revealed that face recognition performance in the upright orientation was significantly better in the unrestricted visual modality than in the haptic and gaze-restricted conditions (UV-U vs. H-U: t(34) = 2.49, p < 0.05; UV-U vs. GRV-U: t(34) = 2.91, p < 0.01; H-U vs. GRV-U: t(34) = 0.07, p = 0.94). More interestingly, however, we found a clear inversion effect in the unrestricted visual modality, while no decrease in recognition accuracy was found in either the haptic or the gaze-restricted modality (UV-U vs. UV-I: t(17) = 6.14, p < 0.001; H-U vs. H-I: t(17) = −0.74, p = 0.47; GRV-U vs. GRV-I: t(17) = −1.23, p = 0.24). Consequently, in the inverted orientation, performance was significantly better in the haptic and gaze-restricted modalities than in the unrestricted visual modality (UV-I vs. H-I: t(34) = −3.40, p < 0.01; UV-I vs. GRV-I: t(34) = −3.43, p < 0.01; H-I vs. GRV-I: t(34) = −0.06, p = 0.52). 
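For concreteness, d′ in an old–new task is the difference between the z-transformed hit rate and false-alarm rate. The following is a minimal sketch of that calculation; the +0.5 count correction for extreme rates is our assumption, as the correction used in the original analysis is not reported.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate) for an old-new task.

    A +0.5 count correction is applied so that perfect hit or
    false-alarm rates do not produce infinite z-scores; this choice
    is an assumption, not the paper's documented procedure.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)
```

With the 3 old and 16 new faces of a test block, an error-free block yields d′ of roughly 3 under this correction, while responding at chance yields d′ near 0.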
Figure 4
 
Plot comparing face recognition performance for upright and inverted faces in haptic, gaze-restricted, and unrestricted visual recognition. Data are shown as mean d′ ± 1 standard error of the mean (SEM).
Discussion
Using our 3D face masks, we found a strong face inversion effect for unrestricted visual recognition but no significant difference in performance between upright and inverted faces in either the haptic or the gaze-restricted condition. We found evidence for sensitivity to first-order relations, as all participants recognized the stimulus as a face. The lack of a face inversion effect in the two serial conditions may be explained by participants adopting a feature-based strategy for haptic and gaze-restricted face recognition, in which facial features were processed individually and sequentially. In contrast, in the unrestricted visual condition, participants clearly seem to have benefited from a configural processing strategy in the old–new recognition task, as evidenced by the high recognition performance and the clear inversion effect. 
Analysis of gaze-restricted exploration
Exploratory patterns from the gaze-restricted visual condition strongly support the use of a feature-based strategy in vision when serial encoding is enforced (Figure 5). For an initial analysis, we pooled the trajectories, smoothed with a Gaussian window, to create heat maps resembling those used to visualize eye tracking data. The heat maps shown in Figure 5 demonstrate (1) that participants focused on single features rather than moving the window around extensively (a strategy that might be more consistent with obtaining second-order relations, as it might yield more direct information about inter-featural distances) and (2) that exploratory patterns were generally similar in the upright and inverted conditions, i.e., there was no clear and systematic difference between conditions and orientations in terms of the location of gaze “fixations.” The latter finding is well illustrated by Figure 5B, in which averaged exploration trajectories are shown for one stimulus: eyes, nose, mouth, and ears attract much more interest than other parts of the face in both conditions. 
Figure 5
 
Heat maps illustrating the relative number of fixations per trial at a given screen position on the stimuli, for an upright and an inverted trial, for (A) a single participant and (B) averaged across all participants for one stimulus. Fixation positions were smoothed using a Gaussian filter with a sigma corresponding to 2° of visual angle in order to account for positional variability when fixating a given point.
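The heat-map construction described above can be sketched as follows: fixation positions are accumulated into a 2D grid and then smoothed with a separable Gaussian kernel. This is an illustrative sketch only; the grid size and the kernel width in pixels (2° at an assumed 40 pixels/degree) are our assumptions, not the exact analysis parameters.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D Gaussian kernel, truncated at +/- 3 sigma, normalized to sum to 1."""
    half = int(3 * sigma)
    xs = np.arange(-half, half + 1)
    k = np.exp(-xs**2 / (2.0 * sigma**2))
    return k / k.sum()

def heat_map(fix_x, fix_y, shape=(600, 600), sigma_px=80.0):
    """Accumulate fixation positions into a grid and smooth it with a
    separable Gaussian, as is common for eye-tracking heat maps.
    shape and sigma_px (2 deg at an assumed 40 px/deg) are
    illustrative values, not the paper's documented settings."""
    grid = np.zeros(shape)
    for x, y in zip(fix_x, fix_y):
        grid[int(round(y)), int(round(x))] += 1.0
    k = gaussian_kernel(sigma_px)
    # separable smoothing: convolve every row, then every column
    grid = np.apply_along_axis(np.convolve, 1, grid, k, mode="same")
    grid = np.apply_along_axis(np.convolve, 0, grid, k, mode="same")
    return grid
```

Because the kernel is normalized, the smoothed map conserves the total fixation count (away from the image borders), so maps from different participants can be pooled by simple addition before averaging.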
Considering the small differences between conditions, we subjected the fixation data to a simple analysis in terms of duration and number. For this, we first extracted hand-driven fixations from the recorded trajectories in the gaze-restricted visual condition, following the same logic as for eye movement fixations: Fixations were defined as consecutive data points lying within the 2° visual angle window (40 pixels/degree) for at least 500 ms, following a “mouse saccade.” A saccade was registered whenever the velocity difference between two consecutive time windows exceeded a pre-defined acceleration criterion (defined as the window size, again 40 pixels/degree, multiplied by an acceleration threshold of 4 degrees/s²). 
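A simplified version of this segmentation can be sketched as a dispersion-based criterion: runs of samples that stay within the 2° window for at least 500 ms count as fixations. Note that this sketch replaces the acceleration-based saccade test described above with a plain distance check, and the parameter values are assumptions for illustration.

```python
import math

def detect_fixations(xs, ys, ts, radius_px=40.0, min_dur=0.5):
    """Return (start_time, end_time) pairs for 'fixations': runs of
    trajectory samples staying within radius_px of the run's first
    sample for at least min_dur seconds.

    A dispersion-style simplification of the criterion in the text
    (2 deg window at an assumed 40 px/deg, >= 500 ms); the original
    acceleration-based saccade test is not reproduced here.
    """
    fixations = []
    i, n = 0, len(xs)
    while i < n:
        j = i
        # extend the run while samples stay inside the window
        while j + 1 < n and math.hypot(xs[j + 1] - xs[i], ys[j + 1] - ys[i]) <= radius_px:
            j += 1
        if ts[j] - ts[i] >= min_dur:
            fixations.append((ts[i], ts[j]))
        i = j + 1
    return fixations
```

The per-trial measures analyzed below then follow directly: the number of fixations is `len(fixations)` and the total fixation duration is the sum of `end - start` over the returned pairs.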
We used two-sided t-tests to compare the duration and number of fixations in gaze-restricted face recognition between the without (number: 3.97 ± 0.18; duration: 9.62 ± 0.54 s) and with refreshing memory conditions (number: 3.44 ± 0.1; duration: 8.07 ± 0.43 s), as well as between upright (number: 5.56 ± 0.17; duration: 12.68 ± 0.62 s) and inverted (number: 5.31 ± 0.14; duration: 12.2 ± 0.58 s) faces. We found no significant difference in the number of fixations (t(34) = −0.29, p = 0.39) or in their duration (t(34) = −1.22, p = 0.1) between the without and with refreshing memory conditions. 
As with face recognition accuracy, we found no significant difference in the number of fixations (t(17) = −1.59, p = 0.13) or in their duration (t(17) = 0.74, p = 0.47) between upright and inverted faces, supporting the interpretation that participants used the same feature-based strategy for recognizing upright and inverted faces in gaze-restricted face recognition. 
While normal observers in unrestricted visual face recognition usually show a preference for the eye region when viewing upright faces (see below; Goldstein & Mackenberg, 1966; McKelvie, 1976; Sergent, 1984; Tanaka & Farah, 1993; Walker Smith, 1978), gaze-restricted viewing revealed a preference for the mouth area for both upright and inverted faces. Interestingly, similar preferences for the mouth region in face exploration have been reported for haptic exploration (Lederman, Klatzky, & Kitada, 2010) as well as for patient groups with face recognition deficits, such as prosopagnosic patient L.R. (Bukach, Bub, Gauthier, & Tarr, 2006) and individuals with autism (Joseph & Tanaka, 2003; Klin, Jones, Schultz, Volkmar, & Cohen, 2002; Langdell, 1978). Eye tracking studies of unrestricted visual face recognition have shown that the eyes are looked at more frequently than any other facial part when faces are presented in a natural (upright) way (Barton, Radcliffe, Cherkasova, Edelman, & Intriligator, 2006; Dahl et al., 2009; Emery, 2000; Farroni, Csibra, Simion, & Johnson, 2002; Sergent, 1984; Tanaka & Farah, 1993). In contrast to upright faces, face inversion has been shown to lead to a drastic loss of eye preference in human faces (Dahl et al., 2009). The saliency of the eye region has, therefore, been argued not to be due to low-level appearance but to be driven by higher level expectations based on the spatial configuration of the face (Guo, Robertson, Mahmoodi, Tadmor, & Young, 2003). Additionally, a high proportion of eye fixations is, to some extent, believed to be indicative of holistic face processing (Dahl, Logothetis, & Hoffman, 2007). Note that, whereas most studies support our view of the exploration pattern, there are also studies that do not find very clear differences in eye movement patterns during inversion (Rodger, Kelly, Blais, & Caldara, 2010; Williams & Henderson, 2007). 
Further analysis of the exploration pattern and comparison with eye tracking data will, thus, be necessary to go into more depth about the characteristic differences between serial and “holistic” exploration. 
General discussion
Research on the nature of haptic encoding has highlighted an important distinction regarding the relative effectiveness with which the visual and haptic systems encode objects, more specifically, faces. While faces are encoded holistically in vision (that is, all aspects of an image are processed in parallel; Maurer et al., 2002), haptic encoding is limited to serial exploration of an object due to its narrow effective field of view (that is, it involves a feature-by-feature analysis; Loomis et al., 1991; Loomis & Lederman, 1986). Given these modality-specific encoding differences, the question arises as to how they might affect high-level cognitive tasks such as face recognition. As discussed in the Introduction section, visual expert face processing is characterized by configural processing. If information gained through serial haptic encoding can be accurately integrated into a more global representation (e.g., Lakatos & Marks, 1999), haptic face processing might also benefit from configural processing. If not, haptic face processing should rely more on featural information. 
Here, we studied the effects of modality-specific encoding differences in visual and haptic face recognition in terms of recognition accuracy as well as information processing strategies. We found that face recognition was equally disrupted using gaze-restricted vision and haptics as compared to unrestricted vision. Most importantly, we found no inversion effect for serially encoded faces, in contrast to unrestricted visual exploration. These findings indicate that face processing is impeded by serial encoding, even when participants have control over the information that they view through the aperture. In addition, using gaze-restricted vision increased the time necessary to recognize a face relative to unrestricted visual face recognition. More specifically, the time required to recognize faces using gaze-restricted vision was similar to that for haptic face recognition. The exploration pattern during gaze-restricted face exploration focused mainly on single features (mostly in the top half of the face), which is consistent with previous investigations of the relative importance of different internal face features for recognition (Haig, 1986; Schyns, Bonnar, & Gosselin, 2002; Sekuler, Gaspar, Gold, & Bennett, 2004; Yarbus, 1967), while differing from unrestricted vision in the time scale of feature integration (that is, sequential rather than simultaneous). 
Our experiments yield the exact same pattern of results for haptic and visual face recognition performance using a gaze-restricted display. Face recognition performance across the visual and haptic senses was equated by reducing the visual window to the narrowness of the effective field of view in haptics (due to a decrease in visual face recognition accuracy as compared to unrestricted visual face recognition). In addition, recognition performance is low in the without refreshing memory condition in haptic and gaze-restricted face recognition. At the same time, it is significantly improved when memory for the three learned faces is refreshed before each test block. Although performance across blocks in the unrestricted visual condition does not decrease significantly, the question of whether there are higher memory demands for serial encoding will need to be addressed by another follow-up experiment in which visual memory is also refreshed in the unrestricted condition. If performance stayed at similar levels, one might attribute the differences in performance to serial encoding, that is, to the temporal integration of serially gained information for face recognition in both modalities. This finding would be in line with a previous study in which recognition performance across the visual and haptic senses was equated in 2D picture recognition by reducing the visual window to the narrowness of the effective field of “view” in haptics (Loomis et al., 1991). The authors concluded that recognition performance in restricted field-of-view conditions was, indeed, impeded by limitations in memory or in the integration process. 
Furthermore, with faces presented in the unrestricted visual condition, we replicated the well-known face inversion effect in our task, as previously observed in several behavioral studies: With the exact same stimuli to recognize, participants performed significantly better with upright than inverted faces. More interestingly, however, we failed to find such an inversion effect for haptic or gaze-restricted face recognition. While we found evidence for sensitivity to first-order relations, as all participants recognized the stimulus as a face, the lack of a face inversion effect indicates that participants adopted a feature-based strategy for haptic and gaze-restricted face recognition. In the unrestricted visual modality, however, participants clearly seem to have benefited from the use of a configural processing strategy in the old–new recognition task. 
Our findings agree well with a recent study by Van Belle et al. (2010), in which the authors used a similar gaze-contingent stimulus presentation method to study the visual face inversion effect. They compared participants' face discrimination performance on (1) faces presented in full view, (2) with only the central window of vision revealed, and (3) with only the fixated feature masked by means of an eye-contingent mask. Similar to our results, they found a face inversion effect for faces presented in full view but none when observers had their vision constrained by an aperture. The authors, therefore, concluded that the inversion effect is not caused primarily by a difficulty in perceiving local detailed facial features but by the observers' inability to simultaneously extract diagnostic information at different locations on an inverted face, i.e., that holistic face perception is impaired for inverted faces. Our results confirm and extend these findings by providing the first direct comparison between serial exploration in visual and haptic processing of faces. 
As mentioned in the Introduction section, Kilgour and Lederman (2006) previously used a haptic face inversion paradigm to study orientation sensitivity of haptic face processing and found a strong inversion effect for faces. An important difference to our study, however, lies in the task chosen to investigate orientation specificity for haptically explored faces. While we used an old–new recognition task, Kilgour and Lederman (as well as Lakatos and Marks, 1999, who studied configural processing in haptics for non-face objects) chose a 2AFC same–different face discrimination task. As our results have shown, haptic memory for complex objects such as faces might be rather brittle without memory refreshes. In a same–different paradigm, the comparison of two face representations happens in short succession in each trial, so that the two encoded representations might be able to retain the complex configural information. In contrast, in the old–new paradigm, a current haptic face representation needs to be compared to many representations stored in long-term memory. Since potential working memory effects due to the serial encoding process itself are similar in both tasks, long-term memory effects (such as limited storage and/or faster decay of serially encoded face representations) may contribute to the old–new task being more prone to errors than the same–different task; future studies are needed, however, to assess the differences between the two tasks in more detail. 
Moreover, McGregor, Klatzky, Hamilton, and Lederman (2010) recently investigated the use of configural versus feature-based processing in haptic identity classification of upright versus inverted versus scrambled faces using 2D raised-line displays. While performance was rather low overall, they found that upright and scrambled faces produced equivalent accuracy, and both were identified more accurately than inverted faces. While this finding was taken to indicate that the upright orientation was “privileged” in the haptic representation, the authors suggested that the effect of scrambling argued against the use of configural information. Scrambling alters an object-centered description of a face; it also changes a body-centered description of the face as a configuration, while maintaining the local features in their normal, upright orientation within this body-based frame of reference. The fact that there was no statistical difference in accuracy between upright and scrambled faces was, thus, taken to indicate that configural information was not used to haptically process facial identity in raised-line drawings. However, one potential problem with this study is that no direct visual comparison was run; indeed, it remains to be seen whether an inversion effect would also be obtained for the raised-line displays of that study. In contrast, in the present work, we have used more life-like three-dimensional face masks as stimuli. These yielded a clear and strong inversion effect for unrestricted visual exploration but not for restricted exploration in either the haptic or the visual domain. 
Our findings are, therefore, interesting in light of our recent study investigating cross-modal transfer in visual and haptic face recognition in which we observed asymmetric cross-modal face information transfer (a cost in vision-to-haptics but none in haptics-to-vision transfer resulting in equal cross-modal recognition performance; Dopjans et al., 2009). If haptics encodes faces on the basis of features, then the visual recognition of a face from its feature-based haptic representation may have been less efficient than from a holistic, visual representation. Conversely, haptic recognition quite likely does not benefit from the holistic information encoded by vision and might, therefore, be limited by the use of feature-based information. The observed performance differences in visual and haptic face recognition might, therefore, be attributed to qualitative differences in information processing due to modality-specific encoding differences. Further research, such as the comparison of exploratory procedures in unrestricted vision, haptics, and gaze-restricted vision using eye tracking and motion capture, is, however, necessary to thoroughly study differences in encoding strategies in these modalities. 
Furthermore, within the ongoing face-specificity versus expertise debate on configural information processing, our results may lend indirect support to the idea that information processing is expertise dependent. While many studies argue that faces represent a special class of objects in that certain types of information processing, in particular configural processing of facial features, are specific to face perception (e.g., Kanwisher, 2000), other findings suggest that they are a matter of expertise (e.g., Diamond & Carey, 1986; Gauthier & Tarr, 1997). Our results suggest that both haptics and gaze-restricted vision are limited to serial information encoding and process feature-based face information, while unrestricted vision processes configural information. 
Finally, inasmuch as we have little to no training in haptic or gaze-restricted visual face recognition throughout life, it is possible that participants could develop strategies to compensate for the processing differences introduced by serial encoding. Further research is required to elucidate the role of expertise in face recognition, for example, a training study investigating whether encoding-dependent differences in information processing strategies can be overcome with the acquisition of gaze-restricted face recognition expertise. In general, continued investigation of information encoding and processing of faces using gaze-restricted vision and haptic exploration will further our understanding of the cognitive and neural processes underlying human face recognition. 
Acknowledgments
This research was supported by a Ph.D. stipend from the Max Planck Society and by the World Class University (WCU) program through the National Research Foundation of Korea funded by the Ministry of Education, Science, and Technology (R31-1008-000-10008-0). We thank the anonymous reviewers for their insights and for helping to cross-check some of the statistical methods in this paper. 
Commercial relationships: none. 
Corresponding author: Christian Wallraven. 
Email: wallraven@korea.ac.kr. 
Address: Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga, Seongbuk-gu, Seoul 136-713, Korea. 
Footnotes
1  Due to technical constraints, the experiments were conducted with smaller-than-life face masks. We have, however, previously shown that stimulus size does not significantly affect haptic recognition performance in a same–different task (Dopjans et al., 2009), as well as for a subset of the same faces tested here in an old–new recognition task in pilot experiments.
2  In the following, we report only d′ scores. However, we checked the percent correct values in this and all following experiments to control for possible floor or ceiling effects. No condition in any experiment showed any sign of these effects.
References
Alvarez G. A. Cavanagh P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15, 106–111. [CrossRef] [PubMed]
Baron M. (2008). Haptic classification of facial expressions of emotion on 3D facemasks and the absence of a haptic face-inversion effect. Unpublished BSc (honors) thesis, Queen's University.
Barton J. J. Radcliffe N. Cherkasova M. V. Edelman J. Intriligator J. M. (2006). Information processing during face recognition: The effects of familiarity, inversion, and morphing on scanning fixations. Perception, 35, 1089–1105.
Bliss I. Hämäläinen H. (2005). Different working memory capacity in normal young adults for visual and tactile letter recognition task. Scandinavian Journal of Psychology, 46, 247–251. [CrossRef] [PubMed]
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Bukach C. M. Bub D. N. Gauthier I. Tarr M. J. (2006). Perceptual expertise effects are not all or none: Spatially limited perceptual expertise for faces in a case of prosopagnosia. Journal of Cognitive Neuroscience, 18, 48–63. [CrossRef] [PubMed]
Carey S. Diamond R. (1977). From piecemeal to configurational representation of faces. Science, 195, 312–314. [CrossRef] [PubMed]
Casey S. J. Newell F. N. (2007). Are representations of unfamiliar faces independent of encoding modality? Neuropsychologia, 45, 506–513. [CrossRef] [PubMed]
Cooke T. Jäkel F. Wallraven C. Bülthoff H. H. (2007). Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia, 45, 484–495. [CrossRef] [PubMed]
Dahl C. D. Logothetis N. K. Hoffman K. L. (2007). Individuation and holistic processing of faces in rhesus monkeys. Proceedings of the Royal Society: Biological Sciences, 274, 2069–2076. [CrossRef]
Dahl C. D. Wallraven C. Bülthoff H. H. Logothetis N. K. (2009). Humans and macaques employ similar face-processing strategies. Current Biology, 19, 509–513. [CrossRef] [PubMed]
Desimone R. (1996). Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences of the United Stated of America, 93, 13494–13499. [CrossRef]
Diamond R. Carey S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117. [CrossRef] [PubMed]
Dopjans L. Wallraven C. Bülthoff H. H. (2009). Cross-modal transfer in visual and haptic face recognition. IEEE Transaction on Haptics, 2, 236–240. [CrossRef]
Emery N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews, 24, 581–604. [CrossRef]
Farroni T. Csibra G. Simion F. Johnson M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences of the United States of America, 99, 9602–9605. [CrossRef] [PubMed]
Gaissert N. Wallraven C. Bülthoff H. H. (2010). Visual and haptic perceptual spaces show high similarity in humans. Journal of Vision, 10(11):2, 1–20, http://www.journalofvision.org/content/10/11/2, doi:10.1167/10.11.2. [PubMed] [Article] [CrossRef] [PubMed]
Gauthier I. Tarr M. J. (1997). Becoming a ‘Greeble’ expert: Exploring the face recognition mechanism. Vision Research, 37, 1673–1682. [CrossRef] [PubMed]
Goldstein A. G. Mackenberg E. J. (1966). Recognition of human faces from isolated facial features: A developmental study. Psychonomic Science, 6, 149–150. [CrossRef]
Guo K. Robertson R. G. Mahmoodi S. Tadmor Y. Young M. P. (2003). How do monkeys view faces? A study of eye movements. Experimental Brain Research, 150, 363–374. [PubMed]
Haig N. D. (1986). Exploring recognition with interchanged facial features. Perception, 15, 235–247. [CrossRef] [PubMed]
Hay D. C. Cox R. (2000). Developmental changes in the recognition of faces and facial features. Infant and Child Development, 9, 199–212. [CrossRef]
Ikeda M. Uchikawa K. (1978). Integrating time for visual pattern perception and a comparison with the tactile mode. Vision Research, 18, 1565–1571. [CrossRef] [PubMed]
Joseph R. M. Tanaka J. (2003). Holistic and part-based face recognition in children with autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 44, 529–542. [CrossRef]
Kanwisher N. (2000). Domain specificity in face recognition. Nature Neuroscience, 3, 759–763. [CrossRef] [PubMed]
Kilgour R. deGelder B. Lederman S. J. (2004). Haptic face recognition and prosopagnosia. Neuropsychologia, 42, 707–712. [CrossRef] [PubMed]
Kilgour R. Lederman S. J. (2002). Face recognition by hand. Perception & Psychophysics, 64, 339–352. [CrossRef] [PubMed]
Kilgour R. Lederman S. J. (2006). A haptic face-inversion effect. Perception, 35, 921–931. [CrossRef] [PubMed]
Klin A. Jones W. Schultz R. Volkmar F. Cohen D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59, 809–816. [CrossRef] [PubMed]
Knecht S. Kunesch E. Schnitzler A. (1996). Parallel and serial processing of haptic information in man: Effects of parietal lesions on sensorimotor hand function. Neuropsychologia, 34, 669–687. [CrossRef] [PubMed]
Lakatos S. Marks L. (1999). Haptic form perception: Relative salience of local and global features. Perception & Psychophysics, 6, 895–908. [CrossRef]
Langdell T. (1978). Recognition of faces: An approach to the study of autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 19, 255–268. [CrossRef]
Leder H. Bruce V. (2000). When inverted faces are recognized: The role of configural information in face recognition. The Quarterly Journal of Experimental Psychology, 53, 513–536. [CrossRef] [PubMed]
Lederman S. J. Klatzky R. L. Kitada R. (2010). Haptic face processing and its relation to vision. In Kaiser J. Naumer M. J. (Eds.), Multisensory object perception in the primate brain (pp. 273–300). New York: Springer.
Loomis J. M. Klatzky R. L. Lederman S. J. (1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20, 167–177. [CrossRef] [PubMed]
Loomis J. M. Lederman S. J. (1986). Tactual perception. In Boff, K. R. Kaufman L. Thomas J. P. (Eds.), Handbook of perception and human performances: Cognitive processes and performance (vol. 2, pp. 31/1–31/41). New York: Wiley.
Luck S. J. Vogel E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. [CrossRef] [PubMed]
Maurer D. LeGrand R. Mondloch C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260. [CrossRef] [PubMed]
McGregor T. A. Klatzky R. L. Hamilton C. Lederman S. J. (2010). Haptic classification of facial identity in 2D displays: Configural versus feature-based processing. IEEE Transaction on Haptics, 3, 48–55. [CrossRef]
McKelvie S. J. (1976). The role of eyes and mouth in the memory of a face. American Journal of Psychology, 89, 311–323. [CrossRef]
Millar S. Al-Attar Z. (2004). External and body-centered frames of reference in spatial memory: Evidence from touch. Perception & Psychophysics, 66, 51–59. [CrossRef] [PubMed]
Mondloch C. J. Geldart S. Maurer D. Le Grand R. (2003). Developmental changes in face processing skills. Journal of Experimental Child Psychology, 86, 67–84. [CrossRef] [PubMed]
Norman J. F. Norman H. F. Clayton A. M. Lianekhammy J. Zielke G. (2004). The visual and haptic perception of natural object shape. Perception & Psychophysics, 66, 342–351. [CrossRef] [PubMed]
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [CrossRef] [PubMed]
Pellicano E. Rhodes G. (2003). Holistic processing of faces in preschool children and adults. Psychological Science, 14, 618–622. [CrossRef] [PubMed]
Pietrini P. Furey M. L. Ricciardi E. Gobbini M. I. Wu W.-H. C. Cohen L. et al. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences of the United States of America, 101, 5658–5663. [CrossRef] [PubMed]
Rodger H. Kelly D. J. Blais C. Caldara R. (2010). Inverting faces does not abolish cultural diversity in eye movements. Perception, 39, 1491–1502. [CrossRef] [PubMed]
Schwaninger A. Wallraven C. Cunningham D. W. Chiller-Glaus S. (2006). Processing of identity and emotion in faces: A psychophysical, physiological and computational perspective. Progress in Brain Research, 156, 321–343. [PubMed]
Schwarzer G. (2000). Development of face processing: The effect of face inversion. Child Development, 71, 391–401. [CrossRef] [PubMed]
Schyns P. G. Bonnar L. Gosselin F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13, 402–409. [CrossRef] [PubMed]
Searcy J. H. Bartlett J. C. (1996). Inversion and processing of component and spatial-relational information of faces. Journal of Experimental Psychology: Human Perception and Performance, 22, 43–47. [CrossRef]
Sekuler A. B. Gaspar C. M. Gold J. M. Bennett P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14, 391–396. [CrossRef] [PubMed]
Sergent J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242. [CrossRef] [PubMed]
Squire L. R. (1992). Declarative and nondeclarative memory: Multiple brain systems supporting learning and memory. Journal of Cognitive Neuroscience, 4, 232–243. [CrossRef] [PubMed]
Tanaka J. W. Farah M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46A, 225–245.
Tanaka J. W. Sengco J. (1997). Features and their configuration in face recognition. Memory and Cognition, 25, 583–592. [CrossRef] [PubMed]
Troje N. F. Bülthoff H. H. (1996). Face recognition under varying poses: The role of texture and shape. Vision Research, 36, 1761–1771. [CrossRef] [PubMed]
Valentine T. (1988). Upside-down faces: A review of the effects of inversion upon face recognition. British Journal of Psychology, 79, 471–491. [CrossRef] [PubMed]
Van Belle G. De Graef P. Verfaillie K. Rossion B. Lefèvre P. (2010). Face inversion impairs holistic perception: Evidence from gaze-contingent stimulation. Journal of Vision, 10(5):10, 1–13, http://www.journalofvision.org/content/10/5/10, doi:10.1167/10.5.10. [PubMed] [Article] [CrossRef] [PubMed]
Vogel E. K. Woodman G. F. Luck S. J. (2001). Storage of features, conjunctions and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 27, 92–114. [CrossRef] [PubMed]
Walk R. D. Pick H. L., Jr. (1981). Intersensory perception and sensory integration. New York: Plenum.
Walker-Smith G. J. (1978). The effects of delay and exposure duration in a face recognition task. Perception & Psychophysics, 24, 63–70. [CrossRef]
Williams C. C. Henderson J. M. (2007). The face inversion effect is not a consequence of aberrant eye movements. Memory & Cognition, 35, 1977–1985. [CrossRef] [PubMed]
Yarbus A. L. (1967). Eye movements and vision. New York: Plenum Press.
Yin R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145. [CrossRef]
Figure 1
 
(A) Experimental setup used for haptic and unrestricted visual face recognition. (B) Demonstration of the gaze-restricted display: The red circle indicates the size of the aperture. Only the part of the image inside the aperture was visible, as indicated by the brightness difference between the regions inside and outside the aperture. The aperture, subtending 2° of visual angle, was moved over the frontal photograph of the face mask. (C) Example of a recorded trajectory during gaze-restricted face recognition.
Figure 2
 
Plots comparing face recognition performance across test blocks for (A) haptic (H-WO), (B) gaze-restricted (GRV-WO), and (C) unrestricted visual face recognition without refreshing memory, and (D) averaged performance across test blocks for each condition. Data are shown as mean d′ ± 1 standard error of the mean (SEM).
Figure 3
 
Plots comparing face recognition performance for haptic (H-W) and gaze-restricted (GRV-W) face recognition with refreshing memory. Data are shown as mean d′ ± 1 standard error of the mean (SEM).
Figure 4
 
Plot comparing face recognition performance for upright and inverted faces for haptic, gaze-restricted, and unrestricted visual recognition. Data are shown as mean d′ ± 1 standard error of the mean (SEM).
Figure 5
 
Heat maps illustrating the relative number of fixations per trial at each screen position on the stimuli for an upright and an inverted trial, (A) for a single participant and (B) averaged across all participants for one stimulus. Fixation positions were smoothed with a Gaussian filter whose sigma corresponded to 2° of visual angle, to account for the variability in fixation position when fixating a given point.