Research Article  |   December 2008
Orientation sensitivity of the N1 evoked by letters and digits
Branka Milivojevic, Michael C. Corballis, Jeff P. Hamm

Journal of Vision 2008;8(10):11.
Orientation sensitivity of the amplitude and latency of the P1 and the N1 was investigated while participants performed letter–digit category judgments and red–blue color judgments. These two tasks were used in order to ascertain whether the orientation effects reflect access to object identity, which would be necessary for category, but not color, judgments. Character misorientation significantly affected both the latency and the amplitude of the N1, but not the P1, component. The N1 amplitude increased gradually up to 90°, then leveled off up to 150°, and dipped somewhat for 180°. The effect of orientation on N1 latency differed between the hemispheres, with a quadratic function characterizing the effect of orientation on the left, and linear and quadratic trends characterizing the effect on the right. The effects of orientation were attributed to perceptual learning rather than object recognition, and the hemispheric differences in N1 latency suggest feature-based processing in the left hemisphere and holistic processing in the right.

As our position changes in relation to objects in the environment, so does the image that those objects project on our retinas. Despite widely different retinal images, we maintain object constancy and seemingly recognize most objects with relative ease irrespective of their orientation. There are, however, circumstances in which changes in orientation affect recognition. 
One such example is the effect of inversion on face recognition. A well-established finding is that stimulus inversion impairs recognition of faces more so than that of other objects (Yin, 1969). However, impaired recognition of inverted non-face objects, such as natural scenes (Epstein, Higgins, Parker, Aguirre, & Cooperman, 2006), body postures (Reed, Stone, Bozova, & Tanaka, 2003), and body movements (Jokisch, Daum, Suchan, & Troje, 2005), has also been reported. Furthermore, the effect of inversion on recognition seems to develop as a function of visual expertise with a stimulus category. For example, Husk, Bennett, and Sekuler (2007) have shown that training individuals to recognize houses and textures results in increased deficits in recognition of those objects when inverted. 
The traditional interpretation of the face-inversion effect is that face recognition relies more heavily than other object recognition on information about the spatial configuration between and within the constituent features, the so-called configural information, and that this type of information is disrupted by changes in orientation (e.g., Bruce, Doyle, Dench, & Burton, 1991; Carey, 1992; Farah, Tanaka, & Drain, 1995; Freire, Lee, & Symons, 2000; Tanaka & Farah, 1993). Evidence of poorer recognition for inverted than upright non-face objects is often interpreted as evidence that recognition of those object classes also depends on configural information (Epstein et al., 2006; Jokisch et al., 2005; Reed et al., 2003). 
Nevertheless, there is evidence that fingerprint experts use configural information for fingerprint recognition (Busey & Vanderkolk, 2005). It is therefore possible that visual expertise with an object category results in configural processing of those objects, which may in turn be a more efficient strategy than feature-based recognition, often offered as the alternative object-recognition strategy (e.g., Tanaka & Farah, 1993). However, poorer recognition of inverted, compared to upright, objects does not necessarily imply that configural processing is involved (see Husk et al., 2007). 
The electrophysiological correlate of face inversion is a delay, and sometimes amplification, of a negative component of the visual-evoked potential (VEP) distributed over occipito-temporal electrodes at about 170 ms after stimulus onset—the so-called N170 (e.g., Rebai, Poiroux, Bernard, & Lalonde, 2001; Rossion et al., 2000; Rossion, Joyce, Cottrell, & Tarr, 2003). A similar effect of inversion on the latency of the N170 evoked by fingerprints has been reported in fingerprint experts, but not non-experts (Busey & Vanderkolk, 2005). Busey and Vanderkolk (2005) interpreted the similarity of the ERP correlate of inversion for faces and fingerprints in experts as further evidence that configural processing is involved in expert fingerprint recognition. However, stimulus inversion affected recognition of faces but not fingerprints, for both fingerprint experts and non-experts. It is therefore not entirely clear whether increases in the latency of the N170 with stimulus inversion necessarily correlate with poorer recognition. 
Configural processing is often contrasted with feature-based processing. However, the term configural processing does not necessarily reflect a unified concept. For example, Maurer, Le Grand, and Mondloch (2002) have identified three types of configural processing of faces: processing based on first-order relations, holistic processing, and processing based on second-order relations. Holistic processing is marked by the binding of constituent features into a gestalt, while first-order and second-order relations refer to the positioning of the constituent features: first-order relations describe the overall arrangement of facial features, such as the eyes being above the mouth, and second-order relations describe the relative distances between those features. Although some authors have argued that holistic and configural processing are largely interchangeable (e.g., Tanaka & Farah, 1993), Maurer et al. (2002) provide considerable evidence that this is not the case. More importantly, they argue that the delay of the N170 reflects disruption of the first-order relations—that is, the type of information that results in faces being recognized as faces. 
In terms of recognition, inversion does not typically disrupt classification of an object as a face but does impair the individuation of a particular face. According to the model of Rosch, Mervis, Gray, Johnson, and Boyes-Braem (1976), face individuation would reflect subordinate-level identification, while classification of a stimulus as a face would require assignment of a basic-level label. As most objects are identified at the basic level (e.g., dog), rather than at a subordinate level (e.g., collie), some authors have argued that recognition deficits associated with face inversion reflect orientation-dependent subordinate-level identification (e.g., Tanaka, 2001). According to this view, expertise with a stimulus category also results in subordinate-level classification and may thus account for deficits in recognition following inversion. 
This may certainly be the case, as the effects of orientation have been shown to depend on the level of recognition—superordinate, basic, or subordinate—required for task performance. Namely, in some cases identification of common objects can also be affected by misorientation. For example, when observers are required to simply identify rotated alphanumeric characters (Corballis, Zbrodoff, Shetzer, & Butler, 1978; White, 1980) or naturalistic objects (Jolicoeur, 1985), their reaction times (RTs) do show some variation with angular departure from the upright. Hamm and McMullen (1998) examined whether effects of orientation on object recognition affect processing before or after access to an internal object representation in the long-term memory store. They hypothesized that if orientation-related recognition delays are related to delays in initial access to the long-term memory store, then recognition at all levels should be affected to a similar degree. Alternatively, if the RT increases are accrued after the long-term memory representation has already been accessed, then identification at the subordinate level would be more affected than basic- or superordinate-level identification. They found that matching pictures of objects with subordinate labels produced larger orientation-dependent effects than matching to basic-level or superordinate-level labels. These results, therefore, support the notion that orientation-related costs follow the access to long-term memory representations. 
However, unlike the effects of inversion on face recognition, which persist over time, the effect of stimulus misorientation on naming times is relatively transient and diminishes with practice (Jolicoeur, 1985; Jolicoeur, Snow, & Murray, 1987). For example, Jolicoeur et al. (1987) asked the participants to name misoriented and upright alphanumeric characters. Each character was presented four times at each orientation in the course of the experiment in two font types, one familiar and another less familiar. Naming RTs increased linearly from 0° to 120°, but not from 120° to 180°, conforming to an M-shape as a function of angular departure from upright. Jolicoeur et al. found that 1) the effects of orientation diminished in the second half of the experiment and 2) the increase in RTs with misorientation was greater for the less familiar font, suggesting that exposure to the stimuli within the experiment and outside of the laboratory setting both had an effect on recognition deficits due to stimulus misorientation. 
Nevertheless, it is not entirely clear that the RT costs associated with character misorientation are due to a need for additional processing required to access a subordinate-level label. Wong and Gauthier (2007) have demonstrated that labeling characters as letters or digits is a superordinate-level task, labeling them as specific letters, ‘G’ for example, is a basic-level task, and deciding whether a character is handwritten or, say, printed in Arial font is a subordinate-level task. Despite this inconsistency, identifying rotated alphanumeric characters at the basic level initially results in RT increases as a function of character orientation. 
The reduction of orientation-dependent RT costs with practice has been attributed to a shift in recognition strategy. Jolicoeur (1990) argued that increased familiarity with a stimulus set would allow for more efficient recognition based on featural information. In contrast, the initial presentations of novel stimuli would depend on matching to an internal, orientation-specific, template. If the stimulus is presented at an orientation other than the canonical one, it would, thus, require normalization to a standard template. 
Although there is debate in the literature as to the cognitive mechanisms underlying the ‘M’-shaped function associated with object recognition (see Jolicoeur, 1990), a possibility remains that the M-shaped function may arise at the “input” level, prior to access to object information. For example, Lawson and Jolicoeur (2003) showed that the same function also characterized duration thresholds for line drawings briefly presented at varying angular departures from upright. Similar effects of stimulus duration on accuracy rates were observed for identification of alphanumeric characters (Jolicoeur & Landau, 1984). 
We have mentioned earlier that classification into a superordinate category is typically not sensitive to stimulus orientation. For example, Corballis and Nagourney (1978) showed that when participants were required to classify alphanumeric characters as letters or digits their RTs did not systematically depend on orientation of the stimuli. If the orientation sensitivity arises at the input level, then one may expect that any task that requires object recognition would elicit orientation-dependent neural processing that precedes the recognition of the object. Further, these effects should be elicited irrespective of whether the recognition itself is affected by changes in orientation. Thus, superordinate classification tasks should also elicit orientation-dependent neural processing if such processing critically depends on stimulus input and is independent of correction for misorientation. Therefore, even if there is no evidence for behavioral deficits in recognition, such as those observed in identification tasks, orientation-related changes in neural processing should be observed. 
If the delay and amplification of the N170 are associated with changes in first-order relations, which are necessary for basic-level categorization of faces as faces, then similar effects may also be observed with other objects that have a canonical orientation. Evidence suggests that inversion of cars, words, houses, chairs, and natural scenes all results in N170 modulation (Itier, Latinus, & Taylor, 2006; Rebai et al., 2001; Rossion et al., 2003; Rousselet, Macé, & Fabre-Thorpe, 2004) but not, for example, cups (Rebai et al., 2001) or triangles (Ito, Sugata, & Kuwabara, 1997). It is therefore possible that visual expertise with objects at a particular orientation results in plastic changes in the visual system that favor a view-dependent configuration of their constituent parts (see Perrett, Oram, & Ashbridge, 1998). 
Although not representative of everyday three-dimensional objects, alphanumeric characters have a single canonical upright, which makes them ideal for the study of any neural processes that might be affected by angular orientation. Further, literate individuals are highly skilled at identifying alphanumeric characters and encounter them often. Visual expertise with letters and digits is likely to result in plastic changes within the visual system in the form of recruitment of certain areas within the brain to process them (James, James, Jobard, Wong, & Gauthier, 2005). A number of studies have shown that alphanumeric characters elicit activation in specific subregions of the ventral stream (Garrett et al., 2000; James & Gauthier, 2006; James et al., 2005; Joseph, Cerullo, Farley, Steinmetz, & Mier, 2006; Pernet, Celsis, & Démonet, 2005; Polk & Farah, 1998). 
Furthermore, the N1, a VEP component with temporal and spatial characteristics comparable to those of the N170, shows response specificity to letters. Wong, Gauthier, Woroch, De Buse, and Curran (2005) demonstrated that alphanumeric characters elicit, over the left hemisphere, an increased amplitude of the N1 component in comparison to either Chinese characters or visually similar letter-like symbols. The sensitivity of the N1 to characters seems to develop with expertise, as individuals who were proficient Chinese readers also showed increased N1 amplitudes to Chinese characters, comparable to those evoked by letters (Wong et al., 2005). Further, Gros, Doyon, Rioual, and Celsis (2002) also demonstrated that the N1 is sensitive to letters and that letter-specific effects are left lateralized. They found that the shape ‘O’ shows an N1 response similar to other letters when preceded by letters and a response similar to geometric figures when preceded by geometric figures (Gros et al., 2002). However, unlike the enhancement of the N1 in response to letters compared to visually similar symbols reported by Wong et al. (2005), Gros et al. (2002) found that the N1 evoked by letters is smaller in amplitude than the N1 evoked by geometric figures. Despite the inconsistency in the direction of the effects observed by Wong et al. (2005) and Gros et al. (2002), the N1 appears to be sensitive to alphanumeric characters. It seems reasonable, then, to investigate the effects of orientation on the N1 component in particular. 
In our earlier mental-rotation study in which we used alphanumeric characters (Milivojevic, Johnson, Hamm, & Corballis, 2003), we reported orientation-related modulation of the N1 component around 140 ms, distributed over occipito-temporal leads. This early orientation-related modulation was not associated with mental rotation demands because the effects were better characterized with combined linear and quadratic trends, rather than the linear trend that characterizes mental-rotation-related effects. Instead, the largest N1 amplitudes were evoked by stimuli presented at 60° and 120° orientations, with a dip at 180°. 
The N1, therefore, seems to be sensitive to alphanumeric characters and, more specifically, their orientation. As discussed earlier, VEP studies examining effects of orientation on face processing have focused on the N170, a negative VEP component corresponding in time and scalp topography to the N1. As a parallel, effects of stimulus orientation on the amplitude and the latency of the N170 evoked by faces are well documented (e.g., Rebai et al., 2001; Rossion et al., 2000, 2003). Effects of face inversion on the P1 (e.g., Itier & Taylor, 2002, 2004; Rebai et al., 2001) have also been reported, although less consistently. Comparable effects on the amplitude and latency of VEPs in response to non-face stimuli have not been consistently reported, although some studies have found effects on the amplitude and latency of the N170 in response to inversion of non-face objects (Itier et al., 2006; Rebai et al., 2001; Rousselet et al., 2004). 
In a more recent study, Jacques and Rossion (2007) investigated the effects of face orientation on amplitude of the P1 and the N170 components. They found that the amplitudes and the latencies of both of these components varied as a function of stimulus orientation. The effects of orientation on the amplitude and latency of the P1 were characterized by a quadratic function with increases up to 90°, and then decreases between 90° and 180°. 
Amplitudes and latencies of the N170 also varied as a function of stimulus orientation, but this function may be better described as a combination of linear and quadratic trends—with increases up to 90° and small decreases around the 180° orientation. Jacques and Rossion (2007) also reported hemispheric asymmetries for the N170 effects, with further increases in the amplitude and latency of the N170 between the 90° and 150° orientations on the right, but no such increases on the left. 
Furthermore, both the N1 amplitude and the latency effects were highly correlated with orientation-dependent deficits on a delayed-matching task. Despite the correlations between the latency and amplitude measures, and between both of these measures and behavioral effects, Jacques and Rossion (2007) argued that the effects of orientation on the latency and amplitude of these components do not necessarily reflect the same neural processes. They suggested that the latency effects reflect slower accumulation of neural activity, but that the effects on the amplitude reflect recruitment of additional cortical regions for stimulus processing (Jacques & Rossion, 2007), as suggested by functional neuroimaging studies (e.g., Haxby et al., 1999). 
The aim of this study is to investigate the effects of stimulus orientation on VEPs evoked by alphanumeric characters. As mentioned earlier, letters and digits offer a number of advantages for the study of orientation sensitivity of the basic neural mechanisms involved in visual processing of objects. Firstly, since alphanumeric characters have a single canonical upright, they are ideal for studying the effects of object orientation on neural processing. Secondly, because alphanumeric characters are highly overlearned, there is little effect of orientation on their recognition (see Jolicoeur et al., 1987), so processes involved in recognition itself can be easily distinguished from post-perceptual normalization mechanisms, such as view interpolation or mental rotation. For example, although categorization itself might be independent of orientation (Corballis & Nagourney, 1978), earlier processing leading up to categorization might well be affected by rotation away from the canonical upright. 
We also wished to investigate whether effects of stimulus orientation are affected by top-down task demands for object recognition. Therefore, we wished to compare orientation specificity of P1 and N1 responses under a task that required recognition of characters and another that did not. Previous evidence indicates that decisions based on the color of objects can be accomplished independently of object recognition (Boucart et al., 2000; Pins, Meyer, Foucher, Humphreys, & Boucart, 2004). Because letters and digits form naturally distinct categories, they permit ready comparison between two dichotomous tasks: a categorization task in which participants decide whether each character is a letter or a digit, and a color judgment task in which participants decide whether each character is red or blue. 
Of particular interest for this study is the shape of orientation function associated with object recognition. When RTs are plotted against angular departure from the upright, i.e., from 0° to 180°, the ‘M’-shaped function translates to a combination of linear and quadratic functions, because the function is described as linear up until 120°, then dipping at 180°. A quadratic-only function would have to “peak” at 90°, while a linear-only function would peak at 180°. Therefore, we will examine the effects of orientation with polynomial contrasts, in order to determine whether the M-shaped function can also be used to describe orientation-dependent neural processing. 
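This decomposition into polynomial trends can be made concrete with a small sketch. The snippet below derives orthonormal linear, quadratic, and cubic contrast vectors for the seven orientation levels and applies them to a hypothetical set of condition means shaped like the profile described above; the means are invented for illustration and do not come from this study.

```python
import numpy as np

# Orthogonal polynomial (trend) contrasts for the seven orientation
# levels (0° to 180° in 30° steps). A QR decomposition of the Vandermonde
# matrix yields orthonormal linear, quadratic, and cubic contrast vectors.
orientations = np.arange(0, 181, 30, dtype=float)
X = np.vander(orientations, N=4, increasing=True)
Q, _ = np.linalg.qr(X)
contrasts = {"linear": Q[:, 1], "quadratic": Q[:, 2], "cubic": Q[:, 3]}

# Hypothetical condition means: a roughly linear rise up to 120°-150°
# with a dip at 180° (the M-shaped profile once mirrored over 360°).
means = np.array([1.0, 1.3, 1.6, 1.9, 2.0, 2.0, 1.7])

# Because the contrasts are orthonormal, the squared contrast score is the
# sum of squares each trend accounts for.
total_ss = np.sum((means - means.mean()) ** 2)
pct = {name: 100 * np.dot(c, means) ** 2 / total_ss
       for name, c in contrasts.items()}
for name, p in pct.items():
    print(f"{name}: {p:.1f}% of between-condition variance")
```

With a profile of this shape, the linear and quadratic contrasts jointly absorb nearly all of the between-condition variance, which is how an M-shaped function shows up in the trend analyses reported below.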
Fifteen neurologically normal volunteers (7 women) with normal or corrected-to-normal vision participated in this study. All were right-handed (LQ range: 60–100, mean 85.03), as determined by Edinburgh Handedness Inventory (Oldfield, 1971). They ranged in age from 21 years to 35 years, with a mean of 26.2 years. The procedures were approved by the University of Auckland Human Subjects Ethics Committee, and all participants gave their informed consent to participate in the experiment. 
Visual displays
The stimuli consisted of eight uppercase alphanumeric characters—four letters (R, F, L, and P) and four digits (2, 4, 5, and 7)—presented in red and in blue 72-point Arial font on a white background. At upright, the characters subtended a vertical visual angle of 2° and a horizontal visual angle of 1.45°, on average, although small differences in the horizontal visual angle were present due to the shape of the characters (range: 1.3–1.8°). Each character was presented at twelve clockwise angular departures ranging from 0° to 330°, in 30° increments. Twice as many stimuli were presented for the upright and inverted (180°) orientations, compared to those at the other ten orientations, since clockwise and counterclockwise rotations were treated as equivalent, thus resulting in seven orientation conditions. 
Manipulation of visual displays was performed using Microsoft Office Picture Editor. Stimuli were displayed on an SVGA computer monitor (1024 × 768 pixel resolution; 60-Hz refresh rate) from a distance of 57 cm. Stimulus presentation was controlled using E-Prime v1.1.4.1 (Psychology Software Tools, Pittsburgh, Pennsylvania, USA). TTL pulses generated via the parallel port of the display computer provided synchronization of stimulus events with EEG acquisition. Millisecond timing routines for the visual displays and pulse generation were conducted as outlined in the E-Prime User's Guide (Schneider, Eschman, & Zuccolotto, 2002). 
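The relation between on-screen size and visual angle at this 57-cm viewing distance can be sketched as follows; the 2-cm character height is a hypothetical value chosen only to match the roughly 2° vertical extent reported above.

```python
import math

# Visual angle subtended by a stimulus of a given size at a given
# viewing distance; at 57 cm, 1 cm on screen subtends roughly 1°.
def visual_angle_deg(size_cm, distance_cm=57.0):
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

print(round(visual_angle_deg(2.0), 2))   # a 2-cm character subtends ~2.01°
```

The 57-cm distance is a common convention in vision research precisely because it makes this conversion nearly one-to-one.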
The participants performed two tasks, a red–blue discrimination task (color judgment) and a letter–digit discrimination task (category judgment). In the color-judgment task, the participants were required to press the left mouse button if they judged the character to be blue and the right mouse button if they judged it to be red. In the category-judgment task, the participants were required to press the left mouse button if they judged the character to be a letter, and the right mouse button if they judged the character to be a digit. In both cases they were instructed to respond as quickly as possible without sacrificing accuracy. 
Participants performed two practice blocks, one for each task, immediately prior to the experiment. Each practice block consisted of 20 randomly selected trials. Over the course of the experiment, 448 trials were presented for each task—corresponding to 64 trials per condition. The experiment was conducted over four blocks of trials, and task order was randomized across the blocks. Participants always responded with their dominant (right) hand. Within each block, each stimulus was presented twice at each orientation, and the order of stimulus presentations was randomized. Stimuli were presented centrally for up to 4 s or until a response was detected, whichever was sooner. The inter-trial interval varied between 733 and 1566 ms and participants were instructed to keep fixation on a small ‘+’ presented on the screen during that interval. Participants were also instructed to keep looking at the stimulus, avoid eye movements, and withhold blinking until after the response was made. 
EEG recording and analysis
Electrical Geodesics 128-channel Ag/AgCl electrode nets (Tucker, 1993) were used. EEG was recorded continuously (1000-Hz sampling rate; 0.1–100 Hz analog band-pass) with Electrical Geodesics amplifiers (200-MΩ input impedance) and acquisition software running on a Macintosh G4 computer with a 16-bit analog-to-digital conversion card. Electrode impedances were below 50 kΩ (range 30–50 kΩ), an acceptable level for this system (Ferree, Luu, Russell, & Tucker, 2001). EEG was acquired using a common vertex (Cz) reference. 
Following data collection, the EEG files were segmented with respect to event triggers in 800-ms epochs including a 200-ms pre-stimulus baseline and 600-ms post-stimulus epoch. Only the trials on which the participants responded correctly were included in the analyses. Eye-movement correction was made on all segments using the method of Jervis, Nichols, Allen, Hudson, and Johnson (1985). The corrected data from each subject were then averaged to produce a total of 14 ERPs (two tasks and seven orientation conditions). DC offsets were calculated from the pre-stimulus baseline and removed from all waveforms. The individual waveforms were digitally filtered with a band-pass filter for 0.01–30 Hz range using a bi-directional 3 Pole Butterworth filter (Alarcon, Guy, & Binnie, 2000). Averaged and filtered ERPs were re-referenced to the average reference off-line. 
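The offline filtering step can be sketched as follows. This is a minimal sketch using SciPy rather than the authors' analysis software: the second-order-sections form is an implementation choice for numerical stability at the very low 0.01-Hz cutoff, and the random epoch stands in for real averaged data.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0  # sampling rate (Hz)

# 3-pole Butterworth band-pass, 0.01-30 Hz. Applying it forward and then
# backward (bi-directional filtering) makes the net filter zero-phase.
sos = butter(N=3, Wn=[0.01, 30.0], btype="bandpass", output="sos", fs=fs)

rng = np.random.default_rng(0)
epoch = rng.standard_normal(800)       # one 800-ms epoch (placeholder data)
filtered = sosfiltfilt(sos, epoch)     # forward-backward, zero-phase
```

Zero-phase filtering matters here because a one-directional filter would shift component latencies, which are themselves dependent measures in this study.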
To improve signal-to-noise ratio for further analysis, ERPs were averaged over seven occipito-temporal electrodes over each hemisphere, extending between O1 and T5 on the left and O2 and T6 on the right. Figure 1 shows the relationship between electrodes of interest and topographic distribution of P1 and N1. For each of these ERPs, peak latency and amplitude were estimated at the two electrode clusters of interest for each participant individually. The peak latency for the P1 and the N1 components was estimated as the latency at which the components reached their maximal amplitude. The component amplitude was estimated as average amplitude over the time period corresponding to the full-width-at-half-maximum amplitude. 
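The peak-and-FWHM measurement described above can be sketched as follows. The search window, the 1-ms sampling grid, and the negative-going convention for the N1 are assumptions for illustration, and the Gaussian waveform is synthetic.

```python
import numpy as np

def measure_component(waveform, times, t_min, t_max, negative=True):
    """Peak latency and FWHM-averaged amplitude of one ERP component."""
    w = -waveform if negative else waveform       # flip so the peak is a maximum
    win = np.where((times >= t_min) & (times <= t_max))[0]
    peak = win[np.argmax(w[win])]                 # sample of maximal amplitude
    half = w[peak] / 2.0
    lo, hi = peak, peak
    while lo > 0 and w[lo - 1] >= half:           # walk left to half-maximum
        lo -= 1
    while hi < len(w) - 1 and w[hi + 1] >= half:  # walk right to half-maximum
        hi += 1
    # Mean amplitude over the full-width-at-half-maximum interval,
    # reported in the original polarity.
    return times[peak], waveform[lo:hi + 1].mean()

# Synthetic negative-going component peaking at 170 ms.
times = np.arange(0.0, 600.0)                     # 1-ms samples
wave = -np.exp(-((times - 170.0) ** 2) / (2 * 20.0 ** 2))
latency, amplitude = measure_component(wave, times, 100.0, 250.0)
```

Averaging over the FWHM window rather than taking the single peak sample makes the amplitude estimate less sensitive to residual high-frequency noise at the peak.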
Figure 1
Topographic distribution of the P1 and N1 with occipito-temporal electrodes of interest highlighted in gray—O1, O2, T5, and T6 are labeled. Cz is marked for reference purposes.
Behavioral results
Mean RTs for correct responses, accuracy (percent correct), and inverse efficiency (RT divided by the proportion of correct trials; see Jacques & Rossion, 2007) were each analyzed in a 2 × 7 repeated-measures ANOVA with task and orientation as factors and are plotted in Figure 2. Sphericity violations arising from repeated measures were corrected for using the reduced dfs as determined by the Huynh–Feldt ɛ value (Huynh & Feldt, 1976). 
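The inverse-efficiency measure is just this ratio; a minimal illustration with hypothetical RT and accuracy values:

```python
# Inverse efficiency combines speed and accuracy: mean RT divided by the
# proportion of correct trials (larger values = less efficient performance).
def inverse_efficiency(mean_rt_ms, proportion_correct):
    return mean_rt_ms / proportion_correct

print(inverse_efficiency(540.0, 0.96))   # 540 ms at 96% correct -> 562.5
```

Dividing by accuracy penalizes conditions where fast responses were bought at the cost of errors, so speed and accuracy trade-offs do not masquerade as efficiency differences.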
Figure 2
Reaction times (RT), accuracy (%), and inverse efficiency (RT divided by proportion correct) as a function of task and orientation. Error bars represent SE of the mean.
In terms of accuracy, the main effects of task ( F(1, 14) = 3.33, p = 0.089) and orientation ( F(6, 84) < 1, ɛ = 0.928) were not significant, and the task-by-orientation interaction only approached significance ( F(6, 84) = 2.21, p = 0.05, ɛ = 1). 
With respect to RTs, the category task (538.41 ms, SE = 25.56) elicited significantly longer RTs than the color task (455.47 ms, SE = 20.95; F(1, 14) = 57.48, p < 0.001). The main effect of orientation was not significant ( F(6, 84) = 2.63, p = 0.061, ɛ = 0.515), although there was a significant task-by-orientation interaction ( F(6, 84) = 3.15, p = 0.010, ɛ = 0.910). The interaction was further investigated by examining the effects of orientation for each task separately. The effect of orientation was not significant for the color task ( F(6, 84) = 1.00, p = 0.423, ɛ = 0.846) but was significant for the category task ( F(6, 84) = 3.76, p = 0.017, ɛ = 0.508). The effect of orientation was best characterized as a linear increase in RT with angular departure from upright ( F(1, 14) = 14.78, p = 0.002; 88.26% of variance explained), and pair-wise comparisons indicated that significantly longer RTs were elicited by stimuli presented at the 150° (554.03 ms, SE = 28.82) and 180° (549.62 ms, SE = 28.08) orientations than at the 0° orientation (525.43 ms, SE = 26.43; p < 0.05, with Bonferroni correction), by 28.59 ms and 24.18 ms, respectively. 
In terms of inverse efficiency, the main effect of task was significant ( F(1, 14) = 62.61, p < 0.001), with a larger inverse-efficiency measure for the category task than the color task. A significant main effect of orientation ( F(6, 84) = 2.60, p = 0.039, ɛ = 0.744) was also observed, as was the interaction between the two factors ( F(6, 84) = 4.99, p = 0.001, ɛ = 0.833). The tasks differed at each level of orientation. For the category task, inverse efficiency at the 120° (571.97 ms, SE = 25.14, p = 0.047 with Bonferroni correction) and 150° (582.54 ms, SE = 27.83, p = 0.029 with Bonferroni correction) orientations differed from upright (547.16 ms, SE = 26.72). The difference between upright and inverted (577.81 ms, SE = 29.365) also approached significance ( p = 0.068, with Bonferroni correction). No differences between orientations were observed for the color task. 
Simple effects of orientation for each task were then examined. Effects of orientation were significant for the category task ( F(6, 84) = 5.09, p < 0.001, ɛ = 0.917) and consisted mainly of a significant linear trend ( F(1, 14) = 22.39, p < 0.001), which accounted for 92.54% of the variance. No significant effect of orientation on inverse efficiency was observed for the color task ( F(6, 84) < 1). 
EEG results
Figure 3A illustrates ERPs over the left and the right electrode clusters for category and color tasks at 0°, 90°, and 180° orientations only, as well as average latency with error bars representing standard errors. Figures 3B and 3C also illustrate the latency and amplitude, respectively, of the P1 and N1 components as a function of task, orientation, and hemisphere. These three factors were used in a 2 × 7 × 2 repeated-measures ANOVA as within-subjects factors and Huynh–Feldt ɛ correction was used for sphericity violations (Huynh & Feldt, 1976). 
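The Huynh–Feldt correction applied throughout these analyses acts only on the degrees of freedom: both df terms are scaled by ɛ before the p value is evaluated. A minimal illustration, using the ɛ = 0.515 from the RT analysis purely as an example value:

```python
# Sphericity correction: multiply both ANOVA degrees of freedom by the
# Huynh-Feldt epsilon; the F statistic itself is unchanged.
def huynh_feldt_dfs(df_effect, df_error, epsilon):
    return df_effect * epsilon, df_error * epsilon

df1, df2 = huynh_feldt_dfs(6, 84, 0.515)   # reduced dfs used for the p value
```

Because the reduced dfs make the F distribution's tail heavier, the corrected test is more conservative whenever ɛ < 1.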
Figure 3
(A) ERPs as a function of task, hemisphere, and orientation with mean N1 latency (error bars represent standard errors). (B) P1 and N1 latency as a function of orientation, hemisphere, and task. (C) P1 and N1 amplitudes as a function of orientation, hemisphere, and task.
To assess whether there is a relationship between the behavioral measures (RTs, accuracy, and inverse efficacy) and the ERP measures, Pearson's correlations were also computed, separately for each task, for the left and the right electrode clusters. 
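Correlations of this kind are reported with n − 2 degrees of freedom, so the r(7) values below imply nine subjects per correlation. A sketch with SciPy; the per-subject values here are entirely hypothetical:

```python
from scipy.stats import pearsonr

# Hypothetical per-subject values: inverse efficacy (ms) for the category
# task, and N1 peak latency (ms) over the right electrode cluster.
inverse_eff = [547, 560, 572, 583, 551, 590, 566, 578, 558]
n1_latency = [148, 152, 157, 161, 150, 166, 153, 160, 151]

# pearsonr returns the correlation coefficient and its two-tailed p-value;
# with 9 subjects this corresponds to r(7) in the text.
r, p = pearsonr(inverse_eff, n1_latency)
```

For these made-up values the correlation is strongly positive; the real data, of course, determine the values actually reported.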
Amplitude effects
Effects of orientation, task, and hemisphere were examined for the amplitudes of the P1 and N1 components. For the P1 component, only the main effect of hemisphere was significant ( F(1, 14) = 4.74, p = 0.047), reflecting larger P1 amplitudes on the right. 
No significant relationship was observed between the amplitude of the P1 and any of the behavioral measures for either task (∣r(7)∣ < 0.612, p ≥ 0.144). 
For the N1 component, the main effect of task was significant (F(1, 14) = 6.55, p = 0.023) and was characterized by larger N1 amplitudes in response to the color-judgment task. The main effect of orientation was also significant (F(6, 84) = 7.14, p = 0.001, ɛ = 0.449) and was predominantly characterized by significant quadratic (F(1, 14) = 11.39, p = 0.005, 76.79% of variance explained) and linear (F(1, 14) = 5.75, p = 0.031, 13.97% of variance explained) trends, although the cubic trend was also significant (F(1, 14) = 7.36, p = 0.017, 3.80% of variance explained). 
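The trend decomposition used throughout these analyses, partitioning an effect over equally spaced orientations into linear, quadratic, and higher-order components with a share of variance for each, rests on orthogonal polynomial contrasts. A sketch of that idea (not the authors' code) using NumPy:

```python
import numpy as np

def trend_components(cond_means):
    """Decompose condition means over k equally spaced levels into
    orthogonal polynomial trends (linear, quadratic, cubic, ...) and
    return the share of between-condition variance each trend explains."""
    y = np.asarray(cond_means, dtype=float)
    k = len(y)
    levels = np.arange(k, dtype=float)  # equal spacing assumed
    # Orthonormal polynomial contrasts: QR-decompose [1, x, x^2, ...]
    # and drop the constant column.
    X = np.vander(levels, k, increasing=True)
    Q, _ = np.linalg.qr(X)
    contrasts = Q[:, 1:]
    ss = (contrasts.T @ y) ** 2   # sum of squares captured by each trend
    return ss / ss.sum()          # proportion of variance per trend

# A perfectly U-shaped set of seven means loads (almost) entirely on the
# quadratic trend; props[0] is the linear share, props[1] the quadratic.
props = trend_components([9, 4, 1, 0, 1, 4, 9])
```

Reported figures such as "76.79% of variance explained" are proportions of exactly this kind, taken over the between-orientation sum of squares.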
No significant relationship was observed between the amplitude of the N1 and any of the behavioral measures for either task, although the relationship between inverse efficacy for the category task and N1 amplitude on the left approached significance ( r(7) = 0.695, p = 0.083). 
Latency effects
Task (F(1, 14) = 1.77, p = 0.205), orientation (F(6, 84) < 1, ɛ = 0.671), hemisphere (F(1, 14) < 1), and all of their interactions failed to significantly affect the latency of the P1. 
For the category task, a significant correlation was observed between P1 latency on the right and reaction times (r(7) = 0.766, p = 0.045), and the correlation between P1 latency on the right and inverse efficacy approached significance (r(7) = 0.719, p = 0.069). No other correlations were significant (∣r(7)∣ < 0.563, p ≥ 0.215). 
For the N1 component, a significant main effect of orientation (F(6, 84) = 8.53, p < 0.001, ɛ = 0.419) and a significant orientation-by-hemisphere interaction (F(6, 84) = 5.18, p = 0.005, ɛ = 0.470) were observed. Significant effects of orientation were observed over both the left (F(6, 84) = 3.33, p = 0.025, ɛ = 0.532) and the right hemisphere (F(6, 84) = 9.37, p = 0.002, ɛ = 0.271). The orientation effects over the left were characterized by a significant quadratic trend (F(1, 14) = 5.69, p = 0.032), which explained 84.69% of variance. On the right, the effects of orientation were best characterized by significant linear (F(1, 14) = 10.03, p = 0.007) and quadratic (F(1, 14) = 22.27, p < 0.001) trends, which explained 58.26% and 36.16% of variance, respectively. No significant differences were observed between the hemispheres at any of the orientations, although the right hemisphere was 6.7 ms faster than the left when characters were upright (p = 0.06, with Bonferroni correction) and the left hemisphere was 6.3 ms faster than the right when the characters were inverted (p = 0.2, with Bonferroni correction); the difference between the hemispheres changed gradually at intermediate orientations. 
A significant positive correlation was observed between inverse efficacy and N1 latency on the right for the category task (r(7) = 0.784, p = 0.037). Correlations between N1 latency on the right for the category task and accuracy (r(7) = −0.718, p = 0.053), as well as reaction times (r(7) = 0.675, p = 0.096), also approached significance. No other correlations were significant (∣r(7)∣ < 0.594, p ≥ 0.160). 
Discussion
This study was designed to elucidate the effects of orientation on ERP correlates of early visual processing of alphanumeric characters. The amplitudes and latencies of the P1 and N1 visual components were examined during a letter–digit category-judgment task, which required object recognition, and a red–blue color-judgment task, which did not. The stimuli were presented at seven orientations ranging from upright to inverted in 30° increments. 
We did not expect to observe effects of orientation on behavioral measures for either task for two reasons. Firstly, we used a categorization task, rather than an identification task. As reaction times for object recognition at superordinate level typically do not vary systematically as a function of stimulus orientation (Corballis & Nagourney, 1978; Hamm & McMullen, 1998), we anticipated that the behavioral measures used in this study would also show little dependence on orientation. 
Secondly, even in identification or subordinate-level recognition tasks, repeated exposure to a stimulus set within an experiment reduces the orientation dependence of the RT function. Jolicoeur et al. (1987), for example, reported that even a second presentation of an identical stimulus at a given orientation reduces the effect of orientation. One might therefore expect that exposure to 64 trials per condition would produce a practice-dependent reduction in orientation effects, even if the categorization task elicited orientation-dependent recognition in the first place. 
Surprisingly, reliable effects of stimulus orientation were observed on both reaction times and inverse efficacy measures for the category task. No such effects were observed for the color task, as may be expected given that this task did not require object recognition. 
No reliable relationship was observed between the behavioral measures (reaction times, accuracy, or inverse efficacy) and ERP amplitudes. However, significant correlations were observed for the category task between reaction times and P1 latency on the right, and between inverse efficacy and N1 latency, also on the right. No such effects were observed for the color task. The results therefore suggest that behavioral performance on a task that required object recognition is related to the effects of orientation on neural processing as assessed with VEPs. However, comparable VEP effects were also observed in the task that did not require object recognition, indicating that these effects are stimulus driven and may reflect impaired object processing with changes in orientation. 
Although there was a significant relationship between reaction times and P1 latency, effects of orientation were observed only on the latency and amplitude of the N1 component. The results indicate that the N1 shifts in time with stimulus misorientation and that its amplitude increases in response to misoriented stimuli. This latency shift cannot be attributed to delays of the P1 component, indicating that it most likely reflects delayed recruitment of the neural generators underlying the N1 rather than delays at preceding processing stages. 
For upright characters, the N1 latency was shorter over the right than over the left hemisphere, suggestive of right-hemispheric dominance for the processing of alphanumeric characters, in contrast to the findings of most neuroimaging studies to date (Garrett et al., 2000; Gros, Boulanouar, Viallard, Cassol, & Celsis, 2001; James & Gauthier, 2006; James et al., 2005; Joseph et al., 2006; Pernet et al., 2005). The N1 response is unlikely to reflect fusiform gyrus activation, which is typically left lateralized, and may instead reflect activation of lateral occipital regions that do not show lateralization for alphanumeric characters (e.g., Nakamura et al., 2005); nevertheless, sensitivity of N1 amplitudes on the left in response to letters has been consistently reported (Gros et al., 2002; Wong et al., 2005). In this study, however, no reliable hemispheric effects were observed on the amplitude of the N1, an effect that might have been attenuated by rotation in the picture plane. Yet even when we examined only upright stimuli, the N1 amplitude was not significantly greater on the left for either task (p ≥ 0.119). It is possible that hemispheric asymmetries were not found because both letters and digits were presented, and only letters, as language-related stimuli, tend to favor the left hemisphere. Nevertheless, the results show that the N1 occurs earlier over the right hemisphere for upright characters; as the angular departure from upright increased, the latency difference between the hemispheres decreased, and the N1 over the left hemisphere peaked earlier than the N1 over the right when the stimuli were upside down. 
Furthermore, the orientation effects on the latency of the N1 were larger on the right than the left. These results fit well with Burgund and Marsolek's (2000) proposal that the left hemisphere subserves orientation-independent recognition while the right hemisphere subserves orientation-dependent recognition. Burgund and Marsolek (2000) also suggest that the orientation-independent recognition subsystem relies on feature-based local processing, while the orientation-dependent system depends on global, holistic processing. 
If the left hemisphere subserves feature-based recognition mechanisms, while the right subserves holistic processing (Bradshaw & Nettleton, 1981; Burgund & Marsolek, 2000), then these results could be interpreted as an indication that feature-based processing is slower than holistic processing when the stimuli are presented at upright and that holistic processing is delayed as a function of rotation from upright, while feature-based processing is delayed as a function of rotation from vertical. In this experiment, feature-based processing may be more similar between the upright and inverted stimuli because the vertical axis is in alignment for stimuli presented at these two orientations (Hummel & Biederman, 1992). 
Further, this pattern of results may be interpreted as indicating that right-hemisphere function in alphanumeric processing is based on holistic analysis, while left-hemisphere function is based on analysis of constituent features. The difference in the timing of the effects could then be interpreted as indicative of faster holistic processing when the stimuli are upright, but larger deficits in holistic processing with misorientation. In contrast, feature-based processing may be slower at the upright orientation but less affected by changes in orientation. Although this interpretation is speculative, the results do suggest that the latency of the N1 shows a greater degree of orientation sensitivity on the right than on the left. Further research is needed to determine whether recognition of alphanumeric characters can be based on both featural and holistic information. 
As discussed previously, the RT function associated with object recognition typically "dips" at 180°, giving rise to an 'M'-shaped function (Corballis, 1988; Jolicoeur, 1985; Jolicoeur et al., 1987). One account of this dip, offered by Jolicoeur (1990), holds that it reflects the operation of two independent object-recognition systems. One system is orientation dependent and requires a transformation to align the input image with a canonical template. The other depends on orientation-independent descriptions of features and parts. Jolicoeur (1990) argued that, because upright and inverted stimuli share the alignment of the vertical (top–bottom) axis, recognition of individual features or object parts may be easier at 180° than at orientations at which the top–bottom axis is not aligned with the canonical representation. In this case, an orientation-invariant feature-extraction mechanism may yield faster object identification than normalization to the canonical upright. Our results are consistent with the notion that there are two types of orientation-dependent delays in visual processing: one affected by departures from a shared alignment of the top–bottom axis, and the other by departures from upright. However, in contrast to Jolicoeur's (1990) stipulation that the orientation-dependent RT function reflects normalization to upright, our data suggest that the delays accrue at an earlier stage of visual processing. 
Amplitudes of the N1 also increased as a function of stimulus orientation, with a smaller increase between upright and inverted, and larger increases from upright to all other orientations, except for 30°, which elicited an amplitude intermediate between those at 0° and 60°. Thus, irrespective of the relative timing of the effects, stimulus misorientation elicited increases in the amplitude of the N1 component. Increases in N1 amplitude have been related to increased processing difficulty (Gros et al., 2002; Pernet et al., 2003). Further, the correlation between inverse efficacy and N1 latency indicates that the N1 latency relates to delays in object recognition. The effects of orientation on the N1 component therefore reflect an interplay between the relative timing of the N1 and increases in its amplitude, which may reflect increased processing difficulty. 
The effects of orientation were observed for both the category and color tasks, suggesting that these effects are independent of any explicit need for character recognition. It is possible, however, that the participants could not suppress recognition of the highly familiar characters during the color task, so the role of object recognition in the N1 effects might be considered inconclusive. Nevertheless, comparable effects of orientation on the N1 during both color and category tasks are suggestive of stimulus-driven processing. If so, the effects of orientation on the N1 may reflect the functioning of overlearned perceptual pathways rather than recognition itself, and thus precede matching to memory representations. 
Although we suggest that the effects are stimulus driven, it is unlikely that the orientation effects reported here are simply due to physical differences caused by changes in global orientation. Firstly, upright and inverted characters, as well as characters rotated by either 30° or 150°, share the same orientation in terms of the main axis of elongation, and yet elicit different N1 amplitudes and latencies. If the stimuli share their orientation in terms of the axis of elongation, then to determine which part of the stimulus is the top of the object one must first know what the object is; it follows that some level of object recognition has occurred by this processing stage. Furthermore, N1 latency is correlated with inverse efficacy for alphanumeric categorization but not for color judgments, suggesting that the timing of the N1 affects the speed and accuracy of object recognition, a process necessary for category but not for color judgments. 
Perceptual learning may account for these results. Visual familiarity with a stimulus at a particular orientation may result in plastic changes within the visual system, such as strengthening of synapses and recruitment of a larger number of neurons sensitive to a particular view of an object. Perrett et al. (1998) have provided detailed accounts of the orientation specificity of neural responses to faces and bodies. As mentioned earlier, recognition of these types of objects is thought to depend on configural processing (e.g., Bruce et al., 1991; Carey, 1992; Farah et al., 1995; Freire et al., 2000; Jokisch et al., 2005; Reed et al., 2003; Tanaka & Farah, 1993). 
The similarity between the effects of orientation on the amplitude and latency of the N170 evoked by faces (Jacques & Rossion, 2007) and those reported here for the N1 evoked by alphanumeric characters may indicate that visual familiarity with letters and digits results in the use of a configural-processing strategy for alphanumeric characters. This conclusion is supported by findings that expertise results in configural processing of stimuli and that stimulus inversion results in N1 delays comparable to those observed with faces (Busey & Vanderkolk, 2005). 
However, this need not be the case. The work of Logothetis et al. (Logothetis, Pauls, Bülthoff, & Poggio, 1994; Logothetis, Pauls, & Poggio, 1995; Logothetis & Sheinberg, 1996) suggests that similar orientation specificity can be observed even for novel three-dimensional shapes, not only for the faces and bodies reported by Perrett et al. (1998). The majority of object-responsive neurons in the inferior temporal cortex respond preferentially to a particular viewpoint (or orientation) of an object, although some neurons also respond to objects in a viewpoint-invariant manner (Perrett et al., 1998). 
The number of neurons that respond to a particular viewpoint of an object is also dependent on experience with that object at that particular viewpoint. Therefore, if an object is most commonly viewed upright, then more orientation-sensitive cells will respond preferentially to that object when it is upright, compared to other orientations. In the case of alphanumeric characters, the largest number of neurons should be responsive to their canonical representations, which have a well-defined top–bottom and left–right orientation. Perrett et al. (1998) argued that the RT cost associated with object misorientation arises because a smaller neural population would require more time to elicit sufficient neural activity for object recognition. Therefore, it may be reasonable to suppose that the effects should be observed in terms of latency, rather than in terms of amplitude. The response pattern of the orientation sensitivity of the N1 observed over the right hemisphere is consistent with this interpretation. 
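Perrett et al.'s accumulation account can be illustrated with a deliberately simplified toy model in which evidence accrues linearly with the size of the responsive population; the function and its parameter values are illustrative only, not taken from the paper:

```python
def time_to_threshold(n_neurons, rate_per_neuron=1.0, threshold=100.0):
    """Toy linear-accumulation model: with a constant per-neuron firing
    rate, total evidence accrues at n_neurons * rate_per_neuron per unit
    time, so a smaller population takes proportionally longer to reach
    the decision threshold -- a latency cost, not an amplitude change."""
    return threshold / (n_neurons * rate_per_neuron)

# Fewer orientation-tuned neurons respond to a misoriented character, so
# the threshold is crossed later (a longer N1 latency, on this account).
assert time_to_threshold(50) > time_to_threshold(100)
```

The right-hemisphere N1 latency pattern described above is the component of the data this account fits most directly.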
Misorientation of the characters elicited an increase in the amplitude of the N1. This is somewhat paradoxical because, if the above interpretation is correct, one might expect the larger neural population responding to upright characters also to summate to a larger amplitude. Given that an increase in N1 amplitude is commonly observed with face inversion, the present results could similarly indicate that the amplitude increase is related to the increased difficulty of processing misoriented stimuli or to the recruitment of additional cortical regions within the ventral stream (Rossion et al., 1999). The plastic changes in the visual system for frequently encountered objects, such as alphanumeric characters, may enable more efficient processing of the stimulus, which would typically lead to recognition. The increase in N1 amplitude with misorientation may thus reflect a need for additional neural processing during structural encoding of the stimulus, but only relative to the more familiar views. 
This study shows that the N1 evoked by alphanumeric stimuli exhibits orientation sensitivity similar, but not identical, to that of the N170 evoked by faces. This orientation sensitivity does not appear to be directly related to explicit object-recognition demands, as there was no difference between the task that required object recognition and the task that did not. Orientation sensitivity of the N1 does, however, correlate with deficits in object recognition. We argued that perceptual familiarity with alphanumeric characters at their canonical upright results in a larger number of viewpoint-dependent neurons sensitive to the canonical orientation than to other orientations. The delays in the N1 are interpreted as the increased latency required for a smaller neural population to elicit sufficient neural activity for object recognition. 
This research was supported by the University of Auckland Research Grant (Project number 3607199). Author BM was supported by a Top Achievers Doctoral Scholarship administered by Tertiary Education Commission of New Zealand. 
Commercial relationships: none. 
Corresponding author: Branka Milivojevic. 
Address: Department of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow, G12 8QB, Scotland, UK. 
Alarcon, G., Guy, C. N., & Binnie, C. D. (2000). A simple algorithm for a digital three-pole Butterworth filter of arbitrary cut-off frequency: Application to digital electroencephalography. Journal of Neuroscience Methods, 104, 35–44.
Boucart, M., Meyer, M. E., Pins, D., Humphreys, G. W., Scheiber, C., & Gounod, D. (2000). Automatic object identification: An fMRI study. Neuroreport, 11, 2379–2383.
Bradshaw, J. L., & Nettleton, N. C. (1981). The nature of hemispheric specialization in man. Behavioral and Brain Sciences, 4, 51–91.
Bruce, V., Doyle, T., Dench, N., & Burton, M. (1991). Remembering facial configurations. Cognition, 38, 109–144.
Burgund, E. D., & Marsolek, C. J. (2000). Viewpoint-invariant and viewpoint-dependent object recognition in dissociable neural subsystems. Psychonomic Bulletin & Review, 7, 480–489.
Busey, T. A., & Vanderkolk, J. R. (2005). Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Research, 45, 431–448.
Carey, S. (1992). Becoming a face expert. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 335, 95–102.
Corballis, M. C. (1988). Recognition of disoriented shapes. Psychological Review, 95, 115–123.
Corballis, M. C., & Nagourney, B. A. (1978). Latency to categorize disoriented alphanumeric characters as letters or digits. Canadian Journal of Psychology, 23, 186–188.
Corballis, M. C., Zbrodoff, N. J., Shetzer, L. I., & Butler, P. B. (1978). Decisions about identity and orientation of rotated letters and digits. Memory & Cognition, 6, 98–107.
Epstein, R. A., Higgins, J. S., Parker, W., Aguirre, G. K., & Cooperman, S. (2006). Cortical correlates of face and scene inversion: A comparison. Neuropsychologia, 44, 1145–1158.
Farah, M. J., Tanaka, J. W., & Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21, 628–634.
Ferree, T. C., Luu, P., Russell, G. S., & Tucker, D. M. (2001). Scalp electrode impedance, infection risk, and EEG data quality. Clinical Neurophysiology, 112, 536–544.
Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29, 159–170.
Garrett, A. S., Flowers, D. L., Absher, J. R., Fahey, F. H., Gage, H. D., & Keyes, J. W. (2000). Cortical activity related to accuracy of letter recognition. Neuroimage, 11, 111–123.
Gros, H., Boulanouar, K., Viallard, G., Cassol, E., & Celsis, P. (2001). Event-related functional magnetic resonance imaging study of the extrastriate cortex response to a categorically ambiguous stimulus primed by letters and familiar geometric figures. Journal of Cerebral Blood Flow and Metabolism, 21, 1330–1341.
Gros, H., Doyon, B., Rioual, K., & Celsis, P. (2002). Automatic grapheme processing in the left occipitotemporal cortex. Neuroreport, 13, 1021–1024.
Hamm, J. P., & McMullen, P. A. (1998). Effects of orientation on the identification of rotated objects depend on the level of identity. Journal of Experimental Psychology: Human Perception and Performance, 24, 413–426.
Haxby, J. V., Ungerleider, L. G., Clark, V. P., Schouten, J. L., Hoffman, E. A., & Martin, A. (1999). The effect of face inversion on activity in human neural systems for face and object perception. Neuron, 22, 189–199.
Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review, 99, 480–517.
Husk, J. S., Bennett, P. J., & Sekuler, A. B. (2007). Inverting houses and textures: Investigating the characteristics of learned inversion effects. Vision Research, 47, 3350–3359.
Huynh, H., & Feldt, L. S. (1976). Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. Journal of Educational Statistics, 1, 69–82.
Itier, R. J., Latinus, M., & Taylor, M. J. (2006). Face, eye and object early processing: What is the face specificity? Neuroimage, 29, 667–676.
Itier, R. J., & Taylor, M. J. (2002). Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: A repetition study using ERPs. Neuroimage, 15, 353–372.
Itier, R. J., & Taylor, M. J. (2004). Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs. Neuroimage, 21, 1518–1532.
Ito, M., Sugata, T., & Kuwabara, H. (1997). Visual evoked potentials to geometric forms: Effects of spatial orientation. Japanese Psychological Research, 39, 339–344.
Jacques, C., & Rossion, B. (2007). Early electrophysiological responses to multiple face orientations correlate with individual discrimination performance in humans. Neuroimage, 36, 863–876.
James, K. H., & Gauthier, I. (2006). Letter processing automatically recruits a sensory-motor brain network. Neuropsychologia, 44, 2937–2949.
James, K. H., James, T. W., Jobard, G., Wong, A. C., & Gauthier, I. (2005). Letter processing in the visual system: Different activation patterns for single letters and strings. Cognitive, Affective & Behavioral Neuroscience, 5, 452–466.
Jervis, B. W., Nichols, M. J., Allen, E. M., Hudson, N. R., & Johnson, T. E. (1985). The assessment of two methods for removing eye movement artefact from the EEG. Electroencephalography and Clinical Neurophysiology, 61, 444–452.
Jokisch, D., Daum, I., Suchan, B., & Troje, N. F. (2005). Structural encoding and recognition of biological motion: Evidence from event-related potentials and source analysis. Behavioural Brain Research, 157, 195–204.
Jolicoeur, P. (1985). The time to name disoriented objects. Memory & Cognition, 13, 289–303.
Jolicoeur, P. (1990). Identification of disoriented objects: A dual systems theory. Mind & Language, 5, 387–410.
Jolicoeur, P., & Landau, M. J. (1984). Effects of orientation on the identification of simple visual patterns. Canadian Journal of Psychology, 38, 80–93.
Jolicoeur, P., Snow, D., & Murray, J. E. (1987). The time to identify disoriented letters: Effects of practice and font. Canadian Journal of Psychology, 41, 303–316.
Joseph, J. E., Cerullo, M. A., Farley, A. B., Steinmetz, N. A., & Mier, C. R. (2006). fMRI correlates of cortical specialization and generalization for letter processing. Neuroimage, 32, 806–820.
Lawson, R., & Jolicoeur, P. (2003). Recognition thresholds for plane-rotated pictures of familiar objects. Acta Psychologica, 112, 17–41.
Logothetis, N. K., Pauls, J., Bülthoff, H. H., & Poggio, T. (1994). View-dependent object recognition by monkeys. Current Biology, 4, 401–414.
Logothetis, N. K., Pauls, J., & Poggio, T. (1995). Shape representation in the inferior temporal cortex of monkeys. Current Biology, 5, 552–563.
Logothetis, N. K., & Sheinberg, D. L. (1996). Visual object recognition. Annual Review of Neuroscience, 19, 577–621.
Maurer, D., Grand, R. L., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260.
Milivojevic, B., Johnson, B. W., Hamm, J. P., & Corballis, M. C. (2003). Non-identical neural mechanisms for two types of mental transformation: Event-related potentials during mental rotation and mental paper folding. Neuropsychologia, 41, 1345–1356.
Nakamura, K., Oga, T., Okada, T., Sadato, N., Takayama, Y., & Wydell, T. (2005). Hemispheric asymmetry emerges at distinct parts of the occipitotemporal cortex for objects, logograms and phonograms: A functional MRI study. Neuroimage, 28, 521–528.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Pernet, C., Basan, S., Doyon, B., Cardebat, D., Démonet, J. F., & Celsis, P. (2003). Neural timing of visual implicit categorization. Brain Research: Cognitive Brain Research, 17, 327–338.
Pernet, C., Celsis, P., & Démonet, J. F. (2005). Selective response to letter categorization within the left fusiform gyrus. Neuroimage, 28, 738–744.
Perrett, D. I., Oram, M. W., & Ashbridge, E. (1998). Evidence accumulation in cell populations responsive to faces: An account of generalisation of recognition without mental transformations. Cognition, 67, 111–145.
Pins, D., Meyer, M. E., Foucher, J., Humphreys, G., & Boucart, M. (2004). Neural correlates of implicit object identification. Neuropsychologia, 42, 1247–1259.
Polk, T. A., & Farah, M. J. (1998). The neural development and organization of letter recognition: Evidence from functional neuroimaging, computational modeling, and behavioral studies. Proceedings of the National Academy of Sciences of the United States of America, 95, 847–852.
Rebai, M., Poiroux, S., Bernard, C., & Lalonde, R. (2001). Event-related potentials for category-specific information during passive viewing of faces and objects. International Journal of Neuroscience, 106, 209–226.
Reed, C. L., Stone, V. E., Bozova, S., & Tanaka, J. (2003). The body-inversion effect. Psychological Science, 14, 302–308.
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Rossion, B., Campanella, S., Gomez, C. M., Delinte, A., Debatisse, D., & Liard, L. (1999). Task modulation of brain activity related to familiar and unfamiliar face processing: An ERP study. Clinical Neurophysiology, 110, 449–462.
Rossion, B., Gauthier, I., Tarr, M. J., Despland, P., Bruyer, R., & Linotte, S. (2000). The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: An electrophysiological account of face-specific processes in the human brain. Neuroreport, 11, 69–74.
Rossion, B., Joyce, C. A., Cottrell, G. W., & Tarr, M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage, 20, 1609–1624.
Rousselet, G. A., Macé, M. J., & Fabre-Thorpe, M. (2004). Animal and human faces in natural scenes: How specific to human faces is the N170 ERP component? Journal of Vision, 4(1):2, 13–21, doi:10.1167/4.1.2.
Schneider, W., Eschmann, A., & Zuccolotto, A. (2002). E-Prime user's guide. Pittsburgh, PA: Psychology Software Tools.
Tanaka, J. W. (2001). The entry point of face recognition: Evidence for face expertise. Journal of Experimental Psychology: General, 130, 534–543.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 46, 225–245.
Tucker, D. M. (1993). Spatial sampling of head electrical fields: The geodesic sensor net. Electroencephalography and Clinical Neurophysiology, 87, 154–163.
White, M. J. (1980). Naming and categorization of tilted alphanumeric characters do not require mental rotation. Bulletin of the Psychonomic Society, 15, 153–156.
Wong, A. C. N., & Gauthier, I. (2007). An analysis of letter expertise in a levels-of-categorization framework. Visual Cognition, 15, 854–879.
Wong, A. C., Gauthier, I., Woroch, B., DeBuse, C., & Curran, T. (2005). An early electrophysiological response associated with expertise in letter perception. Cognitive, Affective & Behavioral Neuroscience, 5, 306–318.
Yin, R. K. (1969). Looking at upside down faces. Journal of Experimental Psychology, 81, 141–145.
Figure 1
Topographic distribution of the P1 and N1 with occipito-temporal electrodes of interest highlighted in gray—O1, O2, T5, and T6 are labeled. Cz is marked for reference purposes.
Figure 2
Reaction times (RT), accuracy (percent correct), and inverse efficiency (RT divided by proportion correct) as a function of task and orientation. Error bars represent SE of the mean.
Figure 3
(A) ERPs as a function of task, hemisphere, and orientation with mean N1 latency (error bars represent standard errors). (B) P1 and N1 latency as a function of orientation, hemisphere, and task. (C) P1 and N1 amplitudes as a function of orientation, hemisphere, and task.