Article  |   June 2012
Human electrophysiological reflections of the recruitment of perceptual processing during actions that engage memory
Author Affiliations
  • Leanna C. Cruikshank
    Centre for Neuroscience, University of Alberta, 5005-A Katz Group-Rexall Centre, Edmonton, AB, Canada
    leannac@ualberta.ca
  • Jeremy B. Caplan
    Centre for Neuroscience, University of Alberta, 5005-A Katz Group-Rexall Centre, Edmonton, AB, Canada
    Department of Psychology, University of Alberta, P217 Biological Sciences, Edmonton, AB, Canada
    jcaplan@ualberta.ca
  • Anthony Singhal
    Centre for Neuroscience, University of Alberta, 5005-A Katz Group-Rexall Centre, Edmonton, AB, Canada
    Department of Psychology, University of Alberta, P217 Biological Sciences, Edmonton, AB, Canada
    asinghal@ualberta.ca
Journal of Vision, June 2012, Vol. 12(6), 29. doi: https://doi.org/10.1167/12.6.29
Abstract
The N170 event-related potential (ERP) component reflects visual perceptual processes and is known to have a source in the lateral occipital cortex (LOC) and temporal lobe regions. Convergent evidence from neuropsychological and neuroimaging studies suggests that the LOC is recruited for action tasks in which visibility of a target is unavailable and a perceptual memory of the target's characteristics must be used instead. We tested the hypothesis that the N170 reflects the contribution of additional ventral stream processes required for performing actions in which vision of a target is occluded. We predicted that the amplitude of the ERP in the latency range of the N170 would be larger when perceptual mechanisms are engaged to a greater extent. Participants were auditorily cued to touch target dots appearing on a touchscreen. Two viewing conditions varied with respect to the contribution of the ventral visuomotor stream during response initiation. In condition 1, the target disappeared with movement initiation, whereas in condition 2 it disappeared with the cue to respond. The N170 during the response-initiation phase of trials was larger in amplitude for condition 2. The effect was observed over temporal electrode sites bilaterally, likely reflecting an overlap between auditory cue-related processes and additional perceptual processes within regions in the inferior-temporal cortex. Thus, the N170 may be a marker of neural activity within the ventral stream, further supporting the notion that actions initiated in the absence of a visual target rely more on perceptual representations than those directed towards visually available targets.

Introduction
When individuals reach for an object in the environment, the movement characteristics of their arm and hand will often depend on whether the target object is in view or not. For example, when looking at a coffee cup while reaching for it, the real-time visual image of the cup may be used to guide the action. However, turning away from a coffee cup prior to reaching for it precludes the use of immediate visual information, and a perceptual memory of the target characteristics must be used instead to guide the action. Behavioral studies have reliably shown that hand and arm kinematics vary between these action types, with the latter (memory-guided) actions tending to be slower and less accurate (Goodale, Jakobson, & Keillor, 1994). According to the influential perception-action model of Goodale and Milner (1992), visually guided actions are performed under the control of dorsal stream mechanisms in parietal cortex. However, this model also predicts that actions initiated in the absence of a visual target are influenced by mechanisms in the ventral stream, particularly those associated with inferior temporal cortex (Goodale, 1998). The perception-action model further suggests that a shift from dorsal to ventral activation is required when a target is unavailable, in order for previously stored perceptual representations to inform the motor plan (Westwood & Goodale, 2003). Converging neuropsychological, neuroimaging, and kinematic findings support this theory. For example, visual-form agnosia patient D.F., who has bilateral damage to her lateral occipital cortex (LOC) in the ventral stream, is perceptually compromised (James, Culham, Humphrey, Milner, & Goodale, 2003) but can guide her actions appropriately when the target object is in full view, presumably due to her intact dorsal stream (Milner et al., 2001; Goodale, Milner, Jakobson, & Carey, 1991). These authors argue that D.F. is unable to correctly perform actions to disappearing targets because the damage to her ventral stream prevents perception of the object in the first place, and she cannot draw on the necessary perceptual information to act when the object is no longer in full view. Additional evidence for this perspective comes from behavioral studies showing that while visually guided actions are resistant to pictorial illusions, those that require previously stored perceptual representations are not: erroneous perceptual information appears to influence behavior following a delay (Hu & Goodale, 2000; Westwood, Dubrowski, Carnahan, & Roy, 2000; Westwood, McEachern, & Roy, 2001; Ganel, Tanzer, & Goodale, 2008). Furthermore, functional magnetic resonance imaging (fMRI) findings have shown that the LOC is re-activated in the action phase of a reaching and grasping paradigm when actions are performed without vision of a target (Singhal, Kaufman, Valyear, & Culham, 2006), and transcranial magnetic stimulation (TMS) "virtual lesions" of the LOC disrupt grasping when the object is visually unavailable, compared to when it is visible (Cohen, Cross, Tunik, Grafton, & Culham, 2009). Taken together, these studies suggest that when vision of a target is precluded, the motor program must rely on previously stored perceptual information in ventral stream regions.
Despite the compelling nature of the previously described studies, other findings suggest that the dorsal stream is also engaged for actions in which vision of the target is unavailable (Franz, Hesse, & Kollath, 2009; Hesse & Franz, 2009). This position is supported by monkey neurophysiology work showing sustained activity in parietal neurons across a memory delay following the presentation of a visual target, and prior to the initiation of the action toward it without vision (Murata, Gallese, Kaseda, & Sakata, 1996), a finding also supported by human fMRI data (Singhal et al., 2006). One problem in addressing the extent to which different neural mechanisms subserve visually guided and perception-based actions in humans is that it is difficult to directly compare these action types using neuroimaging techniques. The primary reason for this difficulty is that the two types of action require differences in timing; in the case of fMRI with visually guided actions, it would be difficult to distinguish LOC activation associated with target presentation from LOC activation associated with action. A strong advantage of the event-related potential (ERP) technique, however, is that it provides excellent temporal resolution. Curiously, ERP is underrepresented in action-related research, in part because electrophysiological recordings are extremely sensitive to movement-related artifacts in action studies. The present study was possible because it focused on the response initiation phase prior to action execution, and employed strategic windows of analysis and artifact correction methods to minimize the effect of artifacts.
If perceptual processes are required for actions in which vision of the target is unavailable, a likely electrophysiological marker is the N170 ERP component. This component has been linked to perceptual processes in the ventral stream: it is elicited in response to visual stimuli and reflects the early classification of objects (Grill-Spector, Kourtzi, & Kanwisher, 2001; Rossion, Joyce, Cottrell, & Tarr, 2003; Sreenivasan, Katz, & Jha, 2007), which is likely a necessary perceptual component of perception-based actions. Moreover, the N170 likely has various neural generators, including a source in the LOC (Rossion et al., 2003). Studies have also shown that the N170 is modulated by memory processes and becomes more negative in amplitude as memory requirements increase (Bankó, Gal, & Vidnyánszky, 2009; Morgan, Klein, Boehm, Shapiro, & Linden, 2008). This finding presents an additional link between the N170 and actions that cannot be guided by vision, as planning of these actions likely relies on the recall of target features, and presumably engages perceptual memory processes (Goodale & Milner, 1992; Klatzky, Pellegrino, McCloskey, & Lederman, 1993). One study has shown that when perceptual memory is engaged while performing actions to previously viewed targets, dual-task interference tends to be greater than for actions to targets that remain visible, likely due to overlapping task demands or shared resources (Singhal, Culham, Chinellato, & Goodale, 2007). Additionally, it has been demonstrated that the LOC is active not only during initial form perception, but also as the percept of an object endures (Ferber, Humphrey, & Vilis, 2003). This conclusion is consistent with the idea that perceptual information is stored or maintained for brief periods of time before it is recalled during planning and execution. Since evidence suggests that the LOC is differentially activated by the perceptual memory demands associated with particular actions, we asked in the current study whether the N170 would reflect these differences. That is, would the amplitude of the N170 be larger in situations requiring more of a contribution from perceptual processes?
In short, we directly compared perception-based neural activity during the response initiation phase of two types of reaching and pointing trials, which were designed to differentially manipulate the contribution of visual memory to the behavioral task. Because of the different visual memory demands, the contribution of the ventral stream likely also differed as a result of the manipulation. Moreover, since the N170 waveform likely reflects neural generators in ventral stream areas that include the LOC, we tested the hypothesis that this waveform reflects motor planning processes for actions that are reliant on the ventral stream. We manipulated the visibility of a target during the response initiation phase of an instructed delayed reaching paradigm. The target was either present (condition 1) or absent (condition 2) during the response initiation phase, thereby altering the contribution of visual memory. Thus, condition 2 was predicted to recruit ventral stream resources more heavily than condition 1 during response initiation. We collected ERP data that were time-locked to the auditory cue, which signaled the participants to initiate a response. Our primary research question was as follows: Does the ERP, in response to the auditory cue for action, reflect a greater contribution from perceptual mechanisms in the ventral stream if the action is initiated towards a target that is no longer visible? If it does, then we would expect to observe an enhancement in the amplitude of the ERP in the post-stimulus range of the N170. We ensured that the auditory cue was identical for both conditions, and thus we could attribute any differences in the ERP to the additional processing associated with the task requirements of each condition. If the amplitude of the ERP in the N170 latency range reflects differences between conditions, it may be that the observed effect is sensorimotor in nature, reflecting more than just sensory processing.
Experiment 1: methods
Participants
Twenty-seven (20 female, 7 male) right-handed undergraduate students aged 18–25 (mean 21, SD = 1.86) received payment for participating in this study. One participant's data were excluded from analyses due to persistent EMG contamination. All participants had normal or corrected-to-normal vision and normal hearing. Written informed consent was obtained prior to the experiment in accordance with the University of Alberta's ethical review board and the Declaration of Helsinki.
Procedure
The study, which includes the same data as reported by Cruikshank, Singhal, Hueppelsheuser, and Caplan (2012), was conducted in a darkened, electrically shielded, and sound-attenuated chamber. At the start of the experiment, participants were seated in front of a 430.4 mm × 270.3 mm touchscreen that was rotated 90° so that the vertical angle was optimized for height. Participants positioned themselves so that their right arm could extend comfortably at a 45° angle to reach the top of the touchscreen. The distance from the participant's nasion electrode to the monitor was measured and recorded; these distances ranged from 35.56 to 51.82 cm (mean distance 40.39 cm from the screen). At the beginning of each session, the touchscreen was re-calibrated by the participant being tested to ensure that accuracy measures remained reliable across subjects. Based on the average distance from the screen, the vertical and horizontal visual angles of the touchscreen were 33.78° and 46.82°, respectively. The vertical and horizontal visual angles of the stimuli were 1.98° and 1.13°, respectively.
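For readers who wish to check the reported visual angles, the relevant quantity is the standard visual-angle formula, angle = 2·arctan(size / (2·distance)). The short Python sketch below is ours (the original experiment used E-Prime and MATLAB, not this code); it applies the formula to the 14 mm stimulus height at the mean 40.39 cm viewing distance, which yields approximately 1.99°, in line with the 1.98° reported above.

```python
import math

def visual_angle_deg(size_mm: float, distance_mm: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size
    at a given viewing distance: 2 * arctan(size / (2 * distance))."""
    return math.degrees(2.0 * math.atan(size_mm / (2.0 * distance_mm)))

mean_distance_mm = 403.9  # mean nasion-to-screen distance reported above (40.39 cm)

# 14-mm stimulus height -> ~1.99 deg, consistent with the reported 1.98 deg.
print(round(visual_angle_deg(14.0, mean_distance_mm), 2))
```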
Our task required that participants reach towards and touch 9 mm × 14 mm black dots displayed on a touchscreen using E-Prime presentation software version 1.2 (Psychology Software Tools). The participant depressed a button to begin a trial. One second after the button was depressed, a target appeared on the screen in a random location, which the participant was told to fixate. An 800 Hz, 64 dB (SPL) tone sounded for 50 ms, 1–3 s after the target appeared. The participant was instructed to continue holding down the button until he/she heard the tone, and then to touch the target as quickly and accurately as possible. In condition 1, the target disappeared as soon as the button was released (i.e., with movement onset); if the participant had not initiated a movement within 1 s, the target disappeared. In condition 2, the target disappeared simultaneously with the onset of the tone (Figure 1). After participants made contact with the screen, they were to return their finger to the response box and hold down the button, which initiated the next trial after 1 s. Prior to testing, 4 practice trials were administered to ensure that participants understood the task. Condition 1 and condition 2 trials were presented pseudo-randomly, with the restriction that a particular condition did not occur more than 5 times consecutively. A total of 360 test trials (180 per condition) were included in a session, and participants were given a break period of a self-determined length every 120 trials.
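To summarize the trial structure in code form, the following sketch lays out the nominal event times for a single trial. It is purely illustrative (the experiment itself was programmed in E-Prime, not Python), and the function and variable names are ours.

```python
import random

def trial_events(condition: int, reaction_time_ms: float) -> dict:
    """Nominal event times (ms) for one trial, relative to the button press
    that starts the trial. Illustrative sketch only, not the E-Prime script."""
    target_on = 1000.0                                # target appears 1 s after button press
    tone_on = target_on + random.uniform(1000, 3000)  # 50-ms tone, 1-3 s after target onset
    if condition == 1:
        # Condition 1: target disappears at movement onset (button release),
        # or after 1 s if no movement has been initiated.
        target_off = tone_on + min(reaction_time_ms, 1000.0)
    else:
        # Condition 2: target disappears simultaneously with the tone onset.
        target_off = tone_on
    return {"target_on": target_on, "tone_on": tone_on, "target_off": target_off}

print(trial_events(condition=2, reaction_time_ms=270.0))
```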
Figure 1
 
Schematic of the behavioral paradigm. 180 trials of each condition were presented to participants within a session. In condition 1 (vision), the tone sounds first and the stimulus disappears when the participant lifts his/her finger from the response box. In condition 2 (memory), the tone sounds simultaneously with the disappearance of the target.
Behavioral analyses
For each trial, reaction time (RT) and movement time (MT) were recorded. RT was defined as the time taken to initiate a movement in response to the beep, and MT was defined as the time taken to fully execute a movement, from release of the response button to contact with the touchscreen. Touch positions were recorded. Trials were considered accurate if the participant responded within 8 mm of the center of the target. During training, participants were required to meet this radial-error criterion, and a binary (hit/miss) measure was used for analysis. Trials were excluded from analyses if RTs were ≤150 ms or ≥800 ms, or if MTs were ≤200 ms or ≥2000 ms. The lower RT limit was chosen based on research suggesting that auditory RTs are around 160 ms (Brebner & Welford, 1980); we did not want to include responses in which the participant may have employed an anticipatory strategy. The upper limit was included in order to keep the conditions clearly separate. Because targets in condition 1 disappeared within one second if a participant had not responded, we wanted to ensure that any responses were made while visual information was still available. Otherwise, trials that were initially designed to rely on visual feedback during response initiation would, by default, come to rely on perceptual processes instead; that is, condition 1 trials could take on the properties of condition 2 trials by virtue of the participant's RT. By restricting MT, we also ensured that participants were indeed reaching with the instructed hand and that movements were executed in a reasonable time frame. The vast majority of RTs and MTs fell within the exclusion parameters, and <3% of all trials were rejected based on these criteria. Statistical analyses were carried out using MATLAB 7.1 (The MathWorks) and SPSS (version 18.0).
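As a concrete illustration of these exclusion and accuracy rules, here is a small Python sketch (the actual analyses were run in MATLAB and SPSS); the function and variable names are ours.

```python
import math

def keep_trial(rt_ms: float, mt_ms: float) -> bool:
    """Retain a trial only if RT and MT fall inside the stated windows
    (RT excluded if <=150 ms or >=800 ms; MT excluded if <=200 ms or >=2000 ms)."""
    return (150.0 < rt_ms < 800.0) and (200.0 < mt_ms < 2000.0)

def is_accurate(touch_xy, target_xy, criterion_mm: float = 8.0) -> bool:
    """Binary accuracy: radial error from the target center within 8 mm."""
    dx, dy = touch_xy[0] - target_xy[0], touch_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= criterion_mm

# Example trial: RT = 260 ms, MT = 530 ms, touch lands 5 mm from the target center.
print(keep_trial(260.0, 530.0), is_accurate((105.0, 200.0), (100.0, 200.0)))
```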
EEG recording and analysis
EEG was recorded using a high-density 256-channel Geodesic Sensor Net (Electrical Geodesics Inc., Eugene, OR), amplified at a gain of 1000 and sampled at 250 Hz. Impedances were kept below 50 kΩ, and the recording was initially referenced to the vertex electrode (Cz) before being converted to an average reference. In accordance with other studies examining the N170, we applied an offline bandpass filter of 0.5–30 Hz (Daniel & Bentin, 2010; Taylor, McCarthy, Saliba, & Degiovanni, 1999). The EEG was then segmented into 600-ms epochs, time-locked to the auditory action cue (epochs extended from 100 ms prior to the tone to 500 ms after the tone). Eye blinks and eye movements were corrected for (Gratton, Coles, & Donchin, 1983), and bad channels were corrected on a trial-by-trial basis using interpolated splines (Srinivasan, Nunez, Silberstein, Tucker, & Cadusch, 1996). Segments containing more than 20 bad channels were rejected and excluded from further analysis. Acceptable trials were averaged together and baseline corrected relative to pre-stimulus activity (−100–0 ms). On average, 159 condition-1 and 160 condition-2 trials per subject were retained. The maximum negative (N170) peak value within a 100–300 ms window was identified using a computerized statistical extraction tool, and mean voltages were calculated across a window extending one sample in either direction of the peak's maximum. Peak latency was also quantified, based on the peak's maximal value. Analysis was confined to left temporal, right temporal, parietal, and occipital electrode clusters (Figure 2). Each cluster comprised seven adjacent electrodes, centered around an appropriate electrode corresponding to the traditional 10–20 system. Our left and right temporal clusters were centered on T5 and T6, respectively, sites where the N170 is commonly reported. Our parietal cluster was centered on Pz, and our occipital cluster was centered on Oz. Individual electrodes were averaged together for each cluster, and repeated measures ANOVAs were used to compare the amplitudes and latencies of the N170. The factors in the ANOVAs were reach type (condition 1/condition 2) and region (left temporal, right temporal, parietal, and occipital). Statistical analysis was conducted using SPSS version 18.0. Bonferroni corrections were applied where appropriate, and Greenhouse-Geisser corrections were made for violations of sphericity.
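To make the peak-measurement step concrete, the numpy sketch below illustrates the procedure described above on a cluster-averaged, baseline-corrected epoch sampled at 250 Hz: find the most negative sample between 100 and 300 ms post-tone, then average it with one sample on either side. This is our illustration of the described steps, not the authors' own extraction tool.

```python
import numpy as np

FS = 250.0                                 # sampling rate (Hz)
TIMES = np.arange(-100, 500, 1000.0 / FS)  # epoch time axis in ms (-100 to +496 ms)

def n170_peak(erp_uv: np.ndarray, t_min: float = 100.0, t_max: float = 300.0):
    """Return (mean amplitude, latency) of the N170, measured as the most negative
    sample in [t_min, t_max] ms, with amplitude averaged over the peak sample
    plus one sample on either side."""
    in_window = np.where((TIMES >= t_min) & (TIMES <= t_max))[0]
    peak_idx = in_window[np.argmin(erp_uv[in_window])]
    lo, hi = max(peak_idx - 1, 0), min(peak_idx + 1, len(erp_uv) - 1)
    return erp_uv[lo:hi + 1].mean(), TIMES[peak_idx]

# Toy cluster-averaged ERP with a negative deflection near 170 ms.
toy_erp = -3.0 * np.exp(-((TIMES - 170.0) ** 2) / (2 * 25.0 ** 2))
print(n170_peak(toy_erp))
```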
Figure 2
 
Sensor layout and analysis clusters, shown in green. The N170 ERP component for conditions 1 and 2 was compared at these temporal, parietal, and occipital electrode clusters.
Results
Behavioral results
Average MT, accuracy, and RT were compared between the two conditions using two-tailed, paired-samples t-tests. One participant was excluded from the accuracy analysis due to a touchscreen calibration error. Average MT was significantly longer in condition 2 than in condition 1 (534.7 ms vs. 528.6 ms; t(25) = −3.16, p < 0.01), and mean accuracy was also lower in condition 2 (74% vs. 79%; t(25) = 4.62, p < 0.01). These results replicate previous findings that actions requiring more perception-based information are slower and less accurate (Goodale et al., 1994), because such actions may rely on the recall of target features, which is likely less precise than using directly available visual information. Finally, RTs were faster in condition 2 than in condition 1 (261.0 ms vs. 277.7 ms; t(25) = 12.75, p < 0.01), a pattern opposite to the MT data. This result may reflect an effect of attention on action. In condition 2, the cue to respond is paired with the visual stimulus offset, and research has shown that the disappearance of a target will attract attention. Thus, when stimulus location is relevant to a task, having a unitary target for both visual attention and goal-directed action may be advantageous (Nishimura & Yokosawa, 2010). In one study, participants were to respond to the onset or offset of a light in a two-light display. In one block of trials, both lights were initially off and the stimulus was the onset of one; in another block, both lights were initially on and the stimulus was the offset of one. Reaction times were fastest, for both onset and offset trials, when responses were directed toward the changed rather than the unchanged element (Simon, Craft, & Webster, 1971). Transient change information thus has an important perceptual effect on action, and we report similar findings.
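For reference, the comparisons above correspond to standard two-tailed, paired-samples t-tests on per-participant means. The sketch below uses scipy with simulated data purely to illustrate the form of the test (SPSS and MATLAB were used for the actual analyses; the numbers here are not the study's data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant mean MTs (ms) for 26 participants; condition 2 is
# drawn slightly slower, mimicking only the direction of the reported effect.
mt_cond1 = rng.normal(529.0, 40.0, size=26)
mt_cond2 = mt_cond1 + rng.normal(6.0, 10.0, size=26)

t_stat, p_val = stats.ttest_rel(mt_cond1, mt_cond2)  # two-tailed paired-samples t-test
print(t_stat, p_val)
```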
ERP results
The mean amplitudes and latencies of the N170 component measured at temporal, parietal, and occipital regions are reported in Table 1. Grand average ERPs are shown in Figure 3. We compared the amplitudes and latencies of the ERP in the latency range of the N170 component between reaching conditions 1 and 2 at temporal, parietal, and occipital regions. 
Table 1
 
Mean amplitudes and latencies (with standard deviations) of the N170 ERP component.
Region | Condition 1 N170 amplitude [μV] | Condition 2 N170 amplitude [μV] | Marginal mean | Condition 1 N170 latency [ms] | Condition 2 N170 latency [ms] | Marginal mean
Left temporal | −2.7 ± 1.6 | −3.3 ± 1.7 | −3.0 | 171.3 ± 47.8 | 173.5 ± 49.1 | 172.4
Right temporal | −2.4 ± 2.0 | −2.8 ± 2.1 | −2.6 | 187.3 ± 51.2 | 182.5 ± 52.8 | 184.9
Parietal | −4.4 ± 3.6 | −4.5 ± 3.3 | −4.4 | 199.4 ± 48.5 | 190.4 ± 38.6 | 194.9
Occipital | −4.8 ± 5.1 | −5.3 ± 4.2 | −5.0 | 189.4 ± 46.2 | 185.3 ± 43.1 | 187.3
Marginal mean | −3.58 | −3.98 | | 186.85 | 182.93 |
Figure 3
 
ERP plots for the central electrode within each electrode cluster (T5, T6, Pz, Oz). Conditions 1 and 2 are plotted for each electrode, with time (ms) on the x-axis and voltage (μV) on the y-axis.
A 2 (reach type: condition 1/condition 2) × 4 (region: left temporal/right temporal/parietal/occipital) repeated measures ANOVA revealed a main effect of reach type on N170 amplitude, F(1, 25) = 5.61, p < 0.05, due to a more negative amplitude for condition 2. A main effect of region was also significant, F(1.84, 46.10) = 4.75, p < 0.05, although pairwise comparisons revealed no significant effects. The interaction was not significant (p > 0.1), indicating that the topographic distribution of the reach-type effect did not differ reliably across regions. However, our hypothesis would be challenged if the main effect of reach type were not observable at temporal electrodes overlying the ventral stream. Therefore, although the interaction did not reach significance, we tested these electrodes individually as planned comparisons. While the results of planned comparisons are not as statistically robust as an interaction, the comparisons revealed that the N170 was significantly more negative at the left temporal, t(25) = 2.87, p < 0.01, and right temporal, t(25) = 2.79, p < 0.05, locations in condition 2 compared to condition 1. This difference did not reach significance at parietal or occipital locations (p > 0.1).
The N170 latency ANOVA revealed no significant effects (p > 0.1). 
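For readers wishing to reproduce this style of analysis, the sketch below shows how a 2 × 4 repeated-measures ANOVA on cluster amplitudes could be run in Python with statsmodels' AnovaRM. The data are simulated placeholders and the tool choice is ours (the reported analyses used SPSS, including Greenhouse-Geisser corrections that AnovaRM does not apply).

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
regions = ["left_temporal", "right_temporal", "parietal", "occipital"]

# Simulated long-format table: one N170 amplitude per subject x reach type x region.
rows = []
for subject in range(26):
    for reach in ["condition1", "condition2"]:
        for region in regions:
            amp = rng.normal(-3.0, 1.5) + (-0.4 if reach == "condition2" else 0.0)
            rows.append({"subject": subject, "reach": reach, "region": region, "amp": amp})
df = pd.DataFrame(rows)

# 2 (reach type) x 4 (region) repeated-measures ANOVA on amplitude.
print(AnovaRM(df, depvar="amp", subject="subject", within=["reach", "region"]).fit())
```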
Discussion
Our findings confirmed our major hypothesis that the N170 is a robust marker of increased ventral stream perceptual processes, reflected by an enhancement of the auditory ERP during the initiation phase of actions that rely more strongly on visual memory. We found that the negative evoked potential was larger in condition 2 than in condition 1. However, it is important to consider an alternative explanation. That is, in condition 2, the offset of the stimulus occurs simultaneously with the auditory cue to move, whereas in condition 1, the stimulus offset coincides with the participants' initiation of a response (on average, 277.7 ms after the auditory cue). Therefore, it is conceivable that the larger negativity in condition 2 is due to an added visual evoked potential in response to the offset of the visual stimulus, which is more time-locked to the signal to move in condition 2 than in condition 1, rather than the addition of a ventral-stream contribution to movement initiation. 
We offer three arguments against this alternative hypothesis. First, if the N170 merely reflected the offset of the visual stimulus, we would have expected to see a similar negative deflection of equal magnitude in condition 1 trials. For condition 1, mean stimulus offset time was 277.7 ms, corresponding to the average participant reaction time. If the larger negativity in the N170 latency range were attributable to the combined effect of a visual offset, we would expect to see a clear negative peak occurring roughly 472 ms post-beep (average visual offset time for condition 1 trials [277.7 ms] + average N170 latency for condition 1 trials at temporal electrodes [194 ms]). However, visual inspection of our ERP data reveals that this was not the case. Second, in condition 1, the visual stimulus is presented for a longer time compared to condition 2. That is, condition 2 relies on a shorter stimulus presentation time, and it does not seem likely that a shorter stimulus duration (condition 2) would elicit a larger ERP deflection in the latency range of the N170. Rather, longer stimulus duration times have been shown to elicit larger amplitude visual evoked offset responses (Morotomi & Kitajima, 1975; Wilson, 1983). Third, we would expect that subtle differences in the timing of the visual offsets are more likely to be reflected in earlier sensory components recorded over occipital scalp regions (Maier, Dagnelie, Spekreijse, & van Dijk, 1987) than in those in the N170 latency range over temporal areas. However, in order to directly test the alternative interpretation that our ERP effects are due to the trial pacing of our task rather than the differential contributions of perception-based processing, we conducted a follow-up control study to isolate effects purely due to the trial timing (visual offset) differences between conditions 1 and 2 on the ERP we observed in the latency range of the N170. A difference in ERP amplitude between conditions would suggest that the results of Experiment 1 were due to an additive visual evoked potential. If there were no difference in amplitude, however, this would support the interpretation that the N170 reliably reflects increased contributions of visual memory to action.
Experiment 2: methods
Participants
Twelve (nine female, three male) right-handed undergraduate students aged 18–43 (mean 22, SD = 6.92) who had not participated in Experiment 1 participated in the control study. All participants had normal or corrected-to-normal vision, and normal hearing. Written informed consent was obtained prior to the experiment in accordance with the University of Alberta's ethical review board, and the Declaration of Helsinki. 
Procedure
The experimental setup was identical to that of Experiment 1, except that participants were no longer required to make reaching movements to the target. Rather, participants passively viewed the presentation of the targets while listening to the auditory tone. As in Experiment 1, the tone sounded simultaneously with the disappearance of the target on condition 2 trials. In the initial study, however, the disappearance of the target on condition 1 trials depended on the participant's behavioral response. Because reaching was not required in the control experiment, the visual offset of the target on condition 1 trials was pre-determined using response times yoked to participants in Experiment 1. Each participant in the control study was given the same visual offset times as a randomly selected (without replacement) participant from the initial study. Rather than using mean response times or an average range of responses, yoking the participant data ensured that the control group saw the same sequences of visual stimuli as participants in Experiment 1 (apart from catch trials; see below).
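In code terms, the yoking step simply pairs each control participant with a distinct, randomly chosen Experiment 1 participant and reuses that participant's condition-1 offset times. The Python sketch below is our illustration only; the names and data are invented.

```python
import random

def yoke_offsets(exp1_offsets: dict, control_ids: list, seed: int = 0) -> dict:
    """Assign each control participant the condition-1 visual-offset times of a
    randomly chosen Experiment 1 participant, sampled without replacement."""
    rng = random.Random(seed)
    donors = rng.sample(list(exp1_offsets), k=len(control_ids))
    return {ctrl: exp1_offsets[donor] for ctrl, donor in zip(control_ids, donors)}

# Toy example: three Experiment 1 participants' offset times (ms), two controls.
exp1 = {"P01": [265, 301, 277], "P02": [290, 310, 250], "P03": [240, 280, 300]}
print(yoke_offsets(exp1, ["C01", "C02"]))
```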
To ensure that participants were on task and paying attention, catch trials replaced twenty percent of all trials. On these trials, the black target dot flashed to red before disappearing, requiring participants to respond manually by pressing a button. There were equal numbers of condition-1 and condition-2 catch trials (i.e., the target flashed to red at the time of the auditory tone [condition-2 catch trial] or with the later stimulus offset [condition-1 catch trial]). Thus, a total of 360 test trials (288 condition 1/condition 2 trials and 72 condition 1/condition 2 catch trials) were included in a session, and participants were given a break period of a self-determined length every 120 trials.
Behavioral and EEG analysis
Overall accuracy was calculated as the percentage of catch trials that were correctly responded to plus non-catch trials that were correctly rejected. EEG recording procedures were identical to those of the initial study (see EEG recording and analysis, above). All catch trials were excluded from ERP analysis, and only condition 1 and condition 2 trials were compared. Only those condition 1 trials in which the visual offset occurred between 150 and 800 ms were included in the ERP analysis, to maintain consistency with the first experiment. EEG segmentation and artifact detection procedures were identical to the initial study. On average, 123 condition 1 and 109 condition 2 acceptable trials per subject were retained for analysis. The maximum negative (N170) peak values were identified as before, and analysis was confined to the previously defined electrode clusters (Figure 2). Individual electrodes were averaged together for each cluster, and repeated measures ANOVAs were used to compare the amplitudes and latencies of the N170. Statistical analysis was conducted using SPSS version 18.0. Bonferroni corrections were applied where appropriate, and Greenhouse-Geisser corrections were made for violations of sphericity.
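A minimal sketch of that accuracy calculation (our own illustration, with invented toy data) follows: a trial counts as correct if a catch trial received a button press or a non-catch trial did not.

```python
def overall_accuracy(trials):
    """Each trial is (is_catch, responded); a trial is correct if a catch trial
    received a response or a non-catch trial did not (correct rejection)."""
    correct = sum(1 for is_catch, responded in trials if responded == is_catch)
    return correct / len(trials)

# Toy session: 4 catch trials (3 responded to) and 16 non-catch trials (1 false alarm).
toy = [(True, True)] * 3 + [(True, False)] + [(False, False)] * 15 + [(False, True)]
print(overall_accuracy(toy))  # 0.9
```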
Results
Behavioral results
Overall accuracy was 98% (SD = 0.03%), suggesting that participants were correctly following the task procedure and remained attentive throughout the recording session. 
ERP results
As in Experiment 1, we compared the amplitudes and latencies of the ERP in the latency range of the N170 between our conditions at left and right temporal electrode sites. The mean amplitudes and latencies are reported in Table 2. Grand average ERPs are shown in Figure 4 alongside the Experiment 1 results at temporal electrodes, for comparison. A 2 (reach type: condition 1/condition 2) × 2 (region: left temporal/right temporal) repeated measures ANOVA revealed no main effects on N170 amplitude and no significant interaction. Similarly, the N170 latency ANOVA revealed no main effects or significant interaction (p > 0.1).
Table 2
 
Mean amplitudes and latencies (with standard deviations) of the N170 ERP component.
Region | Condition 1 N170 amplitude [μV] | Condition 2 N170 amplitude [μV] | Marginal mean | Condition 1 N170 latency [ms] | Condition 2 N170 latency [ms] | Marginal mean
Left temporal | −2.9 ± 2.0 | −2.9 ± 1.7 | −2.9 | 189.2 ± 39.1 | 189.6 ± 41.8 | 189.4
Right temporal | −2.0 ± 1.2 | −2.2 ± 1.2 | −2.1 | 200.1 ± 32.5 | 217.2 ± 26.0 | 208.6
Parietal | −2.1 ± 1.3 | −2.0 ± 1.2 | −2.0 | 164.0 ± 39.1 | 168.0 ± 40.8 | 166.0
Occipital | −3.3 ± 2.2 | −3.2 ± 1.9 | −3.2 | 191.3 ± 44.2 | 204.8 ± 50.5 | 198.0
Marginal mean | −2.58 | −2.58 | | 186.15 | 194.90 |
Figure 4
 
ERP plots for the central electrode within each temporal area cluster (T5 and T6). Conditions 1 and 2 are plotted for each electrode, with time (ms) on the x-axis and voltage (μV) on the y-axis. Experiment 1 results are shown above the control results for comparison.
Discussion
The absence of a difference in N170 amplitude between conditions 1 and 2 in this follow-up experiment clearly rules out the possibility that the central ERP results of Experiment 1 are due to the different temporal dynamics of visual stimulus offset. Rather, based on the large body of evidence from neuropsychological patients and neurologically intact participants, it is more likely that our finding of a larger amplitude of the ERP in the latency range of the N170 during condition 2 compared to condition 1 is due to the recruitment of additional processes required for the successful completion of the task. That is, more perception-based processes are required to plan the action without full visual input (Westwood & Goodale, 2003). 
Conclusions
The main purpose of this study was to use ERPs to directly examine the patterns of neural activity underlying pointing actions that are initiated toward a visible target (condition 1) compared to pointing actions initiated toward a target that was previously visible (condition 2). The extant literature suggests that the second case (condition 2) likely relies on more perception-based neural activity compared to the first case (condition 1). This is because the initiation of the pointing actions without a visible target (condition 2) must rely on briefly stored information about the physical characteristics of the target that was perceptually encoded prior to the action (Goodale & Milner, 1992). Our main hypothesis was that the ERP in the latency range of N170 is a good electrophysiological marker for the differences in the neural bases of the two action types in our experiment because the N170 has been previously shown to reflect perceptual processes within the ventral visual stream. 
The pattern of behavioral data from this study followed other studies that have examined the nature of actions requiring perception-based information. That is, MT was increased and reach accuracy was decreased in condition 2, which, compared to condition 1, engages ventral mechanisms to a greater extent. This pattern has been shown in other studies comparing visually guided actions with those for which vision of a target is precluded (Armstrong & Singhal, 2011; Klatzky et al., 1993; Singhal et al., 2007; Westwood & Goodale, 2003), tasks which similarly differ in their reliance on the ventral stream. These findings support the idea that actions that engage the ventral stream rely on stored perceptual information, which is less accurate than real-time visual information; thus, actions toward visually unavailable targets are slower and show greater variability in arm and hand movements (Goodale & Milner, 1992). Furthermore, our participants reported (post hoc) that they were unaware of which trial type they were engaged in, suggesting that our manipulation was successful in engaging perception and memory processes without altering performance strategy.
To our knowledge, this is the first study to use ERPs to directly compare the neural processes underlying the planning of pointing behaviors for which the putative contribution of ventral stream perceptual mechanisms differs. Results indicate that the negative evoked potential elicited by the auditory cue to move was greater in amplitude for condition 2 than for condition 1 trials. Because the physical characteristics of the tone (pitch, amplitude, duration) were identical in both trial types, the amplitude differences between conditions cannot be attributed to the auditory cue alone. We argue that the brief maintenance and recall process in both conditions of our instructed delay task necessitated perceptual activity in ventral stream brain areas, and that these processes were reflected by the contribution of N170-range activity overlapping the ERP that was time-locked to the auditory action cue.
These results are consistent with our hypothesis that the N170 reflects ventral stream processes involved in action planning and thus is larger in amplitude for tasks that rely more heavily on perception-based information. During a memory task, stored representations of relevant information must be recruited during recall (Goodale & Milner, 1992; Klatzky et al., 1993). Additionally, studies have shown that the process of remembering often reactivates sensory-specific cortices that were first activated during the encoding of stimulus features (Johnson, Mitchell, Raye, D'Esposito, & Johnson, 2007; Geng, Ruff, & Driver, 2009). By directly comparing two conditions for which the contribution of the ventral stream is hypothesized to differ, we have shown that the N170 likely reflects perceptual requirements during mnemonic processing specifically linked to action. The N170 is known to have several sources in the brain, including the LOC and fusiform face area (FFA) in the temporal lobes. While we cannot definitively conclude that our temporal electrodes are a direct index of ventral stream processing, our a priori hypothesis led us to predict that this area would elicit a more negative N170 during condition 2 (which presumably engages ventral mechanisms to a greater extent than condition 1), and that this component would overlap with the auditory evoked potential. Our findings support previous fMRI, kinematic, and neuropsychological reports, which suggest that action planning in the absence of a visual target engages ventral stream processes (Goodale & Milner, 1992; Singhal et al., 2006, 2007). Given the consistency between our conclusions and previous studies, we provide converging electrophysiological support for the view that actions towards memory-based targets demand a greater contribution from ventral areas. Furthermore, our data support the idea that detailed memory representations of visual objects activate areas within the LOC (Xu & Chun, 2006), which are likely reactivated during recall (Nyberg, Habib, McIntosh, & Tulving, 2000; Wheeler, Petersen, & Buckner, 2000).
We also rule out the alternative interpretation that the stimulus offset differences contributed to the larger ERP amplitude in condition 2. To ensure that the offset was not contributing to the N170, Experiment 2 did not require participants to make a response to the target. There were therefore no differences in motor planning between conditions, and thus no need for any additional recruitment of the ventral stream to plan the action, which allowed us to isolate any effects due to the varying stimulus offset timing between our conditions. Results of Experiment 2 indicate that there were no differences in the N170 between condition 1 and condition 2, eliminating the possibility that the Experiment 1 results were contaminated by the offset of the visual stimuli.
We suggest that while the ventral stream is recruited for tasks like condition 2, these types of actions likely still involve communication with parietal areas. Numerous single-cell studies implicate dorsal stream structures in delayed response tasks, so we would not argue for a complete dissociation between the dorsal and ventral streams. Rather, collaboration between the streams seems more plausible. Both streams should be involved in the transformation of visual information into motor output, and how the two streams interact may depend on the specific requirements of the task at hand (McIntosh & Schenk, 2009). In our task, visual information about the hand and touchscreen is available throughout the movement, and the ventral stream may be providing allocentric spatial representations that could aid successful reaching (Dijkerman, Milner, & Carey, 1998). Some studies have suggested that delay-related activity in the dorsal stream may also reflect inputs from the ventral stream (Toth & Assad, 2002), and offline dorsal stream activity may depend on ventral stream information. Sensorimotor transformations that occur in ventral stream areas, including coordinate transformations for object recognition and translation-invariance, likely also have implications for visuomotor control (Graf, 2006), and the output of this system may be equally important to consider.
Given the advantage of EEG's temporal resolution, we may be able to examine the time course of the communication between ventral and parieto-frontal regions, and better quantify the streams' relative contributions to different sensorimotor tasks. For example, while we have isolated and analyzed an EEG component hypothesized to reflect ventral stream processing, it is important to address a potential bias towards finding effects at temporal, versus occipital and parietal, recording sites. In theory, it is plausible that both ventral and dorsal streams are driven more strongly by condition 2, but that differences arise at different times for different brain areas. An electrophysiological marker of dorsal stream processing could potentially be isolated within a different temporal window of the task. However, such a possibility is difficult to test using this particular paradigm. We did not include a strictly "visually guided" condition in which the target remains present throughout the movement; therefore, the circuit is deprived of visual input during movement execution in both condition 1 and condition 2. Furthermore, we did not manipulate either condition to differ during the preparatory phase of the reach. Thus, the appearance of the target should initiate planning activity in the sensorimotor system in a similar way across conditions. A more direct test of signal differences between conditions in parietal cortex would involve an experimental paradigm in which the conditions were more strongly distinguished from one another in terms of visual feedback throughout the movement. Future studies should investigate the electrophysiological mechanisms of real-time visual feedback for guiding actions.
In sum, this study is the first to directly compare actions that are hypothesized to require differential contribution of ventral stream mechanisms using high temporal resolution ERP. Our results, taken together with previous patient data and fMRI work, support the idea that the contribution of the N170 overlapping the auditory cue ERP may be a reliable marker of increased activity within the LOC during the planning of actions which rely more heavily on perception-based information. 
Acknowledgments
This work was supported by Natural Sciences and Engineering Research Council of Canada grants RGPIN #341662 to J. C. and RGPIN #341714-08 to A. S., and Alberta Ingenuity grant #200800568 to J. C. We would like to thank the following people who contributed to this work: Graeme Armstrong, Chris Madan, and Ian Surdhar for technical and programming assistance, and Ashley McKillop and Tania Shapka for assisting with data collection and analysis.
Commercial relationships: none. 
Corresponding author: Leanna C. Cruikshank. 
Email: leannac@ualberta.ca. 
Address: Centre for Neuroscience, University of Alberta, Edmonton, AB, Canada. 
References
Armstrong G. A. Singhal A. (2011). Neural markers of automatic and controlled attention during immediate and delayed action. Experimental Brain Research, 213, 35–48. doi: 10.1007/s00221-011-2774-0.
Bankó E. M. Gal V. Vidnyánszky Z. (2009). Flawless visual short-term memory for facial emotional expressions. Journal of Vision, 9(1):12, 1–13, http://www.journalofvision.org/content/9/1/12, doi: 10.1167/9.1.12.
Brebner J. T. Welford A. T. (1980). Introduction: An historical background sketch. In Welford A. T. (Ed.), Reaction times (pp. 1–23). New York: Academic Press.
Cohen N. R. Cross E. S. Tunik E. Grafton S. T. Culham J. C. (2009). Ventral and dorsal stream contributions to the online control of immediate and delayed grasping: A TMS approach. Neuropsychologia, 47(6), 1553–1562.
Cruikshank L. C. Singhal A. Hueppelsheuser M. Caplan J. B. (2012). Theta oscillations reflect a putative neural mechanism for human sensorimotor integration. Journal of Neurophysiology, 107(1), 65–77. doi: 10.1152/jn.00893.2010.
Daniel S. Bentin S. (2010). Age-related changes in processing faces from detection to identification: ERP evidence. Neurobiology of Aging, doi: 10.1016/j.neurobiolaging.2010.09.001.
Dijkerman H. C. Milner A. D. Carey D. P. (1998). Grasping spatial relationships: Failure to demonstrate allocentric visual coding in a patient with visual form agnosia. Consciousness and Cognition, 7(3), 424–437. doi: 10.1006/ccog.1998.0365.
Ferber S. Humphrey G. K. Vilis T. (2003). The lateral occipital complex subserves the perceptual persistence of motion-defined groupings. Cerebral Cortex, 13(7), 716–721.
Franz V. H. Hesse C. Kollath S. (2009). Visual illusions, delayed grasping, and memory: No shift from dorsal to ventral control. Neuropsychologia, 47(6), 1518–1531. doi: 10.1016/j.neuropsychologia.2008.08.029.
Ganel T. Tanzer M. Goodale M. A. (2008). A double dissociation between action and perception in the context of visual illusions: Opposite effects of real and illusory size. Psychological Science, 19(3), 221–225.
Geng J. J. Ruff C. C. Driver J. (2009). Saccades to a remembered location elicit spatially specific activation in human retinotopic visual cortex. Journal of Cognitive Neuroscience, 21(2), 230–245.
Goodale M. A. (1998). Vision for perception and vision for action in the primate brain. Novartis Foundation Symposium, 218, 21–34; discussion 34–39.
Goodale M. A. Jakobson L. S. Keillor J. M. (1994). Differences in the visual control of pantomimed and natural grasping movements. Neuropsychologia, 32(10), 1159–1178.
Goodale M. A. Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.
Goodale M. A. Milner A. D. Jakobson L. S. Carey D. P. (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349(6305), 154–156.
Goodale M. A. Milner A. D. Jakobson L. S. Carey D. P. (1991). Object awareness. Nature, 352(6332), 202.
Graf M. (2006). Coordinate transformations in object recognition. Psychological Bulletin, 132(6), 920–945. doi: 10.1037/0033-2909.132.6.920.
Gratton G. Coles M. G. Donchin E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55, 468–484.
Grill-Spector K. Kourtzi Z. Kanwisher N. (2001). The lateral occipital complex and its role in object recognition. Vision Research, 41(10–11), 1409–1422.
Hesse C. Franz V. H. (2009). Memory mechanisms in grasping. Neuropsychologia, 47(6), 1532–1545. doi: 10.1016/j.neuropsychologia.2008.08.012.
Hu Y. Goodale M. A. (2000). Grasping after a delay shifts size-scaling from absolute to relative metrics. Journal of Cognitive Neuroscience, 12(5), 856–868.
James T. W. Culham J. Humphrey G. K. Milner A. D. Goodale M. A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: An fMRI study. Brain, 126(Pt 11), 2463–2475.
Johnson M. R. Mitchell K. J. Raye C. L. D'Esposito M. Johnson M. K. (2007). A brief thought can modulate activity in extrastriate visual areas: Top-down effects of refreshing just-seen visual stimuli. NeuroImage, 37(1), 290–299.
Klatzky R. L. Pellegrino J. McCloskey B. P. Lederman S. J. (1993). Cognitive representations of functional interactions with objects. Memory & Cognition, 21(3), 294–303.
Maier J. Dagnelie G. Spekreijse H. van Dijk B. W. (1987). Principal components analysis for source localization of VEPs in man. Vision Research, 27(2), 165–177.
McIntosh R. D. Schenk T. (2009). Two visual streams for perception and action: Current trends. Neuropsychologia, 47(6), 1391–1396. doi: 10.1016/j.neuropsychologia.2009.02.009.
Milner A. D. Dijkerman H. C. Pisella L. McIntosh R. D. Tilikete C. Vighetto A. (2001). Grasping the past: Delay can improve visuomotor performance. Current Biology, 11(23), 1896–1901.
Morgan H. M. Klein C. Boehm S. G. Shapiro K. L. Linden D. E. (2008). Working memory load for faces modulates P300, N170, and N250r. Journal of Cognitive Neuroscience, 20(6), 989–1002.
Morotomi T. Kitajima S. (1975). Enhancement of evoked responses to brief flashes and its correlation with off responses to pre-exposed light stimulation. Vision Research, 15(2), 267–272.
Murata A. Gallese V. Kaseda M. Sakata H. (1996). Parietal neurons related to memory-guided hand manipulation. Journal of Neurophysiology, 75, 2180–2186.
Nishimura A. Yokosawa K. (2010). Visual and auditory accessory stimulus offset and the Simon effect. Attention, Perception & Psychophysics, 72(7), 1965–1974. doi: 10.3758/APP.72.7.1965.
Nyberg L. Habib R. McIntosh A. R. Tulving E. (2000). Reactivation of encoding-related brain activity during memory retrieval. Proceedings of the National Academy of Sciences of the United States of America, 97(20), 11120–11124.
Rossion B. Joyce C. A. Cottrell G. W. Tarr M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. NeuroImage, 20(3), 1609–1624.
Simon J. R. Craft J. L. Webster J. B. (1971). Reaction time to onset and offset of lights and tones: Reactions toward the changed element in a two-element display. Journal of Experimental Psychology, 89(1), 197–202.
Singhal A. Culham J. C. Chinellato E. Goodale M. A. (2007). Dual-task interference is greater in delayed grasping than in visually guided grasping. Journal of Vision, 7(5):5, 1–12, http://www.journalofvision.org/content/7/5/5, doi: 10.1167/7.5.5.
Singhal A. Kaufman L. Valyear K. Culham J. (2006). fMRI reactivation of the human lateral occipital complex during delayed actions to remembered targets. Visual Cognition, 14(1), 125–128.
Sreenivasan K. K. Katz J. Jha A. P. (2007). Temporal characteristics of top-down modulations during working memory maintenance: An event-related potential study of the N170 component. Journal of Cognitive Neuroscience, 19, 1836–1844.
Srinivasan R. Nunez P. L. Silberstein R. B. Tucker D. M. Cadusch P. J. (1996). Spatial sampling and filtering of EEG with spline-Laplacians to estimate cortical potentials. Brain Topography, 8, 355–366.
Taylor M. J. McCarthy G. Saliba E. Degiovanni E. (1999). ERP evidence of developmental changes in processing of faces. Clinical Neurophysiology, 110(5), 910–915.
Toth L. J. Assad J. A. (2002). Dynamic coding of behaviorally relevant stimuli in parietal cortex. Nature, 415(6868), 165–168. doi: 10.1038/415165a.
Westwood D. A. Dubrowski A. Carnahan H. Roy E. A. (2000). The effect of illusory size on force production when grasping objects. Experimental Brain Research, 135(4), 535–543.
Westwood D. A. Goodale M. A. (2003). Perceptual illusion and the real-time control of action. Spatial Vision, 16(3–4), 243–254.
Westwood D. A. McEachern T. Roy E. A. (2001). Delayed grasping of a Müller-Lyer figure. Experimental Brain Research, 141(2), 166–173.
Wheeler M. E. Petersen S. E. Buckner R. L. (2000). Memory's echo: Vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(20), 11125–11129.
Wilson J. T. (1983). Effects of stimulus luminance and duration on responses to onset and offset. Vision Research, 23(12), 1699–1709.
Xu Y. Chun M. M. (2006). Dissociable neural mechanisms supporting visual short-term memory for objects. Nature, 440(7080), 91–95.