Research Article | July 2007
Influence of initial hand and target position on reach errors in optic ataxic and normal subjects
Aarlenne Z. Khan, J. Douglas Crawford, Gunnar Blohm, Christian Urquizar, Yves Rossetti, Laure Pisella
Journal of Vision July 2007, Vol. 7(5):8. https://doi.org/10.1167/7.5.8
Abstract

Recent neurophysiological studies suggest that reach planning areas in the posterior parietal cortex encode both target and initial hand position in gaze-centered coordinates, which could be used to calculate a desired movement vector. We tested how varying gaze, target position, and initial hand position affected reach errors in two left unilateral optic ataxia patients with right PPC damage and seven neurologically intact controls. Both controls' and patients' reaching errors revealed an influence of target position in gaze-centered coordinates; however, both patients' mean errors were offset toward the left, with greater errors when the target was in their left visual field, consistent with the damage to the right PPC. Control subjects also showed a large quasi-independent shoulder-centered influence of target position. This effect was much less present in patient C.F., who had more medial damage to the PPC. In contrast, for patient O.K., who had more lateral PPC damage, the shoulder-centered effect was larger and interacted with the gaze-centered influence of target position. All subjects' errors also revealed a shoulder-centered influence of the initial hand position, with larger influences on the patients' reaching errors. Both patients also showed an interactive influence of the shoulder-centered and gaze-centered initial hand positions. These results suggest that the target and the hand are compared at more than one level in the visuomotor pathway in multiple reference frames, and these comparisons are then integrated. Depending on the location of the damage within the PPC, these comparisons are disrupted, changing the relative influence of hand and target position in different reference frames on the final reaching movement.

Introduction
Reaching toward a viewed object is a complex task that involves a number of different processes in the brain. The first step involves encoding the location of the object. We use mainly visual information acquired through our eyes to specify the location of targets, although for close targets, we could also use proprioceptive or auditory information. For the final movement, the brain needs to determine which arm muscles to contract to successfully reach the object. In between the sensory input and the motor output, the brain must compute a complex visuomotor transformation; that is, sensory representations of the visual target and hand position must be compared to calculate a reach plan in shoulder-centered coordinates (Battaglia-Mayer, Caminiti, Lacquaniti, & Zago, 2003; Crawford, Medendorp, & Marotta, 2004; Flanders, Helms-Tillery, & Soechting, 1992; Gordon, Ghilardi, & Ghez, 1994; McIntyre, Stratta, & Lacquaniti, 1997; McIntyre, Stratta, & Lacquaniti, 1998). Understanding these transformations is essential if we are to understand the bases of clinical visuomotor deficits, such as optic ataxia (OA). 
A number of behavioral reaching studies have shown evidence that the spatial locations of objects are determined relative to gaze (Henriques, Klier, Smith, Lowy, & Crawford, 1998; Poljac & van den Berg, 2003; Pouget, Ducom, Torri, & Bavelier, 2002). Specifically, they show that during reaching or pointing movements, errors vary as a function of the position of the reach target relative to current gaze. Single-unit recordings in monkeys and functional imaging studies in humans also suggest that a gaze-centered reference frame is used to represent and update target locations in specific reach-related areas of the parietal cortex (Batista, Buneo, Snyder, & Andersen, 1999; Cohen & Andersen, 2000; Medendorp, Goltz, Crawford, & Vilis, 2005; Medendorp, Goltz, Vilis, & Crawford, 2003). For example, Batista et al. (1999) showed that in an area in the posterior parietal cortex (PPC) specialized for reach movements, neuronal activity varied when gaze was changed relative to the reach target. This was recently confirmed in patient studies, where unilateral and bilateral OA patients with damage in the PPC showed deficits in reaching that are consistent with a gaze-centered representation of reach space when performing a reaching task (Khan, Pisella, Rossetti, Vighetto, & Crawford, 2005; Khan, Pisella, Vighetto, et al., 2005).
To make the appropriate reach movement to the target, it is not sufficient to know the location of the target; knowledge of the initial hand position is also required because the desired movement vector is defined as the difference between current hand and target position. Furthermore, these representations must be compared within a common reference frame in order that the required movement vector is calculated in a spatially consistent manner. Finally, the required muscle contractions for the arm depend primarily on the hand–target movement vector. The existing literature has proposed two different stages in the visuomotor transformation process where target location and initial arm position could be compared (Buneo, Jarvis, Batista, & Andersen, 2002; Flanders et al., 1992; Henriques et al., 1998). First, arm reach movements could be represented in a shoulder-centered reference frame (Crawford et al., 2004; Flanders et al., 1992). The final reaching movement is determined by the contractions of muscles that have their main insertion points in the lower arm, upper arm, and shoulder. Overall, the arm moves as a whole relative to the shoulder, and therefore, we treat the arm as being shoulder fixed. The target position could be transformed through a series of transformations to a shoulder-centered representation and then compared to the current arm position in the same reference frame (Flanders et al., 1992; Henriques et al., 1998). This would result in a motor vector formed in shoulder-centered coordinates. 
Alternatively, initial hand position could be encoded in gaze-centered coordinates (either directly, through vision of the hand, or indirectly, through a reverse transformation from proprioceptive information) and compared to target location in gaze-centered coordinates (Blangero, Rossetti, Honoré, & Pisella, 2005; Buneo & Andersen, 2006; Buneo et al., 2002). This would result in a motor vector determined in gaze-centered coordinates, which then would be transformed through the appropriate reference frame transformations into a shoulder-centered representation (Beurze, Van Pelt, & Medendorp, 2006; Blohm & Crawford, 2007). There is empirical evidence for an early gaze-centered representation of hand position. For instance, Buneo et al. (2002) showed modulation in neuronal activity in Area 5 of the PPC based on the position of the hand relative to gaze, suggesting that this area also represents the position of the hand in gaze-centered coordinates. Based on these findings, it has been postulated that the target and the hand are represented in the same gaze-centered representation, allowing for a gaze-centered motor vector to be formed (Buneo & Andersen, 2006; Buneo et al., 2002). 
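Both schemes ultimately reduce to computing a difference vector between target and hand, but in different coordinate frames. The following minimal sketch (Python; the function names and the simplified transformation are ours, not the authors') contrasts the two hypotheses; the real gaze-to-shoulder transformation additionally depends on eye and head orientation (Blohm & Crawford, 2007).

```python
import numpy as np

# Schematic contrast of the two comparison schemes discussed above (illustrative only).
# t_shoulder / h_shoulder: target and hand positions in shoulder-centered coordinates;
# t_gaze / h_gaze: the same positions in gaze-centered coordinates.

def movement_vector_shoulder_scheme(t_shoulder, h_shoulder):
    # Scheme 1: transform the target into shoulder coordinates first,
    # then compare it with the hand position in that same frame.
    return np.asarray(t_shoulder) - np.asarray(h_shoulder)

def movement_vector_gaze_scheme(t_gaze, h_gaze, gaze_to_shoulder):
    # Scheme 2: compare target and hand in gaze coordinates, then transform
    # the resulting movement vector into shoulder coordinates for execution.
    # "gaze_to_shoulder" stands in for the full reference frame transformation.
    v_gaze = np.asarray(t_gaze) - np.asarray(h_gaze)
    return gaze_to_shoulder(v_gaze)
```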
We could also consider a third possibility, that hand–target comparisons are done in both visual and somatotopic frames depending on task requirements, available information, or simply an optimal movement plan (Battaglia-Mayer et al., 2003; Blohm, Khan, & Crawford, in press). There is neurophysiological evidence showing that areas such as the parietal and premotor cortex show modulation of neural activity correlated with both hand and target positions in different reference frames (Batista et al., 2005; Battaglia-Mayer et al., 2003, 2001). In this framework, the hand and target could be compared in multiple reference frames and these comparisons could then be combined for the final movement vector. This is also consistent with recent findings showing that an artificial neural network of the visuomotor transformation for reaching performs this comparison gradually across different frames of reference (Blohm, Keith, & Crawford, 2006). 
In the current study, we used these three theoretical frameworks to test the effect of initial hand position on reach errors in OA patients. Unilateral OA patients with right parietal damage show significantly greater reaching errors in their left or impaired visual field compared to their right, intact visual field, consistent with a gaze-centered representation of reach space (Khan, Pisella, Rossetti, et al., 2005). However, it is not known whether these errors are also affected by (a) hand position (either in gaze-centered or in shoulder-centered coordinates) or (b) the shoulder-centered position of the reach target. To answer this, we compared reaching errors in OA patients and in neurologically intact controls while varying the position of the initial hand and reach target. We aimed to determine whether reach errors in these subjects depended on hand and/or target position in gaze- and/or shoulder-centered coordinates. 
Unilateral OA patients' reaching errors reveal both a field effect—deficits related to reaching in the visual field opposite to the lesion—and a hand effect—deficits related to reaching with the contralesional hand (Perenin & Vighetto, 1988; Vighetto & Perenin, 1981). In this study, we wished to focus on the field effect and, thus, had patients reach only with their ipsilesional (right) hand. In this way, we hoped to focus predominantly on the visuospatial factors and exclude the presumably “additive” motor effects of using the contralesional hand (Perenin & Vighetto, 1988; Vighetto & Perenin, 1981). 
Methods
Subjects
We tested two patients, both with left unilateral OA. Patient O.K. is a right-handed 40-year-old man with right posterior parietal lobe damage caused by an ischemic stroke involving the posterior branch of the right sylvian artery (for details, see Revol et al., 2003). Damage includes Brodmann area (BA) 7 in its lateral and medial aspects as well as a slight extension into Areas 39 and 40 and into the right posterior corpus callosum. He exhibited OA in his left visual field (Figure 1A, left panel). 
Figure 1
 
Magnetic resonance imaging scans of the two patients and experimental setup. (A) The left panel shows a T1 scan for patient O.K. The darker area in the right hemisphere reveals the lesion in the right PPC. The right panel shows a T2 scan for patient C.F. The white areas at the bottom of the scan show asymmetrical damage mostly in the right hemisphere in the posterior parietal lobes. There is also some slight damage in the left premotor cortex. (B) Experimental setup. Subjects fixated on one of seven fixation targets (white circles) while reaching to one of three reaching targets (black circles). Reaching movements began from one of three initial hand positions (gray circles). The angular distance of all targets from the cyclopean position (between the two eyes) is shown. LED targets located above the subjects were reflected to appear on the table through the use of a half-reflecting mirror. The mirror was shaped so that subjects were able to see their hand at the initial start position at the beginning of every trial but had no visual feedback for the rest of the movement. In addition, trials took place in complete darkness except for a dim light that allowed subjects to see their hand when the initial start position LED was illuminated.
Patient C.F. is a right-handed 28-year-old man who suffered from a watershed posterior infarct, resulting in distributed and asymmetrical bilateral lesions of the occipitoparietal region (BA 18, 19, 7, 5, and 2) with a minute extension to the centrum semiovale. At the time of testing, he exhibited OA predominantly in his left visual field, thought to be the consequence of larger damage in the right hemisphere from both BA 7 lesions and a parietofrontal disconnection from intrahemispheric fiber lesions (Figure 1A, right panel). Neither patient exhibited any purely motor, somatosensory, or visual deficits or any sign of neglect. They were tested using a set of standard clinical tests involving visual field topography, sensory stimulation tests, evaluation of reflexes and muscle tone, and joint movement.
The clinical evaluation of the static and dynamic proprioception of the upper limbs consisted of applying a slow passive movement in flexion or extension (the test included 25% catch trials) on each joint serially (index, wrist, elbow, and shoulder), while patients kept their eyes closed. We asked them (a) whether they perceived a movement, (b) in which direction, and (c) to reproduce the single joint angles with the other limb (Rivermead Assessment of Somatosensory Performance subtests). In addition, seven neurologically intact controls were also tested (mean age = 33.67 years, age range = 27–46). 
Apparatus
Subjects reached to targets projected onto a tabletop by a half-reflecting mirror (Figure 1B), which allowed subjects sight of their hand at the beginning of each trial but not during the trial itself. Sight of the hand before each trial is known to improve reaching accuracy (Vindras, Desmurget, Prablanc, & Viviani, 1998). The target array consisted of seven red fixation light-emitting diodes (LEDs) located at 36°, 24°, and 12° left; 0°; and 12°, 24°, and 36° right relative to the cyclopean eye position located midway between the two eyes (shown in Figure 1B as white circles). The 0° fixation LED was located at a distance of 58.5 cm from the subjects' eyes, and all fixation targets were aligned horizontally. Three green reaching targets (also LEDs) were located at 12° left, 0°, and 12° right, slightly below the fixation targets and also aligned horizontally (shown as black circles). The 0° reaching target was at a distance of 57.5 cm from the subjects' eyes. Subjects began their reaching movements from one of three initial hand positions vertically aligned with the reaching targets (the center initial hand position was 44.5 cm from the subjects' eyes). The three initial hand position LEDs were located at 24° left, 0°, and 24° right relative to the subject's midsagittal plane (shown as gray circles). All LEDs were projected onto the same plane. The subject's head was fixed using a chin rest vertically aligned with the 0° fixation, reach, and initial hand positions.
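For concreteness, the lateral positions of the projected targets on the tabletop follow from the angles and viewing distances given above by simple trigonometry. The sketch below is ours and ignores the vertical drop from eye level to the tabletop, so the values are approximate.

```python
import math

def lateral_offset_cm(angle_deg: float, viewing_distance_cm: float) -> float:
    # Horizontal offset of a target from the midline, given its angle relative
    # to the cyclopean eye; planar approximation ignoring the drop to the tabletop.
    return viewing_distance_cm * math.tan(math.radians(angle_deg))

# Reach targets at 12 deg left, 0 deg, and 12 deg right, viewed from ~57.5 cm:
for angle in (-12, 0, 12):
    print(f"reach target {angle:+d} deg -> {lateral_offset_cm(angle, 57.5):.1f} cm from midline")
```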
Movements of the right index finger were recorded using an Optotrak 3020 digitizing and motor analysis system. Data were sampled at 1000 Hz. Finger movements were measured in 3D space (in mm) relative to the center initial hand position. Projected LED positions for all fixations, reach, and initial hand position targets as well as the 3D position of the right eye (subjects placed their finger on their right eye) were also measured for each subject. Horizontal eye positions were recorded binocularly through an electrooculogram (EOG) using a DC electrooculograph system (50 Hz) by placing electrodes outside the left and right eyes. 
Subjects performed the same basic task under various conditions. Each trial began with the illumination of an initial hand position LED for 2,000 ms. Next, one of the seven fixation LEDs was illuminated for 2,000 ms and subjects were required to fixate on it. After 1,000 ms, a reach target was illuminated for 1,000 ms. Subjects maintained their fixation position, and after both LEDs were extinguished, an auditory tone signaled subjects to move their hand to the reach target. Figure 2 (top panel) depicts sequential stimulus and behavioral epochs in our experiment. 
Figure 2
 
Experiment timing and examples of eye and arm movements. Events are plotted as a function of time ( x-axis) in seconds. (A) Timing of the various targets (depicted in the same colors as in Figure 1B). Timing begins from 1 s before recording onset (first vertical dotted line) to depict the presentation of the initial hand position LED (where the eye first fixated). Note that the initial hand position LED was illuminated for 2 s in total (here, we only show the last second). (B) Horizontal eye position (top trace) and 3D finger position (bottom overlaid traces) for the control subject. The horizontal eye position trace shows EOG current in volts for the eye initially at the initial hand position then a movement to the fixation target (24° left). This position was held for the remainder of the trial. The finger position traces show a movement from the initial hand position to the central reach target. The y-axis shows the distance in centimeters from the initial hand position for horizontal (solid trace—negative values are to the left), depth (dashed trace—negative values are away from the subject), and vertical (dotted trace—positive values are the finger lifting up from the table). This subject moved slightly to the right and away from the body (and of course, lifted the finger from the table to make the movement). Eye and finger traces for patient C.F. (C) and patient O.K. (D) are shown. The eye trace from patient C.F. shows some drift, but overall, all subjects were able to maintain fixation on the remembered location of the fixation target. The extinction of the fixation and reach targets coincided with an auditory cue for hand movement initiation (second vertical dotted line). The vertical lines show the extracted positions and timings of the hand movements (short vertical line—start and end positions [200 ms before and after start and end times], long vertical lines—start and end times determined by using velocity criteria).
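As a compact summary of the trial sequence, the sketch below encodes the nominal event onsets; the event labels and this representation are ours, derived from the durations given in the text.

```python
# Nominal event onsets within a trial (ms), taken from the durations in the text.
TRIAL_EVENTS = [
    ("initial_hand_LED_on",    0),  # start position visible for 2,000 ms
    ("fixation_LED_on",     2000),  # subject saccades to and holds the fixation LED
    ("reach_target_on",     3000),  # reach target shown for 1,000 ms
    ("LEDs_off_go_tone",    4000),  # both LEDs extinguished; auditory go cue
]

def event_at(t_ms: float) -> str:
    """Return the most recent trial event at time t_ms."""
    current = TRIAL_EVENTS[0][0]
    for name, onset in TRIAL_EVENTS:
        if t_ms >= onset:
            current = name
    return current
```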
All trials took place in complete darkness. The experiment was performed in three sessions; in each session, subjects began at a different initial hand position (left, center, or right). Only the center and left reach targets were presented with the left initial hand position, and conversely, only the center and right reach targets were presented with the right initial hand position. All three reach targets were presented for the center initial hand position. 
Within each session, five fixation targets were presented for each reaching target, comprising the fixation LED directly above the reaching target and two fixation LEDs on either side of it (e.g., Figure 1B—the centermost five fixation targets were presented with the center reaching target). Control subjects performed six trials per fixation target per reaching target (except for one who performed nine trials per condition). We deemed six trials per condition sufficient for the controls because data were pooled across all seven control subjects. Patient C.F. performed nine trials per fixation target, and patient O.K. performed six trials per fixation target due to time constraints. Fixation and reach target lights were presented in a blocked design, and the presentation order was counterbalanced between subjects. The three sessions (one for each initial hand position) were separated by breaks.
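The pairing of initial hand positions, reach targets, and fixation positions can be summarized programmatically. The sketch below follows the rules stated above (the 12° fixation spacing and the target pairings are from the text; the data structure and names are ours).

```python
# Each session uses one initial hand position; reach targets are restricted per
# hand position, and each reach target is paired with the five fixation LEDs
# centered on it (the one directly above it plus two on either side).
REACH_TARGETS_BY_HAND = {      # angles in deg relative to the midline
    -24: [-12, 0],             # left start: left and center targets only
      0: [-12, 0, 12],         # center start: all three targets
     24: [0, 12],              # right start: center and right targets only
}

def conditions_for_session(initial_hand_deg: int):
    """Yield (initial hand, reach target, fixation) triplets for one session."""
    for target in REACH_TARGETS_BY_HAND[initial_hand_deg]:
        for fixation in (target - 24, target - 12, target, target + 12, target + 24):
            yield (initial_hand_deg, target, fixation)

# The center-start session comprises 3 targets x 5 fixations = 15 conditions.
print(len(list(conditions_for_session(0))))  # -> 15
```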
At the end of each session, we performed a set of calibration trials, where we illuminated all targets (as well as the room) and asked subjects to fixate and point to each fixation, reach, and initial hand LED in sequence while maintaining the same head position (chin rest and forehead support). Because of the room illumination, they were able to use visual feedback through the mirror to accurately reach to each target. These finger positions were used to calculate the 3D positions of the initial hand, reach, and fixation positions. 
Data analysis
Eye and finger movements were monitored online for all subjects. Immediately following the completion of each trial, we confirmed that appropriate eye and reach movements had been made. Trials performed incorrectly for any reason were repeated. 
Figure 2 illustrates horizontal eye and arm positions plotted as a function of time for a typical control and the two patients for a 24° left fixation target and the center reach and start positions. The eye position traces show that subjects moved their eyes from the start position to the fixation target and then maintained fixation at that location for the remainder of the trial. Note that, in the hand position traces, the control subject held the final position of the hand until the end of the recorded time, whereas the patients returned to the start position after holding the final position for a relatively shorter period. Either strategy was acceptable as long as subjects maintained the final position long enough for the end position of the hand to be extracted. Trials in which this did not occur were repeated. 
We measured eye movements using EOG and analyzed the eye movement data off-line to ensure that subjects correctly performed the oculomotor aspects of the task; that is, (a) subjects made the appropriate (correct direction and approximate amplitude) eye movement to the fixation target before the reach target was presented, and (b) subjects maintained this eye position until after the hand movement to the target was completed. The EOG system did not allow for highly accurate measurements of saccade metrics (accuracy = 2°) but was sufficiently accurate to determine the timing, direction, and approximate endpoint location of the eye movements made. The relatively long duration of the fixation targets ensured that all subjects would attain and maintain the target. Unpublished data on these patients suggest that their eye movements to targets (remaining illuminated) are within normal parameters, and we did not observe any oculomotor deficits during our experiment. 
The 3D end positions for the fingertip were calculated for each trial using velocity criteria (200 ms after velocity <80 mm/s—short vertical lines in Figure 2). The velocity criteria were also used to calculate arm latency and movement time. 3D arm positions were subsequently converted into angles relative to the cyclopean eye (calculated from the 3D position of the right eye) and the 0° reaching target (as determined by the calibration trials). 
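A minimal sketch of this endpoint extraction, assuming the finger position is sampled at 1,000 Hz and applying the velocity criterion quoted above (position taken 200 ms after the tangential velocity falls back below 80 mm/s); onset detection and the absence of filtering here are our simplifications.

```python
import numpy as np

def reach_endpoint(pos_mm: np.ndarray, fs: float = 1000.0,
                   vel_thresh: float = 80.0, offset_s: float = 0.2):
    """Estimate the 3D movement end position from finger samples (n x 3, in mm).

    Finds the first sample where tangential velocity drops back below the
    threshold after movement onset, then returns the position 200 ms later.
    """
    vel = np.linalg.norm(np.gradient(pos_mm, 1.0 / fs, axis=0), axis=1)  # mm/s
    moving = vel > vel_thresh
    if not moving.any():
        return None                                   # no movement detected
    onset = int(np.argmax(moving))                    # first supra-threshold sample
    below = np.where(~moving[onset:])[0]              # return below threshold
    if below.size == 0:
        return None                                   # never slowed down again
    end_idx = min(onset + int(below[0]) + int(offset_s * fs), len(pos_mm) - 1)
    return pos_mm[end_idx]                            # 3D end position (mm)
```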
Reach movements were considered anticipatory if they occurred less than 100 ms after target offset, and the maximum time allowed for the hand movement to begin was 1,000 ms after target offset. Erroneous trials were removed from the analysis (1.3% for C.F., 4.7% for O.K., and 3.2% for controls). Reach errors for controls were pooled together across controls as all controls showed similar patterns in reach errors. These grouped data were then analyzed in the same manner as the individual patient data. Therefore, we had approximately 42 trials per fixation, reach, and initial hand position for the control subjects, and the average and standard errors shown in graphs are calculated across controls. 
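The latency screening amounts to a simple window on movement onset relative to target offset; a small sketch (variable names ours):

```python
def keep_trial(movement_onset_ms: float, target_offset_ms: float) -> bool:
    # Reaches starting less than 100 ms after target offset are treated as
    # anticipatory; onsets later than 1,000 ms exceed the allowed window.
    latency = movement_onset_ms - target_offset_ms
    return 100.0 <= latency <= 1000.0
```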
Results
Reaching errors relative to the gaze-centered position of the reach target have been documented in previous studies on these OA patients (Dijkerman et al., 2006; Khan, Pisella, Rossetti, et al., 2005; Khan, Pisella, Vighetto, et al., 2005). Here, it was necessary to confirm these findings as the first part of a more complete analysis that investigates the effects of initial hand position (gaze- and shoulder-centered positions) and target position in shoulder coordinates. The aim was to tease out the relative influences of these three factors in the subjects' reaching errors. 
Reach errors related to the gaze-centered target and initial hand position
To determine whether reach errors depended on the position of the reach target and/or initial hand position in gaze-centered coordinates, we compared reach errors in two reference frames for the same center reach movement. Figures 3A–3D show this reach movement schematically (as a gray arrow), where the movement is from the center initial hand position (gray circles) to the center reach target (black circles) with the corresponding fixation positions (white circles) specified in two reference frames: (a) a shoulder-centered reference frame (A and C) and (b) a gaze-centered reference frame (B and D).
Figure 3
 
Shoulder- versus gaze-centered representations of the central movement. (A–D) Schematic of the central reach movement in a shoulder-centered (A and C) versus gaze-centered (B and D) representation. The three initial hand targets (gray circles), the three reach targets (black circles), and the five central fixation targets (white circles) are shown in the different reference frames. The bull's-eye symbol depicts current fixation, where the upper panels show a fixation to the left (A and B) and the lower panels show a fixation to the right (C and D). The gray arrow shows the current reaching movement. (E) Horizontal reach error in degrees plotted as a function of reach target relative to gaze for the same reach movement (see x-axis). The dotted line at y = 0° depicts the pattern of errors predicted by a shoulder-centered representation. The dashed curved line illustrates errors expected if there was a gaze-centered representation of errors based on data from Henriques et al. (1998). Average data are shown across all controls (thin solid line with white diamonds), patient C.F. (thick gray line with gray squares), and patient O.K. (thick black line with black squares). The error bars depict the standard error of the mean (for the control data, standard errors of the mean are calculated across all controls).
The term reference frame is used here in the mathematical sense, that is, as the reference frame for a coordinate system. Note that our study should be able to distinguish reach errors arising in gaze-centered coordinates from head-, shoulder-, or space-centered errors (because we varied gaze position), but we did not independently vary the orientation of the head, shoulder, and space. Therefore, we collectively refer to the latter three frames as shoulder coordinates. In the situation where only gaze is varied, the reach movement (both initial hand positions and reach targets) remains fixed with respect to the shoulder; for example, the movement is identical relative to the shoulder whether fixation is toward the left (A) or the right (C; the bull's-eye symbol depicts current fixation). Thus, errors arising from an intrinsic shoulder-centered representation should remain constant regardless of gaze position in this situation. In contrast, in a gaze-centered reference frame, the position of the initial hand position and final reach target is dependent on gaze direction. 
This is shown in the right panels where the locations of the initial hand positions and reach targets shift entirely from the right (B) to the left (D) visual field depending on the subjects' fixation (the fixation location is necessarily always at the center of the gaze-centered representation, i.e., the fovea). Thus, any position-dependent errors that arise from internal representations of these parameters in gaze-centered coordinates should vary with gaze direction. 
In Figure 3E, we plotted final horizontal errors (final horizontal finger position subtracted from horizontal target position as calculated from the calibration trial—see the Methods section) for this movement as a function of reach target position relative to gaze. For example, in Figure 3B, for the left fixation position, the center reach target is 24° to the right relative to gaze. The two reference frame hypotheses predict different patterns of errors; the shoulder-centered scheme (horizontal dotted line at y = 0°) predicts that errors do not vary and, therefore, should be the same regardless of fixation position. On the other hand, the gaze-centered scheme predicts that errors should vary depending on the location of the targets relative to gaze. Previous studies have shown that in a gaze-centered frame, subjects tend to overshoot the target depending on its position relative to gaze (Henriques et al., 1998). We replotted the average overshoot shown by Henriques et al. (1998) to show the pattern of errors expected if there was a gaze-centered representation. The dashed (gaze-centered) curve in Figure 3E replots the mean data reported for left and right fixations in Henriques et al. in the control condition, that is, 1.84° for fixations to the left and 2.17° for fixations to the right.
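The two quantities plotted in Figure 3E can be written down directly: the x-axis value is the difference between the target and fixation angles, and the reach error is computed from the final finger angle and the calibrated target angle. In the sketch below, the positive-rightward sign convention is our assumption for illustration.

```python
def target_relative_to_gaze(target_deg: float, fixation_deg: float) -> float:
    # Figure 3E x-axis: reach target position relative to gaze, with both
    # angles expressed relative to the cyclopean eye (positive = rightward).
    return target_deg - fixation_deg

def horizontal_reach_error(finger_deg: float, target_deg: float) -> float:
    # Horizontal reach error in degrees (sign convention is illustrative).
    return finger_deg - target_deg

# Example from the text: gaze on the left fixation (-24 deg), center reach target (0 deg):
print(target_relative_to_gaze(0.0, -24.0))  # -> 24.0 (target 24 deg right of gaze)
```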
The data from the control subjects (solid curve with white diamonds) were very similar to the curve depicting the data replotted from Henriques et al. (1998); that is, errors varied as a function of reach target relative to gaze. To confirm this statistically, we performed a univariate two-way ANOVA with subject and fixation position (i.e., reach target relative to gaze) as factors and found a significant effect of the five positions of the reach target relative to gaze, F(4, 183) = 177.3, p < .001, confirming that reach errors did vary as a function of reach target relative to gaze. As expected, subject was also a significant factor. Post hoc analyses showed that reaching errors at the two leftward fixation positions were significantly different from those at the center fixation position, which, in turn, were significantly different from those at the two rightward fixation positions (Student–Newman–Keuls [S–N–K] post hoc test, p < .05). This pattern of gaze-related reach errors matches that shown in previous studies of control subjects using a similar task (Henriques et al., 1998; Khan, Pisella, Rossetti, et al., 2005; Medendorp & Crawford, 2002; Pouget et al., 2002).
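As an illustration of this kind of analysis, the sketch below sets up an analogous two-factor ANOVA with statsmodels on a hypothetical trial table (column names are ours, not the authors'). The paper used Student–Newman–Keuls post hoc tests; Tukey's HSD is shown here only as a readily available stand-in for pairwise comparisons.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def gaze_effect_anova(trials: pd.DataFrame):
    """trials: one row per reach, with columns 'error_deg' (horizontal reach
    error), 'target_re_gaze' (reach target relative to gaze, deg) and 'subject'."""
    # Two-way ANOVA with subject and target-relative-to-gaze as categorical factors.
    model = ols("error_deg ~ C(target_re_gaze) + C(subject)", data=trials).fit()
    table = sm.stats.anova_lm(model, typ=2)

    # Pairwise comparisons across gaze-relative target positions (Tukey HSD stand-in).
    posthoc = pairwise_tukeyhsd(trials["error_deg"],
                                trials["target_re_gaze"].astype(str))
    return table, posthoc
```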
Both patients (patient C.F.—thick gray line with gray squares—and patient O.K.—thick black line with black squares) also showed the same pattern of errors, in that reach errors varied depending on fixation position. This was confirmed by ANOVAs, patient C.F.: F(4, 40) = 24.58, p < .001; patient O.K.: F(4, 24) = 24.72, p < .001. Interestingly, both patients' reach curves showed an overall shift toward the left compared to the control data, suggesting an overall bias in reaching in these OA patients in the direction opposite to the lesion. In other words, it may be that the entire reach space is shifted toward the impaired (contralesional) hemifield. An F test with subject as a factor (using mean values for control subjects) showed that both patients' mean errors across all five fixation positions were significantly different from those of the controls, mean of the controls = −0.51° (n = 35), patient O.K. = −2.32° (n = 29), patient C.F. = −6.66° (n = 45), F(2, 94) = 104.24, p < .001, S–N–K, p < .05. The overall bias toward the left was also significantly greater for patient C.F. as compared to patient O.K. (S–N–K, p < .05).
The patients also showed much greater reach errors when the reach target was in their left visual field (rightward fixations) as compared to when the reach target was in their right visual field, C.F.: t(34) = 7.55, p < .001; O.K.: t(21) = 9.17, p < .001. They also revealed greater variability with rightward fixations (compared to leftward fixations), that is, when the initial and reach targets were presented in their left, impaired visual field. This can be seen in the standard deviation values (average across controls: reach target in right visual field = 1.05, left visual field = 1.13; C.F.: right visual field = 1.24, left visual field = 1.89; O.K.: right visual field = 1.12, left visual field = 1.60). This trend was seen for all reach curves in the subsequent analyses. It should also be pointed out that the two patients showed different magnitudes of reach error for the foveally viewed reach target (central visual field); patient O.K. showed errors for the foveally viewed target that were comparable to reach errors when the reach target was in his intact visual field (two rightmost positions in the x-axis in Figure 3E). This was shown in the post hoc tests based on an ANOVA with the five fixation positions as a factor, F(4, 24) = 24.72, p < .001, S–N–K, p < .05, which revealed a significant difference between the two gaze positions that placed the reach target in the left visual field and the remaining gaze positions (those placing the reach target in the center or right visual field). In contrast, patient C.F. showed significantly different reach errors when the reach target was presented in the center and left visual fields compared to the right visual field, F(4, 40) = 24.58, p < .001, S–N–K, p < .05.
To summarize, although the patients showed a leftward bias as well as larger errors for rightward fixations compared to controls, they nevertheless showed evidence (similar to the controls) that their reaching movements are planned in a gaze-centered frame. However, because the horizontal angle of the initial hand position and the reach target was identical in visual coordinates in the above analysis ( Figure 3B vs. Figure 3D), it is impossible to distinguish whether the reach target, the initial hand position, or both positions had the gaze-centered influence on reach errors. Previous studies showing gaze-centered effects on reach errors have attributed the source of this error to the reach target position and did not consider the potential effects of the initial position of the hand (Henriques et al., 1998; Khan, Pisella, Rossetti, et al., 2005). 
General predictions for reach error curves
We designed a specific set of experimental conditions (i.e., different reach targets and initial hand positions) to analyze conditions where the initial hand and reach target positions varied in one reference frame but either did not change or changed in a different manner in the second frame. Therefore, we have to consider a group of different eye and hand positions rather than just one reaching condition to draw conclusions about which intrinsic reference frame is used for the internal comparison between hand and target positions. To test the influences of the reach target or initial hand position separately, we investigated how this basic curve changed when we varied either the reach target position or the initial hand position. To test the effect of reach target, we compared the three curves for the three reach targets (left, center, and right), all beginning from the center initial hand position. In contrast, to examine the effect of initial hand position, we compared the three curves for the reaching movements from each of the three initial hand positions (left, center, and right) all to the central reach target. 
Our basic prediction is that if there are any changes in the reach errors across different reach targets or initial hand positions, errors should be influenced by the reference frame in which those different signals are encoded. This should also provide insight into the reference frames used by the damaged areas to encode hand and target position; that is, errors should carry the signature of the affected reference frame. For example, if an area that encodes target position in gaze-centered coordinates is damaged, then we expect the resulting reach errors to show an effect in the same gaze-centered coordinates. We only consider the relative changes in the pattern of reach errors for different eye and hand positions, rather than absolute errors in reaching themselves. The reasons for absolute errors in reaching to targets have been addressed in many other studies (e.g., Henriques et al., 1998; Khan, Pisella, Rossetti, et al., 2005; Vindras et al., 1998). 
Consider the locations of the initial hand and reach target positions in Figures 3A–3D. In a shoulder-centered reference frame (A and C), all three initial hand or reach target locations are shifted horizontally relative to one another. Their positions relative to the shoulder do not change regardless of gaze. On the other hand, in a gaze-centered reference frame (B and D), the positions of these targets depend entirely on gaze direction. In theory, depending on gaze, the position of the three targets can be identical in gaze-centered coordinates. For example, in Figure 3B, when gaze is on the leftmost fixation target as depicted, the position of the leftmost reach target is 12° to the right relative to gaze. This is identical to the case where the gaze is on the center fixation position and the reach target is now the rightmost one. If we then plotted reach errors as a function of reach target relative to gaze, they should overlap entirely if there was only an influence of the gaze-centered representation of reach target. On the other hand, an influence of a shoulder-centered representation of reach target would reveal a shift for the three curves because the shift between the three reach targets is only present in a shoulder-centered representation.
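The equivalence described here is a simple subtraction; the toy example below (angles from the text, code ours) shows why fully overlapping curves would indicate a purely gaze-centered effect of the reach target.

```python
# Shoulder-centered target angles are fixed; gaze-centered angles depend on fixation.
left_target, right_target = -12.0, 12.0           # deg relative to the body midline
leftmost_fixation, central_fixation = -24.0, 0.0  # deg (of the centermost five fixations)

# Both configurations place the reach target 12 deg to the right of gaze, so a
# purely gaze-centered target effect predicts identical errors for the two cases.
print(left_target - leftmost_fixation)    # -> 12.0
print(right_target - central_fixation)    # -> 12.0
```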
Figure 4 outlines the pattern of errors predicted across three sets of initial hand or reach target positions. As mentioned above, we analyzed reach errors for the initial hand position or reach targets separately, and therefore, these predictions represent both sets of curves. These reach curve predictions have different meanings depending on whether they predict different reach targets or different initial hand positions. 
Figure 4
 
Error patterns predicted by different influences of reach target or initial hand position. All reaching curves are plotted as a function of reach target position relative to gaze. (A) The reach functions for the left (dashed line), center (solid line), and right (dotted line) reach target or initial hand position curves are plotted as an example of the pattern of errors if there were only an effect of reach target in gaze-centered coordinates and no other effect. (B) An example of the pattern of errors if there were a shoulder-centered effect of reach target or initial hand position. (C) An example of an interaction effect between the gaze-centered and shoulder-centered positions of the reach target or an influence of the initial hand position in gaze-centered coordinates is shown. Here, for example, the errors dependent on the gaze-centered position are flipped for the left reach target and compressed for the right reach target.
We begin by explaining the predictions for the three different reach targets from the center initial hand position. Figure 4A shows the prediction for the three curves if there is only an effect of reach target in gaze-centered coordinates but no effect of the position of the reach target in shoulder-centered coordinates. The explanations for this prediction have been outlined above. Figure 4B shows the prediction of an independent shoulder-centered effect of the reach target in addition to the gaze-centered effect (the shape of the curve). We cannot predict which direction the three curves will shift relative to one another; a linear relationship between reach target and shift in error would predict the functions as shown in Figure 4B (a nonlinear relationship could provide more complex results, but these examples suffice to make our point). Finally, Figure 4C reveals the prediction where the shoulder-centered and gaze-centered effects interact. For example, reach errors could vary differently as a function of the gaze-centered reach target depending on where the target is relative to the shoulder. This would predict that each reach curve would be shaped differently depending on the different reach targets. 
The predictions for the three different initial hand positions have slightly different meanings. This is because we have plotted errors as a function of reach target position relative to gaze rather than plotting it as a function of initial hand position relative to gaze. Consider the leftmost reach target and the center initial hand position in Figure 3B. At the current gaze position (leftmost position), both reach target and initial hand position are in the right visual field. However, if gaze shifts to the center position, then the reach target is now in the left visual field, but the initial hand position is in the central visual field. This means that if there is an influence of the gaze-centered initial hand position, the reach curves should vary as a function of both the reach target and the initial hand position. Therefore, the curves shown in Figure 4A would occur only if there is no gaze-centered effect of the initial hand position at all. Using the same argument as for a shoulder-centered reach target position, a shoulder-centered initial hand position influence is predicted by Figure 4B. Finally, the predictions in Figure 4C would reveal a gaze-centered influence of initial hand position, which means that reach errors vary as a function of both the reach target and the initial hand position in gaze-centered coordinates. It is difficult to make any specific predictions on what the possible reaching curves should look like, but we can predict that the curves should change their shape in a more complex way. An example is shown in Figure 4C, but there could be any number of variations on this scheme. Note that in our analyses, we are assuming that the effect of the different reference frames is additive on the reach errors and that they may interact with one another. 
Effect of the gaze-centered versus shoulder-centered position of the hand
We first investigated the influence of the initial hand position in either a gaze-centered or a shoulder-centered representation.
Figure 5 depicts the reach error curves for the three initial hand positions for the controls (A), patient C.F. (B), and patient O.K. (C) plotted as a function of the reach target position relative to gaze. 
Figure 5
 
Reach errors for the three initial hand positions. Horizontal reach errors are shown as a function of reach target relative to gaze for the controls (A), patient C.F. (B), and patient O.K. (C). The left initial hand position curve is represented by the dashed lines, the center initial hand position is depicted by the solid lines, and the right initial hand position is denoted by the dotted lines. The error bars depict the standard error of the mean (for the control data, standard errors of the mean are calculated across all controls).
The control data show curves for the left (dashed line), center (solid line), and right (dotted line) initial hand positions (Figure 5A). When compared to the three sets of predictions, the pattern of curves best matches a purely shoulder-centered effect of the initial position of the hand (Figure 4B), where the overall curves shift relative to the center initial hand position curve. A three-way ANOVA was performed with initial hand position, reach target relative to gaze, and subject as factors (we used a three-way design to account for intersubject variability). There was a main effect for initial hand position: left initial hand position mean = −2.05, center initial hand position mean = −0.51, right initial hand position mean = 1.2, F(2, 549) = 303.9, p < .001, S–N–K, p < .05, which shows that all three curves were significantly different from one another. We calculated the partial η 2 value for each factor, which gives an indication of the relative amount of variance explained by each factor. For the initial hand position factor, the partial η 2 was 53%, which, in this case, describes the magnitude of the relative contribution of the initial hand position in shoulder coordinates to the reach errors.
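The partial eta-squared value for a factor is the ratio of its sum of squares to the sum of its own and the error sums of squares, SS_effect / (SS_effect + SS_error). A small sketch of this computation from a statsmodels ANOVA table (continuing the hypothetical analysis sketched earlier) is shown below.

```python
def partial_eta_squared_percent(anova_table, effect: str,
                                resid_label: str = "Residual") -> float:
    # Partial eta squared from a statsmodels anova_lm table:
    # SS_effect / (SS_effect + SS_error), expressed as a percentage as in the paper.
    ss_effect = anova_table.loc[effect, "sum_sq"]
    ss_error = anova_table.loc[resid_label, "sum_sq"]
    return 100.0 * ss_effect / (ss_effect + ss_error)

# e.g. partial_eta_squared_percent(table, "C(target_re_gaze)") with the table
# returned by the earlier ANOVA sketch.
```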
For comparison, the reach target factor, which was also significant, gave a partial η 2 value of 71%. An interaction between the initial hand position and reach target factors would reveal some effects of the gaze-centered initial hand position. We found a significant interaction effect between initial hand position and reach target relative to gaze, F(8, 549) = 5.9, p < .001; however, the partial η 2 value was very small at 8%. Overall, the controls revealed a linear relationship between the shoulder-centered position and the mean shift in the corresponding reach curve. The difference between each initial hand position was 24° relative to the eyes, and the average shift of the reach curves was only about 1.6°.

Compared to controls, all the reaching curves of patient C.F. were shifted toward the left (Figure 5B), and errors for the left visual field were greater than those for the right visual field. However, the three reach curves for the different initial hand positions also showed an overall shift relative to one another. The overall shifts for the three initial hand positions relative to 0° were 4.1° to the left for the left initial hand position, 6.66° to the left for the center initial hand position, and 1.42° to the left for the right initial hand position. A two-way ANOVA revealed a significant effect of initial hand position, F(2, 120) = 78.699, p < .001, S–N–K, p < .05. The partial η 2 value for initial hand position for C.F. was 56%. As expected, C.F. also showed a main effect for reach target, F(4, 120) = 68.986, p < .001, S–N–K, p < .05, with a partial η 2 value of 69%, revealing the greater influence of reach target on reaching errors. Interestingly, patient C.F. showed a greater shift toward the left for the central initial hand position curve compared to the left initial hand position.
Overall, the average shift between the reach curves was higher than that for the controls, at 2.62°. Inspection of Figure 5B suggests that the shapes of the three curves are somewhat different; for example, the 24° right data point for the center initial hand position moves downward, whereas the point for the left initial hand position moves upward. This was confirmed by a two-way interaction effect between initial hand position and reach target relative to gaze, F(8, 120) = 3.16, p < .01, which reveals some influence of the gaze-centered initial hand position (see the General predictions for reach error curves section). However, the effect size of this interaction was much smaller than the main initial hand position effect, with a partial η 2 of 17%. For comparison purposes, partial η 2 values for all three groups of subjects are shown in Table 1.
Table 1
 
Partial η 2 values for all comparisons. η 2 values are shown as percentages and reveal the relative amount of variance in the data that can be explained by the factor. IHP = initial hand position; RT = reach target; main = main effect.
                        Controls    C.F.    O.K.
IHP/RT
  IHP main                    53      56      74
  RT main                     71      69      72
  Interaction                  8      17      27
RT gaze/RT shoulder
  RT gaze main                15      78      33
  RT shoulder main            51      18      36
  Interaction                 68      23      57
In summary, the reach curves for the three initial hand positions for patient C.F. show evidence of an independent effect of the shoulder-centered initial hand position as well as a small influence of the gaze-centered initial hand position. Moreover, this independent influence of the shoulder-centered initial hand position appears to be greater than the shoulder-centered influence of initial hand position in the controls. 
Across all three initial hand positions, patient O.K. also showed an overall shift toward the left compared to the controls ( Figure 5C). The mean shift from 0° was 5.36° to the left for the left initial hand position, 2.37° to the left for the center initial hand position, and 0.01° to the left for the right initial hand position. The shift between the center and right initial hand position was 2.36°, whereas the shift between the center and left initial hand position was greater, showing a difference of 2.99°. These shifts were significantly different, F(2, 71) = 102.26, p < .001, S–N–K, p < .05, η 2 = 74%, revealing a strong shoulder-centered influence of the initial hand position. Similar to patient C.F., patient O.K. showed a greater influence of the shoulder-centered initial hand position on the reach curves than the controls (mean shift = 2.68°). A significant main effect of reach target was revealed, F(4, 71) = 46.45, p < .001, S–N–K, p < .05, η 2 = 72%. We also found a small, significant interaction effect between initial hand position and reach target relative to gaze, F(8, 71) = 3.2, p < .001, η 2 = 27%. To summarize, patient O.K. showed a large, independent, shoulder-centered influence of the initial hand position as well as a small interaction effect between the gaze-centered and shoulder-centered initial hand position. 
Effect of the gaze-centered versus the shoulder-centered position of the reach target
Having shown above that the influence of the initial hand position on reach errors was predominantly shoulder centered, we could presume that the gaze-centered effect shown in Figure 3E was mainly due to the gaze-centered location of the reach target rather than the initial hand position. In the section below, we show that this is only partly true and we also reveal an additional shoulder-centered effect of the reach target. 
We tested whether the gaze-centered or shoulder-centered reach target position affected reach errors by comparing reaches to the three different reach targets while maintaining the initial hand position constant. Our predicted reach error curves in Figure 4 are plotted as a function of target position relative to gaze, and therefore, complete overlap between the three reach curves would mean that errors varied consistently as a function of where the reach target was relative to gaze for all three reach targets, regardless of the shoulder-centered reach target position. In the reach target analysis, the pattern of errors from Figure 4A would be the pattern predicted if there was no influence of the shoulder-centered reach target position, but there was a gaze-centered influence. The pattern seen in Figure 4B shows an independent shoulder-centered influence of reach target, and the pattern of reach curves in Figure 4C reveals an interaction between the shoulder-centered and the gaze-centered reach target position. 
Figure 6 depicts reach errors plotted as a function of the position of the reach target relative to gaze for the left (dashed line), center (solid line), and right (dotted line) reach targets for controls (Figure 6A), patient C.F. (Figure 6B), and patient O.K. (Figure 6C).
Figure 6
 
Reach errors to the three reach targets. Horizontal reach errors are plotted as a function of the reach target position relative to gaze for the left (dashed line), center (solid line), and right (dotted line) reach targets for control subjects (A), patient C.F. (B), and patient O.K. (C). The error bars depict the standard error of the mean (for the control data, standard errors of the mean are calculated across all controls).
Reach errors for all three reach targets for the controls are shown in Figure 6A. The figure shows that the right reach target curve is shifted downward (toward the left on the y-axis) compared to the center reach target curve. We compared overall mean errors for each reach target (left reach target curve = 0.37° to the left, center reach target curve = 0.44° to the left, right reach target curve = 3.33° to the left). A three-way ANOVA between the shoulder-centered position of the reach target, the gaze-centered position of the reach target, and subject showed a significant main effect for the shoulder-centered position of the reach target, F(2, 556) = 293.85, p < .01. Post hoc analyses showed a significant shift only for the right reach target (S–N–K post hoc test, p < .05). Further, a significant effect was seen for the position of the reach target relative to gaze, F(4, 556) = 23.78, p < .01. The η 2 values for the effect of reach target in shoulder-centered and gaze-centered coordinates were 51% and 15%, respectively. 
The center (solid line) and right (dotted line) reach target curves appear to have similar shapes; however, this is not true for the left (dashed line) reach target curve, which appears to curve upward instead of downward for the two leftmost data points (the 24° and 12° left positions). This suggests an interaction between the shoulder- and gaze-centered reach target positions, which was confirmed by a significant two-way interaction effect, F(8, 556) = 147.5, p < .01. The partial η 2 value was 68%. Comparing the reach curves to the predictions in Figure 4, the controls show evidence for an independent gaze-centered and shoulder-centered effect (shift of the right reach target curve) of reach target. The errors also show evidence for an interaction between the two (mainly due to the change in the shape of the left reach target curve). 
Patient C.F.'s errors for the three reach targets are plotted in Figure 6B. All three reach target curves appear to be very similar to one another and do not appear to be shifted relative to one another. In fact, his error curves resemble the prediction from Figure 4A for a solely gaze-centered effect of reach target. However, a two-way ANOVA between the reach target position (left, center, and right) and the reach target position relative to gaze resulted in significant main effects for the reach target position relative to the shoulder, F(2, 120) = 12.7, p < .01, and the reach target position relative to gaze, F(4, 120) = 106.6, p < .01, as well as a significant interaction effect, F(8, 120) = 4.567, p < .01. Although there was a significant main effect for the reach target position in shoulder-centered coordinates (left, center, or right), the difference between the three reach curves was very small—a 0.81° shift between the left and center reach curves and a 0.91° shift between the right and center reach curves. This was confirmed by the partial η 2 values, which revealed the greatest effect for the gaze-centered position of the reach target (78%), with much smaller effects for the shoulder-centered reach target position (18%) and the interaction (23%). 
These findings show that, compared to the controls, patient C.F.'s errors revealed a smaller influence of the shoulder-centered position of the reach target. In addition, this influence differed across the different reach target positions relative to gaze, resulting in a small interaction effect. Overall, his reach errors were influenced mainly by the gaze-centered position of the reach target; across all three reach targets, he showed greater errors when the reach target was in his left (contralesional) visual field than when it was in his right visual field, regardless of the different shoulder-centered positions of the three reach targets. 
Patient O.K. showed a pattern of reaching errors that was very different from patient C.F.'s errors. As can be seen in Figure 6C, his errors appear to show a strong interaction effect, resembling the prediction from Figure 4C. This suggests a strong interactive influence of both the gaze-centered reach target and the shoulder-centered position of the reach target. The curves for all three reach targets appeared to have shapes that were different from one another, and this was confirmed by a significant two-way interaction effect, F(8, 73) = 12.29, p < .01. In addition, the errors for the left reach target appear to be shifted compared to the right and central reach target curves. A two-way ANOVA revealed a significant difference for the left reach target curve from the center and right reach target curves, F(2, 73) = 18.42, p < .001, S–N–K post hoc test, p < .05. The mean for the left reach target curve was 4.72° left of straight ahead, whereas the means for the center and right reach target curves were 2.87° and 2.32° left of straight ahead. Finally, the two-way ANOVA also resulted in a significant effect of the gaze-centered reach target position, F(4, 73) = 10.04, p < .01. The partial η 2 values for the gaze-centered effect, the shoulder-centered effect, and the interaction effect were 33%, 36%, and 57%, respectively, confirming the relatively stronger effect of the interaction. Thus, patient O.K. showed evidence for a gaze-centered influence of reach position, a large interaction effect between the gaze-centered and the shoulder-centered positions of the reach target, and a shoulder-centered bias for the left reach target. 
Discussion
In summary, reaching errors for all subjects revealed a predominant influence of the reach target that depended on both its gaze-centered and its shoulder-centered locations. Controls showed a large influence of the shoulder-centered position of the reach target on reach errors; this effect was also present in C.F.'s reaching errors but was greatly reduced. In contrast, for patient O.K., the shoulder-centered position of the reach target interacted with the gaze-centered position to a larger degree than in patient C.F., more like the control subjects. The main effect of the initial hand position on all subjects was in a shoulder-centered representation, with both patients showing a greater influence of the initial hand position than the controls. In addition, both patients showed a small effect of the gaze-centered hand position on reach errors, as revealed by the interaction terms in Table 1. 
The finding of an influence of the reach target position relative to gaze for both controls and patients on reach errors confirms current findings spanning neurophysiology, functional imaging, and behavioral studies that support the notion that the reach target is encoded in a gaze-centered reference frame (Batista et al., 1999; Blangero et al., 2005; Henriques et al., 1998; Medendorp et al., 2003; Poljac & van den Berg, 2003; Pouget et al., 2002). In addition, the unilateral OA patients showed that reach errors were larger when the reach target was in the visual field contralateral to the lesioned hemisphere regardless of its position in space. This error pattern has previously been shown for OA patients and is consistent with a damaged visuomotor transformation of a gaze-centered representation of space (Dijkerman et al., 2006; Khan, Pisella, Rossetti, et al., 2005). 
An effect of the shoulder-centered reach target position was also seen in our control subjects, which interacted to some degree with the gaze-centered position of the reach target. This is consistent with the notion that the comparison between the hand and the target, which is needed to calculate the reach trajectory, takes place gradually across different representations of space. A distributed comparison would reconcile previously contradictory findings about gaze- or shoulder-centered comparisons and is consistent with distributed networks performing visuomotor transformations in areas such as the PPC or PM (Battaglia-Mayer et al., 2003; Beurze, de Lange, Toni, & Medendorp, 2007; Blohm et al., in press; Pouget & Sejnowski, 1997; Salinas & Abbott, 1996, 2001). Indirect evidence for this scheme also arises from psychophysical experiments that showed a shoulder-centered influence on reach errors (Beurze et al., 2006; Carrozzo, McIntyre, Zago, & Lacquaniti, 1999; McIntyre et al., 1998). 
A distributed comparison of the target position in various reference frames could account for the differences seen between the two patients. Patient C.F., with damage to both parietal cortices (although mostly in the right medial parietal cortex), showed a higher contribution of the gaze-centered reach target position to the reach errors compared to controls. This could be due to damage to the part of the parietal cortex that carries representations in other reference frames, thereby increasing the relative influence of the gaze-centered position of the reach target. On the other hand, patient O.K. showed greater interaction effects between the shoulder- and gaze-centered positions of the reach target. Disrupting a different part of the parietal cortex could impair the optimal comparison of these target and hand positions in different reference frames, thus resulting in the observed biases in reaching. 
Although both controls and patients showed evidence of an effect of hand position in shoulder-centered coordinates, the effect of hand position was larger for the patients than for the controls. We believe that this was due to damage at an early comparison stage of the visuomotor transformation in our patients. The interaction between the gaze-centered and shoulder-centered hand position shown in the patients' reaches (which was almost absent in the control subjects) could be interpreted as indicating an impairment in the gaze-centered comparison between hand and target positions. Therefore, in the lesioned hemifield, reaches have to rely more on the shoulder-centered comparison. This has two consequences. First, compared to control subjects, who could better integrate the gaze- and shoulder-centered comparison results, the patients relied more on the shoulder-centered comparison, which resulted in a larger spread between the left and right initial hand position error curves (Figure 5). Second, the absolute reach errors of the patients were greatest when starting from the left (and, for C.F., the central) initial hand position. This might be an indication that the right parietal cortex damage also affects reaches from the left initial hand position, which is consistent with the shoulder-centered hand position being represented in the PPC (Battaglia-Mayer et al., 2003). In addition to these interactions, the shoulder-centered hand position independently affected errors in both the control subjects and the patients. This suggests that the hand and target could also be compared at a later stage in the visuomotor pathway (Blohm et al., in press). This scheme is consistent with evidence that the shoulder-centered position of the arm is represented in premotor and motor cortices (Fogassi et al., 1996; Graziano, Hu, & Gross, 1997). 
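As a rough illustration of comparisons at two stages, the 1-D sketch below (Python; a horizontal-angle simplification with hypothetical numbers, not a model of the underlying circuitry) computes the hand-target difference vector once in gaze-centered and once in shoulder-centered coordinates; with intact signals the two routes agree, and the error patterns discussed above correspond to biasing or degrading one of them:

# 1-D (horizontal angle) toy model; all numbers are hypothetical.
# Positions are angles in degrees relative to the body midline (shoulder frame).
gaze_dir = -24.0          # fixation direction
target_shoulder = 0.0     # reach target straight ahead of the body
hand_shoulder = -12.0     # proprioceptive estimate of the initial hand position

# Route 1: compare hand and target in gaze-centered coordinates.
target_re_gaze = target_shoulder - gaze_dir     # +24 deg right of gaze
hand_re_gaze = hand_shoulder - gaze_dir         # +12 deg right of gaze
vector_gaze = target_re_gaze - hand_re_gaze     # desired movement, gaze frame

# Route 2: compare hand and target in shoulder-centered coordinates.
vector_shoulder = target_shoulder - hand_shoulder   # desired movement, shoulder frame

# With noiseless signals the two routes agree (both 12.0 deg here); reach errors
# arise when one of the signals, e.g., the remembered gaze-centered hand position,
# is biased or degraded.
print(vector_gaze, vector_shoulder)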
A task-dependent comparison based on available sensory information may partially account for our results showing a shoulder-centered influence of hand position (Carrozzo et al., 1999). Although, in the current task, the initial hand position was visible, this position was not visible at the same time as the reach target. Rather, the task entailed a comparison of an updated remembered hand position with a peripherally viewed remembered reach target. Alternatively, subjects could also use proprioceptive information about the hand, which was constantly available throughout the task. Although target location is determined from visual information, the brain can either visually encode the position of the viewed hand or extract hand position through proprioceptive information from the arm itself (Buneo et al., 2002; Crawford et al., 2004). This proprioceptive information is coded in joint coordinates (Costanzo & Gardner, 1981; Graziano & Gross, 1993), but there is evidence that proprioceptive information can be transformed into a gaze-centered coordinate system (Blangero et al., 2005; Buneo et al., 2002). It may be that because currently available proprioceptive information is more reliable than the memorized visual information about hand position, this proprioceptive information could then partially override visual memory, thus biasing the comparison of hand and target in a shoulder-centered representation (Carrozzo et al., 1999; McIntyre et al., 1998). This would be consistent with the results from both the control subjects and patients, which show an effect for the shoulder-centered hand position. 
It may also be that the reach system is normally calibrated to use multiple hand–target comparisons in an optimal way. For example, when both the hand and the target are visible, a visual comparison between the two is likely more accurate (Ren et al., 2006). However, when the hand is not visible, especially when the target is also no longer visible, a comparison in proprioceptive coordinates is likely to carry greater weight (Ren et al., 2006). Multiple comparisons might exist because certain signals are sometimes unavailable in a given reference frame, or because the outcomes of comparisons in different reference frames serve different purposes, such as perception or movement control. The existence of these multiple comparisons may also make the system more robust to damage, because the weighting can then be shifted toward the less damaged comparisons (e.g., our patients were still able to reach, despite their errors); hence, some aspects of the normal optimization algorithm should still be present. 
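A minimal sketch of such a weighting (Python; the inverse-variance rule and the variance values are standard cue-combination assumptions used here only for illustration, not quantities estimated in this study) shows how two hand-target comparisons, one visual and gaze centered, the other proprioceptive and shoulder centered, can be combined with weights proportional to their reliabilities, so that degrading one comparison automatically shifts weight to the other:

# Toy inverse-variance cue combination; all values are hypothetical.
def combine(est_visual, var_visual, est_proprio, var_proprio):
    """Reliability-weighted average of two movement-vector estimates (deg)."""
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_proprio)
    return w_visual * est_visual + (1.0 - w_visual) * est_proprio

# Hand and target both visible: the visual comparison is reliable and dominates.
print(combine(est_visual=12.0, var_visual=1.0, est_proprio=10.0, var_proprio=4.0))

# Hand unseen, or the gaze-centered comparison degraded (as hypothesized after a
# PPC lesion): its variance grows and the proprioceptive comparison takes over.
print(combine(est_visual=12.0, var_visual=25.0, est_proprio=10.0, var_proprio=4.0))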
It is interesting to note that the way that initial hand position and the shoulder-centered reach target modulated the gaze-centered pattern of reach errors in our study bears a resemblance to the way that gain fields modulate gaze-centered neural activity in the PPC (Andersen, Essick, & Siegel, 1985; Battaglia-Mayer et al., 2003). We postulate that the reach errors seen in our behavioral task may reflect these neural mechanisms in the PPC; for example, the errors may reflect gain modulations exerted by the initial hand position and the shoulder-centered reach target on a gaze-centered target representation (Battaglia-Mayer et al., 2003; Buneo et al., 2002). 
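For illustration, the sketch below (Python; the Gaussian tuning curve and linear gain term are generic textbook forms, not parameters fitted to PPC data) shows how a gaze-centered target response multiplied by a hand-position-dependent gain carries both signals, in the spirit of the gain-field accounts cited above:

# Generic gain-field toy model; all parameters are arbitrary illustrations.
import numpy as np

def response(target_re_gaze, hand_pos, pref=0.0, sigma=10.0, gain_slope=0.02):
    """Gaussian gaze-centered target tuning, multiplicatively scaled by hand position."""
    tuning = np.exp(-((target_re_gaze - pref) ** 2) / (2.0 * sigma ** 2))
    gain = 1.0 + gain_slope * hand_pos    # linear gain field in hand position
    return gain * tuning

targets = np.array([-24.0, -12.0, 0.0, 12.0, 24.0])   # deg, target relative to gaze
for hand in (-12.0, 0.0, 12.0):                       # shoulder-centered hand positions
    print(hand, np.round(response(targets, hand), 3))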
It should be kept in mind that the pattern of errors shown in this study reflects the visuospatial effects of the damage in the parietal cortex, namely, the field effect (Perenin & Vighetto, 1988; Vighetto & Perenin, 1981). For reaches with the contralesional hand, we would expect a stronger effect of initial hand position, which has been shown to be additive to the field effect (Vighetto & Perenin, 1981). We postulate that using the contralesional hand would increase the overall influence of the different initial hand positions and, thus, perhaps change the relative influences of the target and initial hand positions on reach errors. This hypothesis is based on neuroimaging findings showing that the PPC has a greater representation for the contralateral hand than for the ipsilateral hand (Medendorp et al., 2005). Nevertheless, our findings show that OA results in more than a simple hand or visual field deficit and seems to involve a deficit in forming reach plans that depend on the reach target as well as on the initial hand position across space. 
There is considerable evidence that target position is encoded in gaze-centered coordinates in the PPC, but this has only recently been suggested to also be true for initial hand position, based on neurophysiological findings (Battaglia-Mayer et al., 2003; Buneo et al., 2002). We confirmed this to be the case for both our patients, who showed a small influence of the gaze-centered initial hand position. However, both our controls and patients also showed strong influences of the shoulder-centered reach target position as well as the shoulder-centered initial hand position. Thus, our results do not support the idea that the target position is only compared to the initial hand position at an early gaze-centered stage (Buneo et al., 2002), nor do they support the idea that the comparison is only done at a very late stage in motor processing (Flanders et al., 1992; Henriques et al., 1998). Instead, our findings support a combination of both. Indeed, we found that reach errors in patients with PPC damage depended on both gaze- and shoulder-centered coordinates. 
Conclusions
In this study, we dissociated both the reach target location and the initial hand location in gaze-centered and shoulder-centered coordinates. The results showed biases arising from both reference frames in control subjects and, even more so, in the patients with parietal damage. We conclude that the pattern of reaching reflects a distributed visuomotor transformation process in which the hand and the target are compared in both gaze-centered and shoulder-centered reference frames to compute the desired hand path. These findings bring together neurophysiological reports demonstrating multiple areas in the PPC and frontal cortex with different representations of target and initial hand position in multiple reference frames (Andersen et al., 1985; Battaglia-Mayer et al., 2003; Buneo et al., 2002). 
Acknowledgments
The authors would like to thank C.F. and O.K. for their kind participation in the experiments as well as Romeo Salemme for technical programming. We would also like to thank Drs. Denise Henriques and Lauren Sergio for comments on the manuscript. This work was supported by grants from Institut national de la santé et de la recherche médicale (Y.R. and L.P.), the McDonnell-Pew foundation (Y.R.), the Natural Sciences and Engineering Research Council of Canada (A.K.), a Marie Curie Fellowship, European Union (G.B.), and the Canadian Institutes of Health Research (G.B. and J.D.C.). J.D.C. holds a Canada Research Chair. 
Commercial relationships: none. 
Corresponding author: Aarlenne Khan. 
Email: aarlenne@ski.org. 
Address: 2318 Fillmore Street, San Francisco, CA 94115, USA. 
References
Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456–458.
Batista, A. P., Buneo, C. A., Snyder, L. H., & Andersen, R. A. (1999). Reach plans in eye-centered coordinates. Science, 285, 257–260.
Batista, A. P., Santhanam, G., Yu, B. M., Ryu, S. I., Afshar, A., & Shenoy, K. (2005). Society for Neuroscience Abstracts.
Battaglia-Mayer, A., Caminiti, R., Lacquaniti, F., & Zago, M. (2003). Multiple levels of representation of reaching in the parieto-frontal network. Cerebral Cortex, 13, 1009–1022.
Battaglia-Mayer, A., Ferraina, S., Genovesio, A., Marconi, B., Squatrito, S., & Molinari, M. (2001). Eye–hand coordination during reaching: II. An analysis of the relationships between visuomanual signals in parietal cortex and parieto-frontal association projections. Cerebral Cortex, 11, 528–544.
Beurze, S. M., de Lange, F. P., Toni, I., & Medendorp, W. P. (2007). Integration of target and effector information in the human brain during reach planning. Journal of Neurophysiology, 97, 188–199.
Beurze, S. M., Van Pelt, S., & Medendorp, W. P. (2006). Behavioral reference frames for planning human reaching movements. Journal of Neurophysiology, 96, 352–362.
Blangero, A., Rossetti, Y., Honoré, J., & Pisella, L. (2005). Influence of gaze direction on pointing to unseen proprioceptive targets. Advances in Cognitive Psychology, 1, 9–16.
Blohm, G., & Crawford, J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7(5):4, 1–22, http://journalofvision.org/7/5/4/, doi:10.1167/7.5.4.
Blohm, G., Keith, G. P., & Crawford, J. D. (2006). Society for Neuroscience Abstracts.
Blohm, G., Khan, A. Z., & Crawford, J. D. (in press). Spatial transformations for eye–hand control. In L. Squire, T. Albright, F. Bloom, F. Gage, & N. Spitzer (Eds.), The New Encyclopedia of Neuroscience.
Buneo, C. A., & Andersen, R. A. (2006). The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44, 2594–2606.
Buneo, C. A., Jarvis, M. R., Batista, A. P., & Andersen, R. A. (2002). Direct visuomotor transformations for reaching. Nature, 416, 632–636.
Carrozzo, M., McIntyre, J., Zago, M., & Lacquaniti, F. (1999). Viewer-centered and body-centered frames of reference in direct visuomotor transformations. Experimental Brain Research, 129, 201–210.
Cohen, Y. E., & Andersen, R. A. (2000). Reaches to sounds encoded in an eye-centered reference frame. Neuron, 27, 647–652.
Costanzo, R. M., & Gardner, E. P. (1981). Multiple-joint neurons in somatosensory cortex of awake monkeys. Brain Research, 214, 321–333.
Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye–hand coordination. Journal of Neurophysiology, 92, 10–19.
Dijkerman, H. C., McIntosh, R. D., Anema, H. A., de Haan, E. H., Kappelle, L. J., & Milner, A. D. (2006). Reaching errors in optic ataxia are linked to eye position rather than head or body position. Neuropsychologia, 44, 2766–2773.
Flanders, M., Helms-Tillery, S. I., & Soechting, J. F. (1992). Early stages in a sensorimotor transformation. Behavioral and Brain Sciences, 15, 309–362.
Fogassi, L., Gallese, V., Fadiga, L., Luppino, G., Matelli, M., & Rizzolatti, G. (1996). Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology, 76, 141–157.
Gordon, J., Ghilardi, M. F., & Ghez, C. (1994). Accuracy of planar reaching movements: I. Independence of direction and extent variability. Experimental Brain Research, 99, 97–111.
Graziano, M. S., & Gross, C. G. (1993). A bimodal map of space: Somatosensory receptive fields in the macaque putamen with corresponding visual receptive fields. Experimental Brain Research, 97, 96–109.
Graziano, M. S., Hu, X. T., & Gross, C. G. (1997). Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology, 77, 2268–2292.
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18, 1583–1594.
Khan, A. Z., Pisella, L., Rossetti, Y., Vighetto, A., & Crawford, J. D. (2005). Impairment of gaze-centered updating of reach targets in bilateral parietal–occipital damaged patients. Cerebral Cortex, 15, 1547–1560.
Khan, A. Z., Pisella, L., Vighetto, A., Cotton, F., Luauté, J., & Boisson, D. (2005). Optic ataxia errors depend on remapped, not viewed, target location. Nature Neuroscience, 8, 418–420.
McIntyre, J., Stratta, F., & Lacquaniti, F. (1997). Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. Journal of Neurophysiology, 78, 1601–1618.
McIntyre, J., Stratta, F., & Lacquaniti, F. (1998). Short-term memory for reaching to visual targets: Psychophysical evidence for body-centered reference frames. Journal of Neuroscience, 18, 8423–8435.
Medendorp, W. P., & Crawford, J. D. (2002). Visuospatial updating of reaching targets in near and far space. Neuroreport, 13, 633–636.
Medendorp, W. P., Goltz, H. C., Crawford, J. D., & Vilis, T. (2005). Integration of target and effector information in human posterior parietal cortex for the planning of action. Journal of Neurophysiology, 93, 954–962.
Medendorp, W. P., Goltz, H. C., Vilis, T., & Crawford, J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23, 6209–6214.
Perenin, M. T., & Vighetto, A. (1988). Optic ataxia: A specific disruption in visuomotor mechanisms: I. Different aspects of the deficit in reaching for objects. Brain, 111, 643–674.
Poljac, E., & van den Berg, A. V. (2003). Representation of heading direction in far and near head space. Experimental Brain Research, 151, 501–513.
Pouget, A., Ducom, J. C., Torri, J., & Bavelier, D. (2002). Multisensory spatial representations in eye-centered coordinates for reaching. Cognition, 83, B1–B11.
Pouget, A., & Sejnowski, T. J. (1997). A new view of hemineglect based on the response properties of parietal neurones. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 352, 1449–1459.
Ren, L., Khan, A. Z., Blohm, G., Henriques, D. Y., Sergio, L. E., & Crawford, J. D. (2006). Proprioceptive guidance of saccades in eye–hand coordination. Journal of Neurophysiology, 96, 1464–1477.
Revol, P., Rossetti, Y., Vighetto, A., Rode, G., Boisson, D., & Pisella, L. (2003). Pointing errors in immediate and delayed conditions in unilateral optic ataxia. Spatial Vision, 16, 347–364.
Salinas, E., & Abbott, L. F. (1996). A model of multiplicative neural responses in parietal cortex. Proceedings of the National Academy of Sciences of the United States of America, 93, 11956–11961.
Salinas, E., & Abbott, L. F. (2001). Coordinate transformations in the visual system: How to generate gain fields and what to compute with them. Progress in Brain Research, 130, 175–190.
Vighetto, A., & Perenin, M. T. (1981). Optic ataxia: Analysis of eye and hand responses in pointing at visual targets (author's translation). Revue Neurologique (Paris), 137, 357–372.
Vindras, P., Desmurget, M., Prablanc, C., & Viviani, P. (1998). Pointing errors reflect biases in the perception of the initial hand position. Journal of Neurophysiology, 79, 3290–3294.
Figure 1
 
Magnetic resonance imaging scans of the two patients and experimental setup. (A) The left panel shows a T1 scan for patient O.K. The darker area in the right hemisphere reveals the lesion in the right PPC. The right panel shows a T2 scan for patient C.F. The white areas at the bottom of the scan show asymmetrical damage mostly in the right hemisphere in the posterior parietal lobes. There is also some slight damage in the left premotor cortex. (B) Experimental setup. Subjects fixated on one of seven fixation targets (white circles) while reaching to one of three reaching targets (black circles). Reaching movements began from one of three initial hand positions (gray circles). The angular distance of all targets from the cyclopean position (between the two eyes) is shown. LED targets located above the subjects were reflected to appear on the table through the use of a half-reflecting mirror. The mirror was shaped so that subjects were able to see their hand at the initial start position at the beginning of every trial but had no visual feedback for the rest of the movement. In addition, trials took place in complete darkness except for a dim light that allowed subjects to see their hand when the initial start position LED was illuminated.
Figure 2
 
Experiment timing and examples of eye and arm movements. Events are plotted as a function of time ( x-axis) in seconds. (A) Timing of the various targets (depicted in the same colors as in Figure 1B). Timing begins from 1 s before recording onset (first vertical dotted line) to depict the presentation of the initial hand position LED (where the eye first fixated). Note that the initial hand position LED was illuminated for 2 s in total (here, we only show the last second). (B) Horizontal eye position (top trace) and 3D finger position (bottom overlaid traces) for the control subject. The horizontal eye position trace shows EOG current in volts for the eye initially at the initial hand position then a movement to the fixation target (24° left). This position was held for the remainder of the trial. The finger position traces show a movement from the initial hand position to the central reach target. The y-axis shows the distance in centimeters from the initial hand position for horizontal (solid trace—negative values are to the left), depth (dashed trace—negative values are away from the subject), and vertical (dotted trace—positive values are the finger lifting up from the table). This subject moved slightly to the right and away from the body (and of course, lifted the finger from the table to make the movement). Eye and finger traces for patient C.F. (C) and patient O.K. (D) are shown. The eye trace from patient C.F. shows some drift, but overall, all subjects were able to maintain fixation on the remembered location of the fixation target. The extinction of the fixation and reach targets coincided with an auditory cue for hand movement initiation (second vertical dotted line). The vertical lines show the extracted positions and timings of the hand movements (short vertical line—start and end positions [200 ms before and after start and end times], long vertical lines—start and end times determined by using velocity criteria).
Figure 3
 
Shoulder- versus gaze-centered representations of the central movement. (A–D) Schematic of the central reach movement in a shoulder-centered (A and C) versus gaze-centered (B and D) representation. The three initial hand targets (gray circles), the three reach targets (black circles), and the five central fixation targets (white circles) are shown in the different reference frames. The bull's-eye symbol depicts current fixation, where the upper panels show a fixation to the left (A and B) and the lower panels show a fixation to the right (C and D). The gray arrow shows the current reaching movement. (E) Horizontal reach error in degrees plotted as a function of reach target relative to gaze for the same reach movement (see x-axis). The dotted line at y = 0° depicts the pattern of errors predicted by a shoulder-centered representation. The dashed curved line illustrates errors expected if there was a gaze-centered representation of errors based on data from Henriques et al. (1998). Average data are shown across all controls (thin solid line with white diamonds), patient C.F. (thick gray line with gray squares), and patient O.K. (thick black line with black squares). The error bars depict the standard error of the mean (for the control data, standard errors of the mean are calculated across all controls).
Figure 4
 
Error patterns predicted by different influences of reach target or initial hand position. All reaching curves are plotted as a function of reach target position relative to gaze. (A) The reach functions for the left (dashed lines), center (solid line), and right (dotted lined) reach target or initial hand position curves are plotted as an example of the pattern of errors if there were only an effect of reach target in gaze-centered coordinates and no other effect. (B) An example of the pattern of errors if there were a shoulder-centered effect of reach target or initial hand position. (C) An example of an interaction effect between the gaze-centered and shoulder-centered positions of the reach target or an influence of the initial hand position in gaze-centered coordinates is shown. Here, for example, the errors dependent on the gaze-centered position are flipped for the left reach target and compressed for the right reach target.
Figure 5
 
Reach errors for the three initial hand positions. Horizontal reach errors are shown as a function of reach target relative to gaze for the controls (A), patient C.F. (B), and patient O.K. (C). The left initial hand position curve is represented by the dashed lines, the center initial hand position is depicted by the solid lines, and the right initial hand position is denoted by the dotted lines. The error bars depict the standard error of the mean (for the control data, standard errors of the mean are calculated across all controls).
Table 1
 
Partial η 2 values for all comparisons. η 2 values are shown as percentages and reveal the relative amount of variance in the data that can be explained by the factor. IHP = initial hand position; RT = reach target; main = main effect.
Effect                                   Controls   C.F.   O.K.
IHP/RT analysis
  IHP (main effect)                            53     56     74
  RT (main effect)                             71     69     72
  IHP × RT (interaction)                        8     17     27
RT gaze/RT shoulder analysis
  RT gaze (main effect)                        15     78     33
  RT shoulder (main effect)                    51     18     36
  RT gaze × RT shoulder (interaction)          68     23     57