Open Access
Article  |   July 2024
Instruction alters the influence of allocentric landmarks in a reach task
Author Affiliations
  • Lina Musa
    Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
    Department of Psychology, York University, Toronto, ON, Canada
    lmusa09@yorku.ca
  • Xiaogang Yan
    Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
    xgyan@yorku.ca
  • J. Douglas Crawford
    Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
    Department of Psychology, York University, Toronto, ON, Canada
    Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
    jdc@yorku.ca
Journal of Vision July 2024, Vol. 24, 17. https://doi.org/10.1167/jov.24.7.17
Abstract

Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay, the landmark reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite the remembered target) on 50% of the egocentric trials. Participants were more accurate, more precise, and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, with performance worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.

Introduction
Humans use specific spatial reference frames to retain remembered visual information for a goal-directed action (Crawford, Henriques, & Medendorp, 2011; Soechting & Flanders, 1992). The visual system is thought to utilize two types of spatial reference frame: observer-centered, egocentric reference frames and world-fixed, allocentric reference frames, the latter often anchored to reliable landmarks (Byrne, Cappadocia, & Crawford, 2010; Howard & Templeton, 1966; Vogeley & Fink, 2003). In ordinary circumstances, the brain integrates information from these two reference frames in a Bayesian manner (Byrne & Crawford, 2010; Fiehler, Wolf, Klinghammer, & Blohm, 2014). However, certain circumstances may require one to ignore surrounding landmarks (e.g., when they are unstable or irrelevant to the task) or to rely strongly on them (e.g., a workspace fixed to a moving base). Various experiments have tapped into these mechanisms by instructing participants to use one reference frame over the other (Byrne & Crawford, 2010; Lemay, Bertram, & Stelmach, 2004). The question thus arises: which of these instructions leads to better performance? 
Several studies have explored the implicit influence of visual landmarks on goal-directed actions such as saccades, reaches, or pointing toward a seen or remembered target. Previous findings have shown that reach targets can be remembered reasonably well in the absence of visual landmarks (Lemay & Stelmach, 2005; McIntyre, Stratta, & Lacquaniti, 1997; Vindras & Viviani, 1998) with certain stereotypical errors such as gaze-centered overshoots (Bock, 1986; Henriques, Klier, Smith, Lowy, & Crawford, 1998). However, the addition of a visual landmark can influence reaching, reducing both constant and variable errors (Byrne et al., 2010; Krigolson & Heath, 2004; Lemay et al., 2004; Redon & Hay, 2005). The stabilizing influence of a landmark was particularly prominent in a task that involved remapping the reach target to the opposite visual hemifield, where one would expect egocentric signals to be less stable (Byrne et al., 2010). Conversely, the landmarks had less stabilizing influence on behavior when they were shifted and rotated relative to the reach goal (Thaler & Todd, 2009). Finally, the addition of visual landmarks can negate the accumulation of reach errors after prolonged memory delays in the dark (Chen, Byrne, & Crawford, 2011). 
Normally, egocentric and allocentric cues agree, but they can also conflict, either in tasks that introduce egocentric noise or when the visual environment is unstable (Byrne & Crawford, 2010; Byrne et al., 2010; Chen et al., 2011). The latter has been replicated experimentally in cue-conflict tasks where the landmark is surreptitiously shifted relative to egocentric coordinates during a memory delay (Byrne & Crawford, 2010). In this situation, ego/allocentric cues appear to be optimally integrated, based on their relative reliability (Byrne & Crawford, 2010). Usually, more weight is placed on egocentric coordinates, such that the movement shifts approximately 1/3 in the direction of the landmark shift (Byrne & Crawford, 2010; Fiehler et al., 2014; Li, Sajad et al., 2017). However, the specific weighting depends on task details. For example, Byrne and Crawford (2010) found that participants relied more on a landmark when it was perceived to be stable or when gaze position was less stable. Further, in simulated naturalistic settings, landmarks had more influence when they were task relevant, when more than one landmark was shifted in the same direction, and when the landmark was closer to the target (Fiehler et al., 2014). Thus, in the absence of explicit instructions, the visual system uses implicit algorithms to determine how to weight ego/allocentric cues. Recent physiological studies suggest that this implicit integration may occur in frontal cortex (Bharmauria et al., 2020; Bharmauria, Sajad, Yan, Wang, & Crawford, 2021). 
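To make this weighting concrete, the following minimal R sketch illustrates inverse-variance (maximum-likelihood) cue integration of the kind described above; the noise values are illustrative assumptions, not estimates from the cited studies.

```r
# Minimal sketch of reliability-weighted cue integration (cf. Byrne & Crawford, 2010).
# The sigma values below are illustrative assumptions.
sigma_ego  <- 1.0   # SD of the egocentric estimate (deg)
sigma_allo <- 1.4   # SD of the allocentric (landmark) estimate (deg)

# Weight on the landmark cue is proportional to its inverse variance
w_allo <- (1 / sigma_allo^2) / (1 / sigma_ego^2 + 1 / sigma_allo^2)

landmark_shift <- 3                     # surreptitious landmark shift (deg)
reach_shift <- w_allo * landmark_shift  # predicted shift of the reach endpoint
w_allo                                  # ~0.34, i.e., ~1/3 of the landmark shift
```

With these (assumed) noise levels, the predicted endpoint shift is about one third of the landmark shift, matching the typical weighting reported in the literature.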
Alternatively, people can be instructed to ignore or put full weight on a visual landmark, such as in common driving instructions—for example, “Ignore the first stop sign and turn right at the second.” Likewise, experimental participants can be instructed to either ignore a landmark or reach to a fixed egocentric location relative to the landmark. Such instructions were used in neuropsychological and neuroimaging experiments that suggest involvement of the ventral visual stream in allocentric representations and dorsal stream in egocentric transformations (Chen et al., 2014; Chen, Monaco, & Crawford, 2018; Goodale, Westwood, & Milner, 2004; Schenk, 2006). 
Thus, the influence of a landmark on goal-directed action can be determined both by bottom–up factors (e.g., priors, reliability) and by explicit top–down task instructions (e.g., prioritize egocentric vs. allocentric cues). However, the influence of the instruction itself on performance (accuracy, precision, and reaction time) is less clear, particularly when egocentric and allocentric cues conflict. For example, in their control tasks, Byrne and Crawford (2010) found no overall difference in performance when participants were explicitly instructed to use egocentric or allocentric cues, but there was no cue conflict in these tasks, and the visual stimuli were not held constant. Instruction has a well-documented impact on perception, as seen in dichotic listening tasks, where it influences encoding and recall, often overriding other stimuli except highly significant ones such as the participant's name (Moray, 1959). Conversely, perceptual illusions, such as the Pong effect, are resistant to instruction, as shown by Laitin and Witt (2020), where guiding instructions did not reduce susceptibility to the illusion. Based on this, it is reasonable to expect that ego/allocentric instruction might also affect visually guided action, beyond the simple task switching intended in the design of an experiment. 
Here, we tested the influence of instruction (to use or not use a landmark) in a cue-conflict, memory-guided reach task where the visual stimuli were equal and balanced across tasks. By forcing participants to either ignore the landmark or use it for the task, we could isolate and directly compare how top–down reliance on allocentric versus egocentric cues affected reach behavior. It has been suggested that landmark-centered encoding is a less noisy process and more stable over a time delay (Byrne et al., 2010; Chen et al., 2011). Thus, we predicted that instruction to attend to and use the landmark for spatial coding would increase weighting on the more stable code and therefore improve performance, especially when the task complexity increases egocentric noise (Byrne & Crawford, 2010; Chen et al., 2011). To simulate the latter case, we included an "anti-reach" condition where participants had to reach toward the mirror-opposite position relative to the visual fixation point (Cappadocia, Monaco, Chen, Blohm, & Crawford, 2017; Everling & Munoz, 2000; Gail & Andersen, 2006). We found that (a) reaching was less variable and more accurate in the allocentric instruction tasks than in the egocentric instruction tasks (especially in the right visual field), and (b) the beneficial effect of allocentric encoding was more pronounced when participants were required to respond in the visual field opposite the one in which the stimulus was encoded. 
Materials and methods
Participants
Thirteen individuals (seven males and six females; ages 20–33 years) provided informed consent to participate in this study. All participants were right-handed, with no neuromuscular or uncorrected visual deficits, based on self-report. Data from one participant were excluded from further analysis because that person did not meet the inclusion criteria described below, leaving 12 participants for data analysis. This met the sample size required for sufficient power (see Sample size analysis section). All participants were naïve to the purpose of the experiment and given monetary compensation for their time. The experimental procedures were approved by the York Human Participants Review Subcommittee and performed in accordance with the tenets of the Declaration of Helsinki. 
Apparatus
The experiment was conducted in darkness, and participants wore a black glove on their right hand to avoid ambient reflections off the hand. Participants sat behind a table, on a height-adjustable chair (Figure 1). The head was stabilized on a personalized bite bar made of a dental impression compound (Kerr Corporation, Brea, CA). The right hand of each participant was positioned on a button box on the table top, directly in front of them. The button box was used to control the pace of the experiment and was the designated start position for reaching. A customized ring with a 3 × 3 array of infrared-emitting diodes (IREDs) was attached to the participant's right index finger, and its three-dimensional (3D) position was continuously recorded by two OptoTrak 3020 tracking systems (Northern Digital, Waterloo, ON, Canada). Gaze direction was monitored from the right eye only, using the EyeLink II infrared eye-tracking system (SR Research, Ottawa, ON, Canada), which was mounted on the bite-bar stand. The stimulus display (described below) was presented 50 cm ahead of the eye, 152 mm to the right of and 91 mm below the center of the bite-bar stand, approximately aligned with the right shoulder (acromion). The height of the chair was adjusted so that the acromion was aligned with the center of the display panel. This arrangement ensured that stimuli were centered in the mechanical range of the right arm and within easy reach. During reaches, the participant's head faced forward, but gaze was fixated at the center of this display, such that the shoulder and central visual coordinates were aligned. Audio instructions were delivered from a speaker. Two 40-watt desk lamps, placed on either side of the desk, were turned on every 10 trials (2 minutes) to eliminate potential dark adaptation. The experimental set-up is shown in Figure 1. 
Figure 1. Experimental set-up. From left to right and top to bottom: OptoTrak 3020 tracking systems on both sides of the room (the second one is not shown) were used to track finger motion. EyeLink II cameras were dismounted from the headset and installed on the bite-bar stand; the left camera is occluded in this view of the set-up. The home button box (yellow in the figure), on the desk immediately in front of the participant, was used to control the pace of the experiment. Each participant's seated height was adjusted relative to the fixed bite-bar height using a metallic screw on a rigid, height-adjustable chair. Two 40-watt dark-adaptation desk lamps illuminated the dark room during breaks and every three trials. A custom-made 307 mm × 161 mm wooden panel fitted with LEDs was used to display stimuli; it was shifted 152 mm to the right of the bite-bar stand center. Audio instructions were played from two desktop speakers (one of the speakers is not shown). The finger-pointing device was a customized ring with a 3 × 3 array of infrared-emitting diodes (IREDs) that continuously relayed signals to the OptoTrak 3020 tracking systems.
Calibration
Before each experiment, participants completed a set of calibration procedures. Eye position calibration was done through sequential fixation of five light-emitting diodes (LEDs), including the center position and the four corners of the display panel (see Figure 2A). Each participant then completed two OptoTrak calibration sessions. Finger-tip position was calibrated by having the participant point with a precalibrated "cross" of IREDs on a rigid body fixed to the right index finger. Further calibration was done by having the participant successively point to the four corner positions in the LED display. IRED position data in the OptoTrak intrinsic coordinate system were compared offline with the known calibration dot positions to create a linear mapping between IRED positions and screen coordinates. This procedure allowed for the conversion of recorded finger-tip position into screen and then visual coordinates, as described below. 
Figure 2. Experimental stimuli and paradigm. (A) Stimuli. The stimuli were displayed in a horizontal array. The left side and center of the array are shown; the right side is the same as the left side (mirror image). The central fixation, off-center fixation locations, and eye calibration points were displayed by white LEDs. The green LED target was randomly displayed in one of the 18 LED positions (nine left, nine right). The position of the first target LED was three circles (4.57°) from the screen center. The red landmark simultaneously appeared on the same side of the screen as the target, displayed by one of the three LEDs in the middle of the nine target LEDs (half-red, half-green circles in the figure), and the second red landmark randomly appeared in one of the remaining landmark positions. (B) Paradigm. The order of a typical trial is shown in the figure. Each trial began with an audio instruction, where participants were instructed to remember the spatial location of the target or the position of the target relative to the landmark. The response audio depended on task. EGO trials were followed by "target," instructing participants to point toward the target, or "opposite," instructing participants to point to the mirror-opposite side. All ALLO trials were simply followed by "reach," instructing participants to point to the remembered target position relative to the second landmark.
Visual stimuli and basic task
The experimental paradigm was based on a simpler paradigm used previously in a functional magnetic resonance imaging experiment (Chen et al., 2014). The task involved touching the remembered location of a transient visual target (which always appeared simultaneously with a visual landmark) as accurately as possible. Figure 2A shows the visual/touch display. LEDs (1.15°) were fixed to a wooden panel (307 mm × 161 mm). Target, landmark, and fixation LEDs were placed at intervals of 1 cm (∼1.15 degrees of visual angle) along this panel. Seven white gaze-fixation LEDs were placed at center and 1.15°, 2.29°, and 3.43° to the left and right of the center. Eighteen green reach target LEDs were placed in the periphery, starting at a 4.57° distance from the center. Six red landmark LEDs were positioned 7.96°, 9.09°, or 11.31° to the right or left of the center. Forty LEDs used as a visual mask were placed 1.15° above and below the target and landmark stimuli, in between the target and landmark LED positions. 
The same stimulus paradigm was used in all experiments, with LED positions and task instructions randomized as described below. The sequence of stimulus events is shown in Figure 2B. Participants started by being seated comfortably, with their hand placed on the button box near their chest. At the beginning of each trial, the participant received an audio instruction from the speakers telling them how to reach (see next section for details). Participants were then required to visually fixate one of the six non-center white LEDs. After 2 seconds, a green target LED was displayed to the right or the left of the center. This was accompanied simultaneously by presentation of one of the six red landmark LEDs relative to the target, on the same side of the center as the target. After 2 seconds, both the target and landmark stimuli disappeared, and the gaze fixation point moved to center, allowing 300 ms for a saccade to that position. This was done so that the fixation LED could not be used itself as a reliable allocentric landmark (Chen et al., 2014). The array of white “mask” LEDs was then illuminated for 150 ms to negate visual after-images and focus memory resources (Medendorp, Goltz, Vilis, & Crawford, 2003). After the mask, the center fixation LED reappeared, requiring continued central fixation for the rest of the trial. A landmark then reappeared for 2 seconds, but always at a different location than where it was initially viewed, creating a conflict between the location of the target in egocentric coordinates versus allocentric coordinates. At this point, participants were instructed via the speakers to reach, and they were then given 4 seconds to touch the stimulus panel with the right index finger as accurately as possible, based on the instruction they had received at the start of the trial. The participant then returned their hand to the starting position and pressed the button when ready for another trial. 
Task instruction and conditions
As noted above, the visual paradigms used in this task were always the same for different task conditions. What differed between conditions was the task instruction delivered via the speakers at the start of the trial. In the allocentric (ALLO) condition, participants were instructed to “reach relative to cue”; that is, they were required to remember the position of the green target relative to the red landmark. Because the landmark was shifted to a new position at this time, egocentric coordinates provided no useful cues for this task. In this case, the instruction to reach at the end was simply “reach.” In contrast, in the egocentric (EGO) condition, participants heard “reach to target” at the beginning of the task. In this condition, participants were instructed to ignore the landmark (which in this condition provides invalid information). Henceforth, we refer to these as the EGO instruction and the ALLO instruction conditions. 
To further challenge the system, in half of the trials participants were required to perform "anti-reaches" to the visual field opposite to the field where they saw the target (Cappadocia et al., 2017; Gail & Andersen, 2006). In the ALLO condition, this was done by shifting the landmark to the opposite hemifield. In the EGO condition, an additional instruction was provided. When participants heard "target," they reached toward the original location of the target (PRO task). When participants heard "opposite," they were expected to reach toward the mirror-opposite position, relative to the fixation point (ANTI task). To be consistent with the literature, we then divided our data into EGO PRO/ANTI or ALLO PRO/ANTI reach trials (based on the final reach direction relative to the initial target). Finally, in both PRO and ANTI trials, stimuli were arranged so the EGO and ALLO goals covered the same range. 
Experimental design
Before each experiment, participants engaged in a practice session to ensure that they understood the instructions and were able to perform each element of the task correctly. This consisted of three practice blocks of 10 trials. The first two practice blocks were made up of each EGO instruction condition and ALLO instruction condition separately, and the last practice block was randomly interleaved. After they were successful and clear on the instructions, the participants proceeded to the actual experiment. Actual experiments were divided into three blocks of 72 trials. Participants had a 5-minute rest period between blocks and four 2-minute breaks within each block (with room lights fully illuminated). Individual trials lasted 12.05 seconds in both instruction conditions, but the experiment was self-paced, and subsequent trials were initiated through a button press. On average, participants took 21.65 minutes to complete an entire block. The order of instructions, target locations, and landmark locations was pseudorandomized beforehand so that they were unpredictable for the participants but provided an equal dataset with distributed stimuli for each experimental condition. 
Data analysis
All data obtained from OptoTrak and EyeLink were analyzed offline using custom software written in MATLAB R2019a (MathWorks, Natick, MA). A program was written to generate a mapping between the OptoTrak coordinates of the fingertip and its position in screen coordinates. This was done by utilizing the data exported from the screen calibration session and the known screen coordinates of the four calibration points on the screen corners. This mapping was then utilized to obtain reach endpoints in screen coordinates for analysis. Finally, target and touch positions in screen coordinates were converted into visual angle (relative to the central fixation point) using geometric measures obtained from the laboratory set-up. 
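Although the original pipeline was written in MATLAB, the calibration mapping can be sketched in R as follows; the data frame `calib`, its column names, and the affine form of the map are assumptions for illustration.

```r
# Sketch of the linear calibration mapping. `calib` is assumed to hold the
# OptoTrak coordinates (x, y, z) of the fingertip at the four screen-corner
# calibration points, plus the known screen coordinates (sx, sy) in mm.
fit_x <- lm(sx ~ x + y + z, data = calib)  # horizontal screen coordinate
fit_y <- lm(sy ~ x + y + z, data = calib)  # vertical screen coordinate

# Convert recorded IRED samples (a data frame with x, y, z columns) to screen
# coordinates, then to visual angle relative to central fixation (50 cm away)
to_visual_deg <- function(samples, d_mm = 500) {
  sx <- predict(fit_x, newdata = samples)
  sy <- predict(fit_y, newdata = samples)
  cbind(atan2(sx, d_mm), atan2(sy, d_mm)) * 180 / pi
}
```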
Exclusion criteria
Movement kinematics were inspected to ascertain that the participants followed instructions, and a program was written to automatically average eye fixation from the period of the second landmark to the go signal. Trials were excluded if eye variation was greater than 2° in the horizontal direction for more than 20% of the time. Trials with brief saccades to remembered target locations that then returned were not excluded, because this did not appear to influence pointing accuracy (Van Pelt & Medendorp, 2007). However, trials where participants looked at the landmark were excluded. Finally, trials that included anticipatory reaches (reaches initiated before the audio instructions) were excluded from further analysis. 
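As a concrete illustration, the fixation criterion could be implemented as below; `eye_x` (horizontal gaze samples in degrees from fixation, taken between the second landmark and the go signal) is an assumed variable name.

```r
# Sketch of the fixation-stability exclusion rule; names are illustrative.
exclude_trial <- mean(abs(eye_x) > 2) > 0.20  # >2 deg deviation for >20% of samples
```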
Overall, 12 participants who had at least 175 viable trials (81.02%) were included in the study. Participants completed, on average (±SD), 72.90 ± 20.58 EGO instruction trials: 37.80 ± 8.98 were trials in which the target was viewed and the movement was executed in the same visual field, whereas 36.70 ± 10.60 were trials in which the movement was executed in the opposite visual field. Participants also completed, on average, 70.60 ± 18.53 ALLO instruction trials: 32.70 ± 7.81 of these were in the same visual field, and 35.70 ± 12.99 were in the opposite visual field. 
Kinematic analysis
Using a customized MATLAB program, movement start was marked at 20% of the maximum resultant velocity (Vmax), and movement end was marked at 8% of Vmax. The reach endpoint was found by averaging 30% of points at 8% of Vmax and at 1.15 times the minimum finger-to-screen distance (distance in the z direction). The screen-relative coordinates of reach endpoints in the horizontal (x) and vertical (y) dimensions were used to compute the variable error, a measure of the distance of reach endpoints from the mean final position. R 4.1.2 (R Foundation for Statistical Computing, Vienna, Austria) was used to create 95% confidence ellipses of the scatter of reach endpoints. The area of each ellipse was found as shown in Equation 1, by first computing the eigenvalues (σ1, σ2) of the covariance matrix of reach endpoints. The eigenvalues were then used to derive the lengths of the semi-major (principal) and semi-minor (orthogonal to the principal) axes of the 95% confidence ellipse, which were used to compute the ellipse area (Equation 1). The ellipse areas were then used to compare the EGO and ALLO instruction data. Mean ellipses for each target were constructed by averaging the covariance matrices of the corresponding ellipses of individual participants.  
\[
\text{Ellipse area} = \pi \, \sqrt{5.991\,\sigma_1} \, \sqrt{5.991\,\sigma_2} \tag{1}
\]
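In R, this computation can be sketched as follows; the matrix name `endpoints` is an assumption, and 5.991 is the 95% chi-square quantile with 2 degrees of freedom (qchisq(0.95, 2)).

```r
# Sketch of the 95% confidence ellipse area (Equation 1). `endpoints` is an
# n x 2 matrix of horizontal/vertical reach endpoints; names are illustrative.
ellipse_area <- function(endpoints) {
  ev <- eigen(cov(endpoints))$values     # eigenvalues sigma1, sigma2
  pi * sqrt(5.991 * ev[1]) * sqrt(5.991 * ev[2])
}

# Example with simulated endpoints
set.seed(1)
ellipse_area(cbind(rnorm(40, sd = 1.5), rnorm(40, sd = 2.5)))
```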
 
The overshoot error in horizontal reach endpoints was used to analyze response accuracy. The horizontal overshoot error at each target location was computed as the difference between the reach endpoint-to-screen-midpoint distance and the expected target-to-screen-midpoint distance. The expected target location was the same as the initial location for EGO PRO reaches, was the mirror opposite of the target location for EGO ANTI reaches, and was based on the shifted landmark for all allocentric reaches. The mean overshoot error was found by averaging the magnitude and direction of the reaching error (positive to the right, negative to the left), separately for each instruction condition (EGO or ALLO), task condition (PRO or ANTI), and visual field of response. Ellipse areas, reaction times, and movement times were averaged in a similar manner. The reaction time was the period between the go signal and movement start, and the movement time was the period between movement start and movement end. 
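This computation can be sketched as below, under the assumption that horizontal coordinates are expressed in degrees with the screen midpoint at zero; the vector names and sign convention are illustrative.

```r
# Sketch of the horizontal overshoot error. `reach_x` and `expected_x` are
# horizontal endpoint and goal positions in degrees (midpoint at zero).
overshoot_mag <- abs(reach_x) - abs(expected_x)  # + = beyond the goal, - = short
overshoot <- sign(expected_x) * overshoot_mag    # signed: + rightward, - leftward
mean(overshoot)                                  # mean overshoot error per condition
```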
Sample size analysis
Sample size was calculated using the simulation approach described by Green and MacLeod (2016) and the simr R package, which computes power for generalized linear mixed models fit with the lme4 R package. The power calculations were based on Monte Carlo simulations. Reaching variance was obtained from a pilot study of two individuals, with 72 observations per (EGO/ALLO) instruction condition. A mixed-effects model was fitted to the pilot data to obtain an estimated effect size of 0.267 for reaching variance. Using this effect size and the simulation package, the sample size was gradually increased until a power of 85% was achieved; the required sample size was 12. 
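A hedged sketch of such a simulation follows; the model structure, the data frame `pilot`, its column names, and the number of simulations are assumptions rather than the authors' exact code.

```r
# Sketch of the simulation-based power analysis (Green & MacLeod, 2016).
library(lme4)
library(simr)

# `pilot`: assumed data frame with columns variance, instruction, subject
m <- lmer(variance ~ instruction + (1 | subject), data = pilot)
fixef(m)["instructionEGO"] <- 0.267          # effect size estimated from pilot data

m12 <- extend(m, along = "subject", n = 12)  # grow the simulated sample to n = 12
powerSim(m12, nsim = 1000)                   # Monte Carlo power estimate (target 85%)
```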
Statistical analysis
Linear mixed-effects models were used to analyze differences in ellipse areas, overshoot errors, reaction times, and movement times, using the lme4 R package. The reaching responses relative to the expected target positions were fit with quadratic mixed-effects models, also using lme4, to determine differences in reaching behavior due to the task conditions. Interaction contrasts were performed using the emmeans R package (a minimal sketch of this modeling approach follows the list below). 
For statistical analysis, we grouped our data into four task conditions, as follows: 
  • 1. EGO instruction with PRO task
  • 2. EGO instruction with ANTI task
  • 3. ALLO instruction with PRO task
  • 4. ALLO instruction with ANTI task
In addition, based on the findings of Byrne et al. (2010), we also sorted these data into hemifields (left or right), based on the final direction of the instructed reach goals. 
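As referenced above, a minimal sketch of the mixed-model analysis is shown below; the data frame `dat`, its column names, and the random-effects structure are assumptions for illustration.

```r
# Sketch of the mixed-model analysis of ellipse areas with post hoc
# interaction contrasts. Column names and model structure are assumptions.
library(lme4)
library(emmeans)

fit <- lmer(area ~ instruction * task + instruction * field + (1 | subject),
            data = dat)
summary(fit)

emm <- emmeans(fit, ~ instruction * task)  # averaged over visual field
contrast(emm, interaction = "pairwise")    # post hoc interaction contrasts
```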
Results
Our experiment was designed to test if an explicit instruction to reach relative to an allocentric visual cue (as opposed to ignoring this cue and relying on egocentric coordinates) would alter reach performance in the presence of otherwise identical stimuli. For example, Figure 3A shows the target and landmark in the right visual field, with the landmark shifting to the left after the target disappears. Figures 3B1 and 3B2 display hand and gaze trajectories for ANTI task trials, with Figure 3B1 following egocentric “opposite” instructions and Figure 3B2 following instructions to point relative to the shifted landmark. As one can see, the participant was able to perform both tasks, although with subtle differences. In total, 12 (of 13) participants completed a sufficient number of these trials in all versions of our task (including the easier PRO task versions) for statistical analysis of their data. An analysis focusing on comparisons of precision, accuracy, and reaction time among the conditions will follow. 
Figure 3. Typical eye and finger trajectories. (A) Spatial location of example stimuli (see graphic key for details). (B1) EGO ANTI condition, where the participant was instructed to touch the mirror-opposite spatial location of the remembered target position. The same color conventions as (A) were used, except the unfilled green circle shows the ideal goal position, the black lines show the 2D finger trajectory for six example trials, and the gray lines show the corresponding 2D gaze locations during central fixation, where gaze was required to remain within ±2° of the fixation point 80% of the time. (B2) ALLO ANTI condition, where the landmark reappeared on the opposite side of the fixation point (same graphic conventions as panel B1).
Reach accuracy and precision: General observations
Figure 4 provides a spatial overview of pointing results across the four conditions, showing two-dimensional (2D) target positions, reach data points, and 95% confidence ellipses, with a color gradient indicating proximity to central fixation. The lower parts of each panel illustrate one-dimensional (1D) probability densities for target positions along the horizontal dimension. The left panels display data from a single participant, and the middle and right panels show averaged PRO and ANTI data across participants. Some qualitative observations are that (a) the distribution of error was relatively large, as one might expect in a memory-guided task; (b) the vertical distributions tended to be larger than the horizontal distributions; (c) the distributions did not overlap but rather shifted with target location; (d) the EGO instruction ellipses (top panels, blue) were generally larger than the ALLO instruction ellipses (bottom panels, red); and (e) the ALLO instruction distributions seem slightly larger in the PRO task condition. In the ANTI task (right column), the EGO distributions were still larger; thus, the difference between the EGO and ALLO distributions seems clearer, at least in the left visual field, where the ALLO distributions are visibly smaller than the EGO distributions. Each of these observations is quantified in more detail below. 
Figure 4. Reach endpoint variability. (A, B) Panels for a representative participant: the 95% confidence ellipses (2D reach endpoint variability) are shown at the top of each panel, and horizontal reach endpoint probability densities (1D reach endpoint variability) are shown at the bottom of each panel. The small, filled circles are individual reach endpoints relative to the central fixation. Ellipses, probability density curves, and reach endpoints are all color coded by initial target spatial location. The color gradient, from light to dark, represents targets from central to peripheral. (C–F) Mean ellipses, mean horizontal probability densities, and mean reach endpoints of 12 participants. (A, C) PRO task with EGO instruction (top) in blue. (B, D) PRO task with ALLO instruction (bottom) in red. (E) ANTI task with EGO instruction (top) in blue. (F) ANTI task with ALLO instruction (bottom) in red. The target positions are referenced by black dots in the ellipse figures (exact distances are labeled on the plot) and by the axis ticks on the x-axis in the 1D density plots; both sets of target position labels are color coded using the same color gradient.
Quantification of reaching variance
Figure 5 quantifies the distributions of the ellipse fits in Figure 4, suggesting a difference in reach endpoint variance between the EGO instruction data (blue) and ALLO instruction data (red) that also depends on task (PRO task in Figure 5A vs. ANTI task in Figure 5B). The general trend is for smaller ellipse fits (i.e., higher precision) in the ALLO instruction data. To quantify these data, we performed a statistical analysis of the influence of instruction (EGO or ALLO), task (PRO or ANTI), and visual field (left or right) on ellipse fits using a generalized linear mixed model. The analysis included the interaction of spatial instruction with both task and visual field of response. No significant three-way interaction effect was found, leading to the fitting of a simpler model (Table 1). 
Figure 5. Reach endpoint ellipse areas. Violin plots of ellipse areas averaged over right and left visual field targets and for each participant. The legend on top explains the conventions used to generate these plots. Each plot summarizes the observed trends in the left visual field (left side) and right visual field (right side) for each task/instruction condition. (A) Mean areas of 95% confidence ellipses for the PRO task data. (B) Mean areas of 95% confidence ellipses for the ANTI task data. EGO and ALLO instruction conditions are shown in the figure as blue and red, respectively. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
Table 1. Fixed effects of multilevel model using ellipse areas as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.145* (95% CI, 0.00–0.24). *p < 0.05, **p < 0.01.
The outcome of this analysis revealed a significant effect of spatial instruction, with ellipse areas in the EGO instruction condition (mean ± SD, 29.87 ± 4.66 degrees²) being significantly larger (i.e., less precise) than in the ALLO instruction condition (20.32 ± 3.77 degrees²; p = 2.0e-15). The ALLO instruction improved precision consistently across visual fields of response. Post hoc interaction contrasts, averaged across visual fields, confirmed that participants were significantly less precise in the EGO than in the ALLO instruction condition, especially when they had to point to the visual field opposite the target (p = 1.0e-5). 
Reaching accuracy
As noted above, the horizontal positions of the response distributions tended to vary with the corresponding desired target locations (Figure 4), suggesting that participants did not simply point to the left or right visual field. To quantify the accuracy of these responses, the absolute horizontal values of participants' endpoints (averaged across trials for each target within each participant) were regressed against the corresponding ideal target locations (Figure 6). A general observation is that the distribution of the ALLO instruction data (red) appears to be more compact and follows the line of unity (ideal accuracy) more closely than the EGO instruction data (blue). 
Figure 6. Scatterplots of the horizontal component of reach endpoints versus expected horizontal target positions. See graphic key (upper right) for details. The scatterplots of participants' horizontal reach endpoints in the right visual field (right panel) and left visual field (left panel) were fitted with a quadratic line (solid-colored line). (A) PRO task data. (B) ANTI task data. The EGO and ALLO instruction conditions are shown in blue and red, respectively. The dashed colored lines are the locally estimated scatterplot smoothing (LOESS) lines. Black lines in the figures are the lines of unity. Shaded gray areas show the standard error of the estimate.
To select the most appropriate model to quantify these data, we employed leave-one-out cross-validation, comparing the fit of three models: a linear regression model, a quadratic regression model centered at the first target, and a model with nominal predictors. The quadratic model exhibited a significantly better fit, as indicated by a lower root mean squared error (RMSE) and a higher R2 value. These fits are shown as curved, color-coded lines in Figure 6. 
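The cross-validation can be sketched as below; `dat`, `reach_x`, and `target_x` are assumed names, and the quadratic model is centered at the first target (4.57°) as described above.

```r
# Sketch of the leave-one-out model comparison by RMSE. Names are illustrative.
loo_rmse <- function(formula, data) {
  errs <- sapply(seq_len(nrow(data)), function(i) {
    fit <- lm(formula, data = data[-i, ])              # fit without case i
    data$reach_x[i] - predict(fit, newdata = data[i, ])
  })
  sqrt(mean(errs^2))                                   # root mean squared error
}

loo_rmse(reach_x ~ target_x, dat)                                     # linear
loo_rmse(reach_x ~ I(target_x - 4.57) + I((target_x - 4.57)^2), dat)  # quadratic
```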
Table 2 summarizes the relative contributions (β) of the model parameters, indicating those that were significant (*) and their 95% confidence intervals (CIs). Overall, this fit yielded a significant (p < 0.01) R2 of 0.534 (95% CI, 0.48–0.57). The relationship between the participants' reach endpoints and the targets demonstrated a significant negative quadratic trend (p = 0.02), suggesting a decrease in slope for more peripheral targets, possibly due to the decreased perceptual acuity for peripheral targets (see Gaze-centered overshoot section below). This effect was significantly more pronounced in the EGO instruction condition (p = 4.0e-3): participants exhibited a significant decrease in the slope between reach endpoints and target location, indicating greater accuracy in the ALLO instruction condition. Reach endpoints were not significantly influenced by the task (PRO or ANTI), but a significant visual field effect was observed (p = 8.9e-3), such that participants exhibited a more negative quadratic trend and an increase in the tangential linear slope at the 4° target in the right visual field. This suggests that participants' reaches in the right visual field were relatively accurate for more central targets but poor for peripheral targets. 
Table 2. Fixed effects of multilevel quadratic model using reach endpoints as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.53* (95% CI, 0.48–0.57). *p < 0.05, **p < 0.01.
Influence of landmark proximity and shift distance on reach endpoints
As described above, reach precision and accuracy were influenced by the EGO and ALLO instructions, but specific stimulus parameters might also have had an influence: namely, the initial distance between the landmark and target (1°–3°) and the amplitude of the landmark shift (1°–3° for the PRO task and 9°–14° for the ANTI task). If participants were able to suppress landmark influence after the EGO instruction, one would expect these parameters to have no effect, whereas they might have an influence after the ALLO instruction. To test this, two mixed-effects models were employed, separately for the EGO and ALLO instruction conditions, to assess the impact of landmark proximity and shift on accuracy and precision (ellipse areas), as well as their interaction with task (PRO or ANTI) and visual field of response (Table 3). To treat the PRO and ANTI task data similarly, we used separate predictors for these tasks. 
Table 3. Fixed effects of multilevel model. β represents standardized regression weights. In part A, model adjusted R2 = 0.18* (95% CI, 0.15–0.21); in part B, model adjusted R2 = 0.09* (95% CI, 0.03–0.15). *p < 0.05, **p < 0.01.
In summary, target–landmark distance and landmark shift amplitude had no significant influence on either reach precision or accuracy in the EGO instruction condition. In the ALLO condition, target–landmark distance had a significant influence on both precision (p = 7.9e-10) and accuracy (p = 4.1e-3), when separate shift amplitudes were included in the model. Specifically, performance was improved at the largest (3°) distance from the target (or its mirror opposite). Finally, an interaction with task was also observed (p = 0.03), where the ANTI task showed a stronger association between the degree of landmark shift and precision compared with the PRO task. 
Gaze-centered overshoot
One potential source of error is the phenomenon of gaze-centered overshoot, sometimes referred to as retinal magnification (Bock, 1986; Enright, 1995; Henriques et al., 1998). In this case, the overshoot is expected to be outward relative to the central fixation point. Figure 7 shows the distributions of overshoot errors (across participant means) for the EGO instruction (blue) and ALLO instruction (red) data, separated into left and right visual fields. Overall, the data showed the expected trends (leftward overshoots in the left visual field and rightward overshoots in the right visual field), but the magnitudes were smaller for the ALLO instruction, especially in the ANTI task (Figure 7B). 
Figure 7. Mean overshoot errors for EGO instruction (blue) and ALLO instruction (red) data (same graphic conventions as in Figure 5). To obtain each data point, the horizontal distance between reach endpoints and expected target location (relative to fixation) was averaged separately for each goal position and then across goals within the left and right visual fields, respectively. (A) PRO task data. (B) ANTI task data. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
Figure 8. Mean reaction times (same graphic conventions as in Figure 5). The reaction times were averaged for targets in the right and left visual fields of response (right and left sides of the figure, respectively). (A) PRO task data. (B) ANTI task data. EGO and ALLO instruction conditions are shown as blue and red in the figure, respectively. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
We analyzed the influence of instruction, task, and visual field on gaze-centered overshoots using a generalized linear mixed model (Table 4). This analysis revealed a significant effect of EGO versus ALLO instruction (p = 8.9e-9), as well as interaction effects, which were analyzed using post hoc t-tests. The effect of instruction was conditional, depending on the visual field of response and on whether participants pointed to the same or opposite side as the target. In the PRO task, the ALLO instruction produced significantly smaller overshoot errors only in the left visual field (p = 0.03), whereas it produced significantly smaller errors in both the left (p = 3.1e-4) and right (p = 2.7e-4) visual fields in the ANTI task. The second interaction indicated a significantly higher overshoot error in the PRO task compared with the ANTI task, but only in the right visual field. Notably, this effect was driven solely by overshoot errors in the ALLO instruction condition, as the absolute overshoot error was smaller in the EGO instruction condition when participants had to pro-point rather than anti-point. Overall, the main finding here was that the ALLO instruction appeared to suppress gaze-centered overshoot errors in these tasks. 
Table 4. Fixed effects of multilevel model using absolute overshoot error as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.33* (95% CI, 0.25–0.41). *p < 0.05, **p < 0.01.
Table 5. Fixed effects of multilevel model using reaction time as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.22* (95% CI, 0.18–0.26). *p < 0.05, **p < 0.01.
Reaction time
Reaction times were assessed from the moment of the GO signal until the movement onset. In general, the ALLO instruction data showed reduced reaction times relative to the EGO instruction data. This advantage was modest in the PRO task data (Figure 8A) but much stronger in the ANTI task data (Figure 8B), where EGO instruction reaction times appeared to be elevated relative to the PRO data. 
This was assessed for the 2 (EGO or ALLO) instruction conditions × 2 (PRO or ANTI) tasks × 2 (left or right) visual fields using a generalized linear mixed model. The simplified model in Table 5 was fitted, as there was no significant three-way interaction effect. A significant effect of instruction was observed (p = 4.9e-4): participants exhibited significantly slower reaction times in the EGO instruction condition, with a relative delay of 242.75 ms (95% CI, 109.30–376.21). Additionally, a significant interaction effect was identified (p = 9.8e-3), which was analyzed using a post hoc interaction contrast: when averaging over the visual field of response, the impact of the instruction condition depended on the task requirement (i.e., whether participants had to point to the same side as, or opposite side from, the target). The ALLO instruction resulted in faster reaction times, although the effect was only significant when participants had to point to the opposite visual field (p = 0.03), with a reduction of 245.90 ms (95% CI, 136.9–355.0). A similar analysis showed no significant difference in movement duration. 
Discussion
Previous studies have reported the sensory influence of a visual landmark on reaching behavior (Krigolson & Heath, 2004; Lemay et al., 2004; Obhi & Goodale, 2005; Schütz, Henriques, & Fiehler, 2013) or the weighting of egocentric and allocentric cues (e.g., Byrne & Crawford, 2010; Fiehler et al., 2014), but here we provide a detailed behavioral analysis of the specific influence of instructions on performance in an otherwise stimulus-matched, cue-conflict task. Specifically, we measured how the instruction to use (or ignore) landmark-centered coordinates influences the accuracy, precision, and timing of memory-guided reaches, either to the same or opposite visual hemifield as the target. Our results show that participants were generally more accurate, precise, and quicker to react in the allocentric instruction condition, especially when reaching to the visual field opposite to the target. We also observed a left/right visual field effect, where performance was worse overall in the right visual field. We will interpret each of these findings and other details below in terms of previous literature, potential physiological mechanisms, and their practical implications. 
Egocentric versus allocentric aiming: Natural versus experimental conditions
Before discussing our data, we are obliged to consider how our experimental design relates to natural reaching behavior (Fooken et al., 2023). First, the normal visual feedback of the goal and hand was removed throughout the memory delay and reach. Reaches based on initial sensory conditions are thought to rely on internal models of the eye-head-hand system (Blohm & Crawford, 2007; Blohm, Khan, Ren, Schreiber, & Crawford, 2008). In general, this provides accurate behavior (Vercher, Magenes, Prablanc, & Gauthier, 1994), but with certain variable and systematic errors (Henriques et al., 1998; Medendorp & Crawford, 2002; Van Pelt & Medendorp, 2007). It is noteworthy that, even without an instruction, the presence of a visual landmark tends to dampen these errors (Krigolson & Heath, 2004; Lemay et al., 2004; Obhi & Goodale, 2005; Schütz et al., 2013), especially as the memory delay increases (Chen et al., 2011). 
Second, egocentric and allocentric cues normally complement each other, whereas we introduced a conflict by surreptitiously shifting the landmark during the memory delay. It has been argued that this simulates situations where visual landmarks are unstable (Byrne & Crawford, 2010) or egocentric signals are unreliable (Byrne et al., 2010; Chen et al., 2011). In the absence of explicit instruction, healthy individuals tend to optimally weigh egocentric versus allocentric cues based on their relative reliability and perceived stability (Byrne & Crawford, 2010). This weighting is remarkably consistent (∼1/3 allocentric, ∼2/3 egocentric) and generalizes well to naturalistic situations, although the exact egocentric-to-allocentric ratio is modulated by various factors such as distance and similarity between the target and landmark (Fiehler et al., 2014; Klinghammer, Blohm, & Fiehler, 2017). Although we instructed our participants to shift this weighting completely toward one cue or the other, we cannot assume that the other cue had no implicit influence. 
Finally, unlike most natural reaches (which employ the implicit processes above) we explicitly required our participants to follow two different rules (egocentric vs. allocentric) based on a verbal instruction and a set of color-coded cues. Specifically, the allocentric instruction required participants to both attend to the landmark and remember its spatial relationship to the target. This required the integration of bottom–up perception, top–down cognition, and environmental factors (Caduff & Timpf, 2008). This additional degree of cognitive processing could interfere with reaching performance. Thus, one cannot assume either that the allocentric instruction would improve performance, or conversely, that the landmark would have no influence in the egocentric condition. 
The EGO instruction condition as a control: General observations
Our EGO instruction condition was designed as a control for the ALLO condition, based on several assumptions confirmed in our results. First, as expected from previous studies (Goodale & Milner, 1992; Hu, Eagleson, & Goodale, 1999; McIntyre et al., 1997; Westwood, Heath, & Roy, 2003), participants were able to perform the task (i.e., reach positions correlated with the required goal positions) but with considerable variable and systematic errors. We could not distinguish which egocentric frame (eye, head, or shoulder centered) was the source of these errors, because these frames were fixed relative to each other in this study. However, the systematic errors were consistent with storage of information in gaze-centered visual coordinates. In particular, the EGO instruction data showed the expected gaze-centered overshoots (Henriques & Crawford, 2000; Henriques et al., 1998; Van Pelt & Medendorp, 2007), which likely originate in the comparison between hand and target position signals used to compute the reach vector in visual coordinates (Dessing, Oostwoud Wijdenes, Peper, & Beek, 2009). Based on previous literature, we can speculate that this gaze-centered information was then converted into shoulder-centered coordinates, which act as stable anchor points for computing a reach displacement vector (Blohm & Crawford, 2007; Crawford et al., 2011; McGuire & Sabes, 2009). 
In the ANTI task version of the EGO condition, participants had to suppress the normal PRO reach and then calculate an opposite reach goal from the original target relative to the gaze fixation point (i.e., in visual coordinates) (Cappadocia et al., 2017; Gail & Andersen, 2006). Neural noise is expected to arise at each step in these additional computations, adding to the uncertainty of the goal representation and, ultimately, to reach errors. As expected, both systematic and variable errors increased in the ANTI version of the EGO condition (we will compare this to the ALLO condition below). 
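To make the extra ANTI computation concrete, the anti-goal in visual coordinates is essentially the remembered target reflected across the gaze fixation point. The sketch below shows this mirroring for the horizontal dimension only; the function name and the restriction to one dimension are our simplifying assumptions, not the authors' implementation.

```python
# A minimal sketch of the ANTI-task transformation described above:
# reflecting the remembered target across the fixation point in
# gaze-centered (visual) coordinates, horizontal dimension only.
def anti_goal(target_deg: float, fixation_deg: float = 0.0) -> float:
    """Return the mirror-opposite goal location relative to fixation."""
    return 2.0 * fixation_deg - target_deg

print(anti_goal(9.2))  # a target 9.2 deg right of fixation -> goal 9.2 deg left
```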
Finally, neither target–landmark distance nor landmark shift amplitude had a significant influence on performance in the EGO instruction task. This result is consistent with active suppression of landmark information (although we cannot quantify this without a no-instruction control). Overall, the EGO instruction produced the expected results, providing a control for comparison with the ALLO instruction. We will focus on the difference between these conditions below. 
Influence of the landmark instruction: Precision and accuracy
Our main result was that the instruction to reach relative to the landmark generally enhanced both accuracy and precision relative to the egocentric instruction. In particular, there was a systematic reduction of the gaze-centered overshoot effect observed in our egocentric data and many previous studies (Henriques & Crawford, 2000; Henriques et al., 1998; Van Pelt & Medendorp, 2007). We also found significant improvements in variable error. This advantage was likely due to greater attentional weighting of landmark cues, which tends to improve performance (Lemay et al., 2004), especially after a delay (Chen et al., 2011; Goodale & Milner, 1992; Hay & Redon, 2006; Hu et al., 1999; Krigolson & Heath, 2004; McIntyre et al., 1997; Milner & Goodale, 2006). 
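The two performance measures at issue here can be stated explicitly. The sketch below computes a signed gaze-centered overshoot (mean horizontal eccentricity of endpoints beyond the expected goal) and the area of a 95% confidence ellipse as the variable-error summary. Both formulas are standard; the function names and input format are our assumptions, not the authors' code.

```python
# A minimal sketch (assumed input: numpy arrays of reach endpoints in
# degrees) of the accuracy and precision measures discussed above.
import numpy as np
from scipy.stats import chi2

def mean_overshoot(endpoint_x, goal_x, fixation_x=0.0):
    """Mean signed overshoot: positive when endpoints land farther from
    fixation than the expected goal (the gaze-centered overshoot effect)."""
    return np.mean(np.abs(endpoint_x - fixation_x) - abs(goal_x - fixation_x))

def ellipse_area_95(endpoints_xy):
    """Area of the 95% confidence ellipse of 2D reach endpoints:
    pi * chi2_(0.95, df=2) * sqrt(det(covariance))."""
    cov = np.cov(endpoints_xy, rowvar=False)  # 2x2 endpoint covariance
    scale = chi2.ppf(0.95, df=2)              # ~5.99 for 95% coverage in 2D
    return np.pi * scale * np.sqrt(np.linalg.det(cov))
```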
Although we equalized the stimuli across conditions, our instructions may have interacted with stimulus perception in some way. In contrast to previous studies (Aagten-Murphy & Bays, 2019; Fiehler et al., 2014), landmark influences on performance increased with distance from the target. This is likely because the target–landmark distances used here were small (1.15°–3.44°) compared with those previous studies and relative to the distance of the stimuli from the fovea (4.57°–15.07°). The beneficial influence of our nearest landmarks might have been negated by sensory confusion within response fields during initial perception (Ransom-Hogg & Spillmann, 1980) or, more likely, by confusion between their representations at the time of response. It is also possible that our choice of color cues interacted with the instruction in some fashion (Nakshian, 1964; Nathans, 1999). Overall, however, these factors do not negate or explain the benefits of instruction, especially in light of the known advantages of landmark-centered coding for spatial memory (Chen et al., 2011; Lemay et al., 2004). 
Finally, we could not find previous literature focused on the influence of egocentric and allocentric instruction on behavior, but other studies have reported conflicting behavioral data from tasks that employed such instructions. For example, Byrne and Crawford (2010) reported that their control egocentric and allocentric reach tasks did not show significantly different variable error. However, there was no cue conflict in their control tasks, and stimuli were not held constant across these tasks (i.e., a landmark was present in the landmark task but not in the egocentric task). Our results also differed from those of Thaler and Todd (2009), who found reaching precision to be lowest in their allocentric task. But, again, there was no cue conflict in their paradigm, and the task involved hand alignment, which is thought to employ different mechanisms than reach (Goodale et al., 1994; Monaco, Menghi, & Crawford, 2024). 
Results specific to anti-pointing: Accuracy and timing
Our secondary finding was that the advantage of the allocentric instruction was more pronounced, in terms of accuracy, precision, and reaction time, for reaches to the visual field opposite the target. As expected, these effects appear to be driven by decreased performance (including larger gaze-centered overshoots) in the EGO ANTI condition relative to the EGO PRO condition, whereas the ALLO condition showed similar performance in both the ANTI and PRO task versions. As noted above, this is likely because anti-reaching requires specific cognitive transformations and neural mechanisms (Fernandez-Ruiz, Goltz, DeSouza, Vilis, & Crawford, 2007; Fischer & Weber, 1992; Gail, Klaes, & Westendorff, 2009), whereas reaching relative to the landmark relied on fixed target–landmark associations that were relatively independent of the final landmark location in either the PRO or ANTI task (Chen et al., 2014). It is noteworthy that ANTI movements fall within the class of non-standard, rule-based spatial transformations that pervade much of modern life (Sergio, Gorbet, Tippett, Yan, & Neagu, 2009), so these behaviors might benefit the most from the presence and use of allocentric landmarks. 
Visual field dependence
Our tertiary observation was that the benefit of the allocentric instruction for anti-reaching was asymmetric across the visual hemifields. Whereas the egocentric anti-pointing task produced symmetric gaze-centered errors, the allocentric instruction mitigated these errors only for goals and landmarks that shifted from the right to the left visual field. Similarly, Byrne et al. (2010) found that allocentric landmarks had more influence on pointing when saccades caused remembered targets to remap from the right to the left hemifield. This might be related to the tendency of our participants to be right-hand and right-eye dominant. Various studies have also suggested that the right cortical hemisphere (corresponding to the left visual field) plays a stronger role in allocentric processing than the left (Faillenot, Decety, & Jeannerod, 1999; Fink et al., 2003; Galati et al., 2000; Weiss et al., 2006; Zaehle et al., 2007). 
Possible physiological mechanisms
Most sensorimotor neuroscience studies consider only the egocentric transformation from sensory to motor coordinates, but to interpret the current results we must consider both (a) the integration of sensory input to construct two different spatial rules, and (b) the implementation of these rules through separate allocentric and egocentric transformations. For the first component, one might safely assume that the auditory instruction is processed through the auditory cortex and higher level language areas (Friederici, Meyer, & Von Cramon, 2000; Humphries, Willard, Buchsbaum, & Hickok, 2001; Vandenberghe, Nobre, & Price, 2002; Wernicke, 1874), whereas initial color and spatial information from the visual stimuli is processed by well-known pathways in the occipital cortex (DeYoe & Van Essen, 1988; Livingstone & Hubel, 1988; Zeki, 1993). The key question is how the former imposes a rule on how the latter is processed. One likely candidate area is the prefrontal cortex. For example, the lateral prefrontal cortex receives complex multisensory inputs and is thought to play a role in imposing rules on sensory inputs (Miller & Cohen, 2001), including both inhibiting incorrect responses (DeSouza, Menon, & Everling, 2003) and selecting correct responses (Rowe, Toni, Josephs, Frackowiak, & Passingham, 2000). It is also likely that the parietofrontal attention network is involved in directing attention to the landmark (Bisley & Goldberg, 2003; Buschman & Miller, 2007). Consistent with these speculations, the parietofrontal cortex is thought to exert recurrent influence on early visual areas during reach (Blohm et al., 2019; Cappadocia et al., 2017) and appears to alter visual processing in these areas (Monaco et al., 2024; Velji-Ibrahim, Crawford, & Monaco, 2018). 
When such rules are enacted, it is thought that egocentric and allocentric transformations for reach are processed through separate pathways. Classic neuropsychology experiments suggest that the dorsal visual stream (via parietal cortex) handles egocentric visual transformations (Carey, Dijkerman, Murphy, Goodale, & Milner, 2006; Goodale & Milner, 1992; Schenk, 2006). Numerous studies have examined the role of parietal cortex in action, too many to review here (e.g., Buneo & Andersen, 2006; Crawford et al., 2011; Gallivan & Culham, 2015; Vesia & Crawford, 2012). However, it is pertinent that damage to the dorsal stream alters the gaze-centered errors reported here (Khan, Pisella, Rossetti, Vighetto, & Crawford, 2005a; Khan et al., 2005b), and that the visual field effects observed here could be attributed to interactions between hand and hemifield lateralization in parietal cortex (Medendorp, Goltz, Crawford, & Vilis, 2005; Perenin & Vighetto, 1988; Rossetti, Pisella, & Vighetto, 2003). Parietal cortex also plays a role in coding rule-based egocentric transformations, such as the anti-reach task (Cappadocia et al., 2017; Gail & Andersen, 2006). 
In contrast, the ventral visual stream (via temporal cortex) is thought to handle allocentric transformations (Carey et al., 2006; Goodale & Milner, 1992; Schenk, 2006). This distinction is supported by neuroimaging experiments showing spatial tuning for egocentrically defined reach targets in the dorsal stream and allocentrically defined reach targets in the ventral stream (Chen et al., 2014). Ultimately, the latter must be integrated into the motor system for egocentric control of action, and this appears to happen in both the parietal and frontal cortex (Chen et al., 2018). Consistent with this, frontal cortex visual responses integrate target and landmark location (Schütz et al., 2023), whereas frontal memory and motor responses for gaze integrate allocentric and egocentric coordinates (Bharmauria et al., 2020; Bharmauria et al., 2021) in a cue-conflict task similar to that used in reach studies. 
Conclusions
Whereas several studies have reported the influence of allocentric landmarks on reach, and some have employed explicit instructions to reach in allocentric coordinates, this study examined the interaction of these two factors: whether an explicit instruction to attend to and use a landmark to encode reach direction provides additional behavioral benefits. We found that reach performance (accuracy, precision, and reaction time) was generally enhanced, especially in the more difficult task of reaching toward a location defined in the visual hemifield opposite to the target. This has the practical implication that explicit instructions to use visual landmark cues may enhance performance, especially in high-demand, non-standard spatial tasks (Dalecki, Gorbet, Macpherson, & Sergio, 2019; Sergio et al., 2009) and for individuals with degraded egocentric transformations due to age, developmental disorders, or brain damage (e.g., Goodale et al., 1994; Khan et al., 2005a; Khan et al., 2005b; Niechwiej-Szwedo et al., 2011; Tippett, Krajewski, & Sergio, 2007). 
Acknowledgments
The authors thank V. Bharmauria for helpful comments and proofreading and S. Sun for assistance with coding. 
L. Musa and X. Yan were supported by VISTA, and J. D. Crawford was supported by a grant from the Canada Research Chair, Canada First Research Excellence Fund (101035774). 
Funded by a grant from the Vision: Science to Applications (VISTA) Program (102001171). 
Commercial relationships: none. 
Corresponding author: J. Douglas Crawford. 
Email: jdc@yorku.ca. 
Address: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON M3J 1P3, Canada. 
References
Aagten-Murphy, D., & Bays, P. M. (2019). Independent working memory resources for egocentric and allocentric spatial information. PLoS Computational Biology, 15(2), e1006563.
Bharmauria, V., Sajad, A., Li, J., Yan, X., Wang, H., & Crawford, J. D. (2020). Integration of eye-centered and landmark-centered codes in frontal eye field gaze responses. Cerebral Cortex, 30(9), 4995–5013.
Bharmauria, V., Sajad, A., Yan, X., Wang, H., & Crawford, J. D. (2021). Spatiotemporal coding in the macaque supplementary eye fields: Landmark influence in the target-to-gaze transformation. eNeuro, 8(1), ENEURO.0446-20.2020.
Bisley, J. W., & Goldberg, M. E. (2003). Neuronal activity in the lateral intraparietal area and spatial attention. Science, 299(5603), 81–86.
Blohm, G., Alikhanian, H., Gaetz, W., Goltz, H. C., DeSouza, J. F., Cheyne, D. O., … Crawford, J. D. (2019). Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. NeuroImage, 197, 306–319.
Blohm, G., & Crawford, J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7(5):4, 1–22, https://doi.org/10.1167/7.5.4.
Blohm, G., Khan, A. Z., Ren, L., Schreiber, K. M., & Crawford, J. D. (2008). Depth estimation from retinal disparity requires eye and head orientation signals. Journal of Vision, 8(16):3, 1–23, https://doi.org/10.1167/8.16.3.
Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64(3), 476–482.
Buneo, C. A., & Andersen, R. A. (2006). The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44(13), 2594–2606.
Buschman, T. J., & Miller, E. K. (2007). Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science, 315(5820), 1860–1862.
Byrne, P. A., Cappadocia, D. C., & Crawford, J. D. (2010). Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Research, 50(24), 2661–2670.
Byrne, P. A., & Crawford, J. D. (2010). Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. Journal of Neurophysiology, 103(6), 3054–3069.
Caduff, D., & Timpf, S. (2008). On the assessment of landmark salience for human navigation. Cognitive Processing, 9(4), 249–267.
Cappadocia, D. C., Monaco, S., Chen, Y., Blohm, G., & Crawford, J. D. (2017). Temporal evolution of target representation, movement direction planning, and reach execution in occipital–parietal–frontal cortex: An fMRI study. Cerebral Cortex, 27(11), 5242–5260.
Carey, D. P., Dijkerman, H. C., Murphy, K. J., Goodale, M. A., & Milner, A. D. (2006). Pointing to places and spaces in a patient with visual form agnosia. Neuropsychologia, 44(9), 1584–1594.
Chen, Y., Byrne, P., & Crawford, J. D. (2011). Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia, 49(1), 49–60.
Chen, Y., Monaco, S., Byrne, P. A., Yan, X., Henriques, D. Y. P., & Crawford, J. D. (2014). Allocentric versus egocentric representation of remembered reach targets in human cortex. Journal of Neuroscience, 34(37), 12515–12526.
Chen, Y., Monaco, S., & Crawford, J. D. (2018). Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. European Journal of Neuroscience, 47(8), 901–917.
Crawford, J. D., Henriques, D. Y., & Medendorp, W. P. (2011). Three-dimensional transformations for goal-directed action. Annual Review of Neuroscience, 34, 309–331.
Dalecki, M., Gorbet, D. J., Macpherson, A., & Sergio, L. E. (2019). Sport experience is correlated with complex motor skill recovery in youth following concussion. European Journal of Sport Science, 19(9), 1257–1266.
DeSouza, J. F., Menon, R. S., & Everling, S. (2003). Preparatory set associated with pro-saccades and anti-saccades in humans investigated with event-related fMRI. Journal of Neurophysiology, 89(2), 1016–1023.
Dessing, J. C., Oostwoud Wijdenes, L., Peper, C. E., & Beek, P. J. (2009). Visuomotor transformation for interception: Catching while fixating. Experimental Brain Research, 196, 511–527.
DeYoe, E. A., & Van Essen, D. C. (1988). Concurrent processing streams in monkey visual cortex. Trends in Neurosciences, 11(5), 219–226.
Enright, J. T. (1995). The non-visual impact of eye orientation on eye–hand coordination. Vision Research, 35(11), 1611–1618.
Everling, S., & Munoz, D. P. (2000). Neuronal correlates for preparatory set associated with pro-saccades and anti-saccades in the primate frontal eye field. Journal of Neuroscience, 20(1), 387–400.
Faillenot, I., Decety, J., & Jeannerod, M. (1999). Human brain activity related to the perception of spatial features of objects. NeuroImage, 10(2), 114–124.
Fernandez-Ruiz, J., Goltz, H. C., DeSouza, J. F., Vilis, T., & Crawford, J. D. (2007). Human parietal "reach region" primarily encodes intrinsic visual direction, not extrinsic movement direction, in a visual–motor dissociation task. Cerebral Cortex, 17(10), 2283–2292.
Fiehler, K., Wolf, C., Klinghammer, M., & Blohm, G. (2014). Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Frontiers in Human Neuroscience, 8, 636.
Fink, G. R., Marshall, J. C., Weiss, P. H., Stephan, T., Grefkes, C., Shah, N. J., … Dieterich, M. (2003). Performing allocentric visuospatial judgments with induced distortion of the egocentric reference frame: An fMRI study with clinical implications. NeuroImage, 20(3), 1505–1517.
Fischer, B., & Weber, H. (1992). Characteristics of "anti" saccades in man. Experimental Brain Research, 89, 415–424.
Fooken, J., Baltaretu, B. R., Barany, D. A., Diaz, G., Semrau, J. A., Singh, T., … Crawford, J. D. (2023). Perceptual-cognitive integration for goal-directed action in naturalistic environments. Journal of Neuroscience, 43(45), 7511–7522.
Friederici, A. D., Meyer, M., & Von Cramon, D. Y. (2000). Auditory language comprehension: An event-related fMRI study on the processing of syntactic and lexical information. Brain and Language, 74(2), 289–300.
Gail, A., & Andersen, R. A. (2006). Neural dynamics in monkey parietal reach region reflect context-specific sensorimotor transformations. Journal of Neuroscience, 26(37), 9376–9384.
Gail, A., Klaes, C., & Westendorff, S. (2009). Implementation of spatial transformation rules for goal-directed reaching via gain modulation in monkey parietal and premotor cortex. Journal of Neuroscience, 29(30), 9490–9499.
Galati, G., Lobel, E., Vallar, G., Berthoz, A., Pizzamiglio, L., & Le Bihan, D. (2000). The neural basis of egocentric and allocentric coding of space in humans: A functional magnetic resonance study. Experimental Brain Research, 133(2), 156–164.
Gallivan, J. P., & Culham, J. C. (2015). Neural coding within human brain areas involved in actions. Current Opinion in Neurobiology, 33, 141–149.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.
Goodale, M. A., Jakobson, L. S., Milner, A. D., Perrett, D. I., Benson, P. J., & Hietanen, J. K. (1994). The nature and limits of orientation and pattern processing supporting visuomotor control in a visual form agnosic. Journal of Cognitive Neuroscience, 6(1), 46–56.
Goodale, M. A., Westwood, D. A., & Milner, A. D. (2004). Two distinct modes of control for object-directed action. Progress in Brain Research, 144, 131–144.
Green, P., & MacLeod, C. J. (2016). SIMR: An R package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution, 7(4), 493–498.
Hay, L., & Redon, C. (2006). Response delay and spatial representation in pointing movements. Neuroscience Letters, 408(3), 194–198.
Henriques, D. Y., & Crawford, J. D. (2000). Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Experimental Brain Research, 132(2), 179–194.
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18(4), 1583–1594.
Howard, I. P., & Templeton, W. B. (1966). Human spatial orientation. New York: John Wiley & Sons.
Hu, Y., Eagleson, R., & Goodale, M. A. (1999). The effects of delay on the kinematics of grasping. Experimental Brain Research, 126(1), 109–116.
Humphries, C., Willard, K., Buchsbaum, B., & Hickok, G. (2001). Role of anterior temporal cortex in auditory sentence comprehension: An fMRI study. NeuroReport, 12(8), 1749–1752.
Khan, A. Z., Pisella, L., Rossetti, Y., Vighetto, A., & Crawford, J. D. (2005a). Impairment of gaze-centered updating of reach targets in bilateral parietal–occipital damaged patients. Cerebral Cortex, 15(10), 1547–1560.
Khan, A. Z., Pisella, L., Vighetto, A., Cotton, F., Luauté, J., Boisson, D., … Rossetti, Y. (2005b). Optic ataxia errors depend on remapped, not viewed, target location. Nature Neuroscience, 8(4), 418–420.
Klinghammer, M., Blohm, G., & Fiehler, K. (2017). Scene configuration and object reliability affect the use of allocentric information for memory-guided reaching. Frontiers in Neuroscience, 11, 229248.
Krigolson, O., & Heath, M. (2004). Background visual cues and memory-guided reaching. Human Movement Science, 23(6), 861–877.
Laitin, E. L., & Witt, J. K. (2020). The Pong effect as a robust visual illusion: Evidence from manipulating instructions. Perception, 49(12), 1362–1370.
Lemay, M., Bertram, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8(1), 16–32.
Lemay, M., & Stelmach, G. E. (2005). Multiple frames of reference for pointing to a remembered target. Experimental Brain Research, 164, 301–310.
Li, J., Sajad, A., Marino, R., Yan, X., Sun, S., Wang, H., … Crawford, J. D. (2017). Effect of allocentric landmarks on primate gaze behavior in a cue conflict task. Journal of Vision, 17(5):20, 1–18, https://doi.org/10.1167/17.5.20.
Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740–749.
McGuire, L. M., & Sabes, P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12(8), 1056–1061.
McIntyre, J., Stratta, F., & Lacquaniti, F. (1997). Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. Journal of Neurophysiology, 78(3), 1601–1618.
Medendorp, W. P., & Crawford, J. D. (2002). Visuospatial updating of reaching targets in near and far space. NeuroReport, 13(5), 633–636.
Medendorp, W. P., Goltz, H. C., Crawford, J. D., & Vilis, T. (2005). Integration of target and effector information in human posterior parietal cortex for the planning of action. Journal of Neurophysiology, 93(2), 954–962.
Medendorp, W. P., Goltz, H. C., Vilis, T., & Crawford, J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23(15), 6209–6214.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167–202.
Milner, D., & Goodale, M. (2006). The visual brain in action (2nd ed.). Oxford, UK: Oxford University Press.
Monaco, S., Menghi, N., & Crawford, J. D. (2024). Action-specific feature processing in the human cortex: An fMRI study. Neuropsychologia, 194, 108773.
Moray, N. (1959). Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology, 11(1), 56–60.
Nakshian, J. S. (1964). The effects of red and green surroundings on behavior. The Journal of General Psychology, 70(1), 143–161.
Nathans, J. (1999). The evolution and physiology of human color vision: Insights from molecular genetic studies of visual pigments. Neuron, 24(2), 299–312.
Niechwiej-Szwedo, E., Goltz, H. C., Chandrakumar, M., Hirji, Z., Crawford, J. D., … Wong, A. M. (2011). Effects of anisometropic amblyopia on visuomotor behavior, part 2: Visually guided reaching. Investigative Ophthalmology & Visual Science, 52(2), 795–803.
Obhi, S. S., & Goodale, M. A. (2005). The effects of landmarks on the performance of delayed and real-time pointing movements. Experimental Brain Research, 167, 335–344.
Perenin, M. T., & Vighetto, A. (1988). Optic ataxia: A specific disruption in visuomotor mechanisms: I. Different aspects of the deficit in reaching for objects. Brain, 111(3), 643–674.
Ransom-Hogg, A., & Spillmann, L. (1980). Perceptive field size in fovea and periphery of the light- and dark-adapted retina. Vision Research, 20(3), 221–228.
Redon, C., & Hay, L. (2005). Role of visual context and oculomotor conditions in pointing accuracy. NeuroReport, 16(18), 2065–2067.
Rossetti, Y., Pisella, L., & Vighetto, A. (2003). Optic ataxia revisited. Experimental Brain Research, 153(2), 171–179.
Rowe, J. B., Toni, I., Josephs, O., Frackowiak, R. S., & Passingham, R. E. (2000). The prefrontal cortex: Response selection or maintenance within working memory? Science, 288(5471), 1656–1660.
Schenk, T. (2006). An allocentric rather than perceptual deficit in patient DF. Nature Neuroscience, 9(11), 1369–1370.
Schütz, A., Bharmauria, V., Yan, X., Wang, H., Bremmer, F., & Crawford, J. D. (2023). Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Communications Biology, 6(1), 938.
Schütz, I., Henriques, D. Y. P., & Fiehler, K. (2013). Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Research, 87, 46–52.
Sergio, L. E., Gorbet, D. J., Tippett, W. J., Yan, X., & Neagu, B. (2009). Cortical mechanisms of vision for complex action. In Jenkin, M. & Harris, L. (Eds.), Cortical mechanisms of vision (pp. 81–118). Cambridge, UK: Cambridge University Press.
Soechting, J. F., & Flanders, M. (1992). Moving in three-dimensional space: Frames of reference, vectors, and coordinate systems. Annual Review of Neuroscience, 15(1), 167–191.
Thaler, L., & Todd, J. T. (2009). The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments. Neuropsychologia, 47(5), 1227–1244.
Tippett, W. J., Krajewski, A., & Sergio, L. E. (2007). Visuomotor integration is compromised in Alzheimer's disease patients reaching for remembered targets. European Neurology, 58(1), 1–11.
Van Pelt, S., & Medendorp, W. P. (2007). Gaze-centered updating of remembered visual space during active whole-body translations. Journal of Neurophysiology, 97(2), 1209–1220.
Vandenberghe, R., Nobre, A. C., & Price, C. J. (2002). The response of left temporal cortex to sentences. Journal of Cognitive Neuroscience, 14(4), 550–560.
Velji-Ibrahim, J., Crawford, J. D., & Monaco, S. (2018). Beyond sensory processing: Human neuroimaging shows task-dependent functional connectivity between V1 and somatomotor areas during action planning. Journal of Vision, 18(10), 70, https://doi.org/10.1167/18.10.70.
Vercher, J. L., Magenes, G., Prablanc, C., & Gauthier, G. M. (1994). Eye-head-hand coordination in pointing at visual targets: Spatial and temporal analysis. Experimental Brain Research, 99(3), 507–523.
Vesia, M., & Crawford, J. D. (2012). Specialization of reach function in human posterior parietal cortex. Experimental Brain Research, 221(1), 1–18.
Vindras, P., & Viviani, P. (1998). Frames of reference and control parameters in visuomanual pointing. Journal of Experimental Psychology: Human Perception and Performance, 24(2), 569–591.
Vogeley, K., & Fink, G. R. (2003). Neural correlates of the first-person-perspective. Trends in Cognitive Sciences, 7(1), 38–42.
Weiss, P. H., Rahbari, N. N., Lux, S., Pietrzyk, U., Noth, J., & Fink, G. R. (2006). Processing the spatial configuration of complex actions involves right posterior parietal cortex: An fMRI study with clinical implications. Human Brain Mapping, 27(12), 1004–1014.
Wernicke, C. (1874). The symptom complex of aphasia: A psychological study on an anatomical basis. In Cohen, R. S. & Wartofsky, M. W. (Eds.), Boston studies in the philosophy of science (pp. 34–97). Dordrecht: D. Reidel Publishing Company.
Westwood, D. A., Heath, M., & Roy, E. A. (2003). No evidence for accurate visuomotor memory: Systematic and variable error in memory-guided reaching. Journal of Motor Behavior, 35(2), 127–133.
Zaehle, T., Jordan, K., Wüstenberg, T., Baudewig, J., Dechent, P., & Mast, F. W. (2007). The neural basis of the egocentric and allocentric spatial frame of reference. Brain Research, 1137(1), 92–103.
Zeki, S. (1993). A vision of the brain. Hoboken, NJ: Blackwell Scientific.
Figure 1.
Experimental set-up. From left to right and top to bottom: OptoTrak 3020 tracking systems on both sides of the room (the second one is not shown in the figure) were used to track finger motion. EyeLink II cameras were dismounted from the headset and installed on the bite-bar stand; the left camera is occluded in this view of the set-up. The home button box (yellow in the figure), on the desk immediately in front of the participants, was used to control the pace of the experiment. The height of each participant was adjusted relative to the fixed bite-bar height using a metallic screw on a rigid chair with adjustable height. Two 40-watt dark-adaptation desk lamps illuminated the dark room during breaks and after every three trials. A custom-made 307 mm × 161 mm wooden panel, fitted with LEDs, was used to display stimuli; it was shifted 152 mm to the right of the bite-bar stand center. Audio instructions were played from two desktop speakers (one of the speakers is not shown in the figure). The finger-pointing device was a customized ring with a 3 × 3 array of infrared-emitting diodes (IREDs) that continuously relayed signals to the OptoTrak 3020 tracking system.
Figure 2.
Experimental stimuli and paradigm. (A) Stimuli. The stimuli were displayed in a horizontal array. The left side and center of the array are shown; the right side mirrors the left side. The central fixation, off-center fixation locations, and eye calibration points were displayed by white LEDs. The green LED target was randomly displayed in one of 18 LED positions (nine left, nine right). The position of the first target LED was three circles (4.57°) from the screen center. The red landmark simultaneously appeared on the same side of the screen as the target, displayed by one of the three LEDs in the middle of the nine target LEDs (half-red, half-green circles in the figure), and the second red landmark randomly appeared in one of the remaining landmark positions. (B) Paradigm. The order of a typical trial is shown. Each trial began with an audio instruction, telling participants to remember either the spatial location of the target or the position of the target relative to the landmark. The response audio depended on the task. EGO trials were followed by "target," instructing participants to point toward the target, or "opposite," instructing participants to point to the mirror-opposite side. All ALLO trials were simply followed by "reach," instructing participants to point to the remembered target position relative to the second landmark.
Figure 3.
Typical eye and finger trajectories. (A) Spatial location of example stimuli (see graphic key for details). (B1) EGO ANTI condition, where the participant was instructed to touch the mirror-opposite spatial location of the remembered target position. The same color conventions as (A) were used, except the unfilled green circle shows the ideal goal position, the black lines show the 2D finger trajectory for six example trials, and the gray lines show the corresponding 2D gaze locations during central fixation, where gaze was required to remain within ±2° of the fixation point 80% of the time. (B2) ALLO ANTI condition, where the landmark appeared on the opposite side of fixation (same graphic conventions as panel B1).
Figure 4.
Reach endpoint variability. (A, B) Data from a representative participant. The 95% confidence ellipses (2D reach endpoint variability) are shown at the top of each panel, and horizontal reach endpoint probability densities (1D reach endpoint variability) are shown at the bottom of each panel. The small, filled circles are individual reach endpoints relative to the central fixation. Ellipses, probability density curves, and reach endpoints are all color coded by initial target spatial location; the color gradient, from light to dark, represents targets from central to peripheral. (C–F) Mean ellipses, mean horizontal probability densities, and mean reach endpoints of 12 participants. (A, C) PRO task with EGO instruction (top) in blue. (B, D) PRO task with ALLO instruction (bottom) in red. (E) ANTI task with EGO instruction (top) in blue. (F) ANTI task with ALLO instruction (bottom) in red. The target positions are referenced using black dots in the ellipse figures (exact distances are labeled on the plot) and as the axis ticks on the x-axis in the 1D density plots; both sets of target position labels are color coded using the same color gradient.
Figure 5.
Reach endpoint ellipse areas. Violin plots of ellipse areas, averaged over right and left visual field targets for each participant. The legend on top explains the conventions used to generate these plots. Each plot summarizes the observed trends in the left visual field (left side) and right visual field (right side) for each task/instruction condition. (A) Mean areas of 95% confidence ellipses for the PRO task data. (B) Mean areas of 95% confidence ellipses for the ANTI task data. EGO and ALLO instruction conditions are shown in blue and red, respectively. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
Figure 6.
Scatterplots of the horizontal component of reach endpoints versus expected horizontal target positions. See graphic key (upper right) for details. The scatterplots of participants' horizontal reach endpoints in the right visual field (right panel) and left visual field (left panel) were fitted with a quadratic line (solid-colored line). (A) PRO task data. (B) ANTI task data. The EGO and ALLO instruction conditions are shown in blue and red, respectively. The dashed colored lines are the locally estimated scatterplot smoothing (LOESS) lines. Black lines in the figures are the lines of unity. Shaded gray areas show the standard error of the estimate.
Figure 7.
Mean overshoot errors for EGO instruction (blue) and ALLO instruction (red) data (same graphic conventions as in Figure 5). To obtain each data point, the horizontal distance between reach endpoints and expected target location (relative to fixation) was averaged separately for each goal position and then across goals within the left and right visual fields, respectively. (A) PRO task data. (B) ANTI task data. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
Figure 8.
Mean reaction times (same graphic conventions as in Figure 5). The reaction times were averaged for targets in the right and left visual fields of response (right and left sides of the figure, respectively). (A) PRO task data. (B) ANTI task data. EGO and ALLO instruction conditions are shown in blue and red, respectively. Significant differences are indicated by asterisks: *p < 0.05, **p < 0.01.
Table 1.
Fixed effects of multilevel model using ellipse areas as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.145* (95% CI, 0.00–0.24). *p < 0.05, **p < 0.01.
Table 2.
Fixed effects of multilevel quadratic model using reach endpoints as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.53* (95% CI, 0.48–0.57). *p < 0.05, **p < 0.01.
Table 3.
Fixed effects of multilevel model. β represents standardized regression weights. In part A, model adjusted R2 = 0.18* (95% CI, 0.15–0.21); in part B, model adjusted R2 = 0.09* (95% CI, 0.03–0.15). *p < 0.05, **p < 0.01.
Table 4.
Fixed effects of multilevel model using absolute overshoot error as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.33* (95% CI, 0.25–0.41). *p < 0.05, **p < 0.01.
Table 5.
Fixed effects of multilevel model using reaction time as the criterion. β represents standardized regression weights. Model adjusted R2 = 0.22* (95% CI, 0.18–0.26). *p < 0.05, **p < 0.01.