Open Access
Article  |   June 2017
Effect of allocentric landmarks on primate gaze behavior in a cue conflict task
Author Affiliations
  • Jirui Li
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • Amirsaman Sajad
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • Robert Marino
    Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • Xiaogang Yan
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • Saihong Sun
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • Hongying Wang
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
  • J. Douglas Crawford
    Centre for Vision Research; Vision: Science to Applications Program; and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
Journal of Vision June 2017, Vol.17, 20. doi:https://doi.org/10.1167/17.5.20

      Jirui Li, Amirsaman Sajad, Robert Marino, Xiaogang Yan, Saihong Sun, Hongying Wang, J. Douglas Crawford; Effect of allocentric landmarks on primate gaze behavior in a cue conflict task. Journal of Vision 2017;17(5):20. https://doi.org/10.1167/17.5.20.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The relative contributions of egocentric versus allocentric cues on goal-directed behavior have been examined for reaches, but not saccades. Here, we used a cue conflict task to assess the effect of allocentric landmarks on gaze behavior. Two head-unrestrained macaques maintained central fixation while a target flashed in one of eight radial directions, set against a continuously present visual landmark (two horizontal/vertical lines spanning the visual field, intersecting at one of four oblique locations 11° from the target). After a 100-ms delay followed by a 100-ms mask, the landmark was displaced by 8° in one of eight radial directions. After a second delay (300–700 ms), the fixation point extinguished, signaling for a saccade toward the remembered target. When the landmark was stable, saccades showed a significant but small (mean 15%) pull toward the landmark intersection, and endpoint variability was significantly reduced. When the landmark was displaced, gaze endpoints shifted significantly, not toward the landmark, but partially (mean 25%) toward a virtual target displaced like the landmark. The landmark had a larger influence when it was closer to initial fixation, and when it shifted away from the target, especially in saccade direction. These findings suggest that internal representations of gaze targets are weighted between egocentric and allocentric cues, and this weighting is further modulated by specific spatial parameters.

Introduction
There are at least two ways in which the visual system can encode the location of peripheral targets in visual space: relative to a part of the observer (egocentric coordinates) or relative to an external landmark or stimulus (allocentric coordinates; Bridgeman, Perry, & Anand, 1997; Burnod et al., 1999; Carrozzo, Stratta, McIntyre, & Lacquaniti, 2002; Colby, 1998; Crawford, Henriques, & Medendorp, 2011; Vogeley & Fink, 2003). For example, imagine a golfer hits the ball into the tall grass of “the rough” so that it is no longer visible. The golfer has two ways to locate the ball. First, he may rely on egocentric information: the last location where the ball was visible (registered on his retina, relative to his direction of gaze, head, and body orientation). Alternatively, he may rely on allocentric information: the ball was last visible 10 m to the right of a tree, a stable landmark. Finally, his visual system might attempt an optimal strategy, based on a weighting of both egocentric and allocentric cues (Byrne & Crawford, 2010; Fiehler, Wolf, Klinghammer, & Blohm, 2014; Thompson & Henriques, 2010). 
The distinction between egocentric and allocentric coding is important for visual neuroscience because it has informed major accounts of human cortical processing. In particular, the two-streams hypothesis of visual processing associates allocentric representations of relative object locations with the ventral stream and egocentric representations for localization and action with the dorsal stream (Goodale & Milner, 1992; Milner & Goodale, 2008; Schenk, 2006). Ventral stream mechanisms are thought to retain spatial information for a longer period of time than the dorsal stream (Carrozzo et al., 2002; Lemay, Bertram, & Stelmach, 2004; McIntyre, Stratta, & Lacquaniti, 1998), whereas egocentric information quickly fades, making the brain more reliant on allocentric cues when memory delays increase (Glover & Dixon, 2004; Goodale & Haffenden, 1998; Obhi & Goodale, 2005). However, if allocentric information is to influence motor behavior, it must be transformed into egocentric motor commands, i.e., passed from the ventral perception stream to the dorsal action stream (Chen, Byrne, & Crawford, 2011). Imaging studies of visual memory for reaches and saccades are generally consistent with this dorsal-ventral scheme, with activity relating to allocentric representation of targets in occipital-temporal cortex and activity relating to egocentric memory and motor execution in occipital-parietal-frontal cortex (Chen et al., 2014), while also demonstrating influences of background cues in posterior parietal cortex (Inoue, Harada, Fujisawa, Uchimura, & Kitazawa, 2015). 
Nonhuman primate studies of visually controlled action have contributed more detailed knowledge of the mechanisms of egocentric versus allocentric coding. Most have focused on egocentric mechanisms, emphasizing the persistence of gaze-centered codes in higher levels of the dorsal visual stream (for reviews, see Andersen & Buneo, 2002; Colby & Goldberg, 1999), but some have explored allocentric influences on motor commands. In particular, several studies have found object-centered spatial coding in supplementary eye field (SEF) and visual area 7a (Olson & Gettner, 1996; Olson & Tremblay, 2000; Tremblay & Tremblay, 2002). Others have argued that object-centered spatial coding can arise from neurons with gaze-centered receptive fields that show object-modulated firing rates (Deneve & Pouget, 2003), in other words, utilizing an underlying egocentric frame (Filimon, 2015). 
Regardless of the details of the neural mechanism, the brain clearly solves both egocentric and allocentric tasks. Human behavioral studies have shown that in the absence of allocentric cues, saccades and reaches toward remembered targets are reasonably accurate but show various posture- and gaze-dependent errors (Blohm & Crawford, 2007; Bock, 1986; Crawford et al., 2011; Henriques, Klier, Smith, Lowy, & Crawford, 1998). Additionally, humans are able to reach toward locations defined relative to a mobile visual landmark with varying degrees of accuracy (Byrne, Cappadocia, & Crawford, 2010; Chen et al., 2014). When viewing a natural environment, both egocentric and allocentric information are normally available, and it is thought that the brain combines both sources of information (Battaglia-Mayer, Caminiti, Lacquaniti, & Zago, 2003; Diedrichsen, Werner, Schmidt, & Trommershauser, 2004; Sheth & Shimojo, 2004). This combination has primarily been studied for reaching. When allocentric information agrees with egocentric cues, it tends to improve the accuracy and precision of movements toward a remembered target location (Krigolson & Heath, 2004; Krigolson, Clark, Heath, & Binsted, 2007; Obhi & Goodale, 2005; Thaler & Goodale, 2011). However, when these cues conflict, they are weighted based on their relative reliabilities (Byrne & Crawford, 2010; Fiehler et al., 2014; Thompson & Henriques, 2010). 
Furthermore, a wide range of variables has been shown to influence these interactions, such as age (Hanisch et al., 2001; Lemay et al., 2004), memory delay (Carrozzo et al., 2002; Chen et al., 2011; Glover & Dixon, 2004; Hay & Redon, 2006; Obhi & Goodale, 2005), context (Neely, Tessmer, Binsted, & Heath, 2008), task demands (Bridgeman et al., 1997), size of the allocentric landmark (Inoue et al., 2015; Uchimura & Kitazawa, 2013), location of the landmark relative to reach direction (de Grave, Brenner, & Smeets, 2004), and perceived stability of the landmark (Byrne & Crawford, 2010; Byrne et al., 2010). 
In contrast to the studies described already, relatively little is known about the influence of allocentric landmarks on memory-guided saccade accuracy. One cannot assume that saccades follow the same principles as reaches, because they use different neural circuits (Andersen & Buneo, 2002; Vesia & Crawford, 2012), and in some respects different computational principles (Crawford et al., 2011). Natural scenes are replete with allocentric cues that can influence saccade selection (e.g., Foulsham & Kingstone, 2010; Foulsham, Teszka, & Kingstone, 2011; Rothkopf, Ballard, & Hayhoe, 2007; Wismeijer & Gegenfurtner, 2012), and both saccade latency and direction are affected by interactions with other distractors (e.g., Edelman, Kristjánsson, & Nakayama, 2007; He & Kowler, 1989; Khan, Munoz, Takahashi, Blohm, & McPeek, 2016; Lee & McPeek, 2013; Wu & Kowler, 2013). Several studies used two-step saccade tasks to show that human gaze accuracy improves in the presence of allocentric cues (Dassonville, Schlag, & Schlag-Rey, 1995; Karn, Moller, & Hayhoe, 1997; Sharika, Ramakrishnan, & Murthy, 2014). However, the relative influence of an independent landmark on saccade accuracy has not been studied in a cue-conflict task. To address this question, the present study (1) applies the cue-conflict (landmark shift) paradigm to saccades for the first time, and (2) uses this paradigm to provide a more complete analysis of the spatial relationship between targets and landmarks than performed before (for either saccades or reaches). We then directly tested the hypotheses that (1) the presence of allocentric landmarks improves the precision and accuracy of gaze shifts, (2) gaze endpoints tend to be biased towards an allocentric landmark, and (3) when egocentric and allocentric cues conflict, the brain chooses an intermediate location (in our task weighted more toward the egocentrically defined location). 
Methods
Surgical preparation
Data were collected from two female rhesus macaque monkeys (Macaca mulatta, M1 and M2). Both animals were 10 years of age and weighed approximately 6 kg at the time of study. Each monkey was prepared for experiments by undergoing a surgery described previously (Crawford, Ceylan, Klier, & Guitton, 1999; Klier, Wang, & Crawford, 2001; Klier, Wang, & Crawford, 2003) under general anesthesia (1.5% isoflurane following intramuscular injection of 10 mg/kg ketamine hydrochloride, 0.05 mg/kg atropine sulphate, and 0.5 mg/kg acepromazine). A stainless steel head post was attached to the skull using a dental acrylic head cap anchored using stainless steel cortex screws. One Teflon coated stainless steel search coil (18 mm in diameter) was implanted subconjunctivally to record horizontal and vertical two-dimensional (2D) eye position. The monkeys were allowed two weeks of recovery following the surgery with unrestricted food and fluid intake. The animal care staff and university veterinarians closely monitored the monkeys' intake, weight, and health. All surgical and experimental protocols were consistent with the Canadian Council for Animal Care and The Association for Research in Vision and Ophthalmology guidelines on the use of laboratory animals and were approved by the York University Animal Care Committee. 
Experimental apparatus
Visual displays were generated using MATLAB (MathWorks, Natick, MA), and displayed on a 2 m by 1.5 m screen that was 0.81 m in front of the animal, using a projector (WT600 DLP Projector, NEC, Tokyo, Japan). Custom software was used to control the behavioral paradigms, send data to a Plexon data acquisition system (Plexon Inc., Dallas, TX), and deliver reward to the monkeys. Eye positions were monitored using the magnetic search coil technique (Fuchs & Robinson, 1966), where subconjunctival eye coils were placed in a magnetic field, and the voltage generated in the coils based on horizontal and vertical eye position was recorded. During experiments, two orthogonal coils were secured in a plastic base and attached to the head cap to record three-dimensional (3D) head position. The monkeys were trained to sit in a custom chair (Crist Instruments Inc., Hagerstown, MD) designed to allow unrestrained head movement inside a 1 m3 magnetic field generator (Crawford et al., 1999). Fluid rewards for correct behavior were delivered via a “juice tube” mounted onto the head implant. Head unrestrained recordings were used because they represent the more natural situation, whereas any restraint on eye or head motion can result in adapted neural strategies (Crawford & Guitton, 1997; Martinez-Trujillo, Wang, & Crawford, 2003). However, head movement did not play any specific role in this experiment. During the current tasks, head displacement stayed within a range of approximately 0 to 5° in both animals. 
Calibration
Before each training and experimental session, two distinct calibrations were completed to ensure accurate search coil signals. First, the magnetic fields were precalibrated and assessed by rotating an external coil through each field and adjusting gains until the output signal was equal to one at each maximum point (Crawford et al., 1999; Tweed, Cadara, & Vilis, 1990). Then, each monkey performed a calibration paradigm inside the magnetic field to correct for any deviation of the surgically implanted eye coils from the forward position and to determine the reference position of the eye in space. This calibration paradigm required the monkey to make sequential saccades to nine targets and maintain fixation on each target for 1 s. The targets were presented at −30°, 0°, and +30° along each of the X and Y axes, forming a 3 by 3 grid centered at (0, 0). Calibration started from the central fixation point (0, 0), and then continued from the top left (−30, 30) to the bottom right (30, −30). When the monkey successfully maintained fixation on a target for 1 s, it was rewarded with two drops of water. The reference position of the eye in space was derived using the average data from two consecutive runs of the calibration paradigm. 
Training and behavioral paradigm
To prepare animals for the experimental task (Figure 1), we trained them on a series of successive approximations using standard fixation, saccade, and memory-guided saccade paradigms (Hikosaka & Wurtz, 1983). This proceeded until animals maintained fixation within 5° of a visual target for 1500 ms before making saccades to the previously presented target. At that point, additional visual features (Figure 1, described below) were slowly introduced (by increasing their contrast) until animals consistently performed the experimental task for several hundred trials per day. During experimental trials with the “shift” condition described below, the spatial reward window for final gaze position was increased to 10° so that animals were rewarded whether or not they were influenced by the landmark. 
Figure 1
 
Cue conflict task. (A) Time course for the cue conflict task. (B) The dotted circles represent the eye position at each interval, and the arrows indicate gaze shifts. The red dot represents the fixation point, the white dot represents the target, and the white crosses represent the allocentric landmark that spans the range of the screen. The red arrows represent a head-unrestrained gaze shift towards the remembered location of the target (T = original target location, T' = shifted target location).
Figure 1 illustrates the main experimental tasks that were used to test the influence of allocentric landmarks on gaze behavior. The task was presented on a black background (luminance: 0.01 cd/m2). Animals began each trial by maintaining fixation on the central fixation spot (red circle with luminance of 2.68 cd/m2 and diameter of 0.5°). After 500 ms of fixation, a target (white circle with luminance of 2.68 cd/m2 and diameter of 0.5°) was presented for 100 ms in one of eight radial locations forming a square (−20°, 0°, or +20° horizontal × −20°, 0°, or +20° vertical from center, excluding the central fixation location). An allocentric landmark (two intersecting lines, one horizontal and one vertical, spanning the visual field, luminance 2.68 cd/m2) appeared simultaneously and remained visible for the remainder of the trial. The intersection point of these lines was located at one of four oblique directions 11° from the target. After a 100-ms delay period, a grid-like mask (white grid lines separated by 2° visual angle) was shown for 100 ms, so as to occlude any current or future landmark. When the mask was removed the allocentric landmark reappeared, and after a variable delay (300–700 ms) the fixation point extinguished, signaling the animal to saccade toward the remembered target location within 400 ms, and then fixate for another 400 ms to obtain a reward. These stimulus durations could be reduced by 13–28 ms due to computer operating system and screen refresh delays. 
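The stimulus geometry described above can be sketched in Python (the original displays were generated in MATLAB; this is an illustrative reconstruction, and `landmark_intersections` is a hypothetical helper that assumes the 11° offset is the straight-line distance from the target to the landmark intersection):

```python
import math

# Eight radial target locations forming a square around central fixation
# (degrees of visual angle): a 3 x 3 grid minus the center point.
TARGETS = [(x, y) for x in (-20, 0, 20) for y in (-20, 0, 20) if (x, y) != (0, 0)]

def landmark_intersections(target, offset_deg=11.0):
    """Four candidate landmark intersection points, one in each oblique
    direction from the target (hypothetical helper; assumes the 11 deg
    offset is the Euclidean distance along the diagonal)."""
    d = offset_deg / math.sqrt(2)  # equal horizontal/vertical components
    tx, ty = target
    return [(tx + sx * d, ty + sy * d) for sx in (-1, 1) for sy in (-1, 1)]
```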
In the “no shift” condition, the landmark reappeared in the same location, whereas in the “shift” condition, the allocentric landmark was displaced by 8° in one of eight radial directions. This 8° displacement was selected to provide an adequately measurable effect on gaze behavior relative to the normal variance of gaze precision, while accounting for the expectation that the effect might have a gain of less than one (Byrne & Crawford, 2010). One “no shift” control was provided for each initial target/landmark combination, and both conditions were randomly interspersed. In an additional experiment (not shown), the influence of a stable landmark was assessed by interspersing cued “no shift” trials with “control” trials in which the landmark was completely absent but which were otherwise identical. 
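The eight radial displacement vectors can be generated as follows (a sketch; the paper specifies eight radial directions of 8° amplitude but not their exact angles, so the 45° spacing here is an assumption):

```python
import math

# Eight hypothetical landmark displacement vectors (8 deg amplitude) for
# the "shift" condition, spaced at 45-deg steps around the circle.
SHIFTS = [(8 * math.cos(math.radians(a)), 8 * math.sin(math.radians(a)))
          for a in range(0, 360, 45)]
```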
Data analysis
Raw coil signals were digitized at a sampling rate of 1000 Hz and converted into 2D angles of the eye and head in space. Specifically, these were the components of the 2D vector orthogonal to the forward magnetic field and current gaze/head pointing direction, scaled by the magnitude of this rotation. Experimental data were analyzed offline using custom scripts written in MATLAB. The beginning and end of each rewarded saccade were marked manually using a visual display. Anticipatory and multi-step saccades were excluded from analysis. Trials in which a saccade was made during the fixation interval were also excluded. The gaze trajectory was determined for each trial and its endpoint was calculated. 
When assessing the influence of the allocentric landmark on gaze endpoints, we compared both the accuracy (mean distance from each target) and precision (variance for each target) of gaze endpoints in each condition. To isolate the influence of the landmark or landmark shift from systematic gaze errors, we subtracted the mean gaze error for the “no landmark” control from the “no shift” data, and the mean “no shift” error from the “shift” data, respectively. This correction was performed separately for each of the eight targets and for each recording session. To quantify the influence of a stable landmark in the “no shift” condition we defined the landmark position as the intersection point of the two landmark lines, because it captures both the minimum distance of the two lines from the target and a highly salient visual feature. Landmark influence (LI, see Figure 3A, inset) was calculated using the target location (T), landmark intersection location (L), and gaze endpoint (G). TG was projected onto TL (d), and then divided by the magnitude of TL (D). The output can be 0, indicating no landmark influence, or it can be positive or negative, indicating a bias toward or away from the landmark, respectively. 
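The LI computation above reduces to a scalar projection; a minimal Python sketch (the actual analysis used custom MATLAB scripts, and the function name here is ours):

```python
import numpy as np

def landmark_influence(T, L, G):
    """Landmark influence (LI): the projection of the target-to-gaze
    vector TG onto the target-to-landmark vector TL (d), divided by the
    magnitude of TL (D).  0 = no influence; positive/negative = bias
    toward/away from the landmark intersection."""
    T, L, G = (np.asarray(v, dtype=float) for v in (T, L, G))
    TL = L - T
    TG = G - T
    # d / D = (TG . TL / |TL|) / |TL| = TG . TL / |TL|^2
    return float(np.dot(TG, TL) / np.dot(TL, TL))
```

For example, a gaze endpoint pulled 15% of the way from the target toward the landmark intersection yields LI = 0.15.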
Figure 2
 
Saccade endpoint correction procedure, sample session for M2. The procedure for factoring out general memory guided saccade errors. (A) Gaze trajectories in the “no landmark” control condition are shown in blue. Target locations are represented using dark green circles. Mean gaze endpoints, magenta crosses, are calculated for each target location. (B) Close-up of the upper right target in Figure 2A. Landmark locations are represented using blue crosses. (C) Uncorrected mean gaze endpoints for each landmark in the cued “no shift” condition are shown in red. Black lines associate gaze endpoints to their corresponding landmarks. (D) Gaze endpoints for the “no shift” condition are corrected by subtracting the mean gaze endpoint.
Figure 3
 
Distribution of landmark influence. (A) Histogram showing the distribution of LI (d/D) for M1 (X axis) plotted against the number of trials in each bin (Y axis) collapsed across all sessions. Inset, LI is calculated using the target location (T), landmark location (L), and gaze endpoint (G). (B) Histogram showing the distribution of LI for M2.
Likewise, for the shift condition, reliance of gaze on the allocentric landmark was established by calculating the allocentric weight (AW, see Figure 6A, inset) from the original target location (T), shifted target location (T'), and gaze endpoint (G). If gaze coding were exclusively egocentric, the monkey would ignore the landmark altogether and simply saccade to T. On the other hand, if allocentric coding dominated, the monkey would encode the vector between T and L, apply this vector to the shifted landmark (L'), and saccade to the virtual location T'. To calculate AW, the projection of TG onto TT' (d) was divided by the magnitude of TT' (D). The output is typically a value between 0 and 1, where 0 indicates purely egocentric coding and 1 indicates purely allocentric coding. These weights (LI and AW) were categorized across combinations of different spatial parameters such as target, landmark, shift, and gaze directions, and their means were compared using one-tailed Welch's t tests, two-tailed paired t tests, and post hoc analyses with Bonferroni correction. Statistical analyses were computed using a combination of MATLAB, SPSS, and Microsoft Excel. 
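AW is the same projection as LI, but taken along the target-to-virtual-target axis; a Python sketch under the same caveats (analysis was done in MATLAB; function name is ours):

```python
import numpy as np

def allocentric_weight(T, T_shifted, G):
    """Allocentric weight (AW): the projection of TG onto TT' (d),
    divided by the magnitude of TT' (D).  0 = purely egocentric
    (gaze lands on T); 1 = purely allocentric (gaze lands on T')."""
    T, Ts, G = (np.asarray(v, dtype=float) for v in (T, T_shifted, G))
    TT = Ts - T
    TG = G - T
    return float(np.dot(TG, TT) / np.dot(TT, TT))
```

For instance, with an 8° landmark shift, a gaze endpoint displaced 2° along the shift axis gives AW = 0.25, the mean value reported in the Abstract.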
Figure 4
 
Influence of landmark position relative to initial gaze position on landmark influence. (A) Comparison of the mean LI (Y axis) between landmarks positioned (relative to the target) closer to or further from the initial gaze position (X axis) for M1; “closer” landmarks lay between initial gaze and the target, “further” landmarks lay beyond the target, and “neutral” represents landmark intersections positioned orthogonal to this axis. (B) Overall LI sorted by landmark direction for M1. The green circle at the center represents the target location, whereas the outer circle (blue) represents landmark locations. The mean LI for a given direction is represented by the intersection point of the red curve with the black line segment corresponding with each direction. The red curve is a cubic spline interpolation of the LI for each direction and serves as a visual guide for the data. Data for landmarks closer to and further from the initial gaze position are represented in the left and right semicircles, respectively. (C) Comparison of the mean LI (Y axis) between landmarks positioned closer to or further from the initial gaze position (X axis) for M2. (D) Overall LI sorted by landmark direction for M2. Same conventions as Figure 4B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 5
 
Converting raw data to allocentric weight, sample session for M2. (A) Gaze trajectories in the “no shift” condition are shown in blue. Target locations are represented using dark green circles. Mean gaze endpoints, magenta crosses, are calculated for each target location. (B) Uncorrected gaze endpoints in the “shift” condition are shown in red. Shifted target locations are represented using blue circles. Black lines associate gaze endpoints to their corresponding shifted targets. (C) Corrected gaze endpoints for the “shift” condition. (D) Each group of original target, shifted target, and gaze endpoint from Figure 5C is transformed to control for landmark shift direction and amplitude. Original target location is centered at the origin, the shifted target location is represented by the blue dot, and each red dot represents the gaze endpoint of a unique trial in this sample session.
Figure 6
 
Distribution of allocentric weights. (A) Histogram showing the distribution of AW for M1 (X axis) plotted against the number of trials (Y axis) collapsed across all sessions. AW is calculated using the original target location (T), shifted target location (T'), and gaze endpoint (G). (B) Histogram showing the distribution of AW for M2.
Results
Allocentric landmark influence on gaze endpoints
First, we were interested in the effect of the large-scale allocentric landmarks on gaze behavior towards a stable remembered target, comparing the “control” and “no shift” conditions. Data were collected on consecutive days until we had a total of 1394 trials for M1 and 1555 trials for M2, where the landmark was available in 50% of the trials. This provided a minimum of 20 trials per landmark location, averaging 21.78 and 23.69 for M1 and M2, respectively. We began by analyzing the general influence of the landmark on gaze accuracy and precision. Figure 2A shows example gaze trajectories from the central fixation point to the eight memory targets in the “no landmark” control condition. Precision was calculated using the variance of gaze endpoints for each target location. Across the “no landmark” control condition, the variance of gaze endpoints was 12.55° in animal M1 and 9.03° in M2. The introduction of an allocentric landmark caused these values to drop to 10.52° and 5.41°, respectively. This decrease in variability was significant in both animals (F test; M1: F(696, 696) = 1.42, p < 0.01; M2: F(757, 796) = 2.78, p < 0.01). In addition to reduced endpoint variability, we also saw a trend toward improved gaze accuracy in the presence of an allocentric landmark (the overall mean error decreased from 5.59° to 5.39° and from 3.43° to 3.37° in animals M1 and M2, respectively). However, this change in accuracy did not reach significance (Welch's t test; M1: p = 0.06; M2: p = 0.24). 
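The variance comparison above is a standard F ratio test on two sample variances; a stdlib-only Python sketch (the paper's statistics were computed in MATLAB/SPSS/Excel; the p value would come from the F distribution with the returned degrees of freedom, e.g., via scipy.stats.f.sf):

```python
from statistics import variance

def variance_ratio(errors_a, errors_b):
    """F statistic for comparing endpoint variability between two
    conditions: the ratio of sample variances, with numerator and
    denominator degrees of freedom (n - 1 for each sample)."""
    F = variance(errors_a) / variance(errors_b)
    return F, len(errors_a) - 1, len(errors_b) - 1
```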
Subsequently, we tested the specific spatial influence of the landmark (i.e., whether gaze was drawn toward or repelled from the landmark) using gaze endpoint locations calculated with respect to the “no landmark” control data. Figure 2 illustrates the entire calculation. First, we collected gaze trajectories from the “no landmark” control condition and found the mean gaze endpoint for each target location (Figure 2A). To account for systematic biases in the monkeys' gaze behavior, we determined the mean gaze errors, which are the vectors between each target location and the corresponding “no landmark” mean gaze endpoint location (Figure 2B). Mean gaze endpoints in the cued “no shift” condition were calculated (Figure 2C), then corrected by subtracting the mean gaze error for the corresponding target (Figure 2D). We performed this correction for each individual recording session. Each trial in the “no shift” condition was then normalized for landmark direction in order to calculate the landmark influence (LI; Figure 3A, inset). 
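The session-wise endpoint correction can be sketched as follows (a Python reconstruction of the Figure 2 procedure; the data structures are hypothetical, and the control mean errors would be computed per target and per session as described above):

```python
import numpy as np

def correct_endpoints(trials, control_mean_error):
    """Subtract each target's mean control-condition gaze error from the
    corresponding gaze endpoints (sketch of the endpoint correction).

    trials: list of (target_xy, gaze_xy) tuples for one session.
    control_mean_error: dict mapping target_xy -> mean error vector
        (mean control gaze endpoint minus target) for that target.
    """
    corrected = []
    for target, gaze in trials:
        err = np.asarray(control_mean_error[target], dtype=float)
        corrected.append(np.asarray(gaze, dtype=float) - err)
    return corrected
```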
If the monkeys were not influenced by the landmark, LI would equal 0; if gaze endpoints were biased toward or away from the landmark, LI would be positive or negative, respectively. Figure 3 shows histograms of the LI distribution; the LI of each monkey was sorted into 40 bins of width 0.05. Overall, there was a significant bias toward the landmark (two-tailed, one-sample t test; p < 0.01). The LI were normally distributed (Kim, 2013; M1: skewness = 0.004, kurtosis = 0.31; M2: skewness = −0.06, kurtosis = 0.03), with a mean LI of 0.14 in M1 (where 0 = target location and 1 = landmark location) and 0.17 in M2. We also analyzed whether the distance between the allocentric landmark and the initial gaze position affected LI (Figure 4). We separated trials into three groups based on landmark eccentricity relative to the initial gaze position and found that LI increased as landmark eccentricity decreased in M1 (Figure 4A; closer = 0.17; further = 0.11; Welch's t test; p < 0.01), but not in M2 (Figure 4C; closer = 0.17; further = 0.16; Welch's t test; p = 0.26). 
Effect of allocentric landmark shift
Having characterized the influence of the stable allocentric landmark on gaze behavior, we recorded both monkeys while they performed the cue conflict task (see Figure 5 for the conversion of raw gaze trajectories to allocentric weights, AW). First, we collected gaze trajectories from the “no shift” condition and computed the mean gaze endpoint for each initial target location (Figure 5A). To account for systematic biases in the monkeys' gaze behavior, we determined the mean gaze errors: the vectors between each initial target location and the corresponding “no shift” mean gaze endpoint. Mean gaze endpoints in the “shift” condition were calculated (Figure 5B) and corrected by subtracting the mean gaze error for the corresponding target (Figure 5C). This correction was performed separately for each recording session. Each trial in the “shift” condition was then normalized for landmark shift direction and amplitude (Figure 5D) in order to calculate its AW (Figure 6A, inset). 
If the monkeys made gaze shifts toward the original, egocentric target location, we would expect an AW of 0; if they made gaze shifts toward the shifted, allocentric target location, AW would equal 1. Figure 6 shows histograms of the AW distribution; the AW of each monkey was sorted into 40 bins of width 0.1. Overall, there was a significant allocentric shift in gaze endpoints in the “shift” condition relative to the “no shift” condition (two-tailed, one-sample t test; p < 0.01), with a mean AW of 0.27 in M1 and 0.23 in M2. Given the large sample size, the AW distributions can be treated as normal (Kim, 2013; M1: skewness = 0.91, kurtosis = 2.48; M2: skewness = 1.26, kurtosis = 3.52). These results suggest that internal representations of gaze targets are weighted between egocentric and allocentric cues. 
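By analogy with the LI normalization, AW can be sketched as a projection of the bias-corrected gaze endpoint onto the axis from the original (egocentric) target to the virtually shifted (allocentric) target. The function, shift vector, and coordinates below are hypothetical illustrations of this scheme:

```python
def allocentric_weight(endpoint, ego_target, shift):
    """Project a bias-corrected gaze endpoint onto the axis from the
    original (egocentric) target to the virtually shifted (allocentric)
    target. AW = 0 at the original target, AW = 1 at the shifted target.
    The shift vector and coordinates here are hypothetical."""
    ex, ey = endpoint[0] - ego_target[0], endpoint[1] - ego_target[1]
    return (ex * shift[0] + ey * shift[1]) / (shift[0] ** 2 + shift[1] ** 2)

# An endpoint 2 deg along an 8-deg landmark shift is weighted 25% allocentric
print(allocentric_weight((2.0, 0.0), (0.0, 0.0), (8.0, 0.0)))  # 0.25
```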
Spatial parameters influence allocentric weight
Lastly, we examined the impact of spatial parameters on allocentric weight by comparing weights across combinations of target position, preshift allocentric landmark position, and the directions and magnitudes of landmark and gaze shifts. Welch's t tests with post-hoc Bonferroni corrections were applied to identify and quantify the influence of these spatial parameters on allocentric weight. We chose the preshift allocentric landmark position after repeating our “no shift” landmark influence analysis (Figure 2) on the preshift and postshift landmark positions from the shift condition and comparing the two datasets with paired t tests in both M1 (LIpre-shift = 0.20; LIpost-shift = 0.19; p < 0.01) and M2 (LIpre-shift = 0.17; LIpost-shift = 0.15; p < 0.01). These results suggest that, overall, the preshift landmark locations had a slightly (but significantly) greater influence on final gaze position. We found that AW depends on allocentric landmark position (Figure 7) and landmark shift direction (Figure 8) relative to the initial gaze position, and on landmark shift direction relative to the target position (Figure 9). 
Figure 7
 
Influence of allocentric landmark position relative to initial gaze position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks positioned (relative to the target) closer to or further from the initial gaze position (X axis) for M1. In other words, “closer” landmarks lay between initial gaze and the target, whereas “further” landmarks lay beyond the target, and “neutral” represents landmark intersections positioned perpendicular to this axis. (B) Overall AW sorted by landmark direction for M1. The green circle at the center represents the target location, whereas the outer circle (blue) represents shifted target locations. The mean AW for a given direction is represented by the intersection point of the red curve with the black line segment corresponding to each direction. The red curve is a cubic spline interpolation of the AW for each direction and serves as a visual guide for the data. Data for landmarks closer to and further from the initial gaze position are represented in the left and right semicircles, respectively. (C) Comparison of the mean AW (Y axis) between landmarks positioned closer to or further from the initial gaze position (X axis) for M2. (D) Overall AW sorted by landmark direction for M2. Same conventions as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 8
 
Influence of landmark shift relative to initial gaze position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks that shift toward or away from the initial gaze position (X axis) for M1. (B) Overall AW sorted by landmark shift direction for M1. Same conventions as Figure 7B. (C) Comparison of the mean AW (Y axis) between landmarks that shift toward or away from the initial gaze position (X axis) for M2. (D) Overall AW sorted by landmark shift direction for M2. Same conventions as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 9
 
Influence of landmark shift relative to initial target position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks that shift toward or away from the initial target position (X axis) for M1. (B) Overall AW sorted by landmark direction for M1. Same conventions as Figure 7B. (C) Comparison of the mean AW (Y axis) between landmarks that shift toward or away from the initial target position (X axis) for M2. (D) Overall AW sorted by landmark direction for M2. Same conventions as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
To examine allocentric landmark position and shift direction relative to the initial gaze position, each “shift” trial was normalized for gaze shift direction and amplitude. For allocentric landmark position, we separated trials based on landmark eccentricity relative to the initial gaze position. AW increased significantly (Welch's t test with Bonferroni correction; p < 0.01) when the landmark was located closer to, rather than further from, the initial gaze position in both M1 (Figure 7A; closer = 0.31; further = 0.26) and M2 (Figure 7C; closer = 0.26; further = 0.17). We then separated trials based on the direction of landmark shift relative to the initial gaze position. Welch's t test with Bonferroni correction revealed a significant increase (p < 0.01) in AW when the landmark shifted away from the initial gaze position in both M1 (Figure 8A; toward = 0.21; away = 0.39) and M2 (Figure 8C; toward = 0.20; away = 0.25). To study landmark shift direction relative to target position, we separated trials based on the direction of landmark shift relative to the target position. We found a significant increase (Welch's t test with Bonferroni correction; p < 0.01) in allocentric weight when the landmark shifted away from the initial target position in both M1 (Figure 9A; toward = 0.04; away = 0.44) and M2 (Figure 9C; toward = 0.07; away = 0.39). 
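The pairwise comparisons above rely on Welch's t test, which does not assume equal variances in the two groups. A stdlib-only sketch of the statistic (the data here are synthetic, not the study's):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    appropriate when two groups (e.g. 'toward' vs. 'away' trials) have
    unequal variances. Data passed in below are synthetic examples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Identical samples: no mean difference, so t = 0
print(welch_t([1, 2, 3], [1, 2, 3]))
```

A p value would then come from the t distribution with df degrees of freedom (e.g. `scipy.stats.t.sf`), and a Bonferroni correction multiplies that p value by the number of comparisons performed.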
Discussion
The goals of this study were to provide the first description of the influence of an allocentric landmark on saccade accuracy in a cue-conflict task and to provide a systematic quantification of the spatial interactions between landmark location, target location, and gaze behavior. As expected, we found that the presence of a landmark reduced variability in gaze endpoints, but gaze also tended to be drawn toward the allocentric landmark. Further, when the landmark shifted during the memory interval, the upcoming gaze endpoint shifted in the same direction. Both of these effects were stronger when the landmark was nearer to the initial fixation point, and the shift effect was greater for landmark shifts away from the target, especially when these were away from the initial gaze position (i.e., in the saccade direction). We will consider each of these findings with respect to the literature and possible computational and neurophysiological mechanisms. 
Influence of a stable landmark
It is important to note that there is no such thing as “absolute space” in the post-Einstein worldview; even if humans wish to reference their behavior to a very large external frame such as the earth, we can only do so through indirect sensory cues, such as local gravitational cues and visual landmarks. Previous experiments have shown that humans are capable of encoding object and target locations relative to a part of their body (Crawford, Medendorp, & Marotta, 2004; Henriques et al., 1998; Lemay & Stelmach, 2005; McIntyre et al., 1998; Pouget, Ducom, Torri, & Bavelier, 2002; Vindras & Viviani, 1998), and that they can also encode these locations relative to surrounding landmarks and stimuli (Brouwer & Knill, 2007; Burnod et al., 1999; Carrozzo et al., 2002; Goodale & Haffenden, 1998; Obhi & Goodale, 2005; Olson, 2003). Typically, both types of information are available in the natural environment for spatial coding. Allocentric information provides an additional frame of reference, provided it is available at the same time as the target and in close proximity (Chen et al., 2011; Diedrichsen et al., 2004). 
Previous studies demonstrated that the presence of allocentric landmarks improves both the accuracy and precision of reach and gaze (Dassonville et al., 1995; Krigolson & Heath, 2004; Krigolson et al., 2007; Obhi & Goodale, 2005; Sharika et al., 2014; Thaler & Goodale, 2011). Results from the present study revealed an improvement in precision but not accuracy when monkeys made gaze shifts toward remembered target locations in the presence of a large-scale allocentric landmark. One possible explanation for the lack of improvement in gaze accuracy is interspecific differences between humans and nonhuman primates in their ability to make use of allocentric information. A more likely explanation is that our experiment introduced only a single allocentric landmark, whereas multiple allocentric landmarks are available when viewing natural scenes. Because allocentric landmarks exert an attractive bias on gaze endpoints, a lone landmark can act as a distractor. Not only do multiple landmarks provide a more reliable allocentric reference frame (Fiehler et al., 2014), they could also cancel out the landmark attraction effect, thereby improving gaze accuracy. 
Next, we tested the specific spatial influence of the allocentric landmark on gaze behavior and found that the landmark exerts an attractive influence on the mean gaze endpoint, with mean LIs of 0.14 (M1) and 0.17 (M2). On average, gaze endpoints shifted approximately 15% of the total distance between the remembered target location and the landmark in cued versus noncued conditions. Similarly, Diedrichsen et al. (2004) found that human reach behavior tends to be biased toward nearby landmarks in memory-guided reach tasks. Overall, this influence did not reduce gaze accuracy in our experiment, but note that because we counterbalanced landmark locations, these effects would tend to cancel out. Our data also showed a difference between high and low landmark eccentricities: landmarks located closer to the initial gaze position exerted a greater attractive influence on the gaze endpoint. A simple explanation is that allocentric landmarks increase in salience and relevance when they are closer to the currently attended fixation point, and this aligns with the natural tendency of saccades to undershoot targets (Baizer & Bender, 1989; Robinson, 1981). More generally, the attractive influence of our landmark appears consistent with the “center of gravity” and “averaging” effects that have been described many times for saccades, and so may share similar underlying mechanisms (Edelman et al., 2007; Glimcher & Sparks, 1993; He & Kowler, 1989; van Opstal & van Gisbergen, 1990). 
Influence of the landmark shift
The cue conflict task was designed to introduce a dissociation between egocentric and allocentric information. In this task, various spatial and gaze parameters can be controlled to assess their influence on egocentric and allocentric weighting. The current study manipulated the eccentricity of the allocentric landmark to the fixation point and the direction of landmark shift relative to both the fixation point and the target locations. 
Viewed as a cue conflict task, our experiment suggests that monkeys place an average weight of 25% on the allocentric landmark versus 75% on other, presumably egocentric, mechanisms, at least in the context of the experimental parameters that we used. The allocentric influence might be partially due to the attraction effect discussed above for static landmarks, but had nearly twice the gain (25% as opposed to 15%). These results generally agree with previous findings in human reach models, where egocentric and allocentric reference frames are combined based on their relative reliabilities (Byrne & Crawford, 2010; Fiehler et al., 2014; Thompson & Henriques, 2010). Data from these reach models established mean allocentric weights between 0.3 and 0.5, higher than the current result of 0.25. However, we cannot simply attribute this difference to differences in reach and gaze control: in the current data, certain combinations of parameters that maximize allocentric reliability can push the allocentric weight above 0.4. It is also important to note that human studies often used longer delays (5 to 10 s) than our study (0.5 to 0.9 s), leading to increased decay of egocentric information, which promotes the use of allocentric cues. 
Here, we further assessed three spatial parameters: the eccentricity of the allocentric landmark to the initial gaze location (fixation point) and the directions of landmark shift relative to the initial gaze and target locations. For landmark eccentricity, the results were comparable to the “landmark” versus “no landmark” experiment: dependence on allocentric information increases with the proximity of the landmark to the initial gaze position, presumably for reasons similar to those that were discussed already. 
For the direction of landmark shift, allocentric weight increased when the landmark shifted away from the fixation point or the target. This effect was especially marked when comparing landmark shift directions relative to the target, and it can be explained in terms of saliency and task relevance (Klinghammer, Blohm, & Fiehler, 2015). Because the monkeys must attend to the fixation point and target locations during this task, landmark shifts toward these locations were more likely to be perceived as apparent motion (for a review, see Nakayama, 1985). Although we hid the landmark shift behind the mask, the mask duration may not have been long enough to conceal apparent motion of the landmark toward attended regions of the screen, reducing allocentric reliability and producing the pattern observed here. Another possible cause for the decrease in allocentric weight when the landmark shifted toward the initial target location is conflict between egocentric and allocentric information, as the remembered target location and the landmark location are adjacent after the landmark shift. Previous studies have shown that remembered egocentric information decays over the course of 2 to 5 s and that people rely more on allocentric information as the memory delay increases (Carrozzo et al., 2002; Glover & Dixon, 2004; Goodale & Haffenden, 1998; Lemay et al., 2004; McIntyre et al., 1998; Obhi & Goodale, 2005). The current study used delays of 0.5 to 0.9 s; consequently, the monkeys preferred egocentric information when the reference frames conflicted. One might expect a gradual transition toward allocentric information with increasing memory delays. 
Finally, Byrne and Crawford (2010) showed that the egocentric-allocentric weighting could be changed by manipulating the relative reliability of egocentric information and the perceived stability of the landmark. We did not do this here (for reasons that will be discussed in the next section). However, we would expect Byrne and Crawford's (2010) findings to generalize to gaze in real-world settings. 
Study limitations
In order to train and maintain monkey behavior, it is necessary to reward correct behavior. This became a limitation in the current study because a normally small “spatial reward window” around the actual target location would teach animals not to use the allocentric landmark. Conversely, a small reward window around the virtually shifted target in the landmark “shift” condition would force monkeys to rely fully on the allocentric landmark. This is why we could not replicate the control conditions required for the model in Byrne and Crawford (2010): Training animals to perform pure egocentric and pure allocentric tasks (which one can simply ask humans to do) might not be possible and would very likely alter the normal weighting that one is trying to test. Here, we compromised by using a reward window small enough to promote consistent behavior while containing both the real and virtual targets. One might still argue that we trained animals to make gaze shifts somewhere within this range, but the fact that our results tend to agree with analogous human studies mitigates this concern (Byrne & Crawford, 2010; Fiehler et al., 2014). Further, our results would still show that monkeys are capable of using allocentric cues in behavior, even if they do not ordinarily do so. 
Another possibility is that we underestimated the allocentric weighting used in real-world conditions by conducting experiments in a visually impoverished environment. A more ecologically valid experiment on human reach was performed by Fiehler et al. (2014): natural scenes (a breakfast arrangement) were shown to subjects, who were instructed to point at an object missing from a subsequent image while other objects were moved according to several parameters. They found that allocentric weight increased with the number of task-relevant objects displaced in the second image. The monkeys' natural habitat contains many more complex landmarks that could all serve as allocentric cues, but we chose a single, relatively simple landmark to enable precise quantification of the data. Thus, it is certainly possible that allocentric cues play a stronger role in nature. 
Possible computational and neural mechanisms
At the computational level, allocentric versus egocentric weighting has been explained in terms of Bayesian integration (Fiehler et al., 2014). In particular, Byrne and Crawford (2010) proposed a maximum likelihood estimator (MLE) model that weighs these cues as a function of the reliability of their sources, together with a prior related to the perceived stability of the landmark. (Returning to the golf analogy in our Introduction, experience dictates that a tree is a more dependable landmark than a tumbleweed, even if there is no wind at the moment.) Such models can explain the general result of our landmark shift experiment, but not the nuances of the effects we observed, which might be modeled at the computational or mechanistic level in terms of saliency maps, motion detectors, and the behavior of neural populations in structures related to gaze control. For example, some of our results might be explained by aspects of superior colliculus physiology, such as the predominance of fixation-related signals in the anterior superior colliculus (Munoz & Wurtz, 1993), nonhomogeneities in superior colliculus receptive field topography (Hafed & Chen, 2016), or population behavior during saccade averaging (Glimcher & Sparks, 1993; van Opstal & van Gisbergen, 1990). Our findings might also be considered in terms of attractor dynamics within recurrent networks for spatial memory. However, physiological recording and modeling go beyond the scope of the current behavioral study. 
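For Gaussian cue noise, the reliability-weighted combination at the heart of such MLE models reduces to inverse-variance weighting. A minimal sketch of that core computation (omitting the stability prior, with hypothetical variances):

```python
def mle_combine(x_ego, var_ego, x_allo, var_allo):
    """Inverse-variance (maximum-likelihood) combination of an
    egocentric and an allocentric position estimate. The allocentric
    weight grows as egocentric noise grows relative to allocentric
    noise. Variances below are hypothetical, for illustration only."""
    w_allo = var_ego / (var_ego + var_allo)
    return w_allo * x_allo + (1.0 - w_allo) * x_ego, w_allo

# Egocentric variance one third of allocentric variance -> w_allo = 0.25,
# in line with the ~25% mean allocentric weight reported above.
estimate, w = mle_combine(0.0, 1.0, 8.0, 3.0)
print(estimate, w)  # 2.0 0.25
```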
More generally, neural representations of egocentric and allocentric reference frames have been studied as part of the dorsal and ventral streams of the two-stream hypothesis, respectively. Both streams originate from early visual areas (area V1) and project dorsally to the posterior parietal cortex (PPC) or ventrally to the inferior temporal cortex (IT) (Carey, Dijkerman, Murphy, Goodale, & Milner, 2006; Goodale & Humphrey, 1998; Goodale, Westwood, & Milner, 2004; Merigan & Maunsell, 1993; Schenk, 2006). The dorsal (action) stream has been suggested to represent and update visual target locations in gaze-centered coordinates in the PPC for action planning (Batista, Buneo, Snyder, & Andersen, 1999; Crawford et al., 2004; Fernandez-Ruiz, Goltz, DeSouza, Vilis, & Crawford, 2007; Medendorp, Goltz, Vilis, & Crawford, 2003; Pouget et al., 2002). Several neurophysiology studies have also shown projections from the ventral stream, including area V4 (Ungerleider, Galkin, Desimone, & Gattass, 2008) and the posterior inferior temporal area (TEO) (Distler, Boussaoud, Desimone, & Ungerleider, 1993; Webster, Bachevalier, & Ungerleider, 1994), to the lateral intraparietal area (LIP) in the PPC. Chen et al. (2011) proposed that allocentric information about a remembered target location is transformed into egocentric information at the earliest opportunity to be included in action planning. 
Allocentric information can also be processed in the frontal cortex. Olson and colleagues (Moorman & Olson, 2007a, 2007b; Olson & Gettner, 1995; Olson & Tremblay, 2000; Tremblay, Gettner, & Olson, 2002) showed object-centered responses in the supplementary eye field (SEF) in monkeys trained on object-centered saccade tasks. Granted, SEF projections are reciprocal with LIP, making it difficult to discern their unique roles. However, a study by Sabes, Breznen, and Andersen (2002) using a similar object-centered saccade task found no object-centered responses in LIP; instead, neuron firing rates correlated with movement direction in retinotopic coordinates. These studies, however, only examined interactions within objects, whereas our study examined interactions between saccade target representations and an independent landmark. To our knowledge, this has not yet been studied at the neurophysiological level. 
Conclusions
In this study, we designed a cue conflict task to investigate the effect of large-scale allocentric landmarks on gaze shifts toward remembered target locations, and we calculated a measure of allocentric weight to quantify behavioral reliance on egocentric versus allocentric information. The data demonstrated that a large-scale allocentric landmark has an attractive influence on gaze behavior both when it is stable and when it shifts. This influence varies with spatial parameters such as the eccentricity of the landmark and its shift direction relative to the target and the initial gaze position. These results corroborate our hypotheses, which were derived from the literature on egocentric versus allocentric reference frames in human reach models. This paper bridges the gap between previous studies comparing egocentric and allocentric reference frames in the human reach model and future neurophysiological studies in nonhuman primate gaze models aimed at revealing the neural substrates responsible for allocentric encoding of saccade targets. 
Acknowledgments
We would like to thank Dr. Amirsaman Sajad and Dr. Robert Marino for their contribution to the experimental setup, Dr. Xiaogang Yan and Saihong Sun for technical support, and Dr. Hongying Wang for contributing to data acquisition. This work was supported by the Canadian Institutes of Health Research. Dr. J. Douglas Crawford was supported by a Canada Research Chair and the Canada First Research Excellence Fund as a part of Vision: Science to Application, Dr. Robert Marino was supported by the Natural Sciences and Engineering Council CAN-ACT CREATE program, and Dr. Amirsaman Sajad was supported by an Ontario Graduate Scholarship. 
Commercial relationships: none. 
Corresponding author: J. Douglas Crawford. 
Email: jdc@yorku.ca
Address: Centre for Vision Research, York University, Toronto, Ontario, Canada 
References
Andersen, R. A., & Buneo, C. A. (2002). Intentional maps in posterior parietal cortex. Annual Review of Neuroscience, 25, 189–220. [PubMed] [Article]
Baizer, J. S., & Bender, D. B. (1989). Comparison of saccadic eye movements in humans and macaques to single-step and double-step target movements. Vision Research, 29, 485–498. [PubMed]
Batista, A. P., Buneo, C. A., Snyder, L. H., & Andersen, R. A. (1999). Reach plans in eye-centered coordinates. Science, 285, 257–260. [PubMed] [Article]
Battaglia-Mayer, A., Caminiti, R., Lacquaniti, F., & Zago, M. (2003). Multiple levels of representation of reaching in the parieto-frontal network. Cerebral Cortex, 13, 1009–1022. [PubMed]
Blohm, G., & Crawford, J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7 (5): 4, 1–22, doi:10.1167/7.5.4. [PubMed] [Article]
Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64, 476–482. [PubMed]
Bridgeman, B., Perry, S., & Anand, S. (1997). Interaction of cognitive and sensorimotor maps of visual space. Perception and Psychophysics, 59, 456–469. [PubMed]
Brouwer, A.-M., & Knill, D. C. (2007). The role of memory in visually guided reaching. Journal of Vision, 7 (5): 6, 1–12, doi:10.1167/7.5.6. [PubMed] [Article]
Burnod, Y., Baraduc, P., Battaglia-Mayer, A., Guigon, E., Koechlin, E., Ferraina, S.,… Caminiti, R. (1999). Parieto-frontal coding of reaching: an integrated framework. Experimental Brain Research, 129, 325–346. [PubMed]
Byrne, P. A., Cappadocia, D. C., & Crawford, J. D. (2010). Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Research, 50, 2661–2670. [PubMed] [Article]
Byrne, P. A., & Crawford, J. D. (2010). Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. Journal of Neurophysiology, 103, 3054–3069. [PubMed] [Article]
Carey, D. P., Dijkerman, H. C., Murphy, K. J., Goodale, M. A., & Milner, A. D. (2006). Pointing to places and spaces in a patient with visual form agnosia. Neuropsychologia, 34, 329–337. [PubMed] [Article]
Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research, 147, 426–436. [PubMed] [Article]
Chen, Y., Byrne, P. A., & Crawford, J. D. (2011). Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia, 49, 49–60. [PubMed] [Article]
Chen, Y., Monaco, S., Byrne, P. A., Yan, X., Henriques, D. Y., & Crawford, J. D. (2014). Allocentric versus egocentric representation of remembered reach targets in human cortex. Journal of Neuroscience, 34, 12515–12526. [PubMed] [Article]
Colby, C. L. (1998). Action-oriented spatial reference frames in cortex. Neuron, 20, 15–24. [PubMed] [Article]
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22, 319–349. [PubMed] [Article]
Crawford, J. D., Ceylan, M. Z., Klier, E. M., & Guitton, D. (1999). Three-dimensional eye-head coordination during gaze saccades in the primate. Journal of Neurophysiology, 81, 1760–1782. [PubMed] [Article]
Crawford, J. D., & Guitton, D. (1997). Primate head-free saccade generator implements a desired (post-VOR) eye position command by anticipating intended head motion. Journal of Neurophysiology, 78, 2811–2816. [PubMed] [Article]
Crawford, J. D., Henriques, D. Y., & Medendorp, W. P. (2011). Three-dimensional transformations for goal-directed action. Annual Review of Neuroscience, 34, 309–331. [PubMed] [Article]
Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye-hand coordination. Journal of Neurophysiology, 92, 10–19. [PubMed] [Article]
Dassonville, P., Schlag, J., & Schlag-Rey, M. (1995). The use of egocentric and exocentric location cues in saccadic programming. Vision Research, 35, 2191–2199. [PubMed] [Article]
de Grave, D. D. J., Brenner, E., & Smeets, J. B. (2004). Illusions as a tool to study the coding of pointing movements. Experimental Brain Research, 155, 56–62. [PubMed] [Article]
Deneve, S., & Pouget, A. (2003). Basis functions for object-centered representations. Neuron, 37, 347–359. [PubMed] [Article]
Diedrichsen, J., Werner, S., Schmidt, T., & Trommershäuser, J. (2004). Immediate spatial distortions of pointing movements induced by visual landmarks. Perception & Psychophysics, 66, 89–103. [PubMed]
Distler, C., Boussaoud, D., Desimone, R., & Ungerleider, L. G. (1993). Cortical connections of inferior temporal area TEO in macaque monkeys. Journal of Comparative Neurology, 334, 125–150. [PubMed] [Article]
Edelman, J. A., Kristjánsson, Á., & Nakayama, K. (2007). The influence of object-relative visuomotor set on express saccades. Journal of Vision, 7 (6): 12, 1–13, doi:10.1167/7.6.12. [PubMed] [Article]
Fernandez-Ruiz, J., Goltz, H. C., DeSouza, J. F., Vilis, T., & Crawford, J. D. (2007). Human parietal “reach region” primarily encodes intrinsic visual direction, not extrinsic movement direction, in a visual motor dissociation task. Cerebral Cortex, 17, 2283–2292. [PubMed] [Article]
Fiehler, K., Wolf, C., Klinghammer, M., & Blohm, G. (2014). Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Frontiers in Human Neuroscience, 8, 636. [PubMed] [Article]
Filimon, F. (2015). Are all spatial reference frames egocentric? Reinterpreting evidence for allocentric, object-centered, or world-centered reference frames. Frontiers in Human Neuroscience, 9, 648. [PubMed] [Article]
Foulsham, T., & Kingstone, A. (2010). Asymmetries in the direction of saccades during perception of scenes and fractals: effect of image type and image features. Vision Research, 50, 779–795. [PubMed] [Article]
Foulsham, T., Teszka, R., & Kingstone, A. (2011). Saccade control in natural images is shaped by the information visible at fixation: Evidence from asymmetric gaze-contingent windows. Attention, Perception, & Psychophysics, 73, 226. [PubMed] [Article]
Fuchs, A. F., & Robinson, D. A. (1966). A method for measuring horizontal and vertical eye movement chronically in the monkey. Journal of Applied Physiology, 21, 1068–1070. [PubMed]
Glimcher, P. W., & Sparks, D. L. (1993). Representation of averaging saccades in the superior colliculus of the monkey. Experimental Brain Research, 95, 429–435. [PubMed]
Glover, S., & Dixon, P. (2004). A step and a hop on the Müller-Lyer: Illusion effects on lower-limb movements. Experimental Brain Research, 154, 504–512. [PubMed] [Article]
Goodale, M. A., & Haffenden, A. (1998). Frames of reference for perception and action in the human visual system. Neuroscience & Biobehavioral Reviews, 22, 161–172. [PubMed] [Article]
Goodale, M. A., & Humphrey, G. K. (1998). The objects of action and perception. Cognition, 67, 181–207. [PubMed]
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25. [PubMed] [Article]
Goodale, M. A., Westwood, D. A., & Milner, A. D. (2004). Two distinct modes of control for object-directed action. Progress in Brain Research, 144, 131–144. [PubMed]
Hafed, Z. M., & Chen, C. Y. (2016). Sharper, stronger, faster upper visual field representation in primate superior colliculus. Current Biology, 26, 1647–1658. [PubMed] [Article]
Hanisch, C., Konczak, J., & Dohle, C. (2001). The effect of the Ebbinghaus illusion on grasping behaviour of children. Experimental Brain Research, 137, 237–245. [PubMed]
Hay, L., & Redon, C. (2006). Response delay and spatial representation in pointing movements. Neuroscience Letters, 408, 194–198. [PubMed] [Article]
He, P. Y., & Kowler, E. (1989). The role of location probability in the programming of saccades: implications for center-of-gravity tendencies. Vision Research, 29, 1165–1181. [PubMed]
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18, 1583–1594. [PubMed] [Article]
Hikosaka, O., & Wurtz, R. H. (1983). Visual and oculomotor functions of monkey substantia nigra pars reticulata. III. Memory-contingent visual and saccade responses. Journal of Neurophysiology, 49, 1268–1284. [PubMed] [Article]
Inoue, M., Harada, H., Fujisawa, M., Uchimura, M., & Kitazawa, S. (2015). Modulation of prism adaptation by a shift of background in the monkey. Behavioural Brain Research, 297, 56–66. [PubMed] [Article]
Karn, K. S., Møller, P., & Hayhoe, M. M. (1997). Reference frames in saccadic targeting. Experimental Brain Research, 115 (2), 267–282. [PubMed]
Khan, A. Z., Munoz, D. P., Takahashi, N., Blohm, G., & McPeek, R. M. (2016). Effects of a pretarget distractor on saccade reaction times across space and time in monkeys and humans. Journal of Vision, 16 (7): 5, 1–20, doi:10.1167/16.7.5. [PubMed] [Article]
Kim, H.-Y. (2013). Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis. Restorative Dentistry & Endodontics, 38, 52–54. [PubMed]
Klier, E. M., Wang, H., & Crawford, J. D. (2001). The superior colliculus encodes gaze commands in retinal coordinates. Nature Neuroscience, 4, 627–632. [PubMed] [Article]
Klier, E. M., Wang, H., & Crawford, J. D. (2003). Three-dimensional eye-head coordination is implemented downstream from the superior colliculus. Journal of Neurophysiology, 89, 2839–2853. [PubMed] [Article]
Klinghammer, M., Blohm, G., & Fiehler, K. (2015). Contextual factors determine the use of allocentric information for reaching in a naturalistic scene. Journal of Vision, 15 (13): 24, 1–13, doi:10.1167/15.13.24. [PubMed] [Article]
Krigolson, O., Clark, N., Heath, M., & Binsted, G. (2007). The proximity of visual landmarks impacts reaching performance. Spatial Vision, 20, 317–336. [PubMed]
Krigolson, O., & Heath, M. (2004). Background visual cues and memory-guided reaching. Human Movement Science, 23, 861–877. [PubMed] [Article]
Lee, B. T., & McPeek, R. M. (2013). The effects of distractors and spatial precues on covert visual search in macaque. Vision Research, 76, 43–49. [PubMed] [Article]
Lemay, M., Bertram, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16–32. [PubMed]
Lemay, M., & Stelmach, G. E. (2005). Multiple frames of reference for pointing to a remembered target. Experimental Brain Research, 164, 301–310. [PubMed] [Article]
Martinez-Trujillo, J. C., Wang, H., & Crawford, J. D. (2003). Electrical stimulation of the supplementary eye fields in the head-free macaque evokes kinematically normal gaze shifts. Journal of Neurophysiology, 89, 2961–2974. [PubMed] [Article]
McIntyre, J., Stratta, F., & Lacquaniti, F. (1998). Short-term memory for reaching to visual targets: psychophysical evidence for body-centered reference frames. Journal of Neuroscience, 18, 8423–8435. [PubMed] [Article]
Medendorp, W. P., Goltz, H. C., Vilis, T., & Crawford, J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23, 6209–6214. [PubMed] [Article]
Merigan, W. H., & Maunsell, J. H. (1993). How parallel are the primate visual pathways? Annual Review of Neuroscience, 16, 369–402. [PubMed] [Article]
Milner, A. D., & Goodale, M. A. (2008). Two visual systems re-viewed. Neuropsychologia, 46, 774–785. [PubMed] [Article]
Moorman, D. E., & Olson, C. R. (2007a). Combination of neuronal signals representing object centered location and saccade direction in the macaque supplementary eye field. Journal of Neurophysiology, 97, 3554–3566. [PubMed] [Article]
Moorman, D. E., & Olson, C. R. (2007b). Impact of experience on the representation of object-centered space in the macaque supplementary eye field. Journal of Neurophysiology, 97, 2159–2173. [PubMed] [Article]
Munoz, D. P., & Wurtz, R. H. (1993). Fixation cells in monkey superior colliculus. I. Characteristics of cell discharge. Journal of Neurophysiology, 70, 559–575. [PubMed] [Article]
Nakayama, K. (1985). Biological image motion processing: a review. Vision Research, 25, 625–660. [PubMed] [Article]
Neely, K. A., Tessmer, A., Binsted, G., & Heath, M. (2008). Goal-directed reaching: Movement strategies influence the weighting of allocentric and egocentric visual cues. Experimental Brain Research, 186, 375–384. [PubMed] [Article]
Obhi, S. S., & Goodale, M. A. (2005). The effects of landmarks on the performance of delayed and real-time pointing movements. Experimental Brain Research, 167, 335–344. [PubMed] [Article]
Olson, C. R. (2003). Brain representation of object-centered space in monkeys and humans. Annual Review of Neuroscience, 26, 331–354. [PubMed] [Article]
Olson, C. R., & Gettner, S. N. (1995). Object-centered direction selectivity in the macaque supplementary eye field. Science, 269, 985–988. [PubMed] [Article]
Olson, C. R., & Gettner, S. N. (1996). Brain representation of object-centered space. Current Opinion in Neurobiology, 6, 165–170. [PubMed] [Article]
Olson, C. R., & Tremblay, L. (2000). Macaque supplementary eye field neurons encode object-centered locations relative to both continuous and discontinuous objects. Journal of Neurophysiology, 83, 2392–2411. [PubMed] [Article]
Pouget, A., Ducom, J. C., Torri, J., & Bavelier, D. (2002). Multisensory spatial representations in eye-centered coordinates for reaching. Cognition, 83, B1–11. [PubMed] [Article]
Robinson, D. A. (1981). The use of control systems analysis in the neurophysiology of eye movements. Annual Review of Neuroscience, 4, 463–503. [PubMed] [Article]
Rothkopf, C. A., Ballard, D. H., & Hayhoe, M. M. (2007). Task and context determine where you look. Journal of Vision, 7 (14): 16, 1–20, doi:10.1167/7.14.16. [PubMed] [Article]
Sabes, P. N., Breznen, B., & Andersen, R. A. (2002). Parietal representation of object-based saccades. Journal of Neurophysiology, 88, 1815–1829. [PubMed] [Article]
Schenk, T. (2006). An allocentric rather than perceptual deficit in patient D.F. Nature Neuroscience, 9, 1369–1370. [PubMed] [Article]
Sharika, K. M., Ramakrishnan, A., & Murthy, A. (2014). Use of exocentric and egocentric representations in the concurrent planning of sequential saccades. Journal of Neuroscience, 34 (48), 16009–16021. [PubMed] [Article]
Sheth, B. R., & Shimojo, S. (2004). Extrinsic cues suppress the encoding of intrinsic cues. Journal of Cognitive Neuroscience, 16, 339–350. [PubMed] [Article]
Thaler, L., & Goodale, M. A. (2011). Reaction times for allocentric movements are 35 ms slower than reaction times for target-directed movements. Experimental Brain Research, 211, 313–328. [PubMed] [Article]
Thompson, A. A., & Henriques, D. Y. (2010). Locations of serial reach targets are coded in multiple reference frames. Vision Research, 50, 2651–2660. [PubMed] [Article]
Tremblay, F., & Tremblay, L. E. (2002). Cortico-motor excitability of the lower limb motor representation: A comparative study in Parkinson's disease and healthy controls. Clinical Neurophysiology, 113, 2006–2012. [PubMed] [Article]
Tremblay, L., Gettner, S. N., & Olson, C. R. (2002). Neurons with object-centered spatial selectivity in macaque SEF: do they represent locations or rules? Journal of Neurophysiology, 87, 333–350. [PubMed] [Article]
Tweed, D., Cadera, W., & Vilis, T. (1990). Computing three-dimensional eye position quaternions and eye velocity from search coil signals. Vision Research, 30, 97–110. [PubMed]
Uchimura, M., & Kitazawa, S. (2013). Cancelling prism adaptation by a shift of background: a novel utility of allocentric coordinates for extracting motor errors. Journal of Neuroscience, 33, 7595–7602. [PubMed] [Article]
Ungerleider, L. G., Galkin, T. W., Desimone, R., & Gattass, R. (2008). Cortical connections of area V4 in the macaque. Cerebral Cortex, 18, 477–499. [PubMed] [Article]
van Opstal, A. J., & van Gisbergen, J. A. M. (1990). Role of monkey superior colliculus in saccade averaging. Experimental Brain Research, 79, 143–149. [PubMed]
Vesia, M., & Crawford, J. D. (2012). Specialization of reach function in human posterior parietal cortex. Experimental Brain Research, 221, 1–18. [PubMed] [Article]
Vindras, P., & Viviani, P. (1998). Frames of reference and control parameters in visuomanual pointing. Journal of Experimental Psychology: Human Perception & Performance, 24, 569–591. [PubMed] [Abstract]
Vogeley, K., & Fink, G. R. (2003). Neural correlates of the first-person-perspective. Trends in Cognitive Sciences, 7, 38–42. [PubMed] [Article]
Webster, M. J., Bachevalier, J., & Ungerleider, L. G. (1994). Connections of inferior temporal areas TEO and TE with parietal and frontal cortex in macaque monkeys. Cerebral Cortex, 4, 470–483. [PubMed]
Wismeijer, D. A., & Gegenfurtner, K. R. (2012). Orientation of noisy texture affects saccade direction during free viewing. Vision Research, 58, 19–26. [PubMed] [Article]
Wu, C. C., & Kowler, E. (2013). Timing of saccadic eye movements during visual search for multiple targets. Journal of Vision, 13 (11): 11, 1–21, doi:10.1167/13.11.11. [PubMed] [Article]
Figure 1
 
Cue conflict task. (A) Time course for the cue conflict task. (B) The dotted circles represent the eye position at each interval, and the arrows indicate gaze shifts. The red dot represents the fixation point, the white dot represents the target, and the white crosses represent the allocentric landmark that spans the range of the screen. The red arrows represent a head-unrestrained gaze shift towards the remembered location of the target (T = original target location, T' = shifted target location).
Figure 2
 
Saccade endpoint correction procedure, sample session for M2. The procedure for factoring out general memory-guided saccade errors. (A) Gaze trajectories in the “no landmark” control condition are shown in blue. Target locations are represented using dark green circles. Mean gaze endpoints, magenta crosses, are calculated for each target location. (B) Close-up of the upper right target in Figure 2A. Landmark locations are represented using blue crosses. (C) Uncorrected mean gaze endpoints for each landmark in the cued “no shift” condition are shown in red. Black lines associate gaze endpoints to their corresponding landmarks. (D) Gaze endpoints for the “no shift” condition are corrected by subtracting the mean gaze endpoint.
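The correction described in Figure 2 — subtracting each target's mean memory-guided error, measured in the no-landmark control condition, from the landmark-condition endpoints — can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the function names and data layout are assumptions.

```python
import numpy as np

def mean_error(control_gaze, target):
    """Mean memory-guided error (gaze endpoint minus target) for one target
    location, computed from no-landmark control trials."""
    return np.mean(np.asarray(control_gaze, dtype=float) - np.asarray(target, dtype=float), axis=0)

def correct_endpoint(gaze, target, control_gaze):
    """Remove the target-specific memory-guided bias from a single gaze
    endpoint, as in Figure 2D (hypothetical helper)."""
    return np.asarray(gaze, dtype=float) - mean_error(control_gaze, target)
```

For example, if control trials for a target at (10, 5) land on average at (11, 4.5), a landmark-condition endpoint of (12, 4.5) is corrected to (11, 5), leaving only the landmark-attributable deviation.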
Figure 3
 
Distribution of landmark influence. (A) Histogram showing the distribution of LI (d/D) for M1 (X axis) plotted against the number of trials in each bin (Y axis), collapsed across all sessions. Inset: LI is calculated using the target location (T), landmark location (L), and gaze endpoint (G). (B) Histogram showing the distribution of LI for M2.
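Figure 3 defines LI as a ratio d/D computed from T, L, and G. One natural reading — assumed here, since the exact formula lives in the Methods — is the signed projection of the gaze error (G − T) onto the target-to-landmark axis, normalized by the target–landmark distance D:

```python
import numpy as np

def landmark_influence(T, L, G):
    """LI = d/D under the assumed formulation: d is the signed distance the
    gaze endpoint G has moved from the target T along the axis toward the
    landmark L, and D is the target-landmark distance. LI = 0 means no
    attraction toward the landmark; LI = 1 means gaze landed at the landmark."""
    T, L, G = (np.asarray(v, dtype=float) for v in (T, L, G))
    axis = L - T
    D = np.linalg.norm(axis)
    d = np.dot(G - T, axis) / D   # signed component of gaze error along the axis
    return d / D
```

With T = (0, 0) and L = (10, 0), a gaze endpoint at (3, 0) yields LI = 0.3, i.e., gaze deviated 30% of the way toward the landmark.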
Figure 4
 
Influence of landmark position relative to initial gaze position on landmark influence. (A) Comparison of the mean LI (Y axis) between landmarks positioned (relative to the target) closer or further from the initial gaze position (X axis) for M1. In other words, “closer” landmarks lie between initial gaze and the target, whereas “further” landmarks lie beyond the target, and “neutral” represents landmark intersections positioned orthogonal to this axis. (B) Overall LI sorted by landmark direction for M1. The green circle at the center represents target location, whereas the outer circle (blue) represents landmark locations. The mean LI for a given direction is represented by the intersection point of the red curve with the black line segment corresponding with each direction. The red curve is a cubic spline interpolation of the LI for each direction and serves as a visual guide for the data. Data for closer and further from initial gaze position are represented in the left and right semicircles, respectively. (C) Comparison of the mean LI (Y axis) between landmarks positioned closer or further from the initial gaze position (X axis) for M2. (D) Overall LI sorted by landmark direction for M2. Same convention as Figure 4B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 5
 
Converting raw data to allocentric weight, sample session for M2. (A) Gaze trajectories in the “no shift” condition are shown in blue. Target locations are represented using dark green circles. Mean gaze endpoints, magenta crosses, are calculated for each target location. (B) Uncorrected gaze endpoints in the “shift” condition are shown in red. Shifted target locations are represented using blue circles. Black lines associate gaze endpoints to their corresponding shifted targets. (C) Corrected gaze endpoints for the “shift” condition. (D) Each group of original target, shifted target, and gaze endpoint from Figure 5C is transformed to control for landmark shift direction and amplitude. Original target location is centered at the origin, the shifted target location is represented by the blue dot, and each red dot represents the gaze endpoint of a unique trial in this sample session.
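The transformation in Figure 5D — pooling trials by controlling for landmark shift direction and amplitude — can be pictured as translating each trial so the original target T sits at the origin, rotating so the shift vector T→T′ points along a canonical axis, and scaling by the shift amplitude, so that every shifted target lands at (1, 0). The sketch below is an illustrative reconstruction of that idea, not the paper's code:

```python
import numpy as np

def normalize_trial(T, T_shift, G):
    """Return the gaze endpoint G in shift-normalized coordinates: the
    original target T maps to (0, 0) and the shifted target T' maps to
    (1, 0), regardless of the trial's shift direction or amplitude.
    (Assumed reconstruction of the Figure 5D transformation.)"""
    T, T_shift, G = (np.asarray(v, dtype=float) for v in (T, T_shift, G))
    shift = T_shift - T
    amp = np.linalg.norm(shift)
    c, s = shift / amp                    # cosine and sine of the shift direction
    R = np.array([[c, s], [-s, c]])       # rotates the shift vector onto +x
    return R @ (G - T) / amp
```

For instance, a trial with T = (0, 0), T′ = (0, 2), and gaze at (0, 1) — halfway along the shift — normalizes to (0.5, 0), directly comparable with trials shifted in any other direction.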
Figure 6
 
Distribution of allocentric weights. (A) Histogram showing the distribution of AW for M1 (X axis) plotted against the number of trials (Y axis) collapsed across all sessions. AW is calculated using the original target location (T), shifted target location (T'), and gaze endpoint (G). (B) Histogram showing the distribution of AW for M2.
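The allocentric weight in Figure 6 is computed from T, T′, and G analogously to LI. Assuming the same projection-based formulation as above (the exact formula is in the Methods), AW is the component of the gaze error (G − T) along the target-shift axis T→T′, normalized by the shift magnitude:

```python
import numpy as np

def allocentric_weight(T, T_shift, G):
    """AW under the assumed formulation: 0 means gaze ignored the landmark
    shift entirely (purely egocentric coding); 1 means gaze followed the
    shifted target location T' completely (purely allocentric coding)."""
    T, T_shift, G = (np.asarray(v, dtype=float) for v in (T, T_shift, G))
    axis = T_shift - T
    # Projection of the gaze error onto the shift axis, divided by |axis|:
    return np.dot(G - T, axis) / np.dot(axis, axis)
```

With T = (0, 0) and T′ = (4, 0), a gaze endpoint at (1, 0) gives AW = 0.25, matching the roughly one-third weighting the paper reports in spirit (the numbers here are made up for illustration).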
Figure 7
 
Influence of allocentric landmark position relative to initial gaze position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks positioned (relative to the target) closer or further from the initial gaze position (X axis) for M1. In other words, “closer” landmarks lie between initial gaze and the target, whereas “further” landmarks lie beyond the target, and “neutral” represents landmark intersections positioned perpendicular to this axis. (B) Overall AW sorted by landmark direction for M1. The green circle at the center represents target location, whereas the outer circle (blue) represents shifted target locations. The mean AW for a given direction is represented by the intersection point of the red curve with the black line segment corresponding with each direction. The red curve is a cubic spline interpolation of the AW for each direction and serves as a visual guide for the data. Data for closer and further from initial gaze position are represented in the left and right semicircles, respectively. (C) Comparison of the mean AW (Y axis) between landmarks positioned closer or further from the initial gaze position (X axis) for M2. (D) Overall AW sorted by landmark direction for M2. Same convention as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 8
 
Influence of landmark shift relative to initial gaze position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks that shift towards or away from the initial gaze position (X axis) for M1. (B) Overall AW sorted by landmark shift direction for M1. Same conventions as Figure 7B. (C) Comparison of the mean AW (Y axis) between landmarks that shift towards or away from the initial gaze position (X axis) for M2. (D) Overall AW sorted by landmark shift direction for M2. Same convention as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.
Figure 9
 
Influence of landmark shift relative to initial target position on allocentric weight. (A) Comparison of the mean AW (Y axis) between landmarks that shift toward or away from the initial target position (X axis) for M1. (B) Overall AW sorted by landmark direction for M1. Same conventions as Figure 7B. (C) Comparison of the mean AW (Y axis) between landmarks that shift towards or away from the initial target position (X axis) for M2. (D) Overall AW sorted by landmark direction for M2. Same convention as Figure 7B. Error bars indicate one standard error of the mean; statistical significance is denoted by (*) above the bar graphs.