Research Article  |   October 2010
Similar perceptual costs for dividing attention between retina- and space-centered targets in humans
Journal of Vision October 2010, Vol.10, 4. doi:https://doi.org/10.1167/10.12.4
      Robert Niebergall, Lawrence Huang, Julio C. Martinez-Trujillo; Similar perceptual costs for dividing attention between retina- and space-centered targets in humans. Journal of Vision 2010;10(12):4. https://doi.org/10.1167/10.12.4.
© ARVO (1962-2015); The Authors (2016-present)

Abstract

Visual-spatial attention enhances the perception of behaviorally relevant stimuli. One issue that remains unclear is whether attention is preferentially allocated to stimuli that remain fixed in one reference frame (e.g., retina-centered), or whether it could be equally allocated to stimuli fixed in other frames. We investigated this issue by asking observers to covertly attend to sinusoidal gratings fixed in different reference frames and to discriminate changes in their orientation. First, we quantified orientation discrimination thresholds (ODTs) while subjects pursued a moving dot and either attended to a retina- or a space-centered grating. We then measured ODTs while subjects divided attention between the two gratings. We found that dividing attention proportionally increased ODTs for both target gratings relative to the focused attention condition. Second, we used the same stimulus configuration and conditions during a fixation task. Here, one grating was retina- and space-centered while the other moved in space and on the retina. Again, ODTs during divided attention proportionally increased for both gratings. These increases were similar to those measured during smooth pursuit. Our results show that humans can proportionally divide attention between targets centered in different reference frames during both smooth pursuit eye movements and fixations.

Introduction
Observers can define the position of visual objects in different frames of reference (see Wade & Swanston, 1996 for review), for example, relative to their gaze pointing direction (retina-centered) or relative to space (space-centered). One issue that has been targeted in studies of vision is the reference frame to which attentional resources are allocated. 
Functional imaging studies in humans have reported attentional modulation of brain signals across retinotopically organized areas of the visual cortex (Brefczynski & DeYoe, 1999; Tootell et al., 1998), suggesting that attention modulates the processing of retina-centered representations. It has also been shown that retina-centered targets lead to faster reaction times and higher accuracy rates during reflexive shifts of attention triggered by exogenous cues (Barrett, Bradshaw, Rose, Everatt, & Simpson, 2001), as well as during voluntary allocation of attention, induced by endogenous cues (Barrett, Bradshaw, & Rose, 2003). The latter study, however, also showed a significant, albeit smaller benefit of allocentric (with respect to other objects in the scene) target representations, suggesting that attention can be effectively directed to non-retina-fixed targets. 
Recently, Golomb, Chun, and Mazer (2008) showed that after a saccade, identification is faster and more accurate for targets falling on the same retinal location as a pre-saccadic cue. Changing the task instructions biased the processing in favor of space-centered targets, particularly at later time periods after saccades. Nevertheless, residual retina-centered facilitation persisted to some degree (Golomb, Pulido, Albrecht, Chun, & Mazer, 2010). The authors suggested that the space-centered bias may be due to fast post-saccadic updating of an attentional map in a retina-centered frame, and that the attentional effects triggered by the pre-saccadic cue might be transferred to the new space-centered target, a process resembling predictive remapping (Melcher, 2007). 
On the other hand, in many situations it may be intuitive to allocate attention in a space-centered reference frame. For instance, attending to a stationary peripheral object during self-motion or smooth pursuit movements requires attention to change position dynamically within a retinotopic map following the retinal displacement of the target representation. In this circumstance, it would be more efficient for attention to act on space-centered representations of the target object, avoiding repetitive shifts of attention across a retinotopic map and leading, at least in theory, to a more robust perceptual enhancement. 
The latter hypothesis implies that attention can act on space-invariant representations of objects. Indeed, it has been shown that responses of neurons in regions of the parietal cortex, which show space invariance within their receptive field region (Duhamel, Bremmer, BenHamed, & Graf, 1997; Galletti, Battaglini, & Fattori, 1993), are strongly modulated by attention (Cook & Maunsell, 2002). Some of these parietal cortex neurons seem to encode target motion in space-centered coordinates (Ilg, Schumann, & Thier, 2004; Thier & Ilg, 2005). Moreover, it has been suggested that neurons in human area MT/V5, where responses are also modulated by attention (Saenz, Buracas, & Boynton, 2002), encode the spatial rather than retinal position of visual stimuli (d'Avossa et al., 2007; Tootell et al., 1998; but see Gardner, Merriam, Movshon, & Heeger, 2008). Accordingly, a study using a task design comparable to that of Golomb et al. (2008) showed that inhibition of return linked to attention is stronger for space-centered targets (Pertzov, Zohary, & Avidan, 2010). 
In addition, during multiple-object tracking attention can be simultaneously allocated and maintained on several moving objects (for review see Cavanagh & Alvarez, 2005). Liu et al. (2005) showed that to successfully track multiple objects, subjects relied on the spatial position of objects relative to each other rather than on their retinotopic positions. This result suggests that during tracking attention can be allocated to allocentric representations. 
One potential approach to investigating whether attentional resources can be similarly allocated to targets in different frames of reference is to instruct subjects to divide attention between targets centered in the different frames and to quantify changes in performance relative to when attention is focused on each target. This approach rests on previous demonstrations that dividing attention between two visual targets, both fixed on the retina and in space, impairs perception of both stimuli relative to focusing attention on one stimulus at a time (Braun & Julesz, 1998; Joseph, Chun, & Nakayama, 1997; Lee, Itti, Koch, & Braun, 1999; Lee, Koch, & Braun, 1997). In the present study, we dissociated the reference frames of two perceptual targets during smooth pursuit eye movements (Experiment 1) and fixation (Experiment 2) and tested whether dividing attention between them leads to different perceptual costs relative to focusing attention on each target at a time. A lower cost for one target would indicate that attention is more easily allocated within the corresponding frame, whereas a similar cost for both targets would indicate that attention can be allocated equally to targets centered in different frames of reference. 
Experiment 1
We tested the ability of human observers to discriminate the direction of a transient change in the orientation of one of two sine-wave gratings, eccentrically positioned relative to the gaze pointing direction, while pursuing a moving dot. We measured the subjects' performance in three different conditions: (a) when focusing attention on a retina-centered target grating, (b) when focusing attention on a space-centered grating, and (c) when dividing attention between the two gratings. Importantly, in the three conditions the sensory stimulation was identical but the allocation of attention varied. 
Methods
Subjects
A total of 14 subjects with ages ranging from 20 to 40 years participated in the experiment. All of them had normal or corrected-to-normal vision and signed an informed consent form before starting the experiment. All the procedures used in this study were pre-approved by the Ethics Committee of the Faculty of Medicine at McGill University. Except for two of the authors (“rni”, “jcm”), all subjects were naïve as to the purpose of the experiment. 
Apparatus and stimuli
All stimuli were generated using a custom computer program running on a Macintosh G4 PowerPC. Stimuli were displayed on a CRT monitor (LaCie, Oregon, USA) at a refresh rate of 75 Hz and with a resolution of 1280 × 1024. The stimuli were presented on a white background (mean luminance = 41.5 cd/m2). They consisted of two identical sinusoidal gratings with a spatial frequency of 2 cycles per degree and a diameter of 1.6 degrees of visual angle. At the beginning of each trial, the two gratings appeared at the center of the screen superimposed on one another (Figure 1A). The smooth pursuit target was a black dot with a diameter of 0.6 degree of visual angle. Its initial position was either to the left, right, or above the two central sinusoidal gratings at an eccentricity of 8.75 degrees of visual angle. 
Figure 1
 
Experiment 1. (A) Trial sequence. The panels represent the visual display, the black dot represents the pursuit target, and the gratings represent the orientation discrimination targets. The gray and black curved arrows represent the grating's trajectory, the pursued target trajectory, and the grating tilt (clockwise and counterclockwise), respectively. Dotted lines indicate stimulus eccentricity. The labels 3a, 3b, and 3c illustrate the different attentional conditions. Attention was focused on (a) the retina-centered target or (b) the space-centered target, or (c) was divided between both. (B) Hypothetical retinal image of the attended gratings. The left cartoon represents a view of the eye and retina from behind, and the right one represents the stimulus display with the pursued dot and the grating's trajectory. The gray dashed arrows represent gaze direction.
The experiments were conducted in dim light conditions. Subjects were seated 57 cm away from the screen. Viewing was binocular. In order to avoid subjects using the screen's square edges or any other environmental cue as a reference for perceiving the orientation of the target, they viewed only a portion of the screen through a black cylinder (aperture 45 cm), rendering the visible area circular (Figure 1, see also Ruiz-Ruiz & Martinez-Trujillo, 2008). Responses were collected via a standard keyboard. 
Procedure
The different experimental conditions were run in separate blocks. Each trial began with the appearance of the black dot at the screen periphery and the two vertically oriented, superimposed sinusoidal gratings at the center. 
Subjects were instructed to fixate the peripheral dot (Figure 1A, 1). After 660 ms, the dot and the upper one of the two sinusoidal gratings started moving at the same angular velocity of 5.75 degrees per second (deg/s) describing a quarter of a circle around the screen center (Figure 1A, 2). Within a trial, the direction of both moving stimuli was identical, but it could vary across trials, moving from the horizontal to the vertical meridian or vice versa. The second sinusoidal grating always remained stationary at the screen center. Subjects were instructed to pursue the dot and discriminate the direction (clockwise or counterclockwise) of a transient change (lasting 80 ms) in the orientation of the target grating. The two experimental conditions were named according to the reference frame in which the target grating was fixed. In the retina-centered condition, subjects were instructed to attend to the grating moving with the pursuit dot (Figure 1A, 3a), which remained fixed in a retina-centered frame (Figure 1B, left panel). In the space-centered condition (Figure 1A, 3b), subjects attended to the grating that remained stationary at the screen center. This grating moved on the retina but remained fixed in space (Figure 1B, right panel). In these conditions, the orientation change occurred in the attended grating with a probability of 1. In the divided attention condition (Figure 1A, 3c), the orientation change could occur in either of the gratings with equal probability (0.5). This rendered both stimuli equally relevant and therefore encouraged subjects to “proportionally” divide their attention between them. 
Target gratings changed their orientation at one of two possible times (after 1170 ms, or after 1730 ms from movement onset), encouraging subjects to attend to the target grating throughout the entire trial. At the end of the trial (Figure 1A, 4) and after the smooth pursuit was completed, subjects reported the direction of the orientation change (left arrow key for counterclockwise or right arrow key for clockwise). Instructions were given to respond as accurately as possible. We did not instruct the subjects to give speeded responses. 
Psychophysical data acquisition
We used a weighted staircase method (Kaernbach, 1991) to determine ODTs for each subject and condition. The up (miss)/down (hit) algorithm of 3/1 converged to the orientation change level at 75% correct response rate on the psychometric function. Orientation change intensities were defined in polar angles. They ranged from ±1° (smallest) to ±40° (largest), with positive signs representing clockwise changes and negative signs representing counterclockwise changes with respect to the vertical. For every possible combination of movement direction of the pursuit spot (from horizontal to vertical meridian and vice versa) and tilt direction (clockwise, counterclockwise) of the target grating, a separate staircase was run, resulting in a counterbalanced number of left and right responses within an experimental block. Each of the staircases was presented in pseudo-randomized order, which prevented the subject from predicting the tilt direction in the current trial. Blocks of each condition (retina-centered and space-centered) consisted of 80 trials each. Staircases within a block were sampled at a fixed number of 20 trials. For each subject, the order of the different blocks was randomized across experimental sessions. 
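The logic of the weighted up/down rule can be sketched as follows. This is an illustrative simulation, not the authors' code (the original experiment ran in a custom program); `run_staircase`, the deterministic simulated observer, and the parameter values are all hypothetical, and the sketch omits the ±1° to ±40° intensity bounds used in the experiment.

```python
def run_staircase(respond, start=10.0, step=1.0, n_trials=80):
    """Kaernbach (1991) weighted up/down staircase (sketch).

    Intensity moves down by 1 step after a hit and up by 3 steps after
    a miss, so the track settles where p(hit) * 1 = p(miss) * 3,
    i.e. at the 75%-correct intensity on the psychometric function.
    """
    intensity = start
    track = []
    for _ in range(n_trials):
        hit = respond(intensity)           # True = correct discrimination
        track.append((intensity, hit))
        intensity += -step if hit else 3.0 * step
    return track

# Hypothetical deterministic observer with a 6-degree tilt threshold:
track = run_staircase(lambda tilt: tilt >= 6.0)
late = [intensity for intensity, _ in track[-40:]]   # post-convergence trials
```

With this noiseless observer, the track descends from the starting value and then oscillates in a fixed cycle around the threshold, so the mean of the late trials sits just above it; with a real, probabilistic observer, the oscillation is noisier but converges to the same 75% point.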
Before entering the data collection sessions, all subjects underwent at least three training sessions of 1-h duration each, after which they reached a stable performance. This avoided the effect of learning as a confounder in our final measurements (Ahissar & Hochstein, 1997) and allowed subjects to become proficient in pursuing the dot without making saccades toward the covertly attended grating(s). During training sessions, auditory feedback on the subject's performance was provided. The feedback was removed during experimental sessions. In order to start each staircase at an appropriate difficulty level, close to the potential convergence point, we adjusted the initial orientation change intensity of each staircase to the ODT obtained in the last training session. These values usually differed between targets in the two reference frames. Once the subject entered the experiments, the values were kept constant across sessions. Each subject completed four blocks of each condition. In 9 subjects, the smooth pursuit spot moved across the left upper screen quadrant in two of the blocks, and in the other two, it moved across the right upper quadrant. In 4 subjects, we tested only two blocks with targets moving in only one direction and located in the lower right quadrant. The latter was done after analyzing the data from the first 9 subjects and concluding that the quadrant and the direction of the pursuit had no effect on the trial outcome (see Supplementary Figure 1). 
Eye movement recordings
In order to control for smooth pursuit accuracy and potential saccades toward the target grating, eye position signals from both eyes were recorded using an infrared video-based head-mounted eye tracking system (Chronos Vision, Berlin, Germany) with a manufacturer-specified spatial resolution of 0.05°. Head movements were restrained by using a bite bar, which was customized for each subject using dental impression putty. The eye tracker was fixed to the head and firmly adjusted in order to avoid translational movements of the transducers relative to the subjects' head (DiScenna, Das, Zivotofsky, Seidman, & Leigh, 1995). We have previously used this procedure in a more demanding task in terms of head stabilization and it proved to be accurate (Ruiz-Ruiz & Martinez-Trujillo, 2008). 
The horizontal and vertical eye position components were monitored online at a sampling frequency of 100 Hz. The eye position data as well as the stimulus parameters and timing of the events during each trial were stored on a computer hard drive for offline analysis. The eye tracking software used a pupil tracking algorithm for converting video images into eye-in-head position signals. Each experimental block started with a calibration procedure in which subjects were instructed to fixate a single dot at four different positions on the monitor (center, 10 degrees up, down, right, and left of the center). 
Data analysis
Smooth pursuit movements
Eye position data were processed and analyzed using Matlab (The Math-Works, Natick, MA, USA). Smooth pursuit data were isolated by visually choosing the onset and offset of the smooth pursuit movement during offline display of trial events and eye movement signals. Eye position signals were filtered by a second-order Butterworth filter with a cut-off frequency of 30 Hz. Blinks were detected by visual inspection of the position signals and were removed from the data. 
In order to quantify the measurement error of the smooth pursuit signals and to obtain a threshold velocity criterion for saccade detection, we conducted eye movement measurements in three subjects using a different task (see Supplementary material for detailed explanation of the task and results). Small saccades (1 degree visual angle off the smooth pursuit target) reliably resulted in angular velocities >20 deg/s (Supplementary Figure 2D). Based on these measurements, we established that if velocity exceeded 20 deg/s during the two time periods of the target change (1170–1250 ms or 1730–1810 ms after smooth pursuit target motion onset) a trial was determined to contain a saccade and removed from the analysis. We also excluded those trials in which the removed segments comprised more than 30% of the smooth pursuit duration. Across the included subjects, the total number of discarded trials due to saccades was marginal and they were approximately equally distributed across conditions (1% focused retina-centered, 1.1% focused space-centered, 2.8% divided attention; p = 0.13, one-way ANOVA). We therefore considered the effect of saccades on ODT measurements negligible. One subject, however, was excluded from the analysis due to the presence of numerous saccades toward the target grating during the orientation change period. 
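The saccade-exclusion rule above can be sketched as a simple velocity-threshold check. This is a minimal illustration of the criterion described in the text, not the original Matlab analysis; the function name and the synthetic trial data are assumptions.

```python
import numpy as np

def contains_saccade(ang_vel, t, vmax=20.0,
                     windows=((1.170, 1.250), (1.730, 1.810))):
    """Flag a trial as containing a saccade if eye angular velocity
    exceeds vmax (deg/s) inside either orientation-change window
    (times in seconds after pursuit-target motion onset)."""
    ang_vel, t = np.asarray(ang_vel), np.asarray(t)
    for t0, t1 in windows:
        in_window = (t >= t0) & (t <= t1)
        if np.any(ang_vel[in_window] > vmax):
            return True
    return False

# Hypothetical 100-Hz trial: steady 5.75 deg/s pursuit, and a copy
# with a 40 deg/s spike (a small saccade) around 1.2 s
t = np.arange(0.0, 2.5, 0.01)
smooth_trial = np.full_like(t, 5.75)
spiky_trial = smooth_trial.copy()
spiky_trial[(t >= 1.19) & (t <= 1.21)] = 40.0
```

The smooth trial passes the check, while the trial with the spike inside the first change window is flagged for removal.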
Smooth pursuit velocity
For each trial, the smooth pursuit horizontal and vertical velocity (vel) components were computed using the following equation: 
\[ vel\left(i + \frac{\Delta t}{2}\right) = \frac{x(i + \Delta t) - x(i)}{\Delta t}, \]
(1)
where x is the eye position in degrees at time i, and Δt is the time window (50 ms) over which the velocity is calculated. 
The resulting component velocities were then used to compute eye angular velocity (angVel) using 
\[ angVel = \sqrt{vel_{Horizontal}^{2} + vel_{Vertical}^{2}}. \]
(2)
 
Angular velocity signals were filtered by a Butterworth filter with a cut-off frequency of 20 Hz (Schütz, Braun, & Gegenfurtner, 2009). Since we were mainly interested in eye velocity signals before and during the time of the orientation change, only the velocity data at the time around the potential target changes (1000–1900 ms after smooth pursuit onset) were included in the analysis, leaving out the initial acceleration and compensation of the catch-up saccade. For each trial, we computed a smooth pursuit gain by dividing eye angular velocity by the velocity of the smooth pursuit dot. A gain of 1.0 denotes identical eye and target velocities. Mean pursuit gains for each experimental condition were obtained by pooling single trial data. 
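Equations 1 and 2 and the gain computation can be sketched in a few lines. The original analysis was done in Matlab; the Python function names below are mine, the synthetic quarter-circle trajectory is hypothetical, and the sketch omits the Butterworth filtering step.

```python
import numpy as np

def component_velocity(x, dt_samples=5, fs=100.0):
    """Eq. 1: velocity from position differences over a Delta-t window
    (Delta-t = 50 ms = 5 samples at the 100-Hz sampling rate)."""
    dt = dt_samples / fs
    return (x[dt_samples:] - x[:-dt_samples]) / dt

def angular_velocity(vel_h, vel_v):
    """Eq. 2: combine horizontal and vertical velocity components."""
    return np.hypot(vel_h, vel_v)

def pursuit_gain(ang_vel, target_speed):
    """Gain = mean eye angular velocity / pursuit-dot velocity."""
    return float(np.mean(ang_vel)) / target_speed

# Hypothetical noiseless pursuit along a circular arc:
# radius 8.75 deg, speed 5.75 deg/s, sampled at 100 Hz for 2 s
fs, radius, speed = 100.0, 8.75, 5.75
t = np.arange(0.0, 2.0, 1.0 / fs)
phi = (speed / radius) * t                       # angle traversed along the arc
x, y = radius * np.cos(phi), radius * np.sin(phi)
gain = pursuit_gain(angular_velocity(component_velocity(x),
                                     component_velocity(y)), speed)
```

For this ideal trajectory the eye moves at exactly the target speed, so the computed gain is 1 up to the small attenuation introduced by the 50-ms differencing window.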
Smooth pursuit spatial accuracy
In order to measure the smooth pursuit accuracy, we first obtained 2D trajectories by plotting horizontal vs. vertical eye position signals from onset to offset of the smooth pursuit movement for every trial. The distance (dist) from each single data point (i) to the center of the space-centered target grating (which was also the center of the screen) was computed using 
\[ dist(i) = \sqrt{x(i)^{2} + y(i)^{2}}, \]
(3)
where x is the horizontal eye position (deg), and y is the vertical eye position at sampling point i. In order to obtain an average radius per trial, the mean of the single point distances (radii) was calculated. The mean radii of individual trials were subsequently pooled across experimental conditions and used for statistical analysis. We also estimated the variability of the eye positions around the mean radius by computing, for every trial, the standard deviation (Std) of the distribution of single point distances around the radius. Thereafter, a mean Std was determined for each subject and condition. 
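Applied to one trial, Equation 3 and the two accuracy measures reduce to the following sketch (function name and synthetic trajectory are illustrative, not the original Matlab code):

```python
import numpy as np

def radius_and_std(x, y):
    """Eq. 3 per sample, then the per-trial accuracy measures:
    mean radius (mean distance of eye positions to screen center)
    and the Std of the distances around that radius."""
    dist = np.hypot(x, y)          # dist(i) = sqrt(x(i)^2 + y(i)^2)
    return float(dist.mean()), float(dist.std())

# Hypothetical perfect quarter-circle trajectory at 8.75 deg eccentricity
phi = np.linspace(0.0, np.pi / 2, 200)
x, y = 8.75 * np.cos(phi), 8.75 * np.sin(phi)
mean_radius, radius_std = radius_and_std(x, y)
```

A perfectly circular trajectory yields a mean radius equal to the eccentricity and a Std of zero; real trajectories deviate from both, which is what Figures 3B and 3C quantify.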
Orientation discrimination performance
ODTs were computed using the staircase method (see above). Figure 2 shows staircases from a sample subject. Each staircase represents a different combination of reference frame (retina-centered and space-centered) and attentional state (focused attention and divided attention). The orientation change intensity in degrees off the vertical (0°) is plotted as a function of staircase trial number. Staircases started at 6° for the retina-centered trials (white) and at 10° for the space-centered trials (gray) and converged to the threshold level. The smooth pursuit movement direction (upward) and orientation change direction of the target grating (rightward) were identical for all staircases. 
Figure 2
 
Experiment 1. Staircases corresponding to one sample subject (“jcm”). The abscissa represents the trial number, and the ordinate represents the orientation change intensity. The data points on the right are the ODTs, obtained by averaging the orientation change intensities corresponding to each staircase after the third reversal point.
For each staircase, we computed an ODT, defined as the mean orientation change intensity across trials occurring after the third reversal point. It represents the magnitude of the orientation change (in degrees) at which the subject correctly discriminates the change direction relative to the vertical in 75% of the trials. In order to ensure that our staircase procedure correctly converged to the ODT, staircases with less than five reversal points were excluded. Across subjects, this led to the dismissal of 10.3% of the total number of recorded staircases (retina-centered/focused attention: 10.1%; retina-centered/divided attention: 7.5%; space-centered/focused attention: 2.2%; space-centered/divided attention: 11.2%). 
To quantify the effects of attentional state (focused vs. divided) and of reference frame (retina-centered vs. space-centered) on ODTs, we computed two indices: an Attentional Modulation Index (AMI; Treue & Martinez Trujillo, 1999) and a Reference Frame Index (RFI). The AMI was defined by 
\[ AMI = \frac{ODT_{\text{divided attention}} - ODT_{\text{focused attention}}}{ODT_{\text{divided attention}} + ODT_{\text{focused attention}}}. \]
(4)
 
The AMI values vary between −1.0 and +1.0 with negative values representing lower ODTs with divided attention, positive values representing lower ODTs with focused attention, and zero representing no differences. Likewise, the RFI was defined by 
\[ RFI = \frac{ODT_{\text{space-centered}} - ODT_{\text{retina-centered}}}{ODT_{\text{space-centered}} + ODT_{\text{retina-centered}}}. \]
(5)
 
Here, negative values represent lower ODTs in the space-centered condition, positive values represent lower ODTs in the retina-centered condition, and zero represents no differences. 
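The ODT extraction and the two indices (Equations 4 and 5) can be sketched as below. This is an illustrative reconstruction, not the authors' code; it uses the simplification that, in a weighted up/down staircase, the step direction reverses exactly when the response changes from hit to miss or vice versa.

```python
import numpy as np

def odt_from_track(intensities, hits, skip=3, min_reversals=5):
    """ODT = mean orientation-change intensity over the trials that
    follow the third reversal point; staircases with fewer than five
    reversals are discarded (returned as None)."""
    # In a 3-up/1-down staircase, a direction reversal coincides
    # with a change in response relative to the previous trial.
    reversals = [i for i in range(1, len(hits)) if hits[i] != hits[i - 1]]
    if len(reversals) < min_reversals:
        return None
    return float(np.mean(intensities[reversals[skip - 1] + 1:]))

def ami(odt_divided, odt_focused):
    """Eq. 4: positive when focused attention yields the lower ODT."""
    return (odt_divided - odt_focused) / (odt_divided + odt_focused)

def rfi(odt_space, odt_retina):
    """Eq. 5: positive when the retina-centered ODT is lower."""
    return (odt_space - odt_retina) / (odt_space + odt_retina)

# Hypothetical converged staircase track (hit = correct response)
track_intensities = [10, 9, 8, 7, 6, 5, 8, 7, 6, 5, 8, 7, 6, 5]
track_hits = [True] * 5 + [False] + [True] * 3 + [False] + [True] * 3 + [False]
odt = odt_from_track(track_intensities, track_hits)
```

For this track, the third reversal occurs at the second miss, so the ODT averages the four trials that follow it; both indices are bounded between −1 and +1 by construction.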
Statistical analysis
In order to avoid any unwarranted assumptions about the underlying distributions of ODTs, we used non-parametric statistical tests to analyze the performance data. However, the results did not change when parametric tests (t-test) were applied. To facilitate the interpretation of the eye position data and comparison with the existing literature, we used parametric statistics. 
Results
In order to determine whether subjects accurately pursued the target in the different experimental conditions, we measured the subjects' eye position during trials. We will address both the spatial and the temporal movement profiles during the different conditions. 
Smooth pursuit spatial accuracy
We first tested whether eye position trajectories diverged from the pursuit target trajectory. 
Figure 3A shows an example of a smooth pursuit trajectory (gray) of one subject superimposed on its mean radius (solid black line). The mean radius represents the average distance of all the data points along the trajectory to the central target. Figure 3B displays the population mean radii as a function of experimental condition. They are located close to the trajectory of the pursuit dot (dashed line) but were found to be systematically lower (p < 0.0001 in all three conditions, t-test). A possible explanation for this result is that subjects tried to minimize the distance between the smooth pursuit dot and the gratings by fixating the edge closer to the target, rather than the dot's center. More importantly, mean radii of the three conditions (retina-centered = 8.59, space-centered = 8.6, divided attention = 8.56) were not significantly different from each other (p = 0.79, one-way ANOVA). This indicates that subjects pursued the dot with similar accuracy in all conditions. 
Figure 3
 
Experiment 1. Smooth pursuit accuracy. (A) Sample trial. The gray trace depicts horizontal and vertical positions of a smooth pursuit eye movement trajectory moving from left bottom to top right. The black dashed line represents the trajectory of the pursuit dot (black disk), and the black solid line represents the mean radius corresponding to the smooth pursuit movement trajectory. The black arrows represent the positions at which the gratings could change orientation. The bottom right grating represents the space-centered target. (B) Mean radii values (±1 Std) of pursuit trajectories in the different experimental conditions across subjects (n = 13). The dashed line represents the trajectory of the pursuit dot. (C) Mean standard deviation of the radii (±1 Std).
In order to test how well each trajectory “adhered” to the mean radius, we computed the Std of the eye trajectories in each trial. A low Std indicates that the trajectory closely followed the circular motion of the smooth pursuit dot. Figure 3C shows the mean Std across subjects in the three conditions. The average deviations from the mean radius are small (retina-centered = 0.233, space-centered = 0.221, divided attention = 0.227), considering that on average they fall within the size of the smooth pursuit dot (0.6-degree diameter, Figure 3A). We therefore conclude that the subjects' eye position trajectories are well described by a quarter of a circle. In addition, the mean Std values of the three groups closely resembled each other (p = 0.89, one-way ANOVA), indicating that trajectories did not significantly change across the experimental conditions. 
Smooth pursuit velocity
Making accurate smooth pursuit eye movements requires the eyes to rotate with similar velocity as the pursuit target. We examined whether subjects showed systematic changes in eye movement velocity relative to the pursuit target velocity across experimental conditions. 
Figure 4A shows the means and confidence intervals of velocity profiles in the three experimental conditions for one sample subject. Each profile includes trajectories from all valid trials of one experimental block. All profiles show an initial acceleration phase followed by a stable plateau at target velocity (dashed line). During the analysis period used to compute the pursuit gain (highlighted in gray), the velocity profiles closely match the pursuit target velocity and largely overlap one another. 
Figure 4
 
Experiment 1. (A) Mean eye angular velocities as a function of trial time in the different conditions for one subject. Eye velocities (colored lines) are superimposed on the velocity of the smooth pursuit spot (dashed line). The shaded areas represent the 95% confidence intervals of the mean. (B) Average smooth pursuit (SP) gain across subjects for the different conditions (n = 13). For each subject, averages were computed over the time period indicated by the gray shaded area in (A). Error bars indicate Std.
Figure 4B shows the mean smooth pursuit gain across subjects in the three conditions. The mean values are close to 1 (retina-centered = 1, space-centered = 0.98, divided attention = 0.99, black solid line) and were not significantly different between conditions (p = 0.82, one-way ANOVA). These results suggest that subjects tracked the pursuit target with similar accuracy in the three conditions without making eye movements toward the corresponding target grating. 
Orientation discrimination performance
We measured ODTs while subjects covertly attended to either one (focused attention) or both (divided attention) target gratings during the smooth pursuit task. Figure 5A shows the effect of dividing attention on ODTs for targets in the two reference frames. A single data point represents the ODT value during focused attention (abscissa) and its corresponding value during divided attention (ordinate). Each subject contributes two data points, one representing the retina-centered (white circles) and the other the space-centered (gray triangles) condition. 
Figure 5
 
Experiment 1. (A) Effect of dividing attention between targets in retina-centered and space-centered reference frames. Each symbol represents the average ODT of an individual subject. ODTs for retina-centered (white circles) and space-centered targets (gray triangles) during focused attention are plotted against their corresponding ODTs during divided attention (n = 13). (B) Average reference frame index (RFI) computed on focused attention data in (A). (C) Average attentional modulation index (AMI) as a function of reference frame. The error bars represent SEM.
We first tested whether during focused attention ODTs were different for targets fixed in the two reference frames. The clear separation between ODTs corresponding to retina-centered and space-centered targets (along the abscissa in Figure 5A) suggests consistently higher values for the latter. We quantified this effect by computing RFIs (Equation 5) for each subject. The population average RFI (Figure 5B) is significantly above zero (mean = 0.34, p = 0.0002, Wilcoxon signed-rank test), confirming higher ODTs for space-centered relative to retina-centered targets. One likely cause of this difference in ODTs is that, besides being centered in different frames, the two targets had different retinal velocities. Indeed, in a control experiment with three subjects, we found that ODTs steadily increased as the retinal velocity of a target increased (see Supplementary Figure 3). This indicates that, when evaluating possible changes in ODTs in the divided relative to the focused attention condition, we must compensate for the effect of each target's retinal velocity. 
Likewise, Figure 5A shows that for both targets data points are located above the unity line, suggesting that dividing attention increases ODTs for both retina- and space-centered targets. In order to quantitatively test this result, taking into account the effect of retinal velocity on ODTs, we computed an AMI for each subject (Equation 4). This index computes relative changes in ODTs in the divided compared to the focused attention condition for a given target retinal velocity. 
Figure 5C displays the average AMI across subjects for both target gratings. Positive values indicate larger ODTs in the divided relative to the focused attention condition. Both average AMIs are larger than zero (retina-centered = 0.1 [or 22%]; space-centered = 0.08 [or 17%]), indicating that dividing attention increases ODTs (retina-centered: p = 0.006; space-centered: p = 0.001, Wilcoxon signed-rank test). When comparing the mean values corresponding to both reference frames, we found no difference in the magnitude of attentional modulation (p = 0.5, Wilcoxon signed-rank test for paired data). These data show that the relative cost of dividing attention on ODTs was similar for both the retina-centered and space-centered targets. 
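Equation 4 itself is defined earlier in the paper and is not reproduced in this section, but the normalized-difference form assumed in the sketch below reproduces the reported correspondence between index values and percentage changes (an AMI of 0.1 implies ODT_divided/ODT_focused = 1.1/0.9, i.e., about 22% higher; 0.08 implies about 17%). Subject values are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

def ami(odt_divided, odt_focused):
    """Assumed AMI form: positive → higher thresholds when dividing attention."""
    return (odt_divided - odt_focused) / (odt_divided + odt_focused)

def percent_increase(ami_value):
    # An AMI of a implies ODT_divided / ODT_focused = (1 + a) / (1 - a).
    return ((1 + ami_value) / (1 - ami_value) - 1) * 100

print(round(percent_increase(0.10)))  # → 22
print(round(percent_increase(0.08)))  # → 17

# One-sample Wilcoxon signed-rank test of per-subject AMIs against zero
# (hypothetical per-subject values):
amis = np.array([0.12, 0.08, 0.11, 0.05, 0.14, 0.09, 0.10])
stat, p = wilcoxon(amis)
```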
Discussion
The results of this experiment demonstrated that (a) retina-centered targets lead to lower ODTs than space-centered targets and (b) the relative increase in ODTs caused by dividing attention between both target gratings—each one centered in a different frame—was similar. 
We consider the differences in the targets' retinal velocities a potential explanation for the former result. During smooth pursuit, the image of the retina-centered target remains stationary relative to the retinal surface (Figure 1B, left panel), while the image of the space-centered target constantly changes its position on the retina (Figure 1B, right panel). Changing retinal position over time may produce motion blur (Chung, Levi, & Bedell, 1996; Land, 1999) and deteriorate the quality or stability of the space-centered target representation by orientation-selective visual neurons in areas such as V1 and V4 with retina-fixed receptive fields (Maunsell & Newsome, 1987). This may ultimately lead to impaired perception of orientation changes in this target. We measured changes in ODTs as a function of retinal velocity in three subjects, and indeed, we found a continuous increase in ODTs with increasing retinal velocity (Supplementary Figure 3). 
Another possibility, which may explain both the difference in ODTs between targets and the lack of differences in smooth pursuit gain across conditions, is that smooth pursuit during attentional tracking of the space-centered target grating was more difficult. In order to compensate, subjects may have "dragged attention away" from the target grating to the pursuit target, keeping pursuit gain similar but causing a decrease in discrimination performance for the space- relative to the retina-centered target. We will address this issue in Experiment 2. 
The second result of this experiment was our primary focus of interest. We found a similar increase in ODTs for both targets in the divided relative to the focused attention condition. This result is consistent with previous studies of divided attention between stimuli that remained fixed in the same frame of reference (Braun & Julesz, 1998; Joseph et al., 1997; Lee et al., 1999, 1997). Our subjects performed on average 20% better in the focused relative to the divided attention condition. The magnitude of this effect is comparable to that reported by Joseph et al. (1997) using Gabor stimuli. 
We have previously considered the interaction between smooth pursuit and attention as a potential explanation for the differences in ODTs between the retina- and space-centered targets. This also applies to the comparison between focused and divided attention. It is possible that during divided attention subjects devoted more attentional resources to the pursuit target. Dragging attention "away" from both target gratings might have produced the increases in ODTs. This explanation is in agreement with studies reporting that smooth pursuit demands attention (Kerzel, Born, & Souto, 2009; Kerzel, Souto, & Ziegler, 2008; Khurana & Kowler, 1987; Lovejoy, Fowler, & Krauzlis, 2009; Madelain, Krauzlis, & Wallman, 2005; Schütz, Delipetkos, Braun, Kerzel, & Gegenfurtner, 2007; van Donkelaar, 1999; van Donkelaar & Drew, 2002). Nevertheless, if that were the case, our main conclusion still holds, since we obtained a similar increase in ODTs for each target. If available attentional resources were "dragged away" from the two gratings to be reallocated to the pursuit target, this was done proportionally, without favoring either target grating. We will address the possible role of pursuit eye movements in our results in Experiment 2. 
In summary, our results indicate that subjects did not favor either of the two target gratings in their allocation of attentional resources during the divided relative to the focused attention condition. 
Experiment 2
In this experiment, we tested whether our previous results (i.e., the increase in ODTs in the divided attention condition) were influenced by the fact that we used smooth pursuit eye movements to dissociate the targets' reference frames. As previously mentioned, smooth pursuit eye movements require attention. It is possible that in Experiment 1 smooth pursuit demanded more attentional resources while attending to the space-centered target as well as during divided attention. Subjects might have achieved similar pursuit accuracy by "dragging" attention away from the perceptual target and allocating it to the pursuit dot, resulting in larger ODTs. As mentioned before, that would not affect our general observation of a similar increase in ODTs for both targets but may limit our conclusions to the specific situation of pursuing a target while dividing attention between peripheral stimuli. 
To test this hypothesis, we conducted a second experiment using a fixation task and the same stimulus configuration and attentional conditions as in the previous experiment. Similar results during fixation and smooth pursuit (i.e., a similar perceptual cost for dividing attention between targets in different reference frames) would indicate that smooth pursuit was not a confounding factor in the previous experiment. 
Methods
Apparatus and stimuli
Apparatus, viewing conditions, target gratings, and smooth pursuit dot (now also the fixation target) were the same as in the previous experiment. 
Subjects
A total of seven subjects participated in the experiment. Four subjects (including two of the authors) also participated in Experiment 1. 
Procedure
In this experiment, we measured ODTs while subjects performed a smooth pursuit eye movement similar to the one used in Experiment 1 (smooth pursuit condition). In addition, we measured ODTs during a fixation task (fixation condition). The task instructions for the smooth pursuit condition were identical to those of Experiment 1. 
The only difference was that the pursuit spot always moved across the upper left screen quadrant from the horizontal to the vertical meridian, with the target gratings presented in the lower right visual quadrant (Figure 6A). During fixation, subjects maintained gaze on the fixation spot (the pursuit target in the other condition) at the center of the screen (Figure 6B). One of the target gratings remained stationary on the horizontal meridian to the right of the fixation spot while the other moved from the horizontal to the vertical meridian in the lower visual field. Its speed was identical to that of the pursuit target in the smooth pursuit condition. We used this configuration in order to match the retinal locations, trajectory, and speed of the target images to those during the smooth pursuit task. 
Figure 6
 
Experiment 2. Experimental layout. (A) The smooth pursuit condition was identical to that of Experiment 1. (B) In the fixation condition, the target (sinusoidal grating) remained either stationary on the horizontal meridian, to the right of the fixation spot (black dot), or moved on a circular trajectory (gray curved arrow) in the lower right quadrant of the visual field. In both conditions, targets were defined according to their retinal image motion as either “moving” or “stationary.”
In the fixation condition, the definition of reference frames for target gratings is no longer the same as in the pursuit condition, since the stationary grating is both retina-centered and space-centered, and the moving grating is neither retina- nor space-centered. We therefore redefined the target stimuli according to their retinal position during a trial as "stationary" and "moving" to provide a common framework for the fixation and pursuit tasks (Figure 6). The instructions for the attentional task (i.e., focusing attention on the stationary or the moving target or dividing attention between the two) as well as the timing of events and eccentricities of the stimuli were identical to those of Experiment 1. 
Eye movement recordings and analysis
The same methods as previously described were applied for recording and processing of the eye position data and saccade detection. The analysis of the smooth pursuit eye movements was identical to that of Experiment 1. In order to test whether during fixation subjects maintained gaze direction on the central fixation spot, we computed the average eye position for each trial and subsequently pooled those to obtain an average eye position per subject and condition. As a measure of fixation position offset, we computed, for each condition, the distance of each subject's average eye position to the center of the fixation spot. 
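The fixation-offset measure described above can be sketched in a few lines (variable names and values are hypothetical): average the per-trial mean eye positions per subject, then take the Euclidean distance from the fixation spot center.

```python
import numpy as np

def fixation_offset(trial_mean_positions, spot_center):
    """trial_mean_positions: (n_trials, 2) array of per-trial mean (x, y)
    eye positions in degrees of visual angle."""
    subject_mean = np.mean(trial_mean_positions, axis=0)
    return float(np.linalg.norm(subject_mean - spot_center))

# Illustrative per-trial mean eye positions for one subject and condition:
positions = np.array([[0.1, -0.05], [0.0, 0.05], [0.2, 0.0]])
print(round(fixation_offset(positions, np.array([0.0, 0.0])), 3))  # → 0.1
```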
Orientation discrimination performance
We measured ODTs using the staircase method described in the previous experiment. Staircases with insufficient reversal points (<5) were discarded (3.4% smooth pursuit; 2.6% fixation). We computed the AMIs for the fixation and smooth pursuit conditions in the same manner as in Experiment 1. Additionally, we computed a retinal velocity index (RVI) relating ODTs corresponding to the stationary and moving targets in the focused attention condition (Equation 6). For ODTs obtained during smooth pursuit, this index is equivalent to the RFI of Experiment 1 (see Equation 5): 
RVI = (ODT_moving − ODT_stationary) / (ODT_moving + ODT_stationary). (6) 
Negative RVI values represent lower ODTs for the moving target, positive values represent lower ODTs for the stationary target, and zero represents no difference. 
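Equation 6 translates directly into code; a minimal sketch with the sign convention just described:

```python
def rvi(odt_moving, odt_stationary):
    """Equation 6: negative → lower ODTs for the moving target,
    positive → lower ODTs for the stationary target."""
    return (odt_moving - odt_stationary) / (odt_moving + odt_stationary)

print(rvi(3.0, 1.0))  # → 0.5  (higher threshold for the moving target)
print(rvi(1.0, 3.0))  # → -0.5 (lower threshold for the moving target)
print(rvi(2.0, 2.0))  # → 0.0  (no difference)
```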
Results
Eye positions during smooth pursuit and fixation
The results of the analysis of the smooth pursuit eye movements are incorporated in the Results section of Experiment 1 (Figures 3 and 4), since the experimental conditions were similar. In addition, we tested whether during fixation the subjects' eye positions deviated from the central fixation spot (Figure 7). 
Figure 7
 
Experiment 2. Eye positions during fixation. (A) Average eye positions (white circles) of individual subjects (n = 7) relative to the fixation spot (gray disk) in the three conditions. (B) Average offset of fixation positions. The offset represents the distance from the fixation spot center to the average eye position of a subject. All data represent mean ±1 Std.
In all three conditions, the average positions fell within the area covered by the fixation spot (gray shaded area). Comparing the offsets across the three conditions revealed no significant difference (p = 0.17, one-way ANOVA), suggesting that the subjects' eye positioning during fixation did not change across conditions. Supporting this finding, we also found uniformly low saccade detection rates across conditions (stationary = 0.7%; moving = 1%; divided attention = 0.7%). 
Orientation discrimination performance
We tested whether during fixation dividing attention has a similar effect on ODTs as during smooth pursuit. Figure 8A shows the ODTs of individual subjects for fixation (triangles) and smooth pursuit (circles). The data points are mainly located above the diagonal, suggesting that dividing attention causes ODTs to increase for both moving (gray) and stationary (white) targets. This effect seems to be similar during smooth pursuit (circles) and fixation (triangles). 
Figure 8
 
Experiment 2. Effect of dividing attention during fixation (Fix) and smooth pursuit (SP). (A) Raw ODTs of individual subjects. Average ODTs with focused attention on the stationary (white) or the moving (gray) target are plotted against corresponding ODTs during divided attention for both smooth pursuit (circles) and fixation (triangles). (B) Average RVIs. (C) Average AMIs. Color coding is similar to that in (A). All error bars represent SEM.
We first quantified the differences in ODTs between the moving and stationary targets within each condition by computing RVIs for both smooth pursuit and fixation (Figure 8B). As anticipated from the previous experiments (Figure 5B and Supplementary Figure 3C), average RVIs were positive, indicating higher ODTs for moving targets. Interestingly, during fixation, target retinal motion seems to have a stronger impact on orientation discrimination performance, leading to a higher average RVI relative to smooth pursuit (smooth pursuit = 0.38; fixation = 0.65, p = 0.016, Wilcoxon signed-rank test for paired data). 
This effect was mainly due to a decrease in ODTs for the stationary target and an increase in ODTs for the moving target during fixation relative to pursuit (i.e., white triangles distributed lower along the diagonal relative to white circles, and gray triangles distributed higher relative to gray circles in Figure 8A). We will return to a possible explanation for this effect in the Discussion. 
More importantly, AMIs corresponding to the stationary and moving targets were similar during smooth pursuit (stationary = 0.13; moving = 0.1, p = 0.69, Wilcoxon signed-rank test for paired data) and during fixation (stationary = 0.13; moving = 0.09, p = 0.69, Wilcoxon signed-rank test for paired data). We further compared all four AMIs and found no difference between the four groups (p = 0.77, Kruskal–Wallis ANOVA, see gray and white bars in Figure 8C). This result indicates that dividing attention produced a similar increase in ODTs for the different target types during both fixation and smooth pursuit. 
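The four-group comparison above is a standard non-parametric one-way test; a sketch with hypothetical per-subject AMIs (illustrative values only — the real data are summarized in Figure 8C):

```python
from scipy.stats import kruskal

# Hypothetical per-subject AMIs for the four target-by-eye-movement cells:
ami_sp_stationary  = [0.15, 0.10, 0.12, 0.14, 0.11, 0.13, 0.16]
ami_sp_moving      = [0.08, 0.12, 0.09, 0.11, 0.10, 0.13, 0.07]
ami_fix_stationary = [0.14, 0.11, 0.13, 0.12, 0.15, 0.10, 0.16]
ami_fix_moving     = [0.07, 0.10, 0.09, 0.12, 0.08, 0.11, 0.06]

# Kruskal–Wallis H test across the four groups (non-parametric one-way ANOVA):
stat, p = kruskal(ami_sp_stationary, ami_sp_moving,
                  ami_fix_stationary, ami_fix_moving)
print(f"H = {stat:.2f}, p = {p:.3f}")
```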
Discussion
This experiment aimed at testing whether the results of Experiment 1 were due to the use of smooth pursuit eye movements during the divided and focused attention tasks. We found that dividing attention between a retina-fixed and a retina-moving target resulted in similar impairments of discrimination performance during fixation and during smooth pursuit. These findings rule out the hypothesis that interactions between smooth pursuit and the allocation of attention caused our pattern of results. We will discuss this finding in more detail in the General discussion section. 
To our surprise, we found that during fixation the effect of target retinal speed on ODTs was larger than during smooth pursuit. During fixation, ODTs were lower for the stationary target and higher for the moving target, relative to those during smooth pursuit (see distribution of data points in Figure 8A). At first glance, this is not surprising, since smooth pursuit requires attentional resources, which might have been “dragged away” from the target gratings to be allocated to the pursuit target. However, this explanation would predict the same effect for the moving target, and we found the opposite. 
Although this was not the main focus of our study and clarification of this result needs further investigation, we will elaborate on at least one plausible explanation. It is possible that fixing the target gratings in both retina- and space-centered frames results in lower ODTs relative to fixing them in only one of the reference frames. That would account for the differences between the fixation and pursuit ODTs for the stationary target. Moreover, fixing the target grating in only one frame (e.g., space) may result in lower ODTs than when the target is neither retina- nor space-fixed. During smooth pursuit, the moving grating was space-fixed, but during fixation, it was neither retina- nor space-fixed. If one examines Figure 8A, the alignment of data points agrees with this hypothesis (stationary: “retina- and space-fixed” < “retina-fixed”; moving: “space-fixed” < “neither retina- nor space-fixed”). Note that when comparing fixation and smooth pursuit retinal velocity was identical for the stationary targets as well as for the moving targets. 
We should state that this hypothesis needs further testing; however, an argument in its favor is that stimulus representations in different frames of reference coexist in the primate brain. For example, in early visual cortex, representations are retinotopically organized (Gardner et al., 2008), while in higher areas of the parietal and frontal cortices, representations are more space-invariant (Martinez-Trujillo, Medendorp, Wang, & Crawford, 2004; Olson & Gettner, 1995). This space invariance must arise from the early retinotopic representations through neural computations. One could easily imagine that when the two reference frames coincide, these computations are less demanding. However, when the representation is not fixed in either frame, computations are the most demanding, since the only available option is updating in one or the other frame while the eccentrically positioned target moves on the retina and/or in space (Merriam, Genovese, & Colby, 2003). The latter operation may be the most computationally demanding since it may require remapping of receptive fields. 
General discussion
The main contribution of this study was to demonstrate that attention can be proportionally divided between targets in different frames of reference. This effect was not dependent on the attended targets' retinal velocity, or on whether subjects were pursuing a dot or fixating it when covertly attending to the targets. 
Our results agree with previous reports demonstrating that attention can be allocated to targets in various reference frames (Barrett et al., 2003; Behrmann & Tipper, 1999; Danziger, Kingstone, & Ward, 2001; Egly, Driver, & Rafal, 1994; Khurana & Kowler, 1987; Vecera & Farah, 1994). Those studies, however, have not directly compared and quantified the cost of dividing attention between retina- and space-centered targets. 
Some studies have used exogenous cues to direct attention to retina-centered or space-centered locations. Two have reported retina-centered facilitation (Golomb et al., 2008, 2010), and a more recent one reported space-centered inhibition of return (Pertzov et al., 2010). These studies, however, are not directly comparable to ours. They instructed subjects to attend to the location of a briefly presented exogenous cue, and after an intervening saccade, they tested performance on a test probe appearing either at the cued space-centered or retina-centered location. In those tasks, subjects had no prior knowledge of the reference frame of the test probe. In contrast, in our task we used endogenous cueing, and subjects constantly monitored the target(s) in both frames from trial onset. This may have facilitated a voluntary and proportional allocation of attention to both targets. 
Another factor that may play a role in studies using saccades to explore the reference frame question is that during saccades (in contrast to smooth pursuit eye movements and fixation) stimulus representations can be partially suppressed (Duffy & Burchfiel, 1975). This process may favor a peri-saccadic allocation of attention that could preserve a “memory” of the target retinal location under conditions where the position of the probe after the saccade is unknown (Golomb et al., 2008, 2010). How this hypothesis would explain the space-centered inhibition of return found by Pertzov et al. (2010) remains to be investigated. 
Moreover, Horowitz, Holcombe, Wolfe, Arsenio, and DiMase (2004) have made a distinction between the time needed to make saccade-like shifts of attention (attentional saccades) and smooth pursuit-like shifts of attention (attentional pursuit). It is possible that these two types of attentional shifts differ in their mechanisms. While attentional saccades may favor retina-centered targets, attentional pursuit may be equally effective for both retina- and space-centered targets. In our task, the likely mechanism of attentional shifts was attentional pursuit. That may explain why attention did not favor one or the other target. 
Finally, another plausible hypothesis is that when attention is automatically (involuntarily or exogenously) allocated to a target, the preferred frame of reference is retina-centered, while voluntary or endogenous attention can be equally allocated in different reference frames. This notion may be supported by the view that early visual areas encode the position of stimuli in retina-centered coordinates (Gardner et al., 2008), while space- or object-centered representations do not appear in the visual system until later stages after visual signals have undergone processing in early areas (Martinez-Trujillo et al., 2004; Olson & Gettner, 1995). If one considers that exogenous attention is mostly a bottom-up driven process and the saliency of the stimulus is primarily encoded in retina-centered maps in visual areas, one may anticipate retina-centered allocation of attention. On the other hand, because endogenous attention is a top-down process, it may have access to multiple representations at different levels in the hierarchy of processing. 
Conclusions
We conclude that humans can proportionally divide attention between targets in different frames of reference, and/or with different retinal velocities, during both smooth pursuit and fixation. This demonstrates that visual attention is a flexible mechanism that modulates information processing in the human brain by accessing stimulus representations at different levels in the visual hierarchy. 
Supplementary Materials
Supplementary Figure 1 
Supplementary Figure 2 
Supplementary Figure 3 
Supplementary Material 
Acknowledgments
We would like to thank Walter Kurcharski for technical assistance and Navid Sadeghi Ghandehari for helpful comments on the manuscript. 
This study was supported by a DAAD fellowship awarded to R. N., CFI, NSERC, CIHR, and EJLB Foundation Grants awarded to J. C. M.-T., and the Canada Research Chairs Program. 
Commercial relationships: none. 
Corresponding author: Julio C. Martinez-Trujillo. 
Email: julio.martinez@mcgill.ca. 
Address: Cognitive Neurophysiology Laboratory, Department of Physiology, McGill University, 3655 Promenade Sir William Osler, Montreal, QC H3G 1Y6, Canada. 
References
Ahissar M. Hochstein S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387, 401–406. [CrossRef] [PubMed]
Barrett D. J. Bradshaw M. F. Rose D. (2003). Endogenous shifts of covert attention operate within multiple coordinate frames: Evidence from a feature-priming task. Perception, 32, 41–52. [CrossRef] [PubMed]
Barrett D. J. Bradshaw M. F. Rose D. Everatt J. Simpson P. J. (2001). Reflexive shifts of covert attention operate in an egocentric coordinate frame. Perception, 30, 1083–1091. [CrossRef] [PubMed]
Behrmann M. Tipper S. P. (1999). Attention accesses multiple reference frames: Evidence from visual neglect. Journal of Experimental Psychology: Human Perception and Performance, 25, 83–101. [CrossRef] [PubMed]
Braun J. Julesz B. (1998). Withdrawing attention at little or no cost: Detection and discrimination tasks. Perception & Psychophysics, 60, 1–23. [CrossRef] [PubMed]
Brefczynski J. A. DeYoe E. A. (1999). A physiological correlate of the “spotlight” of visual attention. Nature Neuroscience, 2, 370–374. [CrossRef] [PubMed]
Cavanagh P. Alvarez G. A. (2005). Tracking multiple targets with multifocal attention. Trends in Cognitive Sciences, 9, 349–354. [CrossRef] [PubMed]
Chung S. T. Levi D. M. Bedell H. E. (1996). Vernier in motion: What accounts for the threshold elevation? Vision Research, 36, 2395–2410. [CrossRef] [PubMed]
Cook E. P. Maunsell J. H. (2002). Dynamics of neuronal responses in macaque MT and VIP during motion detection. Nature Neuroscience, 5, 985–994. [CrossRef] [PubMed]
Danziger S. Kingstone A. Ward R. (2001). Environmentally defined frames of reference: Their time course and sensitivity to spatial cues and attention. Journal of Experimental Psychology: Human Perception and Performance, 27, 494–503. [CrossRef] [PubMed]
d'Avossa G. Tosetti M. Crespi S. Biagi L. Burr D. C. Morrone M. C. (2007). Spatiotopic selectivity of BOLD responses to visual motion in human area MT. Nature Neuroscience, 10, 249–255. [CrossRef] [PubMed]
DiScenna A. O. Das V. Zivotofsky A. Z. Seidman S. H. Leigh R. J. (1995). Evaluation of a video tracking device for measurement of horizontal and vertical eye rotations during locomotion. Journal of Neuroscience Methods, 58, 89–94. [CrossRef] [PubMed]
Duffy F. H. Burchfiel J. L. (1975). Eye movement-related inhibition of primate visual neurons. Brain Research, 89, 121–132. [CrossRef] [PubMed]
Duhamel J. R. Bremmer F. BenHamed S. Graf W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389, 845–848. [CrossRef] [PubMed]
Egly R. Driver J. Rafal R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123, 161–177. [CrossRef] [PubMed]
Galletti C. Battaglini P. P. Fattori P. (1993). Parietal neurons encoding spatial locations in craniotopic coordinates. Experimental Brain Research, 96, 221–229. [CrossRef] [PubMed]
Gardner J. L. Merriam E. P. Movshon J. A. Heeger D. J. (2008). Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. Journal of Neuroscience, 28, 3988–3999. [CrossRef] [PubMed]
Golomb J. D. Chun M. M. Mazer J. A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience, 28, 10654–10662. [CrossRef] [PubMed]
Golomb J. D. Pulido V. Z. Albrecht A. R. Chun M. M. Mazer J. A. (2010). Robustness of the retinotopic attentional trace after eye movements. Journal of Vision, 10, (3):19, 1–12, http://www.journalofvision.org/content/10/3/19, doi:10.1167/10.3.19. [PubMed] [Article] [CrossRef] [PubMed]
Horowitz T. S. Holcombe A. O. Wolfe J. M. Arsenio H. C. DiMase J. S. (2004). Attentional pursuit is faster than attentional saccade. Journal of Vision, 4, (7):6, 585–603, http://www.journalofvision.org/content/4/7/6, doi:10.1167/4.7.6. [PubMed] [Article] [CrossRef]
Ilg U. J. Schumann S. Thier P. (2004). Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron, 43, 145–151. [CrossRef] [PubMed]
Joseph J. S. Chun M. M. Nakayama K. (1997). Attentional requirements in a “preattentive” feature search task. Nature, 387, 805–807. [CrossRef] [PubMed]
Kaernbach C. (1991). Simple adaptive testing with the weighted up–down method. Perception & Psychophysics, 49, 227–229. [CrossRef] [PubMed]
Kerzel D. Born S. Souto D. (2009). Smooth pursuit eye movements and perception share target selection, but only some central resources. Behavioural Brain Research, 201, 66–73. [CrossRef] [PubMed]
Kerzel D. Souto D. Ziegler N. E. (2008). Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements. Vision Research, 48, 958–969. [CrossRef] [PubMed]
Khurana B. Kowler E. (1987). Shared attentional control of smooth eye movement and perception. Vision Research, 27, 1603–1618. [CrossRef] [PubMed]
Land M. F. (1999). Motion and vision: Why animals move their eyes. Journal of Comparative Physiology A, 185, 341–352. [CrossRef]
Lee D. K. Itti L. Koch C. Braun J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neuroscience, 2, 375–381. [CrossRef] [PubMed]
Lee D. K. Koch C. Braun J. (1997). Spatial vision thresholds in the near absence of attention. Vision Research, 37, 2409–2418. [CrossRef] [PubMed]
Liu G. Austen E. L. Booth K. S. Fisher B. D. Argue R. Rempel M. I. et al. (2005). Multiple-object tracking is based on scene, not retinal, coordinates. Journal of Experimental Psychology: Human Perception and Performance, 31, 235–247. [CrossRef] [PubMed]
Lovejoy L. P. Fowler G. A. Krauzlis R. J. (2009). Spatial allocation of attention during smooth pursuit eye movements. Vision Research, 49, 1275–1285. [CrossRef] [PubMed]
Madelain L. Krauzlis R. J. Wallman J. (2005). Spatial deployment of attention influences both saccadic and pursuit tracking. Vision Research, 45, 2685–2703. [CrossRef] [PubMed]
Martinez-Trujillo J. C. Medendorp W. P. Wang H. Crawford J. D. (2004). Frames of reference for eye–head gaze commands in primate supplementary eye fields. Neuron, 44, 1057–1066. [CrossRef] [PubMed]
Maunsell J. H. Newsome W. T. (1987). Visual processing in monkey extrastriate cortex. Annual Review of Neuroscience, 10, 363–401. [CrossRef] [PubMed]
Melcher D. (2007). Predictive remapping of visual features precedes saccadic eye movements. Nature Neuroscience, 10, 903–907. [CrossRef] [PubMed]
Merriam E. P. Genovese C. R. Colby C. L. (2003). Spatial updating in human parietal cortex. Neuron, 39, 361–373. [CrossRef] [PubMed]
Olson C. R. Gettner S. N. (1995). Object-centered direction selectivity in the macaque supplementary eye field. Science, 269, 985–988. [CrossRef] [PubMed]
Pertzov Y. Zohary E. Avidan G. (2010). Rapid formation of spatiotopic representations as revealed by inhibition of return. Journal of Neuroscience, 30, 8882–8887. [CrossRef] [PubMed]
Ruiz-Ruiz M. Martinez-Trujillo J. C. (2008). Human updating of visual motion direction during head rotations. Journal of Neurophysiology, 99, 2558–2576. [CrossRef] [PubMed]
Saenz M. Buracas G. T. Boynton G. M. (2002). Global effects of feature-based attention in human visual cortex. Nature Neuroscience, 5, 631–632. [CrossRef] [PubMed]
Schütz A. C. Braun D. I. Gegenfurtner K. R. (2009). Object recognition during foveating eye movements. Vision Research, 49, 2241–2253. [CrossRef] [PubMed]
Schütz A. C. Delipetkos E. Braun D. I. Kerzel D. Gegenfurtner K. R. (2007). Temporal contrast sensitivity during smooth pursuit eye movements. Journal of Vision, 7, (13):3, 1–15, http://www.journalofvision.org/content/7/13/3, doi:10.1167/7.13.3. [PubMed] [Article] [CrossRef]
Thier P. Ilg U. J. (2005). The neural basis of smooth-pursuit eye movements. Current Opinion in Neurobiology, 15, 645–652. [CrossRef] [PubMed]
Tootell R. B. Hadjikhani N. Hall E. K. Marrett S. Vanduffel W. Vaughan J. T. et al. (1998). The retinotopy of visual spatial attention. Neuron, 21, 1409–1422. [CrossRef] [PubMed]
Treue S. Martinez Trujillo J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399, 575–579. [CrossRef] [PubMed]
van Donkelaar P. (1999). Spatiotemporal modulation of attention during smooth pursuit eye movements. Neuroreport, 10, 2523–2526. [CrossRef] [PubMed]
van Donkelaar P. Drew A. S. (2002). The allocation of attention during smooth pursuit eye movements. Progress in Brain Research, 140, 267–277. [PubMed]
Vecera S. P. Farah M. J. (1994). Does visual attention select objects or locations? Journal of Experimental Psychology: General, 123, 146–160. [CrossRef] [PubMed]
Wade N. J. Swanston M. T. (1996). A general model for the perception of space and motion. Perception, 25, 187–194. [CrossRef] [PubMed]
Figure 1
 
Experiment 1. (A) Trial sequence. The panels represent the visual display, the black dot represents the pursuit target, and the gratings represent the orientation discrimination targets. The gray and black curved arrows represent the grating's trajectory, the pursued target's trajectory, and the grating tilt (clockwise and counterclockwise), respectively. Dotted lines indicate stimulus eccentricity. The labels 3a, 3b, and 3c illustrate the different attentional conditions. Attention was focused on (a) the retina-centered target, (b) the space-centered target, or (c) was divided between both. (B) Hypothetical retinal image of the attended gratings. The left cartoon represents a view of the eye and retina from behind, and the right one represents the stimulus display with the pursued dot and the grating's trajectory. The gray dashed arrows represent gaze direction.
Figure 2
 
Experiment 1. Staircases corresponding to one sample subject (“jcm”). The abscissa represents the trial number, and the ordinate represents the orientation change intensity. The data points on the right are the ODTs, obtained by averaging the orientation change intensities corresponding to each staircase after the third reversal point.
Figure 3
 
Experiment 1. Smooth pursuit accuracy. (A) Sample trial. The gray trace depicts horizontal and vertical positions of a smooth pursuit eye movement trajectory moving from bottom left to top right. The black dashed line represents the trajectory of the pursuit dot (black disk), and the black solid line represents the mean radius corresponding to the smooth pursuit movement trajectory. The black arrows represent the positions at which the gratings could change orientation. The bottom right grating represents the space-centered target. (B) Mean radii values (±1 Std) of pursuit trajectories in the different experimental conditions across subjects (n = 13). The dashed line represents the trajectory of the pursuit dot. (C) Mean standard deviation of the radii (±1 Std).
Figure 4
 
Experiment 1. (A) Mean eye angular velocities as a function of trial time in the different conditions for one subject. Eye velocities (colored lines) are superimposed on the velocity of the smooth pursuit spot (dashed line). The shaded areas represent the 95% confidence intervals of the mean. (B) Average smooth pursuit (SP) gain across subjects for the different conditions (n = 13). For each subject, averages were computed over the time period indicated by the gray shaded area in (A). Error bars indicate Std.
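The smooth pursuit (SP) gain plotted in Figure 4B follows the standard definition: eye velocity divided by target velocity over an analysis window, with a gain of 1.0 indicating perfect tracking. A minimal sketch under that assumption (the paper's exact analysis window and saccade handling are specified in its Methods):

```python
import numpy as np

def pursuit_gain(eye_velocity, target_velocity):
    """Smooth-pursuit gain: mean eye speed over an analysis window
    divided by the target speed. A gain of 1.0 means the eye, on
    average, matched the target velocity exactly.

    `eye_velocity` is a sequence of (desaccaded) eye-speed samples
    within the analysis window; `target_velocity` is the constant
    speed of the pursuit target.
    """
    eye = np.asarray(eye_velocity, dtype=float)
    return eye.mean() / float(target_velocity)
```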
Figure 5
 
Experiment 1. (A) Effect of dividing attention between targets in retina-centered and space-centered reference frames. Each symbol represents the average ODT of an individual subject. ODTs for retina-centered (white circles) and space-centered targets (gray triangles) during focused attention are plotted against their corresponding ODTs during divided attention (n = 13). (B) Average reference frame index (RFI) computed on focused attention data in (A). (C) Average attentional modulation index (AMI) as a function of reference frame. The error bars represent SEM.
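The reference frame index (RFI) and attentional modulation index (AMI) in Figures 5B and 5C are normalized comparisons between two ODT measurements (retina- vs. space-centered, and divided vs. focused attention, respectively). A common form for such indices is the normalized difference sketched below; this is an illustrative assumption, not the paper's definition, which is given in its Methods and may differ.

```python
def modulation_index(a, b):
    """Normalized-difference index in the common (a - b) / (a + b)
    form, bounded in [-1, 1] for positive inputs. Zero indicates no
    difference between the two measurements; positive values indicate
    a > b (e.g., higher ODTs under divided than focused attention).
    """
    return (a - b) / (a + b)
```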
Figure 6
 
Experiment 2. Experimental layout. (A) The smooth pursuit condition was identical to that of Experiment 1. (B) In the fixation condition, the target (sinusoidal grating) remained either stationary on the horizontal meridian, to the right of the fixation spot (black dot), or moved on a circular trajectory (gray curved arrow) in the lower right quadrant of the visual field. In both conditions, targets were defined according to their retinal image motion as either “moving” or “stationary.”
Figure 7
 
Experiment 2. Eye positions during fixation. (A) Average eye positions (white circles) of individual subjects (n = 7) relative to the fixation spot (gray disk) in the three conditions. (B) Average offset of fixation positions. The offset represents the distance from the fixation spot center to the mean average eye position of a subject. All data represent mean ±1 Std.
Figure 8
 
Experiment 2. Effect of dividing attention during fixation (Fix) and smooth pursuit (SP). (A) Raw ODTs of individual subjects. Average ODTs with focused attention on the stationary (white) or the moving (gray) target are plotted against corresponding ODTs during divided attention for both smooth pursuit (circles) and fixation (triangles). (B) Average RVIs. (C) Average AMIs. Color coding is similar to that in (A). All error bars represent SEM.
Supplementary Figure 1
Supplementary Figure 2
Supplementary Figure 3
Supplementary Material