Open Access
Article  |   September 2019
Saccade-induced changes in ocular torsion reveal predictive orientation perception
Journal of Vision September 2019, Vol.19, 10. doi:10.1167/19.11.10
T. Scott Murdison, Gunnar Blohm, Frank Bremmer; Saccade-induced changes in ocular torsion reveal predictive orientation perception. Journal of Vision 2019;19(11):10. doi: 10.1167/19.11.10.
Abstract

Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation) but in natural circumstances, both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.

Introduction
We move our eyes all the time, and with every movement we induce massive shifts of the retinal projection in all three dimensions: horizontal, vertical, and torsional. Despite this motion, we can keep track of both the locations and features (e.g., orientation) of objects in space. To achieve such stability in the presence of sensorimotor delays, the perceptual system is thought to compensate for each eye movement using predictive remapping, but if or how predictive remapping accounts for changes in the torsional state of the retinal image when interpreting spatial orientation is unclear. Further, the visual system needs to simultaneously keep track of features of objects (e.g., orientation) and their physical location in order to achieve spatial constancy during eye movements. 
Previous remapping work has only considered two-dimensional (2D) motion on the retina when, in fact, shifts in the third, torsional dimension (i.e., around a rotation axis parallel to the line of sight) are also present during almost any eye movement and are a key component of ocular orienting. For example, retinal torsion can be induced by ocular counter-roll during head roll (Blohm & Lefèvre, 2010; Murdison, Paré-Bingley, & Blohm, 2013), by the natural tilt of Listing's plane (Blohm, Khan, Ren, Schreiber, & Crawford, 2008), or simply by manipulating the geometry of the retinal projection using oblique gaze orientations (Blohm & Lefèvre, 2010); the latter, importantly, does not require any mechanical torsion of the eyeball and is sometimes termed "false torsion."
Orientation perception is influenced not only by dynamic factors such as head roll, but also by static head or body orientation in space. As an example, the oblique effect, i.e., the finding of smaller just-noticeable differences (JNDs) for orientations along the horizontal or vertical meridian as compared to oblique directions, is fixed to the head rather than to external space (Buchanan-Smith & Heeley, 1993). Differentiating between perceptual effects occurring in retinal or spatial coordinates has been impossible without linking remapping to exogenous factors such as the motion after-effect (Turi & Burr, 2012), the tilt after-effect (Melcher, 2007), or object features (Golomb, L'Heureux, & Kanwisher, 2014; Harrison & Bex, 2014). Conveniently, torsion provides a natural misalignment between retinal and spatial coordinates for which the perceptual system must directly compensate. Here, we geometrically induced torsional shifts by projecting a fronto-parallel stimulus onto the retina during movements to and from oblique eye orientations (oblique orientation-induced retinal torsion, ORT; Figure 1A). Past work has found that ORT influences orientation perception in a retinally predicted way during fixation (Haustein & Mittelstaedt, 1990; Nakayama & Balliet, 1977), yet no study has examined how ORT affects orientation perception during ongoing eye movements.
Figure 1
 
Geometry of predictive remapping models. (A) Example ORT during oblique-to-oblique horizontal saccades. The real-world geometry, which is tilted on the retina due to ORT, does not correspond to a veridical representation of “vertical.” Note that ORT magnitude is exaggerated for illustration purposes. (B) Retinocentric representation of the retino-spatial predictive model and (C) of the purely retinal predictive model. Solid lines represent the actual retinal projection of the stimulus while dotted lines represent the corresponding percept.
Separate recordings from distinct retinotopic areas have revealed that receptive fields (RFs) presaccadically modulate their spatial tuning. In their seminal paper, Duhamel, Colby, and Goldberg (1992) showed that, before the onset of a saccade, some neurons in the lateral intraparietal area (LIP) became responsive to locations in the visual field that would correspond to their receptive-field locations only after the eye had landed. In that study, Duhamel and colleagues tested only the current and future receptive-field locations for responsiveness. Only more recently did Wang and colleagues (2016) show that in such a case LIP neurons expand their RFs along the saccade trajectory. The RFs of neurons in the macaque frontal eye field (FEF), in turn, have been shown to converge towards the saccade target location (Zirnsak et al., 2014). Consequently, these presaccadic RF modulations are assumed to be involved in the maintenance of perceptual stability, though how remains unclear.
Two potential explanations, which have garnered some recent debate (Burr, Tozzi, & Morrone, 2007; Duhamel, Bremmer, Ben Hamed, & Graf, 1997; Harrison & Bex, 2014; Harrison, Mattingley, & Remington, 2012; Melcher, 2005; Morris, Bremmer, & Krekelberg, 2016; Morris & Krekelberg, 2019; Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011; Turi & Burr, 2012; Zimmermann, Burr, & Morrone, 2011; Zimmermann, Morrone, Fink, & Burr, 2013; Zirnsak & Moore, 2014), are that these RF modulations either predictively remap a retinocentric representation purely in compensation for the upcoming retinal motion, or are involved in constructing a stable spatial map of the visual scene. Importantly, the remapping could theoretically be achieved by two different mechanisms: either by tilting the orientation tuning towards the final ORT (similar to shifting spatial tuning across saccades; Duhamel et al., 1992), or by tilting the tuning away from the final ORT (similar to remapping neural activation in the direction opposite to the saccade; Rolfs et al., 2011). In the presence of torsional motion of the retinal image, these two models produce different predictions (Figure 1). In one scenario (Figure 1B), the representation is remapped according to the preprogrammed spatial saccadic endpoint, accounting for the retino-spatial 3D geometry, such that perception updates ballistically ahead of the eye. Under this hypothesis, there is a presaccadic remapping stage at which orientation perception is tilted in the direction of the saccade endpoint. While the eye is in flight, orientation perception leads the actual retinal projection. Therefore, at the midpoint of a symmetric trajectory, the perception of a spatially vertical tree is tilted towards the retinal projection at the upcoming saccadic endpoint.
In the other scenario (Figure 1C), perception is predictively remapped according to the vector difference between the initial retinal projection and that at the next time-step, such that perception dynamically follows the eye. Under this hypothesis, there is a presaccadic remapping stage at which orientation perception is tilted away from the saccade endpoint, allowing the perceptual system to account for visuomotor delays during motion and maintain a retinally accurate perception. Because of this predictive compensation, while the eye is in flight, orientation perception continuously matches the retinal projection. Therefore, near the midpoint of the same symmetric trajectory, the perception of a spatially vertical tree matches its projection onto the retina.
There are three possible perceptual outcomes of torsional shifts of the retinal image during eye movements. First, there might be no predictive remapping, with orientation perception mostly adhering to ORT (Haustein & Mittelstaedt, 1990; Nakayama & Balliet, 1977) throughout the movement (null model). Second, the perceptual system might use an estimate of the future retino-spatial geometry based on the preprogrammed saccade endpoint to predictively tilt perception towards the final ORT, ahead of the eyes (retino-spatial model, Figure 1B). Third, the perceptual system might presaccadically tilt perception away from the final orientation, allowing a retinocentric perception to move with the eyes (purely retinal model, Figure 1C). Here we provide strong evidence in support of the purely retinal model using ORT during a perisaccadic orientation perception task. 
Materials and methods
Participants
Eight adults with normal or corrected-to-normal vision performed the experiment (five males, three females; age range 20–30 years). Participants were paid for their participation, were all naïve to the purpose of the experiment, and all had previous experience with psychophysical experiments involving video eye tracking. Each participant gave informed written consent prior to the experiment. All procedures used in this study conformed to the Declaration of Helsinki.
Materials
Stimuli were computer-generated using the Psychophysics Toolbox (Brainard, 1997) within MATLAB (MathWorks, Natick, MA) and were projected onto a large 120 cm (81°) × 90 cm (65.5°) flat screen by a DS+6K-M Christie projector (Christie Digital, Cypress, California) at a frame rate of 120 Hz and a resolution of 1152 × 864 pixels. Participants sat in complete darkness 70 cm away from the screen, and a table-mounted chin rest supported their heads. The complete darkness was required to prevent participants from perceiving a compression of space, which might have confounded our data by causing all orientations to be perceived closer to vertical than they actually were (Krekelberg, Kubischik, Hoffmann, & Bremmer, 2003; Lappe, Awater, & Krekelberg, 2000; Morrone, Ross, & Burr, 1997). Eye movements were recorded using an infrared video-based Eyelink II (SR Research, Ottawa, Ontario) that was attached to the chin rest, providing a table-fixed head strap that kept each participant's head in a constant position throughout each experimental session. The screen was viewed binocularly, and eye position was sampled at 500 Hz. Prior to each block, participants performed a 13-point calibration sequence over a maximum eccentricity of 25°. The eye to which the perceptual stimulus was fovea-locked for each block was selected based on calibration performance. Drift correction was performed offline every 10 trials, based on a central fixation position. To ensure precise temporal measurement of trial start and stimulus presentation, we positioned a photosensitive diode over the lower left corner of the screen, where we flashed a white patch of pixels both at the start of each trial and at the presentation of the oriented bar stimulus (at the current on-screen gaze position of the participant). This part of the experimental apparatus was occluded from the participant's view.
After calibration for constant data acquisition delays, the photosensitive diode's voltage spikes provided reliable estimates of each trial's time-course (within a precision of approximately 2 ms). 
Procedure
Participants performed a two-alternative, forced-choice (2AFC) perceptual task in which they made large horizontal saccades between targets 40° apart, either along a 20° vertically eccentric horizontal axis (test trials) or along the horizontal meridian of the screen (control trials, Figure 2A). Importantly, test trials induced ORT throughout the eye movement. Participants began each trial by fixating the initial 0.3° diameter dot on the left side of the screen (at −20°) and indicated with a key press that they were prepared to start the trial (Figure 2B). Three hundred milliseconds later, a 0.3° diameter target was illuminated 40° to the right on the opposite side of the screen (at +20°). After a randomly selected duration (400–600 ms), the initial target was extinguished, representing the participant's "go" cue. At some point in time, either shortly before saccade onset (∼250 ms prior), during the saccade (average saccade duration ∼120 ms), or after the saccade, we presented an oriented bar stimulus in one of seven different orientations (from −8° to +8° rotated from vertical). For each trial, the exact time at which we presented the stimulus was chosen randomly from one of four 200-ms-wide Gaussians, linearly spaced from the average reaction time (based on a 10-trial moving window) to 100 ms later, approximating the end of the movement. After the participant's eyes had landed on the saccade target, they responded with a key press indicating their perception of the stimulus orientation (counterclockwise or clockwise). The trial ended after participants made their selection. This paradigm allowed us to reliably compute each participant's psychometric function with a fine time resolution throughout a saccade.
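The stimulus-onset sampling described above can be sketched as follows. This is a minimal Python illustration (the study itself used MATLAB with the Psychophysics Toolbox); the function name and the reading of the "200 ms width" as ±2σ (i.e., σ = 50 ms) are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_stimulus_time(recent_rts, n_gaussians=4, sigma=0.050):
    """Draw a perisaccadic stimulus-onset time (seconds after the "go" cue).

    The Gaussian means are linearly spaced from the 10-trial moving-average
    reaction time to 100 ms later; sigma = 50 ms is an assumed reading of
    the "200 ms width" (so +/- 2 sigma spans 200 ms).
    """
    mean_rt = float(np.mean(recent_rts[-10:]))      # 10-trial moving window
    means = np.linspace(mean_rt, mean_rt + 0.100, n_gaussians)
    mu = rng.choice(means)                          # one Gaussian per trial
    return rng.normal(mu, sigma)

# Example: with recent reaction times near 250 ms, sampled onsets cluster
# around 250-350 ms, straddling saccade onset and the ~120 ms movement.
t_onset = sample_stimulus_time([0.25, 0.26, 0.24, 0.27, 0.25])
```

This kind of adaptive scheme keeps stimulus presentations concentrated around the saccade even as individual reaction times drift over a session.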
Figure 2
 
Paradigm and task timing. (A) Illustration demonstrating the rotational effects induced on the retina by oblique eye orientations while participants perform the task in either the test or control condition. Note that these retinal rotations are exaggerated for illustration purposes. The lower panel in (A) shows the geometrically predicted retinal torsion as a function of horizontal screen position for the test (red) and control (green) conditions. (B) Schematic showing task timing and stimulus presentation frequency distributions (inset). Bidirectional arrows represent the 200 ms time window within which we randomly varied the "go" cue.
Participants also performed a fixation version of the same task in which they fixated one of six randomly selected locations (−20°, 0° or +20° horizontal along either the 0° or 20° screen meridian) and we flashed the identical stimulus at the fixation location for a single frame. After the stimulus flash participants responded with a key press indicating their perception of its orientation, identically to the first experiment. In all conditions (fixation, test, and control condition), oriented bar stimuli were presented for a single frame (8.3 ms). 
Identifying Listing's plane for each participant
Finally, to correctly compute the retinal model predictions, we measured each participant's individual Listing's plane (Table 1) using photographs taken during fixation at each of 10 orientations on the screen (a rectangular grid in the upper half of the screen, with five equally spaced orientations along each of the 0° and 20° meridians, out to 20° eccentricity). Listing's law states that, with the head stationary and upright and the eyes fixating an eccentric target, this eccentric gaze can be achieved by a single rotation starting from straight-ahead gaze. Importantly, the rotation axes for all eccentric positions lie in the same plane, i.e., Listing's plane (Blohm & Crawford, 2007; Hepp, 1990; Tweed, Cadera, & Vilis, 1990; von Helmholtz, 1867; Westheimer, 1957). From these photographs we extracted the natural ocular torsion by comparing the irises between the central orientation (0°, 0°) and the eccentric locations, using an algorithm developed by Otero-Millan and colleagues (Otero-Millan, Roberts, Lasker, & Zee, 2015), modified for still images and implemented in MATLAB.
Table 1
 
Identified Listing's plane tilt for each participant.
Analysis
All analyses were performed using custom MATLAB code (MathWorks, Natick, MA), and psychometric functions were fit using the Psignifit toolbox (Wichmann & Hill, 2001). Each participant performed 2080 trials in total, with presented stimulus orientations following a Gaussian distribution. Each performed a minimum of 221 repetitions for each of the most extreme bar orientations (±8°) and, conversely, a maximum of 369 repetitions for 0° bar orientations. These repetition counts allowed us to be confident in our psychometric fits without extending the sessions by oversampling easy trials. Trials containing blinks, loss of eye tracking, missing saccades, or hypometric or inaccurate saccades (<25° amplitude or beyond a 10° radius from the target), or with reaction times greater than 1.5 s, were removed from the dataset (20% of all trials). Group-level statistics were computed using paired Student t tests, and participant-level and pooled analyses used the bootstrapped 95% CI determined from Monte Carlo simulations during the psychometric curve fitting.
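As a rough illustration of how PSEs and JNDs are read out of such fits, the Python sketch below fits a cumulative Gaussian to simulated 2AFC responses. It is a simplified stand-in for the Psignifit fits used in the study (no lapse rate, no bootstrapped Monte Carlo confidence intervals), and all names and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_psychometric(orientations, p_clockwise):
    """Fit a cumulative Gaussian to proportion-'clockwise' responses.

    Returns (pse, jnd_proxy): the PSE is the 50% point (mu), and the
    fitted sigma serves here as a simple stand-in for the JND. A full
    Psignifit fit would add lapse rates and bootstrapped CIs.
    """
    popt, _ = curve_fit(
        lambda x, mu, sigma: norm.cdf(x, loc=mu, scale=sigma),
        orientations, p_clockwise, p0=[0.0, 2.0])
    return popt[0], popt[1]

# Synthetic observer whose percept is biased by 1 deg of torsion:
x = np.array([-8.0, -5.0, -2.0, 0.0, 2.0, 5.0, 8.0])  # bar tilts (deg)
p = norm.cdf(x, loc=1.0, scale=3.0)                    # noiseless responses
pse, jnd = fit_psychometric(x, p)                      # recovers ~(1.0, 3.0)
```

A shift of the fitted PSE away from 0° is the signature quantified in the Results: the orientation a participant perceives as vertical.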
We generated predictions for retinal torsion using the quaternion algebraic formulation developed in previous work from our lab (Blohm & Crawford, 2007; Blohm & Lefèvre, 2010; Leclercq, Blohm, & Lefèvre, 2013; Leclercq, Lefèvre, & Blohm, 2013; Murdison, Leclercq, Lefèvre, & Blohm, 2015). Briefly, this consisted of finding the torsional difference between screen coordinates and retinal coordinates, based on the orientation of Listing's plane for each participant with measured tilt α0 (see Table 1):  
\begin{equation}\tag{1}{q_{LP}} = \left[ {\matrix{ 0 \cr 0 \cr {\cos \left( {{\alpha _0}} \right)} \cr { - \sin \left( {{\alpha _0}} \right)} \cr } } \right]\end{equation}
 
Then, the gaze orientation vector qES was multiplied by qLP to obtain the eye-in-head orientation qEH observing Listing's Law:  
\begin{equation}\tag{2}{q_{EH}} = {q_{LP}}{q_{ES}}\end{equation}
where  
\begin{equation}\tag{3}{q_E} = \left[ {\matrix{ 0 \cr {{\theta _x}} \cr {{\theta _y}} \cr {{\theta _z}} \cr } } \right]\end{equation}
with θx, θy, and θz corresponding to the angular orientation of the eye.  
Finally, we computed the rotation V′ of a vertical unit vector V on the retina using standard quaternion rotation:  
\begin{equation}\tag{4}V^{\prime} = {q_{EH}}V{q_{EH}}^{ - 1}\end{equation}
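Equations 1 through 4 can be sketched numerically. Below is a Python approximation (the original analysis used quaternion code in MATLAB) that builds a Listing's-law eye orientation for an oblique gaze direction and applies the rotation of Equation 4 to a spatially vertical unit vector; the tilt of the resulting retinal projection is the geometrically predicted ORT. The axis conventions (x right, y up, z along primary gaze), the assumed zero Listing's-plane tilt (α0 = 0), and the sign of the readout are illustrative assumptions and may differ from the paper's conventions.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Conjugate (= inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, v):
    """V' = q V q^-1 for a unit quaternion q (cf. Equation 4)."""
    return qmul(qmul(q, np.r_[0.0, v]), qconj(q))[1:]

def listing_quaternion(az, el):
    """Eye orientation for gaze azimuth/elevation (rad) obeying Listing's
    law: a single rotation from primary position about an axis lying in
    the fronto-parallel plane (zero Listing's-plane tilt assumed)."""
    gaze = np.array([np.sin(az) * np.cos(el), np.sin(el),
                     np.cos(az) * np.cos(el)])     # x right, y up, z ahead
    fwd = np.array([0.0, 0.0, 1.0])
    axis = np.cross(fwd, gaze)
    angle = np.arccos(np.clip(gaze @ fwd, -1.0, 1.0))
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])      # identity: primary gaze
    axis /= np.linalg.norm(axis)
    return np.r_[np.cos(angle / 2), np.sin(angle / 2) * axis]

# ORT of a spatially vertical line at a 20 deg/20 deg oblique gaze:
q = listing_quaternion(np.deg2rad(20), np.deg2rad(20))
v_retina = rotate(qconj(q), np.array([0.0, 1.0, 0.0]))  # world -> eye coords
ort_deg = np.degrees(np.arctan2(v_retina[0], v_retina[1]))
# ort_deg is a few degrees of "false torsion" despite no mechanical
# rotation of the eyeball about the line of sight.
```

Consistent with the control condition, the same computation yields zero torsion for gaze directions along the horizontal or vertical meridian through the primary position.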
 
Results
We directly investigated how ORT influences orientation perception across saccades using a novel retinal feature remapping paradigm in complete darkness. This paradigm allowed us to reliably compute each participant's psychometric function with a fine time resolution throughout the saccade. A fixation version of the task, in which we presented the same stimulus at one of six possible fixation locations (three along each test or control trajectory: left, center, and right), allowed us to account for any perceptual effects during stable fixation. We also measured each participant's natural Listing's plane to account for any natural ocular torsion when making predictions using our retinal model (Table 1 in Methods). As can be seen there, deviations of Listing's plane from the expected fronto-parallel plane were very small (<1°).
We examined the performance of each participant as a function of trial time (aligned to saccade onset), revealing orientation perception throughout any given trial, and we compared perceptions to the prediction of a retinal model. Participants showed clear perceptual differences (Figure 3) between the start (light shades) and end (dark shades) of the saccade, but these differences were most pronounced for test trials. As the eyes moved across the screen, we found that the perceptual changes were captured by the retinal model predictions during test trials (pooled regression analysis across 4° on-screen bins, n = 12, slope = 0.87, R2 = 0.7, p < 0.01) and matched perceptions during fixation at the extreme time points: paired t test for points of subjective equality, t(15) = 1.52, p = 0.15, and for just-noticeable differences, t(15) = 0.47, p = 0.65, indicating that participants behaved consistently during periods when the eyes were stationary, regardless of the behavioral context (fixation vs. saccade trials).
Figure 3
 
Psychometric curves for all participants. Early (trial start until 50 ms presaccade; light shades) and late (100 ms postsaccade until trial end; dark shades) psychometric function fits are shown alongside the fixation experiment results for left (dotted) and right (dashed) targets. Color-matched arrows represent retinal predictions for ORT, corresponding to PSEs, during each time bin.
After pooling the data across participants, we were able to attain a time resolution of 15 ms for which we could compute the bin-wise psychometric functions, extracting the points of subjective equality (PSEs) to quantify the psychophysical biases and the just-noticeable differences (JNDs) to quantify the corresponding precision. These time-resolved biases (PSEs with 95% CIs) are shown alongside the retinal predictions (dashed lines) for both control (green) and test trials (blue) relative to saccade onset (Figure 4A). Psychophysical biases depended on whether participants performed control or test trials. Throughout control trials, perceptual biases followed the retinally predicted perception, with excursions from the retinal prediction occurring upon, but not prior to, saccade onset. Throughout test trials, however, orientation perception was biased towards the retinal prediction throughout the movement, with the exception of a significant perceptual rotation immediately prior to the movement onset. Using the pooled data, the effect began approximately 50 ms prior to the movement (gray-shaded window; inset), consistent with the timing of both attentional (Harrison et al., 2012; Rolfs et al., 2011) and RF shifts observed in retinotopic areas (Wang et al., 2016; Zirnsak et al., 2014). Furthermore, this deviation went in the direction opposite to the upcoming shift in ORT in a manner consistent with maintaining the retinocentric orientation throughout the upcoming movement, matching the purely retinal model. 
Figure 4
 
Pooled and participant-level biases. (A) Pooled PSEs (left column) and JNDs (right column) for control (green) and test trials (blue), plotted alongside the retinal predictions (color-matched dashes) over time. Gray shaded regions represent presaccadic (−50 ms to 0 ms) time bin. Inset reveals significant presaccadic perceptual rotation for test PSEs (asterisks). (B) Participant-level PSEs and JNDs, binned into early (t < −50 ms), presaccadic (−50 ms < t < 0 ms), perisaccadic (0 ms < t < 100 ms), and postsaccadic time bins (t > 100 ms), aligned to saccade onset for control (top row) and test trials (bottom row). Participant-level significant effects are shown by color-matched bars crossing bin thresholds (vertical dotted lines), and group-level significant effects are shown by bold black crossing lines and black asterisks. Insets (center column) reveal direct comparisons between PSEs in presaccadic (ordinate) and early time bins (abscissa). Within the test inset, shaded quadrant represents the retinal hypothesis for either time epoch, and arrows represent direction of retino-spatial (black) or purely retinal (red) remapping. Black circles and error bars represent across-participant means and standard deviations.
We next determined whether this observed effect during test trials was simply a phenomenological effect of pooling the data across participants (Figure 4B). We separated each participant's data into four time bins representing characteristic epochs within any given trial: (a) early fixation (trial start to 50 ms before saccade onset); (b) presaccadic (the 50 ms before saccade onset); (c) perisaccadic (saccade onset to 100 ms later); and (d) postsaccadic (100 ms after saccade onset until trial end). Using these binned data, we observed the same presaccadic bias shift at the group level for test trials, paired t test, t(7) = −4.33, p < 0.01, indicating that it was not due to pooling data across participants. We varied the presaccadic bin size as much as participants' time resolutions allowed and found qualitatively identical group-level presaccadic remapping effects up to 40 ms prior to onset (not shown here). Finally, because these bias shifts could in principle be explained by a less precise perception, we also examined the time-resolved changes in precision. We did this with JNDs in an identical way (Figure 4A and B, right column) and found that they increased only perisaccadically (paired t tests, all transsaccadic p < 0.01), as expected from retinal blurring and/or saccadic suppression (Bremmer, Kubischik, Hoffmann, & Krekelberg, 2009; Burr, Morrone, & Ross, 1994), whereas presaccadic precision was not different from precision during fixation. Thus, the presaccadic perceptual shifts could not be explained by a decrease in perceptual precision.
Discussion
We found that ORT, which is only partially corrected for during fixation (Haustein & Mittelstaedt, 1990; Nakayama & Balliet, 1977), is predictively remapped across saccades in an orientation perception task. Rather than updating the percept ahead of the eye movement using an estimate of the spatial geometry at the final gaze location (retino-spatial model), the presaccadic shifts we observed compensate for the future ORT, allowing the retinocentric orientation to be maintained while the eyes move (purely retinal model). This key finding agrees with recent psychophysical work (Golomb et al., 2014; Rolfs et al., 2011).
Importantly, these behavioral results are in line with the earliest neurophysiological data from the macaque monkey. In their seminal paper, Duhamel, Colby, and Goldberg (1992) showed that neurons in the lateral intraparietal area (area LIP) respond in a purely retinocentric manner when tested before and after a saccade. Even across saccades, these neurons anticipate the new location of their RF in eye-centered coordinates. Neurons in area LIP are also implicated in eye-movement control, which, by definition, is oculocentric. In follow-up studies, the same authors argued (Colby et al., 1995; Colby & Goldberg, 1999) that such an encoding is advantageous because it facilitates the programming of appropriate eye movements without the need to transform visual signals into a nonretinocentric frame of reference (Bremmer, Pouget, & Hoffmann, 1998; Boussaoud & Bremmer, 1999; Zipser & Andersen, 1988). Instead, visual signals are directly transformed into motor output. In this sense, sensorimotor processing is facilitated, since the receptor (the eye, with the retina as receptor epithelium) is identical with the effector (the eye).
In the control condition, the average PSE for times ranging from 100 ms to 200 ms postsaccadically did not return to the baseline values observed long before the saccade. Although this might appear surprising at first glance, it is well in line with results from previous studies on perisaccadic visual perception. These studies have shown that effects of, e.g., saccadic suppression, perisaccadic compression of perceptual space, and compression of heading perception outlast the end of a saccade by approximately 50–100 ms (Bremmer et al., 2009; Bremmer, Churan, & Lappe, 2017; Diamond et al., 2000; Ross, Morrone, & Burr, 1997). In our study, saccades were rather large and hence had an average duration of roughly 120 ms. Accordingly, the postsaccadic data shown in Figures 3 and 4 still fall within the temporal window in which visual perception is expected to return to normal. Importantly, the studies mentioned above all employed visually guided saccades along the horizontal or vertical meridian, i.e., as in our control condition. Remarkably, we also found a (comparably small) negative deflection of the PSE at saccade onset in the control condition. As shown in Table 1, the Listing's planes of our observers all deviated slightly from the ideal fronto-parallel case (Bockisch & Haslwanter, 2001; Haslwanter, Straumann, Hess, & Henn, 1992). Accordingly, the saccade in the control condition might not have been exactly along the observer's horizontal meridian and hence most likely induced a minimal torsion. The observed modulation of the PSE could be indicative of this slight misalignment and of a smaller, predictive orientation remapping.
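The false torsion at oblique eye orientations that drives the test condition follows directly from Listing's law (the rotation axis from primary position is confined to a fronto-parallel plane). The pure-Python sketch below is our own illustration, not the authors' analysis code: it computes the tilt of the retinal vertical meridian relative to spatial vertical, which vanishes for purely horizontal or vertical gaze but reaches several degrees at an oblique orientation:

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

def rotate(v, axis, ang):
    """Rodrigues' rotation of vector v about the given axis by angle ang."""
    k = normalize(axis)
    c, s = math.cos(ang), math.sin(ang)
    kxv, kdv = cross(k, v), dot(k, v)
    return [v[i] * c + kxv[i] * s + k[i] * kdv * (1.0 - c) for i in range(3)]

def listing_torsion(az_deg, el_deg):
    """Tilt (deg) of the retinal vertical meridian relative to spatial
    vertical for a Listing's-law eye orientation at the given azimuth
    and elevation from primary position."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    g = [math.sin(az) * math.cos(el), math.sin(el),
         math.cos(az) * math.cos(el)]      # gaze direction (unit vector)
    primary = [0.0, 0.0, 1.0]              # primary gaze direction
    if dot(g, primary) > 1.0 - 1e-12:
        return 0.0
    axis = cross(primary, g)               # Listing's law: axis in x-y plane
    ang = math.acos(max(-1.0, min(1.0, dot(primary, g))))
    up_eye = rotate([0.0, 1.0, 0.0], axis, ang)
    # Compare the eye's "up" and spatial "up", both projected into the
    # plane orthogonal to the gaze direction.
    def proj(v):
        d = dot(v, g)
        return normalize([v[i] - d * g[i] for i in range(3)])
    pe, ps = proj(up_eye), proj([0.0, 1.0, 0.0])
    return math.degrees(math.atan2(dot(cross(ps, pe), g), dot(ps, pe)))
```

For azimuth = elevation = 20°, the tilt magnitude is about 3.6°, consistent with the small-angle approximation tan(ψ/2) ≈ tan(H/2)·tan(V/2), while purely horizontal or vertical gaze yields zero tilt.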
Our study complements previous work on transsaccadic orientation perception (Ganmor, Landy, & Simoncelli, 2015; Wolf & Schütz, 2015). In both studies, oriented stimuli were presented before, after, or both before and after a saccade along the horizontal meridian (thus not introducing different torsional values before and after the saccade). Performance was best when an oriented stimulus was visible both before (peripherally) and after a saccade (foveally), suggesting that humans integrate both signals. A detailed analysis further revealed that humans combined the two views close to optimally, using a weighted sum with weights assigned according to the relative precision of the foveal and peripheral representations. It would be interesting to apply this approach of presenting stimuli both before and after a saccade to our paradigm. As in the study by Wolf and Schütz (2015), such an analysis would allow determining the time course of the weighting of foveal and peripheral information for saccades that induce retinal torsion.
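The near-optimal combination rule reported in those studies is inverse-variance weighting. The sketch below (hypothetical function name `fuse_orientation`) shows why the fused estimate lies between the two views and is more precise than either alone:

```python
def fuse_orientation(theta_p, sigma_p, theta_f, sigma_f):
    """Reliability-weighted combination of a peripheral (presaccadic)
    and a foveal (postsaccadic) orientation estimate. Weights are the
    inverse variances, the standard maximum-likelihood rule."""
    w_p, w_f = 1.0 / sigma_p**2, 1.0 / sigma_f**2
    theta = (w_p * theta_p + w_f * theta_f) / (w_p + w_f)
    sigma = (1.0 / (w_p + w_f)) ** 0.5  # below min(sigma_p, sigma_f)
    return theta, sigma

# Noisy peripheral view (10 deg, SD 2 deg) combined with a sharper
# foveal view (4 deg, SD 1 deg):
theta, sigma = fuse_orientation(10.0, 2.0, 4.0, 1.0)  # → (5.2, ~0.894)
```

The fused estimate is pulled toward the more reliable foveal view, and its standard deviation is smaller than that of either single view, mirroring the precision benefit observed for transsaccadic integration.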
Our psychophysical results predict that the activity of orientation-selective neurons involved in predictive remapping should also exhibit torsion-induced modulations. Importantly, we assume that the predictive remapping of orientation perception concerns the whole visual field and not only an area around the fovea. Accordingly, we do not expect single neurons to rotate their orientation selectivity perisaccadically. Instead, we assume that orientation-selective responses are combined with estimates of the rotational angle of the eye. This process would be very similar to perisaccadic processes combining RF location with eye position signals, as has been shown for extrastriate and parietal areas (Morris et al., 2012, 2016) and recently also for primary visual cortex (Morris & Krekelberg, 2019).
To the best of our knowledge, no such recordings have yet been made to test for a representation of the rotational angle of the eye in the ongoing or perisaccadic activity of neurons along the visual pathway, as has been done for eye position signals (e.g., Bremmer, Distler, & Hoffmann, 1997; Bremmer, Ilg, Thiele, Distler, & Hoffmann, 1997; Bremmer, Graf, Hamed, & Duhamel, 1999; Bremmer, Duhamel, Ben Hamed, & Graf, 2000) and for corollary discharge concerning upcoming saccades (e.g., Sommer & Wurtz, 2004; Zimmermann & Bremmer, 2016). Accordingly, we can only speculate that such signals might exist.
The implication that the brain expends computational energy with each eye movement to predictively remap a (spatially incorrect) retinal percept is seemingly paradoxical; after all, in theory the brain has access to all the self-motion signals required to compensate for retinal blurring and/or retino-spatial misalignments. However, compensating for self-motion requires either updating a nonspatial (e.g., retinal) representation (Henriques, Klier, Smith, Lowy, & Crawford, 1998; Medendorp, Van Asselt, & Gielen, 1999; Murdison et al., 2013) or subjecting sensory signals to reference frame transformations (Blohm & Crawford, 2007; Blohm & Lefèvre, 2010; Murdison et al., 2015) to achieve spatial accuracy. As both updating (Medendorp et al., 1999) and reference frame transformations appear to be stochastic processes (Alikhanian, Carvalho, & Blohm, 2015; Burns & Blohm, 2010; Burns, Nashed, & Blohm, 2011; Schlicht & Schrater, 2007; Sober & Sabes, 2003), eye-centered signals might provide high-acuity sensory information on which to base working memory (Golomb, Chun, & Mazer, 2008), perception (Burns et al., 2011; Rolfs et al., 2011), and movement generation (Schlicht & Schrater, 2007; Sober & Sabes, 2003) without explicitly requiring a reference frame transformation.
The apparent dominance of retinocentric signals we observed during saccades is consistent with a growing body of psychophysical (Golomb et al., 2014; Murdison et al., 2013; Rolfs et al., 2011; Zirnsak, Gerhards, Kiani, Lappe, & Hamker, 2011) and electrophysiological (Colby, Duhamel, & Goldberg, 1995; Duhamel et al., 1992; Duhamel et al., 1997; Wang et al., 2016; Zirnsak et al., 2014) evidence. Indeed, participants are better at recalling the locations of stimuli across saccades in eye-centered coordinates compared to their spatial locations, which are degraded with each subsequent eye movement (Golomb & Kanwisher, 2012). Additionally, attention appears to be allocated in retinocentric coordinates (Golomb et al., 2008; Golomb, Nguyen-Phuc, Mazer, McCarthy, & Chun, 2010; Yao, Ketkar, Treue, & Krishna, 2016) and there is evidence that its locus shifts to the retinocentric target of upcoming saccades (Rolfs et al., 2011). Memorized targets for movement also appear to be encoded retinocentrically, as observed during saccades (Inaba & Kawano, 2014), smooth pursuit (Murdison et al., 2013), and reaching (Batista, Buneo, Snyder, & Andersen, 1999; Henriques et al., 1998; Medendorp et al., 1999). Together with this past work, our findings indicate that reliable retinal signals are paramount to maintaining a stable world percept during self-motion. 
Conclusions
For the first time, we have shown the orientation-specific perceptual consequences of shifts in the torsional dimension during saccades. Together with previous work (Wang et al., 2016; Zirnsak & Moore, 2014; Zirnsak et al., 2014), our current findings imply that the perceptual system faithfully maintains an eye-centered representation by predictively remapping across both translational and torsional retinal shifts. Despite the retinal motion accompanying each exploratory eye movement, this predictive remapping appears to underlie the seamless stability that is a hallmark of our perceptual experience.
Acknowledgments
The authors wish to thank Dr. Dominic Standage for his helpful comments on the manuscript, as well as the participants for their kind participation. This work was supported by DFG grants IRTG/CREATE-1901 and CRC/TRR-135 (project number 222641018; Germany), NSERC (Canada), CFI (Canada), the Botterell Fund (Queen's University, Kingston, ON, Canada), and ORF (Canada). TSM was also supported by DAAD (Germany).
Commercial relationships: none. 
Corresponding author: T. Scott Murdison. 
Address: Facebook Reality Labs (FRL), Redmond, WA, USA. 
References
Alikhanian, H., Carvalho, S. R., & Blohm, G. (2015). Quantifying effects of stochasticity in reference frame transformations on posterior distributions. Frontiers in Computational Neuroscience, 9 (July), 1–9, https://doi.org/10.3389/fncom.2015.00082.
Batista, A. P., Buneo, C. A., Snyder, L. H., & Andersen, R. A. (1999, July 9). Reach plans in eye-centered coordinates. Science, 285 (5425), 257–260. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/10398603
Blohm, G., & Crawford, J. D. (2007). Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7 (5): 4, 1–22, https://doi.org/10.1167/7.5.4. [PubMed] [Article]
Blohm, G., Khan, A. Z., Ren, L., Schreiber, K. M., & Crawford, J. D. (2008). Depth estimation from retinal disparity requires eye and head orientation signals. Journal of Vision, 8 (16): 3, 1–23, https://doi.org/10.1167/8.16.3. [PubMed] [Article]
Blohm, G., & Lefèvre, P. (2010). Visuomotor velocity transformations for smooth pursuit eye movements. Journal of Neurophysiology, 104 (4), 2103–2115, https://doi.org/10.1152/jn.00728.2009.
Bockisch, C. J., & Haslwanter, T. (2001). Three-dimensional eye position during static roll and pitch in humans. Vision Research, 41 (16), 2127–2137, https://doi.org/10.1016/S0042-6989(01)00094-3.
Boussaoud, D., & Bremmer, F. (1999). Gaze effects in the cerebral cortex: Reference frames for space coding and action. Experimental Brain Research, 128 (1–2), 170–180.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436, https://doi.org/10.1163/156856897X00357.
Bremmer, F., Churan, J., & Lappe, M. (2017). Heading representations in primates are compressed by saccades. Nature Communications, 8 (1), 920, https://doi.org/10.1038/s41467-017-01021-5.
Bremmer, F., Distler, C., & Hoffmann, K. P. (1997). Eye position effects in monkey cortex. II. Pursuit- and fixation-related activity in posterior parietal areas LIP and 7A. Journal of Neurophysiology, 77 (2), 962–977, https://doi.org/10.1152/jn.1997.77.2.962.
Bremmer, F., Duhamel, J.-R., Ben Hamed, S., & Graf, W. (2000). Stages of self-motion processing in primate posterior parietal cortex. International Review of Neurobiology, 44, 173.
Bremmer, F., Graf, W., Hamed, S. B., & Duhamel, J. R. (1999). Eye position encoding in the macaque ventral intraparietal area (VIP). NeuroReport, 10 (4), 875–880.
Bremmer, F., Ilg, U. J., Thiele, A., Distler, C., & Hoffmann, K. P. (1997). Eye position effects in monkey cortex. I. Visual and pursuit-related activity in extrastriate areas MT and MST. Journal of Neurophysiology, 77, 944–961.
Bremmer, F., Kubischik, M., Hoffmann, K.-P., & Krekelberg, B. (2009). Neural dynamics of saccadic suppression. The Journal of Neuroscience, 29 (40), 12374–12383, https://doi.org/10.1523/JNEUROSCI.2908-09.2009.
Bremmer, F., Pouget, A., & Hoffmann, K.-P. (1998). Eye position encoding in the macaque posterior parietal cortex. European Journal of Neuroscience, 10, 153–160.
Buchanan-Smith, H. M., & Heeley, D. W. (1993). Anisotropic axes in orientation perception are not retinotopically mapped. Perception, 22 (12), 1389–1402, https://doi.org/10.1068/p221389.
Burns, J. K., & Blohm, G. (2010). Multi-sensory weights depend on contextual noise in reference frame transformations. Frontiers in Human Neuroscience, 4 (December), 1–15, https://doi.org/10.3389/fnhum.2010.00221.
Burns, J. K., Nashed, J. Y., & Blohm, G. (2011). Head roll influences perceived hand position. Journal of Vision, 11 (9): 3, 1–9, https://doi.org/10.1167/11.9.3. [PubMed] [Article]
Burr, D. C., Morrone, M. C., & Ross, J. (1994, October 6). Selective suppression of the magnocellular visual pathway during saccadic eye movements. Nature, 371 (6497), 511–513, https://doi.org/10.1038/371511a0.
Burr, D., Tozzi, A., & Morrone, M. C. (2007). Neural mechanisms for timing visual events are spatially selective in real-world coordinates. Nature Neuroscience, 10 (4), 423–425, https://doi.org/10.1038/nn1874.
Colby, C. L., Duhamel, J., & Goldberg, M. E. (1995). Oculocentric spatial representation in parietal cortex. Cerebral Cortex, 5, 470–481.
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22 (1), 319–349, https://doi.org/10.1146/annurev.neuro.22.1.319.
Diamond, M. R., Ross, J., & Morrone, M. C. (2000). Extraretinal control of saccadic suppression. The Journal of Neuroscience, 20 (9), 3449–3455, https://doi.org/10.1523/JNEUROSCI.20-09-03449.2000.
Duhamel, J., Colby, C. L., & Goldberg, M. E. (1992, January 3). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Duhamel, J. R., Bremmer, F., Ben Hamed, S., & Graf, W. (1997, October 23). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389 (6653), 845–848, https://doi.org/10.1038/39865.
Ganmor, E., Landy, M. S., & Simoncelli, E. P. (2015). Near-optimal integration of orientation information across saccades. Journal of Vision, 15 (16): 8, 1–12, https://doi.org/10.1167/15.16.8. [PubMed] [Article]
Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience, 28 (42), 10654–10662, https://doi.org/10.1523/JNEUROSCI.2525-08.2008.
Golomb, J. D., & Kanwisher, N. (2012). Retinotopic memory is more precise than spatiotopic memory. Proceedings of the National Academy of Sciences, USA, 109 (5), 1796–1801, https://doi.org/10.1073/pnas.1113168109.
Golomb, J. D., L'Heureux, Z., & Kanwisher, N. (2014). Feature-binding errors after eye movements and shifts of attention. Psychological Science, 25 (5), 1067–1078, https://doi.org/10.1177/0956797614522068.
Golomb, J. D., Nguyen-Phuc, A. Y., Mazer, J. A., McCarthy, G., & Chun, M. M. (2010). Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. Journal of Neuroscience, 30 (31), 10493–10506, https://doi.org/10.1523/JNEUROSCI.1546-10.2010.
Harrison, W. J., & Bex, P. J. (2014). Integrating retinotopic features in spatiotopic coordinates. The Journal of Neuroscience, 34 (21), 7351–7360, https://doi.org/10.1523/JNEUROSCI.5252-13.2014.
Harrison, W. J., Mattingley, J. B., & Remington, R. W. (2012). Pre-saccadic shifts of visual attention. PLoS One, 7 (9), https://doi.org/10.1371/journal.pone.0045670.
Haslwanter, T., Straumann, D., Hess, B. J. M., & Henn, V. (1992). Static roll and pitch in the monkey: Shift and rotation of Listing's plane. Vision Research, 32 (7), 1341–1348, https://doi.org/10.1016/0042-6989(92)90226-9.
Haustein, W., & Mittelstaedt, H. (1990). Evaluation of retinal orientation and gaze direction in the perception of the vertical. Vision Research, 30 (2), 255–262. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/2309460
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. The Journal of Neuroscience, 18 (4), 1583–1594. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9454863
Hepp, K. (1990). On Listing's law. Communications in Mathematical Physics, 132 (1), 285–292, https://doi.org/10.1007/BF02278012.
Inaba, N., & Kawano, K. (2014). Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements. Proceedings of the National Academy of Sciences, USA, 111 (21), 7825–7830, https://doi.org/10.1073/pnas.1401370111.
Krekelberg, B., Kubischik, M., Hoffmann, K. P., & Bremmer, F. (2003). Neural correlates of visual localization and perisaccadic mislocalization. Neuron, 37 (3), 537–545, https://doi.org/10.1016/S0896-6273(03)00003-5.
Lappe, M., Awater, H., & Krekelberg, B. (2000, February 24). Postsaccadic visual references generate presaccadic compression of space. Nature, 403 (6772), 892–895, https://doi.org/10.1038/35002588.
Leclercq, G., Blohm, G., & Lefèvre, P. (2013). Accounting for direction and speed of eye motion in planning visually guided manual tracking. Journal of Neurophysiology, 110 (8), 1945–1957, https://doi.org/10.1152/jn.00130.2013.
Leclercq, G., Lefèvre, P., & Blohm, G. (2013). 3D kinematics using dual quaternions: Theory and applications in neuroscience. Frontiers in Behavioral Neuroscience, 7 (February), 7, https://doi.org/10.3389/fnbeh.2013.00007.
Medendorp, W. P., Van Asselt, S., & Gielen, C. C. A. M. (1999). Pointing to remembered visual targets after active one-step self-displacements within reaching space. Experimental Brain Research, 125 (1), 50–60, https://doi.org/10.1007/s002210050657.
Melcher, D. (2005). Spatiotopic transfer of visual-form adaptation across saccadic eye movements. Current Biology, 15 (19), 1745–1748, https://doi.org/10.1016/j.cub.2005.08.044.
Melcher, D. (2007). Predictive remapping of visual features precedes saccadic eye movements. Nature Neuroscience, 10 (7), 903–907, https://doi.org/10.1038/nn1917.
Morris, A. P., Bremmer, F., & Krekelberg, B. (2016). The dorsal visual system predicts future and remembers past eye position. Frontiers in Systems Neuroscience, 10 (February), 9, https://doi.org/10.3389/fnsys.2016.00009.
Morris, A. P., & Krekelberg, B. (2019). A stable visual world in primate primary visual cortex. Current Biology, 29 (9), 1471–1480, https://doi.org/10.1016/j.cub.2019.03.069.
Morris, A. P., Kubischik, M., Hoffmann, K.-P., Krekelberg, B., & Bremmer, F. (2012). Dynamics of eye-position signals in the dorsal visual system. Current Biology, 22 (3), 173–179, https://doi.org/10.1016/j.cub.2011.12.032.
Morrone, M. C., Ross, J., & Burr, D. C. (1997). Apparent position of visual targets during real and simulated saccadic eye movements. The Journal of Neuroscience, 17 (20), 7941–7953.
Murdison, T. S., Leclercq, G., Lefèvre, P., & Blohm, G. (2015). Computations underlying the visuomotor transformation for smooth pursuit eye movements. Journal of Neurophysiology, 113, 1377–1399.
Murdison, T. S., Paré-Bingley, C. A., & Blohm, G. (2013). Evidence for a retinal velocity memory underlying the direction of anticipatory smooth pursuit eye movements. Journal of Neurophysiology, 110, 732–747, https://doi.org/10.1152/jn.00991.2012.
Nakayama, K., & Balliet, R. (1977). Listing's law, eye position sense, and perception of the vertical. Vision Research, 17 (3), 453–457. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/878335
Otero-Millan, J., Roberts, D. C., Lasker, A., & Zee, D. S. (2015). Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion. Journal of Vision, 15 (14): 11, 1–15, https://doi.org/10.1167/15.14.11. [PubMed] [Article]
Rolfs, M., Jonikaitis, D., Deubel, H., & Cavanagh, P. (2011). Predictive remapping of attention across eye movements. Nature Neuroscience, 14 (2), 252–256, https://doi.org/10.1038/nn.2711.
Ross, J., Morrone, M. C., & Burr, D. C. (1997, April 10). Compression of visual space before saccades. Nature, 386 (6625), 598–601, https://doi.org/10.1038/386598a0.
Schlicht, E. J., & Schrater, P. R. (2007). Impact of coordinate transformation uncertainty on human sensorimotor control. Journal of Neurophysiology, 97 (6), 4203–4214, https://doi.org/10.1152/jn.00160.2007.
Sober, S. J., & Sabes, P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23 (18), 6982–6992.
Sommer, M. A., & Wurtz, R. H. (2004). What the brain stem tells the frontal cortex. II. Role of the SC-MD-FEF pathway in corollary discharge. Journal of Neurophysiology, 91 (3), 1403–1423, https://doi.org/10.1152/jn.00740.2003.
Turi, M., & Burr, D. (2012). Spatiotopic perceptual maps in humans: Evidence from motion adaptation. Proceedings of the Royal Society B: Biological Sciences, 279 (April), 3091–3097, https://doi.org/10.1098/rspb.2012.0637.
Tweed, D., Cadera, W., & Vilis, T. (1990). Computing three-dimensional eye position quaternions and eye velocity from search coil signals. Vision Research, 30 (1), 97–110, https://doi.org/10.1016/0042-6989(90)90130-D.
von Helmholtz, H. (1867). Handbuch der physiologischen Optik (Vol. 9). Leipzig, Germany: Voss.
Wang, X., Fung, C. C. A., Guan, S., Wu, S., Goldberg, M. E., & Zhang, M. (2016). Perisaccadic receptive field expansion in the lateral intraparietal area. Neuron, 90 (2), 400–409, https://doi.org/10.1016/j.neuron.2016.02.035.
Westheimer, G. (1957). Kinematics of the eye. Journal of the Optical Society of America, 47 (10), 967, https://doi.org/10.1364/josa.47.000967.
Wichmann, F. A., & Hill, N. J. (2001). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63 (8), 1314–1329, https://doi.org/10.3758/BF03194545.
Wolf, C., & Schütz, A. C. (2015). Trans-saccadic integration of peripheral and foveal feature information is close to optimal. Journal of Vision, 15 (16): 1, 1–18, https://doi.org/10.1167/15.16.1. [PubMed] [Article]
Yao, T., Ketkar, M., Treue, S., & Krishna, B. S. (2016). Visual attention is available at a task-relevant location rapidly after a saccade. ELife, 5, 1–12, https://doi.org/10.7554/eLife.18009.
Zimmermann, E., & Bremmer, F. (2016). Visual neuroscience: The puzzle of perceptual stability. Current Biology, 26 (5), R199–R201, https://doi.org/10.1016/j.cub.2016.01.050.
Zimmermann, E., Burr, D., & Morrone, M. C. (2011). Spatiotopic visual maps revealed by saccadic adaptation in humans. Current Biology, 21 (16), 1380–1384, https://doi.org/10.1016/j.cub.2011.06.014.
Zimmermann, E., Morrone, M. C., Fink, G. R., & Burr, D. (2013). Spatiotopic neural representations develop slowly across saccades. Current Biology, 23 (5), 1–2, https://doi.org/10.1016/j.cub.2013.01.065.
Zipser, D., & Andersen, R. A. (1988, February 25). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679–684.
Zirnsak, M., Gerhards, R. G. K., Kiani, R., Lappe, M., & Hamker, F. H. (2011). Anticipatory saccade target processing and the presaccadic transfer of visual features. Journal of Neuroscience, 31 (49), 17887–17891, https://doi.org/10.1523/JNEUROSCI.2465-11.2011.
Zirnsak, M., & Moore, T. (2014). Saccades and shifting receptive fields: Anticipating consequences or selecting targets? Trends in Cognitive Sciences, 18 (12), 621–628, https://doi.org/10.1016/j.tics.2014.10.002.
Zirnsak, M., Steinmetz, N. A., Noudoost, B., Xu, K. Z., & Moore, T. (2014, March 27). Visual space is compressed in prefrontal cortex before eye movements. Nature, 507 (7493), 504–507, https://doi.org/10.1038/nature13149.
Figure 1
 
Geometry of predictive remapping models. (A) Example ORT during oblique-to-oblique horizontal saccades. The real-world geometry, which is tilted on the retina due to ORT, does not correspond to a veridical representation of “vertical.” Note that ORT magnitude is exaggerated for illustration purposes. (B) Retinocentric representation of the retino-spatial predictive model and (C) of the purely retinal predictive model. Solid lines represent the actual retinal projection of the stimulus while dotted lines represent the corresponding percept.
Figure 2
 
Paradigm and task timing. (A) Illustration demonstrating rotational effects induced on retina due to oblique eye orientations while participants do task in either the test or control condition. Note that these retinal rotations are exaggerated for illustration purposes. Lower panel in (A) shows the geometrically predicted retinal torsion as a function of horizontal screen position for the Test (red) and control conditions (green). (B) Schematic showing task timing and stimulus presentation frequency distributions (inset). Bidirectional arrows represent 200 ms time window within which we randomly varied “go” cue.
Figure 3
 
Psychometric curves for all participants. Early (trial start until 50 ms presaccade; light shades) and late (100 ms postsaccade until trial end; dark shades) psychometric function fits are shown alongside the fixation experiment results for left (dotted) and right (dashed) targets. Color-matched arrows represent retinal predictions for ORT, corresponding to PSEs, during each time bin.
Table 1
 
Identified Listing's plane tilt for each participant.