Open Access
Article  |   July 2019
Spatial updating of attention across eye movements: A neuro-computational approach
Journal of Vision July 2019, Vol.19, 10. doi:10.1167/19.7.10

      Julia Bergelt, Fred H. Hamker; Spatial updating of attention across eye movements: A neuro-computational approach. Journal of Vision 2019;19(7):10. doi: 10.1167/19.7.10.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

While we are scanning our environment, the retinal image changes with every saccade. Nevertheless, the visual system anticipates where an attended target will be next and attention is updated to the new location. Recently, two different types of perisaccadic attentional updates were discovered: predictive remapping of attention before saccade onset (Rolfs, Jonikaitis, Deubel, & Cavanagh, 2011) and lingering of attention after saccade (Golomb, Chun, & Mazer, 2008; Golomb, Pulido, Albrecht, Chun, & Mazer, 2010). We here propose a neuro-computational model located in lateral intraparietal cortex based on a previous model of perisaccadic space perception (Ziesche & Hamker, 2011, 2014). Our model can account for both types of updating of attention at a neural-systems level. The lingering effect originates from the late updating of the proprioceptive eye-position signal and the remapping from the early corollary-discharge signal. We put these results in relationship to predictive remapping of receptive fields and show that both phenomena arise from the same simple, recurrent neural circuit. Thus, together with the previously published results, the model provides a comprehensive framework for discussing multiple experimental observations that occur around saccades.

Introduction
During natural vision, scene perception depends on exploratory scanning using accurate targeting of attention, saccadic eye movements, anticipation of the physical consequences of motor actions, and continuous integration of visual inputs with stored representations of previously viewed portions of the scene. For example, when there is an impending eye movement, the visual system can anticipate where the target will appear on the retina after the eye movement, and in preparation for this, spatial attention updates and moves to a new location. In subjects instructed to monitor a particular location in a scene while moving the eyes, two different types of spatial-attention shifts have recently been discovered. In one type, spatial attention lingers after a saccade at the (irrelevant) retinotopic position—that is, the focus of attention appears to shift with the eyes but updates to its original world-centered position only after the eyes land at the saccade target location (Golomb, Chun, & Mazer, 2008; Golomb, Pulido, Albrecht, Chun, & Mazer, 2010). Another study, by Rolfs, Jonikaitis, Deubel, and Cavanagh (2011), showed that shortly before saccade onset, a locus of attention appears at a position opposite to the direction of the saccade, which suggests an anticipatory correction of the effects of eye movements. While these results initially appear contradictory, Jonikaitis, Szinte, Rolfs, and Cavanagh (2013) have shown that both updating mechanisms occur simultaneously. Around the time of an eye movement, they detected attentional effects at two different locations at the same time, although only one location was cued. This suggests that there must be at least two attention pointers, in addition to the saccade target location, that are active around saccades.
While the anticipatory shift of attention opposite to saccade direction may be potentially useful, although a bit too early, the shift of attention in the direction of the saccade may appear to be an error or at least a delay in the spatial updating. Furthermore, it is not clear whether the recently observed phenomena of spatial updating of attention relate to the observation of predictive remapping of receptive fields. In the seminal study of Duhamel, Colby, and Goldberg (1992), a flashed stimulus in the future receptive field—that is, the location of a neuron's receptive field after saccade—evoked a neural response prior to saccade. While this can be interpreted as an anticipatory shift of the receptive field, Cavanagh, Hunt, Afraz, and Rolfs (2010) suggested that it may also be explained by learned horizontal or lateral connections. Furthermore, they proposed that such a transfer of activation may also be responsible for spatial updating of attention. 
Recently, Ziesche and Hamker (2011) proposed a model of perisaccadic perception to describe the underlying mechanisms of perceiving a stable world during eye movements. It uses gain fields and basis-function networks to perform coordinate transformations between different frames of references—more precisely, between eye- and head-centered reference frames—which may possibly take place in the parietal cortex, specifically the lateral intraparietal cortex (LIP). The model accounts for predictive remapping using two eye-position-related signals, a discrete eye-position signal, and a corollary-discharge signal to compute the perceived position of stimuli across saccades. Ziesche and colleagues demonstrated that the model is also able to explain the perisaccadic mislocalization of briefly flashed stimuli in complete darkness (Ziesche & Hamker, 2011) and the observation of saccadic suppression of displacement (Ziesche, Bergelt, Deubel, & Hamker, 2017; Ziesche & Hamker, 2014). 
We here explore by means of the neuro-computational model the relationship between predictive remapping of receptive fields (Duhamel et al., 1992) and predictive remapping of attention. How do these phenomena occur at the neural-systems level? Might both recruit the same neural mechanisms? If yes, why does the attention pointer update opposite to saccade direction, while the receptive fields update with the saccade vector? 
Neuro-computational model
Due to a lack of data, and sometimes even conflicting data, a pure bottom-up approach at the systems level of brain networks is presently not feasible for neuro-computational modeling. Thus, in general, a balance has to be struck between theoretical guidance of the computational framework and the evidence given by data. Ziesche and Hamker (2011, 2014) made three assumptions when designing the neuro-computational model: First, existing perisaccadic effects may be explainable on the basis of two eye-related signals that have been shown to exist in the brain, a (proprioceptive) eye-position signal and a corollary discharge. Second, gain fields are assumed to be Gaussian shaped, although measurements of gain fields show more variability. Third, as the (proprioceptive) eye-position signal is head centered but the corollary discharge encodes eye displacement—thus retinocentric—and both eye signals jointly contribute to the final decision, they are required to communicate with each other and operate in the same reference frame. Unlike previous models, this model accounts for the full temporal dynamics around saccades. However, it has so far simulated actual experiments using a simplified one-dimensional design of visual space. To allow for realistic simulations of more complex experiments, we started by extending the original model to operate in two image dimensions. This two-dimensional model uses the same input signals, interactions, and concepts as the one-dimensional model proposed by Ziesche and Hamker. In addition to the existing inputs, we expanded the model by including a fourth input signal to introduce top-down attention to the system.
Core components
As the neuro-computational model is based on gain fields and basis functions, we will first give a brief overview of these two core components. 
In gain fields, the response of a neuron is defined by the neuron's receptive-field location and a modulatory factor. This modulatory factor can be, for example, an eye-position, an attention-position, or a limb-position signal. Importantly, gain fields do not change the position or width of a receptive field, but rather scale the response to a stimulus. Thus, the activity r of a neuron can be written as  
\begin{equation}\tag{1}r\left( {x,y} \right) = f\left( x \right)g\left( y \right){\rm ,}\end{equation}
with f(x) defining the receptive field and g(y) a gain factor (Salinas & Sejnowski, 2001). Gain fields have been found in several brain areas, including the frontal eye field (Cassanello & Ferrera, 2007) and LIP (Andersen, Bracewell, Barash, Gnadt, & Fogassi, 1990). However, the gain field defined in Equation 1 requires a nonzero gain signal, which works well for eye or limb position but may cause difficulties for attention and other phasic signals, which can drop to zero. For such phasic signals an alternative gain field can be used, which follows typical definitions of attentive gain modulation, such as  
\begin{equation}\tag{2}r(x,y) = f(x)\left( {1 + g(y)} \right) = f(x) + f(x)g(y){\rm .}\end{equation}
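The difference between the two gain-field definitions can be made concrete with a minimal numerical sketch (the tuning parameters below are illustrative assumptions, not model values): with a zero phasic gain, Equation 2 still passes the stimulus response through, whereas Equation 1 with a zero gain would silence it.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian receptive-field profile centered at mu."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

x = np.linspace(-40, 40, 81)        # preferred retinal positions (deg)
f = gaussian(x, -10.0, 5.0)         # receptive field f(x), peak at -10 deg

g_tonic = 0.8                       # tonic gain signal (nonzero by definition)
g_phasic = 0.0                      # phasic gain signal (zero outside the saccade)

r_eq1 = f * g_tonic                 # Equation 1: r(x, y) = f(x) g(y)
r_eq2 = f * (1.0 + g_phasic)        # Equation 2: r(x, y) = f(x) (1 + g(y))
```

Neither form shifts the receptive-field peak; the gain only scales (Equation 1) or additively boosts (Equation 2) the visual response.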
 
Although eye-position gain fields are often thought to be planar or sigmoidal, Lehky, Sereno, and Sereno (2016) have discussed the fact that the evidence for these particular functions is not particularly strong, and have shown that differently shaped gain fields—elliptical, hyperbolic, or even a combination of different shapes—allow an accurate representation of eye position to be decoded. According to that study, the variation within a specific shape is more important than the general shape of the gain fields. Following Pouget, Deneve, and Duhamel (2002), we use a Gaussian-shaped gain field; from a computational perspective, it is a special case of an elliptical gain field. Moreover, a Gaussian function can be approximated by a linear combination of several sigmoid functions, as illustrated in Figure 1. Thus, a Gaussian gain-field response can be interpreted as a weighted sum of multiple sigmoidal gain fields. To be clear, we do not think that this particular gain field is an exact description of the gain fields in the brain; we think of it rather as one possible implementation of this concept. 
Figure 1
 
Example of how a linear combination of sigmoidal gain fields approximates a Gaussian-like function. The gray curves illustrate four different sigmoid functions si(x), and the red curve represents a linear combination of these functions, namely the weighted sum of the four sigmoid functions minus a constant value: \(\left( {\sum\nolimits_i {{w_i}} {s_i}(x) - k} \right)\). Here the linear combination is approximately a Gaussian function.
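The sigmoid-sum approximation of Figure 1 can be reproduced in a few lines; the centers, slopes, weights, and offset below are illustrative choices, not fitted values:

```python
import numpy as np

def sigmoid(x, c, slope):
    """Logistic function centered at c; a negative slope gives a falling sigmoid."""
    return 1.0 / (1.0 + np.exp(-slope * (x - c)))

x = np.linspace(-30.0, 30.0, 601)

# Weighted sum of four sigmoids minus a constant k (cf. Figure 1):
# two rising sigmoids form the left flank, two falling ones the right flank.
w, k = 0.5, 1.0
approx = (w * sigmoid(x, -8.0, 0.6) + w * sigmoid(x, -3.0, 0.6)
          + w * sigmoid(x, 3.0, -0.6) + w * sigmoid(x, 8.0, -0.6) - k)
```

The result is a bell-shaped, Gaussian-like bump: close to zero in the tails and peaked at the center of the sigmoid arrangement.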
According to Salinas and Sejnowski (2001), gain fields as defined in Equation 1 can be used for coordinate transformations between multiple frames of reference. As such transformations are usually nonlinear, the output cannot be obtained through a weighted sum of inputs; instead, a network with at least one intermediate nonlinear map is required (Pouget et al., 2002). Pouget et al. introduced a model where the intermediate map consists of basis-function units. As known from linear algebra, basis functions can be used to build more complex functions through weighted sums. Thus, Pouget et al. combined two input maps (a retinocentric stimulus response and an eye-position signal) multiplicatively to form an intermediate basis-function map and computed, via a linear combination, an output response, which is the stimulus position in a head-centered reference frame. Although they demonstrated their model only for this kind of transformation, the same principles can be used for other frames of reference, such as limb- or body-centered coordinate systems. Figure 2 shows two examples where a retinotopic response is transformed into a head-centered response using eye position. In the examples, the eye fixates at 30° in head-centered coordinates. The activity of the eye-position signal can be either tonic (Figure 2, left) or phasic (Figure 2, right) and is called xetonic or xephasic, respectively. It is modeled as a Gaussian function centered on the current eye position. Additionally, a stimulus response centered at −10° in retinotopic coordinates is given. It is likewise modeled as a Gaussian function, and labeled xr. Both signals are multiplicatively combined in the basis-function map xb according to Equation 1 or 2, depending on the temporal dynamics of the eye-position signal. In the tonic case, the activity of xb is formed by xb = xr × xetonic; in the phasic case it is calculated by xb = xr × (1 + xephasic). 
This leads in both cases to a two-dimensional Gaussian blob at (30°, −10°). In addition to this blob, there is a band of activity in the phasic case. This band stretches over the whole eye-position dimension of xb and is centered, like xr, at the retinotopic stimulus position of −10°. A head-centered response of the stimulus position can be computed from the basis-function map if each neuron k of xh is combined with each neuron (i, j) of xb in such a way that i + j = k. That means that all neurons along a diagonal of xb are connected to the same neuron in xh. As a result, the activity of xh is centered at 20°, the stimulus response in head-centered coordinates. The only difference between the tonic and phasic cases is the baseline activity of xh, which is zero in the tonic case and greater than zero in the phasic case due to the band of activity in xb. 
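The tonic and phasic cases of this one-dimensional network can be sketched numerically (the grid and Gaussian widths are arbitrary illustrative choices):

```python
import numpy as np

positions = np.arange(-40, 41)                 # 1-D space (deg)
n = len(positions)

def gaussian(mu, sigma=3.0):
    return np.exp(-((positions - mu) ** 2) / (2 * sigma ** 2))

xe = gaussian(30.0)                            # eye-position signal, head-centered
xr = gaussian(-10.0)                           # retinal stimulus response

# Basis-function maps (axis 0: eye position, axis 1: retinal position)
xb_tonic = np.outer(xe, xr)                    # xb = xr * xe_tonic (Equation 1)
xb_phasic = np.outer(np.ones(n), xr) + np.outer(xe, xr)   # xb = xr * (1 + xe_phasic)

def diagonal_readout(xb):
    """Neuron k of xh pools all (i, j) with i + j = k: head = eye + retinal."""
    xh = np.zeros(2 * n - 1)
    for i in range(n):
        for j in range(n):
            xh[i + j] += xb[i, j]
    return xh

head_axis = np.arange(-80, 81)                 # head-centered positions (deg)
xh_tonic = diagonal_readout(xb_tonic)          # peaks at 30 + (-10) = 20 deg
xh_phasic = diagonal_readout(xb_phasic)        # same peak, plus a nonzero baseline
```

The phasic map carries the band of activity along the eye-position axis, which after the diagonal readout appears as the elevated baseline of xh described above.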
Figure 2
 
Examples of basis-function networks. The inset shows the setup: The eye fixates at 30° while a stimulus (depicted as green star) is presented at 20° (both positions in head-centered coordinates). Thus, in a retinotopic reference frame the stimulus is at −10°. The neurons in the central map, the basis-function map, are aligned to the sensitivity of their input, which is a topological but not a cortical definition of space. Left: Basis-function network with tonic eye-position signal. Right: Basis-function network with phasic eye-position signal. The two signals—head-centered eye position xetonic (left) and xephasic (right) and retinotopic stimulus response xr—feed into the basis map xb and are combined according to the corresponding equation. This leads to an activity blob centered at (30°, −10°) and an additional band of activity centered at (*, −10°) in the phasic case (right). By reading out the activity along the diagonal and projecting it to the output map xh, we receive a firing-rate pattern centered at the head-centered position of the stimulus. The yellow dashed lines symbolize the interactions for the largest activity—that is, how the inputs xe* and xr are fed into xb and how the activity of xb is read out and projected to output xh.
The example is simplified to a one-dimensional space for the purpose of better visualization. However, our model implements two dimensions of visual space; thus, rather than 2-D basis functions, 4-D basis functions are required. Considering how the 2-D basis-function map is formed, it is easy to imagine a 4-D basis-function map, where the first two dimensions carry the horizontal and vertical information of the first input (e.g., a two-dimensional visual space of stimulus position xr) and the last two carry the horizontal and vertical information of the second input (e.g., a two-dimensional space of eye position xe). Reading out a four-dimensional diagonal can then be done by first projecting the 4-D basis-function map onto two separate 2-D maps containing only horizontal and vertical information, respectively, by maximizing over the two dimensions containing the disregarded information. Then we read out the two two-dimensional diagonals separately and finally recombine them into a 2-D space with horizontal and vertical dimensions. For more details on how this diagonal readout is calculated mathematically, see the Appendix. 
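A toy version of this 4-D readout, with made-up positions and widths (the full mathematical treatment is in the Appendix):

```python
import numpy as np

n = 21
axis = np.arange(n) - n // 2                     # positions -10..10 (arbitrary units)

def blob(mu, sigma=2.0):
    return np.exp(-((axis - mu) ** 2) / (2 * sigma ** 2))

# 4-D basis-function map with axes (r_h, r_v, e_h, e_v):
# stimulus at retinal (-3, 4), eye at (5, -2)
xb = np.einsum('a,b,c,d->abcd', blob(-3), blob(4), blob(5), blob(-2))

# Step 1: collapse to two 2-D maps by maximizing over the disregarded dimensions
xb_h = xb.max(axis=(1, 3))                       # (r_h, e_h)
xb_v = xb.max(axis=(0, 2))                       # (r_v, e_v)

# Step 2: read out each 2-D diagonal (head-centered = retinal + eye)
def diagonal_readout(m):
    out = np.zeros(2 * n - 1)
    for i in range(n):
        for j in range(n):
            out[i + j] += m[i, j]
    return out

head_axis = np.arange(-2 * (n // 2), 2 * (n // 2) + 1)   # -20..20
xh_h = diagonal_readout(xb_h)                    # horizontal: peaks at -3 + 5 = 2
xh_v = diagonal_readout(xb_v)                    # vertical:   peaks at 4 - 2 = 2
```

Maximizing over the disregarded pair of dimensions keeps the 4-D map separable into two independent horizontal and vertical readouts.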
In perisaccadic visual perception, we have to define how the movement of the eyes should be represented in the model: We can use an eye-position signal that updates continuously over time, or instead discretely updated pre- and postsaccadic eye positions. Inspired by physiological data, we decided to use two eye-position-related signals: a tonic signal for the pre- and postsaccadic eye position and a phasic signal for the corollary discharge. 
The first signal, called the proprioceptive (PC) signal, encodes the current eye position in head-centered coordinates and may originate in the primary somatosensory cortex (Wang, Zhang, Cohen, & Goldberg, 2007) or in the central thalamus (Tanaka, 2007). In monkeys, the eye-position signal has been shown to update around 60 ms after saccade completion (Y. Xu, Wang, Peck, & Goldberg, 2011). In humans, updating latencies have not been measured, but there is evidence that they may be species specific and depend on the behavioral task. For example, in the saccadic-suppression-of-displacement task, data for monkeys and humans appear to differ (Joiner, Cavanaugh, FitzGibbon, & Wurtz, 2013). In summary, the PC signal is a tonic, late-updating signal encoded in a head-centered reference frame. The signal is modeled as a two-dimensional Gaussian function centered at the current eye position, where the two dimensions represent the horizontal and vertical dimensions of a visual scene. The updating is modeled as a decay of activity at the old position and a rise of activity at the new one. Due to the late updating, the PC signal is initially a presaccadic eye-position signal. 
The second eye-position-related signal, called a corollary-discharge (CD) signal, encodes the eye displacement retinotopically, as it is a copy of the oculomotor command and is only active around saccade onset. It is likely routed from the superior colliculus via the mediodorsal thalamus to the frontal eye field (FEF; Sommer & Wurtz, 2004), where it may be transferred into a gain-field representation. There is no direct evidence for this assumption, but gain-field representations have also been observed in the FEF (Cassanello & Ferrera, 2007). From such a gain field, the expected postsaccadic eye position in head-centered coordinates can be extracted. Thus, the transferred CD signal can be interpreted as the expected postsaccadic eye position. The primary CD signal is a retinotopic, phasic signal rising before and decaying shortly after saccade onset. It is modeled as a two-dimensional spatial signal representing again horizontal and vertical dimensions of visual space. 
Next to these two eye-position-related signals, the model receives input from visual stimuli. The stimulus response is encoded in a two-dimensional retinocentric reference frame representing the horizontal and vertical dimensions of a visual scene, and models early extrastriate visual areas like MT or V4. The response to a stimulus is modeled by a two-dimensional Gaussian function centered at the retinotopic stimulus position, with a strength that decays over time: the longer the stimulus is presented, the weaker the response. The width of the Gaussian is directly proportional to the distance between the fixation point and the stimulus position, to account for the larger receptive fields in the periphery. Additionally, we added a delay in the response to a stimulus to account for the latency in the visual pathway. 
We further expanded the model by a fourth input signal to introduce top-down attention to the system. The top-down attention signal is encoded in a two-dimensional head-centered reference frame modeling, for instance, a head-centered position kept in memory. The emergence of this signal is not explicitly modeled here, but it might arise from memory recall in areas such as the medial temporal lobe (Byrne, Becker, & Burgess, 2007). The two dimensions of the reference frame represent, again, horizontal and vertical dimensions of visual space. 
Figure 3 shows the temporal dynamics of the four inputs for a simple example. After fixation at the fixation point (FP) for 200 ms, a saccade is executed to the saccade target (ST) lasting 41 ms. The PC signal updates after saccade offset from FP to ST. Thus, the firing rate at FP decays after saccade offset, while the PC signal at ST increases. The CD signal at ST-FP, as it encodes the eye displacement, rises before saccade onset, peaks 10 ms after saccade onset, and decays. At 150 ms before saccade onset, a stimulus is presented at the head-centered stimulus position (SP) for 80 ms. The retinal signal has a delay of 50 ms, and thus it starts 50 ms after stimulus onset at the retinotopic stimulus position—that is, SP-FP. Afterward it decays, first with a Gaussian decay and then, with stimulus offset, linearly. In experiments with top-down attention at the attention position (AP), the attention signal is active all the time. 
Figure 3
 
Example of typical temporal dynamics of input signals. At 150 ms before saccade onset, a stimulus is presented for 80 ms. The retinal signal starts 50 ms after stimulus onset. The signal decays with Gaussian distribution for the duration of stimulus presentation and linearly afterwards. The corollary-discharge (CD) signal rises before saccade onset, peaks 10 ms after saccade onset, and decays. The proprioceptive (PC) signal updates after saccade offset by decaying at the presaccadic position (fixation point; FP) and increasing at the postsaccadic position (saccade target; ST). If we use top-down attention, the signal at the attention position (AP) is active throughout the whole trial.
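The input time courses described above can be sketched as follows; the decay time constants, the CD width, and the single-exponential retinal decay are simplifying assumptions, not the model's actual parameter values:

```python
import numpy as np

t = np.arange(0.0, 600.0, 1.0)            # trial time (ms)
sacc_on, sacc_dur = 350.0, 41.0           # saccade onset and duration (ms)
stim_on = 200.0                           # stimulus onset (ms)
latency = 50.0                            # retinal response latency (ms)

# PC signal: tonic, updates only after saccade offset
# (activity decays at FP and rises at ST)
tau = 20.0                                # assumed update time constant (ms)
sacc_off = sacc_on + sacc_dur
pc_fp = np.where(t < sacc_off, 1.0, np.exp(-(t - sacc_off) / tau))
pc_st = np.where(t < sacc_off, 0.0, 1.0 - np.exp(-(t - sacc_off) / tau))

# CD signal: phasic, rises before saccade onset and peaks 10 ms after it
cd = np.exp(-((t - (sacc_on + 10.0)) ** 2) / (2 * 15.0 ** 2))

# Retinal signal: starts `latency` ms after stimulus onset, then decays
# (the model's Gaussian-then-linear decay is simplified here to one exponential)
resp_on = stim_on + latency
retinal = np.where(t < resp_on, 0.0, np.exp(-(t - resp_on) / 60.0))
```

Plotting the four traces against t reproduces the qualitative ordering of the signals: the retinal response precedes the saccade, the CD signal brackets saccade onset, and the PC signal updates only after saccade offset.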
Model structure and dynamics
Figure 4 shows the structure of the new 2-D model with its maps and their interactions. 
Figure 4
 
Structure of the neuro-computational model. The four input signals—retinotopic stimulus position Xr, proprioceptive eye position (PC) XePC, corollary discharge (CD) XeCD, and attention Xh—are fed into two maps of lateral intraparietal cortex (LIP; XbPC, XbCD) which are gain modulated by either the PC signal or the CD signal. While the PC signal encodes the eye position in head-centered coordinates, the CD signal is originally eye centered and must be first transferred into a head-centered reference frame. This is done in map XeFEF using the eye-position signal from XePC. The activities of all simulated LIP neurons are combined in map Xh, and from there fed back into both LIP maps. The interaction of the maps is summarized in the structural equations. The mathematical description of the model, including all equations, can be found in the Appendix.
The two eye-related input signals, eye position and corollary discharge, feed into the two two-dimensional maps XePC and XeCD. Note that, according to its input signal, XePC is organized in a head-centered reference frame, whereas XeCD is organized in an eye-centered reference frame. As the two signals should be processed in the same manner, they have to operate in the same frame of reference. A simple solution for transferring the eye-centered CD signal into a head-centered signal is to use the PC signal with the concept of coordinate transformation described previously. Following this concept, the basis-function map, called XeFEF, that multiplicatively combines the CD and PC signals is required to be—from a computational point of view—a four-dimensional map, as each input is two-dimensional. The structural equation for the activity in map XeFEF can be written as  
\begin{equation}X{e_{{\rm{FEF}}}} = X{e_{{\rm{PC}}}} \times X{e_{{\rm{CD}}}}{\rm {.}}\end{equation}
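In one dimension, the transformation performed by XeFEF looks like this (a sketch with assumed positions and widths): combining a head-centered eye position of 10° with a planned displacement of +15° yields an expected postsaccadic eye position of 25°.

```python
import numpy as np

positions = np.arange(-40, 41)             # 1-D space (deg)
n = len(positions)

def gaussian(mu, sigma=3.0):
    return np.exp(-((positions - mu) ** 2) / (2 * sigma ** 2))

xe_pc = gaussian(10.0)                     # PC: current eye position, head-centered
xe_cd = gaussian(15.0)                     # CD: planned eye displacement, retinotopic

# XeFEF = XePC x XeCD: basis-function map combining both eye signals
xe_fef = np.outer(xe_pc, xe_cd)

# Diagonal readout gives the expected postsaccadic eye position,
# head-centered: current position + displacement
expected = np.zeros(2 * n - 1)
for i in range(n):
    for j in range(n):
        expected[i + j] += xe_fef[i, j]
head_axis = np.arange(-80, 81)             # head-centered positions (deg)
```

This is the same diagonal-readout mechanism used for the stimulus maps, applied here to the two eye signals instead.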
 
The retinal signal feeds into two maps assumed to be located in LIP, where it is gain modulated by either the eye-position signal or the corollary-discharge signal to obtain a joint representation of the stimulus position and the eye position or eye displacement, respectively. The two LIP maps, called XbPC and XbCD according to the modulating signal, combine two two-dimensional signals multiplicatively and work like two separate basis-function maps. They again—from a computational point of view—have to be designed as four-dimensional maps. As the modulating signal is either tonic (PC) or phasic (CD), the feedforward activity in these maps is calculated by either Equation 1 or 2. Both LIP maps interact with each other via neurons organized in a two-dimensional head-centered reference frame Xh using feedback projections. This feedback is modulated by the corresponding eye-related signal as well, and is crucial for predictive remapping, as shown later. The structural equations for the activity of both LIP maps are then  
\begin{equation}\begin{aligned}X{b_{{\rm{PC}}}} &= \underbrace {Xr \times X{e_{{\rm{PC}}}}}_{{\rm{feedforward}}} + \underbrace {Xh \times X{e_{{\rm{PC}}}}}_{{\rm{feedback}}}\\ X{b_{{\rm{CD}}}} &= \underbrace {Xr \times \left( {1 + X{e_{{\rm{FEF}}}}} \right)}_{{\rm{feedforward}}} + \underbrace {Xh \times X{e_{{\rm{FEF}}}}}_{{\rm{feedback}}}\end{aligned}{\rm {.}}\end{equation}
 
Due to the feedforward and feedback components, there will be at least two distinct activity blobs in each LIP map simultaneously in the perisaccadic case. In addition, as the update of the PC signal is defined by decreasing activity at the presaccadic position and increasing activity at the postsaccadic position, the activity in XbPC updates in a jump-like manner, although smooth shifts of activity are possible to a certain degree. 
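The feedforward terms of the two structural equations can be illustrated with the same outer-product scheme. The sketch below uses assumed grid sizes and sets the feedback term to zero, corresponding to fixation with no planned saccade; it shows why XbPC forms a single blob while XbCD forms an "activity line."

```python
import numpy as np

N = 21  # hypothetical neurons per dimension

def blob(center, n=N, sigma=1.5):
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

Xr = blob((5, 8))         # retinal stimulus
Xe_PC = blob((10, 10))    # tonic eye-position signal
Xe_CD = np.zeros((N, N))  # phasic CD signal: silent during fixation

# Feedforward parts of the structural equations (feedback Xh omitted):
Xb_PC = np.einsum("ij,kl->ijkl", Xr, Xe_PC)      # Xr x Xe_PC
Xb_CD = np.einsum("ij,kl->ijkl", Xr, 1 + Xe_CD)  # Xr x (1 + Xe_CD)

# Xb_PC: one localized blob at the crossing of stimulus and eye position.
# Xb_CD: constant along the silent CD dimensions, i.e., an activity line.
```

Because of the "1 +" term, the retinal signal drives XbCD even when the CD signal is silent, which is exactly the activity line described in the Results.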
The neurons of Xh combine the activities from both LIP maps in an additive manner as follows:  
\begin{equation}Xh = Xb_{\mathrm{PC}} + Xb_{\mathrm{CD}}.\end{equation}
 
Xh encodes the perceived spatial position of a stimulus in a head-centered reference frame. 
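The readout into Xh can be sketched in one dimension: a basis-function unit at (retinal position r, eye position e) feeds the head-centered neuron at h = r + e, i.e., activity is summed along diagonals. The grid size, Gaussian widths, and the use of a single LIP map are simplifying assumptions (the model sums the corresponding diagonals of both LIP maps).

```python
import numpy as np

N = 11

def gauss(center, n, sigma=1.0):
    return np.exp(-(np.arange(n) - center) ** 2 / (2 * sigma ** 2))

Xr = gauss(3, N)       # stimulus at retinal position 3
Xe = gauss(4, N)       # eyes at position 4
Xb = np.outer(Xr, Xe)  # 1-D inputs -> 2-D basis-function map

# Sum along each diagonal r + e = h into the head-centered map Xh.
Xh = np.zeros(2 * N - 1)
for r in range(N):
    for e in range(N):
        Xh[r + e] += Xb[r, e]

# Xh peaks at the head-centered stimulus position 3 + 4 = 7.
```

The diagonal sum is a discrete convolution of the two population codes, so the head-centered estimate inherits the sum of the two encoded positions.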
Following the idea of LIP as a priority map (Bisley, 2011; Goldberg, Bisley, Powell, & Gottlieb, 2006), any activation in LIP is interpreted as an attentional signal, and we refer to this as an attention pointer. In order to simulate the dynamics of spatial attention around eye movements, we can apply the model in two operational modes: a bottom-up mode where we cue attention by a stimulus and observe how the initially cued activity evolves during saccade, and a top-down mode where we set a head-centered top-down pointer into the model by providing a static signal to Xh and observe the dynamics of attention while the top-down signal is being transformed by the two eye-position signals traveling through the feedback pathways toward an eye-centered representation of spatial attention. 
Results
Predictive remapping
Predictive remapping is a phenomenon whereby the retinotopic position of a spatially restricted neuronal receptive field (RF) shifts in anticipation of a saccade, such that the neuron responds to stimuli in the future receptive field (FRF) before eye-movement onset. Predictive remapping was first observed in LIP by Duhamel et al. (1992) and has been proposed to play a key role in establishing the percept of a visually stable world (Mathôt & Theeuwes, 2010; Sommer & Wurtz, 2006; Wurtz, 2008). It has also been observed in many other areas: extrastriate visual cortex (Nakamura & Colby, 2002), superior colliculus (Walker, Fitzgibbon, & Goldberg, 1995), and FEF (Sommer & Wurtz, 2006; Umeno & Goldberg, 1997). However, in these early studies predictive remapping was not clearly delineated from other RF shifts, as discussed by Zirnsak, Lappe, and Hamker (2010), which triggered new experimental observations (Hartmann, Zirnsak, Marquis, Hamker, & Moore, 2017; Neupane, Guitton, & Pack, 2016a, 2016b; Zirnsak, Steinmetz, Noudoost, Xu, & Moore, 2014). 
This report focuses on modeling the neural substrates of the original predictive-remapping phenomenon, as described by Duhamel et al. (1992). In our model, predictive remapping arises from the feedback connection from Xh to XbCD. To demonstrate this, we conducted two simple experiments, similar to those of Duhamel et al. In the first experiment, called the fixation task, a stimulus is presented in the RF for 100 ms while the eyes remain fixed at a particular location, the FP at (0°, 0°). The second experiment is called the saccade task, as a saccade is executed from the FP to the ST at (10°, 8°). At 150 ms before saccade onset, a stimulus is presented for 100 ms in the FRF, which is the position of the RF after the eyes have landed at the saccade target. The eye movement is simulated with the model of Van Wetter and Van Opstal (2008); for a saccade of this amplitude, it lasts 65 ms. The setup as well as the positions and temporal dynamics of the three input signals (eye position, corollary discharge, and retinal signal) are shown in Figure 5. 
 
Figure 5
 
Setup for predictive remapping with respect to the three input signals (compare Figure 4): In the fixation task (left), the eyes fixate at the fixation point (FP) at (0°, 0°) for the whole time. A stimulus (depicted by a green star) is presented in the receptive field (RF) at (−5°, −2°) for 100 ms. In the saccade task (right), the stimulus is presented in the future receptive field (FRF) at (5°, 6°) 150 ms before saccade onset for 100 ms. The eye movement from the fixation point (FP) to the saccade target (ST) at (10°, 8°) lasts 65 ms. The red cross symbolizes the current eye position. The colored blobs represent neural activity of the three different signals, which change with time. The retinal signal (green) is delayed for 50 ms to mimic the response latency. The eye-position signal (red) starts to update 32 ms after saccade offset to the new location of the eyes. The corollary-discharge (CD) signal (blue) rises 86 ms prior to saccade onset and is active for 171 ms, with its peak activity reached 10 ms after saccade onset. As the retinal signal and the origin of the CD signal are retinotopic, they shift with the eye movement. In contrast, the proprioceptive (PC) signal is head centered and therefore fixed during the saccade. The time (in milliseconds) is aligned to saccade onset. The still image shows the inputs marginalized over time.
The results of the simulations are shown in Figure 6. To visualize the 4-D LIP maps, we projected each map to two two-dimensional planes representing either the horizontal or the vertical information by maximizing over the two disregarded dimensions. Further, we show the neural activities of both LIP maps projected into the retinotopic space together with the spatial setup. In the fixation task (Figure 6A), a stimulus is presented in the RF, no saccade is planned, and the eyes fixate at the FP. The LIP map XbPC combines the eye-position signal at (0°, 0°) and the retinal signal at (−5°, −2°) multiplicatively, thus resulting in a single locus of activity at the crossing of both signals, at (0°, −5°) for horizontal and (0°, −2°) for vertical, respectively. As the retinal signal feeds continuously into the LIP map XbCD, we get an activity line along one axis at the position of the stimulus (at −5° for horizontal and −2° for vertical, respectively). In the saccade task (see Figure 6B), the CD signal rises shortly before saccade onset and modulates the retinal signal of the stimulus presented in the FRF in such a way that it increases the neural gain at a single blob where the CD signal position crosses the retinal signal, at (10°, 5°) for horizontal and (8°, 6°) for vertical, respectively. The activity in LIP PC is the result of the multiplicative interaction of the PC signal and the retinal signal, at (0°, 5°) for horizontal and (0°, 6°) for vertical, respectively. 
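The visualization step, projecting a 4-D map onto two 2-D planes by maximizing over the two disregarded dimensions, is a one-liner in NumPy. The axis assignment below is an assumption about the map layout; the random array merely stands in for LIP activity.

```python
import numpy as np

rng = np.random.default_rng(0)
Xb = rng.random((9, 9, 9, 9))  # stand-in 4-D LIP activity

# Assumed layout: (x_retinal, y_retinal, x_eye, y_eye).
horizontal = Xb.max(axis=(1, 3))  # keep the two horizontal dimensions
vertical = Xb.max(axis=(0, 2))    # keep the two vertical dimensions
```

Taking the maximum (rather than the sum) over the disregarded dimensions preserves the location and height of each activity blob in the projected planes.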
Figure 6
 
Simulation results of the two predictive-remapping tasks. The activity of both maps of lateral intraparietal cortex (LIP) projected onto two two-dimensional planes is plotted, representing horizontal and vertical information. The yellow dashed line shows the diagonal which has the largest activation. The symbols are identical to Figure 5. The red and blue blobs are the neural activities of both LIP maps projected into retinotopic space. (A) In the fixation task, a stimulus is presented in the receptive field (RF), the eyes fixate at the fixation point (FP), and no saccade is planned. Due to the multiplicative interaction of eye-position signal and retinal signal representing the stimulus, there is a single activation blob in the LIP proprioceptive (PC) eye position map. Projected to visual space, this results in activity at RF (red blob). In the LIP corollary discharge (CD) map, there is an activity line indicating the position of the stimulus, which also leads to activity at RF (covered by red blob). (B) In the saccade task, a stimulus is presented in the future receptive field (FRF) and a saccade is going to be executed from FP to the saccade target (ST). In LIP PC, there is an activity blob at the crossing of the current eye position and the stimulus position, similar to the fixation task. Thus, we see activity at FRF triggered by LIP PC (red blob). Shortly before saccade onset, the CD signal rises, which produces an additional peak of activity along the activity line from the stimulus signal at ST in LIP CD. Additionally, there is a second activity blob resulting from the interaction of the CD signal with the feedback signal from Xh along the yellow dashed diagonal. This leads to a second activity blob at RF (blue blob).
Activity along each diagonal of XbPC—for example, along the yellow dashed line—is read out and fed to one neuron in Xh. Likewise, the activity in the LIP CD map is read out along each diagonal and projected to Xh. At the same time, the activity of Xh is fed back to both LIP maps along each diagonal, which allows an interaction between XbPC and XbCD via Xh. In LIP CD, the feedback from Xh is combined multiplicatively with the CD signal. This leads to a second locus of activity in LIP CD, at (10°, −5°) for horizontal and (8°, −2°) for vertical, respectively. Note that the position of this second locus of activity has the same “visual” position as if we presented the stimulus in the RF (compare Figure 6A). That means the neurons in XbCD anticipate the updating of the stimulus position from FRF to RF with the help of the CD signal and the feedback from Xh. Thus, the XbCD neurons show predictive remapping. In contrast, the LIP map for the PC signal has no such predictive component, and contains only one single activation blob that represents the stimulus position with respect to the current eye position. Supplementary Movie S1 shows the development of the activities in both LIP maps over the course of a single simulated trial. 
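The emergence of the second locus can be reproduced in a 1-D sketch. The positions below are illustrative (RF at retinal 3, eyes at 5, a 10-unit saccade), and the diagonal convention h = r + e is an assumption of this toy version, not a claim about the model's exact connectivity.

```python
import numpy as np

def gauss(center, n, sigma=1.0):
    return np.exp(-(np.arange(n) - center) ** 2 / (2 * sigma ** 2))

N = 20
eye_now, saccade = 5, 10
rf = 3                               # receptive-field position (retinal)
Xr = gauss(rf + saccade, N)          # stimulus flashed in the FRF
Xe_PC = gauss(eye_now, N)            # proprioceptive eye position
Xe_CD = gauss(eye_now + saccade, N)  # CD encodes the future eye position

# PC pathway builds the head-centered estimate Xh (diagonals h = r + e).
Xb_PC = np.outer(Xr, Xe_PC)
Xh = np.zeros(2 * N - 1)
for r in range(N):
    for e in range(N):
        Xh[r + e] += Xb_PC[r, e]

# Feedback into Xb_CD: Xh projected along the diagonals, gated by CD.
fb = np.array([[Xh[r + e] * Xe_CD[e] for e in range(N)] for r in range(N)])
with_feedback = fb.max(axis=1)  # retinotopic projection

# Control: feedforward only, no Xh feedback, i.e., Xr x (1 + Xe_CD).
no_feedback = (Xr[:, None] * (1 + Xe_CD[None, :])).max(axis=1)
```

With feedback, the head-centered estimate (peak at 13 + 5 = 18) re-anchored to the future eye position (15) yields a locus at retinal 3, the current RF; without the feedback, activity stays at the FRF (retinal 13), mirroring the control simulation of Figure 7.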
To illustrate the necessity of the feedback from Xh to LIP for predictive remapping, we simulated the same experiments without the connections from Xh to XbPC and XbCD, respectively. As can be seen in Figure 7, the main effect of the missing feedback besides reduced activity in both LIP maps is the nonexistent activity blob in LIP CD in the saccade task—that is, no predictive remapping. 
Figure 7
 
Simulation results of the two predictive-remapping tasks without feedback from Xh to lateral intraparietal cortex (LIP). Layout and definitions are identical to Figure 6. The main difference from Figure 6 is the missing second activity blob in the LIP corollary discharge (CD) map in the saccade task, and consequently the missing activity at RF (compare the blue blob in Figure 6B).
To summarize, our model suggests that predictive remapping in LIP is generated with the help of the CD signal. When a stimulus is presented, the eye-centered retinal signal feeds into both LIP maps. Thus, neurons whose RFs match with the stimulus position become active. Long before a saccade is planned, the eye-centered stimulus position (from Xr) is transformed into a head-centered position through XbPC with the help of the eye-position signal and encoded in Xh. Shortly before saccade onset, the CD signal rises, and with the feedback from Xh it activates neurons in XbCD which encode the stimulus with respect to the future eye position. Thus, if the stimulus presented in the FRF is encoded with reference to ST, this stimulus—in a retinocentric reference frame—leads to a neural activation at RF, as the eyes are still at FP, and consequently predictive remapping emerges. Predictive remapping does not require all-to-all connections, only a diagonal connectivity scheme within a small recurrently connected network of neurons. 
Spatial updating of attention
In particular, we compare our simulation results to the data of Jonikaitis et al. (2013), who demonstrated both types of attentional updating in their data: predictive remapping of attention, reported by Rolfs et al. (2011), and lingering of attention, reported by Golomb and colleagues (Golomb et al., 2008; Golomb et al., 2010). Predictive remapping of attention occurs shortly before saccade onset, when spatial attention is remapped to a position opposite to the direction of the saccade. In contrast, Golomb and colleagues have reported that after a saccade, spatial attention lingers at the (irrelevant) retinotopic position; that is, the focus of attention shifts with the eyes but updates to the original fixed world-centered position only after the eyes have landed. Using our model, we replicated the observations of Jonikaitis et al. and shed light on the possible mechanisms that might account for the experimental findings. 
Jonikaitis et al. (2013) conducted two slightly different variants of their experiment. In one, the attentional cue was turned off during the saccade (transient-cue task); in the other, the cue was shown until the end of the experiment (sustained-cue task). They determined the locus of attention by measuring performance in a discrimination task before and after the saccade at three different positions: the attention position (AP), where the attentional cue was presented; the lingering attention position (LAP), that is, the AP shifted by the saccade vector; and the remapped attention position (RAP), that is, the AP shifted by the reverse saccade vector. For both tasks, they found an attentional benefit at AP and RAP before saccade onset as well as an attentional benefit at AP and LAP after the saccade. They also observed a small but significant attentional benefit at RAP after the saccade in the sustained-cue task, which did not appear in the transient-cue task. Since both transient and sustained cues led to similar attentional effects, it is possible that attention is largely driven by cue onset. Further, previous recordings in V4 have shown a strong decay in the activation elicited by a permanently present stimulus (Fischer & Boch, 1985). Thus, we simulated the study using a brief presentation of the attention cue. We used the same spatial setup as Jonikaitis et al.: The model executes an 8° saccade to the right (modeled as described previously), lasting 53 ms. Before saccade onset, attention is directed to the AP, located 6° above the fixation point, by a cuing stimulus shown at this position 180 ms before saccade onset for 10 ms. To account for the latency in the visual pathway, the activation of Xr in the model starts 50 ms after stimulus onset. The spatial layout and temporal dynamics of the input signals (eye position, corollary discharge, and retinal signal) are shown in Figure 8. 
 
Figure 8
 
Setup for spatial updating of attention with cued attention with respect to the three input signals (compare Figure 4): At 180 ms before saccade onset, a stimulus (green star) is presented at the attention position (AP) at (0°, 6°) for 10 ms to cue attention. The retinal signal (green) is delayed for 50 ms to mimic the latency. After fixating the fixation point (FP) at (0°, 0°), the eyes reach the saccade target (ST) at (8°, 0°) in 53 ms. The red cross symbolizes the current eye position. The eye-position signal (red) starts to update 32 ms after saccade offset to the new location of the eyes. The corollary-discharge (CD) signal (blue) rises 86 ms prior to saccade onset and is active for 171 ms, with its peak activity reached 10 ms after saccade onset. Place markers for the remapped and the lingering attention position—RAP at (−8°, 6°) and LAP at (8°, 6°)—are shown. As the retinal signal and the origin of the CD signal are retinotopic, they shift with the eye movement. In contrast, the proprioceptive (PC) signal is head centered and therefore fixed during the saccade. The time (in milliseconds) is aligned to saccade onset. The still image shows the inputs marginalized over time.
While Jonikaitis et al. (2013) used a behavioral paradigm to estimate the amount of attention distributed across the saccade, we directly plot the LIP activity as a measure of attention. Figure 9 shows the activity of both maps at different times. Again, since the LIP maps are four-dimensional, for visualization we projected them to two two-dimensional planes representing either the horizontal or the vertical information of this map. In our model, each LIP map triggers an attention pointer in a retinotopic reference frame. At the beginning, before the saccade, the only inputs to the model are an eye-position (PC) signal encoding the current eye position at FP and a retinal signal to Xr encoding the cued attention position at AP. The retinal signal is combined multiplicatively with the PC signal in LIP PC, which leads to a single activation blob, at (0°, 0°) for horizontal and (0°, 6°) for vertical, respectively. Projecting the activity of the LIP map back to visual space shows the attention pointer (red blob) at the desired attention position, as shown in Figure 9A. Additionally, the cue generates an activity line in the second LIP map XbCD, which leads to an attention pointer at AP (covered by the red blob). Meanwhile, both LIP maps interact with each other via the neurons of Xh. All activity in XbPC and XbCD is summed up along each diagonal, fed into Xh, and projected back along the same diagonal. Thus, the activity of Xh is projected back into XbPC along the diagonal, where it is combined multiplicatively with the PC signal. This sustains the same position as initially triggered by the cue. Importantly, as the saccade is being planned the CD signal rises. It feeds into XbCD and is multiplicatively combined with the reentrant signal from Xh feeding in along the diagonal. This leads to an activity blob in LIP CD, at (8°, −8°) for horizontal and (0°, 6°) for vertical, respectively. 
Projected to visual space, this activity triggers a second attention pointer at RAP 8° to the left of AP (see Figure 9B, blue blob). This means that shortly before saccade onset, there are two spatial locations that exhibit attentional facilitation: the remapped attention position and the attention position itself. During saccade, the triggered attention pointers are shifted along with the eye movement as they are encoded in a retinotopic reference frame. Thus, after the saccade, the attention pointers are shifted by 8° to the right. Therefore, the attention pointer induced by XbPC is now at LAP and the attention pointer induced by XbCD is at AP (Figure 9C). As the CD signal decays after saccade onset, the activity in XbCD also decays, and the attention pointer induced by this map is gradually removed until it is completely extinguished. After the eyes have landed at the saccade target (ST), the PC signal updates to the new eye position, and thus the activity in XbPC updates as well—to (8°, −8°) for horizontal and (0°, 6°) for vertical, respectively—and the attention pointer triggered by this map is remapped back to AP (see Figure 9D). Supplementary Movie S2 shows the development of the activities in both LIP maps over the course of a single simulated trial. 
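The bookkeeping behind these pointer positions reduces to simple vector arithmetic. The sketch below is a hypothetical 1-D reduction of the task geometry along the horizontal axis, using the convention that screen position = retinal position + current eye position.

```python
# 1-D reduction of the task geometry (degrees along the horizontal axis).
AP, saccade = 0, 8
RAP, LAP = AP - saccade, AP + saccade  # remapped / lingering positions

eye_pre, eye_post = 0, saccade

# Retinotopic pointers held by the two LIP maps before the saccade:
pc_ret = AP - eye_pre      # PC-anchored pointer at the cued position
cd_ret = pc_ret - saccade  # CD-driven pointer, shifted against the saccade

# Screen position = retinal position + current eye position.
pre  = {"PC": pc_ret + eye_pre,  "CD": cd_ret + eye_pre}
post = {"PC": pc_ret + eye_post, "CD": cd_ret + eye_post}
# pre:  PC at AP, CD at RAP.  post: PC lingers at LAP, CD lands on AP.
```

Because the pointers themselves are retinotopic and unchanged across the saccade, the eye displacement alone turns the presaccadic (AP, RAP) pair into the postsaccadic (LAP, AP) pair.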
Figure 9
 
Simulation results of spatial updating of attention with cued attention for different time steps. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted, as well as the setup and the triggered attention pointers. Both LIP maps interact with each other by summing up all activations along each diagonal toward each neuron in Xh and projecting back along the same diagonal. The yellow dashed line shows the diagonal which has the largest activation. The symbols are identical to Figure 8. The red and blue blobs are the neural activities of both LIP maps projected into retinotopic space. The time (in milliseconds) is aligned to saccade onset. (A) Long before saccade, the attention pointer at the desired attention position (AP) is encoded by the LIP map for the proprioceptive (PC) signal (red blob) and for the corollary-discharge (CD) signal (blue blob, covered by the red blob). (B) Shortly before saccade, the CD signal rises and activates neurons in LIP CD, which trigger a second attention pointer at the remapped attention position (RAP, blue blob). (C) Shortly after saccade, both attention pointers are shifted according to the eye movement, as they are retinotopic. This leads to an attention pointer at the lingering attention position (LAP, red blob) and another one at AP (blue blob). (D) Long after saccade, the PC signal updates to the new eye position (saccade target; ST) and the CD signal decays, so there is again only one attention pointer triggered by LIP PC at AP (red blob).
Figure 10 shows the time course of attention at AP, RAP, and LAP. The three panels show the effect separately for the two LIP maps (red and blue lines, respectively) as well as the sum of the activities of both LIP maps (purple dashed lines) for each position. To obtain the activity of a LIP map at a certain spatiotopic position, we first project the LIP map to visual space and then read out the firing rate of the neuron that corresponds to the position. As the projections of the LIP maps are retinotopic, the neurons that correspond to one location (AP, RAP, or LAP) differ before and after the saccade. Thus, we plotted the activity of the two corresponding neurons in the pre- and postsaccadic conditions, separated by the saccade period (gray bar). RAP is attended only before saccade onset, driven by LIP CD cells. In contrast, LAP is attended only after saccade offset, through an attention pointer maintained by LIP PC cells. AP receives attention over the whole period: before the saccade, attention at AP is triggered by LIP PC and LIP CD; after the saccade, the origin of the attention pointer switches from LIP CD to LIP PC. 
Figure 10
 
Attentional effect at different spatial positions over time for cued attention. The three plots show the attentional effect at the three spatial positions RAP (remapped attention position, left), AP (attention position, middle), and LAP (lingering attention position, right) over time. The red lines mark attention triggered in the lateral intraparietal cortex (LIP) map modulated by the proprioceptive (PC) eye position; the blue lines mark attention triggered by the LIP corollary discharge (CD) map. Note that before and after saccade, we have to read out the firing rate of different neurons to obtain the LIP activity corresponding to the same spatial position. The purple dashed lines mark the overall attention at this position. The gray bar covers the period of the eye movement and separates the activity from the two different neurons. The time (in milliseconds) is aligned to saccade onset.
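The handover pattern in Figure 10 can be captured with the timing constants stated in the setup (saccade duration 53 ms, CD signal active from 86 ms before onset for 171 ms, PC signal updating 32 ms after saccade offset). The binary gains below are a deliberate simplification of the model's graded activities.

```python
import numpy as np

t = np.arange(-200, 300)  # ms, aligned to saccade onset
sacc_off = 53
cd_active = (t >= -86) & (t < -86 + 171)  # CD window ends at +85 ms
pc_updated = t >= sacc_off + 32           # PC updates at +85 ms
pre, post = t < 0, t >= sacc_off

attn_RAP = pre & cd_active                # remapped pointer, presaccadic only
attn_LAP = post & ~pc_updated             # lingering PC pointer after landing
attn_AP = pre | (post & (cd_active | pc_updated))  # CD-to-PC handover at AP
```

Note that the CD window and the PC update meet at +85 ms, so AP stays attended throughout the postsaccadic period, while no position covers the saccade itself (the gray bar in Figure 10).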
In top-down mode, we cue the desired location of attention by providing an endogenous signal via Xh, modeling top-down attention. To demonstrate that this version of the model explains the observations of Jonikaitis et al. (2013) equally well, we simulated the same experiment as before, but instead of a flashed cuing stimulus we introduced top-down attention directed to the attention position as a constant input to Xh (see Figure 4). The spatial layout and temporal dynamics of the input signals (here eye position, corollary discharge, and top-down attention) are shown in Figure 11. 
 
Figure 11
 
Setup for spatial updating of attention with top-down attention with respect to the three input signals (compare Figure 4): After fixating the fixation point (FP) at (0°, 0°), the eyes reach the saccade target (ST) at (8°, 0°) after 53 ms. The red cross symbolizes the current eye position. The eye-position signal (red) starts to update 32 ms after saccade offset to the new location of the eyes. The corollary-discharge (CD) signal (blue) rises 86 ms prior to saccade onset and is active for 171 ms, with its peak activity reached 10 ms after saccade onset. During the whole process, a top-down attention signal (orange) is introduced at the attention position (AP) at (0°, 6°). The CD signal is retinotopic, and thus it shifts with the eye movement. In contrast, the proprioceptive (PC) signal and attention signal are head centered and therefore fixed during the saccade. Markers indicate the remapped and the lingering attention position—RAP at (−8°, 6°) and LAP at (8°, 6°). The time (in milliseconds) is aligned to saccade onset. The still image shows the inputs marginalized over time.
Figure 12 shows the activity of both maps at different times. At the beginning, before the saccade, the only inputs to the model are an eye-position (PC) signal encoding the current eye position at FP and a top-down attention signal to Xh encoding the attention position at AP. The attention signal is projected backward into the LIP map XbPC along the diagonal and is combined multiplicatively with the PC signal, which leads to a single activation blob, at (0°, 0°) for horizontal and (0°, 6°) for vertical, respectively. This activity maintains an attention pointer (red blob) at the desired attention position, as shown in Figure 12A. Shortly before saccade onset, the CD signal rises and feeds into the second LIP map, XbCD, where it is multiplicatively combined with the attention signal from Xh feeding in along the diagonal. This leads to an activity blob in LIP CD, at (8°, −8°) for horizontal and (0°, 6°) for vertical, respectively, and a second attention pointer at RAP, 8° to the left of the attention position (see Figure 12B, blue blob). Thus, as in the previous simulation, the two attention pointers triggered by the LIP maps direct attention to RAP and to AP itself shortly before saccade onset. Likewise, shortly after the saccade the attention pointers are shifted to AP and LAP, respectively (see Figure 12C), and long after the saccade, when the CD signal has decayed and the PC signal has updated to the saccade target, there is again only one attention pointer, triggered by LIP PC and directing attention to AP (see Figure 12D). Supplementary Movie S3 shows the development of the activities in both LIP maps over the course of a single simulated trial. 
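The top-down mode differs only in where the pointer originates: instead of a retinal cue building up Xh, a constant Xh drives both LIP maps through the diagonal feedback. The 1-D sketch below uses assumed positions (attended head-centered position 15, eyes at 5, an 8-unit saccade) and the same illustrative h = r + e convention as before.

```python
import numpy as np

def gauss(center, n, sigma=1.0):
    return np.exp(-(np.arange(n) - center) ** 2 / (2 * sigma ** 2))

N = 20
eye, sacc = 5, 8
Xh = gauss(15, 2 * N - 1)     # constant top-down signal at head position 15
Xe_PC = gauss(eye, N)         # current eye position (tonic)
Xe_CD = gauss(eye + sacc, N)  # CD: future eye position (phasic)

# Feedback along the diagonals h = r + e into both LIP maps.
fb_PC = np.array([[Xh[r + e] * Xe_PC[e] for e in range(N)] for r in range(N)])
fb_CD = np.array([[Xh[r + e] * Xe_CD[e] for e in range(N)] for r in range(N)])

ret_PC = fb_PC.max(axis=1)  # pointer at retinal 15 - 5 = 10 (the AP)
ret_CD = fb_CD.max(axis=1)  # pointer at retinal 15 - 13 = 2 (AP - saccade)
```

The PC-gated pointer lands at the attended position in current retinal coordinates, while the CD-gated pointer appears shifted against the saccade direction, i.e., at the remapped attention position.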
Figure 12
 
Simulation results of spatial updating of attention with top-down attention for different time steps. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted, as well as the setup and the triggered attention pointers. Additionally, the diagonal on which the attention signal is fed into the LIP maps is plotted with a yellow dashed line. The symbols and definitions are identical to Figure 9. The time (in milliseconds) is aligned to saccade onset. (A) Long before saccade, the attention pointer at the desired attention position (AP) is encoded by the LIP map for the proprioceptive (PC) signal (red blob). (B) Shortly before saccade, the corollary-discharge (CD) signal rises and activates neurons in LIP CD, which trigger a second attention pointer at the remapped attention position (RAP, blue blob). (C) Shortly after saccade, both attention pointers are shifted according to the eye movement, as they are retinotopic. This leads to an attention pointer at the lingering attention position (LAP, red blob) and another one at AP (blue blob). (D) Long after saccade, the PC signal updates to the new eye position (saccade target; ST) and the CD signal decays, so there is again only one attention pointer triggered by LIP PC at AP (red blob).
Discussion
Our simulation results suggest that predictive remapping of receptive fields and spatial updating of attention are two sides of the same coin. Predictive remapping in our model arises from a relatively small and simple recurrent neural circuit. Although the model is likely a simplification of the biophysical implementation in the brain, much can be explained at this abstract level. The remapping results from the interaction of the phasic corollary-discharge signal and the tonic feedback from LIP, which means it is synchronized with the CD signal—that is, with saccade onset. The CD signal has recently been well explored in electrophysiological studies (Sommer & Wurtz, 2004, 2008). The model suggests that this signal is integrated into LIP gain fields. As the CD signal contains information about the future position of the eyes, it contributes a new reference to which the flashed stimulus is anchored. Thus, predictive remapping of receptive fields can be well understood as the integration of a second spatial reference system within a gain-field-like representation. As the gain fields operate in a retinocentric coordinate frame and the internal eye position updates after a saccade, the flashed stimulus is shifted against the saccade direction due to the future eye-position reference, leading to the activation of LIP neurons at the RF location if the stimulus has been flashed at the FRF. From the viewpoint of these LIP neurons, it looks as if their RFs have shifted forward. 
Regardless of the particular experimental paradigm, our model suggests the presence of two pointers around saccades which are linked to different eye-related signals, one to a proprioceptive eye position and the other to the corollary discharge. In the spatial-attention task, the corollary-discharge signal leads to a retinotopic attention pointer at the remapped attention position shortly before saccade onset, which is shifted with the saccade to the original position of attention and is finally removed shortly after the eyes have landed at the saccade target, consistent with Rolfs et al. (2011). Furthermore, the model explains the lingering of attention (Golomb et al., 2008; Golomb et al., 2010) by a late-updating, proprioceptive eye-position signal which triggers a retinotopic attention pointer first at the desired position. This pointer is then shifted by the saccade and finally, with the updated signal, remapped to the initial position without covering intermediate positions (Golomb, Marino, Chun, & Mazer, 2011). Hence, we have two attention pointers generated by LIP that are retinotopic. These pointers are shifted during the eye movement according to the saccade vector. Thus, during saccades there is one pointer which moves from the remapped attention position toward the actual attention position and one pointer moving away from the attention position. Consequently, while the eyes are moving, there is no attention directed to the original attention position itself, as shown by Yao, Ketkar, Treue, and Krishna (2016). Furthermore, Yao et al. concluded from their experimental results that the attention position must be attended again within 30 ms after the eyes have landed. In our model, attention is available at the attention position immediately after the saccade, as the remapped attention pointer is then shifted to this position. 
Taking into account that the decision process in their experiment itself takes time (processing the visual input and making the decision), our results fit these data. In addition to attentional cuing, we show that attention may also be induced by an endogenous attention signal. 
The functional roles of the two eye-related signals used in our model have been recently discussed in a review (Sun & Goldberg, 2016). As the (proprioceptive) eye-position signal is inaccurate after saccade, LIP gain fields may not be suitable to solve the spatial-accuracy problem. Therefore, Sun and Goldberg conclude that there are “two different representations of space: a rapid retinotopic one and a slower craniotopic one” (p. 80). In the model presented here, these two representations interact to trigger predictive remapping and spatial updating of attention. Both phenomena can be well explained by a lateral or reentrant network at the level of LIP, such that the presaccadic activation is recomputed in the future reference frame by means of the corollary discharge. For simplicity, we assumed two different neuron types, CD and PC eye-related cells. However, it is likely that the separation is not so complete, and there may be a continuum of cells. The joint effect of both eye-position signals may also explain the quite accurate eye-position decoding observed by Morris, Kubischik, Hoffmann, Krekelberg, and Bremmer (2012). 
Previous simulations with the 1-D model version indicate that mislocalization in total darkness around saccades and saccadic suppression of displacement can be accounted for by the same neural circuits used here (Ziesche et al., 2017; Ziesche & Hamker, 2011, 2014). Thus, as the 1-D version of this model was designed prior to the observation of updating of attention, from the model's point of view this observation can be considered an inherent prediction. This model prediction might be tested by suppressing or inactivating specific brain areas related to the two eye-position signals, similar to previous studies (Sommer & Wurtz, 2008). For example, if either the PC or the CD pathway is deactivated, one could investigate whether lingering and remapping of attention are still present. Additionally, the explicit role of LIP in updating of attention can be tested with such a method. Furthermore, other models of perisaccadic space perception can be examined to determine to what extent they account for updating of attention as well. For example, with models using only one extraretinal signal—like those of Pola (2004) or Binda, Cicchini, Burr, and Morrone (2009)—one can test whether two extraretinal signals, as proposed with our model, are actually necessary. 
Although the newly presented model and the previous 1-D model do a good job of accounting for several outstanding issues, and accurately replicate the behavioral findings of Jonikaitis et al. (2013), there are nevertheless a number of issues that have to be addressed in future work. First, the exact timing of the proprioceptive eye-position signal needs to be explored in more detail. While a recent study in LIP suggests that gain fields may update only after 150 ms (B. Y. Xu, Karachi, & Goldberg, 2012), which is about 80–100 ms later than in our model, Y. Xu et al. (2011) report an update after 60 ms in somatosensory cortex. No comparable data exist for humans. Our parameters are based on previous versions of the model (Ziesche et al., 2017; Ziesche & Hamker, 2011, 2014) and were chosen mainly to account for human behavioral data. Second, experimental studies vary with respect to which of the effects dominates the data, a point that requires future clarification. For instance, two recent studies investigating attentional effects in V4 (Marino & Mazer, 2018) and MT (Yao, Treue, & Krishna, 2018) seem to reveal contradictory results. Marino and Mazer (2018) recorded the responses of V4 cells in monkeys performing a saccade while attention was cued at different positions: in the receptive field, in the future receptive field, or at a control position. When attention was cued at the receptive field, the neuronal response decreased to control level after saccade onset, with no indication of a lingering effect. When attention was cued at the future receptive field, the response became greater than control level about 40 ms after saccade onset. Given neuronal latencies, this has been taken as evidence for predictive remapping. Yao et al. (2018) conducted a quite similar experiment but recorded neural responses in MT. 
In their study, the response became higher than control level after saccade offset (either right after saccade offset or 50 ms later, for two different monkeys), which is later than observed by Marino and Mazer in V4. Thus, Yao et al. (2018) concluded that MT shows no predictive remapping of attention. However, they did observe evidence for the lingering of attention. In another study, Yao et al. (2016) found neither remapping nor lingering of attention in a psychophysical study with humans, though these results may be biased by their experimental design, in which the remapped and lingering attention positions were not only task-irrelevant but, importantly, were never target locations at which the amount of attention was tested. Lisi, Cavanagh, and Zorzi (2015) suggest that spatial updating of attention depends on the setup of the experiment, namely whether or not visual objects that can serve as spatial landmarks are presented. In their experiments, the lingering effect vanished when a placeholder was shown at the attended location for the whole trial, whereas the ability to maintain attention at a spatiotopic location during saccades increased with the presence of placeholders. These findings may help to classify the various contradictory results and might be used to further elaborate our model of perisaccadic space perception. 
Acknowledgments
We thank J. Mazer for his comments and discussions. This work was supported by the US-German collaboration on computational neuroscience (BMBF 01GQ1409) and partly supported by the European Union's Seventh Framework Programme (FET, Neuro-Bio-Inspired Systems: Spatial Cognition) under grant agreement No. 600785. The publication costs of this article were funded by the German Research Foundation/DFG-392676956 and the Technische Universität Chemnitz in the funding program Open Access Publishing. The source code of the model implemented in Python is available at https://www.tu-chemnitz.de/informatik/KI/supplement/BergeltHamker2019/. The simulations were done with the neural simulator ANNarchy (Vitay, Dinkelbach, & Hamker, 2015). 
Commercial relationships: none. 
Corresponding author: Fred H. Hamker. 
Address: Artificial Intelligence, Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany. 
References
Andersen, R. A., Bracewell, R. M., Barash, S., Gnadt, J. W., & Fogassi, L. (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. The Journal of Neuroscience, 10 (4), 1176–1196.
Binda, P., Cicchini, G. M., Burr, D. C., & Morrone, M. C. (2009). Spatiotemporal distortions of visual perception at the time of saccades. The Journal of Neuroscience, 29 (42), 13147–13157.
Bisley, J. W. (2011). The neural basis of visual attention. The Journal of Physiology, 589 (1), 49–57.
Byrne, P., Becker, S., & Burgess, N. (2007). Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychological Review, 114 (2), 340–375.
Cassanello, C. R., & Ferrera, V. P. (2007). Computing vector differences using a gain field-like mechanism in monkey frontal eye field. The Journal of Physiology, 582 (2), 647–664.
Cavanagh, P., Hunt, A. R., Afraz, A., & Rolfs, M. (2010). Visual stability based on remapping of attention pointers. Trends in Cognitive Sciences, 14 (4), 147–153.
Duhamel, J.-R., Colby, C. L., & Goldberg, M. E. (1992, January 3). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255 (5040), 90–92.
Ferraina, S., Paré, M., & Wurtz, R. H. (2002). Comparison of cortico-cortical and cortico-collicular signals for the generation of saccadic eye movements. Journal of Neurophysiology, 87 (2), 845–858.
Fischer, B., & Boch, R. (1985). Peripheral attention versus central fixation: Modulation of the visual activity of prelunate cortical cells of the rhesus monkey. Brain Research, 345 (1), 111–123.
Goldberg, M. E., Bisley, J. W., Powell, K. D., & Gottlieb, J. (2006). Saccades, salience and attention: The role of the lateral intraparietal area in visual behavior. Progress in Brain Research, 155 (6), 157–175.
Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of spatial attention is retinotopic. The Journal of Neuroscience, 28 (42), 10654–10662.
Golomb, J. D., Marino, A. C., Chun, M. M., & Mazer, J. A. (2011). Attention doesn't slide: Spatiotopic updating after eye movements instantiates a new, discrete attentional locus. Attention, Perception & Psychophysics, 73 (1), 7–14.
Golomb, J. D., Pulido, V. Z., Albrecht, A. R., Chun, M. M., & Mazer, J. A. (2010). Robustness of the retinotopic attentional trace after eye movements. Journal of Vision, 10 (3): 19, 1–12, https://doi.org/10.1167/10.3.19. [PubMed] [Article]
Hamker, F. H. (2005). The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement. Cerebral Cortex, 15 (4), 431–447.
Hartmann, T. S., Zirnsak, M., Marquis, M., Hamker, F. H., & Moore, T. (2017). Two types of receptive field dynamics in area V4 at the time of eye movements? Frontiers in Systems Neuroscience, 11 (13), 1–7.
Joiner, W. M., Cavanaugh, J., FitzGibbon, E. J., & Wurtz, R. H. (2013). Corollary discharge contributes to perceived eye location in monkeys. Journal of Neurophysiology, 110 (10), 2402–2416.
Jonikaitis, D., Szinte, M., Rolfs, M., & Cavanagh, P. (2013). Allocation of attention across saccades. Journal of Neurophysiology, 109 (5), 1425–1434.
Lehky, S. R., Sereno, M. E., & Sereno, A. B. (2016). Characteristics of eye-position gain field populations determine geometry of visual space. Frontiers in Integrative Neuroscience, 9, 72.
Lisi, M., Cavanagh, P., & Zorzi, M. (2015). Spatial constancy of attention across eye movements is mediated by the presence of visual objects. Attention, Perception & Psychophysics, 77 (4), 1159–1169.
Marino, A. C., & Mazer, J. A. (2018). Saccades trigger predictive updating of attentional topography in area V4. Neuron, 98 (2), 429–438. e4.
Mathôt, S., & Theeuwes, J. (2010). Evidence for the predictive remapping of visual attention. Experimental Brain Research, 200 (1), 117–122.
Morris, A. P., Kubischik, M., Hoffmann, K.-P., Krekelberg, B., & Bremmer, F. (2012). Dynamics of eye-position signals in the dorsal visual system. Current Biology, 22 (3), 173–179.
Nakamura, K., & Colby, C. L. (2002). Updating of the visual representation in monkey striate and extrastriate cortex during saccades. Proceedings of the National Academy of Sciences, USA, 99 (6), 4026–4031.
Neupane, S., Guitton, D., & Pack, C. C. (2016a). Dissociation of forward and convergent remapping in primate visual cortex. Current Biology, 26 (12), R481–R492.
Neupane, S., Guitton, D., & Pack, C. C. (2016b). Two distinct types of remapping in primate cortical area V4. Nature Communications, 7, 10402.
Pola, J. (2004). Models of the mechanism underlying perceived location of a perisaccadic flash. Vision Research, 44 (24), 2799–2813.
Pouget, A., Deneve, S., & Duhamel, J.-R. (2002). A computational perspective on the neural basis of multisensory spatial representations. Nature Reviews Neuroscience, 3, 741–747.
Rolfs, M., Jonikaitis, D., Deubel, H., & Cavanagh, P. (2011). Predictive remapping of attention across eye movements. Nature Neuroscience, 14, 252–256.
Salinas, E., & Sejnowski, T. J. (2001). Gain modulation in the central nervous system: Where behavior, neurophysiology, and computation meet. Neuroscientist, 7 (5), 430–440.
Sommer, M. A., & Wurtz, R. H. (2004). What the brain stem tells the frontal cortex: I. Oculomotor signals sent from superior colliculus to frontal eye field via mediodorsal thalamus. Journal of Neurophysiology, 91 (3), 1381–1402.
Sommer, M. A., & Wurtz, R. H. (2006, November 16). Influence of the thalamus on spatial visual processing in frontal cortex. Nature, 444 (7117), 374–377.
Sommer, M. A., & Wurtz, R. H. (2008). Brain circuits for the internal monitoring of movements. Annual Review of Neuroscience, 31, 317–338.
Sun, L. D., & Goldberg, M. E. (2016). Corollary discharge and oculomotor proprioception: Cortical mechanisms for spatially accurate vision. Annual Review of Vision Science, 2 (1), 61–84.
Tanaka, M. (2007). Spatiotemporal properties of eye position signals in the primate central thalamus. Cerebral Cortex, 17 (7), 1504–1515.
Umeno, M. M., & Goldberg, M. E. (1997). Spatial processing in the monkey frontal eye field: I. Predictive visual responses. Journal of Neurophysiology, 78 (3), 1373–1383.
Van Wetter, S. M. C. I., & Van Opstal, A. J. (2008). Experimental test of visuomotor updating models that explain perisaccadic mislocalization. Journal of Vision, 8 (14), 1–22.
Vitay, J., Dinkelbach, H. U., & Hamker, F. H. (2015). ANNarchy: A code generation approach to neural simulations on parallel hardware. Frontiers in Neuroinformatics, 9, 19.
Walker, M. F., Fitzgibbon, E. J., & Goldberg, M. E. (1995). Neurons in the monkey superior colliculus predict the visual result of impending saccadic eye movements. Journal of Neurophysiology, 73 (5), 1988–2003.
Wang, X., Zhang, M., Cohen, I. S., & Goldberg, M. E. (2007). The proprioceptive representation of eye position in monkey primary somatosensory cortex. Nature Neuroscience, 10, 640–646.
Wurtz, R. H. (2008). Neuronal mechanisms of visual stability. Vision Research, 48 (20), 2070–2089.
Xu, B. Y., Karachi, C., & Goldberg, M. E. (2012). The postsaccadic unreliability of gain fields renders it unlikely that the motor system can use them to calculate target position in space. Neuron, 76, 1201–1209.
Xu, Y., Wang, X., Peck, C., & Goldberg, M. E. (2011). The time course of the tonic oculomotor proprioceptive signal in area 3a of somatosensory cortex. Journal of Neurophysiology, 106 (1), 71–77.
Yao, T., Ketkar, M., Treue, S., & Krishna, B. S. (2016). Visual attention is available at a task-relevant location rapidly after a saccade. eLife, 5, e18009.
Yao, T., Treue, S., & Krishna, B. S. (2018). Saccade-synchronized rapid attention shifts in macaque visual cortical area MT. Nature Communications, 9, 958.
Ziesche, A., Bergelt, J., Deubel, H., & Hamker, F. H. (2017). Pre- and post-saccadic stimulus timing in saccadic suppression of displacement—a computational model. Vision Research, 138, 1–11.
Ziesche, A., & Hamker, F. H. (2011). A computational model for the influence of corollary discharge and proprioception on the perisaccadic mislocalization of briefly presented stimuli in complete darkness. The Journal of Neuroscience, 31 (48), 17392–17405.
Ziesche, A., & Hamker, F. H. (2014). Brain circuits underlying visual stability across eye movements—converging evidence for a neuro-computational model of area LIP. Frontiers in Computational Neuroscience, 8 (25), 1–15.
Zirnsak, M., Lappe, M., & Hamker, F. H. (2010). The spatial distribution of receptive field changes in a model of peri-saccadic perception: Predictive remapping and shifts towards the saccade target. Vision Research, 50 (14), 1328–1337.
Zirnsak, M., Steinmetz, N. A., Noudoost, B., Xu, K. Z., & Moore, T. (2014, March 27). Visual space is compressed in prefrontal cortex before eye movements. Nature, 507 (7493), 504–507.
Appendix: Neuro-computational model
The neurons in each map follow different ordinary differential equations (ODEs). In our extension of the original 1-D model to the two-dimensional one, we use the same ODEs as stated in Ziesche and Hamker (2011), but with some simplifications and extensions. For all ODEs that compute firing rates r, we set negative values to zero. 
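To make the shared dynamics concrete, the following is a minimal sketch of how such a rectified rate ODE can be integrated with an explicit Euler scheme; the step size, time constant, and input value are illustrative placeholders (the actual simulations use the neural simulator ANNarchy).

```python
import numpy as np

def euler_step(r, rhs, dt=1.0, tau=10.0):
    """One explicit Euler step of tau * dr/dt = rhs(r); negative rates are set to zero."""
    r_new = r + (dt / tau) * rhs(r)
    return np.maximum(r_new, 0.0)  # rectification: firing rates cannot be negative

# Example: leaky integration toward a constant input I, i.e. tau * dr/dt = I - r
I = 0.8
r = np.zeros(5)
for _ in range(1000):
    r = euler_step(r, lambda r: I - r)
# r converges toward the fixed point I
```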
  •  
    Firing rates of neurons in map Xr representing the stimulus position in an eye-centered reference frame are given by  
    \begin{equation}\tau {d \over {dt}}{r^{{{Xr}}}} = {r^{{{Xr,\rm in}}}}\left( {1 + {{\left[ {{A^{{{Xr}}}} - {r^{{{Xr}}}}} \right]}^ + }\mathop \sum \limits^{{\rm{FB}}} {w^{{{X}}{{{b}}_{{\rm{PC}}}} \to {{Xr}}}}{r^{{{X}}{{{b}}_{{\rm{PC}}}}}}} \right) - {r^{{{Xr}}}}{\rm {,}}\end{equation}
    with \({\left[ {{A^{{{Xr}}}} - {r^{{{Xr}}}}} \right]^ + }\) a scalar saturation factor controlling the feedback from XbPC, and rXr,in the sensory bottom-up input created by a given stimulus:  
    \begin{equation}{r^{{{Xr,in}}}} = {S^{{{Xr}}}}{K^{{{Xr}}}}\exp {{ - {{\left\| {{p^{{{Xr}}}} - {c^{{{Xr}}}}} \right\|}^2}} \over {2{{\left( {{\sigma ^{{{Xr}}}}} \right)}^2}}}{\rm {.}}\end{equation}
    Here, SXr is the short-term synaptic depression simulated as in Hamker (2005), modeling the decaying response strength over time while the stimulus is presented; KXr defines the strength of the stimulus; and \({\left\| {{p^{{{Xr}}}} - {c^{{{Xr}}}}} \right\|^2}\) is the distance between stimulus position pXr and receptive-field center cXr for each neuron of the map. As we now have a two-dimensional visual scene, the stimulus position and the receptive-field center are two-dimensional.
  •  
    Firing rates of neurons in maps XePC and XeCD representing eye position in head and retinotopic eye displacement, respectively, are given by  
    \begin{equation}\eqalign{&\tau {d \over {dt}}{r^{{{X}}{{{e}}_{{\rm{PC}}}}}} = {r^{{{X}}{{{e}}_{{\rm{PC}}}}}}^{{\rm{,in}}} - {r^{{{X}}{{{e}}_{{\rm{PC}}}}}} \cr&\tau {d \over {dt}}{r^{{{X}}{{{e}}_{{\rm{CD}}}}}} = {r^{{{X}}{{{e}}_{{\rm{CD}}}}}}^{{\rm{,in}}} - {r^{{{X}}{{{e}}_{{\rm{CD}}}}}} \cr} {\rm {,}}\end{equation}
    where \({r^{{{X}}{{{e}}_{{\rm{PC}}}}}}^{{\rm{,in}}}\) and \({r^{{{X}}{{{e}}_{{\rm{CD}}}}}}^{,{\rm{in}}}\) are Gaussian input signals modeling the proprioceptive eye-position signal and the corollary-discharge signal, respectively:  
    \begin{equation}\eqalign{&{r^{{{X}}{{{e}}_{{\rm{PC}}}}}}^{{\rm{,in}}} = {K^{{{X}}{{{e}}_{{\rm{PC}}}}}}\exp {{ - {{\left\| {{p^{{{X}}{{{e}}_{{\rm{PC}}}}}} - {c^{{{X}}{{{e}}_{{\rm{PC}}}}}}} \right\|}^2}} \over {2{{\left( {{\sigma ^{{{X}}{{{e}}_{{\rm{PC}}}}}}} \right)}^2}}} \cr&{r^{{{X}}{{{e}}_{{\rm{CD}}}}}}^{,{\rm{in}}} = T{C^{{{X}}{{{e}}_{{\rm{CD}}}}}}\left( t \right){K^{{{X}}{{{e}}_{{\rm{CD}}}}}}\exp {{ - {{\left\| {{p^{{{X}}{{{e}}_{{\rm{CD}}}}}} - {c^{{{X}}{{{e}}_{{\rm{CD}}}}}}} \right\|}^2}} \over {2{{\left( {{\sigma ^{{{X}}{{{e}}_{{\rm{CD}}}}}}} \right)}^2}}} .\cr}\end{equation}
    \({K^{{{X}}{{{e}}_{{\rm{PC}}}}}}\) and \({K^{{{X}}{{{e}}_{{\rm{CD}}}}}}\) are the strengths of the corresponding signals, \({\left\| {{p^{{{X}}{{{e}}_{{\rm{PC}}}}}} - {c^{{{X}}{{{e}}_{{\rm{PC}}}}}}} \right\|^2}\) is the distance between eye position \({p^{{{X}}{{{e}}_{{\rm{PC}}}}}}\) and center of eye-position tuning \({c^{{{X}}{{{e}}_{{\rm{PC}}}}}}\) for each neuron of map XePC, and likewise \({\left\| {{p^{{{X}}{{{e}}_{{\rm{CD}}}}}} - {c^{{{X}}{{{e}}_{{\rm{CD}}}}}}} \right\|^2}\) is the distance between eye displacement \({p^{{{X}}{{{e}}_{{\rm{CD}}}}}}\) and center of eye-displacement tuning \({c^{{{X}}{{{e}}_{{\rm{CD}}}}}}\) for each neuron of map XeCD. Again, the positions are now two-dimensional. \(T{C^{{{X}}{{{e}}_{{\rm{CD}}}}}}\left( t \right)\) models the time course of the phasic corollary-discharge signal, namely its rise and decay around saccade onset:  
    \begin{equation}T{C^{{{X}}{{{e}}_{{\rm{CD}}}}}}\left( t \right) = \left\{ {\matrix{ {} \hfill&{\exp {{ - {{\left\| {{t^{{\rm{CD}}}} - t} \right\|}^2}} \over {2{{\left( {{\sigma ^{{\rm{CD,rise}}}}} \right)}^2}}},{\rm{\ if\ }}t \le {t^{{\rm{CD}}}}} \hfill \cr {} \hfill&{\exp {{ - {{\left\| {{t^{{\rm{CD}}}} - t} \right\|}^2}} \over {2{{\left( {{\sigma ^{{\rm{CD,decay}}}}} \right)}^2}}},{\rm{\ if\ }}t \gt {t^{{\rm{CD}}}}} \hfill \cr } } \right.{\rm {,}}\end{equation}
    with tCD the time at which the CD signal reaches its maximum. In our model, this maximum occurs 10 ms after saccade onset, consistent with data from Ferraina, Paré, and Wurtz (2002).
  •  
    Firing rates of neurons in map XeFEF representing the eye displacement in a head-centered reference frame are given by  
    \begin{equation}\displaylines{ \tau {d \over {dt}}{r^{{{X}}{{{e}}_{{\rm{FEF}}}}}} = \mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{CD}}}} \to {{X}}{{{e}}_{{\rm{FEF}}}}}}{r^{{{X}}{{{e}}_{{\rm{CD}}}}}}\mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{PC}}}} \to {{X}}{{{e}}_{{\rm{FEF}}}}}}{r^{{{X}}{{{e}}_{{\rm{PC}}}}}} \cr - {r^{{{X}}{{{e}}_{{\rm{FEF}}}}}}w_{{\rm{inh}}}^{{{X}}{{{e}}_{{\rm{FEF}}}}}\mathop \sum \limits^{{\rm{inh}}} {r^{{{X}}{{{e}}_{{\rm{FEF}}}}}} - {r^{{{X}}{{{e}}_{{\rm{FEF}}}}}} \cr} {\rm {.}}\end{equation}
    In contrast to the original ODE, we simplified the firing rate for XeFEF by removing the saturation term as well as the gain-modulation term. Thus, this map now combines the PC and CD signals as a classical basis-function map rather than a gain-modulation map.
  •  
    Firing rates of neurons in map XbPC representing the joint representation of stimulus position and eye position are given by  
    \begin{equation}\displaylines{ \tau {d \over {dt}}{r^{{{X}}{{{b}}_{{\rm{PC}}}}}} = \mathop \sum \limits^{{\rm{FF}}} {w^{Xr \to {{X}}{{{b}}_{{\rm{PC}}}}}}{r^{{{Xr}}}}\left( {{{\left[ {{A^{{{X}}{{{b}}_{{\rm{PC}}}}}} - \max {r^{{{X}}{{{b}}_{{\rm{PC}}}}}}} \right]}^ + }\mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{PC}}}} \to {{X}}{{{b}}_{{\rm{PC}}}}}}{r^{{{X}}{{{e}}_{{\rm{PC}}}}}}} \right) \cr + \mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{PC}}}} \to {{X}}{{{b}}_{{\rm{PC}}}}}}{r^{{{X}}{{{e}}_{{\rm{PC}}}}}}\mathop \sum \limits^{{\rm{FB}}} {w^{{{Xh}} \to {{X}}{{{b}}_{{\rm{PC}}}}}}{r^{{{Xh}}}} + \mathop \sum \limits^{{\rm{exc}}} w_{{\rm{exc}}}^{{{X}}{{{b}}_{{\rm{PC}}}}}{r^{{{X}}{{{b}}_{{\rm{PC}}}}}} - \left( {{r^{{{X}}{{{b}}_{{\rm{PC}}}}}} + {D^{{{X}}{{{b}}_{{\rm{PC}}}}}}} \right)w_{{\rm{inh}}}^{{{X}}{{{b}}_{{\rm{PC}}}}}\mathop \sum \limits^{{\rm{inh}}} {r^{{{X}}{{{b}}_{{\rm{PC}}}}}} - {r^{{{X}}{{{b}}_{{\rm{PC}}}}}} \cr} {\rm {,}}\end{equation}
    with \({\left[ {{A^{{{X}}{{{b}}_{{\rm{PC}}}}}} - \max {r^{{{X}}{{{b}}_{{\rm{PC}}}}}}} \right]^ + }\) a scalar saturation factor controlling the feedforward input from XePC, and \(\left( {{r^{{{X}}{{{b}}_{{\rm{PC}}}}}} + {D^{{{X}}{{{b}}_{{\rm{PC}}}}}}} \right)\) a scalar regulating the global inhibition. For the firing rates of XbPC we added an additional feedback signal that combines the PC signal with the signal of the intermediate cells of Xh. This feedback signal is identical to the feedback signal of the other LIP map XbCD. Additionally, we removed the perisaccadic suppression factor on the input from XePC, as otherwise the feedback from Xh would be reduced over a certain period of time due to the multiplicative interaction with the PC signal.
  •  
    Firing rates of neurons in map XbCD representing the joint representation of stimulus position and eye displacement are given by  
    \begin{equation}\displaylines{ \tau {d \over {dt}}{r^{{{X}}{{{b}}_{{\rm{CD}}}}}} = \mathop \sum \limits^{{\rm{FF}}} {w^{{{Xr}} \to {{X}}{{{b}}_{{\rm{CD}}}}}}{r^{{{Xr}}}}\left( {1 + {{\left[ {{A^{{{X}}{{{b}}_{{\rm{CD}}}}}} - {r^{{{X}}{{{b}}_{{\rm{CD}}}}}}} \right]}^ + }\mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{FEF}}}} \to {{X}}{{{b}}_{{\rm{CD}}}}}}{r^{{{X}}{{{e}}_{{\rm{FEF}}}}}}} \right) \cr + \mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{e}}_{{\rm{FEF}}}} \to {{X}}{{{b}}_{{\rm{CD}}}}}}{r^{{{X}}{{{e}}_{{\rm{FEF}}}}}}\mathop \sum \limits^{{\rm{FB}}} {w^{{{Xh}} \to {{X}}{{{b}}_{{\rm{CD}}}}}}{r^{{{Xh}}}} - \left( {{r^{{{X}}{{{b}}_{{\rm{CD}}}}}} + {D^{{{X}}{{{b}}_{{\rm{CD}}}}}}} \right)w_{{\rm{inh}}}^{{{X}}{{{b}}_{{\rm{CD}}}}}\mathop \sum \limits^{{\rm{inh}}} {r^{{{X}}{{{b}}_{{\rm{CD}}}}}} - {r^{{{X}}{{{b}}_{{\rm{CD}}}}}} \cr} {\rm {,}}\end{equation}
    with \({\left[ {{A^{{{X}}{{{b}}_{{\rm{CD}}}}}} - {r^{{{X}}{{{b}}_{{\rm{CD}}}}}}} \right]^ + }\) a scalar saturation factor controlling the feed-forward input from XeFEF, and \(\left( {{r^{{{X}}{{{b}}_{{\rm{CD}}}}}} + {D^{{{X}}{{{b}}_{{\rm{CD}}}}}}} \right)\) a scalar regulating the global inhibition.
  •  
    Firing rates of neurons in map Xh are given by  
    \begin{equation}\tau {d \over {dt}}{r^{{{Xh}}}} = {S^{{{Xh}}}}{I^{{{Xh}}}} + \mathop \sum \limits^{{\rm{exc}}} w_{{\rm{exc}}}^{{{Xh}}}{r^{{{Xh}}}} - \left( {{r^{{{Xh}}}} + {D^{{{Xh}}}}} \right)w_{{\rm{inh}}}^{{{Xh}}}\mathop \sum \limits^{{\rm{inh}}} {r^{{{Xh}}}} - {r^{{{Xh}}}}{\rm {,}}\end{equation}
    with \(\left( {{r^{{{Xh}}}} + {D^{{{Xh}}}}} \right)\) a scalar regulating the global inhibition, and IXh the input consisting of the feed-forward input from both LIP maps and a newly introduced attentional top-down signal rXh,in:  
    \begin{equation}\eqalign{&{I^{{{Xh}}}} = \mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{b}}_{{\rm{PC}}}} \to {{Xh}}}}{r^{{{X}}{{{b}}_{{\rm{PC}}}}}} + \mathop \sum \limits^{{\rm{FF}}} {w^{{{X}}{{{b}}_{{\rm{CD}}}} \to {{Xh}}}}{r^{{{X}}{{{b}}_{{\rm{CD}}}}}} + {r^{{Xh{\rm,in}}}} \cr&{r^{{Xh{\rm,in}}}} = {K^{{{Xh}}}}\exp {{ - {{\left\| {{p^{{{Xh}}}} - {c^{{{Xh}}}}} \right\|}^2}} \over {2{{({\sigma ^{{{Xh}}}})}^2}}} \cr} {\rm {,}}\end{equation}
    where KXh denotes the strength of the attention signal and \({\left\| {{p^{{{Xh}}}} - {c^{{{Xh}}}}} \right\|^2}\) the distance between attention position pXh and center of attention-position tuning cXh for each neuron of map Xh, each with two dimensions. SXh is the synaptic suppression simulated as in the study by Hamker (2005):  
    \begin{equation}\displaylines{ {S^{{{Xh}}}} = 1 - d_s^{{{Xh}}}s \cr \tau _s^{{{Xh}}}{d \over {dt}}s = {I^{{{Xh}}}} - s \cr} {\rm {.}}\end{equation}
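The phasic corollary-discharge time course above can be sketched as a two-sided Gaussian with its peak fixed at 10 ms after saccade onset. The width parameters sigma_rise and sigma_decay below are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def tc_cd(t, t_cd=10.0, sigma_rise=15.0, sigma_decay=40.0):
    """Time course of the CD signal: Gaussian rise before the peak at
    t_cd (ms after saccade onset) and a separate Gaussian decay afterwards."""
    sigma = sigma_rise if t <= t_cd else sigma_decay
    return np.exp(-(t - t_cd) ** 2 / (2.0 * sigma ** 2))
```

With sigma_decay larger than sigma_rise, the signal builds up quickly before the saccade and fades more slowly afterwards, as required for the postsaccadic remapped pointer.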
As the dimension of each map has to be doubled compared to the definition of the maps in Ziesche and Hamker (2011), we reduced the number of neurons in each dimension to compensate for the higher computational effort. More precisely, we have 21 neurons for the horizontal and 16 neurons for the vertical, covering a rectangular visual field of 40° × 30°. Thus, Xr, XePC, XeCD, and Xh are now two-dimensional, containing 21 × 16 neurons, and XeFEF, XbPC, and XbCD are four-dimensional, with 21 × 16 × 21 × 16 neurons. 
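The map layout described above can be written down as plain arrays. The grid convention in deg_to_index below (a centered 40° × 30° field sampled by 21 × 16 neurons) is our assumption for illustration; the function name is hypothetical.

```python
import numpy as np

# 21 x 16 neurons covering a rectangular 40 deg x 30 deg visual field
N_H, N_V = 21, 16
Xr = np.zeros((N_H, N_V))               # 2-D maps: Xr, XePC, XeCD, Xh
Xb_PC = np.zeros((N_H, N_V, N_H, N_V))  # 4-D maps: XeFEF, XbPC, XbCD

def deg_to_index(x_deg, y_deg):
    """Map a position in degrees (x in [-20, 20], y in [-15, 15],
    centered field) to the nearest neuron index."""
    i = int(round((x_deg + 20.0) / 40.0 * (N_H - 1)))
    j = int(round((y_deg + 15.0) / 30.0 * (N_V - 1)))
    return i, j
```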
The connections between the different maps are defined through the different weights used in the ODEs. The connection weights follow Gaussian functions dependent on the distance between the position of the neuron in the presynaptic and postsynaptic maps:  
\begin{equation}{w^{{\rm{pre}} \to {\rm{post}}}} = {K^{{\rm{pre}} \to {\rm{post}}}}\exp \left( { - {{{\rm{distance}}} \over {{{\left( {{\sigma ^{{\rm{pre}} \to {\rm{post}}}}} \right)}^2}}}} \right){\rm {,}}\end{equation}
with \({w^{{\rm{pre}} \to {\rm{post}}}}\) set to 0 if the value is lower than 0.001. The measurement of the distance differs among the connections due to the different dimensions of the maps and the different ways of connecting maps. There are three types of connections: horizontal, vertical, and diagonal. The horizontal and vertical connections are used to connect two-dimensional maps with four-dimensional maps where two of the four dimensions are disregarded (either the first two or the last two). For example, we connect Xr with XbPC horizontally, independent of the last two dimensions of XbPC. That means that the distance used for the Gaussian function depends only on the position of the neuron in Xr and the first two position parameters of the neuron in XbPC. More precisely, suppose we want to connect neuron (i, j) of Xr with neuron (k, l, m, n) of XbPC. The distance between these neurons is then calculated by  
\begin{equation}{\rm{distance}} = {\left\| {i - k} \right\|^2} + {\left\| {j - l} \right\|^2}{\rm {.}}\end{equation}
 
The weight between neuron (i, j) and neuron (k, l, m, n) is then  
\begin{equation}w_{(i,j),(k,l,m,n)}^{{{Xr}} \to {{X}}{{{b}}_{{\rm{PC}}}}} = {K^{{{Xr}} \to {{X}}{{{b}}_{{\rm{PC}}}}}}\exp \left( { - {{{{\left\| {i - k} \right\|}^2} + {{\left\| {j - l} \right\|}^2}} \over {{{\left( {{\sigma ^{{{Xr}} \to {{X}}{{{b}}_{{\rm{PC}}}}}}} \right)}^2}}}} \right){\rm {.}}\end{equation}
 
This connection pattern is also used to connect Xr with XbCD, XeCD with XeFEF, and XbPC with Xr. 
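The horizontal weight computation can be sketched in a few lines of Python. This is a minimal illustration, not the published implementation: the function names are ours, and `K` and `sigma` stand in for the map-specific parameters of the model.

```python
import math

def gaussian_weight(distance, K, sigma, cutoff=0.001):
    """Gaussian connection weight; values below 0.001 are set to 0,
    as stated in the text."""
    w = K * math.exp(-distance / sigma ** 2)
    return w if w >= cutoff else 0.0

def horizontal_weight(i, j, k, l, K, sigma):
    """Weight between neuron (i, j) of a 2-D map (e.g., Xr) and neuron
    (k, l, m, n) of a 4-D map (e.g., XbPC). Only the first two indices
    of the 4-D map enter the distance, so m and n do not appear."""
    distance = (i - k) ** 2 + (j - l) ** 2
    return gaussian_weight(distance, K, sigma)
```

The cutoff effectively prunes long-range connections, keeping the resulting weight matrices sparse.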
Similarly, a vertical connection between a two-dimensional map and a four-dimensional map makes use of only the last two position parameters of the neurons in the four-dimensional map to calculate the distance. Thus, the distance between neuron (i, j) of a two-dimensional map and neuron (k, l, m, n) of a four-dimensional map is  
\begin{equation}\mathrm{distance} = \left\|i - m\right\|^{2} + \left\|j - n\right\|^{2}.\end{equation}
 
Such a vertical connection is used to connect XePC with XeFEF and with XbPC. 
These definitions of the connection patterns allow the interpretation of the four dimensions of a map as follows: The first two dimensions represent the horizontal and the vertical information of a horizontally connected input, and the last two dimensions represent the horizontal and the vertical information of a vertically connected input. 
For a diagonal connection pattern we use both horizontal and vertical information of the four-dimensional map to connect it with a two-dimensional map. We use such a connection pattern to connect Xh with the LIP maps XbPC and XbCD and vice versa. The distance between neuron (i, j) of Xh and neuron (k, l, m, n) of an LIP map is defined as  
\begin{equation}\mathrm{distance} = \left\|i - k - m\right\|^{2} + \left\|j - l - n\right\|^{2}.\end{equation}
 
For the remaining connection pattern between XeFEF and XbCD that connects two four-dimensional maps, we read out the presynaptic map diagonally and connect this vertically with the postsynaptic map—that is, we use all four dimensions of XeFEF and only the last two dimensions of XbCD to calculate the distance between the neurons. More precisely, if we want to connect neuron (i, j, k, l) of XeFEF with neuron (m, n, o, p) of XbCD, the distance between these neurons is  
\begin{equation}\mathrm{distance} = \left\|i + k - o\right\|^{2} + \left\|j + l - p\right\|^{2}.\end{equation}
 
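The four distance measures defined above can be collected in one short sketch, assuming the index conventions of the text (the function names are ours, introduced only for illustration):

```python
def dist_horizontal(i, j, k, l, m, n):
    # 2-D neuron (i, j) to 4-D neuron (k, l, m, n): first two dimensions only
    return (i - k) ** 2 + (j - l) ** 2

def dist_vertical(i, j, k, l, m, n):
    # 2-D neuron (i, j) to 4-D neuron (k, l, m, n): last two dimensions only
    return (i - m) ** 2 + (j - n) ** 2

def dist_diagonal(i, j, k, l, m, n):
    # Xh neuron (i, j) to LIP neuron (k, l, m, n): the diagonal readout
    # compares the head-centered position against the sum of both index pairs
    return (i - k - m) ** 2 + (j - l - n) ** 2

def dist_fef_to_cd(i, j, k, l, m, n, o, p):
    # XeFEF neuron (i, j, k, l), read out diagonally, connected vertically
    # with the last two dimensions (o, p) of XbCD neuron (m, n, o, p)
    return (i + k - o) ** 2 + (j + l - p) ** 2
```

Note that the diagonal and XeFEF patterns yield zero distance (and hence maximal weight) exactly along the diagonals highlighted in the figures.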
The lateral excitatory connections used in maps XbPC and Xh are defined as Gaussian functions of the distance between the positions of the neurons, considering all dimensions of the map. Thus, the calculation of the weights follows  
\begin{equation}w_{\mathrm{exc}}^{X^{*}} = K_{\mathrm{exc}}^{X^{*}}\exp\left(-\frac{\mathrm{distance}_{\mathrm{exc}}}{\sigma^{2}}\right),\end{equation}
with  
\begin{equation}\mathrm{distance}_{\mathrm{exc}} = \begin{cases}\left\|i - k\right\|^{2} + \left\|j - l\right\|^{2}, & \text{for } (i,j),(k,l) \in Xh,\\ \left\|i - m\right\|^{2} + \left\|j - n\right\|^{2} + \left\|k - o\right\|^{2} + \left\|l - p\right\|^{2}, & \text{for } (i,j,k,l),(m,n,o,p) \in Xb_{\mathrm{PC}}.\end{cases}\end{equation}
 
To reduce the computational effort, the lateral inhibitory connections with fixed weights \(w_{\mathrm{inh}}^{X^{*}}\) are not created as explicit connections but are instead computed from the mean over all firing rates multiplied by the total number of neurons:  
\begin{equation}w_{\mathrm{inh}}^{X^{*}}\sum^{\mathrm{inh}} r^{X^{*}} = w_{\mathrm{inh}}^{X^{*}}\,\mathrm{mean}\left(r^{X^{*}}\right) \times \mathrm{number\_of\_neurons}.\end{equation}
 
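This shortcut works because a uniform all-to-all inhibitory input is just the fixed weight times the sum of all rates, and the sum equals the mean times the neuron count. A minimal numerical check (the map size matches the model; the weight value and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.random((21, 16, 21, 16))  # firing rates of one 4-D map
w_inh = 0.05                      # fixed inhibitory weight (illustrative)

# Explicit global inhibition: every neuron would receive w_inh times
# the sum over all firing rates ...
explicit = w_inh * r.sum()

# ... which the model computes cheaply as mean rate times neuron count
shortcut = w_inh * r.mean() * r.size

assert np.isclose(explicit, shortcut)
```

The saving is that no inhibitory weight matrix has to be stored or traversed; one mean per map suffices at each time step.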
Table 1 lists all parameters whose values have changed in comparison to Ziesche and Hamker (2011). The adaptation of the values is mainly due to the three major changes in the model: the reduction of the number of neurons, the simplification of XeFEF, and the addition of feedback from Xh to XbPC. The new parameter values for this feedback connection are listed in the last two rows and are equal to those for the feedback connection from Xh to XbCD. 
Table 1
 
Parameters whose values have changed in comparison to Ziesche and Hamker (2011). Notes: The last two rows give new parameters.
Supplementary material
Supplementary Movie S1. Simulation results of the two predictive-remapping tasks over time. The activity of both maps of lateral intraparietal cortex (LIP) split into two two-dimensional planes is plotted representing horizontal and vertical information. Additionally, the setup with the current eye position (red cross) and stimulus (green star), as well as the activities of both LIP maps projected to visual space, are plotted. The yellow dashed line shows the diagonal which has the largest activation. The symbols and definitions are identical to Figure 6. The time (in milliseconds) is aligned to saccade onset. (Left) In the fixation task, a stimulus is presented in the receptive field (RF), the eyes fixate at the fixation point (FP), and no saccade is planned. Due to the multiplicative interaction of eye-position signal and retinal signal representing the stimulus, there is a single activation blob in the LIP proprioceptive (PC) eye position map. Projected to visual space, this results in activity at RF (red blob). In the LIP corollary discharge (CD) map, there is an activity line indicating the position of the stimulus, which also leads to activity at RF (covered by red blob). (Right) In the saccade task, the stimulus is presented in the future receptive field (FRF) and a saccade is going to be executed from FP to the saccade target (ST). In LIP PC, there is an activity blob at the crossing of the current eye position and the stimulus position, similar to the fixation task. Thus, we see activity at FRF triggered by LIP PC (red blob). Shortly before saccade onset, the CD signal rises, which produces an additional peak of activity along the activity line from the stimulus signal at ST in LIP CD. Additionally, there is a second activity blob resulting from the interaction of the CD signal with the feedback signal from Xh along the yellow dashed diagonal. This leads to a second activity blob at RF (blue blob). 
Both activity blobs are encoded in a retinotopic reference frame, and thus they move according to the eye movement. After the saccade, the CD signal decays and with it the activity in LIP CD (and hence the blue blob, now at FRF). Additionally, the activity blob by LIP PC moves back to FRF as the PC signal updates to the correct postsaccadic eye position. There it decays as the retinal signal decays after turning off the stimulus. 
Supplementary Movie S2. Simulation results of spatial updating of attention with cued attention over time. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted as well as the setup and the triggered attention pointers. The yellow dashed line shows the diagonal which has the largest activation. The symbols and definitions are identical to Figure 9. The time (in milliseconds) is aligned to saccade onset. At the beginning, the activity in the LIP proprioceptive (PC) eye position map resulting from the interaction of the PC signal and the retinal signal triggers an attention pointer at AP (red blob). By the time the retinal signal has decayed, the attended position is encoded in Xh. Shortly before saccade onset, the activity in LIP PC results from the interaction of the PC signal and the activity of Xh projected back to this map along the yellow dashed diagonal. Additionally, there is a second attention pointer triggered at the remapped attention position (RAP) by the LIP corollary discharge (CD) map through the interaction of the rising CD signal and the feedback from Xh along the diagonal (blue blob). Both attention pointers are shifted with the eye movement, as they are retinotopic. After the saccade, the CD signal decays and with it the activity in LIP CD, as well as the second attention pointer. Furthermore, the PC signal updates to the correct postsaccadic eye position and thus the attention pointer triggered by LIP PC updates to the correct position (AP). 
Supplementary Movie S3. Simulation results of spatial updating of attention with top-down attention over time. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted as well as the setup and the triggered attention pointers. Additionally, the diagonal on which the attention signal is fed into the LIP maps is plotted with a yellow dashed line. The symbols and definitions are identical to Figure 12. The time (in milliseconds) is aligned to saccade onset. At the beginning, the activity in the LIP proprioceptive (PC) eye position map resulting from the interaction of the PC signal and the attention signal triggers an attention pointer at the attention position (AP; red blob). Shortly before saccade onset, a second attention pointer is triggered at the remapped attention position (RAP) by the LIP corollary discharge (CD) map through the interaction of the rising CD signal and the attention signal (blue blob). Both attention pointers are shifted with the eye movement, as they are retinotopic. After the saccade, the CD signal decays and with it the activity in LIP CD, as well as the second attention pointer. Furthermore, the PC signal updates to the correct postsaccadic eye position and thus the attention pointer triggered by LIP PC updates to the correct position (AP). 
Footnotes
 Amended August 20, 2019: The movie files were replaced with new files to improve reader experience.
Figure 1
 
Example of how a linear combination of sigmoidal gain fields approximates a Gaussian-like function. The gray curves illustrate four different sigmoid functions si(x), and the red curve represents a linear combination of these functions, namely the weighted sum of the four sigmoid functions minus a constant value: \(\left( {\sum\nolimits_i {{w_i}} {s_i}(x) - k} \right)\). Here the linear combination is approximately a Gaussian function.
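The approximation illustrated in Figure 1 can be reproduced numerically. The centers, slopes, and the subtracted constant below are hand-picked for illustration and are not parameters of the model:

```python
import numpy as np

def sigmoid(x, center, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

x = np.linspace(-10, 10, 201)
# Two rising sigmoids (left flank) plus two falling ones (right flank),
# minus a constant, yield a bell-shaped curve peaking at x = 0
combo = (sigmoid(x, -2.0, 2.0) + sigmoid(x, -1.0, 2.0)
         + sigmoid(-x, -2.0, 2.0) + sigmoid(-x, -1.0, 2.0)) - 2.0
```

Plotting `combo` against `x` gives a Gaussian-like bump: the rising sigmoids build the left flank, the falling ones the right flank, and the constant removes the shared plateau.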
Figure 2
 
Examples of basis-function networks. The inset shows the setup: The eye fixates at 30° while a stimulus (depicted as green star) is presented at 20° (both positions in head-centered coordinates). Thus, in a retinotopic reference frame the stimulus is at −10°. The neurons in the central map, the basis-function map, are aligned to the sensitivity of their input, which is a topological but not a cortical definition of space. Left: Basis-function network with tonic eye-position signal. Right: Basis-function network with phasic eye-position signal. The two signals—head-centered eye position xetonic (left) and xephasic (right) and retinotopic stimulus response xr—feed into the basis map xb and are combined according to the corresponding equation. This leads to an activity blob centered at (30°, −10°) and an additional band of activity centered at (*, −10°) in the phasic case (right). By reading out the activity along the diagonal and projecting it to the output map xh, we receive a firing-rate pattern centered at the head-centered position of the stimulus. The yellow dashed lines symbolize the interactions for the largest activity—that is, how the inputs xe* and xr are fed into xb and how the activity of xb is read out and projected to output xh.
Figure 3
 
Example of typical temporal dynamics of input signals. At 150 ms before saccade onset, a stimulus is presented for 80 ms. The retinal signal starts 50 ms after stimulus onset. The signal decays with Gaussian distribution for the duration of stimulus presentation and linearly afterwards. The corollary-discharge (CD) signal rises before saccade onset, peaks 10 ms after saccade onset, and decays. The proprioceptive (PC) signal updates after saccade offset by decaying at the presaccadic position (fixation point; FP) and increasing at the postsaccadic position (saccade target; ST). If we use top-down attention, the signal at the attention position (AP) is active throughout the whole trial.
Figure 4
 
Structure of the neuro-computational model. The four input signals—retinotopic stimulus position Xr, proprioceptive eye position (PC) XePC, corollary discharge (CD) XeCD, and attention Xh—are fed into two maps of lateral intraparietal cortex (LIP; XbPC, XbCD) which are gain modulated by either the PC signal or the CD signal. While the PC signal encodes the eye position in head-centered coordinates, the CD signal is originally eye centered and must be first transferred into a head-centered reference frame. This is done in map XeFEF using the eye-position signal from XePC. The activities of all simulated LIP neurons are combined in map Xh, and from there fed back into both LIP maps. The interaction of the maps is summarized in the structural equations. The mathematical description of the model, including all equations, can be found in the Appendix.
Figure 6
 
Simulation results of the two predictive-remapping tasks. The activity of both maps of lateral intraparietal cortex (LIP) projected onto two two-dimensional planes is plotted, representing horizontal and vertical information. The yellow dashed line shows the diagonal which has the largest activation. The symbols are identical to Figure 5. The red and blue blobs are the neural activities of both LIP maps projected into retinotopic space. (A) In the fixation task, a stimulus is presented in the receptive field (RF), the eyes fixate at the fixation point (FP), and no saccade is planned. Due to the multiplicative interaction of eye-position signal and retinal signal representing the stimulus, there is a single activation blob in the LIP proprioceptive (PC) eye position map. Projected to visual space, this results in activity at RF (red blob). In the LIP corollary discharge (CD) map, there is an activity line indicating the position of the stimulus, which also leads to activity at RF (covered by red blob). (B) In the saccade task, a stimulus is presented in the future receptive field (FRF) and a saccade is going to be executed from FP to the saccade target (ST). In LIP PC, there is an activity blob at the crossing of the current eye position and the stimulus position, similar to the fixation task. Thus, we see activity at FRF triggered by LIP PC (red blob). Shortly before saccade onset, the CD signal rises, which produces an additional peak of activity along the activity line from the stimulus signal at ST in LIP CD. Additionally, there is a second activity blob resulting from the interaction of the CD signal with the feedback signal from Xh along the yellow dashed diagonal. This leads to a second activity blob at RF (blue blob).
Figure 7
 
Simulation results of the two predictive-remapping tasks without feedback from Xh to lateral intraparietal cortex (LIP). Layout and definitions are identical to Figure 6. The main difference from Figure 6 is the missing second activity blob in the LIP corollary discharge (CD) map in the saccade task, and consequently the missing activity at RF (compare the blue blob in Figure 6B).
Figure 9
 
Simulation results of spatial updating of attention with cued attention for different time steps. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted, as well as the setup and the triggered attention pointers. Both LIP maps interact with each other by summing up all activations along each diagonal toward each neuron in Xh and projecting back along the same diagonal. The yellow dashed line shows the diagonal which has the largest activation. The symbols are identical to Figure 8. The red and blue blobs are the neural activities of both LIP maps projected into retinotopic space. The time (in milliseconds) is aligned to saccade onset. (A) Long before saccade, the attention pointer at the desired attention position (AP) is encoded by the LIP map for the proprioceptive (PC) signal (red blob) and for the corollary-discharge (CD) signal (blue blob, covered by red blob). (B) Shortly before saccade, the CD signal rises and activates neurons in LIP CD, which trigger a second attention pointer at the remapped attention position (RAP, blue blob). (C) Shortly after saccade, both attention pointers are shifted according to the eye movement, as they are retinotopic. This leads to an attention pointer at the lingering attention position (LAP, red blob) and another one at AP (blue blob). (D) Long after saccade, the PC signal updates to the new eye position (saccade target; ST) and the CD signal decays, so there is again only one attention pointer triggered by LIP PC at AP (red blob).
Figure 10
 
Attentional effect at different spatial positions over time for cued attention. The three plots show the attentional effect at the three spatial positions RAP (remapped attention position, left), AP (attention position, middle), and LAP (lingering attention position, right) over time. The red lines mark attention triggered in the lateral intraparietal cortex (LIP) map modulated by the proprioceptive (PC) eye position; the blue lines mark attention triggered by the LIP corollary discharge (CD) map. Note that before and after saccade, we have to read out the firing rate of different neurons to obtain the LIP activity corresponding to the same spatial position. The purple dashed lines mark the overall attention at this position. The gray bar covers the period of the eye movement and separates the activity from the two different neurons. The time (in milliseconds) is aligned to saccade onset.
Figure 12
 
Simulation results of spatial updating of attention with top-down attention for different time steps. The activities of both maps of lateral intraparietal cortex (LIP; XbPC, XbCD) are plotted, as well as the setup and the triggered attention pointers. Additionally, the diagonal on which the attention signal is fed into the LIP maps is plotted with a yellow dashed line. The symbols and definitions are identical to Figure 9. The time (in milliseconds) is aligned to saccade onset. (A) Long before saccade, the attention pointer at the desired attention position (AP) is encoded by the LIP map for the proprioceptive (PC) signal (red blob). (B) Shortly before saccade, the corollary-discharge (CD) signal rises and activates neurons in LIP CD, which trigger a second attention pointer at the remapped attention position (RAP, blue blob). (C) Shortly after saccade, both attention pointers are shifted according to the eye movement, as they are retinotopic. This leads to an attention pointer at the lingering attention position (LAP, red blob) and another one at AP (blue blob). (D) Long after saccade, the PC signal updates to the new eye position (saccade target; ST) and the CD signal decays, so there is again only one attention pointer triggered by LIP PC at AP (red blob).