Abstract
Predictive remapping alerts a neuron when a target will fall into its receptive field after an upcoming saccade. This has consequences for attention, which begins selecting information from the target's remapped location before the eye movement starts, even though that location is not relevant to pre-saccadic processing. Thresholds at the remapped location are lower, and information from the target's remapped and current locations may be integrated. These predictive effects for eye movements are mirrored by predictive effects for object motion in the absence of saccades: motion-based remapping. An object's motion is used to predict its current location, and as a result we sometimes see a target far from its actual location: we see it where it should be now. However, these predictions operate differently for eye movements and for perception, establishing two distinct representations of spatial coordinates. We have begun identifying the cortical areas that carry these predictive position representations and how they may interface with memory and navigation.
Meeting abstract presented at VSS 2018