Abstract
When we need to maintain spatial information across an eye movement, it is an object's location in the world, not its location on our retinas, that is generally relevant for behavior. A number of studies have demonstrated that neurons can rapidly remap visual information, sometimes even in anticipation of an eye movement, to preserve spatial stability. However, it has also been demonstrated that for a period of time after each eye movement, a "retinotopic attentional trace" still lingers at the previous retinotopic location, suggesting that remapping actually manifests in two overlapping stages and may not be as fast or efficient as previously thought. If spatial attention is remapped imperfectly, what does this mean for feature and object perception? We have recently demonstrated that around the time of an eye movement, feature perception is distorted in striking ways, such that features from two different locations may be simultaneously bound to the same object, resulting in feature-mixing errors. We have also revealed that another behavioral signature of object-location binding, the "spatial congruency bias", is tied to retinotopic coordinates after a saccade. These results suggest that object-location binding may need to be re-established following each eye movement rather than being automatically remapped. Recent efforts from the lab are focused on linking these perceptual signatures of remapping with model-based neuroimaging, using fMRI multivoxel pattern analyses, inverted encoding models, and EEG steady-state visual evoked potentials to dynamically track both spatial and feature remapping across saccades.
Meeting abstract presented at VSS 2018