Anna Shafer-Skelton, Colin Kupitz, Adeel Tausif, Julie Golomb; Feature binding and eye movements: Object identity is bound to retinotopic location regardless of stimulus complexity. Journal of Vision 2015;15(12):1062. doi: 10.1167/15.12.1062.
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input into world-centered (spatiotopic) percepts. Intuitively, the most stable way for objects to be incorporated into these percepts is for their features to be bound directly to spatiotopic locations. We might expect low-level features to be bound to retinotopic locations, but what about features of more complex stimuli? Recently, Golomb et al. (2014 JEP:G) showed that object location is not only automatically attended but fundamentally bound to identity representations. The “spatial congruency bias” reveals that even when location is irrelevant to the task, two shapes are more likely to be judged as the same when they are presented in the same location. Importantly, this bias is exclusively driven by location. Here, we tested the coordinate system of the bias across eye movements for two different types of complex stimuli (novel objects in one experiment, faces in another). On each trial, subjects saw two stimuli and had to judge whether they were the same or different identity. The second stimulus could appear in the same screen location as the first (“spatiotopic location”), the same location relative to fixation (“retinotopic location”), or one of two control locations. For both novel objects and faces, participants were more likely to judge identity as the same if stimuli were presented in the same retinotopic location, but not if they were presented in the same spatiotopic location. These results suggest that object identity is still bound to retinotopic location after an eye movement, even for complex stimuli. These findings carry important implications for feature binding and remapping across eye movements.
Meeting abstract presented at VSS 2015