Abstract
To function properly in the world, we need to bind the features and identities of objects to their spatial locations. How automatic is this process, and what is the nature of this location information? Subjects saw two sequentially presented novel "objects" (each presented for 500 ms and masked, separated by a memory delay of approximately 1 sec), and were instructed to make a same/different object identity comparison. The task was designed to be challenging, with only subtle changes in shape when identity differed. Importantly, the second stimulus could appear either in the same location as the first or in a different location. Despite being irrelevant to the task, object location influenced behavioral performance in two ways: when the objects appeared in the same location, subjects (1) had faster reaction times, and (2) were significantly more likely to respond "same identity" (i.e., a location-identity compatibility bias). This compatibility bias was substantial, indicating that subjects were unable to suppress the influence of object location, even when it was maladaptive to the task. We next asked: if location is automatically bound to representations of object identity, does it update across eye movements? Subjects performed the same task, but the fixation cross moved during the memory delay, cuing a saccadic eye movement. Thus, the two stimuli could appear in the same spatiotopic (absolute, screen-centered) location, the same retinotopic (eye-centered) location, or completely different locations. Critically, the location compatibility bias persisted after an eye movement, but only in retinotopic coordinates. In a separate experiment, we also found a location compatibility bias for a same/different color task, with the bias again remaining predominantly in retinotopic coordinates. The fact that location-identity binding occurs automatically and is anchored in native retinotopic space may have important implications for its utility in object recognition and stability.
Meeting abstract presented at VSS 2013