Abstract
Many visual tasks, such as understanding a diagram, require that we process the spatial relationships between objects. Yet almost no work has explored how we make these judgments in a flexible way. One hurdle is the binding problem: because our object recognition network codes object features across large regions of the visual field, there is often uncertainty about the location of any given object. Unless individual objects are isolated by selective attention, identities can become associated with the wrong objects. We will demonstrate examples of this phenomenon in simple displays containing just two colors. When we process a relative spatial relation, the visual system may solve this binding problem by isolating the first object with selective attention, encoding its location into memory, selecting the second object, and then comparing the two locations. In three studies, we used an electrophysiological correlate of selection (the N2pc) to demonstrate that, even with just two simple objects, selection does shift sequentially between them. These shifts (1) occur despite demanding dual tasks that discourage them, (2) occur only on trials where the relation was actually perceived, and (3) do not occur for same-different judgment tasks, which present no binding problem. Together, these results demonstrate that when we see one object to the right of another, our impression of selecting both objects simultaneously may be an illusion. We will describe a potential architecture that recovers spatial relations from the pattern of shifts themselves, and show how this flexible processing system could be extended to non-spatial domains (e.g., size or number judgments).
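The proposed sequential routine can be captured in a minimal sketch. This is not the authors' model; the data structures and function names below are illustrative assumptions, meant only to show how selecting one object, remembering its location, selecting the second, and comparing the two locations yields a relative spatial judgment.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    """A simple display item with a color label and an (x, y) location."""
    color: str
    x: float
    y: float

def judge_relation(display: list[Obj], target: str, reference: str) -> str:
    """Hypothetical sequential routine: is the target left or right of the reference?

    Mirrors the proposed attentional sequence: select the reference object,
    encode its location into memory, shift selection to the target object,
    then compare the newly selected location against the remembered one.
    """
    # Shift 1: select the reference object and store its location in memory.
    ref = next(obj for obj in display if obj.color == reference)
    remembered_x = ref.x

    # Shift 2: select the target object.
    tgt = next(obj for obj in display if obj.color == target)

    # Compare the target's location against the remembered reference location.
    return "right of" if tgt.x > remembered_x else "left of"

# Example: a red patch at x = 3 and a green patch at x = 7.
display = [Obj("red", 3, 0), Obj("green", 7, 0)]
print("green is", judge_relation(display, target="green", reference="red"), "red")
```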