Abstract
The effortlessness of vision belies the challenges faced by the visual system. Different attributes of an object, such as its colour, shape, size and location, are often processed independently, sometimes in different cortical areas. The results of these separate analyses must be combined before an object can be seen as a single coherent entity rather than a collection of unrelated attributes. Without visual binding you can be aware of an object's individual attributes, but binding is required to perceive whether or not a given object has a particular combination of them. Visual bindings are typically initiated and updated serially, one object at a time. In contrast, here we show that one type of binding, location-identity binding, can be updated in parallel. The location-identity binding problem is the problem of knowing which objects are where in the visual scene. Using two complementary techniques, the simultaneous-sequential paradigm and systems factorial technology, we examine the computational processing that underlies the updating of these bindings. Although the techniques make different assumptions and rely on different behavioural measures, both converge on the same conclusion. Our findings are surprising, strongly constrain several theories of visual perception and help resolve an apparent conflict in the field.
Meeting abstract presented at VSS 2015