Abstract
In our everyday lives, we must coherently bind the features and identities of objects with their spatial locations. We previously showed that location information biases object identity judgments: when two objects share the same location, subjects are more likely to judge them as the same shape (Golomb & Kupitz, VSS 2013). This spatial "congruency bias" is automatic, insuppressible, and unidirectional (object shape does not influence location judgments). Here we examined (1) whether location is truly unique in driving this bias, and (2) how manipulating multiple object dimensions influences the binding process. Subjects saw two sequentially presented novel "objects" in the periphery and were instructed to make a same/different comparison of either the objects' shape or color (blocked tasks). Importantly, the objects were independently manipulated along three dimensions: shape, color, and location. Replicating our previous results, when the objects shared a location, subjects reported significantly more "same identity" responses in the shape task. Signal detection theory analyses confirmed the bias, with increases in both hits and false alarms when location was the same. Additionally, we observed a similar location bias in the color task. Interestingly, this bias was unique to location: color did not bias shape judgments, nor did shape bias color judgments. Irrelevant shape and color information did influence how quickly subjects responded (priming), but object location was the only irrelevant dimension that actually biased the responses. Together, these findings provide further evidence for a special role of location in object perception, suggesting that object location is uniquely and automatically encoded – and bound – during object recognition. Preliminary fMRI investigations suggest that the lateral occipital complex (LOC), parietal cortex, and medial temporal lobe may reflect neural substrates of this spatial congruency bias.
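The signal detection theory result reported above (increases in both hits and false alarms when location is shared) is the signature of a shift in response criterion rather than a change in sensitivity. A minimal sketch of the standard computation, using hypothetical counts rather than the study's actual data:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' = z(H) - z(F) and criterion
    c = -(z(H) + z(F)) / 2 from raw trial counts."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts illustrating the congruency-bias pattern:
# same-location trials yield more "same" responses overall
# (higher hit rate AND higher false-alarm rate).
same_loc = sdt_measures(80, 20, 40, 60)  # H = .80, FA = .40
diff_loc = sdt_measures(60, 40, 20, 80)  # H = .60, FA = .20
```

With these illustrative numbers, d' is identical in the two conditions while the criterion c is lower (more liberal) on same-location trials, which is exactly how a pure response bias, as opposed to a sensitivity difference, shows up in an SDT analysis.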
Meeting abstract presented at VSS 2014