Abstract
To function in the world, our brains must integrate information about "what" objects are with information about "where" they are located. The Spatial Congruency Bias is a recent finding thought to reflect this binding between object and location information (Golomb et al., 2014). The bias demonstrates a strong influence of location information on object identity judgments: two objects are more likely to be judged as the same identity when presented in the same location than in different locations, even though location is irrelevant to the task. Previous studies have found a Spatial Congruency Bias across a range of stimulus complexities: colors, Gabors, shapes, and faces (Shafer-Skelton et al., submitted). In this experiment, we tested the Spatial Congruency Bias for a more complex judgment: comparisons of facial expressions across facial identities. On each trial, a face was presented in the periphery. Next, a second face was presented with either the same or a different expression, in either the same or a different location. Facial identity was always different. Participants indicated whether the facial expressions were the same or different; stimulus location and facial identity were irrelevant to the task. Unlike with the less complex tasks and stimuli, location had no biasing effect on participants' facial expression judgments. To explore whether the lack of bias was due to the higher-level nature of the judgment or to other differences from previous experiments, we conducted a second experiment in which participants compared the expressions of two faces sharing the same identity (instead of across different identities), so that participants could also rely on low-level image differences. In this case, the Spatial Congruency Bias returned, with a magnitude similar to that of previous results. These results suggest that spatial location can influence the processing of objects and faces, but that higher-level judgments may be more invariant to these effects, with interesting implications for object-location binding.
Meeting abstract presented at VSS 2016