Abstract
Location information plays a special role in visual processing. This has been demonstrated in a number of ways, including with a recently developed paradigm, the "spatial congruency bias" (Golomb et al., 2014), in which two objects sharing the same location are more likely to be judged as having the same feature/identity. However, this bias has previously been tested only for 2D location; here we adapted the spatial congruency bias paradigm to ask whether 2D and 3D location are equally prioritized in object processing. In Experiments 1 and 2 we examined whether 2D and depth location bias judgments of each other. In Experiments 3-5 we examined whether 2D and depth location bias color judgments. Participants judged whether two sequentially presented objects had the same or different depth location (Experiment 1), 2D location (Experiment 2), or color (Experiments 3-5), while ignoring the irrelevant dimensions. We found that 2D location biased both color and depth judgments: objects at the same 2D location were more likely to be judged as the same color/depth. In contrast, depth biased neither 2D location nor color judgments. Notably, although depth cued by binocular disparity appeared to bias color judgments, this effect may have been driven by low-level disparity cues, as color judgments were also biased by vertical disparity, which does not produce a depth percept. Depth cued by occlusion did not bias color judgments. We propose that depth information does not bias feature judgments, but that eye-dependent 2D location information does. In conclusion, depth information appears to be less fundamental to visual processing than 2D location: object processing is biased by low-level 2D location information before depth location is computed from retinal images.
Meeting abstract presented at VSS 2016