Abstract
A single object presented to one eye among many identical objects presented to the other eye – an ocularity singleton – is salient and attracts visual attention automatically. Saliency from ocularity contrast helps rapidly localize the foreground, especially in 3D visual scenes. However, unlike saliency from other feature dimensions, e.g., color (C) and orientation (O), uniqueness in ocularity (E, eye-of-origin) alone is perceptually invisible, making it difficult to quantify: the reaction time to detect an ocularity singleton – RT(E) – has remained unknown. Quantitative measures could help further investigate the interaction between saliency by ocularity and saliency by other features, and help uncover its neural mechanisms. In the current study, RTs were measured in a search task for a unique bar among many background bars with identical C, O and E features. The target bar was unique in either C or O alone, or unique simultaneously in two or three feature dimensions: CO, CE, EO, or CEO. Importantly, with a quantitative model derived from the V1 Saliency Hypothesis (V1SH), which links saliency with neural activities in primate V1, RT(E) was then robustly calculated from RT(C), RT(O), RT(CO), RT(CE), RT(EO) and RT(CEO). Furthermore, according to V1SH, whether RT(CE) is shorter than the RT of the winner of a race model between RT(E) and RT(C) reflects whether there are V1 neurons tuned conjunctively to both E and C – monocular neurons tuned to color – that contribute to saliency. Analogously, RT(EO) sheds light on monocular neurons tuned to orientation. We show that RT(CE) and RT(EO) are each shorter than the RT of the race winner between the corresponding single-feature RTs, suggesting a contribution to saliency by CE and EO neurons. However, this holds only for search among red, rather than green, background bars, suggesting an intrinsic color asymmetry in the interaction between ocularity and color for saliency.
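The race-model benchmark mentioned above can be sketched in a few lines: draw trial-wise RTs for the two single-feature conditions, take the trial-wise minimum as the race winner's RT (statistical facilitation from two independent processes), and compare its mean to the observed double-feature RT. This is a minimal illustrative sketch, not the paper's analysis; the RT distributions and parameters below are hypothetical placeholders, not the paper's data.

```python
import random

random.seed(0)

def sample_rt(mean_ms, sd_ms):
    # Hypothetical Gaussian RT sampler, floored at 150 ms;
    # parameters are illustrative only.
    return max(150.0, random.gauss(mean_ms, sd_ms))

n = 10_000
# Illustrative single-feature RTs (placeholder values, not measured data):
rt_E = [sample_rt(650, 120) for _ in range(n)]  # ocularity singleton
rt_C = [sample_rt(550, 100) for _ in range(n)]  # color singleton

# Race model: on each trial the faster of the two independent
# single-feature processes determines the response.
rt_race = [min(e, c) for e, c in zip(rt_E, rt_C)]

mean_race = sum(rt_race) / n
mean_E = sum(rt_E) / n
mean_C = sum(rt_C) / n

# The race winner is faster on average than either single feature alone;
# an observed RT(CE) even shorter than mean_race would suggest
# conjunctively tuned CE neurons contributing to saliency.
print(mean_race < mean_C and mean_race < mean_E)
```

Under the race model, a double-feature RT(CE) faster than `mean_race` cannot be explained by independent E and C processes alone, which is the logic behind the inference about conjunctively tuned V1 neurons.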