Our attention is not only captured by features that we are aware of (e.g., color, orientation, motion direction) but is also guided by information that we are not aware of. One example is eye-of-origin information. If different images are presented to our two eyes and we are asked to report which image is presented to the left or right eye, we most likely have no clue (Blake & Cormack, 1979; Enoch, Goldmann, & Sunga, 1969; Martens, Blake, Sloane, & Cormack, 1981; Ono & Barbeito, 1982; Steinbach, Howard, & Ono, 1985). We are likewise unable to identify a target item defined solely by eye-of-origin information in a visual search task (Wolfe & Franzel, 1988; Zhaoping, 2008). Because the information from both eyes converges in V1 (Burkhalter & van Essen, 1986; Hubel & Livingstone, 1987; Hubel & Wiesel, 1968; Zeki, 1978), eye-of-origin information is not retained for further processing in areas higher up in the visual hierarchy that feed into our conscious perception of the world. Despite being inaccessible to consciousness and top-down attention (e.g., Kimchi, Trainin, & Gopher, 1995), the eye-of-origin feature plays a part in bottom-up saliency computation. Zhaoping (2008, 2012) reported a series of experiments in which participants searched for a bar oriented differently from the surrounding distractor bars (an orientation singleton). Participants made fewer errors when the orientation singleton was also an ocular singleton (presented to one eye while the distractor bars were presented to the other eye; dichoptic congruent condition) than when the ocular singleton did not coincide with the orientation singleton (dichoptic incongruent condition) or when all stimuli were presented to a single eye (monocular baseline). This finding suggests that unique ocular information directs bottom-up attention and benefits visual search. Eye-movement analysis revealed that the ocular singleton captured gaze and thus slowed visual search in the dichoptic incongruent condition (Zhaoping, 2012). This enhanced bottom-up processing of an ocular item is attributed to iso-ocular suppression in V1: activity in V1 is more readily suppressed by contextual input presented to the same eye than to the other eye (DeAngelis, Freeman, & Ohzawa, 1994; Webb, Dhruv, Solomon, Tailby, & Lennie, 2005). An ocular singleton is thus less suppressed and is prioritized during salience computation.