Previous studies of the binding of color with motion in transparent-motion displays suggest that the accurate feature binding observed under conditions of temporal transparency is instead tied to the generation of persistent surface representations (Clifford, Spehar, & Pearson, 2004; Moradi & Shimojo, 2004; Suzuki & Grabowecky, 2002; Vigano, Maloney, & Clifford, 2014). Here we propose a potential mechanism by which feature conjunctions could be identified through targeted feedback from higher visual areas (Bouvier & Treisman, 2010; Clifford, 2010; Di Lollo, Enns, & Rensink, 2000; Hochstein & Ahissar, 2002; Juan & Walsh, 2003); this mechanism is examined further in the General discussion. In a rapidly alternating stimulus display, the assignment of the correct features to their respective surfaces does not appear to be resolved by the early visual system alone, so the question remains how visual features are first associated with the correct surface representation. We suggest that attentional selection of a single feature (e.g., a rightward-tilted orientation) enhances the responses of the population of neurons selective for that orientation. Included in this population are “double-duty cells” tuned to both color and orientation (Burkhalter & Van Essen, 1986; Gegenfurtner, 2003; Gegenfurtner, Kiper, & Fenstemaker, 1996; Gegenfurtner, Kiper, & Levitt, 1997; Johnson et al., 2008; Leventhal et al., 1995; Navon, 1990; Shipp, Adams, Moutoussis, & Zeki, 2009; Tamura, Sato, Katsuyama, Hata, & Tsumoto, 1996). Feedback that boosts the response of the double-duty cells selective for this orientation will also enhance the response to the associated color, allowing the correct pairing of orientation and color to be decoded from the response profile of the population of double-duty neurons.
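To make the proposed readout concrete, the following is a minimal toy simulation, not a model of the actual physiology. It assumes just four hypothetical double-duty cell classes (two orientations crossed with two colors), a purely illustrative multiplicative attentional gain of 1.5, made-up response values, and a simple pooled-response readout of color; none of these particulars come from the studies cited above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Two orientation classes and two color classes; each "double-duty" cell
# class is jointly tuned to one orientation and one color.
orientations = ["left", "right"]
colors = ["red", "green"]

# Illustrative display: the rightward-tilted surface is red, the
# leftward-tilted surface is green.
true_pairing = {"right": "red", "left": "green"}

def baseline_response(ori, col):
    """Stimulus-driven response of the (ori, col) cell class: strong if that
    conjunction is present on a surface in the display, weak otherwise."""
    return 1.0 if true_pairing[ori] == col else 0.2

def population_responses(attended_ori=None, gain=1.5):
    """Responses of all four cell classes, with an assumed multiplicative
    attentional gain applied to cells preferring the attended orientation."""
    responses = {}
    for ori in orientations:
        for col in colors:
            r = baseline_response(ori, col)
            if ori == attended_ori:
                r *= gain
            responses[(ori, col)] = r + rng.normal(0.0, 0.02)  # small noise
    return responses

def decode_color(responses):
    """Read out color by pooling every cell class that prefers a given color;
    the color bound to the attended orientation should dominate the pool."""
    signal = {c: sum(responses[(o, c)] for o in orientations) for c in colors}
    return max(signal, key=signal.get), signal

# Without attention the pooled color signals are roughly balanced, so the
# pairing is ambiguous; with attention to the rightward tilt, the boosted
# right-preferring cells are dominated by their red-preferring members,
# so "red" wins the readout.
print(decode_color(population_responses(attended_ori=None)))
print(decode_color(population_responses(attended_ori="right")))
```

In this sketch the attentional gain itself is feature-blind with respect to color; the correct color emerges only because the boosted subpopulation responds most strongly to the conjunction actually present, which is the core of the proposed decoding scheme.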