Support for feature-based effects of attention on high-level processes comes from neurophysiological data suggesting that attentional modulation is stronger in later cortical areas within the visual hierarchy (Maunsell & Cook, 2002). Arguably, the strongest modulation occurs at its highest level, in the inferior temporal (IT) cortex. Chelazzi, Miller, Duncan, and Desimone (1993) presented one preferred and one non-preferred stimulus inside the receptive field of IT neurons; the neurons' responses to the preferred stimulus depended on whether the monkey had been cued to search for it. However, these responses could reflect either feature-based or spatial mechanisms. Similarly, neuroimaging studies in humans have demonstrated effects in higher-tier areas consistent with feature-specific and/or spatial attention (Cant & Goodale, 2007; Corbetta, Miezin, Dobmeyer, Shulman, & Petersen, 1990, 1991; Murray & Wojciulik, 2003; Niemeier, Goltz, Kuchinad, Tweed, & Vilis, 2005). What is more, functional data do not necessarily map directly onto behavior. For example, activity in the fusiform face area varies with attention (O'Craven, Downing, & Kanwisher, 1999; Wojciulik, Kanwisher, & Driver, 1998), yet people need little attention to identify the gender of faces (Reddy, Wilken, & Koch, 2004), and faces are difficult to ignore when presented as distractors (Lavie, Ro, & Russell, 2003). This might indicate that face recognition involves specialized, automatic mechanisms. Indeed, computational models have demonstrated that, under certain conditions, object perception could be achieved through fast feed-forward processes that do not rely on attention (Riesenhuber & Poggio, 1999). In sum, the remaining gaps in our understanding of feature-based attention call for an investigation of its contributions to object perception.