Because visual attention acts on and through the functional architecture that supports visual perception, it is reasonable to suppose that these two functions, classically considered independent, in fact continuously modulate one another (e.g., Franconeri, Alvarez, & Cavanagh,
2013; McMains & Kastner,
2011; Scalf, Basak, & Beck,
2011; Scalf & Beck,
2010; Scalf, Torralbo, Tapia, & Beck,
2013). Single-cell, behavioral, and neuroimaging evidence indicates that attentional enhancement is indeed modified by the perceptual segregation of the visual scene into objects (e.g., Driver & Baylis,
1989; Duncan,
1994; Egly, Driver, & Rafal,
1994; McMains & Kastner,
2011; Qiu, Sugihara, & von der Heydt,
2007). During figure–ground segmentation, potential objects on opposite sides of a border compete for figural status; the winner of this competition is perceived as the object, while the loser is seen as shapeless ground (e.g., Grossberg,
1994; Kienker, Sejnowski, Hinton, & Schumacher,
1986; Peterson & Skow,
2008). This competitive interaction has direct consequences for attentional enhancement (Qiu et al.,
2007). Directing attention into the receptive fields of V2 neurons increases their firing rates. This amplification interacts with the figural status of the information coded by those neurons, such that attention produces a disproportionately large increase in the signal from V2 cells responding to figural rather than ground regions of the display (Qiu et al.,
2007); that is, the effect is multiplicative rather than merely additive. In humans, the signal enhancement that results from visual items forming a figure reduces the amount of top-down attention required to detect a slight change in luminance (McMains & Kastner,
2011). Furthermore, directing top-down attention to one region of a figure increases the signal evoked by its other unattended regions (Martinez et al.,
2006; Müller & Kleinschmidt,
2003), suggesting that attention may flow automatically across the representation of an object (Chen & Cave,
2006; Hollingworth, Maxcey-Richard, & Vecera,
2012). These data accord with the notion that attentional facilitation is not a spotlight that uniformly enhances the signal of perceptual representations; rather, it interfaces with the neural architecture that forms those percepts and is thus informed by them (McMains & Kastner,
2011; Qiu et al.,
2007; Scalf & Beck,
2010).
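As a purely illustrative formalization of this contrast (the notation is ours and is not drawn from any of the cited studies), attentional enhancement under a uniform "spotlight" view versus a figure-sensitive multiplicative view can be sketched as
\[
R^{\text{additive}}_{\text{att}} = R_{\text{unatt}} + c, \qquad R^{\text{multiplicative}}_{\text{att}} = g \, R_{\text{unatt}}, \quad g > 1,
\]
where \(R_{\text{unatt}}\) denotes a cell's response in the absence of attention. Under the additive scheme, figure-evoked and ground-evoked responses receive the same absolute benefit \(c\); under the multiplicative scheme, the benefit \((g - 1)\,R_{\text{unatt}}\) grows with the strength of the unattended response, so the stronger figure-evoked signal gains disproportionately, consistent with the pattern reported in V2 by Qiu et al. (2007).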