Thomas Töllner, Markus Conci, Hermann J. Müller; Stimulus context controls the speed of attentional spotlight shifts in human visual cortex. Journal of Vision 2013;13(9):1248. doi: https://doi.org/10.1167/13.9.1248.
When humans or other primates search their environment for target objects critical to their current action goals, their reaction times to the intended object are generally faster the more the target differs from the objects in its vicinity. Moreover, if the physical distinctiveness of a given object from its surround exceeds a certain threshold, it may even force the searcher to attend to its location in an automatic, involuntary manner. Here, we provide an electroencephalographic signature of the similarity between a target stimulus and its surrounding context in the human visual cortex. By coupling millisecond-by-millisecond scalp-recorded voltage fluctuations to mental chronometry data during two illusory-figure search tasks, we observed that internal focal-attentional selection times (as indexed by the Posterior Contralateral Negativity) decreased gradually, in correlation with external behavioural response latencies, the more an invariant target was distinguishable from its surround. Additionally, for targets of high, but not intermediate or low, salience, these context-driven effects were further amplified cortically when participants were provided with pre-knowledge of the physical context of the upcoming target stimulus. These results challenge traditional models of visual-selective attention and perceptual decision-making, which envisage focal-attentional selection as mediated by internal templates operating exclusively on target-defining feature coding. Instead, we provide direct neurophysiological evidence for a saliency-based processing architecture in the human visual system, in which the outcome of top-down accessible, pre-attentive feature-contrast computations determines when and where we engage our spotlight of attention.
Meeting abstract presented at VSS 2013