Abstract
The selective processing of visual information through attention is based on bottom-up biases and top-down intentions. Here we sought evidence for a third route: learned rules that are implemented automatically, without cognitive supervision, which we term 'dark attention'.

Methods: Our main goal was to find evidence for selection in the absence of bottom-up demands and top-down intentions. We used the duration of the motion aftereffect (MAE) as a passive assay of selective resource allocation. In our main condition, observers saw two superimposed fields of limited-lifetime isoluminant dots, green and red, moving coherently to the left and right, respectively. Such a stimulus is physically balanced and should yield no net MAE (as tested with a static field of red/green dots) unless the observer selectively attends during adaptation. To train a new selection rule, eight observers completed a three-day training paradigm in which they explicitly attended to the red field (encouraged through a direction-detection task on that field). Before training, MAEs were measured while observers performed an auditory two-back memory task (a distracting task aimed at disrupting top-down selection); as expected, MAEs were minimal (mean: 0.37 s; SE: 0.10). After training, observers were retested in the same condition.

Results: Even though the red and green fields were balanced (sidestepping bottom-up selection), and even though observers were again distracted by the two-back task (preventing top-down selection), the red field was still selected: MAEs were significantly longer (mean: 2 s; SE: 0.47; two-tailed t-test, t = -3.45, p = 0.01). This shows that resources were deployed automatically (while distracted observers viewed physically balanced stimuli) and that training induced a new rule for selection.
Meeting abstract presented at VSS 2014