Open Access
Article | February 2017
New rules for visual selection: Isolating procedural attention
Author Affiliations
  • Mahalakshmi Ramamurthy
    Department of Psychology, University of Massachusetts, Boston, MA, USA
    Maha.ramamurth001@umb.edu
  • Erik Blaser
    Department of Psychology, University of Massachusetts, Boston, MA, USA
Journal of Vision February 2017, Vol.17, 18. doi:10.1167/17.2.18
      © 2017 Association for Research in Vision and Ophthalmology.

Abstract

High performance in well-practiced, everyday tasks—driving, sports, gaming—suggests a kind of procedural attention that can allocate processing resources to behaviorally relevant information in an unsupervised manner. Here we show that training can lead to a new, automatic attentional selection rule that operates in the absence of bottom-up, salience-driven triggers and willful top-down selection. Taking advantage of the fact that attention modulates motion aftereffects, we presented observers with a bivectorial display with overlapping, iso-salient red and green dot fields moving to the right and left, respectively, while distracting them with a demanding auditory two-back memory task. Before training, since the motion vectors canceled each other out, no net motion aftereffect (MAE) was found. However, after 3 days (0.5 hr/day) of training, during which observers practiced selectively attending to the red, rightward field, a significant net MAE was observed—even when top-down selection was again distracted. Further experiments showed that these results were not due to perceptual learning, and that the new rule targeted the motion, and not the color, of the target dot field, and global, not local, motion signals; thus, the new rule was: “select the rightward field.” This study builds on recent work on selection history-driven and reward-driven biases, but uses a novel paradigm in which the allocation of visual processing resources is measured passively, offline, and when the observer's ability to execute top-down selection is defeated.

Introduction
Visual attention is the mechanism that allocates limited processing resources to behaviorally relevant visual information (Treue, 2003; for a review, see Carrasco, 2011). Classically, attentional selection—determining targets for these resources—has been described as either top-down or bottom-up (Corbetta & Shulman, 2002; Jonides, 1981; Theeuwes, 2010; Wolfe, Cave, & Franzel, 1989). Top-down selection can allocate resources to arbitrarily defined targets (Eimer & Kiss, 2008; Ludwig & Gilchrist, 2002, 2003; Posner, Snyder, & Davidson, 1980; Serences & Boynton, 2007; Serences, Liu, & Yantis, 2005; Wolfe, 1994; Yantis & Johnston, 1990), but is a willful process, involving central bottleneck processes (Pashler & Johnston, 1998) such as working memory, decision making, and awareness (Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006; Gazzaley & Nobre, 2012; Koch & Tsuchiya, 2007; O'Regan & Noë, 2001; Posner, 1994; Soto, Heinke, Humphreys, & Blanco, 2005). In contrast, bottom-up selection is automatic and unsupervised, but largely limited to selection heuristics based on physical salience (Connor, Egeth, & Yantis, 2004; Egeth & Yantis, 1997; Itti & Koch, 2001; Moraglia, Maloney, Fekete, & al-Basi, 1989; Nordfang & Bundesen, 2010; Nothdurft, 2002; Remington, Johnston, & Yantis, 1992). However, everyday experience brings many demanding visual tasks (like the flow of a well-practiced game of basketball or reading a bedtime story while lost in thought) in which there is no explicit top-down selection, nor consistently useful bottom-up cues provided by the scene, yet the dynamic allocation of visual processing resources must be occurring in order to support successful performance. This suggests that, analogous to the procedural memory that guides skilled motor behavior, one can acquire new selection rules that are flexible and context-dependent, yet also implemented automatically and without supervision—a kind of procedural attention.
The aim of this study was to show empirical evidence for passive, automatic allocation of visual processing resources, based on learned rules, in the absence of top-down or bottom-up selection. This goes beyond most work on selection-history–based effects (reviewed below). What that work has in common is that repeated selection of certain visual information biases subsequent selection (Shiffrin & Schneider, 1977). In some of this work though, it is possible that this bias (whether gained through repetition, reward, or context) acts as a cue to top-down selection—an implicit substitute for an explicit instruction. Here, we trained observers over three days to allocate resources to one of two iso-salient motion vectors in a bivectorial display. We then used a paradigm where we could measure the allocation of visual processing resources passively (to one of two iso-salient motion vectors), offline (via a subsequent motion aftereffect), without a visual task, and where the observer's ability to execute top-down selection was effectively defeated (by a distracting auditory two-back memory task). Under these conditions, if processing resources still wind up where they have been taught to go, then that is evidence for an unsupervised selection mode like procedural attention. 
Selection outside the top-down, bottom-up framework
Work that challenges the traditional top-down–bottom-up dichotomy is not new. For instance, cross-attribute attention (Sohn, Papathomas, Blaser, & Vidnyánszky, 2004), feature-based attentional spreading (Melcher, Papathomas, & Vidnyánszky, 2005; Saenz, Buracas, & Boynton, 2002; Serences & Boynton, 2007; Treue & Martinez Trujillo, 1999), object-based attention (Blaser, Pylyshyn, & Holcombe, 2000; Duncan, 1984; O'Craven, Downing, & Kanwisher, 1999), and dark attention (Blaser & Shepard, 2009) show that processing resources may get allocated beyond initially selected targets, or without the intention or awareness of the observer. 
An even more direct challenge comes from studies in which attention winds up allocated based on task history, reward, or context (even when that allocation may be at odds with the current goals of the observer). For example, Maljkovic and Nakayama (1994) showed that repeated search for a particular target facilitates performance (priming of pop-out; for reviews, see Awh, Belopolsky, & Theeuwes, 2012; Kristjánsson & Campana, 2010; Wolfe, Butcher, Lee, & Hyle, 2003), but when that target switches roles to become a distractor it interferes, in spite of the observer's intention to ignore it (Maljkovic & Nakayama, 1994). Inversely, Theeuwes and Van der Burg (2011) demonstrated that an irrelevant color singleton will capture attention, interfering with search, but when the color of the singleton remains the same from trial to trial, this interference can be overcome. Beyond repetition, associating a reward (e.g., monetary) with a target that is subsequently put into play as a distractor can lead to unwitting selection of the ex-target, again pitting an acquired bias against the observer's current goals (Anderson, Laurent, & Yantis, 2011; Della Libera & Chelazzi, 2006, 2009; Hickey, Chelazzi, & Theeuwes, 2010). Such biases need not be linked to specific target features. For instance, Wang, Kristjánsson, and Nakayama (2005) used a search array in which the target was defined by the context in which it was presented. Their results showed better performance over repeated presentations, suggesting that attentional selection was based on context–target associations.
Isolating procedural attention
Isolating procedural attention requires that both bottom-up and top-down selection be defeated. Defeating bottom-up selection is a matter of equalizing the salience of competing attentional targets, while defeating top-down selection requires diverting the central bottleneck processes required to implement and maintain it. Our paradigm consisted of two tasks, a visual detection task and an auditory two-back memory task (see General methods). We used a visual stimulus that consisted of two fields of dots (one red, one green) that slid transparently over one another in opposite directions. To keep from triggering bottom-up selection, the fields were calibrated to be balanced in motion energy and tested to ensure they were iso-salient (see Visual stimuli in General methods). Then, top-down selection could either be explicitly engaged, by asking observers to attend to one of the fields (i.e., perform the visual detection task on it), or defeated, by eliminating the visual task and instead instructing observers to perform the two-back task alone (Blaser & Shepard, 2009).
Attentional allocation was then measured with a passive assay that did not require observers to make online judgments. This assay was the duration of the motion aftereffect (MAE), which is the illusory movement of a stationary stimulus that follows adaptation to a moving one (Wohlgemuth, 1911). It has been shown that the MAE can be modulated by attention to the adaptor: greater attention yields larger MAEs (Chaudhuri, 1990; Gogel & Sharkey, 1989; Lankheet & Verstraten, 1995; Shulman, 1993). For our purposes, just as attending to one of two superimposed grating patterns can bias tilt adaptation (Liu, Larsson, & Carrasco, 2007; Spivey & Spirn, 2000), selectively attending to one of two superimposed motion vectors can bias the resulting MAE (Alais & Blake, 1999; Lankheet & Verstraten, 1995; see also Morgan, Schreiber, & Solomon, 2016, who have shown that these effects are not ubiquitous and that more needs to be done to understand the necessary and sufficient conditions that produce them). A net MAE resulting from adaptation to these otherwise balanced stimuli is thus evidence for biased allocation of resources.
This condition allowed us to set up a straightforward before-and-after training paradigm. Before selection training, when observers had no visual task, and top-down selection was defeated, no net MAE should result from these stimuli. During training, observers completed one 0.5-hr session on each of 3 days, in which they were instructed to selectively attend to one field (red, rightward) in an effort to automatize this selection rule. After training, we hypothesized that, even when there was again no visual task and top-down selection was defeated, a net (here, leftward) MAE would result.1
General methods
Observers
A total of eight observers, aged 25–45 years, with normal or corrected-to-normal vision and no known hearing impairments, were run in these experiments. An additional three observers were excluded before testing because they did not experience measurable MAEs. The study protocol was approved by the Institutional Review Board at the University of Massachusetts, Boston, and informed consent was given by all observers.
Apparatus
Visual stimuli were presented on a ViewSonic PF790 CRT monitor (ViewSonic, Walnut, CA) with a resolution of 1024 × 768 at 75 Hz. Auditory stimuli were presented over headphones with an integrated microphone for the collection of verbal responses. Experiments were controlled using custom scripts in MATLAB (version R2013a; MathWorks, Natick, MA), using Psychophysics Toolbox functions (Brainard, 1997; Kleiner, Brainard, Pelli, Ingling, & Murray, 2007; Pelli, 1997). 
Visual stimuli
Main experimental stimuli were bivectorial dot fields, one red and one green, with opposite lateral translational motion. Each of the two fields consisted of 384 dots randomly distributed in a gray-bordered square aperture that subtended 8.5° × 8.5° of visual angle. The dots in each field had 100% coherent motion and were perceived as two dot fields moving, transparently, in opposite directions (this is true for all our bivectorial stimuli except those used for motion energy calibration [see below] and for Experiment 4, in which we also use locally paired dot fields; see those sections for further description). Each dot subtended 0.1° of visual angle and moved at a speed of 3 °/s. Dots had limited lifetime with 2% of the dots replaced and redrawn in a random location every 13.33 ms, resulting in an effective lifetime of approximately 660 ms. Luminance of the red field was fixed at 8 cd/m2 (as measured from observers' viewing distance), and that of the green field was calibrated to isoluminance for each observer (see Motion energy calibration section, below). Stimuli were presented on an otherwise dark screen. Observers were seated in a dark room, 57 cm away from the display, and were instructed to maintain fixation on a central point during testing. 
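The reported effective lifetime follows directly from the per-frame replacement probability: a dot's lifetime is geometrically distributed, with a mean of 1/p frames. A minimal arithmetic check (in Python rather than the authors' MATLAB; variable names are ours):

```python
# Effective dot lifetime for the limited-lifetime dot fields.
# With probability p a dot is replaced on any given frame, so its lifetime
# is geometrically distributed with mean 1/p frames.
frame_ms = 1000 / 75                 # 75 Hz refresh -> one frame every ~13.33 ms
p_replace = 0.02                     # 2% of dots redrawn in a random location per frame
mean_lifetime_frames = 1 / p_replace             # 50 frames on average
effective_lifetime_ms = mean_lifetime_frames * frame_ms
```

This gives roughly 667 ms, consistent with the approximately 660 ms stated above.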
Motion energy calibration
We used a calibration method adapted from Blaser, Papathomas, and Vidnyánszky (2005) that employed a locally paired (Qian, Andersen, & Adelson, 1994) version of the bivectorial visual stimulus. Field and dot parameters were identical to those described above, but in these displays, each dot had a fixed, limited lifetime of 67 ms (five frames). Further, each rightward moving red dot was spatiotemporally paired with a leftward moving green dot, putting them on a collision course (see Figure 7a). After passing through this collision, the pair disappeared and was redrawn in a random location. The “age” of pairs of dots was jittered so that only a subset of dots was redrawn at any given time. Given these spatiotemporal parameters, along with red and green isoluminance, motion transparency is eliminated, and the display is perceived as motionless scintillation. If one of the dot fields has greater luminance, and therefore motion energy, this induces a (subtle, given a modest luminance imbalance) global drift of the compound stimulus in the direction of the more luminous dots. In 10 repeated measurements, observers made online adjustments to the luminance of the green field until global motion was nulled and the display appeared as motionless scintillation. The mean of these 10 values was used as the luminance for the green field for each observer for the remainder of the study. 
Figure 1
 
Schematic representation of events within a trial. A typical trial consisted of a 30-s adaptation phase, during which bivectorial stimuli were presented. During this phase, observers performed a visual deflection discrimination task, an auditory two-back memory task, or both, depending on condition. Task-relevant events could appear in any of the fifteen 2-s long epochs within the adaptation phase. Following adaptation, a physically static test field was presented in order to measure MAE duration.
Figure 2
 
MAE duration results from individual observers, as measured before training, for the passive, top-down defeated, and top-down engaged conditions, respectively. MAE duration from the top-down engaged condition was significantly greater than both the passive and top-down defeated conditions, which were both effectively zero. Error bars reflect SE of the means.
Figure 3
 
(a–d). Results from Experiment 1. Panel a shows visual deflection-detection task performance, stemming from the top-down engaged condition, over days and blocks (within training Days 2–4). Superimposed for ease of comparison are results from the dual-task condition performed on Days 1 and 5. Visual performance was significantly lower in the dual-task condition than when run alone in the top-down engaged condition. Panel b shows two-back memory task performance, stemming from the top-down defeated condition, over days and blocks. Each of training Days 2 through 4 included a top-down defeated block run both before and after the day's training session (see Table 1). Panel c shows MAE duration, stemming from the top-down engaged condition, over days and blocks. Panel d shows MAE duration, stemming from the top-down defeated condition, over days and pre/post block. There was a significantly increasing, nonlinear trend across the eight individual blocks. A logarithmic fit is shown for reference. Error bars within all panels reflect SE of the means.
Figure 4
 
(a, b). Data from Experiment 1 comparing results before and after training. Panel a shows the MAE duration from the top-down defeated condition for each observer both before and after training (i.e., values taken from Day 1 vs. Day 5; see Table 1). There is no meaningful MAE before training, but a significant increase after training. Panel b shows the effect of training on the MAE over all conditions in Experiment 1. After training, MAE duration was significantly greater in all the conditions where top-down selection was compromised (namely, passive, top-down defeated, and dual-task). Error bars in both panels reflect SE of the means.
Figure 5
 
Results from Experiment 2. MAE duration is shown for individual observers, after training. Bars reflect a comparison between the top-down defeated condition (replotted here from Experiment 1) and a version of the top-down defeated condition that used a visual, RSVP presentation of the two-back memory stimuli (as opposed to auditory). MAE duration was significantly lower in the RSVP version. Error bars reflect SE of the means.
Figure 6
 
Results from Experiment 3. Panel a shows MAE duration for individual observers, after training. Bars show results from both the standard top-down defeated condition and a version of the top-down defeated condition in which the direction of bivectorial stimuli was reversed. MAE duration was not significantly different, and both were higher than before-training values in the top-down defeated condition, which were near zero. Panel b shows MAE duration plotted for the top-down defeated condition over days and pre/post block. There was a significantly increasing, nonlinear trend across the seven blocks. A logarithmic fit is shown for reference. Error bars reflect SE of the means.
Figure 7
 
(a, b). Results from Experiment 4. Panel a shows a schematic of locally paired stimuli, in which each dot is placed on a collision course with a nearby partner. When luminance is balanced, global, but not local, motion signals are obliterated, and the display appears as motionless scintillation. Panel b shows results from a motion direction judgment task, using locally paired stimuli, both before and after training. Training did not bias motion judgments. Error bars reflect SE of the means.
Visual task: Deflection detection
Observers were instructed to monitor the red, rightward field during the 30-s adaptation phase of a typical trial. The adaptation phase was divided into fifteen 2-s epochs, each temporally jittered by approximately ±0.5 s. Within each epoch, there was a 50% chance that the rightward dot field would undergo a brief (400 ms) deflection of ±4°. Observers were asked to continuously monitor the field and press a key as soon as a deflection was detected (Figure 1). This yielded hits, correct rejections, false alarms, and misses, from which the percentage of correct responses was calculated. This task was designed to engage top-down selection of the red, rightward moving field. Before the study began, all observers were given a few trials of practice to familiarize them with the apparatus and task.
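Because each epoch either contains a deflection or not, percent correct can be scored epoch by epoch. The sketch below (Python, not the authors' MATLAB code; the function name is ours) illustrates the scoring:

```python
def percent_correct(deflection_present, responded):
    """Score one trial of the deflection-detection task.

    Both arguments are per-epoch booleans (fifteen epochs per 30-s trial):
    whether a deflection occurred in the epoch, and whether the observer
    pressed the key during it. Hits and correct rejections count as correct;
    misses and false alarms count as errors.
    """
    hits = sum(p and r for p, r in zip(deflection_present, responded))
    correct_rejections = sum(not p and not r
                             for p, r in zip(deflection_present, responded))
    return 100.0 * (hits + correct_rejections) / len(deflection_present)
```

For example, a five-epoch toy trial with one miss, `percent_correct([True, False, True, False, True], [True, False, False, False, True])`, scores 80%.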
Memory task: Two-back match (auditory)
A stream of spoken numerals (1–5) was presented in pseudorandom order through headphones. Observers performed a two-back memory task, monitoring the stream and making a verbal response (“match”) whenever the current numeral matched the numeral presented two epochs earlier (as for the number 3 in the sequence, “5… 5… 3… 2… 3…”; Figure 1). There was a 50% probability that a match would occur within each epoch. Percent correct was calculated from the resulting hits, correct rejections, false alarms, and misses. This task was designed to defeat top-down selection of visual stimuli by distracting and occupying necessary central bottleneck processes. Before the study began, all observers were given a few trials of practice to familiarize them with the apparatus and task. 
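The structure of the stream and its two-back scoring can be sketched as follows (Python; an illustrative re-implementation with a per-item match probability, not the authors' code):

```python
import random

def make_stream(n_items, p_match=0.5, digits=(1, 2, 3, 4, 5), seed=None):
    """Pseudorandom digit stream: after the first two items, each item matches
    the item two back with probability p_match (illustrative sketch)."""
    rng = random.Random(seed)
    stream = [rng.choice(digits), rng.choice(digits)]
    for _ in range(n_items - 2):
        if rng.random() < p_match:
            stream.append(stream[-2])                       # forced two-back match
        else:
            stream.append(rng.choice([d for d in digits if d != stream[-2]]))
    return stream

def two_back_targets(stream):
    """Indices at which the correct response is 'match'."""
    return [i for i in range(2, len(stream)) if stream[i] == stream[i - 2]]
```

For the example sequence in the text, `two_back_targets([5, 5, 3, 2, 3])` returns `[4]`: only the final 3 matches the item two back.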
Motion aftereffect (MAE) measurement
Following the 30-s adaptation phase, a typical trial ended with an observer-terminated MAE measurement phase (Figure 1). To facilitate comparison with Blaser and Shepard (2009) and Chaudhuri (1990), we measured MAE duration, that is, the time it takes for any illusory motion percept to reach standstill (Anstis & Ramachandran, 1986; Burke & Wenderoth, 1993; Sekuler & Pantle, 1967). During the MAE measurement phase, observers were presented with red and green fields, identical to the adaptor, but with stationary, infinite-lifetime dots. Observers were instructed to press a key when the field reached perceptual standstill. Observers were instructed to maintain central fixation throughout the test, and reported no substantive lapses; it should be noted, however, that while it is possible for pursuit tracking of one of the adapting fields to induce a MAE, it seems neither necessary (Verstraten, Hooge, Culham, & Van Wezel, 2001) nor sufficient (Morgan et al., 2016). If there was no MAE, observers indicated this with a dedicated key.2 Before running in any experiments, observers performed a baseline single-field MAE block in which the red, rightward moving field was presented alone and observers were asked to perform the visual task. This provided a nonattentional, maximal reference point for the MAE. The eight observers all had robust single-field MAEs (M = 12.33 s, SE = 2.2 s).
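MAE duration is simply the latency from static-test onset to the observer's standstill report. A minimal timing sketch (Python; `wait_for_keypress` is a hypothetical stand-in for the toolbox's keyboard routines, not an actual Psychophysics Toolbox call):

```python
import time

def measure_mae_duration(wait_for_keypress):
    """Observer-terminated MAE measurement: time from onset of the static
    test field until the observer reports perceptual standstill.
    `wait_for_keypress` blocks until a key is pressed and returns its label
    (a stub here)."""
    onset = time.monotonic()
    key = wait_for_keypress()
    if key == "no_mae":            # dedicated key for trials with no aftereffect
        return 0.0
    return time.monotonic() - onset
```

A trial with no aftereffect is logged as a duration of zero, matching how "no MAE" responses enter the duration averages.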
Conditions and predictions
Conditions were defined by the tasks and instructions given to observers. A typical trial consisted of a 30-s adaptation phase, during which observers performed the visual detection task, the two-back memory task, or both, followed by the observer-terminated MAE measurement phase (Figure 1). Unless specified otherwise, both auditory and visual stimuli were present in all conditions.
Passive condition
Observers maintained central fixation and passively viewed the bivectorial visual stimuli without any task. MAE duration was measured after each trial. This served as a test to ensure that the dot fields were indeed iso-salient and balanced in terms of motion energy and that they provided no triggers for bottom-up selection, nor any incentive for top-down selection. Before training, no significant MAEs were expected in this condition. After training, significant MAEs were expected. 
Top-down engaged condition
In this condition, observers were instructed to perform the deflection detection visual task alone, with 100% effort. Further, observers were asked to perform the visual task just with respect to the red, rightward-moving field. This condition was explicitly designed to engage top-down selection of this field. We expected maximal attention-determined MAEs in this condition both before and after training. 
Top-down defeated condition
In this condition, observers were asked to perform the two-back memory task alone, with 100% effort. Before training, we expected no net MAE, since the motion vectors were balanced in terms of motion energy, there was no visual task, and top-down selection was not engaged. In a central prediction of this study, we expected to observe a significant MAE in this condition after training. 
Dual-task condition
Here, observers were asked to perform the visual and the memory task concurrently, with equal effort. The visual and memory-task relevant events were synchronized to co-occur within an epoch. This condition was designed to confirm that these two tasks competed, that is, that the memory task interfered with the performance of the visual task. Both before and after training, we expected MAE duration to be lower than those from the top-down engaged condition, but higher than those from the top-down defeated condition. We also expected an increase in MAEs after training, as compared to before-training values. 
Top-down defeated (RSVP version)
This condition was identical to the standard top-down defeated condition described above, except that the numerals for the two-back memory task were presented as a rapid serial visual presentation (RSVP; Sperling, Budiansky, Spivak, & Johnson, 1971) stream at fixation, instead of an auditory stream. This condition was employed in Experiment 2 to assess whether perceptual learning could account for the results from Experiment 1.
Top-down defeated (reversed-field version)
This condition was identical to the standard top-down defeated condition, except the direction of the motion fields in the bivectorial adaptation stimulus was reversed. Instead of red, rightward and green, leftward, the red field moved leftward and the green field moved rightward. This condition was employed in Experiment 3 to determine if the new selection rule observed in Experiment 1 targeted color or direction of motion, or both. 
Direction judgments with locally paired dot stimuli
This condition used bivectorial, locally paired dot stimuli identical to those employed in our motion energy calibration (see above). Just as with those stimuli, each dot in one field was spatiotemporally paired with an oppositely moving dot in the other, putting them on a collision course (Figure 7a). With isoluminant red and green fields, global motion is eliminated, and locally paired stimuli are perceived as motionless scintillation. On each trial in this condition, the green dot field randomly took on one of five possible luminance levels: two levels above, two below, and one at the level isoluminant with the red field, thereby creating a weak drift in the compound stimulus (in the direction of the higher-luminance field). These values were chosen for each observer, based on the standard deviation of their motion energy calibration values (−4 SD, −2 SD, 0 [isoluminant], +2 SD, and +4 SD, respectively; the luminance of the red dot field was fixed). Presentation of the locally paired stimulus lasted for 1 s, after which observers had to judge whether the display appeared to drift leftward or rightward. This condition was employed in Experiment 4 to determine whether the new selection rule observed in Experiment 1 targeted local or global motion signals. 
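The five probe levels follow directly from each observer's calibration statistics. A sketch with hypothetical nulling settings (the numerical values below are illustrative, not the authors' data):

```python
import statistics

# Ten hypothetical green-field nulling settings (cd/m^2) from the motion
# energy calibration; the red field is fixed at 8 cd/m^2.
settings = [7.6, 8.1, 7.9, 8.3, 7.7, 8.0, 8.2, 7.8, 8.1, 7.9]

iso = statistics.mean(settings)     # green luminance used for the main stimuli
sd = statistics.stdev(settings)

# Five green-field levels for the direction-judgment trials:
# -4 SD, -2 SD, isoluminant, +2 SD, +4 SD.
levels = [iso + k * sd for k in (-4, -2, 0, 2, 4)]
```

Levels above `iso` should drift with the green (rightward in the reversed pairing) field; levels below it, with the red field.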
Schedule
Table 1 provides an overview of the schedule of conditions for the four experiments in this study. Note that the order of the conditions run on the training days is listed separately for each experiment.
Table 1
 
Schedule of conditions run in each experiment. Further details are provided in the respective Methods sections.
Experiment 1
The primary goal of Experiment 1 was to measure the allocation of visual processing resources while top-down control over the allocation was defeated (by distracting observers with an engaging two-back memory task throughout the trial, and having no visual demands or task). Importantly, this measurement was made both before and after observers were trained by having them repeatedly attend to one of two superimposed motion fields, in an effort to proceduralize selection. Before selection training, we did not expect to find any evidence of selective resource allocation while observers were distracted (i.e., no net MAE). After training, however, even while distracted, we expected to observe a net MAE—evidence for an automatic allocation of attentional resources. The rule that we attempted to induce in Experiment 1 is best described as “select red, rightward.” Experiments 2, 3, and 4 helped to refine the characterization of this rule. 
Methods
Observers, stimuli, and tasks
Eight observers, six naïve and two expert, participated in the study. Stimuli and tasks were as described above in General methods. Data were collected in five sessions over five consecutive days (see Table 1). The first and last days were the critical before- and after-training test days, while Days 2–4 were training days designed to induce acquisition of the new rule. 
Results
Before training
Before training, as expected, in the absence of top-down attentional selection, no meaningful MAEs were observed. MAEs in the passive condition were near zero3 (M = 0.33 s, SE = 0.115 s; with a “no MAE” response rate of 25%), as were MAEs in the top-down defeated condition (M = 0.16 s, SE = 0.07 s; with a “no MAE” rate of 50%). However, consistent with previous work showing that attention modulates the MAE (Blaser & Shepard, 2009; Chaudhuri, 1990; Lankheet & Verstraten, 1995), we found significant MAEs in the top-down engaged condition, in which observers were instructed to attend to the red, rightward field (M = 5.74 s, SE = 0.618 s; Figure 2). A Friedman's one-way ANOVA showed a significant difference in MAE duration across the three conditions (χ2 = 12.97, p < 0.001). Subsequent pairwise Friedman's tests (with Dunn's correction for multiple comparisons) revealed that MAEs from the top-down engaged condition were significantly longer than those from the passive (p = 0.017) and top-down defeated (p = 0.001) conditions. 
During training
Selection training consisted of a session of three consecutive blocks of the top-down engaged condition, run on each of Days 2–4 (Table 1). To evaluate whether there was a practice effect over the training days and/or an increase in MAE duration, a two-way repeated measures ANOVA was run. Visual task performance (M = 71.35, SE = 4.3) did not change over days, F(2, 42) = 0.083, p = 0.920, or blocks (i.e., blocks 1, 2, or 3 within each day), F(2, 21) = 0.084, p = 0.920. MAE duration in this condition (M = 5.02 s, SE = 1.05 s) was similarly unchanged over days, F(2, 42) = 0.229, p = 0.796, and blocks, F(2, 21) = 0.0524, p = 0.949 (Figure 3; Days 2 through 4 on panels a and c, respectively). 
In addition to the main, planned comparison of before- and after-training MAE duration in the top-down defeated condition (addressed below), we were also interested in whether there were any appreciable changes within each training day. On Days 2 through 4, a top-down defeated block was run both before and after each day's training session. A two-way repeated measures ANOVA on these data, with day and pre/post block as factors, showed that performance on the two-back memory task did not change over training days, F(2, 28) = 1.43, p = 0.257, or between pre/post blocks within each day, F(1, 14) = 3.47, p = 0.084. MAE duration likewise showed no effect of day, F(2, 28) = 2.95, p = 0.069, or pre/post block, F(1, 14) = 0.01, p = 0.757 (Figure 3; Days 2 through 4 on panels b and d, respectively). 
After training
On Day 5, the conditions that had been run on Day 1 were repeated in order to identify changes resulting from training.4 The central comparison of this experiment was between MAE duration in the top-down defeated condition (again, where there was no visual task and observers performed only the two-back memory task) before and after training. Before training, there was no meaningful net MAE in this condition (M = 0.16 s, SE = 0.07 s), but after training there was a net MAE (M = 1.90 s, SE = 0.47 s). A paired t test showed this increase to be significant, t(7) = 4.38, p = 0.003, ηp2 = 0.733 (Figure 4a). To assess any overall trend, we ran a follow-up analysis that also included the top-down defeated conditions run on training days. A one-way within-subject repeated measures ANOVA (with no assumption of sphericity, and application of the Geisser–Greenhouse correction) on MAE duration over the eight individual top-down defeated blocks (administered over Days 1 through 5) was significant, F(3.48, 24.4) = 4.07, p = 0.0014, as was a nonlinear trend, F(6, 49) = 9.26, p < 0.001. A logarithmic fit to the data reached 2.28 s on the last block (Figure 3d). 
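A logarithmic trend fit of this kind can be sketched as below. This is an illustrative reconstruction, not the authors' analysis code, and the block-wise mean MAE durations in `mae` are made-up placeholders rather than the reported data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eight top-down defeated blocks across Days 1-5; durations are hypothetical.
blocks = np.arange(1, 9)
mae = np.array([0.2, 0.5, 0.9, 1.2, 1.5, 1.7, 1.9, 2.1])  # mean MAE (s), invented

def log_curve(x, a, b):
    """Logarithmic growth: duration = a + b * ln(block)."""
    return a + b * np.log(x)

# Least-squares fit; b > 0 indicates MAE duration growing over blocks.
(a, b), _ = curve_fit(log_curve, blocks, mae)
predicted_last = log_curve(8, a, b)  # value the fitted curve reaches on block 8
```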
This main result finds support in the other before–after comparisons. In the passive condition, MAEs after training showed a similar increase (M = 2.68 s, SE = 1.24 s) over the near-zero values before training (M = 0.35 s, SE = 0.13 s), though this increase did not reach significance, t(6) = 1.89, p = 0.107, ηp2 = 0.374. The same pattern was observed in the dual-task condition. Here we found a net MAE even before training (M = 1.68 s, SE = 0.39 s) that increased after training (M = 3.28 s, SE = 0.87 s), as expected, since observers performed the visual as well as the memory task in this condition. A paired t test showed that the after-training values were significantly higher than the before-training values, t(6) = −2.5, p = 0.047, ηp2 = 0.51. These before–after comparisons are summarized in Figure 4b. 
Discussion
We showed observers a bivectorial motion display with overlapping red and green dot fields moving to the right and left, respectively. Initially, when observers viewed this stimulus passively, without instruction or task, or while distracted by an engrossing two-back memory task (the top-down defeated condition) no net MAE resulted, which is not surprising. The motion vectors were calibrated for each observer to be balanced in motion energy, and observers had no incentive to selectively attend to either of the fields (on the other hand, when we encouraged observers to selectively attend to one of the fields as in the top-down engaged condition, a robust MAE resulted). However, in our central manipulation, after training, during which observers practiced devoting attention to the red, rightward moving field by virtue of performing a deflection detection task on it over several sessions, a net MAE was now observed even when observers were distracted in the top-down defeated condition (Figure 4a). This result was supported by the fact that MAEs were observed in a passive-viewing condition after training (in which there had been no meaningful net MAE beforehand), and by increases in MAEs in our dual-task condition, in which observers split their attention between the visual task and the distracting two-back memory task. Taken together, these results show that, after training, processing resources were being allocated selectively to one of the fields, but that this allocation did not require bottom-up salience triggers or willful top-down selection. Instead, we argue that a new selection rule had been acquired that guided resources to the red, rightward field automatically. 
The dual-task condition further supports this idea. This condition was included in part to confirm that the two-back memory task interfered with the ability to engage in top-down selection of the visual field. A two-way repeated measures ANOVA (on task performance) with instruction (dual- or single-task) and task (visual- or memory-task) as factors5 showed a significant effect of instruction, F(1, 13) = 8.45, p = 0.012. Post hoc comparisons (with Sidak's correction) showed that a drop in performance on the visual task under dual-task demands drove that effect (p = 0.013), which had the desired consequence for the MAE. Before training, when the visual task was performed alone (i.e., in the top-down engaged condition), MAE duration averaged 5.74 s. When observers were asked to incorporate the memory task (i.e., in the dual-task condition), MAEs dropped significantly to 1.68 s, t(6) = 7.41, p < 0.001, ηp2 = 0.901. Put simply, before training, the memory task impaired top-down selection, and this impairment meant resources were not fully allocated to the red, rightward field. It is important to stress that we are not claiming that the memory task interferes with visual attention resources; the resources themselves are intact, it is the ability to allocate them that is impaired. After training, we argue, the impact of this impairment was mitigated, as resource allocation became proceduralized. And indeed, as reported above, the MAE duration in this condition increased significantly, to 3.28 s, after training. 
Over training, we did not find any increase in performance on the visual detection task or on the two-back memory task. From our own impressions, and from debriefing observers, it seemed that asymptotic performance was reached quickly, during practice trials and within the first block of testing. This means that practicing selection of the red, rightward field is sufficient to affect the MAE; changes in the mechanisms underlying motion-direction discrimination (which would presumably boost performance) are not required. We also did not observe any boost in MAE duration in the top-down engaged condition after training. This was not unexpected: any learned, procedural selection should be less than (or, in the limit, equal to) explicit top-down selection, and the top-down engaged condition already maximizes resource allocation, so training provided no further benefit. 
Experiment 2
Experiment 1 provided evidence that training can lead to the acquisition of a new, automatic selection rule. In Experiment 2, we sought to confirm that the resources in play were visual attentional resources and, related to this, that the results could not be adequately explained by an enhancement of, for example, rightward motion signals, as might be expected from perceptual learning. Perceptual learning effects in visual perception vary, but are generally understood as long-term improvements in thresholds due to training that show great specificity to stimulus parameters and presentation (e.g., stimulus orientation, eye of origin, retinal location), and are typically modeled as changes in the tuning or weighting of underlying detectors (Roelfsema, van Ooyen, & Watanabe, 2010; Vogels, 2010; for a review, see Fine & Jacobs, 2002). 
Experiment 2 employed a version of the top-down defeated condition with an RSVP version of the two-back memory task. It has been shown that when attention is withdrawn from an adapting field by an RSVP stream, motion aftereffects are reduced (Blaser & Shepard, 2009; see also Chaudhuri, 1990), because the RSVP task competes for attentional resources that would otherwise go to the adaptor. If the MAE effects reported here are a result of perceptual learning and not attention, then they should be unaffected by the modality of the memory task. If, however, the effect of training on the MAE is due to the allocation of visual resources, as we have hypothesized, then the MAE should be reduced. 
Methods
Experiment 2 used a version of the top-down defeated condition in which the numerals for the two-back memory task were presented as a visual RSVP stream instead of an auditory one. All observers who participated in Experiment 1 ran this as an additional after-training test (a block of 10 trials) after their training within Experiment 1 (see Table 1). All other aspects of the stimuli and tasks were as described in General methods and in Experiment 1
Results
A pairwise t test comparing MAE durations from the after-training top-down defeated condition of Experiment 1 (M = 1.90 s, SE = 0.47 s) with those from the present RSVP counterpart (M = 0.86 s, SE = 0.27 s) showed that the RSVP version's durations were significantly lower, t(7) = 4.03, p = 0.005, ηp2 = 0.699 (Figure 5). 
Discussion
The MAE duration was adversely affected when the two-back memory task was performed in the visual domain, providing confirmation that the selection we observed indeed involves the allocation of visual processing resources; similar manipulations by Blaser & Shepard (2009) and Chaudhuri (1990) both led to average reductions in MAE duration of approximately 50%, comparable to the reduction found here. Put another way, the visual version of the memory task not only interferes with the ability to employ top-down selection (as the auditory version does), but also competes for the resources themselves. 
Experiment 3
The motion field that observers were trained on was defined by both its color (red) and its direction (rightward), so any learned rule could have been based on one or both of these features: select red, select rightward, or both. This experiment was designed to tease these apart. 
Similar to Experiment 1, observers had a 5-day schedule with 3 consecutive days of selection training; however, in this experiment, observers ran a version of the top-down defeated condition after training (Day 5), in which the direction of the fields was reversed (red dot field moved leftward and green dot field moved rightward). This presents a set of competing predictions. If the rule acquired during training targets color (select red), then resources should be allocated to the red adaptor, which is now moving leftward, yielding rightward MAEs. If the rule instead targets direction (select rightward), then the rightward moving adaptor, which is now green, should be selected, producing leftward MAEs. If the selection rule is based on a conjunction of these features, red and rightward, then no target for the selection rule would be found, and MAEs should be near zero. 
Methods
Four observers who had participated in Experiment 1 participated in this experiment. Training was identical to that received in Experiment 1. After training, each observer ran a block (10 trials) of the standard top-down defeated condition as well as the reversed-field version, on Day 5 (see Table 1). All other aspects were as described in General methods and Experiment 1. 
Results
All four observers reported MAEs in the leftward direction, consistent with adaptation to the rightward (green) field (Figure 6a). A Wilcoxon matched-pairs signed-rank comparison between the duration of the MAEs in the top-down defeated condition (M = 3.79 s, SE = 1.14 s) and the reversed-field version (M = 4.89 s, SE = 1.62 s) was not significant (p = 0.25). Additionally, as a follow-up test for overall trends in MAE duration as a function of training, as in Experiment 1, we performed a one-way RMANOVA over the seven top-down defeated blocks (administered over Days 2 through 5). The RMANOVA (with no assumption of sphericity, and application of the Geisser–Greenhouse correction) was significant, F(1.61, 4.83) = 10.9, p = 0.018, as was a nonlinear trend, F(5, 18) = 6.63, p = 0.001. A logarithmic fit to the data reached 3.41 s on the last block (Figure 6b). The same analysis on visual task performance (M = 78.63, SE = 1.6) over the nine top-down engaged blocks (administered over the training days) did not show an effect of block, F(2.38, 7.15) = 1.06, p = 0.407. Similarly, MAE duration over these top-down engaged blocks (M = 6.41 s, SE = 1.2 s) did not show an effect of block, F(1.76, 5.28) = 1.39, p = 0.322. 
Discussion
The results show that the rightward moving dot field was selected, irrespective of color. The magnitude of the MAEs did not differ, even though the targeted field had a new color, suggesting that color played little or no role in selection. The rule that was acquired is best described, then, as “select rightward.” The present results also replicate the main results from Experiment 1: MAE durations in the top-down defeated and reversed-field top-down defeated conditions were 3.79 s and 4.89 s, respectively, even larger than the 1.90 s MAE observed in the top-down defeated condition of Experiment 1. This serves as a useful internal replication of that experiment. As in Experiment 1, visual task performance and MAE duration over the top-down engaged blocks that comprised selection training did not show any significant changes. Again, we take this to mean that the increase seen in the MAE in the top-down defeated condition is not dependent on improvements in the underlying motion detection mechanisms. 
Experiment 4
Experiment 3 indicated that the selection rule is based on motion direction, not color. However, the motion stimuli used in this study contained both local and global motion signals; that is, signals carried by individual dots and by coherent groups of dots, respectively. This raises the question of whether the selection rule targets local or global motion. To address this, we measured whether a legitimate target for attentional resources would still be found even when global motion was eliminated. 
To accomplish this, instead of an MAE measurement as in previous experiments, observers performed a motion direction judgment task using locally paired dot stimuli, both before and after selection training. Neuroimaging studies using locally paired stimuli have shown that the processing of component motion vectors in V1 (a neural locus for local motion processing) is unaffected, but that there is a strong inhibitory effect in MT (human V5/MT+), an area known to integrate local motion information (Simoncelli & Heeger, 1998). 
As discussed above, given isoluminant red and green fields, global motion is eliminated, and locally paired stimuli are perceived as motionless scintillation (even though local motion signals are unperturbed). However, if the “select rightward” rule enhances processing of local motion signals (Gilbert & Li, 2013; Treue & Martinez Trujillo, 1999), then observers should have a bias to see the motion of the locally paired stimulus as rightward. If the selection rule targets global motion, then there should be no bias, as the global motion signal in these locally paired stimuli is degraded. 
Methods
Four observers who participated in Experiment 3 ran in this experiment. Observers ran one block (165 trials, 33 per level) of the locally paired dot direction judgment condition, as additional tests both before (on Day 1) and after (on Day 5) the selection training (Days 2 through 4) for Experiment 3 (see Table 1). As described above in Conditions, observers had to judge whether a brief (1 s) presentation of the locally paired dot field appeared to drift leftward or rightward. If the selection rule targets local motion, then any training-induced enhancement of the rightward dots will increase the proportion of observers' rightward motion judgments, thereby shifting the psychometric function to the right, relative to the before-training curve. A selection rule that targets global motion would not induce such a shift. 
Results
No shift in the psychometric function was observed. Before- and after-training fits were not significantly different from each other, F(2, 6) = 0.647, p = 0.557; that is, a single (cumulative Gaussian) psychometric function satisfactorily fits both data sets (see Figure 7b). 
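The logic of comparing the before- and after-training fits can be sketched as an extra-sum-of-squares F-test between a single shared cumulative Gaussian and two separate ones. This is an illustrative reconstruction under our own assumptions, not the authors' code: the proportion-rightward values are fabricated placeholders, and the x-axis follows the ±2 SD/±4 SD luminance levels described in the Methods.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm, f

# Green luminance relative to isoluminance, in calibration SDs.
x = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
p_before = np.array([0.08, 0.30, 0.52, 0.72, 0.94])  # hypothetical proportions
p_after  = np.array([0.10, 0.28, 0.50, 0.74, 0.92])  # hypothetical proportions

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function (PSE mu, slope 1/sigma)."""
    return norm.cdf(x, loc=mu, scale=sigma)

def rss(x, y, params):
    """Residual sum of squares for a given parameter set."""
    return np.sum((y - cum_gauss(x, *params)) ** 2)

# Separate fits (4 free parameters total) vs. one shared fit (2 parameters).
sep_b, _ = curve_fit(cum_gauss, x, p_before, p0=[0.0, 2.0])
sep_a, _ = curve_fit(cum_gauss, x, p_after, p0=[0.0, 2.0])
shared, _ = curve_fit(cum_gauss, np.tile(x, 2),
                      np.concatenate([p_before, p_after]), p0=[0.0, 2.0])

rss_sep = rss(x, p_before, sep_b) + rss(x, p_after, sep_a)
rss_one = rss(x, p_before, shared) + rss(x, p_after, shared)

# Extra-sum-of-squares F-test: 10 data points, 2 extra parameters.
df1, df2 = 2, 10 - 4
F = ((rss_one - rss_sep) / df1) / (rss_sep / df2)
p_value = 1 - f.cdf(F, df1, df2)  # large p: one shared curve suffices
```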
Discussion
Results are consistent with the selection rule acting at a global level, not a local one. We argue that since there was no consistent, global rightward field, the selection rule failed, and no bias was induced in the locally paired stimuli. The acquired rule is best described then as “select the rightward dot field.” 
General discussion
In this study, observers were shown a bivectorial motion stimulus with iso-salient dot fields, red and green, moving to the right and left, respectively. When observers adapted to this stimulus, without a visual task and while distracted by a demanding two-back memory task, unsurprisingly, no net MAE resulted. However, after observers were trained to select the red, rightward moving field (in three 0.5 hr sessions over three days), net MAEs ranging from 1.90 s in Experiment 1 to 3.79 s and 4.89 s in Experiment 3 were then observed, even when observers had no visual task and top-down selection was defeated by the two-back memory task. As a frame of reference, MAE duration when observers were instructed to explicitly attend to the red, rightward field, and perform a motion discrimination task on it averaged 5–6 s. 
These results show that practice—selection and concomitant resource allocation, with feedback—can create a new, automatic rule for visual attention allocation. Earlier studies have shown that practice enhances target selection and the rejection of irrelevant information (Vidnyánszky & Sohn, 2005). Such practice, especially when coupled with rewards, can lead to lingering selection biases (Shiffrin & Schneider, 1977) even in the face of changing task demands (e.g., asking observers to switch to a new target) or a change in the task itself (Della Libera & Chelazzi, 2009; for reviews, see Awh et al., 2012; Chelazzi, Perlato, Santandrea, & Della Libera, 2013). Our results dovetail nicely with this previous work6, again showing that history (here, our explicit training) can alter the priority of certain visual information, leading to preferential processing. 
The present study pushes these ideas further. First, we showed that such effects are not limited to classic target-detection paradigms, but could be extended to the motion domain, in which effects were seen on the motion aftereffect, and where biased resource allocation was maintained over an appreciable period (i.e., 30 s of motion adaptation). Second, we were able to gain insight into the specificity and locus of the selection rule. During training, our observers were asked to monitor a field of red dots that moved with strong local and global motion signals, superimposed with an oppositely moving field of green dots. After training, resources were now automatically devoted to one field, not based on its color (“select red”) or the spatiotemporally local motion signals carried by individual dots, but instead on the direction of global motion (“select the rightward field”). Lastly, the current study further strengthens the idea that selection can operate independently of top-down intention and cognitive supervision, since, in our critical condition, observers had no visual task, and were instead occupied with a distracting memory task. Nonetheless, visual processing resources managed to get where they had been trained to go (even the next day); the selection rule—where processing resources should go and when—had become proceduralized. 
Acknowledgments
This work was supported by the Joseph P. Healey internal research grant program at University of Massachusetts, Boston, awarded to the authors. 
Commercial relationships: none. 
Corresponding author: Mahalakshmi Ramamurthy. 
Address: Department of Psychology, University of Massachusetts, Boston, MA, USA. 
References
Alais, D.,& Blake, R. (1999). Neural strength of visual attention gauged by motion adaptation. Nature Neuroscience, 2, 1015– 1018.
Anderson, B. A., Laurent, P. A.,& Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108, 10367– 10371.
Anstis, S.,& Ramachandran, V. S. (1986). Entrained path deflection in apparent motion. Vision Research, 26, 1731– 1739.
Awh, E., Belopolsky, A. V.,& Theeuwes, J. (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16, 437– 443.
Blaser, E., Papathomas, T.,& Vidnyánszky, Z. (2005). Binding of motion and colour is early and automatic. The European Journal of Neuroscience, 21, 2040– 2044.
Blaser, E., Pylyshyn, Z. W.,& Holcombe, A. O. (2000). Tracking an object through feature space. Nature, 408 (6809), 196– 199.
Blaser, E.,& Shepard, T. (2009). Maximal motion aftereffects in spite of diverted awareness. Vision Research, 49, 1174– 1181.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433– 436.
Burke, D.,& Wenderoth, P. (1993). Determinants of two-dimensional motion aftereffects induced by simultaneously- and alternately-presented plaid components. Vision Research, 33, 351– 359.
Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51, 1484– 1525.
Chaudhuri, A. (1990). Modulation of the motion aftereffect by selective attention. Nature, 344 (6261), 60– 62.
Chelazzi, L., Perlato, A., Santandrea, E.,& Della Libera, C. (2013). Rewards teach visual selective attention. Vision Research, 85, 58– 72.
Connor, C. E., Egeth, H. E.,& Yantis, S. (2004). Visual attention: Bottom-up versus top-down. Current Biology, 14, R850– R852.
Corbetta, M.,& Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews. Neuroscience, 3 (3), 201– 215.
Dehaene, S., Changeux, J.-P., Naccache, L., Sackur, J.,& Sergent, C. (2006). Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends in Cognitive Sciences, 10, 204– 211.
Della Libera, C.,& Chelazzi, L. (2006). Visual selective attention and the effects of monetary rewards. Psychological Science, 17, 222– 227.
Della Libera, C.,& Chelazzi, L. (2009). Learning to attend and to ignore is a matter of gains and losses. Psychological Science, 20, 778– 784.
Duncan, J. (1984). Selective attention and the organization of visual information. Journal of Experimental Psychology. General, 113, 501– 517.
Egeth, H. E.,& Yantis, S. (1997). Visual attention: Control, representation, and time course. Annual Review of Psychology, 48, 269– 297.
Eimer, M.,& Kiss, M. (2008). Involuntary attentional capture is determined by task set: Evidence from event-related brain potentials. Journal of Cognitive Neuroscience, 20, 1423– 1433.
Fine, I.,& Jacobs, R. A. (2002). Comparing perceptual learning across tasks: A review. Journal of Vision, 2 (2): 5, 190– 203, doi:10.1167/2.2.5. [PubMed] [Article]
Gazzaley, A.,& Nobre, A. C. (2012). Top-down modulation: Bridging selective attention and working memory. Trends in Cognitive Sciences, 16, 129– 135.
Gilbert, D.,& Li, W. (2013). Top-down influences on visual processing. Nature Reviews Neuroscience, 14, 350– 363.
Gogel, W. C.,& Sharkey, T. J. (1989). Measuring attention using induced motion. Perception, 18, 303– 320.
Hickey, C., Chelazzi, L.,& Theeuwes, J. (2010). Reward changes salience in human vision via the anterior cingulate. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 30, 11096– 11103.
Itti, L.,& Koch, C. (2001). Computational modeling of visual attention. Nature Reviews Neuroscience, 2, 194– 203.
Jonides, J. (1981). Voluntary versus automatic control over the mind's eye's movement. In Long J. B.& Baddeley A. D. (Eds.), Attention and Performance IX (pp. 187– 203). Hillsdale, NJ: Erlbaum.
Kleiner, M., Brainard, D., Pelli, D., Ingling, A.,& Murray, R. (2007). What's new in Psychtoolbox-3. Perception, 36 (14), 1– 16. Retrieved from http://www.kyb.mpg.de/publications/attachments/ECVP2007-Kleiner-slides_5490[0].pdf
Koch, C.,& Tsuchiya, N. (2007). Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11 (1), 16– 22.
Kristjánsson, Á.,& Campana, G. (2010). Where perception meets memory: A review of repetition priming in visual search tasks. Attention, Perception, & Psychophysics, 72 (1), 5– 18.
Lankheet, M. J.,& Verstraten, F. A. (1995). Attentional modulation of adaptation to two-component transparent motion. Vision Research, 35, 1401– 1412.
Liu, T., Larsson, J.,& Carrasco, M. (2007). Feature-based attention modulates orientation-selective responses in human visual cortex. Neuron, 55, 313– 323.
Ludwig, C. J. H.,& Gilchrist, I. D. (2002). Stimulus-driven and goal-driven control over visual selection. Journal of Experimental Psychology. Human Perception and Performance, 28, 902– 912.
Ludwig, C. J. H.,& Gilchrist, I. D. (2003). Goal-driven modulation of oculomotor capture. Perception & Psychophysics, 65, 1243– 1251.
Maljkovic, V.,& Nakayama, K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657– 672.
Melcher, D., Papathomas, T. V.,& Vidnyánszky, Z. (2005). Implicit attentional selection of bound visual features. Neuron, 46, 723– 729.
Moraglia, G., Maloney, K. P., Fekete, E. M.,& al-Basi, K. (1989). Visual search along the colour dimension. Canadian Journal of Psychology, 43 (1), 1– 12.
Morgan, M. J., Schreiber, K.,& Solomon, J. A. (2016). Low-level mediation of directionally specific motion aftereffects: Motion perception is not necessary. Attention, Perception, & Psychophysics, 78, 2621– 2632.
Nordfang, M.,& Bundesen, C. (2010). Is initial visual selection completely stimulus-driven? Acta Psychologica, 135, 106– 108; discussion 133–139.
Nothdurft, H. C. (2002). Attention shifts to salient targets. Vision Research, 42, 1287– 1306.
O'Craven, K. M., Downing, P. E.,& Kanwisher, N. (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401 (6753), 584– 587.
O'Regan, J. K.,& Noë, A. (2001). A sensorimotor account of vision and visual consciousness. The Behavioral and Brain Sciences, 24, 939– 973; discussion 973–1031.
Pashler, H.,& Johnston, J. C. (1998). In Pashler H. (Ed.), Attentional limitations in dual-task performance (pp. 155– 189). Hove, UK: Psychology Press/Erlbaum.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437– 442.
Posner, M. I.,& Dehaene, S. (1994). Attentional networks. Trends in Neurosciences, 17, 75– 79.
Posner, M. I., Snyder, C. R.,& Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology, 109, 160– 174.
Qian, N., Andersen, R. A.,& Adelson, E. H. (1994). Transparent motion perception as detection of unbalanced motion signals. I. Psychophysics. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 14, 7357– 7366.
Remington, R. W., Johnston, J. C.,& Yantis, S. (1992). Involuntary attentional capture by abrupt onsets. Perception & Psychophysics, 51, 279– 290.
Roelfsema, P. R., van Ooyen, A.,& Watanabe, T. (2010). Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14, 64– 71.
Saenz, M., Buracas, G. T.,& Boynton, G. M. (2002). Global effects of feature-based attention in human visual cortex. Nature Neuroscience, 5, 631– 632.
Sekuler, R., & Pantle, A. (1967). A model for after-effects of seen movement. Vision Research, 7, 427–439.
Serences, J. T., & Boynton, G. M. (2007). Feature-based attentional modulations in the absence of direct visual stimulation. Neuron, 55, 301–312.
Serences, J. T., Liu, T., & Yantis, S. (2005). Parietal mechanisms of switching and maintaining attention to locations, objects, and features. In L. Itti, G. Rees, & J. K. Tsotsos (Eds.), Neurobiology of attention (pp. 35–41). Cambridge, MA: Academic Press.
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127–190.
Shulman, G. L. (1993). Attentional effects of adaptation of rotary motion in the plane. Perception, 22, 947–961.
Simoncelli, E. P., & Heeger, D. J. (1998). A model of neuronal responses in visual area MT. Vision Research, 38, 743–761.
Sohn, W., Papathomas, T. V., Blaser, E., & Vidnyánszky, Z. (2004). Object-based cross-feature attentional modulation from color to motion. Vision Research, 44, 1437–1443.
Soto, D., Heinke, D., Humphreys, G. W., & Blanco, M. J. (2005). Early, involuntary top-down guidance of attention from working memory. Journal of Experimental Psychology: Human Perception and Performance, 31, 248–261.
Sperling, G., Budiansky, J., Spivak, J. G., & Johnson, M. C. (1971). Extremely rapid visual search: The maximum rate of scanning letters for the presence of a numeral. Science, 174, 307–311.
Spivey, M. J., & Spirn, M. J. (2000). Selective visual attention modulates the direct tilt aftereffect. Perception & Psychophysics, 62, 1525–1533.
Theeuwes, J. (2010). Top-down and bottom-up control of visual selection. Acta Psychologica, 135, 77–99.
Theeuwes, J., & Van der Burg, E. (2011). On the limits of top-down control of visual selection. Attention, Perception, & Psychophysics, 73, 2092–2103.
Treue, S. (2003). Visual attention: The where, what, how, and why of saliency. Current Opinion in Neurobiology, 13, 428–432.
Treue, S., & Martínez Trujillo, J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399(6736), 575–579.
Verstraten, F. A., Fredericksen, R. E., & Van De Grind, W. A. (1994). Movement aftereffect of bi-vectorial transparent motion. Vision Research, 34, 349–358.
Verstraten, F. A., Hooge, I. T., Culham, J., & Van Wezel, R. J. (2001). Systematic eye movements do not account for the perception of motion during attentive tracking. Vision Research, 41, 3505–3511.
Vidnyánszky, Z., Blaser, E., & Papathomas, T. V. (2002). Motion integration during motion aftereffects. Trends in Cognitive Sciences, 6, 157–161.
Vidnyánszky, Z., & Sohn, W. (2005). Learning to suppress task-irrelevant visual stimuli with attention. Vision Research, 45, 677–685.
Vogels, R. (2010). Mechanisms of visual perceptual learning in macaque visual cortex. Topics in Cognitive Science, 2, 239–250.
Wang, D., Kristjansson, A., & Nakayama, K. (2005). Efficient visual search without top-down or bottom-up guidance. Perception & Psychophysics, 67, 239–253.
Wohlgemuth, A. (1911). On the after-effect of seen movement. Cambridge, UK: Cambridge University Press.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe, J. M., Butcher, S. J., Lee, C., & Hyle, M. (2003). Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. Journal of Experimental Psychology: Human Perception and Performance, 29, 483–502.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433.
Yantis, S., & Johnston, J. C. (1990). On the locus of visual selection: Evidence from focused attention tasks. Journal of Experimental Psychology: Human Perception and Performance, 16, 135–149.
Footnotes
1  No net MAE does not necessarily mean that there is no adaptation—in fact, there almost certainly is (Verstraten, Fredericksen, & Van De Grind, 1994; Vidnyánszky, Blaser, & Papathomas, 2002)—but just that it is balanced with these (calibrated) stimuli. Additionally, we assume that the total pool of visual resources brought to bear on these stimuli is constant (but see Experiment 2 for a manipulation of resources), and that selection, whether top-down or procedural, biases their (zero-sum) allocation.
2  The distributions of MAE durations in some conditions were skewed toward near-zero values. If a distribution diverged appreciably from normality, the data were log transformed, with a small constant (half the minimum non-zero MAE) added before transformation to handle zero-duration values. Performance data (proportion correct) were transformed, when necessary, using a logit transform. If a transformation failed to produce normality, nonparametric tests were employed. Analyses are further specified in the relevant Results sections below.
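As a concrete illustration, the two transformations described in this footnote might be sketched as follows (a minimal Python sketch; the function names, and the assumption that proportion-correct scores are already bounded away from 0 and 1 before the logit, are ours and are not specified in the text):

```python
import numpy as np

def log_transform_mae(durations):
    """Log-transform MAE durations (in seconds).

    Adds a small constant—half the minimum non-zero duration—before
    taking the log, so that zero-duration trials remain defined.
    """
    d = np.asarray(durations, dtype=float)
    c = 0.5 * d[d > 0].min()  # half the minimum non-zero MAE
    return np.log(d + c)

def logit_transform(p):
    """Logit-transform proportion-correct scores.

    Assumes p lies strictly between 0 and 1; how boundary values
    were handled is not specified in the text.
    """
    p = np.asarray(p, dtype=float)
    return np.log(p / (1 - p))
```

For example, durations of [0, 1, 3] s yield a constant of 0.5, so the transformed values are log(0.5), log(1.5), and log(3.5); a proportion correct of 0.5 maps to a logit of 0.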
3  Brief times such as these reflect a weak, transient instability in the MAE test stimulus that could be in any direction, or indeterminate.
4  Since the during-training analyses did not reveal any trends in visual task performance or MAE duration in the top-down engaged condition or any trends in memory performance in top-down defeated conditions, these data were not analyzed further.
5  A three-way ANOVA showed no significant effect of the before/after factor, F(1, 1) = 0.658, p = 0.421, so this factor was dropped.
6  Further research is needed to reconcile these models, but our view is that they can likely be collapsed under a single theoretical framework. The proposal that procedural attention is fundamentally dynamic, guiding resources according to learned, context-dependent routines in response to changing task demands, is meant to provide testable hypotheses for follow-up work.
Figure 1
 
Schematic representation of events within a trial. A typical trial consisted of a 30-s adaptation phase, during which bivectorial stimuli were presented. During this phase, observers performed a visual deflection discrimination task, an auditory two-back memory task, or both, depending on condition. Task-relevant events could appear in any of the fifteen 2-s long epochs within the adaptation phase. Following adaptation, a physically static test field was presented in order to measure MAE duration.
Figure 2
 
MAE duration results from individual observers, as measured before training, for the passive, top-down defeated, and top-down engaged conditions, respectively. MAE duration from the top-down engaged condition was significantly greater than both the passive and top-down defeated conditions, which were both effectively zero. Error bars reflect SE of the means.
Figure 3
 
(a–d). Results from Experiment 1. Panel a shows visual deflection-detection task performance, stemming from the top-down engaged condition, over days and blocks (within training Days 2–4). Results from the dual-task condition performed on Days 1 and 5 are superimposed for ease of comparison. Visual performance was significantly lower in the dual-task condition than when the task was run alone in the top-down engaged condition. Panel b shows two-back memory task performance, stemming from the top-down defeated condition, over days and blocks. On each of training Days 2 through 4, a top-down defeated block was run both before and after the day's training session (see Table 1). Panel c shows MAE duration, stemming from the top-down engaged condition, over days and blocks. Panel d shows MAE duration, stemming from the top-down defeated condition, over days and pre/post blocks. There was a significantly increasing, nonlinear trend across the eight individual blocks; a logarithmic fit is shown for reference. Error bars in all panels reflect SE of the means.
Figure 4
 
(a, b). Data from Experiment 1 comparing results before and after training. Panel a shows the MAE duration from the top-down defeated condition for each observer both before and after training (i.e., values taken from Day 1 vs. Day 5; see Table 1). There is no meaningful MAE before training, but a significant increase after training. Panel b shows the effect of training on the MAE over all conditions in Experiment 1. After training, MAE duration was significantly greater in all the conditions where top-down selection was compromised (namely, passive, top-down defeated, and dual-task). Error bars in both panels reflect SE of the means.
Figure 5
 
Results from Experiment 2. MAE duration is shown for individual observers, after training. Bars reflect a comparison between the top-down defeated condition (replotted here from Experiment 1) and a version of the top-down defeated condition that used a visual, RSVP presentation of the two-back memory stimuli (as opposed to auditory). MAE duration was significantly lower in the RSVP version. Error bars reflect SE of the means.
Figure 6
 
Results from Experiment 3. Panel a shows MAE duration for individual observers, after training. Bars show results from both the standard top-down defeated condition and a version of the top-down defeated condition in which the direction of the bivectorial stimuli was reversed. MAE durations were not significantly different, and both were higher than before-training values in the top-down defeated condition, which were near zero. Panel b shows MAE duration plotted for the top-down defeated condition over days and pre/post blocks. There was a significantly increasing, nonlinear trend across the seven blocks; a logarithmic fit is shown for reference. Error bars reflect SE of the means.
Figure 7
 
(a, b). Results from Experiment 4. Panel a shows a schematic of locally paired stimuli, in which each dot is placed on a collision course with a nearby partner. When luminance is balanced, global, but not local, motion signals are obliterated, and the display appears as motionless scintillation. Panel b shows results from a motion direction judgment task, using locally paired stimuli, both before and after training. Training did not bias motion judgments. Error bars reflect SE of the means.
Table 1
 
Schedule of conditions run in each experiment. Further details are provided in the respective Methods sections.