Open Access
Article  |   March 2019
Reward expectation facilitates context learning and attentional guidance in visual search
Author Affiliations
  • Nils Bergmann
    Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
    nils.bergmann@uni-marburg.de
  • Dennis Koch
    Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
  • Anna Schubö
    Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
    anna.schuboe@staff.uni-marburg.de
Journal of Vision March 2019, Vol.19, 10. doi:https://doi.org/10.1167/19.3.10
Abstract

Modulations of visual attention due to the expectation of reward have frequently been reported in recent years. Recent studies revealed that reward can modulate the implicit learning of repeated context configurations (e.g., Tseng & Lleras, 2013). We investigated the influence of reward expectations on context learning by associating colors with different reward magnitudes. Participants searched through contexts consisting of spatially distributed L-shaped distractors and a T-shaped target, with half of these objects appearing in a color associated with low, medium, or high reward. Half of these context configurations were repeatedly presented in every experimental block, whereas the other half was generated anew for every trial. Results showed an earlier and more pronounced contextual cueing effect in contexts associated with high reward compared with low reward contexts. This was visible as a faster decline of response times to targets in repeated contexts associated with high reward compared with repeated low reward and novel contexts, and was reflected in the eye movement pattern as a shorter distance of the first fixation to the target location. These results suggest that the expectation of a high reward magnitude facilitates subsequent learning of repeated context configurations. High reward also increases the efficiency of attention guidance toward the target location.

Introduction
Humans are constantly confronted with changing environments containing new and possibly unknown information. To ensure successful adaptation of subsequent behavior, humans have to select relevant information by sampling their visual environment. Due to limited processing capacity, relevant situational features might be missed if priority is given to nonrelevant features (Simons & Levin, 1997). Therefore, visual information has to be prioritized for attentional selection (Driver, 2001; Lavie & Dalton, 2014). 
Reward influencing attention guidance
Associated rewards have been proven powerful in prioritizing visual information processing, as formerly experienced extrinsic (e.g., monetary) rewards were reported to have a huge influence on human selective attention (Della Libera & Chelazzi, 2006, 2009; for reviews, see Anderson, 2016; Chelazzi, Perlato, Santandrea, & Della Libera, 2013; Failing & Theeuwes, 2018). Reward-induced selection biases were even observed to overrule an observer's intention (Feldmann-Wüstefeld, Brandhofer, & Schubö, 2016; Hickey & van Zoest, 2013; Le Pelley, Pearson, Griffiths, & Beesley, 2015; Le Pelley, Seabrooke, Kennedy, Pearson, & Most, 2017) as well as a stimulus' salience (Anderson, 2016; Chelazzi et al., 2013; Hickey, Chelazzi, & Theeuwes, 2010). As Awh and colleagues pointed out, these findings cannot be explained by referring to the classical dichotomy between top-down and bottom-up processes in attention guidance (Awh, Belopolsky, & Theeuwes, 2012). 
Several studies have demonstrated that formerly rewarded target locations and locations that were associated with higher probabilities of reward are prioritized in attentional selection in visual search tasks (Anderson, 2013; Hickey et al., 2010; Hickey, Chelazzi, & Theeuwes, 2011). Hickey et al. (2010), for instance, asked participants to search for a singleton shape target in a visual search task and to ignore an additional singleton distractor presented in a deviating color. A high or low reward was randomly given after each correct trial. In any given pair of successive trials, the target and distractor colors could either be maintained or swapped, yet color was entirely irrelevant for the task. Results showed that reward magnitude in trial n−1 affected search performance in trial n: High reward in trial n−1 led to shorter response times in trial n when colors were repeated compared with color change trials. Conversely, low reward in trial n−1 resulted in longer response times in trial n when colors were repeated compared with color change trials. These results indicate that associating a specific feature with reward can result in immediate prioritization of that feature in subsequent trials. Subsequent experiments showed that also specific locations in visual search tasks can be prioritized by reward outcome (Hickey, Chelazzi, & Theeuwes, 2014): A high reward in trial n−1 not only facilitated the return of attention to the same target location in trial n but also inhibited the deployment of attention to a location that held a salient irrelevant distractor in trial n−1. Thus reward seems to guide attentional selection by priming particular locations of visual stimuli (Hickey et al., 2014). Important to note, this “location priming” is not based on the observers' voluntary or strategic decision, but rather results from the association of a location with a previous reward outcome (Awh et al., 2012). 
Eye movement studies have also shown that the presence of reward-signaling stimuli can bias attention and result in oculomotor capture (e.g., Hickey & van Zoest, 2013), even when the stimulus is not relevant but even counterproductive for the actual task (Failing, Nissens, Pearson, Le Pelley, & Theeuwes, 2015; Le Pelley et al., 2015). The presence of a distractor signaling reward was also found to lead to saccades landing closer to high reward distractors (Bucker, Belopolsky, & Theeuwes, 2014) and to increased saccade latencies to the target (Le Pelley et al., 2015). These results provide further evidence that reward can bias attentional selection to those locations and object features that signal subsequent reward. Such prioritizations in attention guidance can work automatically and against an observer's intention. Depending on the actual goal of the task, these reward influences on attention guidance can have beneficial or counterproductive effects on task performance (cf. Le Pelley et al., 2015). 
Attention guidance in contextual cueing
Not only formerly experienced rewards but also recurrent contextual regularities can result in facilitated processing of visual information (Summerfield & de Lange, 2014). Statistical learning mechanisms can help the observer to detect contextual regularities in visual search and to localize the target (Goujon, Didierjean, & Thorpe, 2015). Studies investigating the influence of spatial contextual regularities often used contextual cueing tasks (Chun & Jiang, 1998) in which participants performed a visual search task searching for one target among a spatial configuration of distractors (Chun, 2000; Chun & Turk-Browne, 2007; Goujon et al., 2015; Le-Hoa Võ & Wolfe, 2015). In each experimental block, half of these configurations were repeatedly presented (“repeated contexts”) and presented randomly intermixed with configurations newly generated for each trial (“novel contexts”). During the experiment, participants became faster in responding to the target when searching through repeated relative to novel contexts, an advantage that was also reflected in accuracy measures in some studies (Feldmann-Wüstefeld & Schubö, 2014; Pollmann, Eštočinová, Sommer, Chelazzi, & Zinke, 2016; Sharifian, Contier, Preuschhof, & Pollmann, 2017). Better search performance for repeated compared with novel contexts typically became apparent after six repetitions and reached an asymptote after 10 to 30 exposures to repeated contexts (Chun & Jiang, 1998, 1999, 2003; Feldmann-Wüstefeld & Schubö, 2014; Olson & Chun, 2001; van Asselen & Castelo-Branco, 2009). The effect seems to be relatively stable in time, as differences between context types were still observed after one week (Jiang, Song, & Rigas, 2005; Zellin, Mühlenen, Müller, & Conci, 2014). 
One prominent explanation of the contextual cueing effect claims that context knowledge that is acquired during context repetition facilitates attention guidance (Goujon et al., 2015; Harris & Remington, 2017). Accordingly, repeated context configurations were considered to function as cues that guide attention toward the expected target location (Chun & Jiang, 1998). In line with this, repeated contexts were observed to be associated with an increased N2pc component in electroencephalography studies (Schankin, Hagemann, & Schubö, 2011; Schankin & Schubö, 2009) suggesting a more pronounced deployment of visual selective attention (Eimer, 2014; Luck & Hillyard, 1994; see also Tan & Wyble, 2015). This is also supported by empirical work applying eye tracking, which has demonstrated that the number of fixations decreases and scan paths become more direct in repeated contexts (Manginelli & Pollmann, 2009; Peterson & Kramer, 2001; Tseng & Li, 2004; Zhao et al., 2012). All these findings support the notion that attention is guided more efficiently to the target in repeated than in novel contexts. 
Reward modulating contextual cueing
There is evidence for an interaction of reward and contextual cueing, i.e., reward accelerating context learning in contextual cueing paradigms. Tseng and Lleras (2013) examined whether reward had a direct impact on configuration learning in contextual cueing. They associated three outcome conditions (reward, loss, or no outcome) with a subset of repeated and novel contexts. Participants had to collect points that were awarded for correct responses and were told that a particular amount of points had to be reached to complete the experiment. Results showed a faster development of the search time advantage for rewarded versus nonrewarded repeated displays, while the size of the contextual cueing effect was not affected. Moreover, consistent reward associations led to faster learning compared with variable associations, indicating that the valence of context-outcome associations had an impact on the consolidation of context information into memory. 
Also relative reward magnitudes were reported to influence context learning. In a functional magnetic resonance imaging study, Pollmann et al. (2016) consistently associated individual contexts with either high or low monetary reward feedback. Participants worked through two separate contextual cueing sessions with reward being absent in the second session. Their results replicated Tseng and Lleras' finding of accelerated learning of repeated contexts when these were associated with reward. Interestingly, however, while repeated high-reward distractor configurations elicited a strong search advantage and were searched more efficiently also in the absence of reward, no such advantage was observed for low reward configurations. The authors suggested that the presence of two different reward magnitudes hindered learning in low reward trials and instead resulted in preferential allocation of limited resources to context learning in high reward contexts. 
Although these studies have demonstrated the influence of reward on contextual cueing, it is still unclear to what extent attentional mechanisms are involved. Tseng and Lleras (2013) suggested that observers learned both an association between a context and the position of the target and an association between context and reward magnitude. They argued that reward feedback resulted in an increase in arousal, which subsequently strengthened the consolidation of context learning into memory. As contexts encoded at higher arousal were easier to retrieve, target detection was faster in future encounters of the same context. 
Schlagbauer, Geyer, Müller, and Zehetleitner (2014), however, suggested that attentional weighting of individual target locations accounted for the observed acceleration of context learning. In their replication of the study of Tseng and Lleras (2013), they disentangled the effect of reward on context configuration learning and on target location learning. The authors presented repeated and newly generated distractor configurations associated with either a low or high reward magnitude. Importantly, they used separate target locations for repeated and novel contexts, and for high and low reward trials to assess whether reward influenced context learning and target location learning separately. With this design, high and low reward magnitude was associated with different target locations in novel contexts, and with both different target locations and context configurations in repeated contexts. As a result, they found reward effects also in novel contexts. These were actually larger than those observed in repeated context configurations. The authors concluded that observers, rather than learning an association of the repeated context and the reward magnitude, learned an association of the target location and the reward magnitude. They suggested that this association was learned in novel contexts, where the target location repeated, but also in repeated contexts, where both target location and context configuration repeated. The authors concluded that reward facilitated target location learning due to attentional weighting of those individual target locations that were associated with high reward. Accordingly, target locations which were followed by high reward feedback were preferably selected in the following trials (cf. Hickey et al., 2014), because increased attentional weights facilitated attention guidance toward these locations in future encounters—irrespective of repeated contextual regularities. 
Also Sharifian et al. (2017) suggested that reward is associated with the target location in novel contexts. In contrast to Schlagbauer et al. (2014), they argued that in repeated contexts, when target location and context configuration repeated, the context configuration rather than the target location was associated with reward, resulting in the facilitation of context learning. The authors hypothesized that initially, both target location and repeated context configuration compete for an association in repeated contexts but that after a few repetitions, the context “wins” the competition against the target location. To test their hypothesis, the authors consistently associated novel and repeated contexts with either a low or a high reward. In contrast to Schlagbauer et al. (2014) they used the same target locations for novel and repeated contexts, but reward magnitude was consistently associated with a target location in only 50% of the trials. In the other trials, reward magnitude varied dependent on context type: In trials with variable reward magnitude, a target location was consistently tied to, e.g., high reward in novel contexts and low reward in repeated contexts. With this design, the authors found that reward facilitated context learning when the target location was consistently associated with reward. However, when the same target location was paired with high reward in novel and low reward in repeated contexts, they observed that context learning was reduced in the first blocks of the experiment. The authors concluded that this was resulting from the competition between context learning and target location learning. They interpreted that a high reward was associated with the target location in novel contexts and that this association interfered with the association of a repeated context and the same location. 
They suggested that this interference supported their hypothesis that the target location was associated with reward in novel contexts whereas in repeated contexts target location and context configuration initially competed for an association. 
From the aforementioned it seems obvious that reward facilitates task performance in contextual cueing tasks either by leading to prioritized processing of associated repeated context configurations or by increasing the weight at associated target locations, or by both. In all aforementioned studies, however, reward feedback was associated with different context configurations that were generated by a combination of distractor orientations and distractor locations. In the present experiment, we used a different approach. Rather than associating reward magnitude with particular context configurations (i.e., a combination of distractor orientations and distractor locations), we used an additional response-irrelevant context feature, namely color, to signal reward magnitude. This reward-signaling color was available in both novel and repeated contexts with display onset. As outlined already, studies examining reward-driven attention capture often used particular stimulus features to signal subsequent reward magnitude (e.g., Anderson, Laurent, & Yantis, 2011; Hickey & van Zoest, 2013). 
Rationale of the present study
In the present study, we used a salient yet response-irrelevant context feature (color) to signal the reward magnitude that could be received in each trial. Although the context configuration per se is not response-relevant in contextual cueing because participants have to respond to the orientation of the target letter T that varies randomly in each trial, the context configuration shares some features with the target, as all context elements (the letters L) are composed of horizontal and vertical lines, as is the target letter T. Since participants are instructed to report the orientation of the target, one might argue that line orientation as a feature is response-relevant. Color, as an additional context feature, is response irrelevant in this task. We associated reward to color rather than different context feature configurations, and we associated reward magnitude with the same color in both repeated and novel context configurations. We assumed that once the color-reward association had been established, participants could predict the expected reward magnitude directly with display onset without having to process the context configuration. 
Former studies have coupled reward to particular context configurations or target locations; hence, participants had to process the context configurations to some extent to predict reward magnitude. Contrary to Schlagbauer et al. (2014), we used the same target locations for novel and repeated contexts and for all reward magnitudes. Target location therefore neither predicted the reward magnitude nor context novelty. This also differed from the study of Sharifian et al. (2017), as they shared target locations across novel and repeated contexts but associated locations with particular reward magnitudes in half of the trials. Since we associated reward to colored context items, we could use the same target locations in all experimental conditions. 
Participants performed a standard contextual cueing task with reward feedback given after every trial. Half of the search display items were presented in one of three colors. Colors were associated with different reward magnitudes (low, medium, and high) in both repeated and novel contexts. As color was fully predictive of reward magnitude, we hypothesized that participants would learn to associate the color with the expected reward magnitude. 
In contrast to previous work, reward magnitude could be predicted directly from the search display in both novel and repeated contexts, but it could not lead to differences in target location cueing. This approach allowed us to investigate to what extent reward contributes to contextual cueing, independent of location probability cueing and in addition to context configuration learning. 
Reward learning might affect search performance in at least three ways in our task. First, reward learning might have a general boosting effect on search performance, resulting in a performance increase (faster response times) in high reward compared with medium and low reward trials that should be observed in both novel and repeated context configurations. Second, reward learning might lead to prioritized encoding of contextual configuration information of displays containing the reward-signaling color. This should manifest in faster response times for repeated compared with novel contexts that should be more pronounced in high reward compared with medium or low reward contexts. Finally, reward learning might boost contextual cueing by leading to more efficient attention guidance to the location of the target. More efficient attention guidance to the target should be observed when comparing the eye movement pattern in repeated and novel contexts associated with high, medium, and low reward: Initial saccades should land closer to the target location in repeated compared with novel contexts (e.g., Tseng & Li, 2004; Zhao et al., 2012) and this difference should be most pronounced in contexts associated with high reward. 
Method
Participants
Twenty volunteers (14 female, six male), naïve to the paradigm and objective, took part in the experiment. Participants were aged 19–34 years (M = 23.7, SD = 3.92), had normal visual acuity, and showed no signs of visual achromatopsia (both tested with an Oculus Binoptometer 3; OCULUS Optikgeräte, Wetzlar, Germany). Participation was remunerated with payment or course credit. The experiment was conducted with the written understanding and consent of each participant in accordance with the ethical standards of the Declaration of Helsinki and was approved by the local Ethics Committee (Faculty of Psychology, Philipps-University Marburg). 
Apparatus
Participants were seated in a comfortable chair in a dimly lit and sound-attenuated room, responding with buttons of a gamepad (Microsoft Sidewinder USB; Microsoft, Redmond, WA) held in their hands. Participants placed their heads on a chinrest facing the center of the screen. Stimuli were presented on an LCD-TN screen (Samsung SyncMaster 2233RZ 22-in., 1680 × 1050 pixels; Samsung, Seoul, ROK) set to a refresh rate of 100 Hz. The screen was placed 100 cm in front of the participants. Eye movements were recorded with an EyeLink 1000 Plus desktop-mounted eye tracker (SR Research Ltd., Ottawa, Canada) with a spatial resolution of 0.01° at a sampling rate of 1000 Hz. The device was calibrated using the EyeLink 13-point calibration procedure. A Windows 7 PC (iTMediaConsult, Züsch, Germany) running E-Prime Professional (Version 2.0.10.356; Psychology Software Tools, Sharpsburg, PA) controlled response collection and stimulus presentation. 
Stimuli
Search context displays always consisted of 16 items: 15 L-shaped distractors and one T-shaped target, distributed on an imaginary 10 × 7 matrix (24.4° × 15.5°). Each L-shaped item was rotated 0°, 90°, 180°, or 270°. T-shaped items were tilted left- or rightward. The size of both item types was 1.10° × 1.10°, with a minimum distance of 0.68° between two objects. Targets were presented at four locations, one in each quadrant of the search display, at an eccentricity of 6.18° from screen center and with a two-cell distance to the grid's outer edges. Distractors were placed at eight cells per hemifield (seven if the target was presented on the same side), chosen randomly within the matrix. Every context contained eight gray (RGB 102, 102, 102; 37.07 cd/m2) and eight uniformly colored items presented on a dark gray background (RGB 60, 60, 60; 12.15 cd/m2). The target was colored in 50% of all trials; in the other half it was presented in gray. Colored items could be green (RGB 0, 128, 21; 36.48 cd/m2), orange (RGB 143, 95, 0; 36.90 cd/m2), or purple (RGB 170, 0, 217; 36.81 cd/m2). All colors were isoluminant to the gray items. 
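The display generation just described can be sketched as follows. This is a minimal illustration, not the authors' code: the concrete target-cell coordinates, the dictionary format, and the function name `generate_context` are assumptions; only the grid size, item counts, and the per-hemifield placement rule come from the text.

```python
import random

GRID_COLS, GRID_ROWS = 10, 7  # imaginary placement matrix
# One target cell per quadrant, two cells from the outer edges
# (illustrative cell coordinates, columns 0-9 and rows 0-6).
TARGET_CELLS = [(2, 2), (7, 2), (2, 4), (7, 4)]

def generate_context(target_cell, rng=random):
    """Place one T-shaped target and 15 L-shaped distractors.

    Distractors occupy eight cells per hemifield (seven in the
    hemifield containing the target), chosen at random.
    """
    left = [(c, r) for c in range(5) for r in range(GRID_ROWS)]
    right = [(c, r) for c in range(5, GRID_COLS) for r in range(GRID_ROWS)]
    for half in (left, right):
        if target_cell in half:
            half.remove(target_cell)
    n_left = 7 if target_cell[0] < 5 else 8
    n_right = 15 - n_left
    cells = rng.sample(left, n_left) + rng.sample(right, n_right)
    items = [{"cell": target_cell, "shape": "T",
              "tilt": rng.choice(["left", "right"])}]
    items += [{"cell": c, "shape": "L",
               "rotation": rng.choice([0, 90, 180, 270])} for c in cells]
    # Half of the 16 items carry the reward-signaling color, half are gray,
    # so the target itself is colored in 50% of the generated displays.
    colored = set(rng.sample(range(16), 8))
    for i, item in enumerate(items):
        item["color"] = "signal" if i in colored else "gray"
    return items
```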
Procedure
Contextual cueing task
Trials started with a central fixation dot (Thaler, Schütz, Goodale, & Gegenfurtner, 2013) surrounded by a thin line. As soon as participants fixated an area of 1.4° around this dot for at least 350 ms, the thin line disappeared and the screen was replaced by the search display after 400 ms. Participants were instructed to search for the T-shaped target and to correctly report its orientation by pressing a left or right button on the gamepad. The search display was presented until participants responded manually or was replaced after 1,000 ms by a blank screen presented for 600 ms. As soon as a response was given, a feedback screen showed point feedback at screen center for 600 ms. Correct responses were rewarded with “+1,” “+5,” or “+10” points, depending on the color presented in the search context. Color and reward magnitude associations were constant for each individual during the experiment but were balanced across participants. Incorrect responses and responses slower than 1,600 ms were not rewarded but followed by “+0” feedback. Participants were not explicitly informed about the color and reward magnitude association but were told that they would be rewarded for correct responses in every trial. Points were translated into monetary reward (1 EUR for 1,000 points, max. 6.14 EUR) at the end of the experiment. Participants received the monetary reward in addition to the reimbursement for participation. Trial procedure and search display are depicted in Figure 1. 
Figure 1
 
Trial procedure and exemplary search display. Participants were instructed to fixate the fixation dot to avoid eye movements before the search display was presented. The search display was shown until response or was replaced after 1,000 ms by a blank screen. Participants searched for a T-shaped target among L-shaped distractors and reported the target's orientation by button press. After a response was given, a feedback screen presented point feedback. The amount of points depended on the color presented in the search display. Color-reward associations were balanced across participants. Correct answers were rewarded; only incorrect responses were followed by no reward (“+0”).
Experimental procedure
The experiment consisted of two sessions. Each session contained 12 blocks of 48 trials, resulting in 1,152 trials in total. For each participant, 24 repeated search contexts were generated individually. These contexts appeared repeatedly in each block of each session, randomly intermixed with 24 novel context configurations generated anew for each trial. Both configuration types were generated separately for contexts containing a colored or a gray target. Contexts were created for all combinations of the four target locations and three reward magnitudes. The same target locations were used in novel and repeated contexts and in contexts associated with different reward magnitudes (cf. Figure 2). The assignment of reward magnitudes to colors was randomized and balanced across participants. The orientation of the T-shaped target was determined randomly in each trial, ensuring that repeated context configurations predicted the target location but not the target orientation, i.e., the correct manual response. 
Figure 2
 
Target locations and exemplary search displays in novel and repeated contexts associated with different reward magnitudes. The same target locations (indicated by blue circles) were used in repeated and novel contexts and in contexts associated with low, medium, and high reward. Color-reward associations were balanced across participants.
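The factorial structure of one block follows from the numbers above: crossing context type, the four target locations, the three reward magnitudes, and target color (gray vs. colored) yields exactly the 48 trials per block. A short sketch, with illustrative labels:

```python
from itertools import product

# One block of 48 trials: context type x target location x reward
# magnitude x target color. The 24 "repeated" cells reuse the same 24
# stored configurations in every block; the 24 "novel" cells are
# generated anew on every trial.
conditions = list(product(
    ["repeated", "novel"],       # context type
    [1, 2, 3, 4],                # target location (one per quadrant)
    ["low", "medium", "high"],   # color-signaled reward magnitude
    ["gray", "colored"],         # target color
))
```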
At the beginning of the first session, participants performed 48 practice trials consisting of only novel contexts without implementation of reward. When participants reached a response accuracy of at least 65%, they continued with the experimental task. Performance feedback consisting of mean response accuracy, response times, and total amount of points achieved was provided after each block, followed by an obligatory pause of at least 10 seconds between blocks. After session 1, participants returned within 3 days for session 2. No additional practice trials were performed in this session. A recognition task was performed at the end of session 2, followed by a follow-up survey investigating individual search strategies and recognized experimental regularities. 
Recognition task
In the recognition task conducted at the end of the second session, 48 trials consisting of 24 repeated contexts were randomly intermingled with 24 novel contexts. Participants were informed that some contexts were repeated over time and asked to decide for each context whether it had been shown before. The recognition task had no time restriction and participants were asked to decide intuitively. 
Data analysis
Response times and error rates
Reaction times (RT) and error rates were analyzed separately. Trials with incorrect responses and trials with exceedingly short or long RT (±2 SD from mean RT, calculated separately for each participant and block) were removed from the RT analysis (M = 17.4%, SD = 4.22). Hierarchical linear mixed models (HLMs; e.g., Hox, 2002; Raudenbush & Bryk, 2002; Snijders & Bosker, 1999) were applied to investigate the influence of reward magnitude on the reduction of response times in repeated relative to novel contexts. In contrast to commonly used analyses of variance (ANOVA), HLMs allow analyzing context learning as a gradually developing reduction in RTs without reducing the data by aggregating blocks into epochs. Using HLMs made it possible to include every single trial based on the experimental factors and to control for the dependent data structure. As participants took part on two days, sessions were modeled on the second and participants on the first level of the analysis. As we expected participants to show interindividually varying levels of RTs, we included random intercepts and slopes for each participant and session. Fixed effects, which might be compared with the within-subject factors of ANOVA analyses, included the effects of block (0–23, coded as block−1 for better interpretation), context type (novel vs. repeated contexts), medium reward magnitude (low vs. medium), high reward magnitude (low vs. high), and experimental session (1 vs. 2). Block was included as the time variable, as the HLM analyzed the decline of response times over the course of the experiment. The two-way interaction of block and context type described the emerging differences in RTs between novel and repeated contexts. The three-way interactions block × context type × medium reward, block × context type × high reward, and block × context type × session were also included in the HLM. 
These interactions represented differences in context learning between the reward magnitudes during the experiment, and differences in context learning between the first and second session. Model parameters were estimated using the maximum likelihood method. Data were evaluated using IBM SPSS Statistics 24 (IBM, Armonk, NY). 
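The authors fitted these models in SPSS. As a rough illustration of the same modeling logic (±2 SD trimming per participant and block, treatment-coded reward contrasts with low reward as the reference, random intercepts and block slopes per participant), a sketch in Python with statsmodels on synthetic data might look as follows. All column names and data-generating values are hypothetical, and the session level of the published model is omitted for brevity; this is not the authors' exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: RT declines over blocks, slightly faster in
# repeated contexts, and fastest in repeated high-reward contexts.
rng = np.random.default_rng(1)
rows = []
for subj in range(8):
    base = 680 + rng.normal(0, 20)              # participant-specific intercept
    for block in range(12):
        for _ in range(10):
            repeated = int(rng.integers(0, 2))
            reward = rng.choice(["low", "medium", "high"])
            rt = base - 3.0 * block - 1.0 * block * repeated
            if reward == "high" and repeated:
                rt -= 1.0 * block               # extra contextual-cueing gain
            rows.append(dict(subject=subj, block=block, repeated=repeated,
                             reward=reward, rt=rt + rng.normal(0, 30)))
df = pd.DataFrame(rows)

# RT trimming: drop trials beyond +/- 2 SD of the participant-by-block mean
g = df.groupby(["subject", "block"])["rt"]
mu, sd = g.transform("mean"), g.transform("std")
df = df[(df["rt"] > mu - 2 * sd) & (df["rt"] < mu + 2 * sd)]

# Mixed model: fixed effects for block, context type, and reward contrasts
# (low reward as reference); random intercept and block slope per participant.
model = smf.mixedlm(
    "rt ~ block * repeated * C(reward, Treatment('low'))",
    df, groups=df["subject"], re_formula="~block")
fit = model.fit()

# The block:repeated term corresponds to the contextual cueing slope; its
# three-way interactions with the reward contrasts test whether that slope
# differs for medium/high relative to low reward.
print(fit.params["block:repeated"])
```

The treatment coding mirrors the two planned contrasts in the paper (low vs. medium, low vs. high) rather than a single three-level omnibus factor.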
Additionally, the influence of received reward magnitudes in trial n−1 on task performance in trial n was analyzed using similar HLMs. Within these models, reward magnitude in trial n−1 was applied for predicting performance measures in trial n. 
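Such a lagged analysis requires a reward predictor shifted by one trial within each participant's trial sequence. A minimal pandas sketch (column names and values are hypothetical):

```python
import pandas as pd

# Toy trial sequences for two participants
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "reward":      ["low", "high", "medium", "high", "low", "high"],
    "rt":          [640, 655, 630, 620, 645, 615],
})

# Reward received on trial n-1, shifted within each participant so that the
# first trial of a participant carries no value (NaN) from another observer.
df["reward_prev"] = df.groupby("participant")["reward"].shift(1)
print(df[["participant", "reward", "reward_prev"]])
```

`reward_prev` would then replace the current-trial reward contrasts as the predictor in the HLM, with first trials (NaN) dropped.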
The analysis was averaged across contexts in which the target was colored or gray. Since both types were presented equally often, participants could benefit neither from searching only the gray nor only the colored items. One may, however, assume that participants prioritized colored over gray targets, since the color was associated with reward magnitude. Reward would then influence task performance more strongly in contexts with colored than with gray targets. To examine whether reward had a differential influence on contextual cueing in displays with colored and displays with gray targets, we divided the data into two sets (displays with colored targets and displays with gray targets) and computed the HLM described previously separately for both context types. In addition, we directly compared task performance in contexts with gray and colored targets in a repeated-measures ANOVA with the three factors target color (gray vs. colored), context novelty (novel vs. repeated), and reward (low vs. medium vs. high). 
Recognition task
Accuracy in the recognition task was examined with a 2 × 3 repeated-measures ANOVA with the within-subject factors context type (novel vs. repeated contexts) and reward magnitude (low vs. medium vs. high reward). 
Eye movements
Saccades, fixations, and blinks were detected using the SR Research parser. Saccades were defined by a minimum velocity of 30°/s combined with a minimum acceleration of 8,000°/s². For further analyses, eye position data were transformed into degrees of visual angle. Trials with incorrect responses, trials with first saccade latencies below 100 ms, and trials in which participants blinked were removed, resulting in 22.0% (SD = 6.91) of trials being discarded from the following analyses. The remaining data were evaluated with the same HLMs as used for response times. An additional fixation accuracy measure indicated whether participants' gaze stayed within 2° of the target location before the manual response was given. Due to technical issues, eye movements could be recorded for only 19 of the 20 participants; the eye movement results are based on the data of these 19 participants. 
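The distance-to-target measure and the 2° fixation accuracy criterion reduce to simple geometry once gaze samples are expressed in degrees of visual angle. A minimal sketch, where the pixels-per-degree value and all coordinates are made up for illustration:

```python
import numpy as np

def pix_to_deg(xy_pix, center_pix, pix_per_deg):
    """Convert screen coordinates to degrees of visual angle from center."""
    return (np.asarray(xy_pix, dtype=float) - np.asarray(center_pix)) / pix_per_deg

def first_fixation_distance(fix_xy_deg, target_xy_deg):
    """Euclidean distance (deg) between the first fixation and the target."""
    return float(np.hypot(*(np.asarray(fix_xy_deg) - np.asarray(target_xy_deg))))

def on_target(fix_xy_deg, target_xy_deg, radius_deg=2.0):
    """Fixation accuracy criterion: gaze within 2 deg of the target location."""
    return first_fixation_distance(fix_xy_deg, target_xy_deg) <= radius_deg

# Hypothetical setup: 35 pixels per degree, screen center at (960, 540)
fix = pix_to_deg((820, 540), (960, 540), 35.0)   # fixation 4 deg left of center
tgt = pix_to_deg((960, 400), (960, 540), 35.0)   # target 4 deg above center
print(first_fixation_distance(fix, tgt), on_target(fix, tgt))
```

The per-trial distances computed this way would then enter the same HLM structure as the response times.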
Results
Response times
At the beginning of the experiment, response times in novel (MBlock1 = 694.64 ms, SD = 67.26) and repeated contexts (MBlock1 = 693.41 ms, SD = 70.91) did not differ significantly, as the main effect of context type did not reach statistical significance, F(1, 13513) = 0.58, p = 0.448 (cf. Figure 3, upper row). As the experiment proceeded, participants became faster at responding to targets in repeated compared with novel contexts, as indicated by a significant interaction of blocks and context type, F(1, 6878) = 5.18, p = 0.023. This effect developed over the course of the experiment (ΔMBlock1 = 1.23 ms, SD = 48.53; ΔMBlock24 = 24.34 ms, SD = 25.71) and was reflected in the values predicted from the HLM's regression coefficients (cf. Figure 3, lower row). The RT difference between novel and repeated contexts predicted by the HLM increased by an average of 1.47 ms with each subsequent block, b = −1.47, SEb = 0.64, t(6878) = −2.28, p = 0.023, starting with the second block in session 1. Importantly, participants responded faster to targets presented in repeated high reward contexts (M = 614.41 ms, SD = 52.20) compared with repeated low reward contexts (M = 627.46 ms, SD = 56.63), an effect that developed over the course of the experiment and became visible as a significant three-way interaction of high reward magnitude, blocks, and context type, F(1, 18950) = 8.66, p = 0.003. This was visible in contexts with a gray target, F(1, 9273) = 5.29, p = 0.021, and with a colored target, F(1, 9589) = 3.95, p = 0.047. The RT decrease in repeated contexts associated with medium and low reward magnitude did not differ significantly, as the corresponding interaction (blocks × context type × medium reward) did not reach statistical significance, F(1, 18951) = 0.09, p = 0.762. The general level of response times differed neither between high and low nor between medium and low reward contexts, as neither the main effect of medium (vs. low) reward magnitude, F(1, 18951) = 1.01, p = 0.315, nor of high (vs. low) reward magnitude, F(1, 18951) = 0.02, p = 0.896, reached statistical significance (MLow = 630.59 ms, SD = 52.19; MMedium = 634.03 ms, SD = 51.53; MHigh = 625.57 ms, SD = 63.61). In session 2, the RT differences between repeated and novel contexts were maintained but less pronounced: The HLM predicted that the RT difference between novel and repeated contexts increased by an average of only 0.45 ms with each subsequent block in session 2, as indicated by an interaction of block, context type, and session, F(1, 5124) = 4.66, p = 0.031. The positive predictor value of this interaction indicates a shallower predicted RT decrease in repeated contexts in session 2 compared with session 1, b = 1.02, SEb = 0.47, t(5124) = 2.16, p = 0.031. 
Figure 3
 
Observed response times (upper row) for novel (dotted lines) and repeated (solid lines) contexts associated with low (left panel), medium (middle panel), and high (right panel) reward. The predicted values based on the calculated HLM are depicted in the row below the observed values (lower row). Low, medium, and high reward magnitudes are also indicated by different colors. The gray bar depicts the time gap (1–2 days) between the two experimental sessions. Error bars denote the standard error of the mean.
Response times gradually decreased with increasing block number (MBlock1 = 693.29 ms, SD = 64.10; MBlock24 = 604.70 ms, SD = 51.33), as indicated by a main effect of blocks, F(1, 25.6) = 41.31, p < 0.001. The HLM predicted a response time decrease of 4.0 ms on average with each subsequent block, b = −4.00, SEb = 0.62, t(25.6) = −6.43, p < 0.001. Response times were slower at the beginning of the second session (MBlock13 = 639.60 ms, SD = 75.84) compared with the end of the first (MBlock12 = 615.06 ms, SD = 51.50), F(1, 25.7) = 6.02, p = 0.021. This response time increase might have been due to the missing practice trials in session 2, as participants might have needed some trials to get used to the task again. Since the increase was similar in repeated and novel contexts and across reward magnitudes, the missing practice probably led to a general, nonspecific performance loss, visible as longer response times in the first block of session 2. In line with this conclusion, studies that also used practice trials in session 2 (e.g., Chun & Jiang, 2003) found no such general increase in the first block of the second session (see also Jiang et al., 2005). 
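As a back-of-the-envelope consistency check (not reported in the paper), the predicted per-block slope accounts well for the observed overall decline across the 23 block transitions:

```latex
\Delta \mathrm{RT}_{\mathrm{pred}} = b \cdot 23 = -4.00\,\mathrm{ms} \cdot 23 = -92\,\mathrm{ms},
\qquad
\Delta \mathrm{RT}_{\mathrm{obs}} = 604.70\,\mathrm{ms} - 693.29\,\mathrm{ms} \approx -88.6\,\mathrm{ms}.
```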
The three-way interaction of block, context type, and reward magnitude was further investigated with HLMs calculated separately for each reward magnitude. These models were specified like the main model but excluded the main and interaction effects of reward magnitude. While a significant interaction of blocks and context type was observed for low, F(1, 1426) = 5.44, p = 0.020, and high reward contexts, F(1, 6941) = 6.23, p = 0.013, this interaction did not reach statistical significance for medium reward contexts, F(1, 1149) = 0.86, p = 0.355. 
If reward learning had a general boosting effect on search performance, a performance increase in high reward contexts should be observable in both novel and repeated context configurations. To examine this notion, we additionally analyzed search performance separately for novel and repeated contexts associated with the different reward magnitudes. The HLM was specified like the main model, with the main and interaction effects of context type excluded, as performance was now analyzed separately for each context type. In repeated contexts, high (vs. low) reward led to shorter response times (MHigh = 614.4 ms, SD = 52.20; MLow = 627.5 ms, SD = 56.63), F(1, 9674) = 22.01, p < 0.001, while no difference was observed for medium (vs. low) reward (MMedium = 630.3 ms, SD = 56.67), F(1, 9676) = 0.85, p = 0.358. In novel contexts, neither high (vs. low) reward, F(1, 9205) = 1.30, p = 0.254, nor medium (vs. low) reward magnitude influenced response times, F(1, 9204) = 1.52, p = 0.218 (MHigh = 638.3 ms, SD = 59.25; MMedium = 638.7 ms, SD = 50.03; MLow = 634.5 ms, SD = 51.01). 
The reward magnitude received in trial n−1 did not influence response times in trial n: When it was used to predict response times in trial n, neither the interaction of block, context type, and received medium reward magnitude, F(1, 18942) = 0.42, p = 0.517, nor the interaction of block, context type, and received high reward magnitude reached statistical significance, F(1, 18935) = 1.34, p = 0.247. 
To test whether response times differed between contexts with gray and colored targets, we additionally ran a repeated-measures ANOVA with the three factors target color (gray vs. colored), context novelty (novel vs. repeated), and reward magnitude (low vs. medium vs. high). Response times showed no significant differences between contexts with gray (M = 634.33 ms, SD = 60.75) and colored targets (M = 628.71 ms, SD = 54.22), as neither the main effect nor any interaction including target color reached significance (all ps > 0.173). 
In sum, the results show that participants responded faster to targets in repeated compared with novel contexts, an effect that developed during the experiment. The RT differences between repeated and novel contexts emerged earliest and were most pronounced in the high reward condition. 
Error rates
Participants' search accuracy was comparatively high, as reflected in low error rates (M = 14.0%, SD = 34.70). Error rates decreased significantly during the experiment, F(1, 122247) = 53.44, p < 0.001. The error rates predicted by the HLM decreased by an average of 0.9% with each subsequent block, b = −0.94, SEb = 0.13, t(22981) = −7.31, p < 0.001. Participants made more errors in novel (M = 15.59%, SD = 5.65) than in repeated contexts (M = 12.41%, SD = 5.06), F(1, 23962) = 5.36, p = 0.021. The interactions of blocks and context type, F(1, 25148) = 2.17, p = 0.642, as well as blocks, context type, and session, F(1, 25721) = 0.23, p = 0.631, did not reach statistical significance. All other main and interaction effects were not significant (all ps ≥ 0.197). These results show that participants made fewer errors over the course of the experiment and fewer errors in repeated compared with novel contexts, whereas no differences between contexts associated with different reward magnitudes were observed. 
Eye movements
Analogous to the analysis of response times, the predicted distance between the first fixation and the target location gradually decreased with increasing block number, F(1, 222) = 19.57, p < 0.001 (cf. Figure 4, upper row). The predicted distance of 5.09° in block 1, b = 5.09, SEb = 0.16, t(31.51) = 31.02, p < 0.001, decreased on average by 0.05° with each subsequent block, b = −0.05, SEb = 0.01, t(222) = −4.42, p < 0.001. Comparable with the response time results, this distance was larger at the beginning of the second compared with the end of the first session, as indicated by a significant main effect of session, F(1, 272) = 17.86, p < 0.001. In repeated contexts associated with high reward magnitude, estimated distances decreased on average 0.03° faster per block than in low reward repeated contexts, b = −0.03, SEb = 0.01, t(16890) = −3.43, p = 0.001, as indicated by a significant interaction of blocks, context type, and high reward magnitude, F(1, 16890) = 11.75, p = 0.001. Analogous to the response times, this effect was visible in contexts with a gray, F(1, 8395) = 5.91, p = 0.015, and a colored target, F(1, 9594) = 5.65, p = 0.018. This faster decrease was, however, restricted to the high reward condition, as the general interaction of block and context type did not reach statistical significance, F(1, 11001) = 0.23, p = 0.633. No other main effects or interactions were significant (all ps ≥ 0.202). Consistent with the response time analysis, reward magnitudes in trial n−1 did not influence eye movements in trial n: The interaction of blocks, context type, and high reward magnitude did not reach statistical significance when the reward magnitude in trial n−1 was used to predict the distance of the first fixation to the target location in trial n, F(1, 16995) = 1.86, p = 0.173. The distances of the first fixation to the target location predicted by the HLM are shown in Figure 4, lower row. 
Figure 4
 
Distance of the first fixation to the target location (upper row). The predicted values based on the calculated HLM are depicted in the row below the observed values (lower row). Associated reward magnitudes are indicated by different colors and displayed in separate diagrams in columns one to three. Solid lines indicate performance in repeated, dotted lines in novel context configurations. The gray bar depicts the time gap between the two experimental sessions. Error bars denote the standard error of the mean.
Analogous to the response times, we examined whether the distance of the first fixation differed between contexts with gray and colored targets, using the same repeated-measures ANOVA with the three factors target color (gray vs. colored), context novelty (novel vs. repeated), and reward (low vs. medium vs. high). Again, we observed no differences between contexts with gray (M = 4.77°, SD = 0.93) and colored targets (M = 4.74°, SD = 0.87), as neither the main effect nor any interaction including target color reached significance (all ps > 0.323). 
To examine whether reward also influenced attention guidance beyond the first fixation, we additionally analyzed the fixation count. This measure is known to be related to visual search efficiency, as there is evidence that faster response times in repeated compared with novel contexts are accompanied by fewer total fixations (e.g., Harris & Remington, 2017; Peterson & Kramer, 2001; Zhao et al., 2012). Accordingly, participants not only responded faster in repeated contexts but also needed fewer fixations to find the target. Fewer total fixations in high reward repeated contexts would therefore also suggest that attention guidance was facilitated by reward. Results were similar to those for the first fixation: Only in high reward trials was the fixation count significantly lower in repeated (M = 2.12, SD = 0.32) compared with novel contexts (M = 2.24, SD = 0.35), F(1, 16983) = 9.58, p = 0.002. This effect also developed during the experiment and was restricted to the high reward condition, as the general interaction of block and context type did not reach statistical significance, F(1, 2804) = 2.08, p = 0.150. On average, participants made two to three fixations per trial (M = 2.20, SD = 0.33). 
To investigate whether reward had an effect on postselective processes in our task, we also analyzed mean fixation durations. Fixation durations are related to the speed of object processing and to the selection of the next fixation, as studies suggest that, while an object is fixated, the next fixation is planned in parallel with object processing (e.g., Herwig & Schneider, 2014; Ludwig, Davies, & Eckstein, 2014). Results showed no effect of either reward or context novelty on mean fixation durations (all ps > 0.121). 
Participants started to move their eyes 196 ms (SD = 68.6) after stimulus onset on average. They fixated the target location before giving a correct manual response in 95.5% of trials. When participants fixated the target before responding, fewer response errors were made, r = −0.29, p < 0.001 (correlation reported as two-tailed Pearson coefficient). 
In sum, these results show that only in high reward contexts was the distance between the first fixation and the target location shorter in repeated compared with novel contexts, and that this difference evolved over the course of the experiment. 
Recognition task
Participants did not reliably differentiate between novel (47.3% correctly identified contexts, SD = 9.68) and repeated contexts (M = 55.6%, SD = 13.47), as the statistical comparison failed to reach significance, F(1, 19) = 4.20, p = 0.055, ηp² = 0.181. Neither reward magnitude nor the interaction of context type and reward magnitude showed an effect on recognition accuracy, F(2, 38) = 0.92, p = 0.406, ηp² = 0.046; F(2, 38) = 0.45, p = 0.956, ηp² = 0.002. 
When asked for recognized experimental regularities in the follow-up survey, three out of 20 participants stated that they had recognized the (correct) association between color and reward magnitude. 
Discussion
This study investigated the influence of expected reward outcomes on contextual cueing by using context configurations containing colored context items that were associated with different reward magnitudes. We expected reward magnitude to modulate contextual cueing, with high reward leading to larger (cf. Pollmann et al., 2016) and faster emerging (cf. Tseng & Lleras, 2013) differences between repeated and novel context configurations. The HLMs used to analyze the results provided several advantages: The model was specified to comply with the hierarchical data structure and according to specific hypotheses about the impact of reward on contextual cueing. Reward learning was decoupled from both context configuration and target location, as reward magnitude was associated with a salient, response-irrelevant context feature (color). Thus, the same target locations could be used in novel and repeated contexts, and both context types could be combined with all three reward magnitudes. Participants could predict the reward magnitude from the color at display onset in both novel and repeated contexts. 
Our findings showed a faster decline of response times in repeated compared with novel contexts, an effect that was more pronounced and emerged earlier in contexts associated with high reward. Contextual cueing also became visible in the analysis of eye movement patterns: The distance of the first fixation to the target location decreased in repeated compared with novel contexts, an effect observed only in contexts associated with high reward. These results suggest that reward learning not only resulted in contextual cueing in repeated contexts associated with high reward but was also accompanied by faster and more efficient detection of the target in repeated context configurations. 
Taken together, these findings demonstrate that high reward facilitates learning of context configurations containing the reward-signaling color. Moreover, reward learning also leads to more efficient attention guidance toward the target location in the course of the experiment. 
Reward facilitates context configuration learning
In line with Tseng and Lleras (2013), we observed no main effect of reward but an accelerated response time decrease in repeated high reward contexts (but see Schlagbauer et al., 2014). Associating high reward with context color thus led to a more pronounced contextual cueing effect and not to a general boost of search performance. A possible theoretical explanation of these results was suggested by Tseng and Lleras (2013). In their study, the authors explained the accelerated contextual cueing effect in rewarded contexts by referring to an increase in arousal. They assumed that receiving a reward enhanced the observer's arousal, which in turn altered the memory consolidation process of the rewarded context. A state of high arousal thus strengthened the consolidation of the rewarded context into memory, resulting in faster retrieval on future encounters of the same context. Faster retrieval in turn resulted in faster target detection in rewarded contexts. 
Evidence for this arousal hypothesis comes from a result of their second experiment (Tseng & Lleras, 2013: experiment 2) in which participants experienced an unexpected point penalty when they had expected to obtain a reward. Results showed that experiencing an unexpected point penalty when expecting a reward immediately accelerated learning of the associated context, an effect that was observed in the subsequent block. The authors assumed that the unexpected outcome gave rise to “surprise,” which triggered more arousal than any expected outcome might have and eventually resulted in an immediate enhanced consolidation of the context into memory and in large contextual cueing effects. 
The present study extended the results of Tseng and Lleras (2013) by showing that high reward and arousal actually facilitated context configuration learning and did not result in an unspecific reward benefit. Contrary to Tseng and Lleras (2013), reward was associated with a salient color rather than with a particular context configuration (i.e., a combination of distractor orientations and distractor locations), and the same color was used in both repeated and novel contexts. This allowed disentangling context configuration learning from a more general arousal effect. As the decrease in response time shows, participants quickly learned the association between color and subsequent reward magnitude. Thus after several trials, they were able to predict the expected reward outcome already with onset of the search display. If we assume that expecting a high reward outcome triggers more arousal than expecting a low reward outcome, an impact of arousal on context configuration learning should become manifest as response time benefits only in those contexts that were repeated in the course of the experiment. A general arousal effect, on the other hand, should have boosted response times in both context types. 
Our results showed a response time benefit for repeated contexts associated with high reward, that is, only when context configurations were repeated in the course of the experiment. Following the notion of Tseng and Lleras (2013), we conclude that memory consolidation was strengthened with each repetition of a context configuration, and that this effect was enhanced when contexts were associated with high reward. High arousal levels during encoding also allowed faster retrieval of repeated contexts, resulting in faster target detection than in contexts associated with low reward and in novel contexts. As novel contexts were generated randomly in each trial, performance could not benefit from higher arousal in high reward trials. Thus, high reward was efficient in repeated contexts, because it facilitated learning of context configuration regularities. Target detection benefited indirectly, as attention could be guided faster to the target location in repeated contexts. 
Interestingly, a pronounced contextual cueing effect was mainly observed in contexts associated with high reward, while the effect was much smaller or virtually absent in contexts associated with low and medium reward. Evidence that contextual cueing is not very pronounced in contexts associated with a low (relative to high) reward magnitude has been reported elsewhere (Pollmann et al., 2016; Sharifian et al., 2017). These findings might seem puzzling at first, since a large number of contextual cueing studies have reported contextual cueing without assigning any reward. A possible explanation might lie in the limited processing resources available for context learning (Pollmann et al., 2016; Schlagbauer, Müller, Zehetleitner, & Geyer, 2012; Smyth & Shanks, 2008). Some studies used considerably larger numbers of repeated contexts that were divided across different experimental conditions. As a result, each context received only little capacity for encoding, storage, and retrieval. If these contexts are associated with different reward magnitudes, participants might allocate most of their resources to high reward contexts, leaving little or no capacity for contexts associated with medium or low reward (Pollmann et al., 2016). In the present study, 24 repeated context configurations were divided among three reward magnitudes, making it rather likely that participants had to allocate their resources selectively. Thus, processing limits might have contributed to the results. 
At first glance, our interpretation that little or no capacity was left for learning low and medium reward contexts seems to contradict the results of Jiang et al. (2005), who had hypothesized that observers are able to learn far more than the 12 repeated contexts used in most contextual cueing studies. To examine this notion, they conducted five training sessions with different contextual cueing tasks on five consecutive days. In each session, participants learned 12 unique repeated contexts that differed from those of the other sessions. In a sixth session 1 week after the last training session, the authors presented all 60 learned contexts, now randomly intermixed with 60 novel contexts. Jiang et al. (2005) observed contextual cueing for the contexts of all five training sessions and concluded that observers have a high capacity for context learning in contextual cueing. The present study used only 24 repeated contexts, less than half as many as Jiang et al. (2005). Contrary to the study of Jiang et al. (2005), however, participants had to learn twice as many contexts in one experimental session, and Schlagbauer et al. (2012) suggested that only about four out of 12 repeated contexts can actually be learned in an experimental session. If we assume that participants learn only a subset of repeated contexts due to limited resources for context learning, it seems rather likely that reward increases the probability that these limited resources are allocated to high reward rather than to medium or low reward contexts (see also Pollmann et al., 2016). This might especially be the case in our study, since reward magnitude could be predicted directly from the color. 
Reward learning and attention guidance
In addition to response times, eye movement measures were used to analyze search performance. Prior studies have reported that context learning manifests in first fixations landing closer to the target location (Manginelli & Pollmann, 2009; Peterson & Kramer, 2001; Tseng & Li, 2004; Zhao et al., 2012). Although participants were not explicitly instructed to move their eyes in our task, we hypothesized that more efficient attention guidance might become apparent when comparing eye movement patterns in repeated and novel contexts associated with different reward magnitudes. 
Indeed, eye movements showed contextual cueing effects similar to those observed in response times. Eye movements to the target location preceded correct responses, as indicated by the reported correlation between target fixations and response accuracy. The first fixation landed closer to the target location when the target was presented in a repeated context associated with high reward than when it was presented in a novel context or in a context associated with low reward. Participants also made fewer fixations in repeated contexts associated with high reward than in novel contexts or contexts associated with low reward. Both the more precise first fixations and the reduced fixation count in repeated contexts indicate that participants could use retrieved contextual information for more efficient target localization. Mean fixation durations, however, were influenced neither by reward nor by context novelty, indicating that neither factor affected postselective processes (e.g., Herwig & Schneider, 2014; Ludwig et al., 2014). These findings support the assumption that context learning manifests in more efficient attention guidance, and that this effect is enhanced by high reward. 
Interestingly, more efficient attention guidance was again visible only with high reward and virtually absent in medium and low reward repeated contexts. As described above, this difference might have resulted from a differential allocation of processing resources: Participants might have allocated most of their resources to high reward contexts, leaving little or no capacity for contexts associated with medium or low reward. This might also hold for the allocation of attention, especially as, for each participant, reward magnitude was consistently associated with the same color feature; that is, color reliably predicted the reward outcome. As color was salient, preferential allocation of processing resources to high reward contexts seems even more likely. 
We assume that the salient reward-signaling colors and the limits of context configuration learning together contributed to the absent first fixation effect in medium and low reward contexts. We suggest that participants allocated most of their resources to contexts signaling high reward (cf. Pollmann et al., 2016), but also perceived these contexts as more salient than low or medium reward contexts, as there is evidence that reward can add to the salience of stimuli (e.g., Hickey et al., 2010). The preferential allocation of resources and the increased salience of high reward contexts might have led to a prioritization of context learning in high compared with medium and low reward contexts. If participants learned high reward repeated contexts faster than medium or low reward contexts, recognition of these contexts might also have been faster. With faster recognition, information about the target location might already have been available in repeated contexts at the time the first saccade was planned, resulting in first fixations that landed closer to the target. This might explain why we observed first fixation effects in high, but not in low and medium reward repeated contexts. 
In sum, expecting a high reward not only facilitated context configuration learning, but also led to more efficient attention guidance to the target when a context configuration repeated. As in the response time results, we found no evidence for reward effects in novel contexts associated with high reward. In contrast, Schlagbauer et al. (2014) reported faster response times for targets presented in novel contexts as well when these were associated with different reward magnitudes. Importantly, however, in their experiment individual target locations were directly associated with high and low reward, and separate target locations were used for targets in novel and repeated contexts. Results showed that participants learned these particular location-reward associations for both context types, as high reward magnitude facilitated target detection also in novel contexts. The authors concluded that participants had learned to assign different attentional weights to target locations irrespective of context configuration repetition (Schlagbauer et al., 2014). 
The present data showed a response time benefit with high reward only for context configurations that were repeated in the course of the experiment. Search performance in novel contexts was not affected by reward because the same target locations were used in both context types, and they were associated with all three reward conditions. Thus, neither reward magnitude nor context novelty was predictive for particular target locations (and vice versa). In fact, our results showed that reward affected behavioral search performance (i.e., target responses, Figure 3) and eye movements (first fixations, Figure 4) in a rather similar way, as both underwent a similar development in the course of the experiment. It seems as if participants had to learn that particular context configurations come with particular target locations for reward to become effective. Such learning was possible only when context configurations repeated. 
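The response time and first fixation trajectories reported here (Figures 3 and 4) were modeled with hierarchical linear models (HLMs; cf. Hox, 2002; Raudenbush & Bryk, 2002; Snijders & Bosker, 1999). As a rough illustration of this class of analysis, the following sketch fits a random-intercept mixed model to simulated response times. The simulated data, variable names, and model structure are illustrative assumptions only, not the authors' actual analysis pipeline.

```python
# Illustrative HLM sketch (not the authors' analysis): response times are
# modeled with fixed effects of epoch, context type, and reward magnitude,
# and a random intercept per participant as the level-2 grouping factor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(12):
    subj_offset = rng.normal(0, 50)  # random intercept: per-participant baseline
    for epoch in range(10):
        for context in ("novel", "repeated"):
            for reward in ("low", "medium", "high"):
                # Assumed pattern: repeated high-reward contexts speed up
                # faster across epochs than all other conditions.
                slope = -15 if (context == "repeated" and reward == "high") else -5
                rt = 900 + subj_offset + slope * epoch + rng.normal(0, 30)
                rows.append(dict(subject=subj, epoch=epoch,
                                 context=context, reward=reward, rt=rt))
df = pd.DataFrame(rows)

# Random-intercept model with all fixed-effect interactions; participants
# are the grouping variable, mirroring a two-level HLM.
fit = smf.mixedlm("rt ~ epoch * context * reward",
                  df, groups=df["subject"]).fit()
print(fit.params["epoch"])                        # overall practice effect
print(fit.params["epoch:context[T.repeated]"])    # extra speed-up when repeated
```

The negative epoch-by-context interaction in such a fit corresponds to the contextual cueing effect: response times in repeated contexts decline more steeply across epochs than in novel contexts.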
As outlined above, we assume that memory consolidation was strengthened with each repetition of a context configuration, and that this effect was enhanced when contexts were associated with high reward. Similarly, one might assume that the target location linked to a particular context configuration was strengthened with every context repetition, and more so in high reward contexts. Such strengthening might be part of context configuration learning itself, as context configuration learning also strengthens the link between a particular context configuration and the respective target location. Alternatively, target locations might be strengthened directly, for instance, because they receive a form of special weighting given that target location information is needed for the search response (see Schlagbauer et al., 2012, for a similar notion). Although our results are not decisive on this point, weighting of target locations is a well-known factor in attention guidance, and weighting of locations associated with high reward (and inhibition of locations associated with low reward) has been reported before (cf. Heuer, Wolf, Schütz, & Schubö, 2017; Hickey et al., 2014; Wolf, Heuer, Schubö, & Schütz, 2017). 
In sum, our results provide evidence that once the link between a particular context configuration and the respective target location has been learned, attention guidance is facilitated toward this location with each context repetition. Facilitated attention guidance manifested in more precise eye movements, as the first fixations landed closer to the target with each repetition of a context associated with high reward. 
Prediction and attention guidance
In our design, a color feature was consistently associated with the same reward magnitude; hence, each color feature reliably predicted the reward outcome. The differences observed in context learning therefore cannot be attributed to differences in the predictability of the reward-signaling stimuli (cf. Tseng & Lleras, 2013). 
Gottlieb (2012) suggested three different mechanisms according to which organisms allocate their attention. In natural behavior, these mechanisms serve different functions in guiding behavior. First, an organism allocates attention to the stimulus that has the highest probability of delivering the most valuable information for an upcoming action. This mechanism is labeled “attention for action.” In learning environments, organisms attend to stimuli that are likely to reduce uncertainty. This mechanism (“attention for learning”) ensures attending to novel and unknown stimuli, which have an uncertain predictability but can result in a large information gain. The third mechanism, “attention for liking,” takes the expected value of signaled rewards into account. According to this mechanism, stimuli associated with high reward magnitude receive higher weights in attention guidance and are thus prioritized. 
“Attention for liking” fits well with the finding of more efficient attention guidance in high reward repeated contexts. Participants were neither instructed to perform eye movements in our task, nor were eye movements associated with any response outcome such as reward. In natural behavior, the role of vision is to provide the relevant information needed for decision-making, and gaze is used to acquire this kind of information (Hayhoe, 2017, 2018; Hayhoe & Ballard, 2014; Tatler, Hayhoe, Land, & Ballard, 2011). Following these considerations, the finding that eye movements were guided more efficiently to the target location in repeated contexts seems to reflect a behavior that was progressively refined as observers learned to attend to the location that contains the most relevant information (the target). The fact that this effect was more pronounced in contexts associated with high reward emphasizes the role of motivational value in learning. 
Conclusion
The present findings indicate that context configuration learning is magnified and accelerated by the anticipation of high reward magnitude. When a salient context feature signaled high reward, an increased contextual cueing effect manifested in shorter response times in repeated relative to novel contexts. Reward expectation did not lead to a general boost of search performance, as performance in novel contexts was unaffected. At the same time, first fixations landed closer to targets in repeated contexts associated with high reward. Taken together, the results show that high reward facilitates context learning and guides attention more efficiently to the target. 
Acknowledgments
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—project number 222641018—SFB/TRR 135, TP B3, project number 290878970—RTG 2271, and project number 220482592—IRTG 1901. 
Commercial relationships: none. 
Corresponding author: Nils Bergmann. 
Address: Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany. 
References
Anderson, B. A. (2013). A value-driven mechanism of attentional selection. Journal of Vision, 13 (3): 7, 1–16, https://doi.org/10.1167/13.3.7. [PubMed] [Article]
Anderson, B. A. (2016). The attention habit: How reward learning shapes attentional selection. Annals of the New York Academy of Sciences, 1369 (1), 24–39. https://doi.org/10.1111/nyas.12957
Anderson, B. A., Laurent, P. A., & Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences of the United States of America, 108 (25), 10367–10371. https://doi.org/10.1073/pnas.1104047108
Awh, E., Belopolsky, A. V., & Theeuwes, J. (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16 (8), 437–443. https://doi.org/10.1016/j.tics.2012.06.010
Bucker, B., Belopolsky, A. V., & Theeuwes, J. (2014). Distractors that signal reward attract the eyes. Visual Cognition, 23 (1-2), 1–24. https://doi.org/10.1080/13506285.2014.980483
Chelazzi, L., Perlato, A., Santandrea, E., & Della Libera, C. (2013). Rewards teach visual selective attention. Vision Research, 85, 58–72. https://doi.org/10.1016/j.visres.2012.12.005
Chun, M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4 (5), 170–178. https://doi.org/10.1016/S1364-6613(00)01476-5
Chun, M. M., & Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36 (1), 28–71. https://doi.org/10.1006/cogp.1998.0681
Chun, M. M., & Jiang, Y. (1999). Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science, 10 (4), 360–365. https://doi.org/10.1111/1467-9280.00168
Chun, M. M., & Jiang, Y. (2003). Implicit, long-term spatial contextual memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29 (2), 224–234. https://doi.org/10.1037/0278-7393.29.2.224
Chun, M. M., & Turk-Browne, N. B. (2007). Interactions between attention and memory. Current Opinion in Neurobiology, 17 (2), 177–184. https://doi.org/10.1016/j.conb.2007.03.005
Della Libera, C., & Chelazzi, L. (2006). Visual selective attention and the effects of monetary rewards. Psychological Science, 17 (3), 222–227. https://doi.org/10.1111/j.1467-9280.2006.01689.x
Della Libera, C., & Chelazzi, L. (2009). Learning to attend and to ignore is a matter of gains and losses. Psychological Science, 20 (6), 778–784. https://doi.org/10.1111/j.1467-9280.2009.02360.x
Driver, J. (2001). A selective review of selective attention research from the past century. British Journal of Psychology, 92 (1), 53–78. https://doi.org/10.1348/000712601162103
Eimer, M. (2014). The neural basis of attentional control in visual search. Trends in Cognitive Sciences, 18 (10), 526–535. https://doi.org/10.1016/j.tics.2014.05.005
Failing, M., Nissens, T., Pearson, D., Le Pelley, M., & Theeuwes, J. (2015). Oculomotor capture by stimuli that signal the availability of reward. Journal of Neurophysiology, 114 (4), 2316–2327. https://doi.org/10.1152/jn.00441.2015
Failing, M., & Theeuwes, J. (2018). Selection history: How reward modulates selectivity of visual attention. Psychonomic Bulletin & Review, 25 (2), 514–538. https://doi.org/10.3758/s13423-017-1380-y
Feldmann-Wüstefeld, T., Brandhofer, R., & Schubö, A. (2016). Rewarded visual items capture attention only in heterogeneous contexts. Psychophysiology, 53 (7), 1063–1073. https://doi.org/10.1111/psyp.12641
Feldmann-Wüstefeld, T., & Schubö, A. (2014). Stimulus homogeneity enhances implicit learning: Evidence from contextual cueing. Vision Research, 97, 108–116. https://doi.org/10.1016/j.visres.2014.02.008
Gottlieb, J. (2012). Attention, learning, and the value of information. Neuron, 76 (2), 281–295. https://doi.org/10.1016/j.neuron.2012.09.034
Goujon, A., Didierjean, A., & Thorpe, S. (2015). Investigating implicit statistical learning mechanisms through contextual cueing. Trends in Cognitive Sciences, 19 (9), 524–533. https://doi.org/10.1016/j.tics.2015.07.009
Harris, A. M., & Remington, R. W. (2017). Contextual cueing improves attentional guidance, even when guidance is supposedly optimal. Journal of Experimental Psychology: Human Perception and Performance, 43 (5), 926–940. https://doi.org/10.1037/xhp0000394
Hayhoe, M., & Ballard, D. (2014). Modeling task control of eye movements. Current Biology, 24 (13), R622–R628. https://doi.org/10.1016/j.cub.2014.05.020
Hayhoe, M. M. (2017). Vision and action. Annual Review of Vision Science, 3, 389–413. https://doi.org/10.1146/annurev-vision-102016-061437
Hayhoe, M. M. (2018). Davida Teller Award Lecture 2017: What can be learned from natural behavior? Journal of Vision, 18 (4): 10, 1–11, https://doi.org/10.1167/18.4.10. [PubMed] [Article]
Herwig, A., & Schneider, W. X. (2014). Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 143 (5), 1903–1922. https://doi.org/10.1037/a0036781
Heuer, A., Wolf, C., Schütz, A. C., & Schubö, A. (2017). The necessity to choose causes reward-related anticipatory biasing: Parieto-occipital alpha-band oscillations reveal suppression of low-value targets. Scientific Reports, 7 (1), 14318. https://doi.org/10.1038/s41598-017-14742-w
Hickey, C., Chelazzi, L., & Theeuwes, J. (2010). Reward changes salience in human vision via the anterior cingulate. The Journal of Neuroscience, 30 (33), 11096–11103. https://doi.org/10.1523/JNEUROSCI.1026-10.2010
Hickey, C., Chelazzi, L., & Theeuwes, J. (2011). Reward has a residual impact on target selection in visual search, but not on the suppression of distractors. Visual Cognition, 19 (1), 117–128. https://doi.org/10.1080/13506285.2010.503946
Hickey, C., Chelazzi, L., & Theeuwes, J. (2014). Reward-priming of location in visual search. PLoS One, 9 (7), e103372. https://doi.org/10.1371/journal.pone.0103372
Hickey, C., & van Zoest, W. (2013). Reward-associated stimuli capture the eyes in spite of strategic attentional set. Vision Research, 92, 67–74. https://doi.org/10.1016/j.visres.2013.09.008
Hox, J. (2002). Multilevel analysis: Techniques and applications. Mahwah, NJ: Erlbaum.
Jiang, Y., Song, J.-H., & Rigas, A. (2005). High-capacity spatial contextual memory. Psychonomic Bulletin & Review, 12 (3), 524–529. https://doi.org/10.3758/BF03193799
Lavie, N., & Dalton, P. (2014). Load theory of attention and cognitive control. In Nobre A. C. & Kastner S. (Eds.), The Oxford handbook of attention (pp. 56–75). New York: Oxford University Press.
Le Pelley, M. E., Pearson, D., Griffiths, O., & Beesley, T. (2015). When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General, 144 (1), 158–171. https://doi.org/10.1037/xge0000037
Le Pelley, M. E., Seabrooke, T., Kennedy, B. L., Pearson, D., & Most, S. B. (2017). Miss it and miss out: Counterproductive nonspatial attentional capture by task-irrelevant, value-related stimuli. Attention, Perception, & Psychophysics, 79 (6), 1628–1642. https://doi.org/10.3758/s13414-017-1346-1
Le-Hoa Võ, M., & Wolfe, J. M. (2015). The role of memory for visual search in scenes. Annals of the New York Academy of Sciences, 1339, 72–81. https://doi.org/10.1111/nyas.12667
Luck, S. J., & Hillyard, S. A. (1994). Electrophysiological correlates of feature analysis during visual search. Psychophysiology, 31 (3), 291–308. https://doi.org/10.1111/j.1469-8986.1994.tb02218.x
Ludwig, C. J. H., Davies, J. R., & Eckstein, M. P. (2014). Foveal analysis and peripheral selection during active visual sampling. Proceedings of the National Academy of Sciences of the USA, 111 (2), E291–299. https://doi.org/10.1073/pnas.1313553111
Manginelli, A. A., & Pollmann, S. (2009). Misleading contextual cues: How do they affect visual search? Psychological Research, 73 (2), 212–221. https://doi.org/10.1007/s00426-008-0211-1
Olson, I. R., & Chun, M. M. (2001). Temporal contextual cuing of visual attention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27 (5), 1299–1313. https://doi.org/10.1037/0278-7393.27.5.1299
Peterson, M. S., & Kramer, A. F. (2001). Attentional guidance of the eyes by contextual information and abrupt onsets. Perception & Psychophysics, 63 (7), 1239–1249. https://doi.org/10.3758/BF03194537
Pollmann, S., Eštočinová, J., Sommer, S., Chelazzi, L., & Zinke, W. (2016). Neural structures involved in visual search guidance by reward-enhanced contextual cueing of the target location. NeuroImage, 124 (Pt. A), 887–897. https://doi.org/10.1016/j.neuroimage.2015.09.040
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (Vol. 1). Newbury Park, CA: Sage.
Schankin, A., Hagemann, D., & Schubö, A. (2011). Is contextual cueing more than the guidance of visual-spatial attention? Biological Psychology, 87 (1), 58–65. https://doi.org/10.1016/j.biopsycho.2011.02.003
Schankin, A., & Schubö, A. (2009). Cognitive processes facilitated by contextual cueing: Evidence from event-related brain potentials. Psychophysiology, 46 (3), 668–679. https://doi.org/10.1111/j.1469-8986.2009.00807.x
Schlagbauer, B., Geyer, T., Müller, H. J., & Zehetleitner, M. (2014). Rewarding distractor context versus rewarding target location: A commentary on Tseng and Lleras (2013). Attention, Perception, & Psychophysics, 76 (3), 669–674. https://doi.org/10.3758/s13414-014-0668-5
Schlagbauer, B., Müller, H. J., Zehetleitner, M., & Geyer, T. (2012). Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks. Journal of Vision, 12 (11): 25, 1–12, https://doi.org/10.1167/12.11.25. [PubMed] [Article]
Sharifian, F., Contier, O., Preuschhof, C., & Pollmann, S. (2017). Reward modulation of contextual cueing: Repeated context overshadows repeated target location. Attention, Perception, & Psychophysics, 79, 1871–1877. https://doi.org/10.3758/s13414-017-1397-3
Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Sciences, 1 (7), 261–267. https://doi.org/10.1016/S1364-6613(97)01080-2
Smyth, A. C., & Shanks, D. R. (2008). Awareness in contextual cuing with extended and concurrent explicit tests. Memory & Cognition, 36 (2), 403–415. https://doi.org/10.3758/MC.36.2.403
Snijders, T., & Bosker, R. (1999). Multilevel modeling: An introduction to basic and advanced multilevel modeling. London: Sage.
Summerfield, C., & de Lange, F. P. (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews: Neuroscience, 15 (11), 745–756. https://doi.org/10.1038/nrn3838
Tan, M., & Wyble, B. (2015). Understanding how visual attention locks on to a location: Toward a computational model of the N2pc component. Psychophysiology, 52 (2), 199–213. https://doi.org/10.1111/psyp.12324
Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision, 11 (5): 5, 1–23, https://doi.org/10.1167/11.5.5. [PubMed] [Article]
Thaler, L., Schütz, A. C., Goodale, M. A., & Gegenfurtner, K. R. (2013). What is the best fixation target? The effect of target shape on stability of fixational eye movements. Vision Research, 76, 31–42. https://doi.org/10.1016/j.visres.2012.10.012
Tseng, Y.-C., & Li, C.-S. R. (2004). Oculomotor correlates of context-guided learning in visual search. Perception & Psychophysics, 66 (8), 1363–1378. https://doi.org/10.3758/BF03195004
Tseng, Y.-C., & Lleras, A. (2013). Rewarding context accelerates implicit guidance in visual search. Attention, Perception, & Psychophysics, 75 (2), 287–298. https://doi.org/10.3758/s13414-012-0400-2
van Asselen, M., & Castelo-Branco, M. (2009). The role of peripheral vision in implicit contextual cuing. Attention, Perception, & Psychophysics, 71 (1), 76–81. https://doi.org/10.3758/APP.71.1.76
Wolf, C., Heuer, A., Schubö, A., & Schütz, A. C. (2017). The necessity to choose causes the effects of reward on saccade preparation. Scientific Reports, 7(1), 16966. https://doi.org/10.1038/s41598-017-17164-w
Zellin, M., von Mühlenen, A., Müller, H. J., & Conci, M. (2014). Long-term adaptation to change in implicit contextual learning. Psychonomic Bulletin & Review, 21 (4), 1073–1079. https://doi.org/10.3758/s13423-013-0568-z
Zhao, G., Liu, Q., Jiao, J., Zhou, P., Li, H., & Sun, H.-J. (2012). Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings. Journal of Vision, 12 (6): 11, 1–13, https://doi.org/10.1167/12.6.11. [PubMed] [Article]
Figure 1
Trial procedure and exemplary search display. Participants were instructed to fixate the fixation dot to avoid eye movements before the search display was presented. The search display was shown until response or was replaced after 1,000 ms by a blank screen. Participants searched for a T-shaped target among L-shaped distractors and reported the target's orientation by button press. After a response was given, a feedback screen presented point feedback. The amount of points depended on the color presented in the search display. Color-reward associations were balanced across participants. Correct responses were rewarded; only incorrect responses were followed by no reward (“+0”).
Figure 2
Target locations and exemplary search displays in novel and repeated contexts associated with different reward magnitudes. The same target locations (indicated by blue circles) were used in repeated and novel contexts and in contexts associated with low, medium, and high reward. Color-reward associations were balanced across participants.
Figure 3
 
Observed response times (upper row) for novel (dotted lines) and repeated (solid lines) contexts associated with low (left panel), medium (middle panel), and high (right panel) reward. The predicted values based on the calculated HLM are depicted in the row below the observed values (lower row). Low, medium, and high reward magnitudes are also indicated by different colors. The gray bar depicts the time gap (1–2 days) between the two experimental sessions. Error bars denote the standard error of the mean.
Figure 4
 
Distance of the first fixation to the target location (upper row). The predicted values based on the calculated HLM are depicted in the row below the observed values (lower row). Associated reward magnitudes are indicated by different colors and displayed in separate diagrams in columns one to three. Solid lines indicate performance in repeated, dotted lines in novel context configurations. The gray bar depicts the time gap between the two experimental sessions. Error bars denote the standard error of the mean.