A triple dissociation between learning of target, distractors, and spatial contexts
Christophe C. Le Dantec, Elizabeth E. Melton, Aaron R. Seitz
Journal of Vision, February 2012, Vol. 12(2):5. doi:https://doi.org/10.1167/12.2.5
Abstract

When we perform any task, we engage a diverse set of processes. These processes can be optimized with learning. While there exists substantial research that probes specific aspects of learning, there is a scarcity of research regarding interactions between different types of learning. Here, we investigate possible interactions between Perceptual Learning (PL) and Contextual Learning (CL), two types of implicit learning that have garnered much attention in the psychological sciences and that often co-occur in natural settings. PL increases sensitivity to features of task targets and distractors and is thought to involve improvements in low-level perceptual processing. CL concerns learning of regularities in the environment (such as spatial relations between objects) and is consistent with improvements in higher level perceptual processes. Surprisingly, we found CL, PL for target features, and PL for distractor features to be independent. This triple dissociation demonstrates how different learning processes may operate in parallel as tasks are mastered.

Introduction
Learning and memory are of fundamental importance to our understanding of brain function. A major challenge to their study is that these are not unitary processes. An example of this can be found in the analysis of how one could learn to hunt and capture an animal (target). Semantic knowledge helps one decide the general location to start the hunt (perhaps one's colleagues mentioned a particularly good forest location). Episodic knowledge may be useful in finding the location where on a previous trip a target was discovered. Procedural knowledge is often used to describe how one learns to conduct the hunt (e.g., move quietly, set up one's gear, etc.). However, these broad categories do not capture other aspects of learning that need to take place to successfully spot, track, and capture the target. For instance, one must learn which features in the environment indicate the likely presence or absence of the target; research has shown that this type of Contextual Learning (CL) can occur both quickly (with just a couple of examples) and implicitly (i.e., without subjects being aware of their acquired knowledge; Chun, 2000; Chun & Jiang, 1998). One must also learn to spot the target when one is looking in the correct location (camouflage can make this very difficult). Research on Perceptual Learning (PL) indicates that improving one's visual discrimination abilities involves refining one's representations of basic stimulus features (such as line orientations and motion directions) and that weeks of training involving many thousands of stimulus presentations can be required to achieve expert performance (Ahissar & Hochstein, 2004; Fahle, 2004; Seitz & Watanabe, 2009). Our understanding of different learning processes (such as CL and PL) is largely due to clever techniques that researchers have developed to isolate individual learning processes. 
However, this comes with the cost that a comprehensive approach to learning is rare and little is known regarding the extent to which different components of learning develop and interact together as participants learn to perform a task. 
All tasks involve specific contexts and consist of specific features. Accordingly, in all but perhaps the most trivial circumstances, both CL and PL will take place in concert. However, to date, no research has examined how both types of learning arise and interact within the same study. This is a notable problem given recent research showing that the long-touted spatial specificity of PL can be an artifact of the limited spatial configurations in which the stimuli were trained (Xiao et al., 2008). It may be that some effects touted as PL are actually due to subjects learning the spatial contexts of the experimental setup. Likewise, some effects thought to be CL could be due to improved sensitivities of the elements being trained. Without examining these learning effects together, it is difficult to accurately understand what in fact subjects have learned. Here, we begin to address these issues through the investigation of these two well-established types of visual implicit learning, which are phenomenologically distinct and are believed to occur at different stages of visual processing and to be subserved by plasticity in different brain areas. 
For CL, we chose to adapt the paradigm of contextual cuing (Chun, 2000). In the contextual cuing paradigm, subjects perform a visual search (for example, finding a T among a set of Ls) in which the spatial locations of the distracting elements are repeated for some of the trials and novel distractor locations are chosen for other trials. Previous research has found that, with just a handful of repeats of a given spatial context, search response times for these Repeated contexts are faster than those for Novel contexts that are made up of newly generated distractor locations (Chun & Jiang, 1998). These same studies have demonstrated that when subjects are asked to perform a forced choice between Novel and Repeated contexts they are unable to identify which contexts were repeated, a demonstration that this learning is implicit. Furthermore, Jiang, Song, and Rigas (2005) showed that response time benefits for these Repeated contexts can last for weeks or more and that, in addition to the improved search times for the Repeated over the Novel search displays, response times improve across conditions over the course of the experimental sessions. These overall improvements in search times for both Novel and Repeated contexts may be an indication of PL and provide our motivation to better understand how PL may form in relation to CL. 
For PL, we examined orientation-specific learning resulting from training on a visual search task (Ahissar & Hochstein, 2004). This PL paradigm is particularly appropriate to study in conjunction with CL because it employs the same type of visual search task (as used for contextual cuing) and may account for some of the across-condition effects found in the above-mentioned studies of CL. The key difference between studies of PL and CL is that PL is typically operationalized by the degree to which performance benefits are specific to the features of the training stimuli. This specificity can be assessed by testing subjects' performance, after training, on search arrays that include features different from those used during training. 
We thus designed a novel visual search task that manipulated both CL and PL within the confines of the same task. To evaluate PL, subjects were trained on specific target and distractor orientations over successive days; at the end, a test session was run to compare performance on Trained (orientations experienced during the training sessions) vs. Untrained (orientations only used during the test sessions) target and distractor orientations. To evaluate CL, performance on Repeated (configurations with fixed locations of the target and the distractors) vs. Novel (with target locations matched to those of the Repeated contexts to control for target eccentricity effects) contexts was investigated. An important aspect of this design is that while the spatial locations of the target and distractor stimuli were fixed for a given Repeated context display, the orientations of these stimuli were tested using both the Trained and Untrained orientation sets. This allowed us not only to examine the effects of both PL and CL but also to address possible interactions between the two. 
Throughout this manuscript, we use the term “Trained” to refer to the orientations used during training and “Untrained” to refer to the orientations not used during training. Likewise, the term “Repeated” refers to the spatial configurations of target and distractor locations that are repeated more than once during training (even though the orientations of these are sometimes altered) and “Novel” refers to a spatial configuration that has not been seen before (i.e., differs in one or more distractor locations from a previously presented configuration). 
Experiment 1
Methodology
Subjects
Ten subjects (five females and five males; age range = 20–28 years; mean = 22.75 years, SD = 3.41 years) had normal or corrected-to-normal vision and were paid between $8 and $13 per hour, based on their performance. Subjects provided informed consent at the beginning of the experiment and experimental conditions conformed to the guidelines of the University of California, Riverside Human Research Review Board. 
Materials
An Apple Mac Mini running Matlab (Mathworks, Natick, MA) and Psychtoolbox Version 3 (Brainard, 1997; Pelli, 1997) was used for stimulus generation and experiment control. Subjects sat on a height-adjustable chair at 50–55 inches from a 24″ Sony Trinitron CRT monitor (resolution: 1600 × 1200 at 100 Hz). Gaze position on the screen was tracked with the use of an eye tracker (EyeLink 1000, SR Research). 
Task and stimuli
Subjects were required to perform a visual search task. The stimuli were white (95 cd/m2) or black (5.5 cd/m2) lines (0.1° × 1°) presented on a gray (40 cd/m2) background. Subjects were trained to find a target with an orientation of 45° (Figure 1A or 1C) or 135° (Figure 1B), counterbalanced across subjects, among a set of distractors (ranging from 316° to 44° (Figure 1A or 1B) or 46° to 134° (Figure 1C), counterbalanced across subjects) and report whether the target was white or black (randomized across trials). The distractor range (an orientation wedge centered on 0° or 90°) was determined with a staircase procedure (see below for description) such that the closest distractor orientation of the wedge was adaptively moved as close to the target orientation as the participant could tolerate and still perform well. For a given trial, distractor orientations were drawn uniformly across the extent of the wedge such that there was always one distractor orientation present at the threshold value. During the testing session, subjects performed the same task but with both possible target and both possible distractor orientations, in the 4 possible combinations. 
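The trial-wise distractor sampling described above can be sketched as follows. This is a minimal illustration in Python, not the authors' Matlab/Psychtoolbox code; the function name, argument names, and the exact handling of the wedge edges are assumptions based on the description.

```python
import random

def sample_distractor_orientations(target_deg, threshold_deg, wedge_center_deg,
                                   wedge_half_width_deg=44, n_distractors=11):
    """Sketch of the trial-wise distractor sampling described above.

    The wedge is centered on `wedge_center_deg` (0 or 90 deg); its edge
    nearest the target sits `threshold_deg` away from the target orientation.
    One distractor is forced to that edge, so the threshold orientation is
    always present; the rest are drawn uniformly across the wedge.
    """
    # Which side of the wedge center the target lies on (e.g., 45 vs. 0).
    direction = 1 if target_deg > wedge_center_deg else -1
    # Wedge edge closest to the target, `threshold_deg` short of the target.
    near_edge = target_deg - direction * threshold_deg
    # Opposite edge of the wedge (e.g., -44 deg, i.e., 316 deg, for center 0).
    far_edge = wedge_center_deg - direction * wedge_half_width_deg
    lo, hi = sorted((near_edge, far_edge))
    orientations = [near_edge]  # one distractor fixed at the threshold value
    orientations += [random.uniform(lo, hi) for _ in range(n_distractors - 1)]
    return orientations
```

With a 45° target, a 10° threshold, and the wedge centered on 0°, this yields 11 distractor orientations spanning −44° (i.e., 316°) to 35°, with one distractor pinned at 35°, matching the constraint that the closest distractor always sits at the current threshold.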
Figure 1
 
Examples of stimulus displays: (A) target oriented at 45° and horizontal context; (B) target oriented at 135° and horizontal context; (C) vertical context. (D) Grid indicating the 36 possible positions (9 in each quadrant) for the target and the distractors. (E) Target oriented at 45° with both horizontal and vertical distractors; Experiment 2.
The spatial locations of targets and distractors were presented on a grid (Figure 1D) such that the eccentricity (2.5°, 4.5°, and 7°) and placement in the left/right and upper/lower visual quadrants were balanced across Repeated and Novel contexts. Each line could be presented in one of 9 locations (3 at each eccentricity) in each quadrant, and 3 lines were presented in each visual quadrant for a set size of 12 search items. To manipulate context, we pre-calculated all possible configurations of the 12 items within the grid given the above constraints. To prevent the occurrence of displays where all items were presented at the same eccentricity, we added the further constraint that all displays contained at least 3 items at each eccentricity. From this set of possible search displays, some contexts were selected to be Repeated on each day (the target location of each of these contexts was fixed for each subject) and others selected for use as Novel displays. 
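The display constraints above (3 items per quadrant, at least 3 items at each eccentricity, 12 items total on the 36-position grid) can be sketched as a simple rejection sampler. This is an illustrative Python reconstruction under stated assumptions, not the authors' pre-calculation code; the position encoding and function names are hypothetical.

```python
import random

# The 36 grid positions: (quadrant, eccentricity, slot), with 3 slots at
# each of the 3 eccentricities (2.5, 4.5, 7 deg) in each of the 4 quadrants.
ECCENTRICITIES = (2.5, 4.5, 7.0)
POSITIONS = [(q, e, s) for q in range(4) for e in ECCENTRICITIES
             for s in range(3)]

def valid_configuration(items):
    """Check the display constraints described above: exactly 3 items per
    quadrant and at least 3 items at each eccentricity."""
    per_quadrant = {q: 0 for q in range(4)}
    per_ecc = {e: 0 for e in ECCENTRICITIES}
    for q, e, _ in items:
        per_quadrant[q] += 1
        per_ecc[e] += 1
    return (all(n == 3 for n in per_quadrant.values())
            and all(n >= 3 for n in per_ecc.values()))

def sample_configuration(rng=random):
    """Rejection-sample one 12-item search display meeting the constraints."""
    while True:
        # Drawing three positions per quadrant guarantees quadrant balance...
        items = []
        for q in range(4):
            quadrant_slots = [p for p in POSITIONS if p[0] == q]
            items += rng.sample(quadrant_slots, 3)
        # ...but the eccentricity constraint still needs to be checked.
        if valid_configuration(items):
            return items
```

A Repeated context would then correspond to storing one such sampled configuration (with a fixed target position) and re-presenting it across trials, while Novel contexts are drawn fresh each time.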
Design and procedure
The experiment was divided in different phases (Figure 2A). In the Familiarization phase (session 1), subjects were instructed on the task and ran 20 practice trials. 
Figure 2
 
Experimental timeline: The experiment was divided in different phases. During session 1, subjects were familiarized with the task. In sessions 2–10, they were trained on the visual search task (see Figure 3) with particular target and distractor orientations and with some Repeated contexts. After the training phase, subjects were tested with different target and distractor orientations. The same day, an explicit recognition test (2IFC) was performed to test for explicit learning of the Repeated contexts (see Figure 4).
In the Training phase (sessions 2 to 10), subjects were trained on the visual search task. Each session consisted of 1,000 trials that were split into eight blocks with a short break between blocks. The entire session lasted approximately 1 h. The general procedure for the training is shown in Figure 3. A gaze-contingent display was utilized such that the subject had to fixate a centrally presented red dot for 500 ms in order for each trial to begin. The search display was presented to the subject for 200 ms followed by a gray screen, and the subject then had 5,000 ms to indicate the color of the target with a key press (“1” for white or “2” for black). The trial was determined to be invalid if an eye movement was made while the search display was on the screen, no response was given, or a key other than “1” or “2” was pressed. Invalid trials were rare: on average 99.3% (SD = 1.4%) of trials were valid across all subjects and sessions, and the worst session still had 94% valid trials. No feedback was given to the subject except during the 20 practice trials given at the beginning of each session. 
Figure 3
 
General procedure for the visual search task.
The session was organized into miniblocks that contained each possible Repeated context and a set of Novel contexts with matched target eccentricities. After each miniblock, the orientation range of the distractors was adjusted with a staircase procedure: the distractor range was increased if the average performance in the previous block was greater than 80% correct, and the range was decreased if the previous block's performance was lower than 70% correct. The value for the new block was set using the current threshold value (the orientation difference between the target and the closest distractor) multiplied by the difference between the proportion correct for that block and 0.75. This procedure was based on pilot experiments that found stable threshold estimates and asymptotes using this procedure with the present task and stimuli. Key to this procedure is that performance was averaged across both the Novel and Repeated configurations when calculating these thresholds. This ensured a relatively constant level of performance during training and ensured that the distractor range was always matched between Repeated and Novel contexts. However, since the threshold was calculated based on the average performance between the Novel and Repeated contexts, accuracy differences between them could still be measured. Threshold values reported in the manuscript represent the performance in the final 10 blocks of a given session. Data for each subject and each session were visually inspected to ensure that these values were stable and represented valid threshold estimates. Furthermore, across all subjects and sessions, we find that the standard deviation of the threshold estimates over the last 10 blocks of each training session was 0.56 degrees, which is small in magnitude compared to the changes in thresholds under consideration in this paper. 
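One plausible reading of the blockwise staircase rule above can be sketched as follows. The description leaves the exact arithmetic of the update ambiguous, so the interpretation below (a multiplicative step proportional to the distance of the block's accuracy from 0.75, applied only outside the 70–80% dead zone) is an assumption, not the authors' implementation.

```python
def update_threshold(threshold_deg, prop_correct):
    """Blockwise staircase sketch (one plausible reading of the rule above;
    the exact arithmetic is an interpretation, not the authors' code).

    `threshold_deg` is the orientation difference between the target and the
    closest distractor. Accuracy above 80% shrinks it (the distractor wedge
    grows, so the task gets harder); accuracy below 70% grows it (the task
    gets easier); within the 70-80% dead zone it is left unchanged. The step
    is the current threshold multiplied by the distance of the block's
    proportion correct from 0.75.
    """
    if 0.70 <= prop_correct <= 0.80:
        return threshold_deg
    step = threshold_deg * (prop_correct - 0.75)
    return threshold_deg - step
```

Note that, as the text emphasizes, `prop_correct` here would be the accuracy averaged across Novel and Repeated trials in the miniblock, so a single threshold applies to both context types and any Repeated-vs-Novel accuracy difference remains measurable.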
In the Test phase (session 11), subjects performed the same task but also with Untrained target and distractor orientations. In this session, each combination of Trained/Untrained target orientation and Trained/Untrained distractor set orientation was run in separate interleaved blocks. For these tests, a separate staircase (with a blockwise procedure identical to that used during training) was run on each of the four orientation conditions, each, however, still based on the average performance across Novel and Repeated configurations. 
After performing the test sessions, subjects performed an explicit recognition test to assess whether they could identify the learned Repeated contexts. The purpose of this test was to examine whether previous findings that CL is implicit were replicated in our procedure. This was a particular concern given the fact that our Repeated contexts were repeated many times across multiple sessions, which could lead to subjects gaining explicit knowledge of these contexts. A two-interval forced choice (2IFC) procedure was used where two search displays were presented successively (Figure 4) and the subject indicated which display was more familiar. In some trials, one stimulus was a Repeated context, while in others both were Novel. 
Figure 4
 
Explicit recognition test. Subjects were successively presented with two displays and asked to report whether the first or second display was the most familiar.
Results
Data from the training sessions showed significant effects of both Perceptual Learning (PL) and Contextual Learning (CL). PL can be seen in Figure 5A in that the smallest orientation difference that subjects could discriminate decreased from 13° ± 1.8° to 7.1° ± 1.1° across sessions (effect of the day (D): F(8,9) = 3.2, p = 0.0034; ANOVA), and the reaction time decreased from 875.8 ms ± 31.5 ms to 691.8 ms ± 23.2 ms (D: F(8,9) = 9.164, p < 0.0001; ANOVA). Given that these thresholds and reaction times were based on the average performance across Repeated and Novel contexts, we were able to examine accuracy differences between these context types in addition to differences in reaction times. CL can be seen in Figures 5B and 5C, showing that accuracy (effect of the context (CL): F(1,9) = 4.84, p = 0.0001; ANOVA) and reaction times (CL: F(1,9) = 50.14, p = 0.0001; ANOVA) were better for the Repeated compared to the Novel contexts. 
Figure 5
 
Results of training sessions. On the first row, for the Experiment 1: (A) threshold (distance in degrees between the orientation of the target and the possible orientation of the distractors), (B) accuracy, (C) reaction times in training sessions, and (D) reaction times using only the first 5 presentations of each context in each training session. On the second row, for Experiment 2: (E) threshold (distance in degrees between the orientation of the target and the possible orientation of the distractors), (F) accuracy, (G) reaction times in training sessions, and (H) reaction times using only the first presentation of each context in each training session.
A notable issue is that there were only nominal increases in CL between the first and last training sessions: the difference in performance between Novel and Repeated contexts showed only a tendency on the first training day (reaction time: p = 0.076; accuracy: p = 0.069; paired t tests) but was significant by the last day (reaction time: p = 0.0004; accuracy: p = 0.0022; paired t tests). However, there were no interactions with training day (F(8,72) = 0.48, p = 0.87 and F(8,72) = 0.91, p = 0.51 for accuracy and reaction time, respectively), suggesting that the effects of CL arose rapidly. A consideration is that our design, using a staircase to increase difficulty within the session, resulted in overall accuracy dropping from 100% in the first block to ∼70% in the final block and RTs growing from ∼500 ms to ∼800 ms. These large changes in performance prevented us from seeing the time course of CL within the first session. However, a fast time course of CL is typical, with effects often arising within the first few trials of a session (Chun & Jiang, 1998). 
To better understand the development of contextual learning in this experiment, we compared the difference of reaction times for the Repeated vs. Novel contexts on the first 5 presentations of each context in each training session (see Figure 5D); typically, CL takes 5 or more trials before it can be clearly observed (Chun & Jiang, 1998). Here, we found no significant contextual effects at the beginning of the first, second, and third days (CL on Day 1: p = 0.5338; CL on Day 2: p = 0.9754; CL on Day 3: p = 0.1555; paired t tests), but we found significant or near-significant differences on the other days (CL on Day 4: p = 0.0043; CL on Day 5: p = 0.0306; CL on Day 6: p = 0.0585; CL on Day 7: p = 0.0222; except CL on Day 8: p = 0.3326 and on Day 9: p = 0.1244; paired t tests). Of note, accuracy was close to ceiling (see Supplemental Figure 1A) in these trials because the orientation difference between targets and distractors was substantially above threshold. Previous research has shown that CL is most evident in reaction times when accuracy is high (Chun & Jiang, 1998). While these results are not conclusive (Experiment 2 addresses this question more conclusively), they suggest that subjects did learn the contexts during the course of our experiment. 
While training data demonstrate that PL and CL can arise within the same task, the design of the training sessions was not sensitive to possible interactions between these types of learning. The testing sessions were designed to assess the specificity of learning and the extent to which different aspects of learning interacted. To accomplish this, separate staircases were run for each combination of Trained and Untrained target and distractor conditions (thresholds plotted in Figure 6A); notably, these thresholds were still based on the average performance of Novel and Repeated configurations. We found that thresholds were smaller for the Trained target than the Untrained target (effect of the target (T): F(1,9) = 9.64, p = 0.013; ANOVA) as well as for the Trained distractors compared to the Untrained distractors (effect of the distractor (d): F(1,9) = 10.95, p = 0.0091; ANOVA). Surprisingly, we found no interaction between those factors (T × d: F(1,9) = 0.002, p = 0.96; ANOVA). These data suggest that PL shows independent benefits for the orientations of the target and distractor elements. 
Figure 6
 
Results of test sessions: For Experiment 1, (A) threshold, (B) accuracy, and (C) reaction time are shown as a function of the Trained target (TT), Untrained target (UT), Trained distractors (TDs), and Untrained distractors (UDs). For Experiment 2, (D) accuracy and (E) reaction time are shown as a function of the Trained target (TT) and Untrained target (UT) for Repeated and Novel contexts.
Importantly, the Repeated configurations were tested using both the Trained and Untrained orientation sets (thus, context only specifies the spatial locations of target and distractors, not the orientations of these stimuli). This allowed us to address possible interactions between CL and PL by examining accuracy (Figure 6B; staircases allowed performance to range from 70% to 80%) and reaction times (Figure 6C) for the Trained and Untrained orientations in Repeated and Novel contexts (see Table 1). As a measure of CL, we found that subjects responded more quickly (CL: F(1,9) = 15.90, p = 0.0032; ANOVA) and more accurately (CL: F(1,9) = 26.36, p = 0.0006; ANOVA) for the Repeated contexts compared to Novel ones. PL for target orientations can also be seen in subjects' faster reaction times (T: F(1,9) = 14.87, p = 0.0039; ANOVA) and higher accuracy (F(1,9) = 6.50, p = 0.031; ANOVA) for the Trained as compared to the Untrained target orientation. Likewise, PL for the distractor orientations can be seen in faster reaction times (d: F(1,9) = 17.60, p = 0.0023; ANOVA) and higher accuracy (d: F(1,9) = 15.38, p = 0.0035; ANOVA) for the Trained distractors as compared to the Untrained ones. However, we failed to find any interaction between CL, PL for targets (T), and PL for distractors (d) in either reaction time (CL × T: F(1,9) = 1.63, p = 0.23; CL × d: F(1,9) = 1.085, p = 0.32; T × d: F(1,9) = 0.36, p = 0.56; ANOVA) or accuracy (CL × T: F(1,9) = 2.36, p = 0.16; CL × d: F(1,9) = 3.29, p = 0.10; T × d: F(1,9) = 1.056, p = 0.33; ANOVA). These results show a triple dissociation between CL, PL for targets, and PL for distractors; PL for targets transfers to Novel contexts and distractor conditions, PL for distractors transfers to Untrained targets and Novel contexts, and CL transfers to Untrained target and distractor orientations. These data provide evidence that separate processes may subserve each of these learning effects. 
Table 1
 
Accuracy and reaction times during the test session as a function of the context (Novel or Repeated) and the target and distractor orientations (Trained or Untrained) of the stimuli presented in Experiment 1.
                                          Context
                                          Novel (mean/SE)   Repeated (mean/SE)
(A) Accuracy (%)
   Trained target/trained distractors     75.00/0.96        85.74/1.93
   Untrained target/trained distractors   72.50/1.08        80.94/1.27
   Trained target/untrained distractors   69.91/0.69        76.85/1.75
   Untrained target/untrained distractors 68.89/2.00        74.26/2.12
(B) Reaction time (ms)
   Trained target/trained distractors     617.1/19.1        574.1/24.7
   Untrained target/trained distractors   695.3/12.6        612.3/21.5
   Trained target/untrained distractors   745.3/21.1        665.0/21.6
   Untrained target/untrained distractors 784.0/15.1        708.4/17.5
The explicit recognition test was run to see if we would replicate previous findings that CL is implicit (Chun & Jiang, 1998). We were especially interested in determining if the prolonged training on each context (40 presentations per day for 10 days) would enable subjects to develop explicit knowledge and be aware of these Repeated items. However, we found no evidence that subjects were aware of the Repeated contexts (CL: p = 0.96 Novel vs. Repeated; paired t test; cf. Figure 7). This demonstration that CL in our study was implicit helps substantiate our claim that the CL observed in the current study is similar to that found in previous research of contextual cuing (Chun & Jiang, 1998). 
Figure 7
 
Results of the explicit recognition test: percentage of correct recognition.
While the above results clearly showed how PL evolved with training, our tracking of the development of the time course of CL was rather poor. To better address CL, we undertook another experiment in which we made a number of improvements to the experimental design. To make the task more challenging, we reduced the stimulus duration to 100 ms and combined the distractor sets so that, on a given trial, distractors were oriented both clockwise and counterclockwise to the target. These changes were aimed at reducing pop-out of the target from among the distractors, which was a persistent problem at the beginning of each training session, when the orientation differences between target and distractors were large. In addition, to improve the rhythm of the experiment, we changed the intertrial interval from 5 to 2 s so as to achieve a procedure more conducive to learning (Zhang et al., 2008). 
Experiment 2
Methodology
Subjects
Ten undergraduate students (five females and five males; age range = 19–25 years; mean = 22.33 years, SD = 2.18 years) were recruited at the University of California, Riverside according to the same standards as described for Experiment 1. 
Design and procedure
Procedures are the same as in Experiment 1 except as noted. Subjects were trained to find a target with a specific orientation (45° or 135°, counterbalanced across subjects) among a set of distractors (ranging from 316° to 44° and 46° to 134°), but here, the distractors presented in a given trial were chosen from both distractor sets (Figure 1E). In this experiment, the search display was presented to the participant for 100 ms followed by a gray screen, and the subject then had 2,000 ms to indicate the color of the target with a key press (“1” for white or “2” for black). 
As with Experiment 1, Experiment 2 was divided into 3 phases. Subjects had a familiarization phase (Day 1) followed by 8 days of training that started at least 24 h after familiarization. Each session followed the general procedure described above, and all sessions started with 20 practice trials. Each training session consisted of 1,200 trials that were split into eight blocks with a short break between blocks; the entire session lasted approximately 1 h. In Experiment 2, each miniblock consisted of 24 trials (12 Repeated contexts and 12 Novel contexts). Finally, in the test session (approximately 1 h), subjects' performance was assessed with Trained and Untrained target orientations and Repeated and Novel contexts. For these tests, we used a fixed orientation difference between target and distractors of 15°, which corresponded closely to the final threshold value found in the training sessions. 
Results
Data from the training sessions showed a significant effect of Perceptual Learning. This learning can be seen in Figure 5E, where the smallest orientation difference that subjects could discriminate decreased from 31.2° ± 0.6° to 14.3° ± 2.0° (D: F(7,8) = 3.29, p = 0.0053; ANOVA) across sessions, while the reaction time (Figure 5G) decreased from 811.4 ms ± 25.3 ms to 730.8 ms ± 17.7 ms (D: F(7,9) = 7.38, p < 0.0001; ANOVA). This evolution of performance shows that subjects became increasingly efficient at the task as training progressed. 
Data from the training also indicated CL: accuracy (CL: F(1,9) = 4.84, p = 0.0001, ANOVA; Figure 5F) and reaction times (CL: F(1,9) = 50.14, p = 0.0001, ANOVA; Figure 5G) were significantly better for Repeated than for Novel contexts. Most importantly, for both accuracy and reaction times, we found an interaction between Context and Day (accuracy, CL × D: p = 0.0001; reaction times, CL × D: p = 0.0006; ANOVA). These results show clearly that CL increased from the first to the last training session. 
To address whether these results were due to contexts being newly learned in each session or to CL building up across sessions, we compared reaction times for Repeated vs. Novel contexts on the first presentation of each of the 12 Repeated and 12 Novel contexts on each of the 8 training days. As indicated in Figure 5H, on the first day the analysis showed no significant difference between Repeated and Novel contexts (CL on Day 1: p = 0.73; paired t test), as expected because subjects had never seen any of those contexts before. On the remaining training days, this difference was significant (Day 2: p = 0.017; Day 3: p = 0.0062; Day 5: p = 0.023; Day 6: p = 0.0252; Day 7: p = 0.022; Day 8: p = 0.011; paired t tests) or showed a tendency (Day 4: p = 0.10; paired t test) toward faster responses to Repeated than Novel contexts. Again, accuracy was close to ceiling at the beginning of each session (see Supplementary Figure 1B). These results show clearly that the effect of CL was not present during the first block of the first session but arose through training, and that this learning was maintained across sessions. 
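The per-day comparison described above can be sketched with a paired t test; the variable names and synthetic reaction times below are illustrative assumptions, not the authors' data or analysis code:

```python
import numpy as np
from scipy import stats

def cl_first_presentation_test(rt_repeated, rt_novel):
    """Paired t test on first-presentation reaction times for Repeated
    vs. Novel contexts within one training day.

    rt_repeated, rt_novel: (n_subjects,) arrays holding each subject's
    mean reaction time over the first presentation of each context that
    day. Returns the t statistic and the two-tailed p value.
    """
    return stats.ttest_rel(rt_repeated, rt_novel)

# Illustrative synthetic data for one day: 10 subjects with a roughly
# 30 ms advantage for Repeated contexts.
rng = np.random.default_rng(1)
rt_novel = rng.normal(800, 20, size=10)
rt_repeated = rt_novel - rng.normal(30, 10, size=10)
t_stat, p_value = cl_first_presentation_test(rt_repeated, rt_novel)
```

With a reliable per-subject advantage like this, the paired test yields a negative t statistic (Repeated faster than Novel) and a small p value; repeating the test day by day gives the profile reported above.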
The testing sessions were designed to assess the specificity of learning with the Trained and Untrained target orientations and the Repeated and Novel contexts. As in Experiment 1, improved accuracy (T: F(1,9) = 12.302, p = 0.0066; ANOVA; Figure 6D) and faster responses (T: F(1,9) = 16.984, p = 0.0026; ANOVA; Figure 6E) for Trained compared to Untrained target orientations indicate PL. Likewise, we found CL: accuracy (CL: F(1,9) = 11.038, p = 0.0089; ANOVA; Figure 6D) and reaction times (CL: F(1,9) = 3.590, p = 0.090; ANOVA; Figure 6E) were better with Repeated than with Novel contexts. Again, we found no interaction between PL and CL in either reaction time (CL × T: F(1,9) = 0.050, p = 0.83; ANOVA) or accuracy (CL × T: F(1,9) = 1.26, p = 0.29; ANOVA). These results (see Table 2) confirm the findings of CL and PL from Experiment 1 and support a dissociation between CL and PL: PL transfers to Novel contexts, and CL transfers to Untrained targets. 
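For a 2 × 2 within-subject design such as Context (Repeated/Novel) × Target (Trained/Untrained), the ANOVA interaction can equivalently be checked by testing the per-subject interaction contrast against zero (the interaction F equals the square of this t statistic). A minimal sketch, using synthetic data with purely additive effects and hypothetical variable names:

```python
import numpy as np
from scipy import stats

def interaction_contrast(rep_trained, nov_trained, rep_untrained, nov_untrained):
    """Test the Context x Target interaction in a 2x2 within-subject design.

    Each argument is an (n_subjects,) array of one condition's scores.
    The per-subject contrast (the Repeated-vs-Novel benefit for Trained
    targets minus the same benefit for Untrained targets) is tested
    against zero; for this design the ANOVA interaction F equals t**2.
    """
    contrast = (rep_trained - nov_trained) - (rep_untrained - nov_untrained)
    return stats.ttest_1samp(contrast, 0.0)

# Synthetic example: additive Context (+6) and Target (+5) effects on
# accuracy, hence no true interaction.
rng = np.random.default_rng(2)
base = rng.normal(70, 5, size=10)                # per-subject baseline
nov_untrained = base + rng.normal(0, 1, 10)
nov_trained = base + 5 + rng.normal(0, 1, 10)    # Target effect only
rep_untrained = base + 6 + rng.normal(0, 1, 10)  # Context effect only
rep_trained = base + 11 + rng.normal(0, 1, 10)   # additive: 5 + 6
result = interaction_contrast(rep_trained, nov_trained, rep_untrained, nov_untrained)
```

When the two effects are additive, as constructed here, the contrast has mean zero and the test is expected to be nonsignificant, mirroring the absence of a CL × T interaction reported above.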
Table 2
 
Accuracy and reaction times during the test session as a function of the context (Novel or Repeated) and the target and distractor orientations (Trained or Untrained) of the stimuli presented in Experiment 2.

(A) Accuracy (%)
                              Target Trained   Target Untrained
Context Novel (mean/SERR)     66.67/0.15       61.11/0.70
Context Repeated (mean/SERR)  77.64/0.95       67.08/0.10

(B) Reaction time (ms)
                              Target Trained   Target Untrained
Context Novel (mean/SERR)     762.5/2.8        845.6/5.5
Context Repeated (mean/SERR)  732.6/5.8        822.6/3.2
Discussion
In the present study, we explored two distinct types of implicit visual learning, Perceptual Learning (PL) and Contextual Learning (CL). We find that it is possible and useful to examine CL and PL as they develop together within the confines of the same task. For CL, we found that subjects were faster and more accurate in responding to Repeated contexts, even though they did not have conscious access to this knowledge of the Repeated contexts. This CL transferred to the Untrained targets and distractors. For PL, we found that learning was specific to the Trained target and distractor orientations but transferred across contexts. Most interestingly, we found that performance measures for CL, PL for targets, and PL for distractors were independent, supporting a triple dissociation between these three aspects of learning. 
These data provide the first direct experimental dissociation between CL and PL and support the idea that they may be subserved by different neural systems. CL is typically characterized as “high level” on the grounds that it is fast (requiring only a handful of trials to manifest) and that it operates on the global configuration of a scene, which requires processing higher order visual features and integrating information across relatively large regions of visual space (Olson, Chun, & Allison, 2001). EEG studies of CL indicate that it affects relatively late event-related potentials, >200 ms after stimulus presentation, which are thought to be related to attentional mechanisms (Chun & Jiang, 1998; Schankin & Schubo, 2009). While few studies have addressed the locus of CL in the brain, there is some evidence that the parahippocampal place area (PPA) is involved in learning of context (Aminoff, Gronau, & Bar, 2007). Our results in both the training and test sessions indicated that Repeated contexts consistently elicited better performance than Novel ones, independently of whether the target or distractor orientations were Trained or Untrained. Other studies also find that CL can transfer across stimulus attributes (Jiang & Wagner, 2004). However, in previous studies of CL, the goal was to rule out contributions of stimulus-specific learning rather than to understand interactions between CL and PL. 
A concern regarding Experiment 1 was the lack of significant interactions between CL and training day, in both accuracy and RT, whereas in Experiment 2 those interactions were significant. This lack of interaction in Experiment 1 is at least partially due to the fast acquisition of CL, which can form in as few as five repetitions of a context (Chun & Jiang, 1998). This, combined with our use of staircases within sessions (which produced dramatic changes in accuracy and reaction time within each session), limited our ability to properly describe the time course of CL in Experiment 1. Another concern is that, with the target at the extreme of the distractor range, it may have been prone to pop-out. These difficulties were ameliorated in Experiment 2, where we increased the task demands (by broadening the distribution of distractor orientations to both sides of the target and reducing the stimulus presentation time) and improved the task flow (by reducing the intertrial interval). As a result of these modifications, in Experiment 2 we were able to demonstrate a clear interaction between CL and training day for both accuracy and RT. Furthermore, Experiment 2 clearly shows that CL benefits are present from the beginning of later training sessions and thus are not acquired entirely anew on subsequent training days. 
It is important to point out that our present findings do not demonstrate that context plays no role in PL. For example, the presence of spatial flankers (Adini, Sagi, & Tsodyks, 2002) or auditory stimuli (Kim, Seitz, & Shams, 2008; Seitz, Kim, & Shams, 2006) has been shown to facilitate perceptual learning (Shams & Seitz, 2008). In these cases, the context present during training provides additional information that may serve to reduce uncertainty, such as that arising from roving (Adini, Wilkonsky, Haspel, Tsodyks, & Sagi, 2004; Yu, Klein, & Levi, 2004), and thereby promote learning (Seitz & Dinse, 2007). Other studies invoke context to explain a lack of transfer between different stimulus conditions. For example, Crist, Kapadia, Westheimer, and Gilbert (1997) described the lack of transfer between Vernier and bisection hyperacuity tasks as a context-specific effect of learning. While these studies demonstrate that some forms of context play a role in learning, they do not address how learning of context relates to perceptual learning. While we cannot rule out that other aspects of CL interact with PL, to date no clear interaction has been established. 
Our finding that PL is specific to the orientation of the target replicates a number of similar studies showing that PL of visual search is specific to the features of the trained search task (Ahissar & Hochstein, 1993, 1997). Furthermore, this learning is similar to that found in the texture discrimination task of Karni and Sagi (1991), which is likewise composed of an oriented target among differently oriented background stimuli. Generally, it fits well within the literature showing that perceptual learning can be highly specific to a wide range of trained stimulus features, including retinotopic location (Crist et al., 1997; Watanabe et al., 2002), visual orientation (Fiorentini & Berardi, 1980; Schoups, Vogels, Qian, & Orban, 2001), and motion direction (Ball & Sekuler, 1981; Seitz & Watanabe, 2003), among others. Neuroscientific studies give direct evidence of sensory plasticity across the visual hierarchy through single-unit recording in monkeys (Li, Padoa-Schioppa, & Bizzi, 2001; Schoups et al., 2001; Yang & Maunsell, 2004; Zohary, Celebrini, Britten, & Newsome, 1994) and fMRI signal changes in humans (Furmanski, Schluppeck, & Engel, 2004; Schwartz, Maquet, & Frith, 2002; Vaina, Belliveau, des Roziers, & Zeffiro, 1998). While the exact locus of visual plasticity in a given study is often an issue of significant controversy, as a whole these studies indicate that plasticity likely occurs at all stages of processing, with a distribution that varies across tasks and training paradigms (Ahissar & Hochstein, 2004). Our present results provide additional evidence regarding the specificity of PL, and the demonstration of dissociable learning for targets, distractors, and context provides a framework for understanding the contributions that different stages of processing may make to learning. 
As expected, subjects performed best when presented with both the Trained target and the Trained distractors and worst when both target and distractors were Untrained. These results confirm previous research showing that prolonged visual experience facilitates processing of both target and distractor items during visual search (Mruczek & Sheinberg, 2005; Sireteanu & Rettenbach, 1995). In addition, performance was worse when only the Trained target was present than when only the Trained distractors were present, as shown by significantly faster reaction times (F(1,9) = 5.9, p = 0.038; ANOVA) and supported by marginally lower thresholds (T(9) = 2.2, p = 0.058; paired t test) and higher accuracy (F(1,9) = 3.0, p = 0.12; ANOVA) for the UT/TD compared to the TT/UD conditions shown in Figure 6. Thus, subjects gained greater benefit from improved sensitivity to the distractor orientation than to the target orientation; this result is consistent across measures of threshold, accuracy, and reaction time. To understand this difference, and particularly its direction, consider that the target occupies a very small region of the visual field and thus stimulates a relatively small number of visual neurons. Furthermore, perceptual learning of the target may have occurred, at least partially, separately at different target locations. In contrast, distractors are present across most of the visual field and stimulate a much greater number of visual neurons. Thus, the greater learning for distractors is sensible from the perspective of the size of the neural population stimulated by these stimuli, even though it runs counter to the idea that the greatest learning should occur for the object that is the focus of attention. 
The learning of both targets and distractors in visual search is consistent with previous research demonstrating that automatic search relies on consistent mappings of targets and distractors (Schneider & Shiffrin, 1977). For example, Mruczek and Sheinberg (2005) found that extended practice with target and distractor sets led to superior performance for experienced targets and distractors relative to searches including unfamiliar targets or distractors. In addition, Wang, Cavanagh, and Green (1994) showed that an unfamiliar target among familiar distractors produced faster searches than a familiar target among unfamiliar distractors. These studies used shapes and letters, which may be discrete objects processed at different visual stages than the fine orientation discriminations that we employed. Furthermore, these previous studies did not address the relationship between learning of the search elements (PL) and learning of the search configuration (CL), which was the target of the current investigation. 
While the subjects' task was to find a specific target in the visual search array, it appears that subjects also acquired enhanced representations of the distractor orientations and used these to better discriminate the distractors from the target. A possible explanation of how the processing of distractors in early trials impacts performance in later trials comes from the literature on negative priming (Tipper, 1985). Negative priming is an attentional phenomenon in which responses are slower or less accurate, compared to control trials, when a previously ignored stimulus (distractor) must later be attended. One account of negative priming holds that the internal representation of the ignored item is suppressed below its baseline activation level at the second presentation and that the slower responses reflect the time it takes for this inhibition to be released (Tipper & Cranston, 1985). Because the Repeated contexts in the present study yielded improved performance based on the presence of the distractors, an inhibitory mechanism does not seem able to explain our results. An alternative explanation of slowed responses in negative priming is the episodic retrieval account, which proposes that associations are formed between stimuli and their responses (Neill, Valdes, Terry, & Gorfein, 1992). According to this account, when a target stimulus appears, it triggers retrieval of prior instances from memory that involve the same stimulus and contain information about the response that was executed; better distractor encoding leads more readily to a “do not respond” tag during retrieval, producing greater negative priming than for an episode without interference. In the present study, it seems that the enhanced representations of the distractor orientations were retrieved, along with their associated attentional responses, as instances that assisted subjects in responding to the present target and/or in not responding to the distractors. 
Further research will be necessary to clarify these processes and to understand their relationship to perceptual learning. 
Conclusion
The goal of our study was to investigate how fragmented approaches to learning can be unified to achieve a more holistic understanding of learning. We approached this general problem with a study of how Contextual and Perceptual Learning can be investigated as they form within the same task. We suggest that the present study provides a framework through which still other types of learning can be identified and characterized. For example, low-level perceptual learning, contextual cuing, statistical learning, categorical learning, reinforcement learning, long-term adaptation, priming, etc., are all forms of implicit sensory learning that are often discussed without relation to each other and are studied by researchers of different specialties. We hope that future research can expand upon the work discussed here to understand the types of information that are acquired as we learn these perceptual tasks and to what extent learning types are subserved by different learning rules and brain structures. 
Supplementary Materials
Supplementary PDF 
Acknowledgments
This study was funded by NSF (BCS-1057625) to ARS. We would like to thank Shigeaki Nishina, Marvin Chun, and Yuhong Jiang for helpful discussions related to this research and Dalton Downey Jr., Justin Draeger, Nicole Praytor, Andrew Moran, William Choi, Jerel Villanueva, Bradley Tien, and Angelique Deleon, who helped run participants in these studies. 
Commercial relationships: none. 
Corresponding author: Aaron R. Seitz. 
Address: University of California, 900 University Ave, Riverside, CA 92521, USA. 
References
Adini Y. Sagi D. Tsodyks M. (2002). Context-enabled learning in the human visual system. Nature, 415, 790–793.
Adini Y. Wilkonsky A. Haspel R. Tsodyks M. Sagi D. (2004). Perceptual learning in contrast discrimination: The effect of contrast uncertainty. Journal of Vision, 4(12):2, 993–1005, http://www.journalofvision.org/content/4/12/2, doi:10.1167/4.12.2.
Ahissar M. Hochstein S. (1993). Attentional control of early perceptual learning. Proceedings of the National Academy of Sciences of the United States of America, 90, 5718–5722.
Ahissar M. Hochstein S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387, 401–406.
Ahissar M. Hochstein S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8, 457–464.
Aminoff E. Gronau N. Bar M. (2007). The parahippocampal cortex mediates spatial and nonspatial associations. Cerebral Cortex, 17, 1493–1503.
Ball K. Sekuler R. (1981). Adaptive processing of visual motion. Journal of Experimental Psychology: Human Perception and Performance, 7, 780–794.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Chun M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4, 170–178.
Chun M. M. Jiang Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71.
Crist R. E. Kapadia M. K. Westheimer G. Gilbert C. D. (1997). Perceptual learning of spatial localization: Specificity for orientation, position, and context. Journal of Neurophysiology, 78, 2889–2894.
Fahle M. (2004). Perceptual learning: A case for early selection. Journal of Vision, 4(10):4, 879–890, http://www.journalofvision.org/content/4/10/4, doi:10.1167/4.10.4.
Fiorentini A. Berardi N. (1980). Perceptual learning specific for orientation and spatial frequency. Nature, 287, 43–44.
Furmanski C. S. Schluppeck D. Engel S. A. (2004). Learning strengthens the response of primary visual cortex to simple patterns. Current Biology, 14, 573–578.
Jiang Y. Song J. H. Rigas A. (2005). High-capacity spatial contextual memory. Psychonomic Bulletin and Review, 12, 524–529.
Jiang Y. Wagner L. C. (2004). What is learned in spatial contextual cuing—Configuration or individual locations? Perception & Psychophysics, 66, 454–463.
Karni A. Sagi D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences of the United States of America, 88, 4966–4970.
Kim R. S. Seitz A. R. Shams L. (2008). Benefits of stimulus congruency for multisensory facilitation of visual learning. PLoS ONE, 3, e1532.
Li C. S. Padoa-Schioppa C. Bizzi E. (2001). Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field. Neuron, 30, 593–607.
Mruczek R. E. Sheinberg D. L. (2005). Distractor familiarity leads to more efficient visual search for complex stimuli. Perception & Psychophysics, 67, 1016–1031.
Neill W. T. Valdes L. A. Terry K. M. Gorfein D. S. (1992). Persistence of negative priming: II. Evidence for episodic trace retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 993–1000.
Olson I. R. Chun M. M. Allison T. (2001). Contextual guidance of attention: Human intracranial event-related potential evidence for feedback modulation in anatomically early temporally late stages of visual processing. Brain, 124, 1417–1425.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Schankin A. Schubo A. (2009). Cognitive processes facilitated by contextual cueing: Evidence from event-related brain potentials. Psychophysiology, 46, 668–679.
Schneider W. Shiffrin R. (1977). Controlled and automatic human information processing: 1. Detection, search, and attention. Psychological Review, 84, 1–66.
Schoups A. Vogels R. Qian N. Orban G. (2001). Practising orientation identification improves orientation coding in V1 neurons. Nature, 412, 549–553.
Schwartz S. Maquet P. Frith C. (2002). Neural correlates of perceptual learning: A functional MRI study of visual texture discrimination. Proceedings of the National Academy of Sciences of the United States of America, 99, 17137–17142.
Seitz A. R. Dinse H. R. (2007). A common framework for perceptual learning. Current Opinion in Neurobiology, 17, 148–153.
Seitz A. R. Kim R. Shams L. (2006). Sound facilitates visual learning. Current Biology, 16, 1422–1427.
Seitz A. R. Watanabe T. (2003). Psychophysics: Is subliminal learning really passive? Nature, 422, 36.
Seitz A. R. Watanabe T. (2009). The phenomenon of task-irrelevant perceptual learning. Vision Research, 49, 2604–2610.
Shams L. Seitz A. R. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12, 411–417.
Sireteanu R. Rettenbach R. (1995). Perceptual learning in visual search: Fast, enduring, but non-specific. Vision Research, 35, 2037–2043.
Tipper S. P. (1985). The negative priming effect: Inhibitory priming by ignored objects. Quarterly Journal of Experimental Psychology A, 37, 571–590.
Tipper S. P. Cranston M. (1985). Selective attention and priming: Inhibitory and facilitatory effects of ignored primes. Quarterly Journal of Experimental Psychology A, 37, 591–611.
Vaina L. M. Belliveau J. W. des Roziers E. B. Zeffiro T. A. (1998). Neural systems underlying learning and representation of global motion. Proceedings of the National Academy of Sciences of the United States of America, 95, 12657–12662.
Wang Q. Q. Cavanagh P. Green M. (1994). Familiarity and pop-out in visual search. Perception & Psychophysics, 56, 495–500.
Watanabe T. Nanez J. E. Koyama S. Mukai I. Liederman J. Sasaki Y. (2002). Greater plasticity in lower-level than higher-level visual motion processing in a passive perceptual learning task. Nature Neuroscience, 5, 1003–1009.
Xiao L. Q. Zhang J. Y. Wang R. Klein S. A. Levi D. M. Yu C. (2008). Complete transfer of perceptual learning across retinal locations enabled by double training. Current Biology, 18, 1922–1926.
Yang T. Maunsell J. H. (2004). The effect of perceptual learning on neuronal responses in monkey visual area V4. Journal of Neuroscience, 24, 1617–1626.
Yu C. Klein S. A. Levi D. M. (2004). Perceptual learning in contrast discrimination and the (minimal) role of context. Journal of Vision, 4(3):4, 169–182, http://www.journalofvision.org/content/4/3/4, doi:10.1167/4.3.4.
Zhang J.-Y. Kuai S.-G. Xiao L.-Q. Klein S. A. Levi D. M. Yu C. (2008). Stimulus coding rules for perceptual learning. PLoS Biology, 6, e197, doi:10.1371/journal.pbio.0060197.
Zohary E. Celebrini S. Britten K. H. Newsome W. T. (1994). Neuronal plasticity that underlies improvement in perceptual performance. Science, 263, 1289–1292.
Figure 1
 
Examples of stimulus displays: (A) target oriented at 45° and horizontal context; (B) target oriented at 135° and horizontal context; (C) vertical context. (D) Grid indicating the 36 possible positions (9 in each quadrant) for the target and the distractors. (E) Target oriented at 45° with both horizontal and vertical distractors; Experiment 2.
Figure 2
 
Experimental timeline: The experiment was divided into different phases. During session 1, subjects were familiarized with the task. In sessions 2–10, they were trained on the visual search task (see Figure 3) with particular target and distractor orientations and with some Repeated contexts. After the training phase, subjects were tested with different target and distractor orientations. On the same day, an explicit recognition test (2IFC) was performed to test for explicit learning of the Repeated contexts (see Figure 4).
Figure 3
 
General procedure for the visual search task.
Figure 4
 
Explicit recognition test. Subjects were successively presented with two displays and asked to report whether the first or second display was the most familiar.
Figure 5
 
Results of training sessions. On the first row, for Experiment 1: (A) threshold (distance in degrees between the orientation of the target and the possible orientations of the distractors), (B) accuracy, (C) reaction times in training sessions, and (D) reaction times using only the first 5 presentations of each context in each training session. On the second row, for Experiment 2: (E) threshold (distance in degrees between the orientation of the target and the possible orientations of the distractors), (F) accuracy, (G) reaction times in training sessions, and (H) reaction times using only the first presentation of each context in each training session.
Figure 6
 
Results of test sessions: For Experiment 1, (A) threshold, (B) accuracy, and (C) reaction time are shown as a function of the Trained target (TT), Untrained target (UT), Trained distractors (TDs), and Untrained distractors (UDs). For Experiment 2, (D) accuracy and (E) reaction time are shown as a function of the Trained target (TT) and Untrained target (UT) for Repeated and Novel contexts.
Figure 7
 
Results of the explicit recognition test: percentage of correct recognition.
Table 1
 
Accuracy and reaction times during the test session as a function of the context (Novel or Repeated) and the target and distractor orientations (Trained or Untrained) of the stimuli presented in Experiment 1.

(A) Accuracy (%)
                                        Novel (mean/SERR)   Repeated (mean/SERR)
Trained target/trained distractors      75.00/0.96          85.74/1.93
Untrained target/trained distractors    72.50/1.08          80.94/1.27
Trained target/untrained distractors    69.91/0.69          76.85/1.75
Untrained target/untrained distractors  68.89/2.00          74.26/2.12

(B) Reaction time (ms)
                                        Novel (mean/SERR)   Repeated (mean/SERR)
Trained target/trained distractors      617.1/19.1          574.1/24.7
Untrained target/trained distractors    695.3/12.6          612.3/21.5
Trained target/untrained distractors    745.3/21.1          665.0/21.6
Untrained target/untrained distractors  784.0/15.1          708.4/17.5