Abstract
Perceptual Learning (PL) and Contextual Learning (CL) are two types of implicit visual learning that have garnered much attention in the vision sciences. PL refers to the visual system learning to better represent (i.e., become more sensitive to) the elements (target and distractors) of a visual search display. It is low-level, slow to form, long-lasting, specific to trained features, and consistent with early visual plasticity. CL is the learning of regularities in the environment that allow better identification of the target location in a visual search task. It is higher-level, rapid to form, tied to the global stimulus configuration, and also long-lasting. While these two types of learning often co-occur in natural settings (for example, a bird watcher must be able to identify a bird but also needs to know where to look for it), they are typically studied separately and with distinct experimental paradigms. Here we present the results of a study in which we compared operational measures of both PL, for the target and the distractor orientations, and CL, for repeated versus novel configurations, as they co-develop within a single visual search task. For CL, we observed improved performance (RT and accuracy) for learned compared to novel configurations. For PL, we observed improved thresholds and RTs for both trained target and distractor orientations compared to the respective untrained ones, reflecting the specificity of PL for the characteristics of those elements. Notably, CL, PL for the target, and PL for the distractors were largely independent of each other, and we observed no interactions among these three components of learning. Taken together, these results suggest a triple dissociation between CL, PL for the target, and PL for the distractors, indicating that these are distinct visual learning phenomena with different behavioral characteristics.