Article | October 2014
Uncertainty in fast task-irrelevant perceptual learning boosts learning of images in women but not men
Journal of Vision October 2014, Vol.14, 26. doi:https://doi.org/10.1167/14.12.26
Citation: Virginie Leclercq, Russell Cohen Hoffing, Aaron R. Seitz; Uncertainty in fast task-irrelevant perceptual learning boosts learning of images in women but not men. Journal of Vision 2014;14(12):26. https://doi.org/10.1167/14.12.26.

Abstract

A key tenet of models of reinforcement learning is that learning is most desirable at times of maximum uncertainty. Here we examine the role of uncertainty in the paradigm of fast task-irrelevant perceptual learning (fast-TIPL), in which stimuli that are consistently presented at relevant points in time (e.g., with task targets or rewards) are better encoded than stimuli presented at other times. We manipulated two forms of uncertainty, expected uncertainty and unexpected uncertainty (Yu & Dayan, 2005), and compared fast-TIPL under uncertainty with fast-TIPL under no uncertainty. Results indicate a larger fast-TIPL effect under uncertainty than under no uncertainty, with no difference between expected and unexpected uncertainty. Interestingly, however, this effect of uncertainty on fast-TIPL was found in women but not in men. In men, equivalent fast-TIPL was observed under no uncertainty and under uncertainty, whereas in women, confirming previous results (Leclercq & Seitz, 2012b), no fast-TIPL was observed in the no-uncertainty condition, but fast-TIPL was observed in the uncertainty conditions. We discuss how these results imply differences in attentional or neuromodulatory processes between men and women.

Introduction
How do we “choose” what to learn in an environment where it is impossible for our system to memorize everything? Attention, the ability to select relevant information in the environment, can explain part of this; however, research shows that learning can occur for unattended stimuli, even those that participants are not aware of (Pessiglione et al., 2008; Seitz & Watanabe, 2003; Watanabe, Nanez, & Sasaki, 2001). For example, studies of task-irrelevant perceptual learning (TIPL) show that information presented at times of important events (such as the onset of task targets or rewards) is better encoded than information presented at other times (for review see Seitz & Watanabe, 2009). A model of TIPL (Seitz & Watanabe, 2005) suggested that learning of unattended features in the environment is gated by the diffuse release of neuromodulatory signals in the brain in a manner that resembles aspects of reinforcement learning theory (Seitz, Lefebvre, Watanabe, & Jolicoeur, 2005; Seitz & Watanabe, 2009). Namely, learning is gated by behaviorally relevant events (important task events, rewards, punishment, novelty, etc.), at which times reinforcement signals are released so that aspects of the environment are better learned.
One important aspect of reinforcement learning is the effect of uncertainty. The role of uncertainty is a topic of much interest in learning theory, which suggests that learning is most necessary at times of high uncertainty (Yu & Dayan, 2005). Perhaps the most popular instantiation of this idea is the role of “prediction errors” in learning (Rescorla & Wagner, 1972), where learning is only desirable at times when the environment behaves differently from one's prediction of it. Yu and Dayan (2005) suggested that uncertainty in various forms plagues our interactions with the environment and that uncertainty signals can enable optimal learning. They described two operationally distinct forms of uncertainty: expected uncertainty and unexpected uncertainty. Expected uncertainty arises from known unreliability of objects in the environment; for example, the uncertainty involved in card games where the distribution of cards is fixed and known to the players, as is true when playing with a standard 52-card deck. Unexpected uncertainty arises when the unreliability of objects violates one's expectations, for example, if one suddenly found oneself playing with a deck of cards containing 26 clubs, 18 spades, five diamonds, and three hearts. More concretely, within the context of an attention-cueing task, expected uncertainty can be related to a known validity rate of a cue, and unexpected uncertainty can be related to a change in the cue validity.
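To make this distinction concrete, the sketch below (our own illustration, not part of Yu and Dayan's formal model; all variable names are hypothetical) tracks a running estimate of cue validity. Expected uncertainty corresponds to the known unreliability of the cue under a stable estimate, whereas unexpected uncertainty is signaled when recent outcomes diverge from that estimate, as after the change point halfway through the simulation.

```python
# Toy illustration (not the authors' model) of expected vs. unexpected uncertainty
# about a cue whose validity changes partway through a session.
import random

random.seed(1)

true_validity = 0.75      # generative validity; the observer believes it is stable
estimate, n = 0.75, 0     # running estimate of cue validity
outcomes = []

for trial in range(200):
    if trial == 100:
        true_validity = 0.25                 # unexpected change in the environment
    valid = random.random() < true_validity
    n += 1
    estimate += (valid - estimate) / n       # incremental mean of observed validity
    expected_uncertainty = 1.0 - estimate    # known unreliability of the cue
    outcomes.append(valid)
    recent = outcomes[-20:]                  # crude proxy for unexpected uncertainty:
    unexpected = abs(sum(recent) / len(recent) - estimate)  # recent vs. long-run validity
    if trial in (50, 120):
        print(f"trial {trial}: expected={expected_uncertainty:.2f}, unexpected={unexpected:.2f}")
```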
Yu and Dayan (2005) postulated that these two forms of uncertainty, expected and unexpected, are supported by the cholinergic (ACh) and noradrenergic (NE) systems, respectively. Interestingly, Seitz and Watanabe (2005) hypothesized that ACh and NE are candidate neuromodulators that may underlie TIPL and speculated about the role of uncertainty in TIPL. Seitz and Watanabe (2008) found greater TIPL in a condition with a large set of potential responses than in a condition with a small set, and suggested that the greater uncertainty in the condition with many potential responses led to greater TIPL. This suggests that target uncertainty is an important factor for TIPL. However, differences in task difficulty between the conditions confound this explanation, and additional research is required to understand the role of uncertainty in TIPL. In the present paper, we conduct experiments that control for overall performance and level of stimulus processing while manipulating uncertainty.
As a method to dissociate the different types of uncertainty (no uncertainty, expected uncertainty, unexpected uncertainty), we employed a cueing task in which participants were cued to the timing of a subsequent target (Posner & Petersen, 1990). In the case that the cue is always valid (i.e., the target always appears after the cue), there is no uncertainty. On the other hand, if the cue is valid on a fixed proportion of trials (e.g., indicates the target 75% of the time), participants can learn and adjust to the known uncertainty of the target. This case corresponds to expected uncertainty. Unexpected uncertainty occurs in conditions where large changes in the environment violate top-down expectations; here we manipulated this by requiring participants to identify new cue-target mappings during the experiment.
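For illustration, a minimal sketch of how such cue-validity schedules could be generated is shown below (function and variable names are our own; this is not the published experiment code):

```python
# Minimal sketch of cue-validity trial schedules (names are ours, not the authors').
import random

def make_schedule(n_trials, validity, seed=0):
    """Return a list of booleans: True = cue is followed by a target (valid trial)."""
    rng = random.Random(seed)
    n_valid = round(n_trials * validity)
    trials = [True] * n_valid + [False] * (n_trials - n_valid)
    rng.shuffle(trials)
    return trials

no_uncertainty = make_schedule(101, 0.95)   # NU: cue almost always valid
expected       = make_schedule(128, 0.75)   # EU: fixed, learnable 75% validity
print(sum(no_uncertainty), sum(expected))   # 96 valid trials in each condition
```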
While the phenomenon of TIPL has been studied in most detail in the case of low-level perceptual learning (Pilly, Grossberg, & Seitz, 2010; Seitz, Kim, & Watanabe, 2009; Watanabe et al., 2002), recent research has identified a high-level, fast form of TIPL (fast-TIPL) (Lin, Pype, Murray, & Boynton, 2010; Swallow & Jiang, 2010, 2011). In this fast-TIPL paradigm, participants conduct target-detection tasks (looking for a target letter, color, or word among a series of distractors) while also memorizing other stimuli (e.g., images) that are consistently paired with the stimuli of the target-detection task. Similar to TIPL for low-level perceptual learning, visual memory is enhanced for stimuli that are paired with the targets of the target-detection task (Dewald, Sinnett, & Doumas, 2011; Leclercq, Le Dantec, & Seitz, 2013; Lin et al., 2010; Swallow & Jiang, 2011). While the enhanced memorization found in fast-TIPL may involve some differences in underlying processes from the low-level perceptual learning that has been the primary focus of studies of slow-TIPL, the strong parallels between the experimental paradigms and results suggest that fast-TIPL and slow-TIPL are related phenomena (see Leclercq & Seitz, 2012a for a larger discussion of this point). We also note that the term task-irrelevant in the context of the dual task used in fast-TIPL refers to the fact that the images have no predictive relationship to the occurrence of the targets of the target-detection task, nor are the targets of the target-detection task informative of which image will be tested in the image-recognition task. As such, the stimuli relevant to one task are irrelevant to the other task. We thus employed fast-TIPL in the present experiment as a more efficient method to understand the role of uncertainty in TIPL.
Three experiments were conducted. In the first experiment, a within-subjects design was used to compare performance between conditions of no uncertainty (NU) and expected uncertainty (EU). In the second and third experiments, participants were run in unexpected uncertainty (UU) conditions. We hypothesized that greater learning would be found in the two uncertainty conditions than in the no-uncertainty condition. 
Experiment 1
In the first experiment, we examined how expected uncertainty influences fast-TIPL. In this paradigm (Leclercq & Seitz, 2011), participants performed a rapid serial visual presentation (RSVP) target-detection task requiring an immediate response to a target—a white square—that was sometimes preceded by a cue—a green square—to which participants were instructed not to respond. In the no-uncertainty (NU) condition, the cue was valid in 95% of the trials. In the expected-uncertainty (EU) condition, the cue was valid in 75% of the trials. An image was presented with each stimulus of the target-detection task (target, distractor, and cue). Each participant conducted both the NU and EU conditions, order counterbalanced across participants. We hypothesized a greater enhancement of memorization for target-paired images in the EU condition than in the NU condition. 
Methods
Participants
Thirty-five participants gave written informed consent to participate in this experiment, which was approved by the Human Subjects Review Board of the University of California–Riverside. Two participants were excluded because of their poor performance on the target-detection task (60% for one participant and 43% for the other). As our objective was to study the role of the uncertainty related to the appearance of the target, we included only participants who successfully withheld responses to the cue. Consistent with our previous research (Leclercq & Seitz, 2012a), participants with more than 35% of responses to the cue (more than 35% of RTs < 150 ms) were excluded. This criterion excluded one participant in Experiment 1. One more participant was excluded because of his poor overall performance on image recognition (40%). Thus, 31 participants were included in this experiment (20 y.o. ± 1 month; 20 females, 11 males). Experimental conditions were counterbalanced, with five men in order 1 (NU then EU) and six in order 2 (EU then NU), and 11 women in order 1 and nine women in order 2 (of note, restricting the analysis to a matched number of men and women in each condition does not change the pattern or significance of the results). All participants reported normal or corrected-to-normal visual acuity and received course credit and financial compensation for the one-hour session.
Prior to testing, participants were familiarized with the 192 images that were to be used in the experiment by viewing each image for two seconds. After this, participants were run in the two conditions: the NU condition (101 trials), preceded by 12 trials of practice, and the EU condition (128 trials), preceded by 12 trials of practice. The order of the conditions was counterbalanced. Breaks were provided every 24 trials. 
Apparatus and stimuli
An Apple Mac Mini and a PowerMac G5, both running MATLAB (Mathworks, Natick, MA) and Psychtoolbox Version 3 (Brainard, 1997; Pelli, 1997), were used for stimulus generation and experiment control. Stimuli were presented on Samsung SyncMaster S23B300 20-in. CRT monitors with a resolution of 1920 × 1080 and a refresh rate of 60 Hz. Participants sat with their eyes approximately 60 cm from the screen. The backgrounds of all displays were mid-gray (luminance of 19 cd/m2). Display items consisted of 192 photographs (700 × 700 pixels, 18.3° of visual angle) depicting natural or urban scenes from eight distinct categories (e.g., mountains, cityscapes). Images were obtained from the LabelMe Natural and Urban Scenes database (Oliva & Torralba, 2001) at a resolution of 250 × 250 pixels and then up-sampled to 700 × 700 pixels. The average luminance of all images was 17 ± 8 cd/m2 (SD).
Procedure
Each trial began with the presentation of a fixation cross for 450 ms, followed by a rapid sequence of 11 full-field images. Each image was presented for 133 ms, followed by a blank (luminance of 19 cd/m2) interstimulus interval of 367 ms, for a stimulus onset asynchrony of 500 ms (Figure 1).
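At the 60 Hz refresh rate reported above, these durations correspond to whole numbers of video frames; the short check below (our own arithmetic, not the authors' presentation code) makes the frame counts explicit.

```python
# Frame-count check of the reported RSVP timing at a 60 Hz refresh rate
# (our own arithmetic, not the authors' presentation code).
refresh_hz = 60
frame_ms = 1000 / refresh_hz             # ~16.7 ms per video frame

image_frames = round(133 / frame_ms)     # 8 frames  -> ~133.3 ms image
blank_frames = round(367 / frame_ms)     # 22 frames -> ~366.7 ms blank
soa_ms = (image_frames + blank_frames) * frame_ms

print(image_frames, blank_frames, round(soa_ms))   # 8 22 500
```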
Figure 1
 
Design of Experiment 1. Participants had to rapidly press the “up arrow” key when the white square appeared. At the end of the trial, they indicated, by pressing the “left arrow” or the “right arrow” key, which image they had seen in the trial.
Target detection task:
In this task, participants were instructed to fixate the center of the screen and to rapidly press the “up arrow” key when they detected a target white square. They were also instructed to memorize the 11 images presented in each trial and were tested on image recognition after each trial. 
More precisely, in each trial, 11 images were presented. Each image was centered on the screen and was presented with a square (0.75° of visual angle) in its middle. This square was presented in a gray aperture (1° of visual angle, luminance of 19 cd/m2) and could be a distractor (black square; luminance of 0 cd/m2), a target (white square; luminance of 76 cd/m2), or a cue (green square; luminance of 39 cd/m2). When a cue and a target were presented in the same trial, the cue immediately preceded the target. Each square had the same onset and offset time as the image with which it was paired. In the NU condition, the cue was valid in 95% of the trials: of the 101 trials, a cue was presented and followed by a target in 96 trials, and a cue was presented without a target in the remaining five trials. These five catch trials were added to control for participants responding on the basis of the cue rather than the target. In the EU condition, the cue was valid in 75% of the trials: of the 128 trials, a cue was presented and followed by a target in 96 trials, and a cue was presented without a target in the remaining 32 trials. In each condition, the target could only appear with images presented in serial positions 4 to 9. Consequently, the cue could only appear with images presented in serial positions 3 to 8. This avoids the presentation of the target at the beginning of the RSVP stream (Lin et al., 2010).
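The sketch below assembles one such trial (a hypothetical illustration in our own notation, not the published code; serial positions are 1-indexed as in the text):

```python
# Hypothetical sketch of how a single RSVP trial could be assembled
# (not the authors' code; serial positions are 1-indexed as in the text).
import random

def make_trial(cue_valid, rng):
    squares = ["distractor"] * 11                # 11 images, one central square each
    if cue_valid:
        target_pos = rng.randint(4, 9)           # target in serial positions 4-9
        squares[target_pos - 1] = "target"
        squares[target_pos - 2] = "cue"          # cue immediately precedes the target
    else:                                        # catch trial: cue presented, no target
        cue_pos = rng.randint(3, 8)              # cue in serial positions 3-8
        squares[cue_pos - 1] = "cue"
    return squares

rng = random.Random(0)
print(make_trial(True, rng))
print(make_trial(False, rng))
```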
Image recognition task:
Following each trial, two different images were presented to the participants; one to the left and one to the right of the fixation point. Participants had to report which image they remembered from the RSVP sequence by pressing the left or right arrow key. The test image (image presented in RSVP sequence) was always presented in serial positions 3 to 9 of the present RSVP sequence so as to match tested positions between targets, distractors, and cues. When comparing target and distractor paired images, only the serial positions 4 to 9 were analyzed. Of note, the stimuli of the target-detection task did not predict which image would be tested in the image recognition task and thus, any benefit in processing of the image was task-irrelevant in regard to the target-detection task. 
Results
Results from the target-detection task indicate that participants complied with the instruction to maintain their attentional focus on the middle of the screen. Overall, mean accuracy on the target-detection task was 91.8% ± 1.1%. To test whether uncertainty had an effect on target-detection performance, we compared accuracy between the NU condition (90.5% ± 1.6%) and the EU condition (93.2% ± 1.2%) and found no significant difference, t(30) = 1.67, p = 0.11. Of note, previous studies have confirmed that the target-detection task is not significantly influenced by the addition of the memorization task (Leclercq & Seitz, 2012b).
To study the effects of fast-TIPL, we examined performance in the image-recognition task (Figure 2). We conducted an ANOVA on image-recognition accuracy (hit rate) with type of uncertainty (NU; EU) and type of stimulus (target; distractor) as within-subjects factors. This analysis indicated a significant effect of stimulus, F(1, 30) = 25.34, p < 0.001, with better recognition of target-paired images (70.5% ± 1.2%) than of distractor-paired images (62.9% ± 0.6%), indicating fast-TIPL. Concerning the effect of uncertainty, there was a numerical, but not statistically significant, advantage for recognition in the EU condition (68.5% ± 1.1%) over the NU condition (65.0% ± 1.2%), F(1, 30) = 3.33, p = 0.078. The interaction between stimulus and uncertainty was not significant, F(1, 30) = 2.37, p = 0.13, indicating that fast-TIPL was found both in the EU condition (target-paired images, 73.4% ± 1.7%, vs. distractor-paired images, 63.5% ± 1.2%; F(1, 30) = 22.44, p < 0.001) and in the NU condition (target-paired images, 67.6% ± 1.9%, vs. distractor-paired images, 62.3% ± 1.2%; F(1, 30) = 6.00, p < 0.05). Consistent with our hypothesis, planned comparisons confirmed better performance on target-paired images in the EU condition (73.4%) than in the NU condition (67.6%), F(1, 30) = 4.59, p < 0.05, and no significant difference on distractor-paired images between the EU (63.9%) and NU conditions (61.9%), F(1, 30) = 0.33, p = 0.57. This difference in target-paired performance between the EU and NU conditions may indicate a greater fast-TIPL effect under uncertainty. However, the lack of a significant stimulus × uncertainty interaction requires some explanation.
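For readers who wish to run this kind of analysis, a minimal sketch of a 2 × 2 within-subjects ANOVA on simulated hit rates is shown below (statsmodels' AnovaRM is our choice for illustration; the paper does not specify the software used, and the data here are simulated, not the study's).

```python
# Illustrative 2 (uncertainty) x 2 (stimulus) repeated-measures ANOVA on
# simulated hit rates; not the study's data or analysis code.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(31):
    for unc in ["NU", "EU"]:
        for stim in ["target", "distractor"]:
            base = 0.70 if stim == "target" else 0.63        # assumed cell means
            base += 0.03 if (unc == "EU" and stim == "target") else 0.0
            rows.append({"subject": subj, "uncertainty": unc,
                         "stimulus": stim, "accuracy": base + rng.normal(0, 0.05)})

df = pd.DataFrame(rows)
print(AnovaRM(df, depvar="accuracy", subject="subject",
              within=["uncertainty", "stimulus"]).fit())
```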
Figure 2
 
Results from the image-recognition task of Experiment 1. Plots represent accuracy (% correct). Error bars represent the within-subjects standard error of the mean.
A possible explanation builds upon our previous research, in which we found a gender difference in fast-TIPL, with greater fast-TIPL in men than in women (Leclercq & Seitz, 2012b). To examine this gender effect in the present experiment, we analyzed performance separately for men and for women (Figure 3). For men, we found fast-TIPL in both the EU condition, 10.2 ± 2.4, t(10) = 2.35, p < 0.05, and the NU condition, 11.0 ± 2.4, t(10) = 2.99, p < 0.05. However, for women, we found fast-TIPL in the EU condition, 10.0 ± 2.2, t(19) = 4.19, p < 0.01, but not in the NU condition, 2.5 ± 2.2, t(19) = 0.80, p = 0.43. While the difference in fast-TIPL between men and women in the NU condition replicates our prior finding (Leclercq & Seitz, 2012b), the emergence of fast-TIPL in women under expected uncertainty represents a new finding.
Figure 3
 
Gender breakdown from the image-recognition task of Experiment 1. Plots represent accuracy (% correct). Error bars represent the within-subjects standard error of the mean.
In summary, the results of Experiment 1 indicate that expected uncertainty has an effect on fast-TIPL. However, this effect of uncertainty on fast-TIPL occurred primarily in women and was not measurable in men.
Experiment 2
In our second experiment, we attempted to examine how unexpected uncertainty may influence fast-TIPL. In this experiment, the cue was valid in 95% of the trials but its color changed unexpectedly every 30–60 trials with the aim of driving unexpected uncertainty related to the cue color. 
Method
Participants
Forty-one participants gave written informed consent to participate in this experiment, which was approved by the Human Subjects Review Board of the University of California, Riverside. Two participants were excluded because of their poor performance on the target-detection task (60% for one participant and 62% for the other). As our objective was to study the role of the uncertainty related to the cue, we included only participants who successfully withheld responses to the cue. Thus, participants with more than 35% of responses to the cue (more than 35% of RTs < 150 ms) were excluded. This criterion excluded eight participants in Experiment 2. Four more participants were excluded due to poor performance on the image-recognition task (<50%). Thus, 27 participants were included in this experiment (19 y.o. ± 7 months; 14 females, 13 males). All participants reported normal or corrected-to-normal visual acuity and received course credit and financial compensation for the one-hour session.
Prior to testing, participants were familiarized with the 192 images that were to be used in the experiment by viewing each image for 2 s. After this, participants performed the main experiment (255 trials), which was preceded by 12 trials of practice. Breaks were provided every 24 trials. 
Apparatus and stimuli
Same as described in Experiment 1.
Procedure
Same as described in Experiment 1, with the following exceptions. In this experiment, the cue was valid in approximately 95% of the trials: of the 255 trials, a cue was presented and followed by a target in 240 trials, and in the 15 remaining catch trials a cue was presented without a target. In contrast to the NU condition of Experiment 1, the color of the cue changed every 30–60 trials (block lengths were chosen randomly for each participant by the program during the experiment). The cue could be blue, turquoise, green, red, yellow, or pink. Participants were not told when the color of the cue would change.
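A minimal sketch of such a color-block schedule is given below (our own illustration; the variable names and the constraint that consecutive blocks use different colors are assumptions, not taken from the published code):

```python
# Hypothetical sketch of the Experiment 2 cue-color schedule: the cue color
# switches every 30-60 trials (names and details are ours, not the authors').
import random

def color_schedule(n_trials, colors, rng):
    schedule, done = [], 0
    while done < n_trials:
        block_len = rng.randint(30, 60)
        # assume the new color differs from the previous block's color
        choices = [c for c in colors if not schedule or c != schedule[-1]]
        color = rng.choice(choices)
        schedule.extend([color] * min(block_len, n_trials - done))
        done += block_len
    return schedule

rng = random.Random(2)
colors = ["blue", "turquoise", "green", "red", "yellow", "pink"]
sched = color_schedule(255, colors, rng)
print(len(sched), sorted(set(sched)))
```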
The target-detection task and image-recognition task were the same as described in Experiment 1.
Results
Results from the target-detection task indicate that participants complied with the instruction to maintain their attentional focus on the middle of the screen. Overall, mean accuracy for trials with RTs > 150 ms on the white square detection task was 92.1% ± 1.1% (between-subjects standard error). This level of accuracy is quite similar to that found in Experiment 1, suggesting that unexpected uncertainty did not have a significant impact on target detection. 
To study the effects of fast-TIPL, we examined performance in the image recognition task. An overall effect of fast-TIPL was found with better performance on image-recognition accuracy for target-paired images (68.8% ± 1.4%) compared to distractor-paired images (62.7% ± 0.6%), t(26) = 3.40, p < 0.01. 
To examine the effect of unexpected uncertainty, we compared the first 15 trials of each block (a block comprising all trials with a given cue color) with the last 15 trials of each block. We hypothesized that the effect of uncertainty would be greater in the first 15 trials (when the color of the cue had just changed) than in the last 15 trials, because uncertainty is maximal immediately after the color change. To follow up on the gender effect observed in Experiment 1, an ANOVA was conducted with stimulus (target; distractor) and trial position (first; last) as within-subjects factors and gender (men; women) as a between-subjects factor. Results (Figure 4) confirmed a significant effect of stimulus, F(1, 26) = 14.48, p < 0.001, with better performance for target-paired (71.6% ± 1.9%) than distractor-paired images (62.6% ± 1.8%). However, no significant effect of position was found, F(1, 26) = 1.29, p = 0.27, indicating equivalent image-recognition performance for the first 15 trials (65.8% ± 1.8%) and the last 15 trials (68.3% ± 1.7%). Moreover, no interaction between stimulus and position was observed, F(1, 26) = 0.02, p = 0.89. Gender had no significant effect and did not interact with any other factor. These results suggest that our manipulation of unexpected uncertainty may have failed and that, instead, our manipulation gave rise to expected uncertainty similar to that found in Experiment 1.
Figure 4
 
Gender breakdown from the image-recognition task of Experiment 2 for the first 15 and last 15 trials of each color-cue block. Plots represent accuracy (% correct). Error bars represent the within-subjects standard error of the mean.
We also examined performance on the target-detection task for the first 15 trials after a cue change compared with those immediately before the next cue change. While we found a trend toward a difference in accuracy (90.9% ± 0.7% vs. 91.6% ± 0.8%), t(26) = 1.87, p = 0.073, there was no difference in reaction times (335.5 ms ± 4.9 ms vs. 325.2 ms ± 5.1 ms), t(26) = 0.73, p = 0.474.
Combined Analysis of Experiment 1 and Experiment 2
Comparing the fast-TIPL effect between Experiments 1 and 2 suggests that the two uncertainty manipulations (cue validity and cue type) may have influenced a common process of expected uncertainty. Consequently, to better understand the effect of expected uncertainty on fast-TIPL and the difference between men and women in this effect, the results from Experiments 1 and 2 were combined into an uncertainty condition (U). As there was no difference between the first 15 and the last 15 trials in the UU condition, all trials of the UU condition were included in the new analysis. A mixed within- and between-subjects ANOVA was conducted with gender (men; women) as a between-subjects factor, stimulus (target; distractor) as a within-subjects factor, and uncertainty (NU; U) as a factor that was within subjects for the NU versus EU comparison and between subjects for the NU versus UU comparison. As expected, we found a significant main effect of stimulus, F(1, 85) = 33.28, p < 0.001, with better performance for target-paired images (69.7% ± 1.0%) than for distractor-paired images (62.5% ± 0.6%). No other main effect was significant. However, as expected, the gender × uncertainty × stimulus interaction was significant, F(1, 85) = 4.03, p < 0.05 (Figure 5). Planned comparisons indicated a difference in fast-TIPL between the NU and U conditions in women, F(1, 85) = 4.43, p < 0.05, with larger fast-TIPL in the U conditions (target-distractor accuracy = 8.4 ± 1.6, within-subjects standard error) than in the NU condition (1.9 ± 2.3), but not in men, F(1, 85) = 0.83, p = 0.37, who had equivalent fast-TIPL in the U conditions (7.5 ± 2.5) and the NU condition (11.0 ± 3.7). These results show that men exhibit fast-TIPL with or without uncertainty, but that women, at least under the conditions of these experiments, exhibit fast-TIPL only under conditions of uncertainty.
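As a simplified sketch of this gender-split comparison (our own illustration: it uses simulated per-subject difference scores and Welch t-tests rather than the planned ANOVA contrasts reported above, and the group sizes of 20 are hypothetical):

```python
# Simplified, simulated sketch of the gender-split comparison of fast-TIPL
# (target minus distractor accuracy, in percentage points) between the
# uncertainty (U) and no-uncertainty (NU) conditions. The paper used planned
# ANOVA contrasts; here we use Welch t-tests on hypothetical groups of 20.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

def tipl_scores(mean, n=20, sd=8.0):
    """Simulated per-subject fast-TIPL difference scores."""
    return rng.normal(mean, sd, n)

women_u, women_nu = tipl_scores(8.4), tipl_scores(1.9)    # condition means from the text
men_u, men_nu     = tipl_scores(7.5), tipl_scores(11.0)

print("women U vs. NU:", ttest_ind(women_u, women_nu, equal_var=False))
print("men   U vs. NU:", ttest_ind(men_u, men_nu, equal_var=False))
```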
Figure 5
 
Gender breakdown from the image-recognition task for the combined uncertainty results and the no-uncertainty results. Plots represent accuracy (% correct). Error bars represent the within-subjects standard error of the mean.
Experiment 3
The results of Experiments 1 and 2 indicate a gender difference in the impact of expected uncertainty on fast-TIPL; however, Experiment 2 failed to show a specific effect of unexpected uncertainty. This suggests that unexpected uncertainty itself may have played little role in the observed results and that, instead, participants learned to expect the changing cue; thus, Experiment 2 was an accidental replication of the expected-uncertainty manipulation. In retrospect, this is not surprising given that in Experiment 2 the cue-color change itself served as a cue and there was no need to ascertain which cue color was informative in any given trial. To overcome this limitation, we designed Experiment 3, in which we employed a color cue presented among other colored distractors to create a condition where participants would be required to learn a new cue-target mapping after each cue switch. Our hypothesis was that this would produce the desired conditions of unexpected uncertainty.
Method
Participants
Fifty-two participants gave written informed consent to participate in this experiment, which was approved by the Human Subjects Review Board of the University of California, Riverside. The same inclusion and exclusion criteria were used as in Experiment 2: five participants were excluded because of more than 35% of responses to the cue, three participants were excluded because of more than 35% of responses during catch trials, and 16 participants were excluded due to poor performance on the target-detection task (<60% accuracy). Thus, 28 participants were included in this experiment (19.6 ± 1.3 y.o.; 14 females, 14 males). While this is a large number of subjects to exclude, all of the exclusion criteria were based upon those established in Experiments 1 and 2. Experiment 3 was more difficult for participants than the previous experiments due to the increased number of colors presented in each trial. Thus, prior to testing, participants repeated eight trials of practice until they performed with at least 60% accuracy on the image-recognition task. Breaks were provided every 24 trials.
Apparatus and stimuli
These were the same as in Experiments 1 and 2, with the following exceptions: Stimuli were presented on 20-in. CRT monitors with a resolution of 1920 × 1080 and a refresh rate of 60 Hz. The backgrounds of all displays were mid-gray (luminance of 60 cd/m2). In this experiment, each image was presented only once, requiring a greater number of total scenes, so we obtained scenes from the Massive Memory database (Konkle, Brady, Alvarez, & Oliva, 2010). These had a resolution of 256 × 256 pixels and were up-sampled and presented at 768 × 768 pixels (28.3° of visual angle). Each scene was matched to the average luminance distribution of the 2,112 scenes using the histMatch function of the SHINE toolbox (Willenbockel et al., 2010; http://www.mapageweb.umontreal.ca/gosselif/SHINE/) to control for luminance fluctuations across the image set.
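For readers without MATLAB, a rough Python analogue of this histogram-matching step is sketched below using scikit-image's match_histograms (this is only an illustrative substitute for the SHINE toolbox's histMatch, not the authors' pipeline, and it matches to a single reference image rather than to the average histogram of the full set).

```python
# Rough Python analogue of the SHINE histMatch step (illustrative only; the
# paper used the MATLAB SHINE toolbox and matched to the set-average histogram).
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
scene = rng.random((256, 256))                 # stand-in grayscale scene
reference = rng.beta(2.0, 5.0, (256, 256))     # stand-in reference luminance profile

matched = match_histograms(scene, reference)   # equate the scene's intensity histogram
print(scene.mean(), matched.mean(), reference.mean())
```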
Procedure
Procedure was the same as in Experiment 2 with the following exceptions. 
Target-detection task:
In this task, participants pressed the “1” key on the number pad when they detected a white target among distractors. The cue color could be red, blue, green, or black and the cue color changed every 35–69 trials. The remaining colors were randomly selected to be presented with each distractor, though no distractor color was repeated twice in a row. To help participants know that a cue change had occurred, we changed the shape of the central stimuli (cue, target, and distractor) at the time of each color-cue change and kept the shape (triangle, inverted triangle, square, diamond, horizontal rectangle, or vertical rectangle) the same for the duration of each block. In this way, we ensured that the period of unexpected uncertainty of the cue color was time-locked to the first trial of each cue change. Participants were told to use the cue to prepare a response but only to respond if a target appeared. Participants were unaware of when the cue would change and were instructed that when the shape changed, the cue color would also change. 
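The block structure can be sketched as follows (a hypothetical illustration in our own notation; the constraints that consecutive blocks use different cue colors and shapes, and the way distractor colors are sampled, are assumptions consistent with the description above):

```python
# Hypothetical sketch of the Experiment 3 block structure: each block has a new
# cue color and shape, and distractor colors avoid immediate repeats
# (names and structure are ours, not the authors' code).
import random

COLORS = ["red", "blue", "green", "black"]
SHAPES = ["triangle", "inverted triangle", "square",
          "diamond", "horizontal rectangle", "vertical rectangle"]

def make_blocks(n_trials, rng):
    blocks, done = [], 0
    while done < n_trials:
        length = min(rng.randint(35, 69), n_trials - done)
        cue_color = rng.choice([c for c in COLORS
                                if not blocks or c != blocks[-1]["cue_color"]])
        shape = rng.choice([s for s in SHAPES
                            if not blocks or s != blocks[-1]["shape"]])
        blocks.append({"cue_color": cue_color, "shape": shape, "n_trials": length})
        done += length
    return blocks

def distractor_colors(cue_color, n_frames, rng):
    pool = [c for c in COLORS if c != cue_color]   # colors not used by the cue
    seq, prev = [], None
    for _ in range(n_frames):
        color = rng.choice([c for c in pool if c != prev])  # no immediate repeats
        seq.append(color)
        prev = color
    return seq

rng = random.Random(4)
blocks = make_blocks(294, rng)
print(len(blocks), blocks[0], distractor_colors(blocks[0]["cue_color"], 5, rng))
```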
Participants completed a total of 294 trials. A cue was presented and followed by a target in 279 trials, and a cue was presented without a target in the 15 remaining trials. For each condition, the target could only appear with images presented in serial positions 3 to 8. Consequently, the cue could only appear with the images presented in serial positions 2 to 7. 
Image-recognition task:
Participants selected one of the two presented images by pressing the “1” or “2” key. One image was the test image, which was presented in serial positions 2 to 8 of the present RSVP sequence and the other was a novel image. Three types of trial conditions were utilized: target (n = 42), distractor (n = 210), and cue trials (n = 42), with each tested image only shown once during the session. Each condition used a separate set of images that were paired and later tested with the stimulus of interest. 
Results
Accuracy on the target-detection task was 78.6% ± 1.5% (between-subjects standard error). This score is lower than that found in Experiments 1 and 2, suggesting that the increased number of cues used in a given trial increased overall task difficulty. This also provides encouraging evidence that we may have successfully manipulated unexpected uncertainty in Experiment 3.
To study the effects of fast-TIPL, we examined performance in the image-recognition task (Figure 6). An overall effect of fast-TIPL was found, with better image-recognition accuracy for target-paired images (64.3% ± 1.8%) compared to distractor-paired images (60.0% ± 0.04%), t(27) = 1.92, p = 0.033.
Figure 6
 
Gender breakdown from the image-recognition task of Experiment 3 for the first 15 and last 15 trials of each color-cue block. Plots represent accuracy (% correct). Error bars represent the within-subjects standard error of the mean.
We hypothesized that the effect of unexpected uncertainty would be greater in the first 15 trials (when the shape and color of the cue had just changed) than in the last 15 trials. To follow up on the gender effect observed in Experiments 1 and 2, we conducted an ANOVA with stimulus (target; distractor) and trial position (first; last) as within-subjects factors and gender (men; women) as a between-subjects factor. As hypothesized, a significant three-way stimulus × position × gender interaction, F(1, 26) = 8.64, p = 0.007, was observed, with the effect of uncertainty found only in women. Indeed, planned comparisons indicated a stimulus × trial position interaction in women, F(1, 13) = 7.15, p = 0.019, with significantly better performance (p = 0.008) for target-paired (66.6% ± 2.6%) than distractor-paired images (56.5% ± 1.4%) after a cue change, but no significant difference (p = 0.50) between target-paired (56.3% ± 3.8%) and distractor-paired images (59.8% ± 0.9%) before a cue change. Men failed to show a stimulus × trial position interaction, F(1, 13) = 2.67, p = 0.126, and in fact showed the opposite pattern of results, with nominally better performance (p = 0.13) for target-paired (70.9% ± 5.2%) than distractor-paired images (60.7% ± 1.7%) before a cue change, but no difference (p = 0.94) between target-paired (60.1% ± 4.9%) and distractor-paired images (60.5% ± 1.4%) after a cue change. These results support our hypothesis that unexpected uncertainty has a positive effect on TIPL in women and are in agreement with the positive effects of uncertainty observed in Experiments 1 and 2.
We also examined performance on the target-detection task for the first 15 trials after a cue change compared to those before the next cue change. There were no differences in accuracy (80.5% ± 1.3% vs. 80.3% ± 1.4%), t(27) = 0.61, p = 0.545, nor in reaction times (367.1 ms ± 10.0 ms vs. 369.3 ms ± 9.2 ms), t(27) = 0.66, p = 0.515.
Discussion
The objective of the present research was to study the impact of uncertainty on fast-TIPL, namely the memorization of information presented at times of reinforcing events, with the hypothesis that an increase in uncertainty would lead to faster and better learning (Yu & Dayan, 2005). The results of Experiment 1 indicate an effect of uncertainty on fast-TIPL, with better memorization of information paired with the target in a condition of expected uncertainty compared to a situation of no uncertainty; however, the benefit of uncertainty was found only for women. In Experiment 2, we failed to find an effect of unexpected uncertainty, operationalized as a difference between the first and last trials in each block. However, by increasing the requirement to differentiate colors to detect the cue, Experiment 3 found that unexpected uncertainty did affect fast-TIPL, with better memorization of information paired with the target in a condition of unexpected uncertainty (first 15 trials) compared to a situation of no (or less) uncertainty (last 15 trials); but again, the benefit of uncertainty was found only for women. Across experiments, we found evidence for a gender effect whereby uncertainty facilitates fast-TIPL in women but not in men.
These results suggest that uncertainty benefits fast-TIPL. However, a possible confound, as proposed by Yu and Dayan (2005), is that in conditions of uncertainty, acetylcholine (ACh) release may be related to the estimated invalidity of the cue and thus suppress the use of the cue. Consequently, the difference between the EU and NU conditions in Experiment 1, and between the first and last 15 trials in Experiment 3, could be related to the possibility that in the uncertainty conditions participants paid less attention to the cue, leaving more resources available to process the RSVP stimuli. However, for this confound to explain the difference between the uncertainty and no-uncertainty conditions, then in conditions where a target is always presented (no uncertainty) but may or may not be preceded by a cue, fast-TIPL should be larger in the no-cue condition than in the cue condition, because in the no-cue condition more attention is available to process the RSVP stimuli. We previously conducted this very experiment (Leclercq & Seitz, 2012a) and found larger fast-TIPL, or no difference, in the cue condition compared to the no-cue condition. This is inconsistent with the cue-suppression confound, and we therefore suggest that the greater fast-TIPL effect in the uncertainty conditions (expected and unexpected) than in the no-uncertainty condition is consistent with our interpretation that uncertainty influences fast-TIPL. However, further work will be required to understand what dissociations may exist between the roles of expected and unexpected uncertainty in fast-TIPL.
In prior research on fast-TIPL (Leclercq & Seitz, 2012a), recognition of images paired with the cue was reduced compared to recognition of distractor-paired images. Here we replicated this finding, with better recognition of distractor-paired images compared to cue-paired images. For Experiment 1, results were 61.5% ± 0.7% (within-subjects standard error) versus 55.9% ± 1.6%, t(26) = 3.1, p = 0.004; for Experiment 2, 60.9% ± 0.6% versus 54.2% ± 1.7%, t(26) = 3.5, p = 0.002; and for Experiment 3, 59.9% ± 0.5% versus 57.5% ± 1.0%, t(27) = 1.9, p = 0.034. We previously suggested that this effect might be related to negative priming (Tipper, 1985).
Interestingly, we found that the effect of uncertainty on fast-TIPL exists in women but not in men. In men, equivalent fast-TIPL effects were observed under no uncertainty and under uncertainty, whereas in women, consistent with previous results (Leclercq & Seitz, 2012b), no fast-TIPL was observed without uncertainty, but fast-TIPL was observed under uncertainty. Together, our results suggest that uncertainty ameliorates the gender difference that we previously identified in fast-TIPL (Leclercq & Seitz, 2012b). It is notable that in Experiment 3, men failed to exhibit fast-TIPL in the unexpected-uncertainty condition. We are still exploring whether this is a robust finding and, if so, what factors may underlie the effect.
Why does uncertainty have a greater effect on fast-TIPL in women than in men? One hypothesis concerns the role of ACh. The release of ACh is thought to be proportional to the invalidity of the cue (Yu & Dayan, 2005), and thus more ACh should be released in the uncertainty conditions than in the no-uncertainty condition. Furthermore, the ACh neuromodulatory system has been shown to modulate perceptual learning (Rokem & Silver, 2010; Wilson, Fletcher, & Sullivan, 2004) and cortical plasticity (Bear & Singer, 1986; Kilgard & Merzenich, 1998, 2002). It may be that the difference in fast-TIPL between male and female participants is related to a difference in the release of ACh at the relevant point in time (detection of the target).
An interesting hypothesis is that the effect of uncertainty observed in women may be related to the effect of the menstrual cycle on learning (Cahill, 2006; Phillips & Sherwin, 1992). For example, better visual memory has been observed during the luteal phase compared to the menstrual phase (Phillips & Sherwin, 1992). While we were unable to obtain information about the menstrual cycles of our participants, there is likely some diversity in our population regarding the stage of the cycle at which the experiment was conducted. Accordingly, we examined whether any women in Experiment 1 showed results similar to those of the men, that is, no difference between the no-uncertainty and expected-uncertainty conditions. However, none of our female participants presented such a pattern of results. While it is difficult to rule out an impact of the cycle on our learning results without data about our female participants' menstrual cycles, we failed to find the individual-subject differences that would support this hypothesis.
Another possibility is that there are differences between men and women in how they cognitively process these tasks. It may be that in the conditions without uncertainty our female participants were somehow less engaged in the task and thus, appropriate neuromodulators were not successfully released when task targets were found (i.e., the targets were too easy to identify to be rewarding). Under this hypothesis, there may not be a fundamental difference in neuromodulatory function between men and women but, instead, differences in what stimuli and tasks are engaging.
Of note, one may question whether the term task-irrelevant is the best description of one of two classes of stimuli in a dual-task setting. We refer to the procedure of the present manuscript as a form of task-irrelevant perceptual learning because the image-recognition task is statistically independent of the target-detection task. As such, we use the term task-irrelevant perceptual learning to be consistent in nomenclature with prior work published using this paradigm (Seitz & Watanabe, 2009). 
Conclusion
Our results show that uncertainty (both expected and unexpected) impacts fast-TIPL, but with different effects in men and women. These findings, combined with our previous results, suggest that there may be important differences in how men and women process these types of tasks, which may be related to differences in neuromodulatory or possibly cognitive processing. Overall, our results show that different aspects of uncertainty can contribute in complex ways to our processes of learning and memory.
Acknowledgments
Commercial relationships: none. 
Corresponding authors: Virginie Leclercq; Aaron R. Seitz. 
Address: Department of Psychology, University of California–Riverside, Riverside, CA, USA. 
References
Bear, M. F., & Singer, W. (1986). Modulation of visual cortical plasticity by acetylcholine and noradrenaline. Nature, 320, 172–176.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Cahill, L. (2006). Why sex matters for neuroscience. Nature Reviews Neuroscience, 7, 477–484.
Dewald, A. D., Sinnett, S., & Doumas, L. A. (2011). Conditions of directed attention inhibit recognition performance for explicitly presented target-aligned irrelevant stimuli. Acta Psychologica, 138, 60–67.
Kilgard, M. P., & Merzenich, M. M. (1998). Cortical map reorganization enabled by nucleus basalis activity. Science, 279, 1714–1718.
Kilgard, M. P., & Merzenich, M. M. (2002). Order-sensitive plasticity in adult primary auditory cortex. Proceedings of the National Academy of Sciences, USA, 99, 3205–3209.
Konkle, T., Brady, T. F., Alvarez, G. A., & Oliva, A. (2010). Conceptual distinctiveness supports detailed visual long-term memory for real-world objects. Journal of Experimental Psychology: General, 139, 558–578.
Leclercq, V., Le Dantec, C. C., & Seitz, A. R. (2013). Encoding of episodic information through fast task-irrelevant perceptual learning. Vision Research, 99, 5–11, doi:10.1016/j.visres.2013.09.006.
Leclercq, V., & Seitz, A. R. (2011). Fast task-irrelevant perceptual learning is disrupted by sudden onset of central task elements. Vision Research, 61, 70–76.
Leclercq, V., & Seitz, A. R. (2012a). Enhancement from targets and suppression from cues in fast task-irrelevant perceptual learning. Acta Psychologica, 141, 31–38.
Leclercq, V., & Seitz, A. R. (2012b). Fast-TIPL occurs for salient images without a memorization requirement in men but not in women. PLoS ONE, 7, e36228.
Lin, J. Y., Pype, A. D., Murray, S. O., & Boynton, G. M. (2010). Enhanced memory for scenes presented at behaviorally relevant points in time. PLoS Biology, 8, e1000337.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42, 145–175.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Pessiglione, M., Petrovic, P., Daunizeau, J., Palminteri, S., Dolan, R. J., & Frith, C. D. (2008). Subliminal instrumental conditioning demonstrated in the human brain. Neuron, 59, 561–567.
Phillips, S. M., & Sherwin, B. B. (1992). Variations in memory function and sex steroid hormones across the menstrual cycle. Psychoneuroendocrinology, 17, 497–506.
Pilly, P., Grossberg, S., & Seitz, A. R. (2010). Low-level sensory plasticity during task-irrelevant perceptual learning: Evidence from conventional and double training procedures? Vision Research, 50, 424–432.
Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13, 25–42.
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In Black, A. H., & Prokasy, W. F. (Eds.), Classical conditioning II: Current theory and research (pp. 64–69). New York: Appleton-Century-Crofts.
Rokem, A., & Silver, M. A. (2010). Cholinergic enhancement augments magnitude and specificity of visual perceptual learning in healthy humans. Current Biology, 20, 1723–1728.
Seitz, A., Lefebvre, C., Watanabe, T., & Jolicoeur, P. (2005). Requirement for high-level processing in subliminal learning. Current Biology, 15, R753–R755.
Seitz, A., & Watanabe, T. (2005). A unified model for perceptual learning. Trends in Cognitive Sciences, 9, 329–334.
Seitz, A. R., Kim, D., & Watanabe, T. (2009). Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700–707.
Seitz, A. R., & Watanabe, T. (2003). Psychophysics: Is subliminal learning really passive? Nature, 422, 36.
Seitz, A. R., & Watanabe, T. (2008). Is task-irrelevant learning really task-irrelevant? PLoS ONE, 3, e3792.
Seitz, A. R., & Watanabe, T. (2009). The phenomenon of task-irrelevant perceptual learning. Vision Research, 49, 2604–2610.
Swallow, K. M., & Jiang, Y. V. (2010). The attentional boost effect: Transient increases in attention to one task enhance performance in a second task. Cognition, 115, 118–132.
Swallow, K. M., & Jiang, Y. V. (2011). The role of timing in the attentional boost effect. Attention, Perception, & Psychophysics, 73, 389–404.
Tipper, S. (1985). The negative priming effect: Inhibitory processes by ignored objects. Quarterly Journal of Experimental Psychology, 37A, 571–590.
Watanabe, T., Nanez, J. E., Koyama, S., Mukai, I., Liederman, J., & Sasaki, Y. (2002). Greater plasticity in lower-level than higher-level visual motion processing in a passive perceptual learning task. Nature Neuroscience, 5, 1003–1009.
Watanabe, T., Nanez, J. E., & Sasaki, Y. (2001). Perceptual learning without perception. Nature, 413, 844–848.
Willenbockel, V., Sadr, J., Fiset, D., Horne, G. O., Gosselin, F., & Tanaka, J. W. (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42, 671–684.
Wilson, D. A., Fletcher, M. L., & Sullivan, R. M. (2004). Acetylcholine and olfactory perceptual learning. Learning & Memory, 11, 28–34.
Yu, A. J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron, 46, 681–692.