Research Article  |   March 2008
Unconscious associative memory affects visual processing before 100 ms
Journal of Vision March 2008, Vol.8, 10. doi:10.1167/8.3.10
      Maximilien Chaumon, Valérie Drouet, Catherine Tallon-Baudry; Unconscious associative memory affects visual processing before 100 ms. Journal of Vision 2008;8(3):10. doi: 10.1167/8.3.10.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Searching for an object in a cluttered environment takes advantage of different cues: explicit attentional cues, such as arrows; visual cues, such as saliency; but also memory. Behavioral studies manipulating the spatial relationships between context and target in visual search suggest that the memory of context-target associations could be retrieved quickly and act at an early perceptual stage. On the other hand, neural responses are usually influenced by memory at a later, postperceptual stage. At which level of neural processing does the memory of context-target associations influence scene analysis? In our experiment, human subjects learned arbitrary associations between given spatial layouts of distractors and target positions while performing a classical visual search task. Behaviorally, context-target associations shorten visual search times, although subjects remain fully unaware of these associations. Magneto-encephalographic responses to visual displays containing or not containing relevant contextual information differ before 100 ms, much earlier than any known effect of recent experience. This effect occurs bilaterally at occipital sensors only, suggesting that context affects activity in the underlying early sensory cortices. Importantly, subjects do not show any sign of explicit knowledge about context-target associations: The earliness of the influence of contextual knowledge may be a hallmark of unconscious memory.

Introduction
Attention can be driven by external cues, such as the “look left” sign or a squeal of tires at a pedestrian crossing, but also by memory. For example, when crossing a street our memory allows us to check for the presence of cars only on the road, not on the sidewalk. Although we know that memory influences attentional deployment (Brockmole & Henderson, 2006; Chun, 2000; Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006), we know little about the underlying brain processes. Which stages of neural processing are affected by memory-driven attention? 
It is usually held that memory influences rather late steps of brain processing. In its simplest form, memory is seen as a different brain response to the second and subsequent presentations of a stimulus, compared to the first presentation (Grill-Spector, Henson, & Martin, 2006). These so-called repetition effects show up at the earliest around 200 ms when several different images occur between two presentations of the same image (Henson, Rylands, Ross, Vuilleumier, & Rugg, 2004). A more elaborate form of memory signals whether an object is seen in its usual context or not. Out-of-context objects elicit specific neural responses at late processing stages, in the 300–500 ms range (Ganis & Kutas, 2003). The effects of memory on visual processing are thus usually observed at relatively late processing stages, after 200 ms. 
Recently, a series of behavioral experiments revealed the existence of another form of memory that could possibly influence earlier stages of brain processing: the implicit memory for the context in which a target is found in visual search. It appears that every time we find a target in the environment, our brain is able to register the relations between the target and the surrounding context: spatial relations (Chun & Jiang, 1998), as well as identity or movement relations (Chun & Jiang, 1999), or even semantic categorical membership (Goujon, Didierjean, & Marmèche, 2007). On subsequent encounters with a given context, the registered relations are then exploited to guide attention faster to the target. 
The paradigm used to study this form of memory is called contextual cueing (Chun, 2000; Chun & Jiang, 1998). In this visual search paradigm, subjects search through hundreds of seemingly different displays, each composed of one target and several distractor items. Unknown to the subjects, some displays are presented several times during the experiment. Without noticing it, after only a few repetitions, subjects become faster at finding the target. No conscious knowledge regarding the identity or the spatial properties of the search arrays has ever been demonstrated (Chun & Jiang, 1998, 2003). In sum, these results show that human observers quickly develop an unconscious memory of the repeated displays and use it automatically to guide attention when viewing previously encountered visual scenes. 
How does this form of memory influence brain processing? Does it affect visual processing at late stages like other types of memory? Behavioral results using the contextual cueing paradigm suggest, on the contrary, that context-target associations may have an influence on early neural activity: Contextual effects are obtained with briefly (200 ms) presented displays (Chun & Jiang, 1998) and affect the direction of the first saccade (Brockmole & Henderson, 2006; Peterson & Kramer, 2001). Two electrophysiological studies have also investigated the issue. In an intracranial electroencephalographic (EEG) experiment, Olson, Chun, and Allison (2001) showed that neural activity is influenced after 200 ms in early visual areas, suggesting a feedback influence of contextual memory from higher order areas. In a recent scalp EEG experiment, Johnson, Woodman, Braun, and Luck (2007) showed that contextual cueing affects the N2Pc component of the EEG, at around 170 ms. This component is a well-documented electrophysiological correlate of attentional deployment (Hopf et al., 2000; Woodman & Luck, 1999). Thus, contextual memory influences visual processing and attentional deployment at latencies similar to those observed with simple repetition memory. It is therefore important to consider the repetition bias inherent in the contextual cueing paradigm. Indeed, learning is evidenced in this paradigm by comparing responses to repeated versus non-repeated displays. Since the learned configurations are also the only ones that are repeated, the difference observed in the electrophysiological experiments could be due to repetition memory (Grill-Spector et al., 2006; Henson et al., 2004) rather than knowledge of context-target associations. 
To better control for repetition memory, we developed a modified version of contextual cueing. As in the original paradigm (Chun & Jiang, 1998), subjects searched for a target (T) among distractors (L) in the display and reported whether it was tilted left or right. In contrast to the original paradigm, however, all configurations of distractors were presented the exact same number of times. In the “Predictive” configurations, a given layout of distractors was associated with a target position that remained the same throughout the experiment. In the “non-Predictive” configurations, a given layout of distractors was associated with a different target position on each presentation (Figure 1). The different configurations of each category were randomly interleaved. Repeating all configurations the same number of times controls for the influence of familiarity and repetition effects (Grill-Spector et al., 2006) versus contextual associative memory. To determine that this modification of the paradigm did not yield any form of conscious knowledge about the displays or their spatial properties, subjects completed a series of behavioral tests at the end of the experiment to ensure that neither scene identity nor associative spatial knowledge was available to consciousness. 
Figure 1
 
Paradigm. A. Subjects searched for the “T” without moving their eyes and reported its orientation (tilted left or right). The subject's response (around 1,400 ms on average) interrupted the visual display and triggered a feedback screen (+ or − for good and bad response, respectively). An absence of response after 4 s was followed by an “o” feedback. B. Experimental sequence. In the Predictive (P) configurations, a given array of Ls was associated with the same target position throughout the experiment (e.g., array P1 is associated here with a T in the upper left quadrant). In the non-Predictive configurations, the target location changed on each presentation of a given configuration (e.g., array nP1 here). Subjects performed the task on 12 P and 12 nP configurations randomly intermingled for about half an hour. Another set of 12 P and 12 nP configurations was then generated and the task resumed for another half hour.
In a pilot behavioral experiment, we first verified that the shortening of reaction times was equivalent in the Old/New paradigm and in our Predictive/non-Predictive paradigm. Our rationale was that, by controlling for repetition effects, the magnetoencephalographic (MEG) experiment conducted with the modified version of the paradigm would specifically probe the influence of context-target associations rather than neural facilitation due to the mere repetition of some displays but not others. We looked at the timing and topography of the first difference between the responses to Predictive and non-Predictive displays after learning to assess which level of visual processing is influenced by learned context-target spatial relations. 
Methods
Stimuli
Each display consisted of a unique configuration of 16 L distractors presented randomly at 0, 90, 180, or 270° and a T target on a gray background. Items (red, green, blue, or yellow, 0.4 × 0.4 deg) were randomly placed on an invisible 12 × 10 grid subtending 12.5 × 7.5° (3–5 items per quadrant), with a maximal jitter of 0.5° kept constant throughout the experiment. Target positions were constrained to 12 possible locations arranged symmetrically with respect to the center of the screen. An L could never appear at any of these 12 locations. A new set of stimuli was generated for each subject. 
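The placement constraints above (16 Ls on a 12 × 10 grid, 3–5 items per quadrant, and 12 reserved target locations at which an L may never appear) can be sketched with a simple rejection-sampling routine. This is only an illustration, not the authors' stimulus code: the coordinate scheme and the particular set of reserved cells are hypothetical, and color assignment and positional jitter are omitted.

```python
import random

GRID_W, GRID_H = 12, 10   # invisible placement grid used in the paper
N_DISTRACTORS = 16        # number of L items per display

def quadrant(x, y):
    """Quadrant index (0-3) of a grid cell."""
    return (0 if x < GRID_W // 2 else 1) + (0 if y < GRID_H // 2 else 2)

def make_display(rng, target_cells):
    """Draw distractor cells and a target cell for one display.

    `target_cells` is the set of 12 reserved locations; distractors are
    resampled until every quadrant holds 3-5 of them (the paper's constraint).
    """
    free_cells = [(x, y) for x in range(GRID_W) for y in range(GRID_H)
                  if (x, y) not in target_cells]
    while True:
        distractors = rng.sample(free_cells, N_DISTRACTORS)
        counts = [0, 0, 0, 0]
        for x, y in distractors:
            counts[quadrant(x, y)] += 1
        if all(3 <= c <= 5 for c in counts):
            return distractors, rng.choice(sorted(target_cells))
```

Generating a fresh set of displays this way for each subject would mirror the paper's per-subject stimulus generation.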
Paradigm
A fixation cross (750–1,250 ms) was followed by the search array (Figure 1A). Subjects had to find the T and report its orientation by a button press. The subject's response (on average around 1,400 ms) interrupted the search display presentation, triggered a feedback screen (+ or − for good or bad response) and initiated the next trial. Failure to respond within 4,000 ms triggered a time-out sign (o), and the next trial was initiated (inter-trial interval 1,500–2,000 ms). Unknown to the subjects, images consisted of two randomly intermingled categories of displays. In the replication of the original Old/New task (Chun & Jiang, 1998), 12 different “old” configurations were repeated 24 times (the configurations' color and location remained constant throughout the repetitions) and 288 “new” configurations were presented only once. The orientation of the target (top of the T pointing left or right) changed randomly from one occurrence of a given configuration to another to avoid direct stimulus-response mapping (Chun & Jiang, 1998; Dobbins, Schnyer, Verfaellie, & Schacter, 2004). In the Predictive/non-Predictive task (Figure 1B), all configurations were presented the same number of times. In each Predictive configuration, all items in the display were identical across repetitions. In the non-Predictive configurations, the T position changed from one presentation of the same distractor configuration to the next. The 12 possible target locations were used the same number of times across repetitions in the Predictive and non-Predictive configurations. 
In both paradigms, the two configuration types (Old/New or Predictive/non-Predictive) were randomly intermixed. In the Predictive/non-Predictive paradigm, the number of intervening items between two successive occurrences of the same image was set to be similar in the two conditions (mean and standard deviation differing by <5% for each subject). 
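The counterbalancing logic of the Predictive/non-Predictive design can be sketched as follows: every configuration appears the same number of times, Predictive configurations keep one target location throughout, and non-Predictive configurations cycle through all 12 target locations so that each location is used equally often in both conditions. This is a simplified illustration under those assumptions; the authors' actual randomization and intervening-trial matching are not reproduced here.

```python
import random

def build_sequence(rng, n_configs=12, n_reps=12, n_targets=12):
    """Return a shuffled list of (condition, config_id, target_pos) trials."""
    # Predictive: one fixed target per configuration (here simply target i
    # for configuration i, so the 12 locations are balanced across configs).
    p_targets = {i: i % n_targets for i in range(n_configs)}
    trials = []
    for rep in range(n_reps):
        block = []
        for i in range(n_configs):
            block.append(("P", i, p_targets[i]))
            # non-Predictive: same layout, but the target moves each time;
            # the offset walks through all 12 locations across repetitions.
            block.append(("nP", i, (i + rep) % n_targets))
        rng.shuffle(block)  # randomly intermingle P and nP within each cycle
        trials.extend(block)
    return trials
```

With the defaults, this yields 288 trials, matching one half of the MEG session (12 P + 12 nP configurations, each presented 12 times).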
Procedure (Behavioral)
Stimuli were presented on a computer screen (refresh rate 60 Hz) using the Psychophysics Toolbox extension for Matlab ( http://www.psychtoolbox.org) (Brainard, 1997). The experiment comprised 12 runs of 96 trials lasting about 8 min each. Six consecutive runs used the Old/New paradigm and 6 others used the Predictive/non-Predictive paradigm; the order was counterbalanced across subjects. The responding hand was switched in the middle of the experiment. 
Procedure (MEG)
All subjects provided written informed consent and were paid for their participation, according to procedures approved by the national ethics committee (CCPPRB). Images were back-projected on a translucent screen placed at 110 cm using a computer data projector (60 Hz). Twenty-four Predictive and 24 non-Predictive configurations were randomly generated for each subject and each configuration was presented 12 times. The experiment comprised 6 runs of 96 trials each; runs 1–3 used the first half of the stimuli (12 P and 12 nP) and runs 4–6 used the remaining 12 P and 12 nP displays. In other respects, stimuli and task settings were the same as in the pilot study. The subjects' head positions were monitored and no deviation larger than 0.5 cm was allowed. After the 6 runs, subjects were informed about the existence of Predictive and non-Predictive configurations, and they performed a series of behavioral tests to determine whether the knowledge acquired implicitly could be used explicitly (see “ Results” section). 
Recordings
Continuous data were collected at the MEG-EEG Centre, Hôpital Pitié-Salpêtrière (Paris, France), using a CTF/VSM OMEGA whole-head system with 151 third-order gradiometers (CTF Systems, Vancouver, Canada) at a sampling rate of 1,250 Hz. Electrocardiograms (ECG) and vertical and horizontal electro-oculograms (EOG) were also recorded. Cardiac artifacts in the MEG signal were corrected by a correlation method (Gratton, Coles, & Donchin, 1983). The EOG was calibrated for each subject, and the rejection threshold was set to 1°. Trials contaminated with muscle artifacts (identified by visual inspection) were also rejected. Correct trials were low-pass filtered at 30 Hz and averaged with respect to stimulus onset (−200 to +300 ms) over epochs of 4 presentations (59.3 ± 3.3 trials) for each type of configuration, with a baseline taken between −200 and 0 ms. There were not enough trials to separately analyze displays containing a left- or right-lateralized target. 
We defined the root mean square (RMS) of the signal at time $t$ as

$$\mathrm{RMS}_t = \sqrt{\frac{\sum_{i=1}^{N} A_i^{2}}{N}}, \tag{1}$$
with A_i the signal amplitude on sensor i at time t and N the total number of sensors. The RMS values did not deviate from normality (Shapiro-Wilk normality test, all W > 0.95, p > 0.5). Topographical maps in Figure 3B show the value of the signal at each sensor, without averaging across sensors. 
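Equation 1 is simply the quadratic mean of the instantaneous amplitudes across the N = 151 sensors; a minimal sketch:

```python
import math

def rms(amplitudes):
    """Root mean square across sensors at one time point (Equation 1)."""
    n = len(amplitudes)
    return math.sqrt(sum(a * a for a in amplitudes) / n)
```

Computing this at every sample of the averaged evoked field gives the RMS time course that is compared between conditions.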
Results
Learning contextual associations
We ran a behavioral experiment to verify that the behavioral advantage conferred by the association between given layouts of Ls and specific target positions is similar in the Predictive/non-Predictive paradigm used here and in the original Old/New paradigm (Chun & Jiang, 1998). Fifteen subjects (mean age 22, range 19–29 years; 5 males; 14 right-handed) performed both paradigms in counterbalanced order. Only correct reaction times above 300 ms and within mean ± 2 SDs were included in the analysis. The learning effect was similar in both paradigms: the reaction time difference was 76 ± 21 ms in the Old/New paradigm and 54 ± 24 ms in the Predictive/non-Predictive paradigm (repeated measures ANOVA, main effect of paradigm, F(1,14) = 2.23, p > 0.15). 
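The reaction time screening just described (correct responses only, above 300 ms, within mean ± 2 SDs) might be implemented as below. The paper does not specify whether the mean and SD are computed before or after the 300-ms floor; this sketch assumes a two-pass procedure with the floor applied first.

```python
import statistics

def trim_rts(rts, floor=0.300, n_sd=2.0):
    """Keep reaction times above `floor` (seconds) and within mean +/- n_sd sample SDs."""
    kept = [rt for rt in rts if rt > floor]
    mu = statistics.mean(kept)
    sd = statistics.stdev(kept)
    return [rt for rt in kept if abs(rt - mu) <= n_sd * sd]
```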
Sixteen healthy adult volunteers (mean age 25, range 19–31 years; 8 males; 14 right-handed) performed the Predictive/non-Predictive task while their brain activity was monitored using magnetoencephalographic (MEG) recordings. Reaction times decreased with practice for all configurations, but were further shortened by 79 ± 16 ms on Predictive configurations after learning. As shown in Figure 2, at the beginning of the experiment (presentations 1 to 4), reaction times did not differ between Predictive and non-Predictive configurations (two-tailed paired t test), t(15) = 0.49, p > 0.6, whereas at the end of the experiment (presentations 9 to 12), reaction times on Predictive configurations were significantly shortened compared to non-Predictive configurations, t(15) = 4.87, p < 0.001. Mean error rates were low (2.6% ± 0.7) and did not differ between configurations, t(15) = 0.80, p > 0.4. 
Figure 2
 
Subjects learned the regularity of context-target associations. Difference in reaction times to Predictive and non-Predictive configurations before (presentations 1–4) and after (presentations 9–12) learning, ± SEM. Reaction times differed between the two conditions only after learning. Note: *** denotes p < 0.001.
Learning affects evoked visual responses as early as 50–100 ms
To determine the earliest period of time at which the signal differed between the two conditions, we first adopted a global method. The global power of the responses was estimated by computing RMS values of the signal for each subject (see “ Methods”). RMS values were compared between conditions at the beginning (presentations 1 to 4) and the end (presentations 9 to 12) of the experiment with paired t tests ( Figure 3A). The only highly significant difference appears between 90 and 100 ms at the end of the experiment, t(15) = 3.44, p < 0.005. The signal indexed by the RMS technique was greater in the Predictive condition at this latency (see maps in Figure 3B). 
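The comparison behind Figure 3A reduces to a paired t test across the 16 subjects, computed independently in each 10-ms bin of the RMS time course. A minimal implementation of the paired t statistic (illustrative only; with n subjects the degrees of freedom are n − 1, i.e., 15 here):

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic for two equal-length samples (one value per subject)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)   # sample SD of the within-subject differences
    return mean_d / (sd_d / math.sqrt(n))
```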
Figure 3
 
Global root mean square (RMS) activity differs from 90–100 ms. A. Time course of the p value of paired t tests of the RMS activity between responses to Predictive and non-Predictive displays. A highly significant difference between the responses to Predictive and non-Predictive displays appears between 90 and 100 ms only at the end of the experiment, when the context-target associations have been registered and reaction times are shorter in the Predictive condition (Figure 2). Vertical scale is logarithmic. B. Average topographical map of the RMS response between 90 and 100 ms. The response to Predictive displays is stronger than to non-Predictive displays in this latency range. C. Difference between conditions at 90–100 ms. Error bars show SEM. Note: ** denotes p < 0.005.
This first analysis showed in which time range a reliable difference occurred in the signal. To better characterize the topography of this early contextual effect, we analyzed the classical evoked fields in response to Predictive and non-Predictive configurations. A significant difference appears at occipital sensors in the 50–100 ms range (Figure 4A). At this latency, activity peaks at occipital and temporal sensors bilaterally in both conditions. Neural responses to Predictive and non-Predictive configurations are identical at the beginning of the experiment (presentations 1 to 4; Figure 4A, top row). At the end of the experiment, however, the response is stronger in the Predictive condition at this latency (Figure 4A, bottom row). This effect is seen bilaterally at occipital sensors: right occipital, t(15) = −3.35, p < 0.005, and left occipital, t(15) = 2.60, p < 0.02. The response at two posterior sensors (RO33 and LO33) is shown in Figure 4B. Activity appears to rise more quickly in the Predictive condition in the time window highlighted in green in Figure 4B. 
Figure 4
 
Evoked activity is affected as early as 50–100 ms after learning. A. Topographical maps (flattened top view) of the evoked fields averaged between 50 and 100 ms in the Predictive and non-Predictive conditions before (presentations 1 to 4) and after (presentations 9 to 12) learning took place. Two bilateral occipital and two bilateral temporal regions of interest (ROI) were used for measures (outlined on the maps). Predictive and non-Predictive conditions differ only during presentations 9 to 12 on the occipital ROIs (bold outlined regions): right occipital, p < 0.005, and left occipital, p < 0.02. B. Time course of the evoked fields at left and right occipital sensors (LO33 and RO33), highlighted in red on the maps in A. Activity rises earlier in the Predictive condition. The light-green shaded area shows the 50–100 ms time window taken for the maps in A. This early difference is followed by later ones, such as the 225–245 ms time window fully illustrated in Figure 5.
Because medial temporal areas are likely to be implicated in the learning of context-target relations (Chun & Phelps, 1999; Greene, Gross, Elsinger, & Rao, 2007; Manns & Squire, 2001; Park, Quinlan, Thornton, & Reder, 2004), we also looked at the activity recorded on the temporal sensors overlying these areas. A marginally significant trend is observed at left but not right temporal sensors: left temporal, t(15) = 1.88, p > 0.07, and right temporal, t(15) = −0.96, p > 0.3. Thus, although we cannot exclude a contribution of cortical areas underlying left temporal sensors, most of the differential activity occurred at occipital sensors, which suggests sources in early visual areas. 
The early difference reported here at occipital sites is likely to affect subsequent processing steps in the brain. Indeed, later differences occur in the evoked activity, for instance at 225–245 ms (highlighted in gray on the bottom plot of Figure 4B). Figure 5 shows the average activity at this latency in both conditions at the beginning and end of the experiment. Activity at the peak sensors on the right is different between conditions at the end of the experiment, t(15) = −2.27, p < 0.04, but not at the beginning, t(15) = 0.05, p > 0.9. 
Figure 5
 
Later steps of brain processing are also affected by the learned context-target relations. Topographical maps (flattened top view) of the evoked fields averaged between 225 and 245 ms in the Predictive and non-Predictive conditions before (presentations 1 to 4) and after (presentations 9 to 12) learning took place. The difference between Predictive and non-Predictive conditions is significant over the right occipito-temporal region of interest shown on the maps for presentations 9 to 12 (t test: p < 0.04).
Learning and memory for context-target associations are unconscious
Contextual memory is acquired and used unconsciously. Because we reduced the number of different configurations presented in our paradigm, it was important to ensure that the knowledge was neither acquired nor used consciously. We administered a debriefing questionnaire and a series of tests at the end of the experiment to make sure that subjects had not tried to explicitly remember the images. During the postexperimental debriefing, no subject reported noticing that displays were repeated in the experiment. No subject reported trying to remember the displays or any of their spatial properties. The learning of the spatial context-target associations was thus implicit in this experiment. 
After the debriefing, the whole experimental manipulation was revealed to the subjects. They then performed 3 behavioral tests designed to identify any form of explicit knowledge they might have had about the displays. The tests were performed on the 12 Predictive configurations seen most recently. Fifteen subjects were analyzed because the data for one subject were lost. The first test was designed to identify potential traces of familiarity with the identity of the visual displays. A nonspeeded two-alternative forced-choice (2AFC) Old/New test was used: Subjects were presented with two different configurations (one had been seen previously, the other was new and unseen) and had to decide which one had been seen before (Figure 6A). Subjects were at chance at this explicit familiarity test (Kolmogorov-Smirnov test against a binomial distribution, Dmax = 0.279, p > 0.1). Two other tests were designed to reveal explicit knowledge of the target location in a given context. Because these tests probed the spatial knowledge acquired and used in the experiment, they could be more sensitive than the first familiarity (Old/New) test (Chun & Jiang, 2003). We presented the subjects with Predictive configurations of distractors seen during the experiment, but without any target (Figure 6B), and asked them to decide in which quadrant the target should be (4AFC). Subjects were at chance in this test (Dmax = 0.175, p > 0.2). Finally, subjects saw Predictive configurations in which a second target was added (Figure 6C) and were asked to choose which one was at the correct location (2AFC). In this last test, in addition to probing the same type of knowledge, we matched the task settings to those of the actual experiment: Before responding, subjects had to perform a visual search, as they did during the MEG experiment. 
In this final test, however, as in the first two, subjects did not show any reliable sign of explicit memory of the spatial structure of the displays (Dmax = 0.206, p > 0.2). Subjects were thus able to use their memory of spatial regularities to speed visual search, but were at chance when it came to using this knowledge explicitly. 
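As an illustration of testing forced-choice performance against chance (the authors used a Kolmogorov-Smirnov test against a binomial distribution; the exact binomial test below is a simpler, hypothetical alternative, not their analysis):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of outcomes
    no more likely than observing k successes out of n under chance level p."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] + 1e-12)
```

Here k would be the number of correct 2AFC choices out of n trials at chance p = 0.5; p-values near 1 indicate performance indistinguishable from chance.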
Figure 6
 
Subjects had no explicit access to the knowledge acquired during learning. A. Old/New test. Subjects had to decide which of the two displays had been presented before. B. Completion test performed on Predictive distractor configurations, without any target. Subjects had to guess in which quadrant the target was located. C. Two-targets search test. Subjects had to decide which of the two targets was at the learned location. Subjects were at chance in all three tests.
Discussion
Early visual processes are influenced by unconsciously memorized contextual relations after only 12 encounters with a given image. Our results illustrate the generally accepted idea that past experience has a strong influence on neural processing: Vision is a proactive system, with a constant adjustment between the information received and the most likely interpretation of this information (Kersten, Mamassian, & Yuille, 2004; Knill & Pouget, 2004; Rao & Ballard, 1999). Our results demonstrate that learning-induced modifications in early visual processes can occur in adults in as little as 15–20 min—after 9 exposures to the visual scenes in our experiment. The early latency of the effect of context-target associations imposes strong temporal constraints on the neural mechanisms involved. 
Repeated exposure to a stimulus leads to a decrease (or sometimes an increase) in the neural response to this stimulus and a shortening of reaction times (Grill-Spector et al., 2006). We carefully controlled distractor layouts so they were repeated the same number of times during the experiment. We also ensured that an equal number of intervening trials occurred between two presentations of a given layout in both conditions. It could be argued, however, that the amount of perceptual variability was slightly different between the two conditions: The Predictive arrays were repeated exactly from presentation to presentation (with the exception of target orientation), whereas non-Predictive arrays varied slightly across presentations because the target location varied. Could this difference in the overall level of variability between Predictive and non-Predictive displays explain the observed results? We do not favor this interpretation because no repetition effects have ever been observed at such early latencies with so many intervening items between two presentations (Doniger et al., 2001; Grill-Spector et al., 2006; Henson et al., 2004). The crucial consideration in this study is thus the learned association between a given context and a target position. 
Context can act as an attentional cue to guide attention toward the target more efficiently (Chun & Jiang, 1998). Electrophysiological evidence also suggests that the context guides attention in the visual cortex (Johnson et al., 2007). Memory for contextual associations thus seems to constitute an attentional cue, but a peculiar one: it guides attention unconsciously even though it is displayed simultaneously with the target. When delivered together with the target, classical attentional cues, such as arrows, affect neural activity only at later, postperceptual stages (Vogel, Woodman, & Luck, 2005). Our results thus suggest that memory-driven attention affects neural processing much earlier than voluntary attention driven by symbolic cues. This interpretation is in line with behavioral evidence that memory-driven attention is more efficient than visually driven attention (Peterson & Kramer, 2001; Summerfield et al., 2006). Furthermore, whereas visually driven attentional shifts are often consciously controlled by subjects (even if misleading cues are sometimes hard to ignore), contextual cueing operates unconsciously (Chun & Jiang, 1998, 2003): it is triggered automatically, and explicit strategies facilitate neither learning nor performance (Chun & Jiang, 2003). Our results thus support the idea that memory can drive attention extremely fast on the basis of unconscious cues, whereas volitional shifts of attention are much slower (Müller, Teder-Salejarvi, & Hillyard, 1998; Wolfe, Klempen, & Dahlen, 2000). 
Why did previous electrophysiological experiments not reveal any effect at such an early latency (Johnson et al., 2007; Olson et al., 2001)? As noted in the “Introduction,” repetition is confounded with associative context-target relations in the old/new paradigm. Old displays have two features that new displays do not: they are repeated and they predict the target position, whereas new displays are neither. In the predictive/non-predictive paradigm, however, both types of displays are repeated, but only predictive displays predict the position of the target. One possible explanation for the lack of an early effect in the MEG signal is that repetition- and prediction-related activities trigger electrophysiological currents of opposite polarity at the cortical level at this latency and cancel out at the scalp level. Although any definitive statement on the subject would require a direct comparison of the two paradigms in the same experiment, the lack of an early effect in previous electrophysiological studies of contextual cueing seems best explained by these differences in experimental design. 
How can learned context-target associations influence visual processing so rapidly? The influence of past experience on neural activity is usually thought to rely on a complex interplay between bottom-up, visually driven processes and top-down, memory-driven processes (Hochstein & Ahissar, 2002; Lamme & Roelfsema, 2000). It was previously suggested that contextual cueing requires feedback from higher order areas (Olson et al., 2001), but at much longer latencies. At 50–100 ms after stimulus onset, the evoked response is dominated by activity in visual areas (Poghosyan & Ioannides, 2007; Tzelepi, Ioannides, & Poghosyan, 2001). Indeed, Figure 4A shows that in both Predictive and non-Predictive configurations, the maximal response is observed at posterior sensors. An iterative interaction between bottom-up and top-down processes involving higher order areas within 50 ms seems unlikely. Two alternatives can be considered. First, contextual information could be retrieved quickly in higher order areas not detected here and immediately fed back to the posterior part of the visual system. Medial temporal regions are probably involved in this task (Chun & Phelps, 1999; Greene et al., 2007; Manns & Squire, 2001; Park et al., 2004), but MEG may not be sensitive enough to detect activity in these deep structures, especially at such early latencies, when the signal-to-noise ratio is rather low. This interpretation fits well with a recent model of contextual guidance supported by behavioral data (Oliva & Torralba, 2001), in which global and local properties of the visual scene are extracted rapidly and in parallel (in a feed-forward manner) and constrain local processing and attentional deployment on the scene. 
Prior knowledge is incorporated in this model (Torralba, Oliva, Castelhano, & Henderson, 2006), and its operation on global representations of the visual input nicely parallels its potential implementation in the medial temporal lobe (where receptive fields span large portions of the visual field). The second alternative is that learned contextual associations are stored within early sensory areas themselves (Fuster, 1997), forming unconscious perceptual memories (Maia & Cleeremans, 2005; Slotnick & Schacter, 2006). These memory-driven alterations of early visual areas would in turn modify the unconscious feed-forward volley of neural processing (Hochstein & Ahissar, 2002). We would like to suggest that the two alternatives are not mutually exclusive; rather, we favor a mixture of the two. A modification of early visual processes by scene priors, rapidly extracted and fed back, nicely parallels our assertion that in contextual cueing the attentional cue is the scene itself, which guides sensory processing from its earliest steps. 
Conclusions
Unconscious knowledge derived from recent experience influences neural activity in visual areas as early as 50–100 ms, close to the entry point of visual information in the cortical system. This unconscious bias is thus likely to influence later processing in other visual and decisional areas (Bullier, 2001). Indeed, modulations of visual activity are observed in the MEG signal at later latencies. Intracranial recordings in humans performing the classical contextual cueing paradigm revealed modulations largely distributed in the ventral pathway after 200 ms (Olson et al., 2001). The effect of unconscious memory on early neural processing and on the following cascade of neural events may be the reason why it is so difficult to voluntarily override its influence (Jacoby, 1991; Mazzoni & Krakauer, 2006; Peterson & Kramer, 2001). 
Acknowledgments
We thank Antoine Ducorps, Denis Schwartz, and Dr. Pascale Pradat-Diehl for assistance with recordings and data analysis and Anabelle Goujon for discussion on the explicit memory tests. This research was supported by grants from the French ministry of research and the Agence Nationale pour la Recherche to C.T.B. M.C. is supported by a grant from the Délégation Générale pour l'Armement. 
Commercial relationships: none. 
Corresponding author: Maximilien Chaumon. 
Email: Maximilien.Chaumon@gmail.com. 
Address: LENA CNRS UPR640, Cognitive Neuroscience & Brain Imaging, 47 Bd de l'Hôpital 75013 Paris, France. 
References
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Brockmole, J. R., & Henderson, J. M. (2006). Recognition and attention guidance during contextual cueing in real-world scenes: Evidence from eye movements. Quarterly Journal of Experimental Psychology, 59, 1177–1187.
Bullier, J. (2001). Feedback connections and conscious vision. Trends in Cognitive Sciences, 5, 369–370.
Chun, M. M., & Phelps, E. A. (1999). Memory deficits for implicit contextual information in amnesic subjects with hippocampal damage. Nature Neuroscience, 2, 844–847.
Chun, M. M. (2000). Contextual cueing of visual attention. Trends in Cognitive Sciences, 4, 170–178.
Chun, M. M., & Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive Psychology, 36, 28–71.
Chun, M. M., & Jiang, Y. (1999). Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science, 10, 360–365.
Chun, M. M., & Jiang, Y. (2003). Implicit, long-term spatial contextual memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 224–234.
Dobbins, I. G., Schnyer, D. M., Verfaellie, M., & Schacter, D. L. (2004). Cortical activity reductions during repetition priming can result from rapid response learning. Nature, 428, 316–319.
Doniger, G. M., Foxe, J. J., Schroeder, C. E., Murray, M. M., Higgins, B. A., & Javitt, D. C. (2001). Visual perceptual learning in human object recognition areas: A repetition priming study using high-density electrical mapping. Neuroimage, 13, 305–313.
Fuster, J. M. (1997). Network memory. Trends in Neurosciences, 20, 451–459.
Ganis, G., & Kutas, M. (2003). An electrophysiological study of scene effects on object identification. Cognitive Brain Research, 16, 123–144.
Goujon, A., Didierjean, A., & Marmèche, E. (2007). Contextual cueing based on specific and categorical properties of the environment. Visual Cognition, 15, 257–275.
Gratton, G., Coles, M. G., & Donchin, E. (1983). A new method for off-line removal of ocular artifact. Electroencephalography and Clinical Neurophysiology, 55, 468–484.
Greene, A. J., Gross, W. L., Elsinger, C. L., & Rao, S. M. (2007). Hippocampal differentiation without recognition: An fMRI analysis of the contextual cueing task. Learning & Memory, 14, 548–553.
Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10, 14–23.
Henson, R. N., Rylands, A., Ross, E., Vuilleumier, P., & Rugg, M. D. (2004). The effect of repetition lag on electrophysiological and haemodynamic correlates of visual object priming. Neuroimage, 21, 1674–1689.
Hochstein, S., & Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36, 791–804.
Hopf, J. M., Luck, S. J., Girelli, M., Hagner, T., Mangun, G. R., & Scheich, H. (2000). Neural sources of focused attention in visual search. Cerebral Cortex, 10, 1233–1241.
Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory & Language, 30, 513–541.
Johnson, J. S., Woodman, G. F., Braun, E., & Luck, S. J. (2007). Implicit memory influences the allocation of attention in visual cortex. Psychonomic Bulletin & Review, 14, 834–839.
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55, 271–304.
Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27, 712–719.
Lamme, V. A., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
Maia, T. V., & Cleeremans, A. (2005). Consciousness: Converging insights from connectionist modeling and neuroscience. Trends in Cognitive Sciences, 9, 397–404.
Manns, J. R., & Squire, L. R. (2001). Perceptual learning, awareness, and the hippocampus. Hippocampus, 11, 776–782.
Mazzoni, P., & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. Journal of Neuroscience, 26, 3642–3645.
Müller, M. M., Teder-Salejarvi, W., & Hillyard, S. A. (1998). The time course of cortical facilitation during cued shifts of spatial attention. Nature Neuroscience, 1, 631–634.
Oliva, A., & Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42, 145–175.
Olson, I. R., Chun, M. M., & Allison, T. (2001). Contextual guidance of attention: Human intracranial event-related potential evidence for feedback modulation in anatomically early, temporally late stages of visual processing. Brain, 124, 1417–1425.
Park, H., Quinlan, J., Thornton, E., & Reder, L. M. (2004). The effect of midazolam on visual search: Implications for understanding amnesia. Proceedings of the National Academy of Sciences of the United States of America, 101, 17879–17883.
Peterson, M. S., & Kramer, A. F. (2001). Attentional guidance of the eyes by contextual information and abrupt onsets. Perception & Psychophysics, 63, 1239–1249.
Poghosyan, V., & Ioannides, A. A. (2007). Precise mapping of early visual responses in space and time. Neuroimage, 35, 759–770.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2, 79–87.
Slotnick, S. D., & Schacter, D. L. (2006). The nature of memory related activity in early visual areas. Neuropsychologia, 44, 2874–2886.
Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M., & Nobre, A. C. (2006). Orienting attention based on long-term memory experience. Neuron, 49, 905–916.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786.
Tzelepi, A., Ioannides, A. A., & Poghosyan, V. (2001). Early (N70m) neuromagnetic signal topography and striate and extrastriate generators following pattern onset quadrant stimulation. Neuroimage, 13, 702–718.
Vogel, E. K., Woodman, G. F., & Luck, S. J. (2005). Pushing around the locus of selection: Evidence for the flexible-selection hypothesis. Journal of Cognitive Neuroscience, 17, 1907–1922.
Wolfe, J. M., Klempen, N., & Dahlen, K. (2000). Postattentive vision. Journal of Experimental Psychology: Human Perception and Performance, 26, 693–716.
Woodman, G. F., & Luck, S. J. (1999). Electrophysiological measurement of rapid shifts of attention during visual search. Nature, 400, 867–869.
Figure 1
 
Paradigm. A. Subjects searched for the “T” without moving their eyes and reported its orientation (tilted left or right). The subject's response (around 1,400 ms on average) interrupted the visual display and triggered a feedback screen (+ or − for correct and incorrect responses, respectively). An absence of response after 4 s was followed by an “o” feedback. B. Experimental sequence. In the Predictive (P) configurations, a given array of Ls was associated with the same target position throughout the experiment (e.g., array P1 is associated here with a T in the upper left quadrant). In the non-Predictive configurations, the target location changed on each presentation of a given configuration (e.g., array nP1 here). Subjects performed the task on 12 P and 12 nP configurations randomly intermingled for about half an hour. Another set of 12 P and 12 nP configurations was then generated, and the task resumed for another half hour.
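The trial structure described in the Figure 1 caption can be sketched in a few lines of code. This is an illustrative sketch only: the variable names, the random assignment scheme, and the quadrant labels are assumptions for illustration, not the authors' actual stimulus-generation code.

```python
# Illustrative sketch of the Figure 1B design: 12 Predictive (P) layouts,
# each bound to a fixed target quadrant, and 12 non-Predictive (nP) layouts
# whose target quadrant is redrawn on every presentation.
# Names and structure are assumptions, not the authors' code.
import random

random.seed(1)
QUADRANTS = ["upper-left", "upper-right", "lower-left", "lower-right"]
N_LAYOUTS, N_PRESENTATIONS = 12, 12

# Each Predictive layout keeps the same target quadrant throughout
fixed_targets = {f"P{i}": random.choice(QUADRANTS)
                 for i in range(1, N_LAYOUTS + 1)}

trials = []
for presentation in range(N_PRESENTATIONS):
    block = []
    for i in range(1, N_LAYOUTS + 1):
        block.append((f"P{i}", fixed_targets[f"P{i}"]))     # predictive
        block.append((f"nP{i}", random.choice(QUADRANTS)))  # non-predictive
    random.shuffle(block)  # P and nP trials randomly intermingled
    trials.extend(block)

print(len(trials))  # 12 presentations x 24 layouts = 288 trials
```

Note that both layout types are repeated equally often; only the stability of the context-target pairing distinguishes the conditions, which is the control emphasized in the Discussion.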
Figure 2
 
Subjects learned the regularity of context-target associations. Difference in reaction times to Predictive and non-Predictive configurations before (presentations 1–4) and after learning, ± SEM. Reaction times differed between the two conditions only after learning. *** denotes p < 0.001.
Figure 3
 
Global root mean square (RMS) activity differs from 90–100 ms. A. Time course of the p value of paired t tests of the RMS activity between responses to Predictive and non-Predictive displays. A highly significant difference between the responses to Predictive and non-Predictive displays appears between 90 and 100 ms only at the end of the experiment, when the context-target associations have been registered and reaction times are shorter in the Predictive condition (Figure 2). Vertical scale is logarithmic. B. Average topographical map of the RMS response between 90 and 100 ms. The response to Predictive displays is stronger than to non-Predictive displays in this latency range. C. Difference between conditions at 90–100 ms. Error bars show SEM. ** denotes p < 0.005.
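The analysis summarized in the Figure 3 caption (global RMS across sensors, followed by a paired t test at each time point) can be sketched as follows. The array dimensions, the simulated data, and all variable names are illustrative assumptions; this is not the authors' actual pipeline.

```python
# Sketch of a global RMS + paired t-test analysis, as described in the
# Figure 3 caption. Data and dimensions below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_sensors, n_times = 14, 151, 300  # assumed dimensions

# Simulated evoked fields (subjects x sensors x time) for each condition
predictive = rng.normal(size=(n_subjects, n_sensors, n_times))
non_predictive = rng.normal(size=(n_subjects, n_sensors, n_times))

def global_rms(evoked):
    """Root mean square across sensors at each time point."""
    return np.sqrt(np.mean(evoked ** 2, axis=1))  # -> subjects x time

rms_p = global_rms(predictive)
rms_np = global_rms(non_predictive)

# Paired t test across subjects at every time point, yielding the
# p-value time course plotted in Figure 3A
t_vals, p_vals = stats.ttest_rel(rms_p, rms_np, axis=0)
print(p_vals.shape)  # one p value per time point
```

On real data, the time window where `p_vals` dips below threshold would correspond to the 90–100 ms interval reported in the caption.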
Figure 4
 
Evoked activity is affected as early as 50–100 ms after learning. A. Topographical maps (flattened top view) of the evoked fields averaged between 50 and 100 ms in the Predictive and non-Predictive conditions before (presentations 1 to 4) and after (presentations 9 to 12) learning took place. Two bilateral occipital and two bilateral temporal regions of interest (ROIs) were used for measures (outlined on the maps). Predictive and non-Predictive conditions differ only during presentations 9 to 12 on the occipital ROIs (bold outlined regions): right occipital, p < 0.005, and left occipital, p < 0.02. B. Time course of the evoked fields at left and right occipital sensors (LO33 and RO33), highlighted in red on the maps in A. Activity rises earlier in the Predictive condition. The light-green shaded area shows the 50–100 ms time window used for the maps in A. This early difference is followed by later ones, such as the 225–245 ms time window fully illustrated in Figure 5.
Figure 5
 
Later steps of brain processing are also affected by the learned context-target relations. Topographical maps (flattened top view) of the evoked fields averaged between 225 and 245 ms in the Predictive and non-Predictive conditions before (presentations 1 to 4) and after (presentations 9 to 12) learning took place. The difference between Predictive and non-Predictive conditions is significant over the right occipito-temporal region of interest shown on the maps during presentations 9 to 12 (t test: p < 0.04).
Figure 6
 
Subjects had no explicit access to the knowledge acquired during learning. A. Old/New test. Subjects had to decide which of the two displays had been presented before. B. Completion test performed on Predictive distractor configurations, without any target. Subjects had to guess in which quadrant the target was located. C. Two-targets search test. Subjects had to decide which of the two targets was at the learned location. Subjects were at chance in all three tests.