Research Article  |   January 2011
Perceptual learning transfer between hemispheres and tasks for easy and hard feature search conditions
Author Affiliations
  • Marina Pavlovskaya
    Loewenstein Rehabilitation Hospital, Raanana, Israel
    Sackler School of Medicine, Tel Aviv University, Ramat Aviv, Israel
    marina@netvision.net.il
  • Shaul Hochstein
    Interdisciplinary Center for Neural Computation and Neurobiology Department, Life Sciences Institute, Hebrew University, Jerusalem, Israel
    shaul@vms.huji.ac.il
Journal of Vision January 2011, Vol.11, 8. doi:https://doi.org/10.1167/11.1.8
Abstract

Perceptual learning involves modification of cortical processes so that transfer to new task variants depends on neuronal representation overlap. Neuron selectivity varies with cortical level, so that the degree of transfer should depend on training-induced modification level. We ask how different can stimuli be, how far apart can their representations be, and still induce training transfer. We measure transfer across both long distances within the visual field, namely across cerebral hemispheres, as well as across perceptual dimensions, i.e., between detection of odd color and orientation. In Experiment 1, subjects learned feature search using eccentric arrays randomly presented in the right or left hemifield. Odd elements differed in color or orientation, depending on the presentation hemifield. Following training and performance improvement, the dimensions were switched between hemifields. There was little cross-hemifield or cross-task transfer for difficult cases, and the greater transfer found for easier cases could be across hemispheres and/or perceptual dimensions. Testing these two elements separately, Experiment 2 confirmed considerable transfer across task dimensions, training one dimension and testing another, and Experiment 3 confirmed such transfer across hemifields, training search on one side and testing on the other. Results support Reverse Hierarchy Theory (M. Ahissar & S. Hochstein, 1997, 2004) in that, for easier perceptual tasks involving and modifying higher cortical levels, considerable transfer occurs both across perceptual dimensions and across visual field location even across hemifields.

Introduction
Perhaps the essential issue in training-induced improvement is the degree of transfer of this improvement to new task conditions. This issue is essential if we want students to perform better under unforeseen circumstances. It is also an important issue in the attempt to understand the mechanisms and cortical sites where modifications occur, affording changed behavior. 
Numerous studies of perceptual learning have shown that sometimes training is extremely specific and affects performance mainly under the same conditions as used for training, while in other cases, training improvement may transfer significantly to new task conditions (Ahissar & Hochstein, 1993, 1997, 2000, 2004; Censor & Sagi, 2009; Fahle & Poggio, 2002; Hochstein & Ahissar, 2002; Jeter, Dosher, Petrov, & Lu, 2009; Karni & Sagi, 1991, 1993; Ramachandran & Braddick, 1973; Sireteanu & Rettenbach, 2000; Treisman, 1991; Treisman, Vieira, & Hayes, 1992; Xiao et al., 2008; Zhang et al., 2010). Importantly, this dichotomy—or continuum—is unrelated to the issue of whether training increases or decreases cortical activity, related perhaps to increased neural sensitivity to trained stimuli and to sparser network encoding, respectively (see, e.g., Kourtzi, Betts, Sarkheil, & Welchman, 2005). 
Transfer or specificity of training effects relate to the site of learning modification in the following way: If the new task being tested is based on the same neurons that were used for performance of the trained task, then we may expect that the training-induced modifications, which affected these neurons, will also affect—hopefully improve—performance of the new task, since it depends on these very same, modified neurons. If, on the other hand, the neurons that were modified are very specific, then we would expect very limited transfer of training effects to new tasks. 
It was found that within the visual hierarchy, lower level neurons are specific to stimulus parameters such as position in the visual field, orientation, color, and motion direction, while higher level neurons generalize over these stimulus parameters and generalize, instead, to image category (Hubel & Wiesel, 1962; Ungerleider & Mishkin, 1982; Zeki, 1978; see also Damasio, 1985; Felleman & Van Essen, 1991; Grill-Spector & Malach, 2004; Schwarzlose, Swisher, Dang, & Kanwisher, 2008; Vaina, 1994; Woolsey, 1981). Thus, it would be expected that training that depends on low-level neurons would not transfer to tasks with changed basic parameters, while if training modified high-level neurons, performance would be invariant over changes in basic dimensions (see, for example, Epshtein, Lifshitz, & Ullman, 2008; Li, Cox, Zoccolan, & DiCarlo, 2009; Stringer & Rolls, 2008; Zoccolan, Oertelt, DiCarlo, & Cox, 2009). 
A similar cortical level dichotomy is the subject of debate concerning trial-to-trial target priming in a feature search task (Treisman & Gelade, 1980). Maljkovic and Nakayama (1994, 1996, 2000) found that reaction times were considerably shorter when targets were identical in sequential trials, a phenomenon they called “priming of pop-out” or PoP. While this effect was short, lasting 15–30 s, it extended across 5–6 trials. These authors suggested that a visual short-term memory trace was involved. A later experiment found that priming depended on repetition of the full image configuration, suggesting that the mechanism may involve an episodic memory trace (Huang, Holcombe, & Pashler, 2004). On the other hand, in line with Wolfe's (1994) guided search model, which suggested presence of a multi-dimensional saliency map, Wolfe, Butcher, Lee, and Hyle (2003) suggested that the gain of features included in the current target are raised for future trials (see also Mozer, Shettel, & Vecera, 2006). Recently, it was found that such priming may be induced in a cross-task fashion (Lee, Mozer, & Vecera, 2009) suggesting a low-level attentional mechanism rather than involvement of a visual short-term memory trace. 
Can this difference between high-level and low-level representations and between bottom-up and top-down effects explain the different results quoted above for degrees of transfer vs. specificity in perceptual learning? One answer might be that, no, learning is always effected at the same level (usually seen as high level; e.g., Xiao et al., 2008; Zhang et al., 2010), so that on this basis alone its transfer characteristics should be constant. A second possibility is that learning follows the usual visual hierarchy route, including both low-level and high-level regions (Furmanski, Schluppeck, & Engel, 2004; Pourtois, Karsten, Rauss, Vuilleumier, & Schwartz, 2008; Schoups, Vogels, Qian, & Orban, 2001; Watanabe et al., 2002; see Pilly, Grossberg, & Seitz, 2010). Then, low-level learning should be specific and high-level learning should transfer across basic stimulus parameters. A third, opposing view, which claimed to clarify the specificity-transfer muddle in a single theoretical framework, is represented by Reverse Hierarchy Theory (Ahissar & Hochstein, 1997, 2004; Hochstein & Ahissar, 2002). 
Reverse Hierarchy Theory (RHT) proposed that training first affects high cortical level mechanisms and gradually, with top-down guidance, affects also low-level sites. Note that this suggestion has two aspects: learning can take place at different levels (and not, for example, only at high levels, or only at low levels, with and without top-down guidance), and the order is from high to low level. A corollary of this theory reflects the well-known phenomenon that it is best to train first with easier conditions and slowly move to more and more difficult conditions. Since high-level receptive fields are large and generalized, RHT suggests that initial—easier condition—training effects at high cortical levels transfer more to new conditions. Low-level receptive fields are specific to perceptual parameters (location, orientation, color, etc.) so that later training, useful for harder conditions, should be more limited to training conditions (Ahissar & Hochstein, 1997, 1998). 
Thus, according to Reverse Hierarchy Theory, we would expect more transfer of learning effects following training of an easy search task, but not as much following training of a hard search task (Ahissar & Hochstein, 1997, 2000, 2004; Hochstein & Ahissar, 2002; see Jeter et al., 2009; Xiao et al., 2008; Zhang et al., 2010 claiming transfer depends on training task precision—as in our tasks—or procedure rather than difficulty alone). Note that some improvement is always found after training, probably due to the high-level effect of subjects becoming familiar with the testing situation. The type of transfer that Ahissar and Hochstein tested was between different orientations, sizes, or locations. The finding of considerable specificity was surprising so that experimental emphasis was on how similar could two stimuli be, how close to each other can two stimuli be, and still training with one would not affect the other. 
However, once we become accustomed to the understanding that the basis of training improvement is that practice modifies cortical processes, we realize that we should have expected from the outset to find learning specificity, rather than generalization. If there is no overlap between the neurons used to perform two different tasks, then, of course, training on one of them—modifying processes involving one set of neurons—should have no impact on performance of the other task, which depends on different neurons. 
Thus, a straightforward prediction is that transfer will occur between mechanisms sharing neuronal resources, and not for independent processes (Xiao et al., 2008). The physiological finding that high-level receptive fields are larger and more generalizing over basic visual features led to the RHT prediction that tasks that depend on high-level mechanisms will be more generalized, while those depending on the very localized low-level neurons will be more specific. 
The question that now arises is therefore just the opposite of the original one, not how close can stimuli be and not affect each other, but how far apart can they be and training with one will still affect the other. A most distant pair of representations is those located in the two cerebral hemispheres. We now test whether training effects can transfer even between the two hemispheres—or equivalently between training with stimuli presented in one hemifield and subsequent testing using stimuli in the other hemifield. 
Previous tests of learning transfer, like most feature search studies, were performed with test arrays in the center of the test screen, so that stimulus arrays fell on the parafovea. This meant that even when target location was limited to one area during training and to another during test, the distance between training and test positions—or retinal locations—was not great. More recently, we introduced a new paradigm with more eccentric search arrays as well as search arrays limited to one hemifield and studied eccentricity effects under these conditions (Pavlovskaya, Ring, Groswasser, Keren, & Hochstein, 2001; see also Carrasco, Evert, Chang, & Katz, 1995; Carrasco & Yeshurun, 1998; Wolfe, O'Neill, & Bennett, 1998). We now exploit this new paradigm for testing transfer of learning effects between cortical hemispheres. 
Another set of very distant stimuli comprises those related to different visual dimensions, raising the question of whether there is transfer across tasks depending on different dimensions. That is, when subjects are trained to perform a color search task, will the improvement transfer to subsequent testing on an orientation search task, and vice versa? Only a few previous studies addressed the issue of transfer of learning effects following a change in the task being performed. Treisman et al. (1992) asked if experience with the form of an object would speed its use in a subsequent changed task condition and found little transfer between tasks involving even the same geometric shapes. Similarly, some transfer was found from direction training to a subsequent orientation discrimination task, but only when both were along the same axis, and not under other conditions, suggesting that use of the same neurons was critical for transfer (Matthews, Liu, Geesaman, & Qian, 1999). These findings suggest that cross-task or cross-dimension transfer may be very limited. We therefore now test the degree of transfer between search tasks involving very different dimensions, using "easy" spatial conditions (large differences between target and distracters). 
In summary, in the following experiments, we ask if search task training in one hemisphere will transfer to improved performance in the other hemisphere and if training in orientation search will transfer to performance in color search (and vice versa). Results of our first experiment are consistent with our expectation that there would be considerable transfer for easy tasks and less for difficult task conditions. However, the method used left an ambiguity as to what type of transfer was taking place in the easy task case: We trained subjects on orientation search in one hemifield and on color search in the other hemifield—being careful to train half of the subjects with orientation on the right and color on the left and half with the opposite sides. Then, we switched the sides of the tasks and tested for transfer. (Note that it is essential to present stimuli on both sides for otherwise subjects attend only to one side rather than spreading attention to the whole screen.) The ambiguity that arises is that the significant transfer that we found for the easy tasks, color in one hemifield to color in the other and orientation in one hemifield to orientation in the other, could be interpreted as transfer across tasks: color in one hemifield to orientation in the same hemifield and orientation in one hemifield to color in the same hemifield. Of course, the lack of considerable transfer for the hard task was unambiguous: This limited transfer meant that there is little cross-hemisphere or cross-task transfer for hard cases. 
Experiments 2 and 3 were designed to resolve this ambiguity. The second experiment directly tested the degree of transfer across tasks. Here we trained subjects in visual search with a single dimension, either color or orientation in both hemifields. Then, those subjects who had been trained with color were tested with orientation and those who had been trained with orientation were tested with color search. For the easy conditions, there was major transfer from task dimension to task dimension, and for the hard task, there was much less transfer. We conclude that there is considerable transfer of learning effects from task to task—but only if the training is with sufficiently easy task conditions. With hard conditions, despite the learning due to the training, there is little transfer to new dimensions. 
To isolate transfer across hemispheres, the third experiment directly tested interhemispheric learning transfer by using a paradigm designed to match two independent tasks with very similar average degrees of difficulty (Ahissar & Hochstein, 1993). In our case, however, these tasks were performed in the two different hemifields. The need to apply two very different tasks in the two hemifields arises by our having two requirements for testing cross-hemisphere transfer: (1) We need to train one hemisphere, i.e., we need to use lateralized stimuli for the task, training on one side and testing on the other. (2) We need to have subjects keep their gaze centralized and attention spread to allow the task to depend on one hemisphere alone. To this end, we need to ask them to perform a second task in the other hemifield. Thus, we trained subjects on a local detection (orientation pop-out task) in one hemifield and on global identification (vertical vs. horizontal array orientation) in the other hemifield. Following training, we switched the sides of the tasks. We found large transfer for the trained search task for easy cases. Since it is highly unlikely that there was transfer from the global identification task to the local pop-out task (and we demonstrate this in an auxiliary experiment), this finding demonstrates that indeed learning effects do transfer even across cortical areas as distant as across hemispheres. 
Methods
Subjects
Twenty-six healthy naive subjects (19 women, 7 men; ages 16–54 years) with normal or corrected-to-normal vision were tested in Experiments 1 (n = 9), 2 (n = 12), and 3 (n = 5). Subjects received remuneration for participation. 
Stimuli and procedure
General
We used a 7 × 7 search array containing a target element on half of the trials in one of the central 24 locations (excluding fixation), as demonstrated in Figure 1 (top row). The array was presented laterally, 5.5° right or left of fixation. The target differed from distracters in color (top right) or orientation (top left). In addition, this difference could be large, making the task easy (distracter color blue, target color red; distracter orientation 60°, target 20°), or the difference could be small, making the task hard (target: light blue color or 40° orientation; distracters as above). 
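As a concrete reading of this layout, the 24 candidate target locations can be taken as the central 5 × 5 sub-array of the 7 × 7 grid minus its center element. This interpretation, and all names below, are our own illustration, not the authors' code:

```python
def target_positions(array_size=7, core=5):
    """Candidate target cells: the central core x core sub-array of the
    full grid, excluding the center element -- 24 positions for a 7 x 7
    array (an assumed reading of the "central 24 locations")."""
    lo = (array_size - core) // 2  # offset of the central sub-array
    cells = [(r, c) for r in range(lo, lo + core)
                    for c in range(lo, lo + core)]
    cells.remove((array_size // 2, array_size // 2))  # drop the center cell
    return cells

positions = target_positions()  # 24 candidate cells
```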
Figure 1
 
Schematic diagram of local feature and global identification tasks. Examples of the orientation and color search task displays, with a 7 × 7 array of distracters, are presented in the top row. Targets differed from distracters in color (right) or orientation (left). In addition, this difference could be large making the task easy or small making the task hard (both are demonstrated here in the same array, but only one appeared per array in the experiment; see text). The global identification task was to determine the array orientation as horizontal or vertical (third and fourth rows; left and right panels, respectively). Arrays were 7 × 5 vs. 5 × 7 for easy tasks (third row) or 7 × 6 vs. 6 × 7 for hard tasks (fourth row). A mask comprising elements each with bars of various orientations (second row) was presented after each stimulus with a variable Stimulus-to-mask Onset Asynchrony (SOA). The mask for the color task had the same multi-line elements but with colored lines.
Arrays of Figure 1 (top row) include two targets each, one easy and one hard, for demonstration purposes only; in the experiment, there was always either one target or none (divided half–half in random order) and subjects were asked to respond “Yes” or “No” for target present or absent, respectively. Arrays were presented for 16 ms, followed by a mask (150 ms; Figure 1, second row) after a variable Stimulus-to-mask Onset Asynchrony (SOA; 20–180 ms). Easy and hard tasks were performed during separate interleaved sessions, generally 2 sessions per day. Subjects were trained and tested for up to 12 sessions of 1,000 trials divided into 50 blocks (each with a single SOA), with 20 trials per block (10 left; 10 right in random order). 
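The session structure just described can be sketched as follows. The SOA list and all names are illustrative; beyond one SOA per block, the paper does not specify how SOAs were assigned to blocks:

```python
import random

def make_session(soas, n_blocks=50, trials_per_block=20):
    """Build one session of 50 blocks x 20 trials.

    Each block uses a single SOA; within a block, 10 trials fall in the
    left hemifield and 10 in the right, in random order, and exactly
    half the trials contain a target.
    """
    session = []
    for b in range(n_blocks):
        half = trials_per_block // 2
        sides = ["left"] * half + ["right"] * half
        targets = [True] * half + [False] * half
        random.shuffle(sides)    # randomize hemifield order
        random.shuffle(targets)  # randomize target-present order
        session.append([{"soa_ms": soas[b % len(soas)], "side": s, "target": t}
                        for s, t in zip(sides, targets)])
    return session

session = make_session(soas=[20, 60, 100, 140, 180])
```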
We performed pilot studies before conducting the reported experiments to determine what conditions would be “easy” or “hard” for naive subjects. These categories were defined by the following criteria: (1) In the first session, “hard” condition performance would be close to chance (going from 55–60 to 60–65% correct for short to long SOAs) while “easy” condition performance would be always significantly better than chance (65% to 75% correct). (2) Training would lead to improvement, which is substantially complete after ∼4 sessions for “easy” conditions, while it is slower, continuing for 10 or more sessions, for “hard” conditions. Results with these chosen conditions indeed matched these criteria. 
Since performance did not reach 100% even for quite long SOAs, we use an average performance measure rather than assigning an arbitrary performance threshold. 
Experiment 1
Subjects were trained on orientation search in one hemifield and color search in the other hemifield. Following training, subjects were tested on the same color and orientation tasks, but with each trained task moved to the other (new) hemifield—testing for cross-hemisphere transfer. 
Experiment 2
Subjects were trained on visual search with a single dimension, either color or orientation in both hemifields. Then, those subjects who were trained with color were tested with orientation search and those trained with orientation were tested with color search—testing for cross-dimensional transfer. 
Experiment 3
We used two types of tasks for this experiment: The usual local orientation pop-out search task and a global array orientation discrimination task (see Ahissar & Hochstein, 1993). The bottom half of Figure 1 illustrates stimuli used to test performance of the global array orientation task, with the next-to-bottom row showing the easier version with 5 × 7 or 7 × 5 arrays and the bottom row demonstrating the more difficult version, with 6 × 7 or 7 × 6 arrays. Subjects were asked to report the global array orientation identifying it as horizontal or vertical (left and right panels, respectively). Note that (unlike the situation in the experiment of Ahissar & Hochstein, 1993) there was never an odd element in the array when subjects performed the global task. 
Subjects performed the local task in the left hemifield and the global task in the right hemifield (or vice versa for half the subjects). Sessions were alternately easy and hard. Following training, we switched the sides of the local and global tasks—testing for cross-hemisphere transfer. 
For all experiments, we used the STATISTICA-6 statistical package to perform ANOVA tests. Post hoc comparisons were executed with the Tukey HSD test for unequal N. 
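The analyses themselves were run in STATISTICA; as a sketch of the mechanics, a one-way repeated-measures F (for a single within-subject factor such as stage) partitions the sums of squares as below. The toy data are invented for illustration and bear no relation to the study's scores:

```python
import numpy as np

def rm_anova_F(data):
    """One-way repeated-measures ANOVA F statistic.

    data: (n_subjects, k_levels) array. The subject effect is removed
    from the error term, so F = MS_treatment / MS_error with
    df = (k - 1, (k - 1) * (n - 1)).
    """
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_treat = n * ((data.mean(axis=0) - grand) ** 2).sum()  # between levels
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_error = ss_total - ss_treat - ss_subj                 # residual
    return (ss_treat / (k - 1)) / (ss_error / ((k - 1) * (n - 1)))

# Toy scores: 3 subjects x 3 stages (e.g., initial, final, transfer)
scores = np.array([[1.0, 2.0, 4.0],
                   [2.0, 3.0, 4.0],
                   [3.0, 4.0, 5.0]])
F = rm_anova_F(scores)  # compare against the F(k-1, (k-1)(n-1)) critical value
```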
Note that in the terminology of Xiao et al. (2008), Experiment 1 uses “double training” and Experiments 2 and 3 use “sequential training.” Nevertheless, all our comparisons are within-experiment and between easier and more difficult conditions. 
Results
Experiment 1
We ask if search task training in one hemisphere will transfer to the other hemisphere. Subjects were trained in odd orientation search in an array presented in one hemifield and odd color search for arrays presented in the other hemifield. In each trial, one array was presented, randomly assigned to either the left or right hemifield, and randomly with or without an odd element target. Following training, subjects were tested on the same task, but with the hemifields switched so that in the location where the orientation task appeared, we now presented the color task, and vice versa. Results of this first experiment supported our expectation in that we found considerable learning transfer for easy conditions and not for difficult conditions. 
Figure 2 shows the experimental results: We compare performance for the initial session of 1,000 trials (light blue symbols and lines), the final post-training session (dark brown), and performance on the test session with switched hemifields (orange). This comparison is done for orientation feature (right) and for color feature search (left) and for easy conditions (top) and hard conditions (bottom). 
Figure 2
 
Experiment 1—Color (left, squares) and orientation (right, triangles) feature search performances as a function of test-to-mask Stimulus Onset Asynchrony (SOA) before (blue) and after (brown) training in the trained hemifield and after training with switched tasks (orange), i.e., with the color search task performed in the hemifield where orientation feature search was trained, and vice versa. Top graphs show results with easy conditions, as demonstrated in Figure 1 (top row) with targets at 20° orientation and red color. Bottom graphs are for hard conditions, as in Figure 1, with 40° orientation and light blue target. Note that there is training-induced improvement in all cases (compare light and dark blue data points). For the easy tasks (top graphs), this improvement is maintained after switching hemifields (orange), reflecting nearly complete transfer to new task conditions. For the hard tasks (bottom), there is considerable training specificity and much less transfer of improvement to new conditions. This differentiation is predicted by Reverse Hierarchy Theory, extending it to cross-hemisphere transfer (see text).
Note that post-switch test session performance is very close to the trained level—for either task—in the easy condition cases, but this post-switch test performance is nearly back to the initial pre-training level in the hard conditions. This dramatic difference between training effects is seen for both color and orientation tasks and is consistent across all SOAs. This differential learning effect is shown in Figure 3, where we compare the learning effects for easy (left) and hard (right) task conditions. The top graphs show the initial, post-training, and post-switch performances, averaged over the two feature tasks—color and orientation. The second row of graphs shows performance averaged over SOA. The inset shows the transfer ratio for each condition, indicating the fraction of the improvement for the trained task that transferred to performance after switching hemifields. This ratio is calculated as the difference between the average performance for the transfer task and for the original task before training, divided by the difference between performance for the original task after and before training. 
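The transfer ratio just defined translates directly into code; the percent-correct values below are hypothetical, chosen only to illustrate the two regimes:

```python
def transfer_ratio(initial, final, transfer):
    """Fraction of the training-induced improvement that survives the
    switch: (transfer - initial) / (final - initial), each term a mean
    percent correct averaged over SOAs."""
    return (transfer - initial) / (final - initial)

# Hypothetical mean performances (percent correct):
easy = transfer_ratio(initial=70.0, final=90.0, transfer=88.0)
hard = transfer_ratio(initial=58.0, final=78.0, transfer=60.0)
```

A ratio near 1 corresponds to near-complete transfer (the easy-condition pattern), while a ratio near 0 corresponds to near-complete specificity (the hard-condition pattern).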
Figure 3
 
Experiment 1. Training-induced improvement, with more transfer for easy tasks and more specificity for hard tasks. Performance is shown as a function of SOA (top graphs) for easy (left) and hard (right) tasks—averaging data for color and orientation tasks (data point color conventions as in Figure 2; shown as diamonds for averaged data). Bar graphs (middle row) show mean performance (across SOAs) before and after training and for transfer to the switched hemifield. Bottom row graphs show development of training effect with training session (first and last 4 or 6 training session results are shown, averaged across subjects and SOAs) and transfer test results.
We performed an ANOVA for 9 subjects with task (orientation, color), condition (easy, hard), stage (initial, final, transfer), and SOA (20–40, 60–80, 100–120, 140–180 ms) as repeated measures main factors and found significance for all: SOA, F(3,8) = 29.93, p < 0.001, with post hoc analysis (Tukey HSD test) showing that performance for the 20–40 ms SOA was poorer than for the other SOAs (Q(2,8) = 5.01, p < 0.01); condition, F(1,8) = 974.52, p < 0.001; task, F(1,8) = 68.86, p < 0.001 (orientation easier than color); and stage, F(2,16) = 153.58, p < 0.001. Furthermore, there was a significant 2-way interaction between condition and stage (F(2,16) = 21.88; p < 0.001). Post hoc analysis showed that for the easy condition, initial performance was poorer than final (Q(4,16) = 6.04, p < 0.01) or transfer (Q(4,16) = 5.92, p < 0.01) performance, with no difference between final and transfer (p = 0.22); for the hard condition, on the other hand, initial performance was poorer than final (p < 0.001), but transfer was also poorer than final (orientation: Q(4,16) = 5.89, p < 0.01; color: Q(4,16) = 5.63, p < 0.05) and no different from initial performance (orientation: p = 0.99; color: p = 0.32). 
In summary, there is a clear difference between the easy and the hard condition effects, as follows: First of all, performance is obviously better for easy than for hard conditions. In addition, training always has an effect, and post-training performance is always significantly better than the pre-training level. Nevertheless, there is also another major difference between the conditions, in that after switching hemifields, performance for the easy task remains near post-training level, while for the hard conditions, performance drops to near pre-training level. In terms of our predictions, we would say that easy case training largely transfers to new testing conditions, while hard case training is more specific to the training conditions. 
The bottom row of graphs in Figure 3 shows the learning dynamics for easy and hard conditions (left and right, respectively) and for orientation and color task performance (filled and empty symbols, respectively). Performance for the color task is poorer than for the orientation task, as might be expected for stimuli presented in the periphery of the visual scene, but the conclusions here and in the following experiments are consistent for both. The harder the task, the slower the training effect, i.e., the more training sessions required to achieve the full training effect. Furthermore, while transfer is nearly complete for easy tasks (left), we find nearly no transfer for hard conditions (right)—where performance following about 10 training sessions is similar to performance on the second or third training session. 
However, this training and testing methodology left an ambiguity as to what type of transfer was taking place in the easy task case. Recall that we trained subjects on orientation search in one hemifield and on color search in the other hemifield, taking care to train half of the subjects with orientation on the right and color on the left, and half with the opposite sides. Then, we switched the sides of the tasks and tested for transfer. The ambiguity is that the large transfer found for the easy tasks, color in one hemifield to color in the other and orientation in one hemifield to orientation in the other, could equally be interpreted as transfer across tasks: color in one hemifield to orientation in the same hemifield, and orientation in one hemifield to color in the same hemifield. Of course, the lack of considerable transfer for the hard task was unambiguous: it meant that there is neither cross-hemisphere nor cross-task transfer for hard cases. 
Experiment 2
Experiments 2 and 3 were designed to address this ambiguity by testing cross-hemisphere and cross-task transfers in separate experiments. To disambiguate the transfer found for easy cases, we repeated the search task but trained subjects in visual search with a single dimension, either color or orientation, in both hemifields (but not in the center of the screen, as had been done in other studies). Then, subjects who had been trained with color were tested with orientation, and those who had been trained with orientation were tested with color. 
Figure 4 demonstrates the results. For 12 subjects, an ANOVA showed significant main effects for SOA (F(3,10) = 19.82; p < 0.001), condition (F(1,10) = 640.73; p < 0.001), and stage (F(2,20) = 373.62; p < 0.001). Again, there was a significant interaction between condition and stage (F(2,20) = 6.97; p = 0.01), for which post hoc analysis showed that for easy conditions there was substantial cross-task transfer (transfer significantly different from initial, Q(4,16) = 7.05, p < 0.01, and not different from final, p = 0.28), while for hard conditions, transfer performance was significantly lower than final pre-transfer performance (Q(4,16) = 5.32, p < 0.01) and not significantly greater than initial performance for color to orientation (though it was for orientation to color). 
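The Q values reported in these post hoc comparisons are studentized range statistics. A minimal sketch of how one such pairwise Q is computed (condition mean difference divided by the standard error derived from the ANOVA error mean square) is given below; the function name and numeric inputs are hypothetical, for illustration only, not the study's values.

```python
import math

def tukey_q(mean_a, mean_b, ms_error, n):
    """Studentized range statistic for one pair of condition means.

    mean_a, mean_b: the two condition means being compared
    ms_error: error mean square from the repeated-measures ANOVA
    n: number of scores per condition (here, number of subjects)
    """
    return abs(mean_a - mean_b) / math.sqrt(ms_error / n)

# Hypothetical values: initial vs. transfer stage means, a small
# error mean square, and 12 subjects (illustrative only).
q = tukey_q(0.62, 0.87, 0.002, 12)
print(f"q = {q:.2f}")
```

The resulting q is then compared against the critical value of the studentized range distribution for the number of means and the error degrees of freedom at the chosen alpha level.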
Figure 4
 
Experiment 2. Subjects were trained with a single dimension, either color or orientation in both hemifields. Since training is with only one task, and testing only with another, transfer must be across task (green); see text.
Experiment 3
To disambiguate cross-hemisphere transfer, the third experiment tested the possibility of interhemispheric learning transfer directly, with a paradigm in which two independent tasks, depending on very different perceptual capabilities but of very similar average difficulty, were performed in different hemifields. 
We trained subjects on orientation detection in one hemifield and on global array alignment identification (vertical vs. horizontal array) in the other hemifield, as demonstrated in the bottom rows of Figure 1. Following training, we switched the sides of the tasks. We found considerable transfer for the trained dimension for easy cases, as shown in Figure 5, while for the hard conditions, performance dropped to near pre-training level. 
Figure 5
 
Experiment 3. Subjects were trained on orientation detection in one hemifield and identification of global array alignment (vertical vs. horizontal) in the other. Note transfer between hemispheres (orange).
For 12 subjects, an ANOVA showed significant main effects for SOA (F(3,35) = 132.23; p < 0.001), condition (F(1,35) = 399.27; p < 0.001), and stage (F(2,70) = 451.43; p < 0.001). Here, too, there was a significant interaction between condition and stage (F(2,70) = 4.06; p < 0.05), with post hoc analysis showing that there was always a difference between initial and trained pre-transfer performance (Q(4,16) = 4.69, p < 0.05), but that transfer was greater for easy than for hard conditions, for which there was no significant difference between transfer and initial performance (orientation: p = 0.13; global shape: p = 0.16). 
We can now conclude that this transfer is between hemispheres. Cross-task transfer is very unlikely in this case because the two tasks differ not just in feature (as in Experiment 1) but in the very nature of the task itself. To be sure, however, we performed an auxiliary experiment, training one group of subjects in feature search in both hemifields and another group in array alignment identification, also in both hemifields, and then switched tasks. There was no significant transfer to the new task in either case, as shown in Figure 6. 
Figure 6
 
Auxiliary experiment. Subjects were trained either on orientation detection in both hemifields or on identification of global array alignment (vertical vs. horizontal) in both hemifields and then tested on the alternate task in both hemifields. There was no significant transfer to the new task in either direction, even for the easy task.
In summary, transfer of learning effects can occur either across hemispheres or from task to task, but only if training is with sufficiently easy task conditions. With hard conditions, despite the learning due to training, there is little or no transfer to new dimensions or to the other hemisphere (Figure 5). 
Conclusions and discussion
We found a large degree of transfer for easy-condition feature search training and considerable specificity for hard-condition feature search training, as predicted. In addition, there are two important new findings in the current study. First, transfer extended across hemispheres: training with stimuli presented to one hemifield produced improvement not only for subsequent testing with stimuli in the same hemifield but also for stimuli in the other hemifield. This is the longest-distance transfer yet tested and found. Presumably, training and testing stimulated both hemispheres, or at least neurons with receptive fields covering both hemifields, even though the stimuli were placed well away from the vertical meridian. Note that training may enhance target identification and/or suppression (Ahissar & Hochstein, 1996; Karni & Sagi, 1991); our finding is that either or both of these transfer across hemifields for easier conditions. 
The second new finding was that performance of the same task with a novel dimension is also facilitated by training. That is, training on a color feature search task improved performance of subsequent orientation search, as long as the tasks were designed to be easy enough in their spatial and temporal paradigm. This is a novel finding (compare Treisman et al., 1992; Xiao et al., 2008) and suggests that, at the highest level, search is mediated by dimension-independent mechanisms. On the other hand, when the task was changed entirely, from feature search to array orientation discrimination or vice versa, there was no significant transfer. Thus, it is neither the experimental setting (knowing how to sit, watch, be attentive, etc.) nor the response mode (learning the timing, the keys to be pressed, etc.) that directs the learning; these aspects were insufficient to produce significant improvement, despite the presence of feedback for incorrect trials. Rather, the transfer reflects real learning of the task itself, and training effects transfer because the old and new tasks, though they differ in the dimension of the feature search, share sufficient aspects to engage the same cortical visual system neurons. That is, we expect that this transfer takes place on the basis of invariant recognition mechanisms (see Epshtein et al., 2008; Li et al., 2009; Stringer & Rolls, 2008; Zoccolan et al., 2009). 
There is recent discussion concerning the dependence of transfer on training methodology. Xiao et al. (2008) find considerable transfer when using a “double training” rather than a sequential training technique. Their results would suggest more transfer in our Experiment 1, where we use a simultaneous “double-training” procedure, and more specificity in our Experiments 2 and 3, where we use a sequential training mode. However, our comparisons are always between easier and harder task conditions, using the same training paradigm. We find little transfer for harder conditions in Experiment 1 and considerable transfer for easier conditions in Experiments 2 and 3. Thus, the easy–hard training/testing condition difference is upheld. It would be interesting to perform Xiao et al.’s (2008) task for more/less difficult conditions to see the effect on transfer. 
The findings are consistent with Reverse Hierarchy Theory in that there is a clear distinction between training with easy and with hard conditions. Reverse Hierarchy Theory proposes that this difference is between use of high cortical level and low cortical level mechanisms for performing the search task. High-level neurons have very large receptive fields and these have been reported to extend beyond the vertical meridian (Large, Culham, Kuchinad, Aldcroft, & Vilis, 2008; Nagy, Eördegh, & Benedek, 2003). 
Alternatives to Reverse Hierarchy Theory, implicit in the literature, include the possibility that all learning occurs at a single cortical level, usually seen as a high level, or that perceptual learning can occur at low levels without top-down guidance from high levels, perhaps following the direction of the forward rather than the reverse hierarchy (Furmanski et al., 2004; Hoffman & Logothetis, 2009; Pourtois et al., 2008; Recanzone, Schreiner, & Merzenich, 1993; Schoups et al., 2001; Spang, Grimsen, Herzog, & Fahle, 2010; Watanabe et al., 2002; see Pilly et al., 2010). Both alternatives are refuted by the current results: there are clearly two different types of learning, with and without considerable transfer to new conditions, for easy and for hard conditions, respectively, suggesting two rather than one site of training malleability. In addition, the type of training that takes place first, i.e., for easier tasks, is that which transfers more, i.e., occurs at a site where representations are more general. 
High-level representations are categorical and generalize over specific stimulus properties such as exact position, orientation, or color. These receptive fields represent feature differences—including presence of an odd element among a uniform array of elements—without signaling the nature of the feature upon which the difference is based. Thus, training at detecting an odd element will affect performance on subsequent tests even if a new feature is used. 
Acknowledgments
This research was supported by the Israel Science Foundation (ISF) and the US–Israel Binational Science Foundation (BSF), as well as the National Institute for Psychobiology in Israel (NIPI) to author MP. We thank Anne Treisman and Merav Ahissar for helpful discussions throughout the course of this study. 
Commercial relationships: none. 
Corresponding author: Shaul Hochstein. 
Email: shaul@vms.huji.ac.il. 
Address: Institute for Life Sciences, Hebrew University, Givat Ram, Jerusalem, 91904, Israel. 
References
Ahissar M. Hochstein S. (1993). Attentional control of early perceptual learning. Proceedings of the National Academy of Sciences of the United States of America, 90, 5718–5722.
Ahissar M. Hochstein S. (1996). Learning pop-out detection: Specificities to stimulus characteristics. Vision Research, 36, 3487–3500.
Ahissar M. Hochstein S. (1997). Task difficulty and the specificity of perceptual learning. Nature, 387, 401–406.
Ahissar M. Hochstein S. (1998). Perceptual learning. In Walsh V. Kulikowski J. (Eds.), Perceptual constancies: Why things look as they do (pp. 455–498). Cambridge, UK: Cambridge University Press.
Ahissar M. Hochstein S. (2000). The spread of attention and learning in feature search: Effects of target distribution and task difficulty. Vision Research, 40, 1349–1364.
Ahissar M. Hochstein S. (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8, 457–464.
Carrasco M. Evert D. L. Chang I. Katz S. M. (1995). The eccentricity effect: Target eccentricity affects performance on conjunction searches. Perception & Psychophysics, 57, 1241–1261.
Carrasco M. Yeshurun Y. (1998). The contribution of covert attention to the set-size and eccentricity effects in visual search. Journal of Experimental Psychology: Human Perception and Performance, 24, 673–692.
Censor N. Sagi D. (2009). Global resistance to local perceptual adaptation in texture discrimination. Vision Research, 49, 2550–2556.
Damasio A. R. (1985). Disorders of complex visual processing: Agnosias, achromatopsia, Balint's syndrome, and related difficulties of orientation and construction. In Mesulam M. M. (Ed.), Principles of behavioural neurology (vol. 1, pp. 259–288). Philadelphia, PA: Davis.
Epshtein B. Lifshitz I. Ullman S. (2008). Image interpretation by a single bottom-up top-down cycle. Proceedings of the National Academy of Sciences of the United States of America, 105, 14298–14303.
Fahle M. Poggio T. (2002). Perceptual learning. Cambridge, MA: MIT Press.
Felleman D. J. Van Essen D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
Furmanski C. S. Schluppeck D. Engel S. A. (2004). Learning strengthens the response of primary visual cortex to simple patterns. Current Biology, 14, 573–578.
Grill-Spector K. Malach R. (2004). The human visual cortex. Annual Review of Neuroscience, 27, 649–677.
Hochstein S. Ahissar M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36, 791–804.
Hoffman K. L. Logothetis N. K. (2009). Cortical mechanisms of sensory learning and object recognition. Philosophical Transactions of the Royal Society B, 364, 321–329.
Huang L. Holcombe A. O. Pashler H. (2004). Repetition priming in visual search: Episodic retrieval, not feature priming. Memory & Cognition, 32, 12–20.
Hubel D. H. Wiesel T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160, 106–154.
Jeter P. E. Dosher B. A. Petrov A. Lu Z.-L. (2009). Task precision at transfer determines specificity of perceptual learning. Journal of Vision, 9, (3):1, 1–13, http://www.journalofvision.org/content/9/3/1, doi:10.1167/9.3.1.
Karni A. Sagi D. (1991). Where practice makes perfect in texture discrimination: Evidence for primary visual cortex plasticity. Proceedings of the National Academy of Sciences of the United States of America, 88, 4966–4970.
Karni A. Sagi D. (1993). The time course of learning a visual skill. Nature, 365, 250–252.
Kourtzi Z. Betts L. R. Sarkheil P. Welchman A. E. (2005). Distributed neural plasticity for shape learning in the human visual cortex. PLoS Biology, 3,
Large M. E. Culham J. Kuchinad A. Aldcroft A. Vilis T. (2008). fMRI reveals greater within- than between-hemifield integration in the human lateral occipital cortex. European Journal of Neuroscience, 27, 3299–3309.
Lee H. Mozer M. C. Vecera S. P. (2009). Mechanisms of priming of pop-out: Stored representations or feature-gain modulations? Attention, Perception, & Psychophysics, 71, 1059–1071.
Li N. Cox D. D. Zoccolan D. DiCarlo J. J. (2009). What response properties do individual neurons need to underlie position and clutter “invariant” object recognition? Journal of Neurophysiology, 102, 360–376.
Maljkovic V. Nakayama K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657–672.
Maljkovic V. Nakayama K. (1996). Priming of pop-out: II. Role of position. Perception & Psychophysics, 58, 977–991.
Maljkovic V. Nakayama K. (2000). Priming of pop-out: III. A short-term implicit memory system beneficial for rapid target selection. Visual Cognition, 7, 571–595.
Matthews N. Liu Z. Geesaman B. J. Qian N. (1999). Perceptual learning on orientation and direction discrimination. Vision Research, 39, 3692–3701.
Mozer M. C. Shettel M. Vecera S. P. (2006). Top-down control of visual attention: A rational account. Advances in Neural Information Processing Systems, 18, 923–930.
Nagy A. Eördegh G. Benedek G. (2003). Extents of visual, auditory and bimodal receptive fields of single neurons in the feline visual associative cortex. Acta Physiologica Hungarica, 90, 305–312.
Pavlovskaya M. Ring H. Groswasser Z. Keren O. Hochstein S. (2001). Visual search in peripheral vision: Learning effects and set-size dependence. Spatial Vision, 14, 151–173.
Pilly P. K. Grossberg S. Seitz A. R. (2010). Low-level sensory plasticity during task-irrelevant perceptual learning: Evidence from conventional and double training procedures. Vision Research, 50, 424–432.
Pourtois G. Rauss K. S. Vuilleumier P. Schwartz S. (2008). Effects of perceptual learning on primary visual cortex activity in humans. Vision Research, 48, 55–62.
Ramachandran V. S. Braddick O. (1973). Orientation specific learning in stereopsis. Perception, 2, 371–376.
Recanzone G. H. Schreiner C. E. Merzenich M. M. (1993). Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys. Journal of Neuroscience, 13, 87–103.
Schoups A. Vogels R. Qian N. Orban G. (2001). Practicing orientation identification improves orientation coding in V1 neurons. Nature, 412, 549–553.
Schwarzlose R. F. Swisher J. D. Dang S. Kanwisher N. (2008). The distribution of category and location information across object-selective regions in human visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 105, 4447–4452.
Sireteanu R. Rettenbach R. (2000). Perceptual learning in visual search generalizes over tasks, locations, and eyes. Vision Research, 40, 2925–2949.
Spang K. Grimsen C. Herzog M. H. Fahle M. (2010). Orientation specificity of learning vernier discriminations. Vision Research, 50, 479–485.
Stringer S. M. Rolls E. T. (2008). Learning transform-invariant object recognition in the visual system with multiple stimuli present during training. Neural Networks, 21, 888–903.
Treisman A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652–676.
Treisman A. Vieira A. Hayes A. (1992). Automaticity and preattentive processing. American Journal of Psychology, 105, 341–362.
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
Ungerleider L. G. Mishkin M. (1982). Two cortical visual systems. In Ingle D. J. Goodale M. A. Mansfield R. J. W. (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
Vaina L. M. (1994). Functional segregation of color and motion processing in the human visual cortex: Clinical evidence. Cerebral Cortex, 4, 555–572.
Watanabe T. Nanez J. E. Koyama S. Mukai I. Liederman J. Sasaki Y. (2002). Greater plasticity in lower level than higher level visual motion processing in a passive perceptual learning task. Nature Neuroscience, 5, 1003–1009.
Wolfe J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe J. M. Butcher S. J. Lee C. Hyle M. (2003). Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. Journal of Experimental Psychology: Human Perception & Performance, 29, 483–502.
Wolfe J. M. O'Neill P. Bennett S. C. (1998). Why are there eccentricity effects in visual search? Visual and attentional hypotheses. Perception & Psychophysics, 60, 140–156.
Woolsey C. N. (1981). Cortical sensory organization. Clifton, NJ: Humana Press.
Xiao L. Q. Zhang J. Y. Wang R. Klein S. A. Levi D. M. Yu C. (2008). Complete transfer of perceptual learning across retinal locations enabled by double training. Current Biology, 18, 1922–1926.
Zeki S. (1978). Functional specialization in the visual cortex of the rhesus monkey. Nature, 274, 423–428.
Zhang T. Xiao L.-Q. Klein S. A. Levi D. M. Yu C. (2010). Decoupling location specificity from perceptual learning of orientation discrimination. Vision Research, 50, 368–374.
Zoccolan D. Oertelt N. DiCarlo J. J. Cox D. D. (2009). A rodent model for the study of invariant visual object recognition. Proceedings of the National Academy of Sciences of the United States of America, 106, 8748–8753.
Figure 1
 
Schematic diagram of local feature and global identification tasks. Examples of the orientation and color search task displays, with a 7 × 7 array of distracters, are presented in the top row. Targets differed from distracters in color (right) or orientation (left). In addition, this difference could be large making the task easy or small making the task hard (both are demonstrated here in the same array, but only one appeared per array in the experiment; see text). The global identification task was to determine the array orientation as horizontal or vertical (third and fourth rows; left and right panels, respectively). Arrays were 7 × 5 vs. 5 × 7 for easy tasks (third row) or 7 × 6 vs. 6 × 7 for hard tasks (fourth row). A mask comprising elements each with bars of various orientations (second row) was presented after each stimulus with a variable Stimulus-to-mask Onset Asynchrony (SOA). The mask for the color task had the same multi-line elements but with colored lines.
Figure 2
 
Experiment 1—Color (left, squares) and orientation (right, triangles) feature search performances as a function of test-to-mask Stimulus Onset Asynchrony (SOA) before (blue) and after (brown) training in the trained hemifield and after training with switched tasks (orange), i.e., with the color search task performed in the hemifield where orientation feature search was trained, and vice versa. Top graphs show results with easy conditions, as demonstrated in Figure 1 (top row) with targets at 20° orientation and red color. Bottom graphs are for hard conditions, as in Figure 1, with 40° orientation and light blue target. Note that there is training-induced improvement in all cases (compare light and dark blue data points). For the easy tasks (top graphs), this improvement is maintained after switching hemifields (orange), reflecting nearly complete transfer to new task conditions. For the hard tasks (bottom), there is considerable training specificity and much less transfer of improvement to new conditions. This differentiation is predicted by Reverse Hierarchy Theory, extending it to cross-hemisphere transfer (see text).
Figure 3
 
Experiment 1. Training-induced improvement, with more transfer for easy tasks and more specificity for hard tasks. Performance is shown as a function of SOA (top graphs) for easy (left) and hard (right) tasks—averaging data for color and orientation tasks (data point color conventions as in Figure 2; shown as diamonds for averaged data). Bar graphs (middle row) show mean performance (across SOAs) before and after training and for transfer to the switched hemifield. Bottom row graphs show development of training effect with training session (first and last 4 or 6 training session results are shown, averaged across subjects and SOAs) and transfer test results.