In daily life we frequently switch between visuomotor mappings without a moment's thought. To give a crude example, when we participate in traffic, the constraints and visual inputs are similar regardless of whether we are driving a motorcycle or a car. But depending on the vehicle, we need to apply completely different mappings from visual input to motor behavior in order to avoid accidents. Yet we never run into the problem of accidentally applying the visuomotor mapping associated with a motorcycle when we are driving a car, or vice versa. The visuomotor system has thus learned to switch between motor mappings as we switch vehicles. There is even evidence that merely seeing the vehicle may activate the appropriate motor mapping, as suggested by priming effects in object recognition that arise for objects affording a similar manner of interaction but not for objects that are merely visually similar (Helbig, Graf, & Kiefer, 2006). That is, once we have learned to drive a vehicle, seeing it may serve as a cue for activating the corresponding motor mapping.
Similarly, as most people who wear glasses will have experienced, when we first put on a new pair of glasses we tend to get dizzy, despite being able to see better. This dizziness arises because the glasses introduce geometric distortions in the retinal image, changing multi-sensory and sensorimotor interactions in ways we are not yet used to. After some time, however, the dizziness goes away as we adapt to these distortions, i.e., the sensorimotor interactions become normal again. And after just a few days of putting the glasses on and taking them off, adaptation to the glasses-on and glasses-off conditions becomes immediate: we no longer feel dizzy when putting the glasses on or taking them off, and sensorimotor interactions remain normal throughout. This means we have learned both the glasses-on and the glasses-off mapping and can switch between them automatically.
The purpose of this study is to investigate the learning process that underlies the establishment of, and switching between, multiple distinct visuomotor mappings. In particular, we ask what information is actually stored with respect to the mappings. As a first approach we use a pointing task, similar to the tasks used in prism adaptation studies. From the prism adaptation literature it is known that different visuomotor mappings can be learned and maintained simultaneously (e.g., Martin, Keating, Goodkin, Bastian, & Thach, 1996; McGonigle & Flook, 1978; Welch, Bridgeman, Anand, & Browman, 1993). However, it is not clear what is actually learned about the mappings that enables us to switch between them. In principle there are two possibilities. The first is that each mapping is stored individually, in an absolute sense, independent of other recently experienced mappings. This would mean that we could switch to a previously learned mapping regardless of what the current mapping is, even if the system has been perturbed substantially from its normal behavior by, for instance, having just adapted to an entirely new mapping. To relate this to the car example: you would be able to retrieve the car mapping regardless of whether you have just traveled by motorcycle, on roller blades, or by any other means of transport. The second possibility is that what is actually learned is not the individual mappings themselves, but only the ability to shift behavior by the amount consistent with the relative shift between the trained mappings. That is, we might have learned to go from motorcycle to car and vice versa, but not from roller blades to car or from roller blades to motorcycle. For a pointing task, learning the relationship between the mappings would mean that cued behavior depends on what the mapping was before the cue was presented. That is, if after training the system is perturbed from normal behavior by adapting to an entirely new mapping, the learned shift would be applied relative to this new current mapping rather than a specific absolute mapping being retrieved.
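The two hypotheses make different quantitative predictions once the system is perturbed to a new mapping before a trained mapping is cued. The following toy sketch makes the contrast concrete; the numerical values and helper names are purely illustrative assumptions of ours, not parameters of any actual experiment:

```python
# Toy model: each visuomotor mapping is summarized by a single lateral
# offset (in degrees) added to the visual target location when pointing.

MAPPING_A = 10.0   # first trained mapping (e.g., 10 deg rightward shift)
MAPPING_B = -10.0  # second trained mapping (e.g., 10 deg leftward shift)

def predicted_offset_absolute(cued, current):
    """Absolute hypothesis: the cue retrieves the stored mapping itself,
    regardless of the current state (current is ignored by design)."""
    return cued

def predicted_offset_relative(cued, current, other_trained):
    """Relative hypothesis: only the shift between the trained mappings
    is stored; the cue applies that shift to the current state."""
    return current + (cued - other_trained)

# After training, the system is perturbed to an entirely new mapping:
current = 25.0  # e.g., just adapted to a +25 deg shift

# Cuing mapping A now yields diverging predictions:
print(predicted_offset_absolute(MAPPING_A, current))             # 10.0
print(predicted_offset_relative(MAPPING_A, current, MAPPING_B))  # 45.0
```

Under the absolute hypothesis the cued behavior lands on the stored mapping itself (10 deg); under the relative hypothesis the trained 20-deg shift is applied on top of the current, perturbed state (45 deg), which is the dissociation the present design exploits.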
From the car/motorcycle example above it may seem unlikely that we would code the difference (i.e., the relative shift) between those mappings rather than the mappings themselves. But the car and motorcycle mappings also differ in quite complex ways, each vehicle coming with its own set of special skills that have to be learned before one can drive it. In contrast, when adapting to a new mapping in a pointing task we do not have to relearn the skill of “how to point” but only have to adjust “where to point” with respect to the sensory input. In this case there is no reason to store absolute mappings per se, and learning a relative shift between two mappings would mean having to learn only one shift instead of two separate mappings. Moreover, our sensory systems are especially adept at teasing apart relative differences, compared to determining absolute coordinates. For instance, for distance perception of sound sources it has been shown that we are much better at judging the distance between sound sources than at judging the absolute distance of a single source (e.g., Coleman, 1962). Similarly, visual motion in depth can only be perceived relative to a reference point or surface (see e.g., Erkelens & Collewijn, 1985; Regan, Erkelens, & Collewijn, 1986). Also, the coupling of visual lateral motion and perceived depth is based on relative depth order rather than absolute depth (Sohn & Seiffert, 2006). Thus, since our sensory systems specialize in determining relative relationships, it would make sense if sensorimotor interactions were also coded as relative shifts rather than in absolute coordinates.
Previous studies on the storage of multiple mappings have so far only examined the learning stage itself, for instance investigating how switching between two trained mappings becomes more efficient with training, rather than investigating what information has actually been learned (e.g., Kravitz & Yaffe, 1974; McGonigle & Flook, 1978). In those experiments, storage of the individual absolute mappings and learning of the relative relationship between the mappings would lead to the same predictions for learning to switch. In the current study we try to tease apart the absolute mapping and relative shift hypotheses. We do this by having participants, after they have learned two separate mappings, adapt to a new visuomotor mapping before cuing one of the two previously learned mappings. This reveals whether, upon contextual cuing, participants apply a shift relative to the current mapping or retrieve the absolute mapping.
In order to cue the separate mappings it is useful to have contextual cues that are not directly behaviorally relevant for the pointing task itself. We decided to pair each of the two trained mappings with a color cue by presenting the target objects in different colors during training. Participants were not informed about these cues or their meaning. The role of previously irrelevant cues in visuomotor adaptation has been investigated before. Most studies involving cues for visuomotor adaptation investigated whether the cue can elicit cue-contingent aftereffects after only a short amount of training. Significant cue-contingent aftereffects have indeed been found for simply wearing the prism glasses themselves, i.e., differential aftereffects were found depending on whether the glasses were being worn or had been taken off (Kravitz, 1972; Kravitz & Yaffe, 1974; Welch, 1971); for auditory tones (Kravitz & Yaffe, 1972); for head posture (Seidler, Bloomberg, & Stelmach, 2001); for gaze direction (Hay & Pick, 1966; Pick, Hay, & Martin, 1969); and for target color (Donderi, Jolicoeur, Berg, & Grimes, 1985). Such aftereffects are generally obtained very quickly but are also relatively short-lived, so from these studies it is not directly clear what this means for repeated adaptation to the mappings. There is, however, also strong evidence that cues can become sufficient for switching between mappings after more extensive training. For instance, Martin et al. (1996) found that after a two-week period of training with and without prism glasses, simply the act of putting on or taking off the glasses was a contextual cue for participants to adopt (or shift to) the associated visuomotor mapping.
The study of Martin et al. (1996) also provides a first insight into whether absolute mappings or relative shifts are stored. In that study, Martin et al. (1996), unbeknownst to the participants, reversed the prisms in the glasses at the end of training, effectively reversing the required mapping for these manipulated glasses while keeping the contextual cue, i.e., the glasses themselves, present. They found that the error participants made when first donning the manipulated glasses was twice as large as could be expected from the current prism shift alone, indicating that participants had indeed shifted behavior to the mapping that had been associated with wearing the glasses during the training phase. Furthermore, after prolonged adaptation to the manipulated glasses with the reversed prisms, i.e., after participants had learned a new mapping for the known glasses-on context, Martin et al. (1996) found significant and approximately equal aftereffects for both the glasses-on and the glasses-off context. This suggests that the two mappings cannot adapt independently but are always coupled by the same relative shift. However, in that study, reversing the prisms means that the known glasses-on context suddenly has a new mapping coupled with it. Logically, this could be interpreted as the environment in general having changed, thus going beyond the scope of the context, rather than just the conditions for the separate glasses-on context having changed. If so, the change in mapping is treated as an additional disturbance independent of context, and the observed change in behavior, i.e., the aftereffects occurring for both contexts, does not necessarily reflect how the context-specific information has been stored.

The advantage of using color cues is that we can easily add new colors for adaptation to new mappings after training has been completed. In this way the correspondence between the trained mappings and their contextual color cues remains intact when adapting to a new mapping after training. Switching to one of the trained contexts from such a new context is then informative as to what information has been stored with respect to each trained context individually (i.e., whether the context represents an absolute mapping or a relative shift in behavior regardless of the previous context).
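The doubled initial error reported by Martin et al. (1996) follows from simple arithmetic, sketched here with a made-up prism magnitude (the 15-deg value is a hypothetical illustration, not the value used in that study):

```python
# Suppose the trained prisms shift the visual image 15 deg, so the trained
# glasses-on mapping compensates with a +15 deg motor correction.
PRISM = 15.0
trained_correction = +PRISM   # retrieved when the glasses (the cue) go on

# With the prisms reversed, the glasses now shift the image the other way,
# so the correct compensation would be -15 deg.
required_correction = -PRISM

# Starting from neutral (glasses-off) behavior, the reversed prism alone
# would produce a 15 deg pointing error; retrieving the trained glasses-on
# correction on top of it doubles that error to 30 deg.
initial_error = trained_correction - required_correction
print(initial_error)  # 30.0, i.e., twice the current prism shift
```

The observed first-trial error being close to twice the prism shift is thus exactly what cue-triggered retrieval of the previously associated mapping predicts.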