Sensory cancellation of self-movement facilitates visual motion detection

Níall Lally, Benjamin Frendo, Jörn Diedrichsen

Journal of Vision, December 2011, Vol. 11(14):5. doi:10.1167/11.14.5
Abstract

The nervous system continuously predicts the sensory consequences of self-generated actions. These predictions can be used to cancel self-generated sensory information. It has been hypothesized that this cancellation process may serve to increase the perceptual sensitivity to unpredicted external events. Here, we provide the first empirical evidence for this idea. Participants were required to detect coherent motion in a random dot motion display. The task was made more difficult by a set of superimposed distractor dots that had to be ignored. When these distractors moved congruently with an active arm movement, perceptual performance in detecting the coherent motion was superior compared to a condition in which the distractor motion did not match the arm movement. To test whether this difference was due to sensory cancellation of matching distractors, or to the attentional enhancement of non-matching distractors, we introduced a control condition without any overt movement. Our results indicate that improvements in the detection of visual motion are indeed caused by sensory cancellation of self-generated events. In conjunction with other recent results, our data therefore suggest that the nervous system is able to attenuate or facilitate self-generated visual stimuli in a task-dependent manner.

Introduction
While awake, we are subjected to a constant stream of sensory information, a sizeable portion of which is caused by our own action. From this stream, the nervous system must extract sensory information that is task-relevant and ignore unimportant information. Indeed, a wealth of evidence suggests that our nervous system constantly predicts the sensory consequences of our own actions and, subsequently, attenuates or cancels the self-generated components of this sensory input stream. This concept of sensory cancellation has been illustrated in a number of different experimental settings. One prominent example is the cancellation of visual motion information during saccadic (Bridgeman, Van der Heijden, & Velichkovsky, 1994; Sperry, 1950; von Holst & Mittelstaedt, 1950) and pursuit eye movements (Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001). Furthermore, a number of studies have demonstrated that auditory (Aliu, Houde, & Nagarajan, 2009; Martikainen, Kaneko, & Hari, 2005; Sato, 2008) and tactile (Bays, Wolpert, & Flanagan, 2005; Blakemore, Wolpert, & Frith, 1998; Hesse, Nishitani, Fink, Jousmaki, & Hari, 2010; Shergill, Bays, Frith, & Wolpert, 2003; Tsakiris & Haggard, 2003; Weiskrantz, Elliott, & Darlington, 1971) stimuli are perceived as less intense when caused by a self-generated action. The latter phenomenon is thought to explain the anecdotal observation that it is hard to tickle oneself (Weiskrantz et al., 1971). 
One proposed functional role of sensory cancellation is that it may improve perceptual performance in the detection of other, external events (Bays & Wolpert, 2007). While often hypothesized, this idea has, to our knowledge, never been tested empirically. Here, we test it by comparing the ability to detect a visual stimulus when visual distractors are linked to a self-generated arm movement with the ability when the distractors do not match the arm movement. We gave participants the task of detecting coherent dot motion embedded in a random dot motion display (Newsome & Pare, 1988). The task was made harder by superimposing, on top of the random dot motion display, a cloud of distractor dots that moved coherently with each other along a figure-eight-shaped (Lissajous) trajectory. Participants were instructed to ignore this movement and to attend only to whether the background dots assumed a consistent linear motion direction. In one condition, participants moved their arm, guided by a robotic device, along the same trajectory as the distractor dots. If the motor system uses an internal prediction to attenuate undesired visual input, then it should be able to suppress the competing distractor motion signals and thereby become more sensitive to the target motion. Additionally, we tested the possibility that the nervous system amplifies visual stimuli that conflict with the predicted outcomes. We therefore introduced an extra condition in which the visual motion of the distractor dots did not match the arm movement. 
In a closely related study, Christensen, Ilg, and Giese (2011) recently showed that self-generated visual motion stimuli can be enhanced rather than attenuated (see also Craighero, Bello, Fadiga, & Rizzolatti, 2002; Miall et al., 2006; Repp & Knoblich, 2007; Wohlschlager, 2000). In their study, participants were required to detect biological dot motion in an array of moving distractors. When the to-be-detected motion was congruent with a self-produced arm movement, participants were better at detecting this motion than when passively watching the stimulus. Thus, in combination with this study, a finding of sensory cancellation of self-generated visual motion in our study would indicate that the visuomotor system can use sensory motor predictions to attenuate or facilitate the perception of self-generated stimuli in a task-dependent manner. 
Methods
Participants
Thirteen healthy right-handed participants were recruited (8 males, mean age = 25.2 years). All volunteers had normal or corrected-to-normal vision. Experimental and consent procedures were approved by the Ethics Committee of the School of Psychology, Bangor University (United Kingdom). 
Apparatus and stimuli
Participants sat with their heads supported by a headrest and looked into two mirrors (one for each eye) positioned to reflect two 24″ LCD monitors (Takahashi, Diedrichsen, & Watt, 2009). With this setup, the visual stimuli could be presented in 3D and were calibrated to appear in front of the participant in the plane of hand motion, which was perpendicular to the line of sight at a distance of 50 cm. The apparatus prevented vision of the hand and the surrounding equipment. A robotic arm (Phantom Premium 3.0, SensAble Technologies, USA) was used to measure the arm movement of the participant and to guide the hand along a predetermined movement path. The visual stimulus consisted of two kinds of dots: 80 target and 80 distractor dots. All dots were identical in color and size and moved within the same circular area (diameter = 16 cm). 
The target dots moved in different directions, uniformly distributed between 0 and 360 deg, such that the vectorial sum of the motion directions of all target dots was zero at all times. Each dot moved until it hit the boundary of the circular area, where it disappeared and then reappeared at a random position on the boundary. All target dots moved at a constant speed of 9.56 cm/s, and the average lifetime of a target dot was ∼850 ms. At a random time within the 7-s trial (between 1 s and 5.5 s, to avoid target motion at the very beginning or end of the trial), the target dots generated over the next 1350 ms began to assume a single, predetermined movement direction. Each new dot was assigned this movement direction with a probability corresponding to the coherence level; otherwise, it was assigned a direction drawn from a uniform distribution. As new dots continuously appeared, the coherence level of the display increased continuously over 1350 ms to the target level and then decreased back to random motion. The task of the participants was to detect this coherent motion and report its direction after the trial. 
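To make the per-dot coherence rule concrete, the following minimal sketch illustrates how a newly spawned target dot might be assigned a direction (our own illustration, not the authors' code; the zero-sum balancing of the baseline directions is omitted):

```python
import random

def new_dot_direction(coherence, target_dir_deg, in_coherent_window):
    """Direction (deg) for a newly spawned target dot.

    During the 1350-ms coherent window, each new dot adopts the
    predetermined target direction with probability `coherence`;
    otherwise (and at all other times), its direction is drawn
    uniformly from 0-360 deg.
    """
    if in_coherent_window and random.random() < coherence:
        return target_dir_deg
    return random.uniform(0.0, 360.0)
```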
The distractor dots always moved as a unit with the same direction and speed, in a figure-eight motion, described by two sinusoids (Figure 1). One sinusoid had a period of 3.5 s for a full revolution while the other had a period of 1.75 s. The peak-to-peak amplitude of the sinusoids was 14 cm. The orientation of this Lissajous figure was randomly rotated to a different angle on every trial. The array of distractor dots covered the visible area at all times. Participants were instructed to ignore the distractor dots as much as possible during the task. 
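The figure-eight path can be written as two sinusoids with a 2:1 frequency ratio. A minimal sketch with the periods and amplitude stated above (the relative phase of the two sinusoids and the per-trial rotation, which we omit, are our assumptions):

```python
import math

def lissajous_position(t, amp=0.07, slow_period=3.5):
    """Figure-eight (Lissajous) position in meters at time t (s).

    One sinusoid has a 3.5-s period, the other a 1.75-s period;
    the 14-cm peak-to-peak amplitude corresponds to amp = 0.07 m.
    """
    x = amp * math.sin(2.0 * math.pi * t / slow_period)
    y = amp * math.sin(2.0 * math.pi * t / (slow_period / 2.0))
    return x, y
```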
Figure 1
 
Experimental paradigm. (A) Time course of one trial. Trials with movements started with a synchronization phase, in which the robot guided an arm movement along a Lissajous figure of random orientation. During the random dot phase, a display of moving dots was shown. The target dots moved independently in random directions but assumed coherent motion at a random time during this phase. A number of distractor dots, shown in red here for clarity but white in the actual experiment, moved superimposed onto the random dot motion with a coherent Lissajous figure movement. At the end of the trial, participants were required to indicate the direction of the coherent dot motion by controlling an arrow with their hand. The length of this arrow indicated the confidence level: an arrow as long as the widest circle indicated complete confidence. (B) The five experimental conditions.
Procedure
To start a trial, participants were required to move a red dot (corresponding to their hand position) to a starting point. For all trials involving movement, a 4-s synchronization phase followed (Figure 1A, left panel). A sphere on the screen moved in a figure-eight pattern generated from two sinusoids, as described for the distractor dots (see above). During the task, the robot guided the arm along this trajectory by simulating a spring (100 N/m) around the moving target point. Participants were instructed to follow the guidance of the robotic arm as accurately as possible. The average interaction force (the force between the participants' hand and the robotic arm) was measured throughout the task and presented on-screen as feedback after each block of trials to help minimize these forces during training. Zero interaction force would imply that the participants moved exactly along the prescribed trajectory, without any robot guidance. On average, the contact force was 0.7 N, implying an average deviation from the trajectory of 7 mm (0.7 N divided by the 100 N/m spring stiffness). Most of this deviation was due to a small lag of the hand behind the robot trajectory. 
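As an illustration of the spring guidance described above (a sketch under the stated 100 N/m stiffness; any damping term of the actual controller is omitted):

```python
def guidance_force(hand_pos, target_pos, stiffness=100.0):
    """Spring-like guidance force (N) pulling the hand toward the
    moving target point; positions in m, stiffness in N/m.

    At this stiffness, the reported 0.7 N average interaction force
    implies a 7 mm average deviation from the prescribed path.
    """
    return [stiffness * (t - h) for t, h in zip(target_pos, hand_pos)]
```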
The synchronization phase blended seamlessly into the main behavioral task. During the subsequent 7 s, the random dot display was presented (Figure 1A, middle panel). For a short period, the target dots assumed a coherent motion direction (see Apparatus and stimuli section). The participants' task was to detect this increase in coherence and to report the angular direction of the motion. After the trial was completed, participants moved the robotic arm to adjust an arrow on the screen to report the direction of the motion (Figure 1A, right panel). They also reported the confidence level of their judgment: An arrow reaching all the way to the large circle (radius = 60 mm) implied complete confidence, while an arrow close to the smaller circle (radius = 20 mm) implied that the participant was guessing. Participants indicated that they were satisfied with the adjusted arrow by pressing a small button on the handle of the robotic device. Feedback on average accuracy (the proportion of trials in which the reported direction was within ±35 deg of the true direction) and on average contact force was displayed on-screen at the end of each block. 
In total, there were five conditions in this experiment (Figure 1B). (1) In the move–match condition, participants started to move their hand concurrently with the robot along a Lissajous figure trajectory and then continued this hand movement throughout the random dot phase. In this condition, the distractor dot motion reflected the actual hand movement; thus, deviations from the Lissajous figure resulted in perfectly correlated visual feedback. However, these deviations were quite small, since participants were very good at following the prescribed trajectories. (2) In the move–mismatch condition, the distractor dots followed a Lissajous figure that was phase-shifted from the hand motion by a random time interval drawn from one of the following ranges: 218–656 ms, 1218–1656 ms, 2218–2656 ms, or 3218–3656 ms (see the sketch below). This ensured that the imposed delay resulted in a clear phase offset between visual and hand motion, while avoiding simple phase reversals that would appear as a mirroring or rotation of the display. The time offset was changed from trial to trial, ensuring that participants could not adapt to the time delay. Furthermore, the visual feedback was rotated by a random amount with respect to the hand movement. (3) In the no move–distractor condition, participants held their hand still while the distractor dots moved coherently along a Lissajous figure with a random orientation. This condition therefore served as a baseline for evaluating how much the distractors influenced motion detection in the absence of any movement-based prediction. (4, 5) To test whether the movement itself influenced performance, participants either made arm movements (move–no distractor) or kept their arm still (no move–no distractor) while only the target dots were presented on-screen. If the movement itself required attention that distracted from the visual task, perceptual performance should be impaired in the move–no distractor condition. 
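A sketch of the delay sampling for the move–mismatch condition, following our reading of the four stated ranges:

```python
import random

def sample_mismatch_delay_ms():
    """Visual phase shift (ms) for the move-mismatch condition,
    drawn uniformly from one of the four ranges given in the text.

    These ranges guarantee a clear phase offset between hand and
    visual motion while avoiding delays that would appear as a
    simple mirroring or rotation of the display.
    """
    lo, hi = random.choice([(218, 656), (1218, 1656),
                            (2218, 2656), (3218, 3656)])
    return random.uniform(lo, hi)
```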
Design
The experiment was split into two 1-h sessions, scheduled at the same time of day on consecutive days. On the first day, participants underwent a training phase comprising five blocks (85 trials in total). In the first block (10 trials), participants were trained to make smooth hand movements guided by the robotic arm. On-screen feedback was given after each trial, indicating the amount of interaction force produced against the robotic device; by minimizing this value, participants improved the accuracy with which they followed the prescribed trajectory. The second block (15 trials) introduced participants to the baseline behavioral task, using the no move–no distractor condition. The coherence of the random dot motion was set to 1 to ensure that participants learned to recognize the increase in the coherence of the target dots when it occurred. The following two blocks (15 trials each) introduced the move–match and move–mismatch conditions, respectively. The final training block involved 30 trials of the no move–distractor condition with the coherence level set to 0.6. Based on the performance accuracy in this block, we adjusted the coherence for the main experiment to maintain an average accuracy of approximately 70% (see below). 
After the training phase, the experiment proceeded in 20 blocks, four in each condition. All blocks had 30 trials, with the exception of the no distractor conditions, which only had 15 trials. The order of blocks was counterbalanced between participants such that all conditions occurred once in each set of 5 blocks. The first ten blocks were performed during the first session and the last ten blocks during the second session. 
Task difficulty was adapted before each set of five blocks by regulating the number of target dots that would move coherently, based on individual performance in the last no move–distractor block. This ensured that each condition was run once at each new coherence value. We decreased the coherence if performance in the no move–distractor condition was above 76% and increased it if performance was below 64%; the size of the step followed a fixed protocol. The no move–distractor condition was chosen as the benchmark for adjusting difficulty because we expected it to be of medium difficulty. 
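The step sizes of the fixed protocol are not specified in the text; the following sketch therefore uses a hypothetical fixed step purely to illustrate the adjustment rule:

```python
def adjust_coherence(coherence, accuracy, step=0.05):
    """Adjust motion coherence before each set of five blocks,
    based on accuracy in the last no move-distractor block
    (targeting ~70% correct).

    The step size here is a placeholder; the paper's protocol set
    it by a schedule that is not specified.
    """
    if accuracy > 0.76:
        coherence -= step
    elif accuracy < 0.64:
        coherence += step
    return min(max(coherence, 0.0), 1.0)
```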
In a control experiment, we tested a further 10 participants (6 females, mean age = 26.6 years) on the two no distractor conditions only. This experiment tested the hypothesis that the movement task may interfere with the perceptual task at higher levels of difficulty. As in the main experiment, we collected four blocks of 15 trials per condition. Here, we adjusted the coherence such that the percentage correct in the no move–no distractor condition was around 55%. The move–no distractor condition was then tested at exactly the same coherence level, and the order of the two conditions was counterbalanced across participants. 
Data analysis
The angular error of each trial was computed as the difference in degrees between the actual target direction and the direction reported by the participant. As a first measure for online feedback, we classified performance into correct and incorrect trials, depending on whether the absolute angular error was greater or smaller than 35 deg. 
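Because directions are circular, the angular error must be wrapped into (−180, 180] deg before the 35-deg criterion is applied. A minimal sketch of this computation (our own illustration):

```python
def angular_error_deg(reported_deg, target_deg):
    """Signed angular error in degrees, wrapped to (-180, 180]."""
    err = (reported_deg - target_deg) % 360.0
    return err - 360.0 if err > 180.0 else err

def is_correct(reported_deg, target_deg, criterion_deg=35.0):
    """Trial counts as correct if |error| is below 35 deg."""
    return abs(angular_error_deg(reported_deg, target_deg)) < criterion_deg
```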
We reasoned that two processes would determine performance: First, the participants may or may not have seen the random dot motion (detection probability). If they did not see the motion, we assume that they would simply guess an angle. Second, if they saw the motion, they would report an angle that is distributed around the actual target direction with some variability. Thus, both changes in detection probability and changes in the variability of report can influence the overall accuracy. 
To distinguish between these two possibilities, we fitted the distribution of angular errors for each person and each condition using a mixture model (e.g., see Bays, Catalao, & Husain, 2009). According to the model, participants saw the visual motion with probability δ. If they did not see it, they guessed a direction unrelated to the true direction, thereby leading to a uniform distribution of errors on the circle. If they saw the motion, the angular errors were assumed to be distributed around 0 with standard deviation σ. Since the standard deviation was relatively small, we assumed a wrapped normal distribution (N w) (Fisher, 1993). Thus, under this model, the probability of making the angular error x is 
$$p(x) = (1-\delta)\,\frac{1}{2\pi} + \delta\, N_w(0, \sigma^2). \qquad (1)$$
Using an expectation maximization algorithm, we iteratively fitted the two parameters δ and σ to the distribution (Figure 2). 
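A minimal sketch of such a fit (our own illustration; the authors' implementation is not given). Errors are in radians, wrapped to (−π, π]; the σ update uses the ordinary normal M-step, which is a good approximation when σ is small, as noted above:

```python
import numpy as np

def wrapped_normal_pdf(x, sigma, n_wraps=2):
    """Wrapped normal density with mean 0 on (-pi, pi], truncated
    to 2*n_wraps + 1 wraps (adequate for small sigma)."""
    k = np.arange(-n_wraps, n_wraps + 1)
    terms = np.exp(-(x[:, None] + 2.0 * np.pi * k) ** 2
                   / (2.0 * sigma ** 2))
    return terms.sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi))

def fit_mixture(errors_rad, n_iter=100):
    """EM fit of p(x) = (1 - delta)/(2*pi) + delta * N_w(0, sigma^2).

    Returns (delta, sigma): the detection probability and the
    standard deviation of the response distribution.
    """
    x = np.asarray(errors_rad, dtype=float)
    delta, sigma = 0.5, 0.5                      # crude starting values
    for _ in range(n_iter):
        seen = delta * wrapped_normal_pdf(x, sigma)
        guess = (1.0 - delta) / (2.0 * np.pi)
        r = seen / (seen + guess)                # E-step: P(detected | x_i)
        delta = r.mean()                         # M-step: detection probability
        sigma = np.sqrt((r * x ** 2).sum() / r.sum())  # M-step: response SD
    return delta, sigma
```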
Figure 2
 
Distribution of angular error in the (A) move–no distractor condition of participant 3 and (B) in the move–mismatch condition of participant 8. The dashed line indicates the fit of the mixture model. Parameter δ is the probability of detecting the stimulus (otherwise, the participant is guessing), while σ indicates the standard deviation of the response distribution.
To test our hypothesis that visual motion congruent with a self-generated movement would interfere less than incongruent visual motion, we conducted a one-sided t-test between the move–match and move–mismatch conditions. Given a positive result here, we then conducted two comparisons, one between the move–match and no move–distractor condition to test whether the difference was due to sensory cancellation of matching visual information and one between the move–mismatch and no move–distractor condition to test whether it was due to the amplification of mismatching visual information. These tests were Bonferroni-corrected for multiple comparisons. Because each of the comparisons had a clear directional prediction, we again used one-sided t-tests. 
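In, for example, SciPy, these planned comparisons could be run as follows (a sketch; the array names are hypothetical per-participant accuracy vectors):

```python
from scipy import stats

def one_sided_paired_ttest(a, b):
    """Paired one-sided t-test that condition a exceeds condition b
    (requires SciPy >= 1.6 for the `alternative` keyword)."""
    return stats.ttest_rel(a, b, alternative="greater")

# Hypothetical usage:
# t, p = one_sided_paired_ttest(move_match, move_mismatch)
# For the two follow-up comparisons, Bonferroni correction simply
# multiplies each p-value by 2.
```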
Results
Can a visual sensory prediction arising from a hand movement improve perceptual sensitivity to external stimuli? To answer this question, we compared performance in the detection of coherent dot motion when a cloud of distractor dots either moved congruently with the actual hand movement (move–match) or was randomly phase-shifted from the hand motion (move–mismatch). Confirming our hypothesis, we found significantly better performance in the move–match condition (M = 70.7%, SD = 9.4) than in the move–mismatch condition (M = 63.7%, SD = 10.1), t(12) = 4.585, p < 0.001 (Figure 3A, one-sided test). This finding suggests that a visual distractor that moves coherently with one's own movement interferes less with motion perception than a visual distractor that moves asynchronously with self-produced motion. 
Figure 3
 
Performance on the perceptual task in the three main experimental conditions (move–match, move–mismatch, no move–distractor) and the two control conditions without distractors (no move–no distractor and move–no distractor). The results of the additional control experiment, in which we lowered the coherence for the control conditions, are shown as white bars. (A) The proportion of trials with the response within 35 deg of the correct direction. (B) Standard deviation (SD) of the reported motion direction, given that the motion was seen. (C) Probability of detecting the target motion. Panels (B) and (C) show parameter estimates from the mixture model. (D) Average confidence rating (between 20 and 60 mm) of the directional judgment. Error bars indicate between-subject standard errors.
This performance difference could be due to two factors. A prediction associated with the hand movement could have been used to cancel or attenuate the distractors in the move–match condition. Alternatively, the mismatch between the hand and visual motion may itself have attracted attention and hindered perceptual performance. We therefore compared performance in the match and mismatch conditions to the control condition in which no movement occurred. This condition provides a baseline for how much the distractors impeded the visual task in the absence of any forward-model prediction. Performance in this condition (66.6%, SD = 11.3) was significantly worse than in the move–match condition, t(12) = 2.697, p = 0.019, but not significantly better than in the move–mismatch condition, t(12) = −1.437, p = 0.179 (one-sided t-tests with correction for two comparisons). Thus, the performance gains in the move–match condition may be attributed to sensory cancellation rather than to interference caused by non-matching visual feedback. 
Since the analysis involved making comparisons across movement and non-movement conditions, it was also necessary to address the secondary effects of movement itself. Making a movement may have required participants to divide attention, which may have decreased their performance. To address this, we included two control conditions in which participants either moved or did not move. No distractor dots were presented in either of these conditions. Performance in these tasks was consequently much better (87.4% and 87.3% for the no move–no distractor and move–no distractor conditions, respectively). Most importantly, however, the requirement to follow the robot motion with the hand during the perceptual task did not influence perceptual performance, t(12) = 0.071, p = 0.944. 
One final problem with this control comparison is that we may have missed an influence of the movement on the perceptual task because performance was relatively close to ceiling. We therefore tested 10 more participants in the two control conditions without distractors and this time adjusted the coherence level such that performance was much lower (∼55% accuracy). The results (Figure 3A, white bars) confirmed our conclusion that the hand movement itself did not impair the detection of coherent motion. The accuracies in the no move condition (53.5%) and in the move condition (55.6%) were not significantly different, t(9) = −0.574, p = 0.580. In light of this result, we can interpret the performance differences in the main experimental conditions as effects of visual sensory predictions. 
Thus, our results so far indicate that sensory predictions can improve perceptual performance by attenuating unwanted distractors. We then asked which aspect of the task improved through this filtering process. Increased performance in the move–match condition could have arisen in one of two ways: Sensory cancellation of the distractor dots could have increased the sensitivity to detect the target motion within the distractor display. Alternatively, the target motion may have been detected equally often, yet the attenuation of the distractor motion may have increased the accuracy of the motion report. To distinguish these possibilities, we modeled the angular errors of the participants as a mixture of a uniform distribution, for the proportion of trials in which the participants were guessing, and a wrapped normal distribution with zero mean and unknown variance, for the proportion of trials in which the participants saw the motion (see Methods section). From the distribution of each participant's errors, we estimated the probability of detection and the response variability separately. 
The response variability (Figure 3B) on trials in which participants saw the motion did not differ between conditions, F(4,48) = 0.814, p = 0.523. The estimated values of this parameter differed considerably between participants (ranging from 7.8 to 20.2 deg) but were very reliable across conditions within participants: The estimates of variability correlated highly (r = 0.82) across conditions, indicating good test–retest reliability. Thus, our failure to find any effect on the response variability for detected stimuli was not due to unreliable model fits. Rather, the accuracy results were due to a change in the probability of detecting the target motion (Figure 3C). We first confirmed the influence of the sensory prediction by showing again that there was a significant difference between the move–match and move–mismatch conditions, t(12) = 3.923, p = 0.001 (one-sided test). As before, the difference between the no move–distractor condition and the move–match condition was significant, t(12) = 2.286, p = 0.041, whereas the difference between the no move–distractor and the move–mismatch condition was not, t(12) = −1.254, p = 0.234 (both one-sided tests, corrected for multiple comparisons). This suggests that attenuation of the distractor dots through sensory motor prediction increased the chances of detecting the target motion but left the accuracy of the reports, once the motion was detected, unaltered. 
Consistent with the higher detection probability, participants also reported higher confidence (Figure 3D) in the move–match condition than in the move–mismatch condition, t(12) = 2.224, p = 0.023 (one-sided test). Confidence ratings in the no move–distractor condition were not significantly different from those in the move–mismatch condition, t(12) = 0.180, p = 0.860, and marginally lower than in the move–match condition, t(12) = 2.080, p = 0.060 (one-sided tests, corrected for multiple comparisons). The confidence on each trial also correlated with the model-based measure of the detection probability, suggesting that the report was based on actually seeing the target motion (mean correlation −0.68, SD = 0.13). 
Finally, we tested whether matching or mismatching visual feedback influenced the movements of the participants. As the robotic device guided the participants' movements, we can take the interaction force between the participants' hand and the robot handle as a measure of how well the participants followed the movement of the robot. If the mismatching visual feedback had influenced the guidance of movements, we would expect higher interaction forces in the move–mismatch condition than in the other movement conditions. However, we found that the interaction forces were overall quite small and well matched across conditions (move–match: 0.748 N, SD = 0.173 N; move–mismatch: 0.745 N, SD = 0.184 N; move–no distractor: 0.745 N, SD = 0.183 N). Thus, visual feedback did not seem to interfere with the ability of our participants to follow the movement of the robot (Rosenbaum, Dawson, & Challis, 2006). 
Discussion
Previous studies have shown that self-generated events are perceived as less intense than unexpected external events (Aliu et al., 2009; Bays et al., 2005; Blakemore, Wolpert et al., 1998; Hesse et al., 2010; Martikainen et al., 2005; Sato, 2008; Shergill et al., 2003; Tsakiris & Haggard, 2003). Our data show—to our knowledge for the first time—that sensory cancellation of self-generated stimuli can improve sensitivity to other visual stimuli. Participants were more likely to perceive coherent motion embedded in a random dot display when the motion of distracting stimuli perfectly matched their own arm movement. This was true both compared to a condition in which the visual stimulus did not match the movement and compared to a condition in which the arm did not move at all. Thus, the improvement can be attributed to the cancellation of the self-generated distracting stimuli. 
Our results complement recent findings (Christensen et al., 2011) showing that visual motion stimuli that are consistent with self-generated arm movements can be better detected among distractors than stimuli that are not related to the movement. In that study, however, participants were instructed to detect the dot motion associated with the self-generated movement, whereas in our task these stimuli needed to be ignored. Thus, depending on task instruction, the nervous system appears to be able to flexibly attenuate or facilitate the perception of self-generated visual motion. 
Confirming related results, our data also show that sensory motor predictions rely on an efference copy of a self-generated action. Across all our conditions, the distractor motion was regular and could therefore be predicted equally well from past perceptual information. The difference between our conditions thus cannot be attributed to perceptual predictability and therefore clearly shows that an efference copy is necessary for sensory cancellation. While the volunteers' actions were guided along an enforced path, the small interaction forces between the hand and the robot imply that most of the movement was self-generated. Whether fully free movements would lead to even larger cancellation effects needs to be tested in further studies. The importance of a self-generated action in perceptual processing is consistent with work on the learning of anticipatory postural or grip force adjustments (Blakemore, Goodbody, & Wolpert, 1998; Diedrichsen, Verstynen, Hon, Lehman, & Ivry, 2003), which also depend crucially on an efference copy, even when the perturbations are fully matched for predictability. 
Importantly, we found that the improvement in accuracy was due to an increased probability of detecting the target motion rather than an amplification of motion perception. The variability of the responses—if the target motion was detected—remained stable across conditions. This may indicate that the response variability in this task was mainly determined by other factors, such as movement variability or memory. Thus, our results suggest that the influence of sensory cancellation of distractors may be most evident when stimuli are near the perceptual threshold rather than when stimuli are presented above threshold. 
Finally, our results also show that a mismatch between movement and visual feedback did not make the distractors harder to ignore—at least not significantly. This is consistent with results for haptic sensory cancellation (Bays et al., 2005), where the perceived intensity of a force pulse that is simultaneous with the action is attenuated but not amplified when the feedback does not match the prediction in time. However, this null finding has to be interpreted with some caution. It is possible that, due to the large and constant visuomotor mismatch, the motor system may have labeled the distractor motion as having been caused by an external factor. Thus, the distractor dots would have been treated as externally generated motion by the system. If this conjecture were correct, we would predict that visual attention is attracted to a distractor that first moved congruently with the action and then transiently changed to a non-matching version. Such amplification would be functional because small sensory prediction errors may be indicative of a miscalibrated internal forward model rather than an externally caused event (Kording & Wolpert, 2004; Synofzik, Thier, & Lindner, 2006). 
Where in the nervous system does the integration of visual events and sensory predictions take place? A number of functional magnetic resonance imaging (fMRI) studies have suggested a possible role of the posterior superior temporal sulcus (Kontaris, Wiggett, & Downing, 2009; Leube et al., 2003) and the angular gyrus (Farrer et al., 2007; Tsakiris, Longo, & Haggard, 2010) in detecting the discrepancy between visual sensory motor predictions and actual visual feedback. These regions show increased blood-oxygen-level-dependent (BOLD) activity during actions with mismatching compared to matching visual feedback. These studies, however, leave open the question as to whether the differences in regional fMRI activity were due to attenuation of self-generated visual stimuli or to the boosting of mismatching visual stimulation. 
The neural source of sensory predictions itself, however, remains elusive, although it has been suggested that a visuomotor forward model is located in the cerebellum. Consistent with this idea, Lindner, Haarmeier, Erb, Grodd, and Thier (2006) studied the process of cancellation of visual motion induced by a pursuit eye movement and found that the BOLD signal in Crus I of the cerebellum correlated with the size of the predicted visual shift. Other studies, however, suggest that the overlearned prediction of visual consequence of own actions may not depend on the integrity of the cerebellum (Synofzik, Lindner, & Thier, 2008). Rather, the role of the cerebellum may lie in the updating of motor behaviors and predictions when the visuomotor mapping changes. This is consistent with the finding that patients with cerebellar lesions are profoundly impaired in the adaptation of visuomotor behaviors (Martin, Keating, Goodkin, Bastian, & Thach, 1996; Tseng, Diedrichsen, Krakauer, Shadmehr, & Bastian, 2007). 
In sum, our data demonstrate, to our knowledge for the first time, that sensory motor predictions can enhance sensory perception of external stimuli. In the context of visual motion detection, we show that participants can better ignore distracting stimuli if these move congruently with a self-generated arm movement. In conjunction with other recent work (Christensen et al., 2011), these findings suggest that the nervous system may be able to either cancel or facilitate self-generated visual motion in a task-dependent manner. 
Acknowledgments
*NL and BF contributed equally to this work. 
The work was supported by a grant from the National Science Foundation (NSF, BSC 0726685). 
Commercial relationships: none. 
Corresponding author: Jörn Diedrichsen. 
Email: j.diedrichsen@ucl.ac.uk. 
Address: Institute of Cognitive Neuroscience, Alexandra House, 17 Queen Square, London WC1N 3AR, UK. 
References
Aliu, S. O., Houde, J. F., & Nagarajan, S. S. (2009). Motor-induced suppression of the auditory cortex. Journal of Cognitive Neuroscience, 21, 791–802.
Bays, P. M., Catalao, R. F., & Husain, M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9(10):7, 1–11, http://www.journalofvision.org/content/9/10/7, doi:10.1167/9.10.7.
Bays, P. M., & Wolpert, D. M. (2007). Predictive attenuation in the perception of touch. In P. Haggard (Ed.), Attention & performance XXII: Sensorimotor foundations of higher cognition (pp. 339–358). Oxford, UK: Oxford University Press.
Bays, P. M., Wolpert, D. M., & Flanagan, J. R. (2005). Perception of the consequences of self-action is temporally tuned and event driven. Current Biology, 15, 1125–1128.
Blakemore, S. J., Goodbody, S. J., & Wolpert, D. M. (1998). Predicting the consequences of our own actions: The role of sensorimotor context estimation. Journal of Neuroscience, 18, 7511–7518.
Blakemore, S. J., Wolpert, D. M., & Frith, C. D. (1998). Central cancellation of self-produced tickle sensation. Nature Neuroscience, 1, 635–640.
Bridgeman, B., Van der Heijden, A. H. C., & Velichkovsky, B. M. (1994). A theory of visual stability across saccadic eye movements. Behavioral and Brain Sciences, 17, 247–258.
Christensen, A., Ilg, W., & Giese, M. A. (2011). Spatiotemporal tuning of the facilitation of biological motion perception by concurrent motor execution. Journal of Neuroscience, 31, 3493–3499.
Craighero, L., Bello, A., Fadiga, L., & Rizzolatti, G. (2002). Hand action preparation influences the responses to hand pictures. Neuropsychologia, 40, 492–502.
Diedrichsen, J., Verstynen, T., Hon, A., Lehman, S. L., & Ivry, R. B. (2003). Anticipatory adjustments in the unloading task: Is an efference copy necessary for learning? Experimental Brain Research, 148, 272–276.
Farrer, C., Frey, S. H., Van Horn, J. D., Tunik, E., Turk, D., Inati, S., et al. (2007). The angular gyrus computes action awareness representations. Cerebral Cortex, 18, 254–261.
Fisher, N. I. (1993). Statistical analysis of circular data. Cambridge, UK: Cambridge University Press.
Haarmeier, T., Bunjes, F., Lindner, A., Berret, E., & Thier, P. (2001). Optimizing visual motion perception during eye movements. Neuron, 32, 527–535.
Hesse, M. D., Nishitani, N., Fink, G. R., Jousmaki, V., & Hari, R. (2010). Attenuation of somatosensory responses to self-produced tactile stimulation. Cerebral Cortex, 20, 425–432.
Kontaris, I., Wiggett, A. J., & Downing, P. E. (2009). Dissociation of extrastriate body and biological-motion selective areas by manipulation of visual-motor congruency. Neuropsychologia, 47, 3118–3124.
Kording, K. P., & Wolpert, D. M. (2004). The loss function of sensorimotor learning. Proceedings of the National Academy of Sciences of the United States of America, 101, 9839–9842.
Leube, D. T., Knoblich, G., Erb, M., Grodd, W., Bartels, M., & Kircher, T. T. J. (2003). The neural correlates of perceiving one's own movements. NeuroImage, 20, 2084–2090.
Lindner, A., Haarmeier, T., Erb, M., Grodd, W., & Thier, P. (2006). Cerebrocerebellar circuits for the perceptual cancellation of eye-movement-induced retinal image motion. Journal of Cognitive Neuroscience, 18, 1899–1912.
Martikainen, M. H., Kaneko, K., & Hari, R. (2005). Suppressed responses to self-triggered sounds in the human auditory cortex. Cerebral Cortex, 15, 299–302.
Martin, T. A., Keating, J. G., Goodkin, H. P., Bastian, A. J., & Thach, W. T. (1996). Throwing while looking through prisms. Brain, 119, 1183–1198.
Miall, R. C., Stanley, J., Todhunter, S., Levick, C., Lindo, S., & Miall, J. D. (2006). Performing hand actions assists the visual discrimination of similar hand postures. Neuropsychologia, 44, 966–976.
Newsome, W. T., & Pare, E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211.
Repp, B. H., & Knoblich, G. (2007). Action can affect auditory perception. Psychological Science, 18, 6–7.
Rosenbaum, D. A., Dawson, A. M., & Challis, J. H. (2006). Haptic tracking permits bimanual independence. Journal of Experimental Psychology: Human Perception and Performance, 32, 1266–1275.
Sato, A. (2008). Action observation modulates auditory perception of the consequence of others' actions. Consciousness and Cognition, 17, 1219–1227.
Shergill, S. S., Bays, P. M., Frith, C. D., & Wolpert, D. M. (2003). Two eyes for an eye: The neuroscience of force escalation. Science, 301, 187.
Sperry, R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion. Journal of Comparative and Physiological Psychology, 43, 482–489.
Synofzik, M., Lindner, A., & Thier, P. (2008). The cerebellum updates predictions about the visual consequences of one's behavior. Current Biology, 18, 814–818.
Synofzik, M., Thier, P., & Lindner, A. (2006). Internalizing agency of self-action: Perception of one's own hand movements depends on an adaptable prediction about the sensory action outcome. Journal of Neurophysiology, 96, 1592–1601.
Takahashi, C., Diedrichsen, J., & Watt, S. J. (2009). Integration of vision and haptics during tool use. Journal of Vision, 9(6):3, 1–13, http://www.journalofvision.org/content/9/6/3, doi:10.1167/9.6.3.
Tsakiris, M., & Haggard, P. (2003). Awareness of somatic events associated with a voluntary action. Experimental Brain Research, 149, 439–446.
Tsakiris, M., Longo, M. R., & Haggard, P. (2010). Having a body versus moving your body: Neural signatures of agency and body-ownership. Neuropsychologia, 48, 2740–2749.
Tseng, Y. W., Diedrichsen, J., Krakauer, J. W., Shadmehr, R., & Bastian, A. J. (2007). Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology, 98, 54–62.
von Holst, E., & Mittelstaedt, H. (1950). Das Reafferenzprinzip [The reafference principle]. Naturwissenschaften, 37, 464–476.
Weiskrantz, L., Elliott, J., & Darlington, C. (1971). Preliminary observations on tickling oneself. Nature, 230, 598–599.
Wohlschlager, A. (2000). Visual motion priming by invisible actions. Vision Research, 40, 925–930.