Article | June 2015
Processing reafferent and exafferent visual information for action and perception
Journal of Vision June 2015, Vol.15, 11. doi:10.1167/15.8.11

Citation: Alexandra Reichenbach, Jörn Diedrichsen; Processing reafferent and exafferent visual information for action and perception. Journal of Vision 2015;15(8):11. doi: 10.1167/15.8.11.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously with the reaching task, such that, in the latter case, the perceptual processing stream could also profit from the specialized processing of reafferent information. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

Introduction
A substantial part of the sensory information processed by the brain is caused by its own motor actions (reafferences). Indeed, reafferent information is of utmost importance for the evaluation and control of actions. Mounting experimental evidence suggests that the central nervous system is equipped with specialized mechanisms to process reafferent information from the somatosensory (Bays, Flanagan, & Wolpert, 2006; Blakemore, Wolpert, & Frith, 2000; Shergill, Bays, Frith, & Wolpert, 2003; Voss, Ingram, Haggard, & Wolpert, 2006) and vestibular (Cullen, 2012; Cullen, Belton, & McCrea, 1991) systems. In the visual domain, such specialized processing has been demonstrated for information resulting from eye (Sperry, 1950; von Holst & Mittelstaedt, 1950) and hand (Christensen, Ilg, & Giese, 2011; Lally, Frendo, & Diedrichsen, 2011) movements. 
Humans make sensitive use of reafferent visual feedback of their moving hands to adjust ongoing reaches and thereby increase the accuracy of goal-directed movements (Sarlegna et al., 2004; Saunders & Knill, 2003, 2004). In a recent study (Reichenbach, Franklin, Zatka-Haas, & Diedrichsen, 2014), we demonstrated that allocation of covert attention to the moving cursor does not facilitate the corrective movement response. In contrast, overt attention is usually directed toward the target of reach (Neggers & Bekkering, 2000), and allocation of both overt (Diedrichsen, Nambisan, Kennerley, & Ivry, 2004) and covert (Reichenbach et al., 2014) attention to this location facilitates movement corrections to displacements of the reach target. Furthermore, the visuomotor system is better at extracting hand-related information than target-related information from a cluttered visual display (Reichenbach et al., 2014). These findings indicate the existence of a dedicated processing channel that directly links reafferent visual information to the representation of the hand outside the focus of visual attention, for which we suggested the term visuomotor binding. Similar to visual attention (Bisley & Goldberg, 2010; Bundesen, 1990; Posner, Snyder, & Davidson, 1980; Ptak, 2012), its function is to extract relevant visual information from crowded visual scenes while filtering out distracting information.
Here, we ask whether the enhanced processing of reafferent visual information is a special feature of motor control or whether the same advantage can also be observed when making perceptual judgments. It has been hypothesized that vision for action and vision for perception are processed by separate cortical pathways (Goodale & Milner, 1992), even though the extent of the segregation and possible interactions are still highly debated (for a recent review, see Cardoso-Leite & Gorea, 2010). Thus, visuomotor binding either may act only on the dorsal processing stream dedicated to motor control or may more generally influence all visual processes.
In order to distinguish these possibilities, we utilized a bimanual reaching task in which the locations of the hands were represented by visual cursors. To probe motor-related processing, we occasionally displaced the visual hand cursors during the reaching movement by a small distance perpendicular to the reaching direction (Sarlegna et al., 2004; Saunders & Knill, 2003, 2004). Motor responses to such visual perturbations are highly automatic and cannot be suppressed voluntarily (Franklin & Wolpert, 2008). Thus, the strength of the feedback correction provides a sensitive measure of the processing of visual information for motor control (Franklin & Wolpert, 2011; Pruszynski et al., 2011). To assess the filtering capacity, we then added to the visual display varying numbers of distractor objects that moved in a similar fashion as the cursors and could be laterally displaced. As a comparable perception task, we asked participants whether one of the cursors was displaced and, if yes, in which direction. One group performed these perceptual judgments based on matched visual information without concurrent movements. The other group performed the reaching and perceptual judgment tasks concurrently and therefore could base both motor and perceptual responses on the same reafferent information.
If only the motor processing stream had a specialized mechanism for extracting reafferent visual information, then the motor responses based on reafferent information should always outperform perceptual judgments regardless of whether these are based on exafferent or reafferent information. The performance in the two perception tasks should be indistinguishable. In contrast, if processing of reafferent visual information was enhanced in both processing streams to the same degree, then both perceptual judgments and motor responses should show the same improved filtering abilities compared with perceptual judgments based on exafferent information. 
Method
Neurologically healthy right-handed (Oldfield, 1971) volunteers were recruited from an internal experiment database (group 1: age = 21.5 ± 2.4 years, four females, six males; group 2: age = 20.5 ± 2 years, seven females, five males). All participants provided prior written informed consent and were paid for participating. They were naïve to the purpose of the experiment and debriefed afterwards. The experiments were conducted in compliance with the Declaration of Helsinki, and the University College London research ethics committee approved all experimental procedures. 
Apparatus and visual scene
Participants were seated comfortably in front of a virtual environment setup (Figure 1a), leaning slightly forward against a forehead and chin rest. For the reaching task, they made bimanual 20-cm reaching movements away from and toward the body while holding onto a custom-made robotic manipulandum (update rate = 1 kHz; position and force data sampled at 200 Hz) with each hand. The bimanual task was chosen to increase the visuomotor processing load and thereby maximize differences between experimental conditions. A liquid crystal display monitor (update rate = 60 Hz) mounted horizontally above the manipulanda prevented direct vision of the hands but allowed participants to view the visual scene on the monitor. An eye tracker (EyeLink 1000, SR Research Ltd., Kanata, Ontario, Canada) recorded the left eye's position at 200 Hz, and the data were processed in real time to provide feedback about eye fixation (cf. Reichenbach et al., 2014, for details).
Figure 1
 
Experimental method. (a) Virtual reality setup with two robotic manipulanda above which the visual display is mounted, and the eye tracker for fixation control. (b) Visual display for an example trial with two distractors per hand or hemifield. Note that either a cursor or a distractor was displaced during a trial. The red outlines of targets and cursors are rendered for illustration purposes only.
The visual display (Figure 1b) included cursors indicating the hand positions (filled white circles, 0.6-cm diameter), located vertically approximately 5 cm above the real positions of the hands. Reaching movements for each hand were executed from a start box to a target box (filled white squares, 0.6-cm size, 6-cm distance to the right and left from body midline) and alternated between up- and downward movements. Fixation had to be maintained on a white cross (0.5 cm), located at body midline at a height such that all visual perturbations occurred at the same distance from central fixation. Cursor-like distractors started at x-positions uniformly distributed within ±4 cm around the start boxes, leaving 1.2 cm free around the cursors. Their starting y-positions were uniformly distributed within ±2 cm around the start boxes. The distractors started moving at a time sampled from each participant's reaction time distribution and moved in the y-direction following a minimum jerk profile (Flash & Hogan, 1985; Todorov & Jordan, 1998), with durations sampled from each participant's movement time distribution (both distributions were assessed in the training block and adjusted when necessary throughout the experiment). The minimum jerk profile was chosen to closely mimic the smooth movement paths of the real cursors. Assuming a start and end velocity of zero, the trajectory is fully determined by the traveled distance and movement time through a fifth-order polynomial function. To enable participants to distinguish the real cursors from the distractors at the beginning of a trial, the former were highlighted red before the task was cued. The number of distractors was always equal for both hands, and they were displayed in the respective visual hemifields.
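The distractor kinematics can be sketched in a few lines. The following is an illustrative implementation (function names are ours, not from the study) of the commonly used closed form of the minimum-jerk profile (Flash & Hogan, 1985), which additionally assumes zero acceleration at both endpoints:

```python
# Illustrative sketch (not the study's code) of a minimum-jerk trajectory:
# with zero velocity and acceleration at both endpoints, position is a
# fifth-order polynomial in normalized time (Flash & Hogan, 1985).

def min_jerk_position(t: float, distance: float, duration: float) -> float:
    """Traveled distance along the movement axis at time t (0 <= t <= duration)."""
    tau = t / duration  # normalized time in [0, 1]
    return distance * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def min_jerk_velocity(t: float, distance: float, duration: float) -> float:
    """Instantaneous speed; zero at start and end, peaking at the midpoint."""
    tau = t / duration
    return (distance / duration) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
```

Sampling a traveled distance (e.g., the 20-cm reach) over a participant-matched duration reproduces the smooth, bell-shaped speed profile of natural reaches.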
During the reaching movements we displaced either a visual cursor or a distractor ±2 cm in the lateral direction. Displacements were ramped over a 50-ms interval and started when both hands had traveled on average 5 cm in the forward direction. Participants were informed about the occurrence of displacements before the experiment started. 
Procedure
The experiment consisted of a matched reaching and perception task for both groups. On reaching trials, participants first moved the cursors into the start boxes, gently aided by a pushback force, while maintaining eye fixation. After 350 ms, the targets appeared at a 20-cm distance from the start positions. Participants were instructed to perform fast and accurate reaching movements toward the targets when the cursors changed in color from red to white, which occurred 800 ms after the targets appeared. The trial ended when the average hand velocity dropped below 3.5 cm/s for at least 40 ms. A trial was considered valid if eye fixation was maintained, movement duration was shorter than 800 ms, and maximum velocity ranged between 50 and 80 cm/s. Valid trials with endpoint errors of less than 7 mm were rewarded with one point per hit target, an animated “explosion,” and a pleasant tone. A running score was displayed at the top of the screen. Feedback about trial performance (accuracy, velocity, and eye fixation) was given via a color scheme at the end of each trial. Invalid trials constituted on average 10% of all executed trials and were repeated by randomly intermixing them into the remaining trials of the current experimental block. 
In half of the reaching trials a force channel restricted movements, guiding the hands on a straight path to the targets (Franklin & Wolpert, 2008; Scheidt, Reinkensmeyer, Conditt, Rymer, & Mussa-Ivaldi, 2000). The force channel was implemented with a spring-like force of 7000 N/m applied in the lateral direction. The force with which participants pressed into the channel provided a more sensitive assay of the feedback-triggered responses than did position data from unconstrained trials (cf. Supplementary Appendix S4). To enable participants to reach the target in channel trials, the cursor was first displaced outward and then always back after 350 ms independent of the feedback response. On nonchannel trials, the cursor displacements remained, requiring participants to correct for the perturbations. We refrained from using force channel trials during training blocks to prevent a possible attenuation of the feedback response (Franklin & Wolpert, 2008). 
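The trial dynamics above can be sketched as follows. This is a minimal illustration under our own naming, combining the 7000 N/m spring-like channel with the 50-ms displacement ramp; the paper does not specify the ramp shape, so a linear ramp is assumed here:

```python
# Minimal sketch (illustrative names, linear ramp assumed) of the trial
# dynamics: a spring-like force channel and a displacement ramped over 50 ms.

CHANNEL_STIFFNESS = 7000.0  # N/m, lateral spring constant of the force channel

def channel_force(lateral_deviation_m: float) -> float:
    """Restoring force (N) pushing the hand back onto the straight path."""
    return -CHANNEL_STIFFNESS * lateral_deviation_m

def ramped_displacement(t_since_onset_s: float,
                        size_m: float = 0.02,
                        ramp_s: float = 0.05) -> float:
    """Cursor displacement, ramped to its full 2-cm size over 50 ms."""
    return size_m * min(max(t_since_onset_s, 0.0) / ramp_s, 1.0)
```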
Group 1 performed the perception task based on exafferent visual information. Participants passively watched a visual scene matched to the reaching task, with the “cursors” animated with participant-matched velocity profiles similar to those of the distractors. After each trial, participants indicated on a keyboard whether the left or right cursor had been displaced to either the left or the right or whether no cursor had been displaced (five-alternative forced-choice task). As in the reaching task, the cursors were highlighted in red at the beginning of the trial and turned white briefly before motion onset. A correct response was rewarded with a point for the running score.
Group 2 executed the reaching and perception tasks in parallel such that they could utilize the same reafferent information from the cursor in both tasks. The execution of the reaching part was the same as that in group 1. After finishing the movements to the two targets, participants moved either the left or the right manipulandum into one of five circles in the middle of the screen, which corresponded to the five response options for the perception task. The correct response in the perception task was rewarded with two points in order to match the two tasks in behavioral importance.
All participants in group 1 first completed a reaching session of about 2 hr and, on another day, a perception session of about 1 hr. After one training block in each session, they carried out 16 blocks of 64 trials each for the reaching task and eight blocks for the perception task. Two reaching blocks consisted of the fully randomized permutation of all experimental conditions: movement (upward, downward) × channel (yes, no) × displacements (32). Without distractors, there were five displacement conditions (either hand's cursor to the left or right or none). With distractors (one, two, or four per hand or hemifield), there were nine displacement conditions (either hand's cursor to the left or right, either side's distractor to the left or right, or none). Because the perception condition did not require any channel trials, only half as many blocks were needed. Group 2 completed a single experimental session with 16 experimental blocks.
Statistical analysis
As invalid trials were repeated within each block, we obtained eight repetitions for each condition and participant. For the reaching data, all position and force traces were aligned temporally to the onset of the visual perturbation or, for unperturbed trials, the time when the perturbation would have occurred. 
We assessed the corrective reaching responses by measuring the lateral forces exerted into the channels (perpendicular to the reaching direction). To remove any constant force profiles caused by the biomechanical properties of the arm and robot, we subtracted the mean force trace of unperturbed trials of each hand and condition (separately for each number of distractors and for upward and downward movements). A measure for the response strength was obtained by averaging the forces in the time window after response onset (from 200 to 350 ms after perturbation onset). For further analyses, we pooled the data over the conditions of no interest, namely upward and downward movements, right and left hand, and—after mirroring the responses to rightward displacements—rightward and leftward displacements. This yielded 64 trials for each condition of interest. A supplementary analysis confirmed that the results were consistent across reaching hand and movement direction (Supplementary Appendix S2), which is also apparent from Supplementary Figure S1, in which the trajectory data are plotted split by these conditions. 
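The response-strength measure amounts to a point-wise baseline subtraction followed by a window average. A stdlib-only sketch (variable names ours); at the 200-Hz force sampling rate, one sample spans 5 ms:

```python
# Sketch of the response-strength measure: subtract the mean unperturbed
# force trace (point-wise), then average over the 200-350 ms window after
# perturbation onset. Traces are lists of lateral forces sampled at 200 Hz.

SAMPLE_MS = 5  # 200-Hz sampling: one force sample every 5 ms

def baseline_corrected(trace, unperturbed_traces):
    """Subtract the mean unperturbed trace from a perturbed-trial trace."""
    n = len(unperturbed_traces)
    baseline = [sum(samples) / n for samples in zip(*unperturbed_traces)]
    return [f - b for f, b in zip(trace, baseline)]

def response_strength(corrected_trace, start_ms=200, end_ms=350):
    """Mean corrected force in the analysis window after perturbation onset."""
    window = corrected_trace[start_ms // SAMPLE_MS : end_ms // SAMPLE_MS]
    return sum(window) / len(window)
```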
To compare the sensitivity of the visuomotor responses directly with the sensitivity of the perceptual judgments, we transformed the force measures into a sensitivity index d′ (Macmillan & Creelman, 2004) by determining how well an unbiased, ideal observer would be able to distinguish cursor displacements from unperturbed trials based purely on the hand force. For this we picked a criterion that yielded equal proportions of misses and false alarms (Figure 2a), and the resulting classified data were then used analogously to the perceptual discrimination task to calculate d′ (Figure 2b). A parallel calculation was performed for the distractor displacements. Note that in this case, “hits” actually constituted erroneous responses—that is, distractors were mistakenly processed as cursors. For group 2, the d′ of the perceptual judgments was based only on data from nonchannel trials, as in channel trials the cursor jumped both out and back, providing more information than available for the initial motor response. 
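The ideal-observer conversion can be sketched with the standard library alone (`statistics.NormalDist.inv_cdf` provides the z-transform). The grid search over observed force values and the rate clamping are our own illustrative choices, not taken from the study:

```python
# Sketch of converting force measures to d' (Macmillan & Creelman, 2004).
# A criterion is chosen so that the miss rate (perturbed trials classified
# as unperturbed) matches the false-alarm rate; d' = z(hits) - z(false alarms).
from statistics import NormalDist

def force_dprime(perturbed_forces, unperturbed_forces):
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    def rates(criterion):
        miss = sum(f <= criterion for f in perturbed_forces) / len(perturbed_forces)
        fa = sum(f > criterion for f in unperturbed_forces) / len(unperturbed_forces)
        return miss, fa

    # Pick the observed force value that best equates misses and false alarms.
    criterion = min(sorted(perturbed_forces + unperturbed_forces),
                    key=lambda c: abs(rates(c)[0] - rates(c)[1]))
    miss, fa = rates(criterion)
    # Clamp rates away from 0 and 1 so the z-transform stays finite.
    eps = 0.5 / max(len(perturbed_forces), len(unperturbed_forces))
    hit = min(max(1.0 - miss, eps), 1.0 - eps)
    fa = min(max(fa, eps), 1.0 - eps)
    return z(hit) - z(fa)
```

Well-separated force distributions yield a large positive d′; identical distributions yield a d′ near zero, mirroring chance-level perceptual discrimination.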
Figure 2
 
Construction of the sensitivity index d′. (a) Schematic illustration for obtaining the criterion for response classification in the reaching task. For displacements of the left and right cursor or distractor, only force data from the left and right hand, respectively, were used. Force data for rightward displacements were mirrored such that a correct response was positive. To obtain the response distributions, data were pooled across hand, displacement directions, and movement direction. (b) Confusion matrix for classification of hits (H), false alarms (FA), misses (M), and correct rejections (CR) for both tasks. Hits and misses were specific for cursor (Hc, Mc) and distractor (Hd, Md) conditions, but false alarms and correct rejections were shared between them. Note that reports on the opposite hand were not included because the responses were always side specific in the reaching task.
We performed a complementary analysis of the free reaching trials using the velocity perpendicular to the reaching direction as a proxy for the corrective reaching response. Because the corrections became visible slightly later in these data (cf. Figure 3), the time window over which we averaged the data was set to 250 to 400 ms after perturbation onset. Otherwise, calculation of d′ and statistical analyses were performed analogous to the force data.
Figure 3
 
Average responses to visual perturbations in the reaching task aligned to the time of perturbation onset (t = 0 ms). Shaded areas denote 1 SEM. The dashed lines indicate the time window over which the forces or velocities were averaged to obtain the mean force or velocity, respectively, per condition. Note that the reported number of distractors is per hand or hemifield, which was equal for both sides. Corresponding detailed figures split by reaching hand (left, right) and movement direction (upward, downward) are available in Supplementary Figure S1.
For statistical assessment, we used repeated measures analysis of variance (ANOVA) and preplanned two-tailed t tests between conditions (paired where applicable) or groups. Corrections for multiple comparisons were performed using Bonferroni corrections where necessary. P values smaller than 0.05 were reported as significant. 
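For instance, the α = 0.013 threshold quoted in the Results corresponds to a Bonferroni correction over four preplanned comparisons (0.05/4 ≈ 0.013). A minimal sketch (function name ours):

```python
# Bonferroni correction: with m preplanned comparisons, each p value is
# compared against alpha / m instead of alpha.

def bonferroni(p_values, alpha=0.05):
    """Return the corrected threshold and a significance flag per p value."""
    threshold = alpha / len(p_values)
    return threshold, [p < threshold for p in p_values]
```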
Results
Reaching
All participants showed rapid movement corrections counteracting the cursor displacements (Figure 3). These corrections started around 150 ms after the displacements, as evident in the force trajectories (Figure 3a, c), and their strength decreased with an increasing number of distractors. To compare the motor responses directly with perceptual judgments, we converted the average force in the time interval 200 to 350 ms to a sensitivity index d′ (see Method). The two measures were highly correlated (r = 0.803 for group 1, r = 0.770 for group 2), with the small differences between force and d′ being accounted for by the fact that sensitivity also depends on the variability of corrections and unperturbed baseline trials. Subsequently, unless stated otherwise, we refer only to the sensitivity index d′ from the channel trials when reporting visuomotor responses.
The capability of the motor processing stream for filtering reafferent visual information served as a baseline for comparison with the perceptual performance. Thus, we assessed the sensitivity of visuomotor responses in an increasingly cluttered visual scene. The introduction of distractors led to a significant decline of the online corrections to cursor displacements in both groups (Figure 4, bold black lines)—one-way ANOVA on number of distractors: main effect group 1, F(3, 27) = 28.430, p < 0.001, Cohen's d = 2.612 from zero to four distractors; main effect group 2, F(3, 33) = 50.577, p < 0.001, Cohen's d = 3.353 from zero to four distractors. However, even with four distractors per hand, clear feedback corrections were observed—t test versus zero: group 1, t(9) = 7.837, p < 0.001; group 2, t(11) = 11.956, p < 0.001; each also significant after correcting for four comparisons: α = 0.013. 
Figure 4
 
Visual sensitivities in reaching and perception tasks of (a) group 1 and (b) group 2. The sensitivities to displacements of cursors (bold lines) and distractors (thin lines) are plotted as a function of the number of distractors per hand or hemifield. The responses to cursor displacements decreased with increasing number of distractors, whereas the small responses to distractor displacements were independent of the number of distractors. In both groups, the sensitivity of perceptual judgments declined more rapidly than the sensitivity of visuomotor responses. With the maximum number of distractors, the visuomotor responses were still noticeable, whereas perceptual judgments were not distinguishable from chance level. However, the sensitivity of the perceptual judgments decreased less when the judgments were based on reafferent information (group 2; panel b) compared with exafferent information (group 1; panel a). Error bars denote 1 SEM across participants.
The reaching responses of both groups were comparable in terms of response magnitude d′—two-way ANOVA on group × number of distractors: main effect group, F(1, 20) = 0.943, p = 0.343; interaction, F(3, 60) = 1.657, p = 0.186; and response onset (see Supplementary Appendix S2 for details). These results indicate that executing the dual task did not alter visuomotor processing. Supplementary analyses (see Supplementary Appendix S2) that include the factors of no interest (reaching hand and movement direction) support these findings by demonstrating coherent responses for both groups across these conditions.
When distractors were displaced, the visuomotor system also reacted with small erroneous corrections (Figure 4, thin black lines)—t test versus zero on sensitivity pooled over number of distractors: group 1, t(9) = 2.647, p = 0.027; group 2, t(11) = 5.738, p < 0.001; independently of the number of distractors (one-way ANOVA on number of distractors): main effect group 1, F(2, 18) = 1.539, p = 0.242; main effect group 2, F(2, 22) = 2.038, p = 0.154. Such influence of distracting information on online control of movement is consistent with previous reports (Diedrichsen et al., 2004; Saijo, Murakami, Nishida, & Gomi, 2005). Note, however, that these authors displaced target-like objects or the visual background, eliciting movement corrections in the direction of visual motion. Here, the direction of the response was opposite to the displacement, indicating that the sudden movement was indeed mistaken as a movement of a cursor.
These erroneous responses were always smaller than the responses to cursor displacements—all t test cursor versus distractor displacements with the same number of distractors: group 1, t(9) > 6.268, p < 0.001; group 2, t(11) > 8.487, p < 0.001; significance threshold after correcting for three comparisons: α = 0.017. This indicates that visuomotor responses were able to distinguish between cursors and distractors, even when a total of eight distractors (four per hand or hemifield) were present. 
The corresponding results for the free reaching trials (using the velocity data) were qualitatively comparable. Notably, the sensitivities based on the lateral velocities were highly correlated to the sensitivities based on the force data but significantly lower (see Supplementary Appendix S4). However, we observed a similar decline in reaching responses with increasing number of distractors but still clearly detectable responses with four distractors per hemifield (see Supplementary Appendix S3). 
We demonstrated that the sensitivity of filtering reafferent hand feedback for visuomotor responses was affected by the complexity of the visual scene. We further established that in our maximally cluttered scene, which included 10 moving objects (two cursors and eight distractors), the motor processing stream could filter the visual information sufficiently to differentiate between cursor and distractor displacements. 
Perceptual judgments on exafferent information (group 1)
The perception task of group 1 was designed to test how the sensitivity of perceptual judgments based on exafferent visual information compares with the sensitivity of visuomotor responses. All participants discriminated the cursor displacement (i.e., the displacement of the initially highlighted object) very well when no distractors were present (Figure 4a, bold gray line). Indeed, without distractors, the sensitivity of perceptual judgments clearly outperformed the sensitivity of visuomotor responses—t test, t(9) = 8.814, p < 0.001; significance threshold after correction for four comparisons: α = 0.013. A likely cause for this difference was the very different time constraints of the two tasks (Cardoso-Leite & Gorea, 2010). For the perceptual judgments, participants could exploit sensory information from the whole trial, whereas the motor system needed to quickly decide on movement corrections based on initial visual information only. Furthermore, the responses in the two tasks were qualitatively different in terms of the readout of the internal estimate. The answers in the perception task can be assumed to accurately reflect the decision reached. In contrast, the motor responses were always corrupted through a noisy output system.
The introduction of distractors significantly decreased the sensitivity of perceptual judgments—one-way ANOVA on number of distractors, main effect: F(3, 27) = 112.196, p < 0.001, Cohen's d = 5.820 from zero to four distractors. This decrease was more pronounced than the decrease in sensitivity for visuomotor responses (Figure 4a, bold lines)—two-way ANOVA on task (reaching, perception) × number of distractors; interaction: F(3, 27) = 41.001, p < 0.001. When four distractors per hemifield were presented, participants were not able to report the cursor displacements—t test versus zero, t(9) = 0.197, p = 0.849—giving perceptual judgments a significantly lower sensitivity than visuomotor responses—t test, t(9) = 5.949, p < 0.001; significance threshold after correction for four comparisons: α = 0.013. The corresponding results for the velocity data corroborate these findings, being qualitatively identical except for an additional main effect of task due to the lower sensitivity of this measure (see Supplementary Appendix S3). To conclude, visuomotor responses are more robust to the deteriorating effect of visual scene complexity than perceptual judgments. The motor processing stream clearly filtered the information better than the perceptual processing stream in the maximally cluttered environment.
Perceptual judgments on reafferent information (group 2)
The perception task of group 2 allowed us to test whether the previously found differences in sensitivity were due to the reafferent versus exafferent nature of the visual input or to the processing of the visual input via different mechanisms (i.e., visuomotor vs. perceptual processing stream). Perceptual judgments clearly benefited from the reafferent visual information in the more cluttered conditions (Figure 4, bold gray lines)—two-way ANOVA on group × number of distractors; interaction: F(3, 60) = 11.053, p < 0.001, Cohen's d = 0.629 for four distractors. However, participants of group 2 were still no better than chance at reporting the cursor displacements with four distractors per hand or hemifield—t test versus zero, t(11) = 1.619, p = 0.134.
While these results indicate that the privileged processing of reafferent information also benefitted the perceptual processing stream, a key question is whether perceptual judgments were as selective as visuomotor responses performed on the same visual information. The direct comparisons between tasks indeed suggest that this was the case: none of the tests for each level of distractors reached significance—t tests perceptual judgments versus visuomotor responses, all t(11) < 0.91, p > 0.38. Yet the sensitivity for perceptual judgments based on reafferent information still decreased more steeply than that of the visuomotor responses (Figure 4b, bold lines)—two-way ANOVA on task (reaching, perception) × number of distractors; interaction: F(3, 33) = 4.103, p = 0.014. This interaction suggests that perceptual judgments were more strongly influenced by a cluttered, distracting environment, even when the objects to track were defined by reafferent information. The corresponding results for the free reaching trials (see Supplementary Appendix S3) substantiate these findings. In particular, the visuomotor responses in the free reaching trials and the perceptual judgments in group 2 were based on the same set of trials (i.e., on the very same visual information), and we still found a stronger influence of distractors on the perception task than on the reaching responses. 
Discussion
We tested the ability of human volunteers to detect sudden visual displacements of moving objects in an increasingly complex visual scene and compared this with their ability to utilize the same information for fast online movement corrections. We found that visuomotor responses were more immune to increasing the number of distractors than were perceptual judgments—that is, the sensitivity of the latter decreased more steeply as evidenced by the significant interactions between task and number of distractors. This was the case both for group 1, which performed the perceptual task based on exafferent information (without concurrent movements), and for group 2, which performed the perceptual task on reafferent visual information (together with movements). However, we found a clear advantage of using reafferent (group 2) over exafferent (group 1) information for perceptual judgments when dealing with increasing visual complexity. 
The higher sensitivity of perceptual judgments over visuomotor responses when no distractors were present can be explained by at least two differences between tasks. First, perceptual judgments can be made based on visual information integrated over the whole trial, whereas the motor system works under very tight temporal constraints (Cardoso-Leite & Gorea, 2010). Indeed, the very first motor responses are already detectable 150 ms after the visual perturbation (cf. Figure 3a, c). Second, the noise level associated with the responses is qualitatively different in the two tasks. Perceptual judgments can be assumed to perfectly reflect the internal decision, as evidenced by the nearly immaculate score in the easy perception task without distractors. The responses of the motor system, in contrast, are always disturbed by central (Churchland, Afshar, & Shenoy, 2006; Churchland, Yu, Ryu, Santhanam, & Shenoy, 2006) and peripheral (Faisal, Selen, & Wolpert, 2008; Jones, Hamilton, & Wolpert, 2002; van Beers, Haggard, & Wolpert, 2004) motor noise. Importantly, however, perceptual performance deteriorated more rapidly with the introduction of distractors in both groups. This differential sensitivity decrement across tasks with increasing complexity of the visual scene can be explained only by different visual processing mechanisms, because the temporal constraints and the readout noise remained constant across distractor conditions. Notably, in our maximally cluttered scene with eight distractors in total, visuomotor responses to cursor displacements were still more vigorous than responses to distractor displacements. Participants, however, ceased to be able to accurately report these cursor displacements in the perceptual judgment task. When the perceptual judgments were based on reafferent information (group 2), the perceptual performance improved for complex scenes, but the influence of the distractors was still more pronounced than in the motor task executed in parallel. 
The remaining advantage in the sensitivity of the visuomotor responses is striking considering that the perceptual judgments not only had access to the very same information but also had more time and potentially additional input from the motor response. This finding supports the view that the motor system can utilize visual information that does not reach our awareness and is therefore unavailable for perceptual judgments (Milner, 2012). The hypothesis of a perception–action dissociation (Goodale & Milner, 1992) has so far been supported only by studies looking at processing of exafferent information (i.e., the goal of a movement and, infrequently, obstacles; for recent reviews see Cardoso-Leite & Gorea, 2010; Milner, 2012; Schenk & McIntosh, 2010). The only exception directly demonstrating a dissociation for reafferent information is a study on a patient with visual extinction, in which the patient could use visual hand feedback to increase reaching accuracy independently of consciously perceiving this visual information (Schenk, Schindler, McIntosh, & Milner, 2005). Indirect evidence for such a dissociation can also be derived from studies that use saccadic suppression to render the cursor displacement imperceptible and still demonstrate corrective motor responses (Sarlegna et al., 2003, 2004). Furthermore, findings of a temporal dissociation between automatic visuomotor and voluntary movement corrections to cursor displacements (Franklin & Wolpert, 2008; Telgen, Parvin, & Diedrichsen, 2014) also indirectly suggest distinct visual processes for automatic action and for voluntary motor processes driven by conscious perception. The latter parallels findings of a similar temporal dissociation for exafferent visual information (i.e., responses to target displacements; Day & Lyon, 2000; Pisella et al., 2000). 
However, given that processing of target and hand information differs in terms of the required attentional resources (Reichenbach et al., 2014), it is important to investigate the action–perception dissociation in the context of the processing of reafferent information as well. We demonstrate here that vision for action is better able to filter out two visual objects in parallel from a field of eight distracting objects than vision for perception. This advantage in filtering is substantially smaller when the perceptual judgments are based on reafferent information, but the processing difference remains significant. 
An alternative explanation for the worse filtering in the perceptual judgments of group 2 is that this perception task might have suffered dual-task costs. The reaching task did not seem to be affected by the parallel execution of the perceptual judgments, as apparent from the detailed comparison of magnitude and onset of the motor responses between experimental groups (cf. Supplementary Appendix S2). The perceptual task, however, could have been treated as secondary to the reaching task, or could be less automatic and therefore disturbed by the additional motor processing. This remains an open question to be tested. However, the very existence of dual-task costs for only one of the processes would also support the dissociation of visual processing for these two tasks. 
The finding that perceptual judgments were better on reafferent compared with exafferent information (comparison between groups) might be explained in two ways. First, an internal model informed by an efference copy (Desmurget & Grafton, 2000; Wolpert & Ghahramani, 2000; Wolpert, Ghahramani, & Jordan, 1995) may have aided tracking the cursors before the displacement (Viswanathan, Fritz, & Grafton, 2012) and improved processing of the cursor motion. Therefore, visuomotor binding (Reichenbach et al., 2014) may have acted on early visual areas in a similar fashion to attention (Fries, Reynolds, Rorie, & Desimone, 2001; Moran & Desimone, 1985; Niebur, Hsiao, & Johnson, 2002; Reynolds, Pasternak, & Desimone, 2000). An outstanding question is how the visual information is enhanced by visuomotor binding, and how this enhancement differs from and overlaps with visual attention. Group 1 probably accomplished tracking the cursors by directing covert visual attention to them, whereas for group 2 the locus of covert attention was more likely directed toward the targets of the reach because this behavior would benefit the reaching task (Reichenbach et al., 2014). Second, since the motor response to the cursor displacement preceded the perceptual decision, the corrective motor command could have produced sensory inflow and motor efferences that the perceptual system might have used. However, previous studies demonstrated that movement corrections to an unperceived visual target perturbation do not inform subsequent perceptual discrimination (Goodale, Pelisson, & Prablanc, 1986; Gritsenko, Yakovenko, & Kalaska, 2009), rendering this option less likely. Thus, processing of visual reafferent information was most likely boosted by visuomotor binding in early visual areas. 
In sum, the specialized processing of reafferent visual information appears to enhance early visual processes that feed into both the action and perception streams. The visuomotor system then immediately acts on the information, whereas the input to the perception system might depend on recurring processes that quickly fade if they are not actively maintained (Baddeley, 1992; Goldman-Rakic, 1995; Haxby, Petit, Ungerleider, & Courtney, 2000; Tallon-Baudry, Bertrand, & Fischer, 2001). The maintenance of visual information for conscious perception might indeed be a costly process that is easily disturbed (e.g., by the parallel execution of the reaching task). This model would explain why perceptual judgments based on reafferent input are better than those based on exafferent input: early processing is enhanced. Furthermore, it would also explain why perceptual judgments based on reafferent input are worse than the visuomotor responses: the information decays over time, a decay potentially aggravated by the cost of the motor task executed in parallel. 
Conclusions
The current study set out to investigate whether the superior processing capacity for reafferent over exafferent visual information is specific to the motor processing stream or whether this distinction for visual input is also evident in the perceptual processing stream. The results indicate that perceptual judgments indeed benefit from specialized processing of reafferent information, suggesting that visuomotor binding acts on early visual areas and thereby promotes processing in both streams. Visuomotor feedback responses, however, are even more robust to cluttered visual scenes, suggesting some segregated processing in the perception and action streams. 
Acknowledgments
The authors thank Nobuhiro Hagura for insightful comments on an earlier version of this article. This research was funded by the Biotechnology and Biological Science Research Council (BB/ J009458/1) and a postdoctoral fellowship of the Deutsche Forschungsgemeinschaft (RE 3265/1-1) to A. R. 
Commercial relationships: none. 
Corresponding author: Alexandra Reichenbach. 
Email: a.reichenbach@ucl.ac.uk. 
Address: Motor Control Group, Institute of Cognitive Neuroscience, University College London, London, UK. 
References
Baddeley A. (1992). Working memory. Science, 255 (5044), 556–559.
Bays P. M., Flanagan J. R., Wolpert D. M. (2006). Attenuation of self-generated tactile sensations is predictive, not postdictive. PLoS Biology, 4 (2), e28.
Bisley J. W., Goldberg M. E. (2010). Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience, 33, 1–21.
Blakemore S. J., Wolpert D., Frith C. (2000). Why can't you tickle yourself? Neuroreport, 11 (11), R11–R16.
Bundesen C. (1990). A theory of visual attention. Psychological Review, 97 (4), 523–547.
Cardoso-Leite P., Gorea A. (2010). On the perceptual/motor dissociation: A review of concepts, theory, experimental paradigms and data interpretations. Seeing Perceiving, 23 (2), 89–151.
Christensen A., Ilg W., Giese M. A. (2011). Spatiotemporal tuning of the facilitation of biological motion perception by concurrent motor execution. Journal of Neuroscience, 31 (9), 3493–3499.
Churchland M. M., Afshar A., Shenoy K. V. (2006). A central source of movement variability. Neuron, 52 (6), 1085–1096.
Churchland M. M., Yu B. M., Ryu S. I., Santhanam G., Shenoy K. V. (2006). Neural variability in premotor cortex provides a signature of motor preparation. Journal of Neuroscience, 26 (14), 3697–3712.
Cullen K. E. (2012). The vestibular system: Multimodal integration and encoding of self-motion for motor control. Trends in Neuroscience, 35 (3), 185–196.
Cullen K. E., Belton T., McCrea R. A. (1991). A non-visual mechanism for voluntary cancellation of the vestibulo-ocular reflex. Experimental Brain Research, 83 (2), 237–252.
Day B. L., Lyon I. N. (2000). Voluntary modification of automatic arm movements evoked by motion of a visual target. Experimental Brain Research, 130 (2), 159–168.
Desmurget M., Grafton S. (2000). Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Science, 4 (11), 423–431.
Diedrichsen J., Nambisan R., Kennerley S. W., Ivry R. B. (2004). Independent on-line control of the two hands during bimanual reaching. European Journal of Neuroscience, 19 (6), 1643–1652.
Faisal A. A., Selen L. P., Wolpert D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9 (4), 292–303.
Flash T., Hogan N. (1985). The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience, 5 (7), 1688–1703.
Franklin D. W., Wolpert D. M. (2008). Specificity of reflex adaptation for task-relevant variability. Journal of Neuroscience, 28 (52), 14165–14175.
Franklin D. W., Wolpert D. M. (2011). Feedback modulation: A window into cortical function. Current Biology, 21 (22), R924–R926.
Fries P., Reynolds J. H., Rorie A. E., Desimone R. (2001). Modulation of oscillatory neuronal synchronization by selective visual attention. Science, 291 (5508), 1560–1563.
Goldman-Rakic P. S. (1995). Cellular basis of working memory. Neuron, 14 (3), 477–485.
Goodale M. A., Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neuroscience, 15 (1), 20–25.
Goodale M. A., Pelisson D., Prablanc C. (1986). Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature, 320 (6064), 748–750.
Gritsenko V., Yakovenko S., Kalaska J. F. (2009). Integration of predictive feedforward and sensory feedback signals for online control of visually guided movement. Journal of Neurophysiology, 102 (2), 914–930.
Haxby J. V., Petit L., Ungerleider L. G., Courtney S. M. (2000). Distinguishing the functional roles of multiple regions in distributed neural systems for visual working memory. Neuroimage, 11 (5 Pt. 1), 380–391.
Jones K. E., Hamilton A. F., Wolpert D. M. (2002). Sources of signal-dependent noise during isometric force production. Journal of Neurophysiology, 88 (3), 1533–1544.
Lally N., Frendo B., Diedrichsen J. (2011). Sensory cancellation of self-movement facilitates visual motion detection. Journal of Vision, 11 (14): 5, 1–9, doi:10.1167/11.14.5.
Macmillan N. A., Creelman C. D. (2004). Detection theory: A user's guide. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Milner A. D. (2012). Is visual processing in the dorsal stream accessible to consciousness? Proceedings of the Royal Society of London B: Biological Sciences, 279 (1737), 2289–2298.
Moran J., Desimone R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229 (4715), 782–784.
Neggers S. F., Bekkering H. (2000). Ocular gaze is anchored to the target of an ongoing pointing movement. Journal of Neurophysiology, 83 (2), 639–651.
Niebur E., Hsiao S. S., Johnson K. O. (2002). Synchrony: A neuronal mechanism for attentional selection? Current Opinion in Neurobiology, 12 (2), 190–194.
Oldfield R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Pisella L., Grea H., Tilikete C., Vighetto A., Desmurget M., Rode G., . . . Rossetti Y. (2000). An “automatic pilot” for the hand in human posterior parietal cortex: Toward reinterpreting optic ataxia. Nature Neuroscience, 3 (7), 729–736.
Posner M. I., Snyder C. R., Davidson B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology, 109 (2), 160–174.
Pruszynski J. A., Kurtzer I., Nashed J. Y., Omrani M., Brouwer B., Scott S. H. (2011). Primary motor cortex underlies multi-joint integration for fast feedback control. Nature, 478 (7369), 387–390.
Ptak R. (2012). The frontoparietal attention network of the human brain: Action, saliency, and a priority map of the environment. Neuroscientist, 18 (5), 502–515.
Reichenbach A., Franklin D. W., Zatka-Haas P., Diedrichsen J. (2014). A dedicated binding mechanism for the visual control of movement. Current Biology, 24 (7), 780–785.
Reynolds J. H., Pasternak T., Desimone R. (2000). Attention increases sensitivity of V4 neurons. Neuron, 26 (3), 703–714.
Saijo N., Murakami I., Nishida S., Gomi H. (2005). Large-field visual motion directly induces an involuntary rapid manual following response. Journal of Neuroscience, 25 (20), 4941–4951.
Sarlegna F., Blouin J., Bresciani J. P., Bourdin C., Vercher J. L., Gauthier G. M. (2003). Target and hand position information in the online control of goal-directed arm movements. Experimental Brain Research, 151 (4), 524–535.
Sarlegna F., Blouin J., Vercher J. L., Bresciani J. P., Bourdin C., Gauthier G. M. (2004). Online control of the direction of rapid reaching movements. Experimental Brain Research, 157 (4), 468–471.
Saunders J. A., Knill D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152 (3), 341–352.
Saunders J. A., Knill D. C. (2004). Visual feedback control of hand movements. Journal of Neuroscience, 24 (13), 3223–3234.
Scheidt R. A., Reinkensmeyer D. J., Conditt M. A., Rymer W. Z., Mussa-Ivaldi F. A. (2000). Persistence of motor adaptation during constrained, multi-joint, arm movements. Journal of Neurophysiology, 84 (2), 853–862.
Schenk T., McIntosh R. D. (2010). Do we have independent visual streams for perception and action? Cognitive Neuroscience, 1 (1), 52–62.
Schenk T., Schindler I., McIntosh R. D., Milner A. D. (2005). The use of visual feedback is independent of visual awareness: Evidence from visual extinction. Experimental Brain Research, 167 (1), 95–102.
Shergill S. S., Bays P. M., Frith C. D., Wolpert D. M. (2003). Two eyes for an eye: The neuroscience of force escalation. Science, 301 (5630), 187.
Sperry R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion. Journal of Comparative and Physiological Psychology, 43 (6), 482–489.
Tallon-Baudry C., Bertrand O., Fischer C. (2001). Oscillatory synchrony between human extrastriate areas during visual short-term memory maintenance. Journal of Neuroscience, 21 (20), RC177.
Telgen S., Parvin D., Diedrichsen J. (2014). Mirror reversal and visual rotation are learned and consolidated via separate mechanisms: Recalibrating or learning de novo? Journal of Neuroscience, 34 (41), 13768–13779.
Todorov E., Jordan M. I. (1998). Smoothness maximization along a predefined path accurately predicts the speed profiles of complex arm movements. Journal of Neurophysiology, 80 (2), 696–714.
van Beers R. J., Haggard P., Wolpert D. M. (2004). The role of execution noise in movement variability. Journal of Neurophysiology, 91 (2), 1050–1063.
Viswanathan S., Fritz C., Grafton S. T. (2012). Telling the right hand from the left hand: Multisensory integration, not motor imagery, solves the problem. Psychological Science, 23 (6), 598–607.
von Holst E., Mittelstaedt H. (1950). Das Reafferenzprinzip. Naturwissenschaften, 37 (20), 464–476.
Voss M., Ingram J. N., Haggard P., Wolpert D. M. (2006). Sensorimotor attenuation by central motor command signals in the absence of movement. Nature Neuroscience, 9 (1), 26–27.
Wolpert D. M., Ghahramani Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3 (Suppl.), 1212–1217.
Wolpert D. M., Ghahramani Z., Jordan M. I. (1995). An internal model for sensorimotor integration. Science, 269 (5232), 1880–1882.
Figure 1
 
Experimental method. (a) Virtual reality setup with two robotic manipulanda above which the visual display is mounted, and the eye tracker for fixation control. (b) Visual display for an example trial with two distractors per hand or hemifield. Note that either a cursor or a distractor was displaced during a trial. The red outlines of targets and cursors are rendered for illustration purposes only.
Figure 2
 
Construction of the sensitivity index d′. (a) Schematic illustration for obtaining the criterion for response classification in the reaching task. For displacements of the left and right cursor or distractor, only force data from the left and right hand, respectively, were used. Force data for rightward displacements were mirrored such that a correct response was positive. To obtain the response distributions, data were pooled across hand, displacement directions, and movement direction. (b) Confusion matrix for classification of hits (H), false alarms (FA), misses (M), and correct rejections (CR) for both tasks. Hits and misses were specific for cursor (Hc, Mc) and distractor (Hd, Md) conditions, but false alarms and correct rejections were shared between them. Note that reports on the opposite hand were not included because the responses were always side specific in the reaching task.
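The sensitivity index described in this caption follows standard signal detection theory (cf. Macmillan & Creelman, 2004): d′ = z(H) − z(FA), where z is the inverse of the standard normal cumulative distribution. A minimal Python sketch with placeholder rates, not the study's actual data:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(H) - z(FA), with z = inverse normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: high hit rate, low false-alarm rate -> high sensitivity
print(round(d_prime(0.95, 0.05), 2))  # -> 3.29

# Equal hit and false-alarm rates -> chance performance (d' = 0)
print(d_prime(0.5, 0.5))  # -> 0.0
```

In practice, hit and false-alarm rates of exactly 0 or 1 would make `norm.ppf` return infinite values, so a correction (e.g., the log-linear rule) is typically applied before computing d′.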
Figure 3
 
Average responses to visual perturbations in the reaching task aligned to the time of perturbation onset (t = 0 ms). Shaded areas denote 1 SEM. The dashed lines indicate the time window over which the forces or velocities were averaged to obtain the mean force or velocity, respectively, per condition. Note that the reported number of distractors is per hand or hemifield, which was equal for both sides. Corresponding detailed figures split by reaching hand (left, right) and movement direction (upward, downward) are available in Supplementary Figure S1.
Figure 4
 
Visual sensitivities in reaching and perception tasks of (a) group 1 and (b) group 2. The sensitivities to displacements of cursors (bold lines) and distractors (thin lines) are plotted as a function of the number of distractors per hand or hemifield. The responses to cursor displacements decreased with increasing number of distractors, whereas the small responses to distractor displacements were independent of the number of distractors. In both groups, the sensitivity of perceptual judgments declined more rapidly than the sensitivity of visuomotor responses. With the maximum number of distractors, the visuomotor responses were still noticeable, whereas perceptual judgments were not distinguishable from chance level. However, the sensitivity of the perceptual judgments decreased less when the judgments were based on reafferent information (group 2; panel b) compared with exafferent information (group 1; panel a). Error bars denote 1 SEM across participants.