Research Article  |   September 2010
Obstacle avoidance during online corrections
Author Affiliations
  • Craig S. Chapman
    Department of Psychology, University of Western Ontario, London, Ontario, Canada
    c.s.chapman@gmail.com
  • Melvyn A. Goodale
    Department of Psychology, University of Western Ontario, London, Ontario, Canada
    Neuroscience Program, University of Western Ontario, London, Ontario, Canada
    mgoodale@uwo.ca
Journal of Vision September 2010, Vol.10, 17. doi:10.1167/10.11.17
© ARVO (1962-2015); The Authors (2016-present)
Abstract

The dorsal visual stream codes information crucial to the planning and online control of target-directed reaching movements in dynamic and cluttered environments. Two specific dorsally mediated abilities are the avoidance of obstacles and the online correction for changes in target location. The current study was designed to test whether or not both of these abilities can be performed concurrently. Participants made reaches to touch a target that, on two-thirds of the trials, remained stationary and on the other third “jumped” at movement onset. Importantly, on target-jump trials, a single object (in one of four positions) sometimes became an obstacle that interfered with the reach. When a target jump caused an object to suddenly become an obstacle, we observed clear spatial avoidance behavior, an effect that was not present when the target jumped but the object did not become an obstacle. This automatic spatial avoidance was accompanied by significant velocity reductions only when the risk for collision with the obstacle was high, suggesting an “intelligent” encoding of potential obstacle locations. We believe that this provides strong evidence that the entire workspace is encoded during reach planning and that the representation of all objects in the workspace is available to the automatic online correction system.

Introduction
Reaching for an object in the real world is different from reaching movements studied in the typical laboratory experiment in two important respects. First, the positions of targets and other objects in the real world often change dynamically, and second, the workspace in the real world is often cluttered with many different objects. Although the effects of changes in target position and the effects of obstacles have been investigated separately in the laboratory, there have been few studies looking at them together (Aivar, Brenner, & Smeets, 2008; Liu & Todorov, 2007) and none where a real object becomes an obstacle while the hand is in flight. The goal of the current experiment, therefore, was to combine these two aspects of real-world reaching and examine how responses to changes in target position would be affected by a physical obstacle whose level of interference was contingent on the direction of the corrected movement. 
Research that has studied the effects of sudden changes in the environment on reaching movements has examined at least two different types of environmental perturbations—changing the target position and changing the “visual context” (by suddenly introducing other objects or visual stimuli into the workspace, see Gomi, 2008 for succinct review). In both cases, if the change occurs while the hand is in flight, it will often induce an automatic response (known as an online correction) toward the new target position (e.g., Brenner & Smeets, 1997; Day & Lyon, 2000; Soechting & Lacquaniti, 1983) or with respect to the change in the visual context (Brenner & Smeets, 1997; Gomi, Abekawa, & Nishida, 2006; Proteau & Masson, 1997; Saijo, Murakami, Nishida, & Gomi, 2005; Whitney, Westwood, & Goodale, 2003)—even if the changes occur without awareness (Goodale, Pelisson, & Prablanc, 1986; Pelisson, Prablanc, Goodale, & Jeannerod, 1986; Prablanc & Martin, 1992). Several elegant studies have shown that the visual information required to respond to changes in target position flows through the dorsal visual stream—from early visual areas to the posterior parietal cortex, which has reciprocal connections with premotor areas (Desmurget et al., 1999; Desmurget & Grafton, 2000; Desmurget et al., 2001; Grea et al., 2002; Pisella et al., 2000). Specifically, studies of patients with optic ataxia (whose dorsal stream is damaged) show that these individuals do not respond normally to a perturbation in target position (Pisella et al., 2000). Similarly, disrupting dorsal stream processing by applying transcranial magnetic stimulation (TMS) at the precise moment a reach is initiated and target position is perturbed selectively impairs the ability to correct the movement toward the new target location (Desmurget et al., 1999). 
It remains open to debate as to whether online corrections in response to changes in visual context are mediated by the same automatically engaged dorsal stream processes that control responses to changes in target position. While one prominent theory (Glover, 2004) argues that only visuomotor processes involved in planning a movement should have access to contextual information, other research (Aivar et al., 2008; Brenner & Smeets, 1997; Cameron, Franks, Enns, & Chua, 2007; Coello & Magne, 2000; Gomi et al., 2006; Saijo et al., 2005; Whitney et al., 2003) has demonstrated that online corrections can be influenced by visual context. Indeed, simply by adding contextual features while the hand is in flight, endpoint accuracy improves (Coello & Magne, 2000). Similarly, motion of background elements presented around the target while the hand is moving induces trajectory deviation in the direction of motion, although it is unclear whether the change in trajectory is due to a perceived shift in target position (Brenner & Smeets, 1997; Whitney et al., 2003) or to a reflexive response to retinal motion (Gomi et al., 2006; Saijo et al., 2005). In two recent studies, the effect of suddenly shifting the position of discrete non-target objects (rather than background texture) demonstrated that changes in the position of non-targets can affect reaches with a latency and magnitude that is similar to responses induced by changes in the target position (Aivar et al., 2008; Cameron et al., 2007). One aim of the current study was to contribute to the debate about the effects of visual context by specifically testing how the presence of a non-target object (which can be construed as contextual information) can affect adjustments to reaching movements that are made when the position of the target is suddenly perturbed. 
When reaching for an object in the presence of other non-target objects, the other objects can have a profound impact on the performed action. If the non-target objects are treated like potential targets (Chapman et al., 2010) or share critical features with the target (Chang & Abrams, 2004; Howard & Tipper, 1997; Keulen, Adam, Fischer, Kuipers, & Jolles, 2003; Sailer, Eggert, Ditterich, & Straube, 2002; Song & Nakayama, 2006; Tipper, Lortie, & Baylis, 1992; Tipper, Meegan, & Howard, 2002; Welsh & Elliott, 2004), they can act as competing or distracting stimuli and cause large deviations in the path of the hand. Depending on the timing, the task, and the location of non-target objects, these deviations can be made either toward the distracting stimuli or away from them. If the non-target objects act as obstacles that physically restrict the path of the hand, then they are always avoided (Chapman & Goodale, 2008, 2010; Mon-Williams, Tresilian, Coppard, & Carson, 2001; Tresilian, 1998, 1999) with the hand moving away from them and following a path that reduces the likelihood of collision (Hamilton & Wolpert, 2002; Liu & Todorov, 2007; Sabes & Jordan, 1997; Sabes, Jordan, & Wolpert, 1998). Again, work with optic ataxic patients strongly suggests that the dorsal visual stream controls the observed automatic avoidance of obstacles (Schindler et al., 2004), although our own recent work with a patient with damage to primary visual cortex (Striemer, Chapman, & Goodale, 2009) suggests that information about the location of the obstacle can reach dorsal stream areas via pathways outside of the geniculostriate pathway. 
Even though the dorsal stream has been implicated in both online corrections and obstacle avoidance, it remains an open question as to whether or not it is capable of performing both these functions simultaneously. One recent study in our laboratory (Chapman & Goodale, 2010) suggests that the avoidance of obstacles may be relatively unaffected by online control. In this study, we manipulated whether or not vision of the hand and environment was available while reaching. Although removing vision of the hand has been found to significantly reduce the degree to which online adjustments are made to the reach trajectory (Elliott, Binsted, & Heath, 1999; Elliott, Carson, Goodman, & Chua, 1991; Elliott, Helsen, & Chua, 2001; Heath, 2005; Heath, Westwood, & Binsted, 2004; Reichenbach, Thielscher, Peer, Bulthoff, & Bresciani, 2009; Sarlegna et al., 2003), removing vision of both the hand and the obstacles during movement execution did not affect the participants' ability to avoid the obstacles. In other words, providing vision during movement execution did not change the obstacle avoidance behavior—a result that suggests that, in our previous experiment (Chapman & Goodale, 2010), the encoding of the obstacle and the planned reach trajectory were not updated in flight. It is possible, however, that in our previous study the task did not require the motor plan to be altered during the action, and hence, we did not observe any online alterations. To properly test whether or not the representation and effect of an obstacle can be updated during a reach movement, the current study introduced a target perturbation that dynamically altered the degree to which an obstacle interfered with the reach. 
One recent study has examined the effects of perturbing the location of either the target or one or two virtual obstacles (placed between the start and target positions) during a rapid pointing movement with a stylus on a touch pad (Aivar et al., 2008). While this study was primarily interested in examining the response latency differences between target and obstacle changes, it elegantly demonstrated that changes in visual context occurring during rapid reach movements can alter trajectories, with participants showing a slightly shorter latency when responding to target changes as compared to obstacle changes. It is unclear, however, how well these results translate to a more natural reach setting involving real obstacles where the consequences of collision are literally tangible. In fact, in Aivar et al.'s (2008) study, the average incidence of “collision” (where the hand passed through the virtual obstacle) was close to 40% in some perturbation conditions. In our experience testing the avoidance of real obstacles, participants rarely (<1%) touch an obstacle (even when instructed to ignore it) and are quite alarmed when they do collide with it (Chapman & Goodale, 2008, 2010). As Aivar et al. (2008) suggest, the initial reach deviation they observe in response to the obstacle perturbation probably represents a response to moving visual context (i.e., a moving background, e.g., Whitney et al., 2003) and may not be related to an avoidance strategy (though a later second correction in some participants might). To build on their finding, the current study examined natural reach responses to a perturbation in target position in the presence of a three-dimensional object to examine real obstacle avoidance during online corrections. 
The overall goal of the current study, therefore, was to investigate what the effect of obstacles would be when they specifically interfered with a corrected movement. This necessitated that the effects of the obstacle be isolated to the portion of the reach occurring after the target was perturbed. That is, if an obstacle was shown to affect a reach prior to the online correction, then any avoidance we observed during the online correction would not be guaranteed to reflect an updated obstacle representation, but rather could merely reflect the correction of an already deviated reach (e.g., see Liu & Todorov, 2007 where they take advantage of planned deviations around obstacles in order to observe online corrections in longer duration movements). To overcome this problem, we capitalized on our earlier observation that obstacles placed at a depth beyond a target no longer affect reach trajectories (Chapman & Goodale, 2008). Participants therefore made reaches to an initial target position in the presence of a single obstacle whose position varied but, critically, remained behind the initial target. When the position of the target was rapidly changed (at reach onset on one-third of trials), it was moved both laterally (left or right) and further in depth. Therefore, an obstacle that had been beyond the initial position of the target could now be located between the hand and the new position of the target. We predicted that reaches would be affected by obstacles only when the target jumped to the side of space where the obstacle was positioned; for example, a reach correcting for a rightward jump of a target would be unaffected by an obstacle on the left. 
Methods
Participants
Twenty-one right-handed adults (handedness determined by self-report; 4 males; mean age 21.9 years, range 18 to 51) were included in this study. All participants had normal or corrected-to-normal vision and all were naive to the purpose of the experiment. The present study is part of ongoing research that has been approved by the local ethics committee. 
Materials and design
Participants sat in front of a dimly lit 1 m × 1 m table covered in black fabric with a laterally centered start button located 15 cm from the front edge of the table. Participants wore PLATO LCD goggles (Translucent Technologies, Toronto, Canada) to control visual feedback and had OPTOTRAK (Northern Digital, Waterloo, Canada) infrared markers (IREDs) taped to the tip and the base of their right index finger. During recording, the position of each IRED was tracked by two OPTOTRAK cameras at a rate of 100 Hz for 3 s. Marker wires were held in place with elastic wrist and elbow bands to allow for unrestricted arm movement. 
Tall rectangular objects (4 × 4 × 25 cm, with IREDs in the middle of the top facing surface) were placed in four different positions (2 depths × 2 sides of space): near-right (the center of the inside edge of the object was 10 cm right of the midline and 35 cm in depth from the start button), far-right (the center of the front edge of the object was 5 cm right of the midline, 40 cm from start button), near-left (the center of the inside edge of the object was 5 cm left of midline, 35 cm from start button), and far-left (the center of the front edge of the object was 10 cm left of midline, 40 cm from start button; see Figure 1). A fifth condition was included in which no objects were placed on the table. 
Figure 1
 
Experimental setup with target and object positions. Participants made reaches from the start button to the initial target (green circle); both were located on a 1 m × 1 m black fabric board. On 1/3 of trials, the target jumped in depth and to the left (blue circle) or right (red circle). When an object was present, it appeared in one of four positions (indicated by colored squares, size and position to scale). Movements were recorded using two OPTOTRAK cameras (one left, one in front) at 100 Hz.
Procedure
Each trial started with participants placing their right index finger on the start button. The goggles were closed, allowing the experimenter to place the objects without the participant seeing. The trial was triggered by the experimenter, causing the goggles to open and the OPTOTRAK to start recording. Participants were instructed to reach to the target (red LED placed 30 cm directly in front of the start position) quickly and accurately as soon as it became visible (i.e., when the goggles opened). They were told to ignore any objects that were on the table and that there could be one or no objects present on any given trial. On most trials (160/240), the target position remained unchanged. On some trials (80/240), however, the target would “jump” to a new location when the participants released the start button. Of the 80 target-jump trials, 40 were jumps to the left (10 cm to the left and 10 cm further in depth) and 40 were jumps to the right (10 cm to the right and 10 cm further in depth; see Figure 1). The 40 jump-left and 40 jump-right trials were evenly split across the five obstacle conditions, such that there were 8 repetitions of each obstacle condition and jump direction. All trials were completely randomized. Prior to the experiment, participants were given 24 practice trials where no objects were present. On 16 of the practice trials, the target did not jump; on 4 trials, there was a rightward jump; and on 4 trials, there was a leftward jump. 
Data processing
All analyses were conducted on data from the IRED on the tip of the right index finger. Raw 3D data for each trial were filtered using a low-pass Butterworth filter (dual pass, 10-Hz cutoff, 2nd order). Instantaneous velocities in each cardinal dimension (x, y, z) were calculated for each time point and the resulting velocity profiles were filtered (low-pass Butterworth filter, dual pass, 12-Hz cutoff, 2nd order) and combined to create a vector velocity (i.e., three-dimensional) profile for each trial. Reach onset was defined as the first of four consecutive vector velocity readings greater than 20 mm/s across which there was a total acceleration of 20 mm/s². Reaches were said to terminate when the first of two conditions was met: three consecutive displacement readings back toward the start button (i.e., three negative displacements in the y-direction) or the first time the velocity dropped below 20 mm/s. 
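The velocity and onset criteria above can be sketched in code. The following is a Python/NumPy illustration of the stated thresholds (the original pipeline was implemented in Matlab); the function and variable names are our own, and the treatment of the "total acceleration across the four points" criterion is one plausible reading:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def vector_velocity(pos, fs=100.0, cutoff=12.0):
    """Differentiate 3D positions (mm) sampled at fs Hz, low-pass filter
    each velocity dimension (2nd-order Butterworth, dual pass), and
    combine into a vector (3D) speed profile."""
    vel = np.gradient(pos, 1.0 / fs, axis=0)      # mm/s in x, y, z
    b, a = butter(2, cutoff / (fs / 2.0))         # normalized cutoff frequency
    vel = filtfilt(b, a, vel, axis=0)             # dual-pass (zero-phase) filtering
    return np.linalg.norm(vel, axis=1)

def reach_onset(speed, fs=100.0, v_thresh=20.0, a_thresh=20.0):
    """Return the index of the first of four consecutive speed samples
    above v_thresh (mm/s) whose net acceleration across the window
    exceeds a_thresh (mm/s^2), or None if no onset is found."""
    dt = 1.0 / fs
    for i in range(len(speed) - 3):
        window = speed[i:i + 4]
        if np.all(window > v_thresh) and (window[-1] - window[0]) / (3 * dt) >= a_thresh:
            return i
    return None
```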
Missing data from a fingertip IRED that was temporarily blocked from the view of the OPTOTRAK cameras due to the positioning of the objects were filled in with data from the finger-base IRED. This was accomplished by translating the base IRED data to the last known position of the tip IRED, using the base IRED data over the missing segment, then stretching (in all three dimensions) the endpoint of the filled sequence to match the position of the tip IRED when it reappeared. When both IREDs were missing, the data were interpolated using the inpaint_nans function (available online at http://www.mathworks.com/matlabcentral/fileexchange/4551) in Matlab. Interpolation was required on only 5 trials, spanning an average of fewer than 3 time points per trial. 
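The gap-filling step might look like the following sketch (Python; the original was Matlab). The linear "stretch" used to reconcile the filled segment with the reappearing tip marker is our reconstruction of the procedure, and all names are hypothetical:

```python
import numpy as np

def fill_gap_from_base(tip, base, gap_start, gap_end):
    """Fill a missing tip-marker segment [gap_start, gap_end) using the
    base marker: translate the base trace to the last known tip position,
    then distribute the residual mismatch at reappearance linearly
    ("stretching") across the filled segment."""
    tip = tip.copy().astype(float)
    offset = tip[gap_start - 1] - base[gap_start - 1]   # translate base to tip
    filled = base[gap_start:gap_end] + offset
    # mismatch between the translated base and the tip where it reappears
    residual = tip[gap_end] - (base[gap_end] + offset)
    n = gap_end - gap_start
    w = (np.arange(1, n + 1) / float(n + 1))[:, None]   # linear weights 0 -> 1
    tip[gap_start:gap_end] = filled + w * residual
    return tip
```

When the base marker tracks the tip rigidly, the residual is zero and the fill reduces to a pure translation.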
Trials were rejected for the following reasons: The reach was too short in either duration (<100 ms) or distance (<150 mm in depth), the obstacle was misplaced by the experimenter, or a collision with an object was detected (object moved by more than 5 cm). Under these criteria, <1% of the trials were rejected (for complete analysis of removed trials, see Supplementary material). 
All trajectories were translated such that the first reading of the index finger IRED was taken as the origin of the trajectory (i.e., 0, 0, 0 in 3D Cartesian space, x = horizontal, y = depth, z = vertical). Trials were then spatially normalized using functional data analysis techniques (Ramsay & Silverman, 2005) whereby B-splines were fit to each dimension of the raw data. This allowed us to extract the lateral (x) values from 200 points equally spaced across the reach distance (y; for details, see Chapman et al., 2010). 
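The spatial normalization can be illustrated with a spline-resampling sketch. The authors used functional data analysis tools in Matlab; the following Python/SciPy version conveys the idea of extracting lateral (x) values at points equally spaced along the reach distance (y), with names of our own choosing:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def spatially_normalize(x, y, n_points=200, k=3):
    """Fit a B-spline of lateral position (x) as a function of reach
    distance (y) and resample it at n_points equally spaced y values."""
    order = np.argsort(y)                          # the fit requires increasing y
    y_s, x_s = y[order], x[order]
    keep = np.concatenate([[True], np.diff(y_s) > 0])  # drop duplicate y samples
    spline = make_interp_spline(y_s[keep], x_s[keep], k=k)
    y_grid = np.linspace(y_s[keep][0], y_s[keep][-1], n_points)
    return y_grid, spline(y_grid)
```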
Results
We analyzed data from experimental trials only. To identify reaches to an incorrect target position (i.e., not reacting to a target jump), we performed a cluster analysis of each participant's reach endpoints (in the x and y dimensions) when they reached toward each of the three target positions (collapsed across all object positions) and removed any reaches with endpoints further than 5 cm from the mean of their largest cluster. Every participant's largest cluster of points was within 5 cm of the actual target position and less than 2% of trials were removed for having an incorrect endpoint. To account for trials where participants performed two discrete movements (rather than one continuous online correction), we also removed trials where there was a reacceleration in the y-dimension that exceeded 20% of the peak y-velocity. While some reacceleration was expected (given that targets jumped 10 cm in depth), we wanted to isolate true online corrective behavior; less than 2% of trials were removed for exceeding the reacceleration criterion (for complete analysis of removed trials, see Supplementary material). After trial removal, any participant with fewer than 4 repetitions of any condition (jump type × obstacle position) was excluded from analysis. Three participants were removed with the application of this criterion, leaving n = 18 for all statistical analyses. 
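The reacceleration criterion lends itself to a simple check. The sketch below flags a trial as a double movement when, after the main velocity peak, the y-velocity rises again by more than 20% of that peak; this is our reading of the criterion above, and the function name is hypothetical:

```python
import numpy as np

def is_double_movement(vy, frac=0.2):
    """Flag a trial whose y-velocity, after its main peak, rises again
    by more than frac * peak (a discrete secondary movement) rather
    than decelerating smoothly to the endpoint."""
    peak = np.argmax(vy)
    post = vy[peak:]
    trough = np.minimum.accumulate(post)   # lowest speed reached so far
    return bool(np.any(post - trough > frac * vy[peak]))
```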
Spatial trajectories
For each participant, the spatially normalized reach trajectories were averaged across each of the 15 experimental conditions (3 jump types × 5 object positions). We then conducted a set of planned repeated-measures functional ANOVAs (implemented in Matlab 7, using custom code adapted from http://www.psych.mcgill.ca/misc/fda/) to separately examine the effects of obstacles when participants reached to the initial target position and the effects of obstacles when they made a correction to a jumped target. The functional ANOVA compared the lateral (x) deviation at different reach distances (y) across the different conditions. This statistically sensitive technique, which extends a traditional univariate ANOVA to all points in a curve, allows a quantification of not only if, but also where and with what magnitude, the trajectories differed (Ramsay & Silverman, 2005, see Chapman et al., 2010 for recent use and details of this technique). Because we used a repeated-measures design in the functional ANOVAs, we applied a Greenhouse–Geisser correction for correlations across conditions at each time point. The obvious advantage of using functional versus discrete measures of movements is that a more complete description of the evolution of differences is available. However, this necessitates that each trajectory be fit mathematically, and, as a relatively new analysis technique, statistical conventions (i.e., appropriate alpha levels) have yet to be agreed upon. For this reason, we present the functional output corresponding to the range (p < 0.1 to p < 0.00001) of statistical significance across the movement (see significance bars and legends in Figures 2 and 3) to allow for a complete depiction of the pattern of differences. 
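The logic of the functional ANOVA can be conveyed with a simplified sketch: a one-way repeated-measures ANOVA run independently at every point along the normalized curve. This Python version omits the Greenhouse–Geisser sphericity correction and the B-spline machinery of the published analysis, and all names are our own:

```python
import numpy as np
from scipy import stats

def pointwise_rm_anova(data):
    """One-way repeated-measures ANOVA at every point along a curve.
    data: array (n_subjects, n_conditions, n_points), e.g. lateral (x)
    deviation at each normalized reach distance. Returns one p-value
    per point."""
    n, k, _ = data.shape
    grand = data.mean(axis=(0, 1))                 # per-point grand mean
    subj = data.mean(axis=1)                       # (n, p) subject means
    cond = data.mean(axis=0)                       # (k, p) condition means
    ss_cond = n * ((cond - grand) ** 2).sum(axis=0)
    ss_err = ((data - subj[:, None, :] - cond[None, :, :] + grand) ** 2).sum(axis=(0, 1))
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df1) / (ss_err / df2)
    return stats.f.sf(F, df1, df2)
```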
Figure 2
 
Overhead view (x, y) of average (average of 18 participants' individual averages) reach trajectories to the initial target (small red circle) with objects in each position (position and size not to scale). Trajectory traces are color coded to match the object positions (green = no objects, note: x-axis magnified 8×). Shaded area around trajectory traces represents average standard error across 18 participants. Gray significance bar to the left gives a measure of where there were statistical differences (magnitude of difference is proportional to intensity of gray—see p-value legend, note: p-values are Greenhouse–Geisser corrected) between trajectories in the lateral dimension. (Inset) Functional pair-wise comparisons between all possible pairs of trajectories arranged as a matrix. Each row (color inside the box) and column (border of box) corresponds to a different object position with the intersection being the comparison between those two trajectories. Within each intersection box, the position of the colored area corresponds to where along the reach distance (y) the trajectories differed in the lateral (x) dimension with the intensity of the color corresponding to the magnitude of the statistical difference (see p-value legend and exploded box to side, where the (red) near-left trajectory is being compared to the (green) no-object trajectory).
Figure 3
 
Overhead view (x, y) of average reach trajectories to the jumped target positions with objects on the (A) left or (B) right. Trajectory traces are color coded to match the object positions (green = no objects, object size not to scale). Shaded area around trajectory traces represents average standard error across 18 participants. The gray significance bars denote where there was an effect due to obstacles for a given jump direction and the green significance bars denote where the jump-left and jump-right trials were significantly different (magnitude of lateral deviation difference is proportional to intensity of color—see p-value legend—note that its location does not obscure any part of the significance bar in (B)). (Insets) Functional pair-wise comparisons between trajectories for (A) target jumps left and obstacles-left and (B) target jumps right and obstacles-right. Configuration of pair-wise comparison boxes is identical to Figure 2.
Reaches to initial target position
Since it was important to establish the baseline effects of the objects, we compared reaches made to the initial target position with objects in each position. The results of the repeated-measures functional ANOVA are shown with the gray significance bar in Figure 2. The position of the gray coloring corresponds to the locations along the reach distance (y) where the trajectories differed in the lateral (x) dimension, with the intensity of the color corresponding to the magnitude of the statistical difference. As can be seen, the objects had a significant effect on lateral deviation throughout the reach. To investigate this effect, we conducted functional pair-wise comparisons (implemented as two-level repeated-measures functional ANOVAs) between all pairs of trajectories (see Figure 2, inset). This analysis revealed that the differences in trajectories due to objects were entirely driven by the objects in the “near” positions. That is, when an object was in the near-right position (black trace, Figure 2), the average trajectory was significantly shifted to the left, and when the object was in the near-left position (red trace), the trajectory was shifted to the right. These two trajectories were significantly different from all the other trajectories, and no other trajectories significantly differed from one another. It should be noted that while there were clearly significant differences due to the presence of obstacles, these effects were very small (x-axis magnified 8× in Figure 2), with the largest difference from baseline spanning less than 5 mm. This subtle yet significant deviation speaks to the remarkable sensitivity of the visuomotor system when avoiding potential obstacles. However, it does indicate that the two “near” object positions interfered with the reach even during unperturbed reaches, and thus may have resulted in different reach behavior when online corrections were required. 
Reaches to jumped target positions
For clarity, we separately analyzed reaches on jump-target trials with objects to the left of midline (near-left and far-left, Figure 3A) and reaches made on jump-target trials with objects to the right of midline (near-right and far-right, Figure 3B). Within each of these sets of trajectories (objects-left and objects-right), we conducted three functional ANOVAs: one comparing jump-left versus jump-right trials (results indicated with green significance bars), and the second and third comparing the effect of the objects on the jump-left and jump-right trials, respectively (results indicated with the gray significance bars on the left and right of plots). 
For both the objects-left and objects-right sets, the difference between jump-left and jump-right trials became (and thereafter remained) significant approximately 17 cm (or 43%) into the y-movement (see green significance bars, Figure 3). This is markedly different from how the reaches on no-jump trials were affected by the objects. 
On object-left trials (Figure 3A), the effect due to objects emerges much later in the reach (25 cm or 63% of y-movement, gray significance bar to the left, Figure 3A) but only on trials where the target jumped to the left. That is, when the target jumped to the right, there was no difference between either of the object conditions or the baseline (no-object, green-trace) trials. When we investigated the effect of left-objects on left-jump trials further, the pair-wise functional comparisons (see Figure 3A, inset) confirmed what is visually apparent—the left-near (red-trace) reaches were driven further in depth and to the right, while the left-far (blue-trace) reaches were driven closer in depth and to the left relative to the baseline (no-object, green-trace) reaches. 
The pattern observed on the object-right trials (Figure 3B) is similar, though it is somewhat obscured by the interference that was observed when participants reached to the initial target position (described above). That is, reaches with objects in the near-right location (black traces) were initially shifted slightly left, leading to significant differences in trajectories early in the movement (gray significance bars to the left and right, Figure 3B). Importantly, however, these differences were observed for both jump-left and jump-right trials. Critically, only on the jump-right trials did the effects due to objects persist (and become more significant) late in the reach. The pair-wise functional comparisons (Figure 3B, inset) confirmed this finding, with only the near-right trace being significantly different from the baseline (no-object, green-trace) trials early in the movement. The pair-wise comparisons also revealed how the objects affected the reaches during the online correction. Similar to the left-object trials, on the right-object trials the near-right location (black trace) drove the hand further in depth and to the left, while the far-right location (pink trace) drove the hand closer in depth and to the right. 
Overall, once the initial interference effects were accounted for, there were two major findings from the analysis of the reach trajectories on jump-target trials. First, the deviation due to the hand reacting to the jumped target occurred earlier (in space) than the effects due to objects. Second, these object effects, which occurred exclusively during the online correction, showed a clear pattern of obstacle avoidance, consistently moving the hand away from the object position. 
Temporal profiles
Of course, it is not possible to characterize the effect of obstacles by analyzing only the spatial component of the reach. What appear to be automatic and fluid spatial deviations away from objects that interfere with corrected movements could actually come at the expense of significant velocity reductions. To examine the temporal component of the reaches, we analyzed three dependent measures derived from the vector (3D) velocity: peak velocity, time to peak velocity, and percent time to peak velocity. To complement the velocity analysis, we also examined reaction time and movement time. Above, we described a spatial definition of where trajectories on target-jump trials became significantly different from no-jump trials; here we wanted to provide a rigorous temporal definition of correction latency for reactions to jumps in both directions. To calculate this, we performed two two-level (jump-left vs. no-jump and jump-right vs. no-jump) MANOVAs (with three dependent variables, one each for the x, y, and z velocities) at each time point (defined by frames) for each participant. The correction latency was defined as the first time point of the longest run of consecutive frames in which these MANOVAs were significant (p < 0.05). 
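Stripped of the MANOVA itself, the latency criterion is a run-length search over per-frame p-values. A minimal sketch of that search follows; the per-frame p-values would come from the x/y/z-velocity MANOVAs, and the function name and default frame duration are assumptions (the recording rate is not restated in this section):

```python
def correction_latency(p_values, alpha=0.05, frame_ms=1000 / 200):
    """Onset (ms) of the longest run of consecutively significant frames.

    p_values: one MANOVA p-value per frame, in temporal order.
    frame_ms: duration of one frame; the default sampling rate is an
    assumption for illustration. Returns None if no frame is significant.
    """
    best_start, best_len = None, 0
    run_start, run_len = None, 0
    for i, p in enumerate(p_values):
        if p < alpha:
            if run_len == 0:
                run_start = i  # a new run of significant frames begins
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0  # run broken by a non-significant frame
    return None if best_start is None else best_start * frame_ms
```

Note that the onset of the longest run, not the first significant frame, defines the latency, which guards against isolated spuriously significant frames early in the movement.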
For each participant, each of the non-latency measures was entered into a two-factor jump-type × object-position (3 × 5) repeated-measures (RM) ANOVA, and the latency measure was entered into a two-factor jump-direction × object-position (2 × 5) RM-ANOVA (all RM-ANOVAs Greenhouse–Geisser corrected, significant at p < 0.05). Means and results for these tests are shown in Table 1. The average vector-velocity and lateral-velocity traces (the lateral dimension being where the greatest differences due to jumped targets were expected) for no-object trials, as well as the left and right correction latencies, are shown in Figure 4A. The vector- and lateral-velocity traces for trials with objects are shown in Figure 4B. The results naturally fell into two categories: effects due to jump direction and effects due to object position. 
Table 1
 
The means and statistical results for the temporal dependent measures. Where significant, the strength of an interaction is indicated in the row with the measure name. The F column shows the results of an F-test of the main effect (or simple main effect) of the means in a given row (Greenhouse–Geisser corrected). Results from pair-wise contrasts (Bonferroni corrected) are shown next to each significant F-test. *, <, and > indicate p < 0.05; ** indicates p < 0.005; ns = not significant.
Reaction time (ms)
No-Obj Near-L Far-L Far-R Near-R F
351.79 331.65 331.59 335.26 333.92 ** No-Obj > Rest
Movement time (ms) Interaction** Jump type × object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 471.44 468.93 470.03 471.97 468.33 ns
Jump-L 695.78 719.98 714.65 692.96 696.49 * Near-L > No-Obj, Far-R, Near-R
Jump-R 625.91 615.85 608.59 635.85 759.27 ** Near-R > Rest; Far-R > Near-L
Peak velocity (mm/s) Interaction* Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 1334.78 1325.37 1323.28 1326.80 1322.42 ns
Jump-L 1319.81 1320.90 1307.08 1319.77 1332.48 ns
Jump-R 1402.48 1389.38 1400.22 1392.95 1344.26 * None
Time to peak velocity (ms) Interaction* Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 223.70 221.90 222.74 223.48 223.07 ns
Jump-L 217.56 228.11 229.94 223.53 226.94 ns
Jump-R 250.66 250.72 256.52 250.63 231.67 * Near-R < Near-L, Far-L
Percent time to peak velocity (%) Interaction** Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 47.44 47.17 47.44 47.39 47.50 ns
Jump-L 31.44 31.72 32.33 32.39 32.72 ns
Jump-R 40.72 41.28 42.50 39.89 31.11 ** Near-R < Rest; Far-R < Near-L
Correction latency (ms)
No-Obj Near-L Far-L Far-R Near-R F
Jump-L 299.44 317.78 309.44 310.55 313.89 ns
Jump-R 268.89 266.67 269.44 278.89 286.11 ns
Figure 4
 
(Top) Average vector velocity and (bottom) lateral velocity traces for trials with (A) no objects and with (B) objects. (A) Trace color denotes jump direction: green = no jump, blue = jump left, red = jump right. Shaded area around trajectory traces represents average standard error across 18 participants. Vertical lines denote the correction latency for jumps to the right (red) and left (blue). (B) Trace color denotes the position of the object: blue = far-left, red = near-left, magenta = far-right, black = near-right. The style of the line denotes the jump direction: thin = no jump, dashed = jump-left, thick = jump-right. The green shaded region corresponds to the shaded regions in (A)—no objects—and serves as a baseline.
Effects due to jump direction
Overall, reaches requiring corrections to the left were much slower than reaches requiring corrections to the right (independent of object position). This was reflected in longer movement times, lower peak velocities, and more time spent decelerating (earlier peak velocities) for jump-left as compared to jump-right trials. Given these velocity differences, it is not surprising that we also observed longer correction latencies for reaches correcting left than for reaches correcting right (see Table 1 and Figure 4A; correct-left versus correct-right differences are also reflected in our analysis of error trials, see Supplementary material). This finding replicates the result obtained when right-handed participants make online corrections to the left and right with their right hand (Carnahan, 1998) and is consistent with biomechanical factors like limb inertia (Gordon, Ghilardi, Cooper, & Ghez, 1994). 
Effects due to object position
Of more interest to the current study were the effects of the non-target objects on trials when the target jumped at reach onset. Since we predicted that objects to the left and right of midline would have different effects depending on the direction of the target jump, we were specifically interested in investigating interactions between jump type (or jump direction) and object position. As such, any variable with a significant interaction between these factors was further investigated by running a single factor RM-ANOVA of the five obstacle positions for each of the three jump types (see Table 1). 
It should be noted that reaction time showed a significant effect only for object position. This was driven by slower responses when no object was present, a finding that replicates our previous work (Chapman & Goodale, 2010). Aside from reaction time, all other non-latency measures showed a significant interaction between jump type and object position. The movement-time results follow from the trajectory results: reaches that deviated from the no-object trajectories had longer movement times. That is, there was no significant effect of object position on movement time for reaches to the initial target position, but there was a significant effect of object position on both jump-left and jump-right trials. For both left and right jumps, movement times were longer when objects were on the side of the final target position (i.e., left objects for left jumps) and longest for objects in the near positions. 
From only movement times, however, it is difficult to tell if these longer duration movements are the result of a larger distance traveled by the hand or a decrease in velocity. When examining the interaction between target position and object position for the velocity variables (peak velocity, time to peak velocity, and percent time to peak velocity), it appears that only for jump-right trials does the position of the object actually cause a significant slowing of the reach. Specifically, there was no effect of object position when the target remained in the initial position or when the target jumped to the left, but for jump-right trials, all velocity variables showed an object-position effect. From pair-wise comparisons (Bonferroni corrected, p < 0.05), the significant object effect on jump-right velocities was shown to be caused almost entirely by a significant slowing of the reach (lower peak velocity and longer deceleration phase) only when the object was in the near-right location. While no specific velocity measure showed an effect of obstacle position on jump-left trials, it should be noted that there was evidence of temporal interference caused by the near-left location, as suggested by the velocity profiles (Figure 4B) where the jump-left trials with objects in the near-left position (red dashed trace) show departures from the no-object trials (green shaded region) especially in the lateral velocity profile. 
From the observed effects, it is clear that objects that become obstacles during an online correction can interfere with the movement spatially and temporally. However, it is critical to know if this interference occurs before or after the correction is initiated. To test this, we looked at the effect of object position on the correction latencies. If the position of the object had an effect on correction latencies, then it could be argued that the observed obstacle interference occurred prior to the initiation of the online correction. Importantly, the correction latencies showed neither a significant interaction between object position and jump direction nor a significant main effect of object position for either the jump-left or jump-right trials. This suggests that the observed obstacle interference occurs only after the correction has been initiated. 
Taken together, the results from analyzing the temporal profiles of the reach suggest that the fluid avoidance of obstacles during online corrections seen in the spatial trajectories can occur without any significant alteration in the velocity of the reach. However, if the risk for collision during a correction was high (as was certainly the case for the near-right position, which accounted for all 10 collisions detected across all participants; see Supplementary material), the spatial avoidance was accompanied by significant velocity reductions. This effect can be seen in Figure 4B. Here, for reaches with objects on the left, only reaches corrected left with an object in the near-left position (dashed red trace) showed some evidence of temporal interference. This effect was much stronger for reaches with objects on the right, where reaches corrected right with an object in the near-right position (thick black trace) showed large velocity reductions. 
Discussion
The primary aim of the current study was to test whether or not participants could avoid an obstacle that became an impediment to a reach only after the initial reach target changed position during the movement. The results from the analysis of the spatial trajectories (Figure 3) clearly demonstrate that obstacle avoidance while correcting a reach toward a new jumped target is possible. Critically, avoidance was observed only when the new target position caused an object to become an obstacle (i.e., when the new position was on the same side of space as the object) and not when the target jumped away from the object. The results from the analysis of the velocity profiles of these reaches (Figure 4) demonstrated that, in some cases, this spatial avoidance was accompanied by a significant slowing of the reach (when objects were between the hand and the new target, especially on the right) and in others, the spatial avoidance occurred without a significant alteration in the speed of the movement (when objects were at the same depth as the new target). 
The design of the current study allowed us some insight into two questions currently being debated in the field of visuomotor control. First, by isolating the obstacle avoidance to the corrected portion of the reach, we were able to provide evidence that the visual context (i.e., objects other than the participant's hand and the target) can affect a reach during online corrections. To make this claim, it was necessary to show that the avoidance effects occurred after the correction to the jumped target. We confirmed this using both a spatial definition of when the correction occurred (reactions to jumped targets occurred closer in depth than avoidance effects, see Figure 3) and a temporal definition of when the correction occurred (correction latencies were unaffected by obstacle position, see Table 1). The finding that a non-target object has an effect on reaching that is restricted to the automatically corrected portion of the reach is consistent with two recent studies showing that position changes of items other than the target that occurred while the hand was in flight caused deviations in the trajectory of the reach. In one case, the non-target object was the target of an upcoming movement (Cameron et al., 2007), and in the other, the non-target objects were virtual obstacles (Aivar et al., 2008). 
The response to perturbations of virtual obstacles observed by Aivar et al. (2008) leads to the second question we were able to investigate: whether reacting to real objects during the online correction of a real reach would differ from the corrections observed in the presence of virtual obstacles during a rapid stylus-pointing task. By varying both the side of space the objects were on and the depth at which they were placed, we were able to show that objects with a higher risk for collision not only altered the spatial trajectories but also significantly slowed the reach. This was especially evident on trials with objects placed in the near-right position when targets jumped to the right (thick black line, Figure 4B). The noticeable slowing of reaches on these near-right/jump-right trials may have been caused by the near-right obstacle position being occluded by the moving arm. We do not favor this explanation for two reasons. First, the objects were 25 cm tall and at least some part of the object was always visible regardless of the position of the arm. Second, in previous work (Chapman & Goodale, 2008), short objects that could have been completely occluded by the moving arm resulted in less interference, opposite to the current result for the near-right/jump-right trials. Instead, we believe that the significant slowing noted on these trials was a result of the biomechanics and physical arrangement of the hand and arm, which meant that the direct path to the new target position was blocked by the near-right object (the difficulty of this configuration was confirmed in an analysis of the collision trials; see Supplementary material). We also observed some slowing on trials with an object in the near-left position when targets jumped left (dashed red trace, Figure 4), but no slowing when objects were in the far positions or when they were on the side opposite the target jump. 
This almost parametric slowing is entirely consistent with the degree to which the object was actually an obstacle to the corrected movement. While there are other methodological differences between the current study and the work conducted by Aivar et al. (2008), we believe that this novel finding of reach slowing in accord with obstacle interference demonstrates that the real-world consequences of colliding with a three-dimensional object result in reaches that differ from those performed in a virtual context. 
It is possible that the different effects we observed between near and far objects on corrected movements had to do with the interference we saw for near objects on non-jump trials (see Figure 2). That is, it could be that the slowdown we observed later in the reach for corrected movements toward near objects was actually a consequence of the original deviations induced in the early trajectory by objects in these positions. We do not support this interpretation for two reasons. First, the deviation induced by the near objects on non-jump trials was very small (less than 5 mm for both the near-left and near-right objects) and occurred only in the lateral dimension, whereas the deviations that occurred during avoidance were much larger (more than 10 mm) and occurred both laterally and in depth. Second, the onset of the correction was independent of the position of the objects (see Table 1). Specifically, for both the jump-right and jump-left trials, the difference between the correction latencies for the near and far locations of objects on the same side as the jump was less than 5 ms. It appears, then, that the visuomotor system automatically avoids obstacles in a sensible fashion: it provides a margin of error around all objects near the hand path but slows the reach only for corrected movements where obstacles truly impede the movement. This extra time likely allows for more deliberate control of an action when the chance of an undesirable collision is high. 
Given this study and previous findings, what can be concluded about the representation and encoding of obstacles during movement planning and execution? First, we believe that non-target objects in the reach environment are encoded and accessed similarly to target objects—that is, a representation is available during both planning and execution. Until now, there was plenty of evidence showing that obstacles encoded during movement planning affected the subsequent reach movement (Chapman & Goodale, 2008, 2010; Mon-Williams et al., 2001; Tresilian, 1998, 1999). It remained inconclusive, however, whether or not this obstacle representation could be accessed in flight causing the reach (and/or the obstacle representation) to be updated online. Here we show that, like the position of a target, the position of an obstacle can be dynamically accessed and can influence corrected movements. Although we were successful in isolating the obstacle effects to the corrected portion of the reach, we are unable to state conclusively that the observed avoidance during a corrected movement was the result of planned obstacle representations being integrated into the trajectory rather than the result of a new obstacle representation being created in-flight. This issue is ultimately related to whether online correction mechanisms—which use feedback to reduce the error between the original and corrected movement goals—include a feed-forward/predictive signal in the error estimate or whether the error signal arises from purely sensory mechanisms (for discussion of this point, see Desmurget & Grafton, 2000). Given the inherent delays in sensory feedback, however, some sort of feed-forward mechanism seems necessary. Recent behavioral and modeling studies confirm the role of prediction in successful online correction (e.g., Danion & Sarlegna, 2007; Gritsenko, Yakovenko, & Kalaska, 2009). 
One particularly relevant finding comes from online corrections to a new target position made while participants held a mass in a pinched grip: the grip-force adjustments required to accelerate the mass toward the corrected target position preceded the actual in-flight trajectory adjustment (Danion & Sarlegna, 2007). That the grip adjustment can lead the trajectory shift suggests that the consequences of the corrected movement were predicted and compensated for before the trajectory was altered. In the context of the current study, it therefore suggests that the consequences of the corrected movement and the subsequent deviation around the obstacle are predicted and rely on planned obstacle representations. It also suggests that one should be able to see obstacle influences prior to the correction if the expectation is that a corrected movement will be interfered with. That is, if one designed an experiment where the expectation of a target jump was high (much greater than the 1/3 used here) and the direction of the corrected movement was predictable (the target always jumped in one direction), then even the initial movement should be affected by an obstacle that interfered only with the corrected movement. In this case, it would make sense for predictive mechanisms to anticipate the obstacle's potential interference and adopt an initial reach that made the upcoming correction easier to perform. 
The second conclusion regarding obstacle representations that we infer from the current study is that non-target objects are encoded and accessed by the dorsal visual stream. Since dorsal stream structures have been implicated in both the avoidance of obstacles during non-corrected movements (Schindler et al., 2004) and in performing online corrections toward jumped targets (Desmurget et al., 1999; Pisella et al., 2000), it follows logically that a task combining both would recruit similar neural pathways. To support this idea, studies recording from cells in the monkey dorsal visual stream have shown populations of neurons that encode multiple potential reach targets (Cisek & Kalaska, 2002, 2005). Given our findings demonstrating sensitivity to obstacles during corrected movements, we believe that the encoding of multiple objects extends to both target and non-target objects. Moreover, recent neuroimaging work in humans (Gallivan, Cavina-Pratesi, & Culham, 2009) has shown that objects within reachable space were preferentially encoded (relative to objects beyond reach) when participants were passively viewing the workspace. While these objects were sometimes the targets of action, on trials in which no action was performed, there was still evidence of encoding in the superior parietal cortex, a critical structure for visuomotor processing in the dorsal stream. 
This notion of reachable space provides an elegant way of summarizing our results. When performing reaches to the initial target position where no jump occurred (the majority of trials), the evidence for obstacle encoding was minimal (near objects) or non-existent (far objects). However, on trials where the target did jump and an object impeded the corrected movement, we observed automatic avoidance sensitive to the risks of collision. This suggests that the potential obstacles were encoded on every trial and that reachable space is defined not just by the part of our environment we are most likely to act in but includes everything within reach of our acting hand. This conclusion resonates with a recent review (Baldauf & Deubel, 2010) that argues that visuomotor planning (or "visual preparation") automatically results in the dynamic deployment of attention across all of reachable space. This attentional landscape allows multiple relevant locations in the workspace to be processed in parallel. Following from this idea, in the current experiment potential targets and potential obstacles must have been encoded simultaneously in order to produce the reported effects on reach behavior. Presumably, there are limits on both the number of objects that can be represented in parallel and the spatial extent over which the concurrent representation of objects can occur (explaining why objects well out of reach have no effect on movements). Exactly what defines reachable space, the objects in it, and how it must be dynamically modulated both by our movements through the environment and by our goals remains an open and interesting question. 
Conclusion
Rather than considering only the encoding of the hand and the target, we should acknowledge that the entire reach environment must be represented in order for humans to act successfully in the real world. It is obvious that obstacles in the environment necessarily affect our movements; after all, the consequences of colliding with a particularly dangerous obstacle are likely more dire than the consequences of missing a target. Here we provide evidence that obstacle encoding shares one critical feature with target encoding: movements were automatically deviated in reaction to changes in both target and obstacle information that occurred while the hand was in flight. 
Supplementary Materials
Supplementary materials 
Acknowledgments
This work was supported by a Natural Sciences and Engineering Research Council of Canada operating grant (MAG) and graduate student award (CSC). 
Commercial relationships: none. 
Corresponding author: Melvyn A. Goodale. 
Email: mgoodale@uwo.ca. 
Address: Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada. 
References
Aivar M. P. Brenner E. Smeets J. B. (2008). Avoiding moving obstacles. Experimental Brain Research, 190, 251–264. [CrossRef] [PubMed]
Baldauf D. Deubel H. (2010). Attentional landscapes in reaching and grasping. Vision Research, 50, 999–1013. [CrossRef] [PubMed]
Brenner E. Smeets J. B. (1997). Fast responses of the human hand to changes in target position. Journal of Motor Behavior, 29, 297–310. [CrossRef] [PubMed]
Cameron B. D. Franks I. M. Enns J. T. Chua R. (2007). Dual-target interference for the “automatic pilot” in the dorsal stream. Experimental Brain Research, 181, 297–305. [CrossRef] [PubMed]
Carnahan H. (1998). Manual asymmetries in response to rapid target movement. Brain and Cognition, 37, 237–253. [CrossRef] [PubMed]
Chang S. W. Abrams R. A. (2004). Hand movements deviate toward distracters in the absence of response competition. Journal of General Psychology, 131, 328–344. [PubMed]
Chapman C. S. Gallivan J. P. Wood D. K. Milne J. L. Culham J. C. Goodale M. A. (2010). Reaching for the unknown: Multiple target encoding and real-time decision-making in a rapid reach task. Cognition, 116, 168–176. [CrossRef] [PubMed]
Chapman C. S. Goodale M. A. (2008). Missing in action: The effect of obstacle position and size on avoidance while reaching. Experimental Brain Research, 191, 83–97. [CrossRef] [PubMed]
Chapman C. S. Goodale M. A. (2010). Seeing all the obstacles in your way: The effect of visual feedback and visual feedback schedule on obstacle avoidance while reaching. Experimental Brain Research, 202, 363–375. [CrossRef] [PubMed]
Cisek P. Kalaska J. F. (2002). Simultaneous encoding of multiple potential reach directions in dorsal premotor cortex. Journal of Neurophysiology, 87, 1149–1154. [PubMed]
Cisek P. Kalaska J. F. (2005). Neural correlates of reaching decisions in dorsal premotor cortex: Specification of multiple direction choices and final selection of action. Neuron, 45, 801–814. [CrossRef] [PubMed]
Coello Y. Magne P. (2000). Determination of target distance in a structured environment: Selection of visual information for action. European Journal of Cognitive Psychology, 12, 489–519. [CrossRef]
Danion F. Sarlegna F. R. (2007). Can the human brain predict the consequences of arm movement corrections when transporting an object? Hints from grip force adjustments. Journal of Neuroscience, 27, 12839–12843. [CrossRef] [PubMed]
Day B. L. Lyon I. N. (2000). Voluntary modification of automatic arm movements evoked by motion of a visual target. Experimental Brain Research, 130, 159–168. [CrossRef] [PubMed]
Desmurget M. Epstein C. M. Turner R. S. Prablanc C. Alexander G. E. Grafton S. T. (1999). Role of the posterior parietal cortex in updating reaching movements to a visual target. Nature Neuroscience, 2, 563–567. [CrossRef] [PubMed]
Desmurget, M., & Grafton, S. (2000). Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Sciences, 4, 423–431.
Desmurget, M., Gréa, H., Grethe, J. S., Prablanc, C., Alexander, G. E., & Grafton, S. T. (2001). Functional anatomy of nonvisual feedback loops during reaching: A positron emission tomography study. Journal of Neuroscience, 21, 2919–2928.
Elliott, D., Binsted, G., & Heath, M. (1999). The control of goal-directed limb movements: Correcting errors in trajectory. Human Movement Science, 18, 121.
Elliott, D., Carson, R. G., Goodman, D., & Chua, R. (1991). Discrete vs. continuous visual control of manual aiming. Human Movement Science, 10, 393–418.
Elliott, D., Helsen, W. F., & Chua, R. (2001). A century later: Woodworth's (1899) two-component model of goal-directed aiming. Psychological Bulletin, 127, 342–357.
Gallivan, J. P., Cavina-Pratesi, C., & Culham, J. C. (2009). Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. Journal of Neuroscience, 29, 4381–4391.
Glover, S. (2004). Separate visual representations in the planning and control of action. Behavioral and Brain Sciences, 27, 3–24.
Gomi, H. (2008). Implicit online corrections of reaching movements. Current Opinion in Neurobiology, 18, 558–564.
Gomi, H., Abekawa, N., & Nishida, S. (2006). Spatiotemporal tuning of rapid interactions between visual-motion analysis and reaching movement. Journal of Neuroscience, 26, 5301–5308.
Goodale, M. A., Pélisson, D., & Prablanc, C. (1986). Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature, 320, 748–750.
Gordon, J., Ghilardi, M. F., Cooper, S. E., & Ghez, C. (1994). Accuracy of planar reaching movements. II. Systematic extent errors resulting from inertial anisotropy. Experimental Brain Research, 99, 112–130.
Gréa, H., Pisella, L., Rossetti, Y., Desmurget, M., Tilikete, C., Grafton, S., et al. (2002). A lesion of the posterior parietal cortex disrupts on-line adjustments during aiming movements. Neuropsychologia, 40, 2471–2480.
Gritsenko, V., Yakovenko, S., & Kalaska, J. F. (2009). Integration of predictive feedforward and sensory feedback signals for online control of visually guided movement. Journal of Neurophysiology, 102, 914–930.
Hamilton, A. F., & Wolpert, D. M. (2002). Controlling the statistics of action: Obstacle avoidance. Journal of Neurophysiology, 87, 2434–2440.
Heath, M. (2005). Role of limb and target vision in the online control of memory-guided reaches. Motor Control, 9, 281–311.
Heath, M., Westwood, D. A., & Binsted, G. (2004). The control of memory-guided reaching movements in peripersonal space. Motor Control, 8, 76–106.
Howard, L. A., & Tipper, S. P. (1997). Hand deviations away from visual cues: Indirect evidence for inhibition. Experimental Brain Research, 113, 144–152.
Keulen, R. F., Adam, J. J., Fischer, M. H., Kuipers, H., & Jolles, J. (2003). Distractor interference in selective reaching: Dissociating distance and grouping effects. Journal of Motor Behavior, 35, 119–126.
Liu, D., & Todorov, E. (2007). Evidence for the flexible sensorimotor strategies predicted by optimal feedback control. Journal of Neuroscience, 27, 9354–9368.
Mon-Williams, M., Tresilian, J. R., Coppard, V. L., & Carson, R. G. (2001). The effect of obstacle position on reach-to-grasp movements. Experimental Brain Research, 137, 497–501.
Pélisson, D., Prablanc, C., Goodale, M. A., & Jeannerod, M. (1986). Visual control of reaching movements without vision of the limb. II. Evidence of fast unconscious processes correcting the trajectory of the hand to the final position of a double-step stimulus. Experimental Brain Research, 62, 303–311.
Pisella, L., Gréa, H., Tilikete, C., Vighetto, A., Desmurget, M., Rode, G., et al. (2000). An "automatic pilot" for the hand in human posterior parietal cortex: Toward reinterpreting optic ataxia. Nature Neuroscience, 3, 729–736.
Prablanc, C., & Martin, O. (1992). Automatic control during hand reaching at undetected two-dimensional target displacements. Journal of Neurophysiology, 67, 455–469.
Proteau, L., & Masson, G. (1997). Visual perception modifies goal-directed movement control: Supporting evidence from a visual perturbation paradigm. Quarterly Journal of Experimental Psychology A, 50, 726–741.
Ramsay, J. O., & Silverman, B. W. (2005). Functional data analysis (2nd ed.). New York: Springer.
Reichenbach, A., Thielscher, A., Peer, A., Bülthoff, H. H., & Bresciani, J. P. (2009). Seeing the hand while reaching speeds up on-line responses to a sudden change in target position. The Journal of Physiology, 587, 4605–4616.
Sabes, P. N., & Jordan, M. I. (1997). Obstacle avoidance and a perturbation sensitivity model for motor planning. Journal of Neuroscience, 17, 7119–7128.
Sabes, P. N., Jordan, M. I., & Wolpert, D. M. (1998). The role of inertial sensitivity in motor planning. Journal of Neuroscience, 18, 5948–5957.
Saijo, N., Murakami, I., Nishida, S., & Gomi, H. (2005). Large-field visual motion directly induces an involuntary rapid manual following response. Journal of Neuroscience, 25, 4941–4951.
Sailer, U., Eggert, T., Ditterich, J., & Straube, A. (2002). Global effect of a nearby distractor on targeting eye and hand movements. Journal of Experimental Psychology: Human Perception and Performance, 28, 1432–1446.
Sarlegna, F., Blouin, J., Bresciani, J. P., Bourdin, C., Vercher, J. L., & Gauthier, G. M. (2003). Target and hand position information in the online control of goal-directed arm movements. Experimental Brain Research, 151, 524–535.
Schindler, I., Rice, N. J., McIntosh, R. D., Rossetti, Y., Vighetto, A., & Milner, A. D. (2004). Automatic avoidance of obstacles is a dorsal stream function: Evidence from optic ataxia. Nature Neuroscience, 7, 779–784.
Soechting, J. F., & Lacquaniti, F. (1983). Modification of trajectory of a pointing movement in response to a change in target location. Journal of Neurophysiology, 49, 548–564.
Song, J. H., & Nakayama, K. (2006). Role of focal attention on latencies and trajectories of visually guided manual pointing. Journal of Vision, 6(9):11, 982–995, http://www.journalofvision.org/content/6/9/11, doi:10.1167/6.9.11.
Striemer, C. L., Chapman, C. S., & Goodale, M. A. (2009). "Real-time" obstacle avoidance in the absence of primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 106, 15996–16001.
Tipper, S. P., Lortie, C., & Baylis, G. C. (1992). Selective reaching: Evidence for action-centered attention. Journal of Experimental Psychology: Human Perception and Performance, 18, 891–905.
Tipper, S. P., Meegan, D. V., & Howard, L. A. (2002). Action-centered negative priming: Evidence for reactive inhibition. Visual Cognition, 9, 591–614.
Tresilian, J. R. (1998). Attention in action or obstruction of movement? A kinematic analysis of avoidance behavior in prehension. Experimental Brain Research, 120, 352–368.
Tresilian, J. R. (1999). Selective attention in reaching: When is an object not a distractor? Trends in Cognitive Sciences, 3, 407–408.
Welsh, T., & Elliott, D. (2004). Movement trajectories in the presence of a distracting stimulus: Evidence for a response activation model of selective reaching. Quarterly Journal of Experimental Psychology A, 57, 1031–1057.
Whitney, D., Westwood, D. A., & Goodale, M. A. (2003). The influence of visual motion on fast reaching movements to a stationary object. Nature, 423, 869–873.
Figure 1
 
Experimental setup with target and object positions. Participants made reaches from the start button to the initial target (green circle), both of which were on a 1 m × 1 m black fabric board. On 1/3 of trials, the target jumped in depth and to the left (blue circle) or right (red circle). When an object was present, it appeared in one of four positions (indicated by colored squares, size and position to scale). Movements were recorded using two OPTOTRAK cameras (one to the left, one in front) at 100 Hz.
Figure 2
 
Overhead view (x, y) of average (average of 18 participants' individual averages) reach trajectories to the initial target (small red circle) with objects in each position (position and size not to scale). Trajectory traces are color coded to match the object positions (green = no objects, note: x-axis magnified 8×). Shaded area around trajectory traces represents average standard error across 18 participants. Gray significance bar to the left gives a measure of where there were statistical differences (magnitude of difference is proportional to intensity of gray—see p-value legend, note: p-values are Greenhouse–Geisser corrected) between trajectories in the lateral dimension. (Inset) Functional pair-wise comparisons between all possible pairs of trajectories arranged as a matrix. Each row (color inside the box) and column (border of box) corresponds to a different object position with the intersection being the comparison between those two trajectories. Within each intersection box, the position of the colored area corresponds to where along the reach distance (y) the trajectories differed in the lateral (x) dimension with the intensity of the color corresponding to the magnitude of the statistical difference (see p-value legend and exploded box to side, where the (red) near-left trajectory is being compared to the (green) no-object trajectory).
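The functional pairwise comparisons in the inset locate where along the reach two trajectories differ in the lateral dimension. A minimal sketch of the idea, using pointwise paired t-tests on resampled trajectories (the function name, the 100-point resampling, and the use of raw pointwise tests rather than the authors' functional-data-analysis pipeline following Ramsay and Silverman, 2005, are all assumptions):

```python
import numpy as np
from scipy.stats import ttest_rel

def pointwise_comparison(traj_a, traj_b, alpha=0.05):
    """Compare two sets of reach trajectories point by point.

    traj_a, traj_b: (n_subjects, n_points) arrays of lateral (x)
    position, each subject's trajectory resampled to n_points steps
    of normalized reach distance (y).
    Returns the p-value at each point and a boolean mask marking
    where the two trajectories differ significantly.
    """
    t, p = ttest_rel(traj_a, traj_b, axis=0)  # paired across subjects
    return p, p < alpha
```

Plotting the mask alongside the average traces reproduces the spirit of the gray significance bars: a region of the reach where the obstacle condition pulled the hand laterally away from the baseline path.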
Figure 3
 
Overhead view (x, y) of average reach trajectories to the jumped target positions with objects on the (A) left or (B) right. Trajectory traces are color coded to match the object positions (green = no objects, object size not to scale). Shaded area around trajectory traces represents average standard error across 18 participants. The gray significance bars denote where there was an effect due to obstacles for a given jump direction and the green significance bars denote where the jump-left and jump-right trials were significantly different (magnitude of lateral deviation difference is proportional to intensity of color—see p-value legend—note that its location does not obscure any part of the significance bar in (B)). (Insets) Functional pair-wise comparisons between trajectories for (A) target jumps left and obstacles-left and (B) target jumps right and obstacles-right. Configuration of pair-wise comparison boxes is identical to Figure 2.
Figure 4
 
(Top) Average vector velocity and (bottom) lateral velocity traces for trials with (A) no objects and with (B) objects. (A) Trace color denotes jump direction: green = no jump, blue = jump left, red = jump right. Shaded area around trajectory traces represents average standard error across 18 participants. Vertical lines denote the correction latency for jumps to the right (red) and left (blue). (B) Trace color denotes the position of the object: blue = far-left, red = near-left, magenta = far-right, black = near-right. The style of the line denotes the jump direction: thin = no jump, dashed = jump-left, thick = jump-right. The green shaded region corresponds to the shaded regions in (A)—no objects—and serves as a baseline.
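The correction latencies marked by the vertical lines in (A) can be estimated from the lateral velocity traces. A common approach, sketched below under assumed parameters (the threshold value, minimum duration, and function name are illustrative, not the authors' stated criterion), is to find when lateral velocity first departs sustainedly from zero after the target jump:

```python
import numpy as np

def correction_latency(lateral_vel, dt_ms=10.0, threshold=50.0, min_duration=3):
    """Estimate correction latency from a lateral (x) velocity trace.

    lateral_vel: 1-D array of lateral velocity (mm/s), sampled every
    dt_ms milliseconds from movement onset (10 ms = 100 Hz, matching
    the OPTOTRAK recording rate).
    Returns the time (ms) at which |lateral velocity| first exceeds
    `threshold` and stays above it for `min_duration` consecutive
    samples, or None if no correction is detected.
    """
    above = np.abs(np.asarray(lateral_vel)) > threshold
    for i in range(len(above) - min_duration + 1):
        if above[i:i + min_duration].all():
            return i * dt_ms
    return None
```

The sustained-threshold requirement guards against labeling a single noisy sample as the onset of the correction.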
Table 1
 
The means and statistical results for the temporal dependent measures. Where significant, the strength of an interaction is indicated in the row with the measure name. The F column shows the results of an F-test of the main effect (or simple main effect) of the means in a given row (Greenhouse–Geisser corrected). Results from pair-wise contrasts (Bonferroni corrected) are shown next to each significant F-test. * and < or > = p < 0.05, ** = p < 0.005, ns = not significant.
Reaction time (ms)
No-Obj Near-L Far-L Far-R Near-R F
351.79 331.65 331.59 335.26 333.92 ** No-Obj > Rest
Movement time (ms) Interaction** Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 471.44 468.93 470.03 471.97 468.33 ns
Jump-L 695.78 719.98 714.65 692.96 696.49 * Near-L > No-Obj, Far-R, Near-R
Jump-R 625.91 615.85 608.59 635.85 759.27 ** Near-R > Rest; Far-R > Near-L
Peak velocity (mm/s) Interaction* Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 1334.78 1325.37 1323.28 1326.80 1322.42 ns
Jump-L 1319.81 1320.90 1307.08 1319.77 1332.48 ns
Jump-R 1402.48 1389.38 1400.22 1392.95 1344.26 * None
Time to peak velocity (ms) Interaction* Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 223.70 221.90 222.74 223.48 223.07 ns
Jump-L 217.56 228.11 229.94 223.53 226.94 ns
Jump-R 250.66 250.72 256.52 250.63 231.67 * Near-R < Near-L, Far-L
Percent time to peak velocity (%) Interaction** Jump type × Object position
No-Obj Near-L Far-L Far-R Near-R F
No jump 47.44 47.17 47.44 47.39 47.50 ns
Jump-L 31.44 31.72 32.33 32.39 32.72 ns
Jump-R 40.72 41.28 42.50 39.89 31.11 ** Near-R < Rest; Far-R < Near-L
Correction latency (ms)
No-Obj Near-L Far-L Far-R Near-R F
Jump-L 299.44 317.78 309.44 310.55 313.89 ns
Jump-R 268.89 266.67 269.44 278.89 286.11 ns
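The Greenhouse–Geisser correction applied to the F-tests above scales both degrees of freedom by an epsilon estimated from the subject-by-condition data. A minimal sketch of that estimate (the function name and the use of NumPy are assumptions; the authors' statistics software is not stated):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a (n_subjects, k_conditions) array.

    Epsilon estimates how badly the sphericity assumption of a
    repeated-measures ANOVA is violated; both F-test degrees of
    freedom are multiplied by it before the p-value is computed.
    It ranges from 1/(k-1) (maximal violation) to 1 (sphericity holds).
    """
    k = data.shape[1]
    s = np.cov(data, rowvar=False)                # condition covariance
    s_dc = (s - s.mean(axis=0, keepdims=True)     # double-center
              - s.mean(axis=1, keepdims=True) + s.mean())
    return np.trace(s_dc) ** 2 / ((k - 1) * np.sum(s_dc ** 2))
```

With five object-position conditions (k = 5), an epsilon well below 1 would shrink the nominal df of (4, 68) accordingly, making the significance thresholds in the table more conservative.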
Supplementary materials