In their day-to-day activities, human beings constantly generate behavior, such as pointing, grasping or verbal reports, on the basis of visible target locations. This raises the question of how the brain represents target locations. One possibility is that the brain represents them metrically, i.e. in terms of distance and direction. Another, equally plausible, possibility is that the brain represents locations non-metrically, using for example ordered geometry or topology. Here we report two experiments that were designed to test whether the brain represents locations metrically or non-metrically. We measured the accuracy and variability of visually guided reach-to-point movements (Experiment 1) and probe-stimulus adjustments (Experiment 2). The specific procedure of informing subjects about the relevant response on each trial enabled us to dissociate the use of non-metric target location from the use of metric distance and direction in head/eye-centered, hand-centered and externally defined (allocentric) coordinates. The behavioral data show that subjects' responses are least variable when they can direct their response at a visible target location, the only condition in our experiments that permitted the use of non-metric information about target location. Data from Experiments 1 and 2 correspond well quantitatively. Response variability in non-metric conditions cannot be predicted from response variability in metric conditions. We conclude that the brain uses non-metric geometrical structure to represent locations.

^{1}Furthermore, motor actions that are directed at locations in the physical world, such as reaching, grasping, walking or saccadic eye movements, are typically metrically scaled. Thus, it seems natural to assume that the brain represents locations in a metric format and that this metric representation is used to generate various kinds of responses. If one assumes that the brain represents target location in a metric format, the question arises as to where that metric coordinate system is anchored. For example, distance and direction could be computed in egocentric coordinates with respect to the observer or the observer's body parts (i.e. eye, head, shoulder, hand) or in allocentric coordinates with respect to an external frame of reference. Much research has addressed the question of which coordinate system the brain uses to compute target distance and direction and how the different coordinate systems interact at the behavioral and neural levels (e.g. Andersen, Snyder, Bradley, & Xing, 1997; Colby & Goldberg, 1999; McGuire & Sabes, 2009; Snyder, Grieve, Brotchie, & Andersen, 1998; Sober & Sabes, 2005; Soechting & Flanders, 1992; Thaler & Todd, 2009a, 2009b). Yet, remarkably, nobody has tested the fundamental assumption that the brain uses metric structure (i.e. distance and direction) to represent locations.

*Movement Magnitude* was computed as the length of that line, and movement direction as its angular orientation. For each movement, we could then compute the *Movement Direction Error* as the angular deviation between the response direction and the movement direction. To assess systematic deviations of the responses from the visually specified target magnitude and direction, we computed average movement magnitude and average movement direction error. To assess variability of performance, we computed standard deviations (*SD*) of movement magnitude and of movement direction error for each subject. For the direction data, we computed both linear and circular statistics (Fisher, 1993). Since differences between linear and circular statistics were very small (maximum absolute deviation between measures: 0.0017°), we report linear statistics only.

To characterize *Distributions of Movement Endpoints* across subjects, we fit minimum variance ellipses to the endpoints of all subjects' hand movements for each target magnitude and presentation condition (Gordon et al., 1994; van Beers, Haggard, & Wolpert, 2004). To remove any contribution of individual differences to this measure, we subtracted each subject's mean endpoint from that subject's individual endpoints. The shape and orientation of each ellipse are determined by the eigenvalues *λ* and the eigenvectors of the 2 × 2 sample covariance matrix *R*, whose elements are given by:

*R*_{jk} = 1/(*n* − 1) · Σ_{i=1…n} *δ*_{ij} · *δ*_{ik},

where *δ*_{ij} = *x*_{ij} − x̄_{j} is the deviation of the endpoint of movement *i* along one of two orthogonal axes (rows and columns j, k ⊂ {x, y}) and *n* is the number of trials. The square roots of the eigenvalues correspond to the standard deviations of movements along the axes specified by the associated eigenvectors. The aspect ratio of the ellipse is equal to the ratio of the square roots of the two eigenvalues, i.e. √*λ*_{1}/√*λ*_{2}, and the overall *SD* of movements in the plane is equivalent to ellipse area: *A* = π · √(*λ*_{1} · *λ*_{2}).
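The eigen-decomposition described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' analysis code; the function name `endpoint_ellipse` is ours.

```python
import numpy as np

def endpoint_ellipse(endpoints):
    """Fit a minimum-variance ellipse to 2-D movement endpoints.

    endpoints: (n, 2) array of x/y endpoint positions. Returns the SDs
    along the two principal axes (major first), the aspect ratio, and
    the ellipse area pi * sqrt(lambda1 * lambda2).
    """
    d = endpoints - endpoints.mean(axis=0)   # deviations delta_i from the mean
    n = d.shape[0]
    R = d.T @ d / (n - 1)                    # 2x2 sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    sds = np.sqrt(eigvals[::-1])             # SD along each principal axis
    aspect_ratio = sds[0] / sds[1]           # ratio of sqrt-eigenvalues
    area = np.pi * sds[0] * sds[1]           # pi * sqrt(l1) * sqrt(l2)
    return sds, aspect_ratio, area
```

For isotropic scatter the aspect ratio approaches 1 (a round ellipse), which is the pattern the text describes for the farther targets in 'Head/Eye Centered' and 'Allocentric' conditions.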

We also examined *Kinematic Parameters* such as movement speed, duration and trajectory shape (van Beers et al., 2004). To determine if the shape of the movement trajectories differed across conditions, we quantified movement curvature by computing the absolute distance of each point on a movement trajectory to the straight line connecting the trajectory's start and end points, and by dividing the maximum absolute distance by the length of the straight line (Atkeson & Hollerbach, 1985). To express curvature values in percent, we multiplied this ratio by 100. A movement curvature of 0% corresponds to a straight-line trajectory, whereas a movement curvature of 50% would correspond to a half-circular trajectory. Average movement speed, peak movement speed and movement duration were computed by numerical differentiation of the smoothed movement trajectories (Butterworth filter with 7 Hz cutoff).
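The curvature measure of Atkeson and Hollerbach (1985), as described above, can be sketched as follows. This is an illustration under our own naming (`movement_curvature`), not the authors' code.

```python
import numpy as np

def movement_curvature(trajectory):
    """Curvature in percent: maximum perpendicular distance from the
    straight start-to-end line, divided by the length of that line,
    multiplied by 100.

    trajectory: (n, 2) array of sampled 2-D hand positions.
    """
    p0, p1 = trajectory[0], trajectory[-1]
    chord = p1 - p0
    chord_len = np.linalg.norm(chord)
    rel = trajectory - p0
    # perpendicular distance of each sample to the start-end line
    # (magnitude of the 2-D cross product, normalized by chord length)
    cross = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    dist = cross / chord_len
    return 100.0 * dist.max() / chord_len
```

A perfectly straight trajectory yields 0%, and a half-circle yields 50%, matching the reference values given in the text.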

We excluded outlier trials for each subject, target magnitude and presentation condition, where movement magnitude fell below the *25th percentile* − 2.5 · *iqr* or above the *75th percentile* + 2.5 · *iqr* (iqr = interquartile range). Using this method, which is robust in the presence of outliers, only 0.94% (n = 36) of all movements were rejected.
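The quartile-based rejection rule can be sketched as below. This is a minimal illustration; `iqr_outlier_mask` is our name, and NumPy's default percentile interpolation is assumed, which may differ slightly from the authors' software.

```python
import numpy as np

def iqr_outlier_mask(values, k=2.5):
    """Return a boolean mask of trials to KEEP under the quartile rule:
    reject values below Q1 - k*iqr or above Q3 + k*iqr."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
```

Because the bounds are built from quartiles rather than the mean and SD, extreme trials do not inflate the rejection threshold, which is why the method is robust in the presence of outliers.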

*Adjusted Magnitude* was computed as the overall magnitude that the probe dot was moved on each trial. The mean and standard deviation of these magnitudes were computed for each subject, target magnitude and presentation condition. We excluded outlier trials for each subject, target magnitude and presentation condition, where the adjusted magnitude fell below the *25th percentile* − 2.5 · *iqr* or above the *75th percentile* + 2.5 · *iqr* (iqr = interquartile range). Only 0.003% (n = 8) of responses were rejected.

We computed repeated measures ANOVAs on average and variable (*SD*) movement direction errors with 'presentation condition' and 'target magnitude' as factors. The analysis revealed a significant main effect of target magnitude on both average movement direction error (*F*(3,27) = 14.302; *p* = .0001) and the *SD* of movement direction error (*F*(3,27) = 4.841, *p* = .008). Neither the main effect of presentation condition nor the interaction was significant. Therefore, we averaged the constant errors (i.e., average movement direction error) and the variable errors (i.e., *SD* of movement direction error), respectively, across presentation conditions and plotted them as a function of target magnitude (Figure 4).

With regard to the *SD* of movement direction errors, Figure 4b shows that the *SD* of movement direction errors decreases with increasing target magnitude (compare Gordon et al., 1994; Messier & Kalaska, 1997; Thaler & Todd, 2009a). The overall effect is small (∼0.5°). In summary, subjects' movement direction errors were unaffected by the way visual information was presented. As outlined in the Methods section, this was expected, since response direction was specified in the same way across all presentation conditions.

The ANOVA revealed significant main effects of target magnitude (*F*(3,27) = 337.499, *p* < .0001) and presentation condition (*F*(3,27) = 11.062; *p* < .0001), and a significant interaction between these two factors (*F*(9,81) = 2.274; *p* = .025). These results confirm our impression that subjects' movements scale metrically with target magnitude in all presentation conditions, but that systematic over- and undershoots depend on the way visual information was presented to subjects as well as on the magnitude of the specified endpoint. We carried out a series of post-hoc *T*-tests between the average movement magnitude in the 'Endpoint' condition for each of the specified endpoints and the corresponding average movement magnitude for the specified endpoints in each of the other three presentation conditions. The threshold for significance for each test was chosen to be *p* = .05. Since we computed a total of twelve tests, the degrees of freedom for each test were adjusted using Tukey's HSD procedure in order to control for accumulation of Type-I error.^{2} As can be seen in Figure 5, only the movement magnitudes for the two farthest targets in the 'Head/Eye Centered' and 'Allocentric' presentation conditions differed significantly from those in the 'Endpoint' condition. In summary, average movement magnitude was equally accurate with respect to physical target magnitude in both 'Endpoint' and 'Hand Centered' conditions, but subjects tended to overshoot the two farthest target magnitudes in 'Head/Eye Centered' and 'Allocentric' conditions.

The *SD*s of movement magnitude are plotted as a function of average movement magnitude in the right-hand column of Figure 5. In agreement with the depiction of the data in Figure 3, the *SD* of movement magnitude is lowest in 'Endpoint' conditions. It is also evident that *SD* depends on movement magnitude, but that this relationship differs amongst presentation conditions. Specifically, *SD* increases proportionally with movement magnitude in both 'Endpoint' and 'Hand Centered' conditions, whereas *SD* decreases slightly with increasing movement magnitude in 'Head/Eye Centered' conditions, i.e. the slope is slightly negative. In 'Allocentric' conditions, *SD* increases at first, but drops for the farthest magnitude. The observation that the *SD* of movement magnitude does not increase proportionally with movement magnitude in 'Head/Eye Centered' and 'Allocentric' conditions is consistent with the fact that the ellipses in those conditions become rounder as movement magnitude increases (see Figure 3).

The *SD* of movement magnitude is expected to increase proportionally with movement magnitude, i.e. Fitts' law (Fitts, 1954). Since movements in 'Head/Eye Centered' and 'Allocentric' conditions were longer than movements in 'Endpoint' or 'Hand Centered' conditions, we would therefore expect the *SD*s of these movements to increase simply as a function of response magnitude. To eliminate movement magnitude as a potential confound, we used linear regression to remove the effects of movement magnitude (see Appendix 1 for computational details). The residual *SD* left after this analysis enabled us to determine those differences in *SD* that were free from effects of movement magnitude. Because residual *SD* is the difference between the *SD* observed in the data and the *SD* expected based on the linear relationship between *SD* and movement magnitude, residual *SD* can be negative (i.e. *SD* is lower than expected) or positive (i.e. *SD* is higher than expected). The sum of all residuals is always zero.

Averaged across target magnitudes and subjects, residual *SD* was −4.05 mm in 'Endpoint' conditions, 1.12 mm in 'Hand Centered' conditions, 1.49 mm in 'Head/Eye Centered' conditions, and 1.44 mm in 'Allocentric' conditions. To test for possible differences in residual *SD* among the four conditions, we computed *T*-tests for all possible pairwise comparisons. The threshold for significance was chosen to be *p* = .05, and degrees of freedom were adjusted using Tukey's HSD procedure (critical *t*_{05; HSD} = 3.12; critical *t*_{01; HSD} = 4.22). We found that residual *SD* in the 'Endpoint' conditions differed from residual *SD* in all other conditions (Hand Centered: *t*(9) = 4.3; Head/Eye Centered: *t*(9) = 4.37; Allocentric: *t*(9) = 6.01). No other comparisons were significant.

Differences in kinematic parameters across presentation conditions were assessed with paired *T*-tests (two-tailed). The threshold for significance was chosen to be *p* = .05, and degrees of freedom were adjusted using Tukey's HSD procedure (critical *t*_{05; HSD} = 3.12).

| | Endpoint | Hand Centered | Head/Eye Centered | Allocentric | Significant differences (p < .05) |
|---|---|---|---|---|---|
| Curvature (%) | 3.4 (1.3) | 3.3 (1) | 3.4 (1.2) | 3.1 (0.9) | – |
| Average Speed (cm/s) | 17 (4.7) | 16 (4.7) | 17.5 (5.6) | 16.5 (5.4) | Endpoint vs. Hand C. |
| Max. Speed (cm/s) | 34.7 (14.9) | 31.4 (13.1) | 33.8 (13.8) | 32.4 (15) | Endpoint vs. Hand C.; Endpoint vs. Allocentric |
| Duration (ms) | 843 (185) | 914 (209) | 988 (225) | 990 (244) | All comparisons, except: Head C. vs. Allocentric; Endpoint vs. Allocentric |

The ANOVA revealed significant main effects of target magnitude (*F*(3,27) = 233.93; *p* < .0001) and presentation condition (*F*(3,27) = 18.128; *p* < .0001). The interaction was not significant. Thus, just as was the case for average movement magnitudes, average adjusted magnitudes scale with target magnitude in all conditions, and systematic over- and under-adjustments vary as a function of the way in which visual information was presented to subjects. We carried out a series of post-hoc *T*-tests between the average adjusted magnitude in the 'Endpoint' condition for each of the specified endpoints and the corresponding average adjusted magnitude for the specified endpoints in each of the other three presentation conditions. Just as in Experiment 1, we chose the threshold for significance for each test to be *p* = .05 and adjusted the degrees of freedom for each test using Tukey's HSD procedure (for more details see Footnote 2). As can be seen in Figure 6, adjusted magnitudes in the 'Endpoint' condition differ significantly from those in the other conditions, except for the shortest target magnitude in 'Hand Centered' and 'Allocentric' conditions. In summary, average adjusted magnitude is most accurate in 'Endpoint' conditions, but subjects tend to over-adjust the physical target magnitude in 'Hand Centered', 'Head/Eye Centered', and 'Allocentric' conditions.

The *SD*s of adjusted magnitudes are plotted as a function of adjusted magnitude in the right-hand column of Figure 6. Just as for the reach-to-point data, it is immediately apparent that *SD* is lowest in 'Endpoint' conditions. It is also evident that *SD* depends on adjusted magnitude, but that this relationship differs amongst the presentation conditions. Direct visual comparison between Figures 5 and 6 reveals that the relationship between the *SD* of adjusted magnitude and average adjusted magnitude is strikingly similar to the relationship between the *SD* of movement magnitude and average movement magnitude. Specifically, just as for the reach-to-point data from Experiment 1, *SD* increases proportionally with adjusted magnitude in both 'Endpoint' and 'Hand Centered' conditions, whereas *SD* decreases slightly as adjusted magnitude increases in 'Head/Eye Centered' conditions (i.e. the slope is negative). In 'Allocentric' conditions, *SD* increases at first, but drops for the farthest magnitude.

The *SD* of adjusted magnitude is expected to increase proportionally with adjustment magnitude, i.e. Weber's law. To remove adjustment magnitude as a potential confound, we analyzed the *SD* of adjusted magnitude in the same way as the *SD* of movement magnitude, i.e. we used linear regression to remove the linear effects of adjusted magnitude on *SD* (see Appendix 1 for computational details). The residual *SD* left after this analysis enabled us to determine those differences in *SD* that were free from effects of adjustment magnitude. Because residual *SD* is the difference between the *SD* observed in the data and the *SD* expected based on the linear relationship between *SD* and adjustment magnitude, residual *SD* can be negative (i.e. *SD* is lower than expected) or positive (i.e. *SD* is higher than expected). The sum of all residuals is always zero.

Averaged across target magnitudes and subjects, residual *SD* was −2.36 mm in 'Endpoint' conditions, −0.28 mm in 'Hand Centered' conditions, 1.73 mm in 'Head/Eye Centered' conditions, and 0.9 mm in 'Allocentric' conditions. To test for possible differences in residual *SD* among the four conditions, we computed *T*-tests for all possible pairwise comparisons. The threshold for significance was chosen to be *p* = .05, and degrees of freedom were adjusted using Tukey's HSD procedure (critical *t*_{05; HSD} = 3.12; critical *t*_{01; HSD} = 4.22). We found that residual *SD* in 'Endpoint' conditions differed significantly from residual *SD* in 'Head/Eye Centered' and 'Allocentric' conditions (Head/Eye Centered: *t*(9) = 5.15; Allocentric: *t*(9) = 3.66). No other comparisons were significant. However, without HSD correction, the comparison between residual *SD* in 'Endpoint' and 'Hand Centered' conditions reached significance as well (*t*(9) = 2.46; *p* = .036).

The *SD* of reach-to-point movements is similar to the *SD* of probe stimulus adjustments, except in the 'Endpoint' conditions, in which the *SD* of the adjustments appears to be larger. To determine if the *SD*s from Experiments 1 and 2 differ significantly from one another, we compared the average *SD*s as well as the average residual *SD*s for each of the four presentation conditions across the two experiments using paired *T*-tests. Using Tukey's HSD procedure to adjust the degrees of freedom to account for multiple comparisons, none of the comparisons were significant at *p* = .05. Without HSD correction, the comparison between the average *SD* of reach-to-point movements and probe stimulus adjustments in 'Endpoint' conditions reached significance (*t*(9) = 2.77; *p* = .022), and the comparison between average residual *SD*s in 'Endpoint' conditions approached but did not reach significance (*t*(9) = 1.92; *p* = .087). None of the other comparisons were significant or showed even a tendency toward significance.

The observation that *SD* had a tendency to be higher for probe-dot adjustments than for reach-to-point movements only in 'Endpoint' conditions could mean that the saccadic eye movements had a larger impact on responses in 'Endpoint' conditions than on responses in the other presentation conditions.

| | | Subject 1 | Subject 2 | Subject 3 | Subject 4 | Subject 5 | Subject 6 | Subject 7 | Subject 8 | Subject 9 | Subject 10 | Group |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Magnitude | n = 16, N = 160 | 0.87 | 0.99 | 0.95 | 0.92 | 0.96 | 0.96 | 0.89 | 0.9 | 0.94 | 0.94 | 0.8*** |
| Residual Magn. | n = 16, N = 160 | 0.29 | 0.87 | 0.72 | 0.46 | 0.18 | 0.73 | 0.69 | 0.82 | 0.2 | 0.41 | 0.61*** |
| SD | n = 16, N = 160 | 0.12 | 0.8 | 0.65 | 0.6 | −0.2 | 0.54 | 0.48 | 0.61 | 0.35 | 0.19 | 0.32*** |
| Residual SD | n = 16, N = 160 | 0.27 | 0.77 | 0.63 | 0.55 | −0.2 | 0.66 | 0.2 | 0.48 | 0.32 | 0.15 | 0.35*** |
| Average Res. SD | n = 4, N = 40 | 0.4 | 0.82 | 0.95 | 0.88 | 0.08 | 0.92 | 0.31 | 0.37 | 0.3 | 0.69 | 0.47** |

The comparison between residual *SD* in 'Hand Centered' and 'Endpoint' conditions in the probe stimulus adjustments reached significance only without HSD correction. The otherwise good agreement between probe stimulus adjustments and reach-to-point movements is striking, especially since the two tasks differed in a number of other respects. First, to generate a response in the reach-to-point task in Experiment 1, subjects invoke a multitude of steps involved in reach planning and control that recruit visual and proprioceptive feedback and feed-forward mechanisms (e.g. Desmurget, Pelisson, Rossetti, & Prablanc, 1998; Kawato, 1999; Wolpert & Ghahramani, 2000). Except for the processing of the relevant visual information in the two kinds of tasks, we do not see how the same steps that are involved in reach planning and control in Experiment 1 could be involved in generating the button presses in Experiment 2. It follows that the differences between presentation conditions that we observed in both Experiments 1 and 2 are independent of the way a response was generated. Second, the adjustment task required a response in only one dimension (adjusted magnitude), whereas the reach-to-point task required a response in two dimensions (movement direction and movement magnitude). The fact that we find the same systematic differences amongst the four conditions with regard to both adjusted and movement magnitude highlights that performance differences amongst the presentation conditions are independent of the dimensionality of the response. Finally, the adjustment task required subjects to move their eyes between the presentation of the target and the generation of the response, whereas the reach-to-point task did not. Even though *SD* in 'Endpoint' conditions appears to have a tendency to be higher for probe-dot adjustments than for reach-to-point movements, the differences amongst the four presentation conditions are nevertheless strikingly similar between Experiments 1 and 2. This finding suggests that *SD* differences amongst presentation conditions are present regardless of whether subjects make eye movements or not, and that saccadic eye movements may add more variability to responses in 'Endpoint' conditions than to responses in the other three presentation conditions.

^{3}According to the MLE scheme, the variance of the combined estimate *σ*_{ab}^{2} can be obtained from the variances of the individual estimates *σ*_{a}^{2} and *σ*_{b}^{2} using Equation 3:

*σ*_{ab}^{2} = (*σ*_{a}^{2} · *σ*_{b}^{2}) / (*σ*_{a}^{2} + *σ*_{b}^{2})    (3)

Applied to our experiments, if the metric model is correct, the variance in 'Endpoint' conditions, *σ*_{Endpoint}^{2}, should be predictable from the individual variances in 'Head/Eye Centered' and 'Hand Centered' conditions, *σ*_{Head/Eye}^{2} and *σ*_{Hand}^{2}, using Equation 4:

*σ*_{Endpoint}^{2} = (*σ*_{Head/Eye}^{2} · *σ*_{Hand}^{2}) / (*σ*_{Head/Eye}^{2} + *σ*_{Hand}^{2})    (4)
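The variance combination in Equations 3 and 4 amounts to a one-line function. A minimal sketch (the function name is ours):

```python
def mle_combined_sd(sd_a, sd_b):
    """Equations 3/4: SD of the MLE combination of two independent cues.

    Combines the variances sd_a**2 and sd_b**2 and returns the SD of
    the combined estimate, which is always below the smaller input SD.
    """
    va, vb = sd_a ** 2, sd_b ** 2
    return (va * vb / (va + vb)) ** 0.5
```

For example, combining cues with SDs of 3 and 4 yields a combined SD of 2.4, smaller than either input, which is the key property the prediction for 'Endpoint' conditions rests on.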

We can estimate *σ*_{Head/Eye} and *σ*_{Hand} using the empirically observed *SD* in 'Head/Eye Centered' and 'Hand Centered' conditions, *SD*_{Head/Eye} and *SD*_{Hand}. In the simplest case, we can then compute a prediction of the *SD* in 'Endpoint' conditions, i.e. predicted *SD*_{Endpoint}, by substituting *SD*_{Head/Eye} and *SD*_{Hand} into Equation 4 for each target magnitude and subject separately. In a next step, we can then compare observed *SD*_{Endpoint} to predicted *SD*_{Endpoint}. However, a prediction based on *SD*_{Head/Eye} and *SD*_{Hand} might be considered inappropriate, because *SD*_{Head/Eye} and *SD*_{Hand} were observed in response to different response magnitudes than *SD*_{Endpoint}. Thus, a more appropriate prediction of *SD*_{Endpoint} might be obtained by substituting *SD*_{Head/Eye_MR} and *SD*_{Hand_MR} into Equation 4, where *SD*_{Head/Eye_MR} and *SD*_{Hand_MR} are the *SD*s that are *expected* in 'Head/Eye Centered' and 'Hand Centered' conditions for responses of the same magnitude as those observed in 'Endpoint' conditions. Accordingly, we computed 'magnitude corrected' *SD*_{Head/Eye_MR} and *SD*_{Hand_MR} using both linear and quadratic magnitude correction functions (see Appendix 2 for computational details) and substituted these estimates into Equation 4 in order to compute predicted *SD*_{Endpoint} in a way that takes differences in response magnitude into account.

A finding that predicted *SD*_{Endpoint} matches observed *SD*_{Endpoint} would be consistent with the idea that the brain uses a combination of the metric information provided in 'Head/Eye Centered' and 'Hand Centered' conditions to perform in 'Endpoint' conditions. However, if predicted *SD*_{Endpoint} does not match observed *SD*_{Endpoint}, it would seem that the brain uses information in 'Endpoint' conditions that is not captured by metric distance and direction. In other words, the information would be non-metric. Of course, if the prediction fails one could also question the general validity of the MLE model. But given the current evidence about the way the brain might integrate different kinds of visual information (Knill & Pouget, 2004), MLE appears to be a suitable framework for testing the metric model in the context of our experiments.

When substituting either *SD*_{Head/Eye} and *SD*_{Hand} or the magnitude-corrected *SD*_{Head/Eye_MR} and *SD*_{Hand_MR} into Equation 4 in order to compute predicted *SD*_{Endpoint}, we assume that all variability in responses is due to the underlying representation. It has been argued, however, that motor noise associated with moving the hand is an additional and independent source of response variability (van Beers et al., 2004). In fact, our own work suggests that motor noise contributes ∼40% to overall variability in the kinds of hand movements used in the current experiments, i.e. comparable speed, duration, etc. (Thaler & Todd, 2009b). In order to test the influence of motor noise on our prediction in Experiment 1, we also implemented a metric MLE model that assumes 40% motor noise (see Appendix 3 for computational details).

The two columns on the left-hand side show the prediction error, i.e. the difference between observed and predicted *SD* in 'Endpoint' conditions, with and without 40% motor noise, respectively. Error bars denote 95% confidence intervals around the mean prediction error. The remaining three columns on the right-hand side show the data used to obtain the prediction error. In these plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because confidence intervals were computed based on the variability of the difference between observed and predicted *SD*, whereas standard errors were computed based on the variability of observed and predicted *SD*. Observed data and magnitude correction functions (both averaged across subjects) are plotted in black. Please note that the observed data are replotted from Figure 3 and are the same in all rows. Red crosses denote the *SD* values that were substituted for *σ*_{Head/Eye} and *σ*_{Hand} in Equation 4, also averaged across subjects. Blue and red circles denote predicted *SD*_{Endpoint} for the 'representation model' with and without 40% motor noise, respectively, also averaged across subjects. Figure 8 shows the data for Experiment 2 plotted in the same format as Figure 7, except that we only plotted predictions for the 'representation model', because there is no motor noise model for probe-dot adjustments.

The metric model 'over-predicts' in Experiment 1, i.e. predicted *SD*_{Endpoint} > observed *SD*_{Endpoint}, and 'under-predicts' in Experiment 2, i.e. predicted *SD*_{Endpoint} < observed *SD*_{Endpoint}. We think that a likely explanation of this result is that eye movements introduced additional noise into the responses in 'Endpoint' conditions in Experiment 2 and that this noise was absent in Experiment 1. This interpretation is also consistent with the finding that *SD* in 'Endpoint' conditions has a tendency to be lower in Experiment 1 than in Experiment 2, whereas *SD*s in the three metric conditions do not show a tendency to differ between Experiments 1 and 2 (compare the 'Direct comparison between reach-to-point movements (Experiment 1) and probe stimulus adjustments (Experiment 2)' section).

One could argue that distance *d* and direction *ϕ* are not sufficient to generate a response in metric conditions, but that they have to be transformed into a new anchor position *P*′ first. Since the position *P* is already given in 'Endpoint' conditions, a transformation is not required, and variance is lowest. It is important to realize when raising this argument, however, that existing metric models cannot produce responses based on position *P* alone. In fact, the only way that current metric models generate a response based on position *P* is that they transform *P* into distance *d* and orientation *ϕ*, either with respect to the eye, head, hand or some other origin, and it is for this reason that *P* is always represented in a metric Cartesian or spherical coordinate system (e.g. Blohm, Keith, & Crawford, 2009; Buneo & Andersen, 2006; Flanders, Helms-Tillery, & Soechting, 1992; Guenther, Bullock, Greve, & Grossberg, 1994; Rosenbaum, Loukopoulos, Meulenbroek, Vaughan, & Engelbrecht, 1995; Snyder, 2000; Soechting & Flanders, 1989a, 1989b; van Pelt & Medendorp, 2008; for reviews see, for example, Desmurget & Grafton, 2000; Desmurget et al., 1998; Lacquaniti & Caminiti, 1998; Todorov, 2004). Thus, if one wants to 'rescue' the metric model by raising a transformation argument, one also has to explain why metric models should need a new anchor point *P*′ in order to compute distance *d* and orientation *ϕ* in metric conditions, and why *d* and *ϕ* as they are provided in our metric conditions cannot be used directly to generate a response.

To account for our results with McGuire and Sabes' (2009) model, one would have to *arbitrarily* choose a higher value for the (currently) free parameter 'visual target variance' in the metric conditions compared to the 'Endpoint' conditions in our experiment. (The same would hold for the choice of parameters for the prior distributions if one were to use these to explain our results.) In summary, our results (and our argument) address a variable in McGuire and Sabes' model that is currently a free parameter and something of a 'black box'. Thus, even though McGuire and Sabes' (2009) model is not inconsistent with our results, it does not predict them *a priori*. It follows that our results highlight those parts of the model that are currently underspecified and that would have to be extended in order to deal with them.

*SD*s in these tasks were reasonably low. In fact, in Experiment 1 it is impossible to tell from the movement kinematics which condition subjects were performing. As mentioned earlier, metric response scaling in metric conditions shows that subjects can represent metric visual information, possibly in combination with non-metric representations. This interpretation is consistent with work suggesting that the human brain uses both non-metric and metric representations to navigate large-scale environments (Foo et al., 2005). Taken together, the results suggest that the computational processes used by the visuomotor system are quite flexible. Current and future models of visuomotor planning should be equally flexible, and the four conditions used in our experiments could provide a useful yardstick for testing the validity of such models (see also our discussion of the relationship of our results to existing models in the 'Ruling out potential alternative explanations of our results' section). The idea of a computationally flexible visuomotor system is not new (Desmurget et al., 1998; McGuire & Sabes, 2009; Sober & Sabes, 2005; Todorov & Jordan, 2002), but it is new to suggest that the way visual information is presented may affect how movements are planned and controlled (see also Thaler & Todd, 2009b).

Here we describe the computations used to remove the linear effects of response magnitude on the *SD*s of the response magnitude. For brevity, we describe this procedure only for the reaching responses (Experiment 1); the effects on the *SD*s of the button-pressing responses (Experiment 2) can be obtained by making the appropriate substitutions. Similarly, the computations for removing the linear effects of target distance on both the reaching and the button-pressing responses can be obtained by substituting the appropriate values in the equations.

In a first step, we used linear regression to predict the *SD* of movement magnitude from movement magnitude across presentation conditions and target magnitudes for each subject separately. This linear function has the form ŝ_{ijk} = *a*_{i} + *b*_{i} · *d*_{ijk}, where ŝ_{ijk} is the predicted *SD* of movement magnitude for a particular subject *i*, presentation condition *j* and target magnitude *k*, *d*_{ijk} is the movement magnitude for that subject, presentation condition and target magnitude, and *a*_{i} and *b*_{i} are subject-specific intercept and slope parameters. In a second step, we computed the residual *r*_{S ijk} = *s*_{ijk} − ŝ_{ijk}, which is the difference between the observed *SD* of movement magnitude, *s*_{ijk}, and the predicted *SD* of movement magnitude, ŝ_{ijk}, for a particular presentation condition, target magnitude, and subject. It follows that the residual *r*_{S ijk} is the amount of observed *SD* of movement magnitude that is independent of the linear effects of movement magnitude. The average residual for each subject and presentation condition was obtained by averaging residuals across distances for a particular presentation condition and subject, i.e. *r*_{S ij} = (1/*n*) · Σ_{k} *r*_{S ijk}, where *n* is the number of distances per presentation condition, which was four in our experiment. The average residual for each presentation condition can be negative or positive. In contrast, the average residual across presentation conditions, i.e. *r*_{S i} = (1/*m*) · Σ_{j} *r*_{S ij}, where *m* is the number of presentation conditions, is by definition always zero. It follows that removing linear effects of movement magnitude also removes subject-specific biases in the *SD* of movement magnitude.
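The two-step residual computation described in this appendix can be sketched for one subject's data as follows. This is an illustration; the array layout and the name `residual_sds` are our assumptions.

```python
import numpy as np

def residual_sds(sd, magnitude):
    """Appendix 1 sketch: regress SD of movement magnitude on movement
    magnitude (one fit per subject, pooling all conditions and targets)
    and return the residuals r = s - s_hat.

    sd, magnitude: arrays of shape (n_conditions, n_targets) for ONE subject.
    """
    x = magnitude.ravel()
    y = sd.ravel()
    b, a = np.polyfit(x, y, 1)       # subject-specific slope b_i, intercept a_i
    resid = y - (a + b * x)          # r_Sijk = s_ijk - s_hat_ijk
    return resid.reshape(sd.shape)
```

Because an ordinary least-squares fit with an intercept has residuals that sum to zero, the grand mean residual per subject vanishes by construction, which is why the procedure also removes subject-specific biases, as noted above.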

Here we describe the computation of the magnitude-corrected *SD*s that were substituted into Equation 4. For brevity, we only describe the procedure for the *SD* of movement magnitude for the reaching response (Experiment 1). Computations for the *SD* of the adjusted (button-pressing) magnitude can be obtained by making the appropriate substitutions.

*SD* of magnitude based on response magnitude for each subject and presentation condition separately. This linear function has the form ŝ_{ijk} = *a*_{ij} + *b*_{ij}·*d*_{ijk}, where ŝ_{ijk} is the predicted *SD* of movement magnitude for a particular subject *i*, presentation condition *j* and target magnitude *k*, *d*_{ijk} is the movement magnitude for a particular subject, presentation condition and target magnitude, and *a*_{ij} and *b*_{ij} are subject- and presentation-condition-specific coefficients of the linear polynomial, i.e. intercept and slope parameters. To obtain magnitude-corrected *SD* for ‘Head/Eye Centered’ and ‘Hand Centered’ conditions, *SD*_{Head/Eye_MR} and *SD*_{Hand_MR}, we substituted the movement distances obtained in ‘Endpoint’ conditions, *d*_{i Endpoint k}, into the equations that predict ŝ_{ijk} for ‘Head/Eye Centered’ and ‘Hand Centered’ conditions. To obtain the MLE prediction, *SD*_{Head/Eye_MR} and *SD*_{Hand_MR} were then substituted in Equation 4. Predictions were computed for each subject and target magnitude separately.
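If Equation 4 is the standard two-cue maximum-likelihood variance combination (an assumption on our part; the equation itself is not reproduced in this excerpt), the substitution and prediction steps can be sketched as:

```python
import numpy as np

def mle_sd(sd1, sd2):
    # Two-cue MLE combination: combined variance is the product of the
    # single-cue variances over their sum (assumed form of Equation 4).
    v1, v2 = sd1 ** 2, sd2 ** 2
    return np.sqrt(v1 * v2 / (v1 + v2))

def corrected_sd(a, b, d_endpoint):
    # Evaluate one condition's linear fit s_hat = a + b*d at the movement
    # distances observed in the 'Endpoint' condition.
    return a + b * np.asarray(d_endpoint)

# Hypothetical per-condition fit coefficients and Endpoint distances.
d_end = np.array([10.0, 20.0, 30.0, 40.0])
sd_head_eye_mr = corrected_sd(0.4, 0.09, d_end)
sd_hand_mr = corrected_sd(0.6, 0.07, d_end)
sd_mle = mle_sd(sd_head_eye_mr, sd_hand_mr)
```

A property worth noting: the combined *SD* is always smaller than either input *SD*, which is what makes the MLE prediction a demanding test.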

*SD* of magnitude based on response magnitude for each subject and presentation condition. This quadratic function has the form ŝ_{ijk} = *a*_{ij} + *b*_{ij}·*d*_{ijk} + *c*_{ij}·*d*_{ijk}^{2}, where ŝ_{ijk} is the predicted *SD* of movement magnitude for a particular subject *i*, presentation condition *j* and target magnitude *k*, *d*_{ijk} is the movement magnitude for a particular subject, presentation condition and target magnitude, and *a*_{ij}, *b*_{ij} and *c*_{ij} are subject- and presentation-condition-specific coefficients of the polynomial. The remaining computations are identical to the linear case.
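The quadratic variant differs from the linear one only in the fitting step; in Python it is essentially a one-line change (a sketch with synthetic numbers, not the paper's data):

```python
import numpy as np

# Fit s_hat = a + b*d + c*d^2 for one subject/condition (synthetic data).
d = np.array([10.0, 20.0, 30.0, 40.0])       # movement magnitudes
s = 0.3 + 0.05 * d + 0.001 * d ** 2          # observed SDs (synthetic)

coeffs = np.polyfit(d, s, 2)                 # returns [c, b, a], highest power first
d_endpoint = np.array([12.0, 25.0, 38.0])    # hypothetical 'Endpoint' distances
s_hat = np.polyval(coeffs, d_endpoint)       # magnitude-corrected SD prediction
```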

*σ*^{2} = *σ*_{Representation}^{2} + *σ*_{Motor}^{2}. We have shown previously that *SD* in ‘Endpoint’ conditions can be used to estimate motor noise, such that *σ*_{Motor}^{2} = *k*(*σ*_{Endpoint}^{2}), where *k* denotes the proportion of motor noise to overall movement variability (Thaler & Todd, 2009b). In the current experiments, we can estimate *σ*_{Endpoint}^{2} using *SD*_{Endpoint}. It follows that the simplest estimate of motor noise *σ*_{Motor}^{2} for each target magnitude can be obtained using Equation C1: *σ*_{Motor}^{2} = *k*(*SD*_{Endpoint}^{2}). We used *k* = 0.4 for our simulations. To obtain an estimate of representation noise *σ*_{Representation}^{2} in our experiment, we can then simply subtract *σ*_{Motor}^{2} from *SD*^{2} for each target magnitude. To generate MLE predictions for our experiments we therefore used ‘Endpoint’ conditions to estimate *σ*_{Motor}^{2} for each target magnitude and subject. We then subtracted *σ*_{Motor}^{2} from *SD*_{Head/Eye}^{2} and *SD*_{Hand}^{2} for each target magnitude and subject and substituted the remainder into Equation 4 to yield the metric MLE prediction, *σ*_{Representation_Endpoint}^{2}. To obtain the MLE prediction + motor noise, *σ*_{Motor}^{2} was added to *σ*_{Representation_Endpoint}^{2} for each target magnitude and subject, based on *d*_{Endpoint} and *SD*_{Endpoint}.
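Putting these pieces together, the noise decomposition reads as follows (our sketch; the standard two-cue MLE form is assumed for Equation 4, and *k* = 0.4 follows the text):

```python
import numpy as np

K = 0.4  # proportion of motor noise in overall movement variability (from the text)

def mle_plus_motor_noise(sd_endpoint, sd_head_eye, sd_hand):
    var_motor = K * sd_endpoint ** 2          # Equation C1
    rep_he = sd_head_eye ** 2 - var_motor     # representation noise, Head/Eye
    rep_hand = sd_hand ** 2 - var_motor       # representation noise, Hand
    # Assumed form of Equation 4: two-cue MLE combination of the
    # representation variances.
    rep_mle = rep_he * rep_hand / (rep_he + rep_hand)
    return np.sqrt(rep_mle + var_motor)       # add motor noise back

# Worked example with hypothetical SDs.
sd_pred = mle_plus_motor_noise(1.0, np.sqrt(1.4), np.sqrt(1.4))
```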

*SD* based on movement magnitude in ‘Endpoint’ conditions for each target magnitude and subject (see previous sections for details on linear magnitude correction functions). To predict magnitude-corrected motor noise for ‘Head/Eye Centered’ and ‘Hand Centered’ conditions, we then substituted movement magnitudes observed in ‘Head/Eye Centered’ and ‘Hand Centered’ conditions into the linear magnitude correction function obtained for ‘Endpoint’ conditions and substituted the result of this prediction into Equation C1. The result of these computations is the amount of motor noise that is expected for the movement magnitudes observed in ‘Head/Eye Centered’ and ‘Hand Centered’ conditions. The remaining computations are identical to those for non-magnitude-corrected motor noise. We used non-magnitude-corrected motor noise in combination with the non-magnitude-corrected MLE prediction. For linear and quadratic magnitude-corrected MLE predictions, we used linear magnitude-corrected motor noise.

^{1}In the current paper, we use the term ‘metric’ to refer to quantitative distance and direction. In the mathematical literature, a metric geometry is defined as a set of points and a distance function *d*(*x*, *y*), which defines the distance between any two points *x* and *y* in the set and which satisfies the three metric axioms of (1) isolation, i.e. *d*(*x*, *y*) = 0 if and only if *x* = *y*, (2) symmetry, i.e. *d*(*x*, *y*) = *d*(*y*, *x*), and (3) the triangle inequality, i.e. *d*(*x*, *y*) + *d*(*y*, *z*) ≥ *d*(*x*, *z*) (Coxeter, 1969). This definition implies that metric geometries permit the computation of quantitative distance and direction. Thus, in the current paper we use the term ‘metric’ differently from how it is used in the mathematical literature, but the way we use it is consistent with the mathematical definition.

^{2}To adjust the degrees of freedom using Tukey's HSD procedure, we chose df = 9 and k = 6, where k is the number of means to be compared. The reason for choosing k = 6 instead of k = 16, which is the actual number of groups in our experiment, was that we computed only 12 out of all 120 possible post-hoc comparisons. Given a fixed number of groups k, Tukey's HSD test corrects degrees of freedom based on the assumption that all possible comparisons between the k groups are going to be computed, i.e. the number of comparisons is assumed to be k(k − 1)/2. Thus, if we had chosen k = 16, the number of groups that we actually have in our experiment, we would have corrected the degrees of freedom assuming 120 comparisons, which would make our test very conservative. Choosing k = 6 corrects the degrees of freedom assuming 15 comparisons, which makes our test only slightly more conservative than necessary.
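The comparison counts in this footnote follow directly from k(k − 1)/2:

```python
def n_pairwise(k):
    # Number of pairwise comparisons Tukey's HSD assumes for k groups.
    return k * (k - 1) // 2

# k = 16 groups -> 120 assumed comparisons; k = 6 -> 15, just above the
# 12 comparisons actually computed.
```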

^{3}The reader might wonder why we predict the variance of response magnitude, but not the bias. The reason for concentrating on the variance is that a prediction of bias would be an unfair test of the metric model, because bias of response magnitude depends on the direction in which visual information is specified (Thaler & Todd, 2009a). Since this direction differs between metric and endpoint conditions in our experiments, metric conditions would be expected to show different biases in response magnitude than endpoint conditions, and this is what we observe in our data. It follows that a prediction of bias in endpoint conditions based on bias in the metric conditions would fail. However, these differences in bias (and therefore the failure of the metric prediction) would be due to differences in the orientation in which the visual information is specified, not to the representation used in the endpoint condition being non-metric. Fortunately, we have shown previously that variance of response magnitude does not depend on response direction (Thaler & Todd, 2009a). Therefore, prediction of variance provides a fair test of the metric model in our experiments.

*Annual Review of Neuroscience*, 20, 303–330.
*Proceedings of the 5th International Conference on Computer Vision, Boston* (pp. 58–65). IEEE Computer Society Press.
*Cerebral Cortex*, 19, 1372–1393.
*Journal of Vision*, 8(16):3, 1–23, http://journalofvision.org/8/16/3/, doi:10.1167/8.16.3.
*Experimental Brain Research*, 64, 476–482.
*Spatial Vision*, 15, 393–414.
*Neuropsychologia*, 44, 2594–2606.
*Annual Review of Neuroscience*, 22, 319–349.
*Introduction to geometry*. New York: John Wiley & Sons.
*Experimental Brain Research*, 84, 434–438.
*Experimental Brain Research*, 55, 56–62.
*Trends in Cognitive Sciences*, 4, 423–431.
*Neuroscience and Biobehavioral Reviews*, 22, 761–788.
*Nature*, 415, 429–433.
*Journal of the Optical Society of America A*, 12, 465–484.
*Biophysics*, 11, 766–775.
(*λ* model) for motor control. *Journal of Motor Behavior*, 18, 17–54.
*Progress in motor control—A multidisciplinary perspective* (pp. 699–726). New York: Springer.
*Statistical analysis of circular data*. New York, NY: Cambridge University Press.
*Journal of Experimental Psychology*, 47, 381–391.
*Behavioral and Brain Sciences*, 15, 309–362.
*Journal of Experimental Psychology: Learning, Memory, and Cognition*, 31, 195–215.
*Journal of Experimental Psychology: Human Perception and Performance*, 27, 1124–1144.
*Journal of Vision*, 7(5):11, 1–12, http://journalofvision.org/7/5/11/, doi:10.1167/7.5.11.
*Progress in Neurobiology*, 77, 215–251.
*Trends in Neurosciences*, 15, 20–25.
*Experimental Brain Research*, 99, 97–111.
*Journal of Cognitive Neuroscience*, 6, 341–358.
*Current Opinion in Neurobiology*, 9, 718–727.
*Journal of Neurophysiology*, 98, 1075–1082.
*Trends in Neurosciences*, 27, 712–719.
*Journal of the Optical Society of America A*, 8, 377–385.
*European Journal of Neuroscience*, 10, 195–203.
*Nature Neuroscience*, 12, 1056–1061.
*Experimental Brain Research*, 115, 469–478.
*The visual brain in action*. Oxford: Oxford University Press.
*Neuropsychologia*, 46, 774–785.
*Neural Networks*, 2, 159–168.
*Visual navigation: From biological systems to unmanned ground vehicles, advances in computer vision vol. II* (pp. 898–134). Mahwah, New Jersey, USA: Lawrence Erlbaum Associates.
*Psychological Review*, 102, 28–67.
*Journal of Neurophysiology*, 97, 4203–4214.
*Current Opinion in Neurobiology*, 10, 747–754.
*Nature*, 394, 887–891.
*Journal of Neurophysiology*, 62, 595–608.
*Journal of Neurophysiology*, 62, 582–594.
*Annual Review of Neuroscience*, 15, 167–191.
*Neuroscience*, 159, 578–598.
*Neuropsychologia*, 47, 1227–1244.
*Trends in Cognitive Sciences*, 8, 115–121.
*Perception & Psychophysics*, 65, 31–47.
*Nature Neuroscience*, 7, 907–915.
*Nature Neuroscience*, 5, 1226–1235.
*Analysis of visual behavior* (pp. 549–586). Cambridge, MA: MIT Press.
*Journal of Neurophysiology*, 91, 1050–1063.
*Journal of Neurophysiology*, 99, 2281–2290.
*Nature Neuroscience*, 3, 1212–1217.