Research Article  |   March 2010
Beyond distance and direction: The brain represents target locations non-metrically
Lore Thaler, Melvyn A. Goodale
Journal of Vision March 2010, Vol.10, 3. doi:https://doi.org/10.1167/10.3.3
Abstract

In their day-to-day activities, human beings are constantly generating behavior, such as pointing, grasping or verbal reports, on the basis of visible target locations. The question arises as to how the brain represents target locations. One possibility is that the brain represents them metrically, i.e. in terms of distance and direction. Another, equally plausible, possibility is that the brain represents locations non-metrically, using for example ordered geometry or topology. Here we report two experiments that were designed to test whether the brain represents locations metrically or non-metrically. We measured accuracy and variability of visually guided reach-to-point movements (Experiment 1) and probe-stimulus adjustments (Experiment 2). The specific procedure of informing subjects about the relevant response on each trial enabled us to dissociate the use of non-metric target location from the use of metric distance and direction in head/eye-centered, hand-centered and externally defined (allocentric) coordinates. The behavioral data show that subjects' responses are least variable when they can direct their response at a visible target location, the only condition that permitted the use of non-metric information about target location in our experiments. Data from Experiments 1 and 2 correspond well quantitatively. Response variability in non-metric conditions cannot be predicted based on response variability in metric conditions. We conclude that the brain uses non-metric geometrical structure to represent locations.

Introduction
Every day we use vision to locate objects and to generate appropriate responses, such as reaching out to grasp them or communicating their locations to other people. But how does our brain represent the location of objects and how does it use that representation to generate different responses? 
Our world is well described in terms of metric geometry, i.e. in terms of distance and direction. 1 Furthermore, motor actions that are directed at locations in the physical world, such as reaching, grasping, walking or saccadic eye movements, are typically metrically scaled. Thus, it seems natural to assume that our brain would represent locations in a metric format and that this metric representation is used to generate various kinds of responses. If one assumes that the brain represents target location in a metric format, then the question arises as to where that metric coordinate system is anchored. For example, distance and direction could be computed in egocentric coordinates with respect to the observer or the observer's body parts (i.e. eye, head, shoulder, hand) or in allocentric coordinates with respect to an external frame of reference. Much research has addressed the question as to which coordinate system the brain uses to compute target distance and direction and how the different coordinate systems interact at the behavioral and neural levels (e.g. Andersen, Snyder, Bradley, & Xing, 1997; Colby & Goldberg, 1999; McGuire & Sabes, 2009; Snyder, Grieve, Brotchie, & Andersen, 1998; Sober & Sabes, 2005; Soechting & Flanders, 1992; Thaler & Todd, 2009a, 2009b). Yet, remarkably, nobody has tested the fundamental assumption that the brain uses metric structure (i.e. distance and direction) to represent locations. 
However, from a computational perspective, target location could just as easily be represented in a weaker non-metric geometric format, such as ordered geometries or topologies. The advantage of non-metric geometries is that they can be accurately and reliably computed in the presence of sensory uncertainty that would corrupt, or even prevent, computation of metric structure—and that they can be computed faster (e.g. Beardsley, Reid, Zisserman, & Murray, 1995; Faugeras, 1995; Koenderink & van Doorn, 1991; for a review with respect to human vision see Todd, 2004). Importantly, if a non-metric geometry were used to represent space, metrically scaled behavior is expected to emerge only as a consequence of adaptive mechanisms (Robert, Zeller, Faugeras, & Hebert, 1997). 
From an empirical perspective, there is evidence to support the idea that the human brain employs non-metric geometrical structure, such as ordered geometries or topologies, for example, in the generation of overt judgments of visually perceived 3D shape (for review see Todd, 2004; Todd & Norman, 2003) and for visually guided navigation of large-scale environments (Foo, Warren, Duchon, & Tarr, 2005). 
Ordered geometries, as one might expect, specify the order of two or more points. Thus, it is possible, for example, to determine if two points are spatially distinct and if one point is closer than the other. Disparity signals, for example, are a natural source of information that specifies depth order. Of course, disparity signals could also be combined with other signals such as vergence, version, and head position to compute metric depth structure, but ordered structure is available even without those additional sources of information (see Blohm, Kahn, Ren, Schreiber, & Crawford, 2008 for review and for a metric model that (by definition) provides ordered structure as well). 
Another type of non-metric structure, topological structure, also permits unique identification of locations, but does not permit determination of order. Locations in a topological sense could, for example be computed by a neural network, in which neural inputs from various sources such as disparity, vergence, version, and head position, converge onto neurons later in the network such that a certain pattern of activation within the sensory input layer would result in the activation of a certain ‘location’ neuron in the representation layer. According to this conceptual model, combinations of sensory signals lead to the identification of locations. Most importantly, in a topological model, the representation signifies locations in space, not metric units. How a topological representation might be used to generate various kinds of responses is addressed in more detail in the Discussion section. 
In conclusion, both topologies and ordered geometries provide weaker geometrical structure than metric geometries. All three geometric representations (metric, ordered, topological) permit unique identification of locations. In contrast to metric geometries, however, neither ordered nor topological representations provide measures of distance and direction (Coxeter, 1969). At the same time, even though non-metric geometries are ‘weaker’, they also come at a lower computational cost and are more robust towards sensor noise (Robert et al., 1997). Thus, a non-metric representation can in principle be beneficial for performance, and the possibility arises that our brain would represent target locations non-metrically. 
The current experiments were designed to investigate which format the human brain uses to represent locations (i.e. non-metric, metric head/eye centered, metric hand centered, and metric allocentric). To achieve this goal, we developed a paradigm that required subjects to direct responses towards visible target locations or to generate responses that could be accomplished only by computing target distance and direction in head/eye, hand or allocentric coordinates. To confirm that any potential differences in performance were due to the way the brain represents target locations and not due to the type of response that subjects had to give, subjects performed both reach-to-point movements (Experiment 1) and probe stimulus adjustments (Experiment 2). 
The data show that responses were metrically scaled in all conditions but that response variability was lowest in conditions in which subjects could use a non-metric representation of locations to generate their response. Response variability in non-metric conditions could not be predicted based on response variability in metric conditions. Our results are the first evidence for the idea that the brain represents locations non-metrically, even when those locations are positioned well within reach space. We discuss the implications of our findings for computational modeling of visually guided movement and for the understanding of visual processing for perception and action. 
Materials and methods
Reach-to-point movements (Experiment 1)
Apparatus
The experimental apparatus is illustrated in Figure 1. Subjects were seated on an adjustable chair. Stimuli were displayed on a 17-inch CRT at a temporal and spatial resolution of 75 Hz and 1280(H) × 1024(V) pixels, respectively. The active display area subtended 335(H) × 268(V) mm. Subjects viewed stimuli in a half-silvered front-surface mirror that was mounted halfway between the monitor and a touch panel (distance from the mirror to either surface was 30 cm). During the experiment, the back of the half-silvered mirror was covered and subjects moved their hands below the mirror on the touch panel. Thus, subjects could not see their hand during the experiment. At the same time, the matched distances between monitor, mirror and touch panel made the mirror reflection of stimuli appear to be in the same plane as the touch panel. Displays were viewed binocularly in a darkened room and a combined chin/forehead rest was used to avoid changes in head position. Subjects' eyes were located ∼460 mm above the touch panel. 
Figure 1
 
Sketch of the experimental apparatus. Planes emanated from the forehead and consisted of yarn spun in random orientations around thin, clear plastic frames. Threads were arranged so as not to obstruct the view of the scene for either eye. To eliminate head movements, subjects placed their head in a combined chin-forehead rest (not shown). To eliminate eye movements, subjects directed their gaze at a fixation target. Eye movements were monitored with a webcam (not shown).
The most interesting feature of our apparatus was the pair of vertically oriented planes placed between the monitor and mirror. When a subject placed her head in the chin/forehead rest, the two planes appeared to emanate from the forehead and extend onto the movement surface. The two planes aided in the specification of head/eye centered coordinates. The planes consisted of yarn spun in random orientations around thin, clear plastic frames. Threads were arranged so as not to obstruct the view of the scene for either eye. Defining planes using yarn avoided reflections or occlusions that might have otherwise impaired vision of the scene. One plane was made of black yarn and oriented 35° clockwise (35° plane). The other plane was made of red yarn and oriented 11° counter clockwise (−11° plane). 
Hand movements were recorded with a stylus on the touch panel (MagicTouch Add-On Touch Screen; Model: KTMT-1700-USB; Keytec, Garland, TX). Touch panel coordinates were calibrated to the display coordinates, i.e. pixels. Thus, the workspace on the touch panel was 335 × 268 mm and spatial resolution was 0.26 mm. Touch panel coordinates were sampled at 150 Hz. To ensure good correspondence between display and movement parameters, the apparatus was calibrated before each session. For calibration, display coordinates and touch panel coordinates were aligned using a 25-point calibration procedure. For calibration, the backing of the half-silvered mirror was removed to permit visual alignment between the physical stylus tip and the virtual image. Calibration was evaluated by projecting a virtual stylus tip (a 1.5 mm diameter circle) onto the physical stylus tip. If the physical stylus tip fell within the virtual stylus tip across all workspace regions, calibration was deemed successful and the backing of the mirror was reinstated. If the physical tip did not fall within the virtual stylus tip, calibration was repeated. 
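The paper does not state the mathematical form of the calibration mapping; purely as an illustration, assuming a 2D affine transform estimated by least squares from the 25 correspondence points (all names below are hypothetical), the alignment could be computed along these lines:

```python
import numpy as np

def fit_affine_2d(panel_xy, display_xy):
    """Least-squares 2D affine map from touch-panel to display coordinates.

    panel_xy, display_xy: (N, 2) arrays of corresponding points, e.g. the
    25 calibration targets described above. The affine form is an assumption.
    """
    n = panel_xy.shape[0]
    X = np.hstack([panel_xy, np.ones((n, 1))])       # design matrix [x, y, 1]
    A, *_ = np.linalg.lstsq(X, display_xy, rcond=None)
    return A                                          # (3, 2): display = [x, y, 1] @ A

def panel_to_display(A, panel_xy):
    X = np.hstack([panel_xy, np.ones((panel_xy.shape[0], 1))])
    return X @ A

# Hypothetical usage:
# A = fit_affine_2d(measured_panel_points, displayed_points)
# cursor_px = panel_to_display(A, stylus_samples)
```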
Stimuli and task
The experiment involved four presentation conditions that differed in the way visual information relevant for movement production was presented to subjects. These four conditions are illustrated in Figure 2. In all conditions, the subject's hand was initially located at a visible starting point, which was located in the 35° plane on the virtual movement surface (i.e. 460 mm below, 80 mm right, and 115 mm to the front of subjects' eyes). Visual feedback was provided in between trials to help subjects move their hand towards the starting point. During experimental trials, however, visual feedback was not available and the hand was unseen. The hand starting point was visible throughout the experiment. Subjects were instructed to maintain fixation on a peripheral target during experimental trials (for more details see Eye movements), but subjects were permitted to move their eyes between trials to facilitate the hand movement to the starting position. 
Figure 2
 
Illustration of the four presentation conditions used in the experiments. Only one target magnitude is illustrated for each presentation condition. In the actual experiments the hand was unseen, but it is drawn here for illustration. Please see Materials and methods for details regarding stimuli and instructions.
In the ‘Endpoint’ condition, subjects were presented with a black target dot located in the 35° plane on the virtual movement surface and they were asked to move their hand towards the target dot along the response direction, i.e. along the line where the 35° plane intersected the virtual movement surface. Thus, in the ‘Endpoint’ condition, subjects could move their hand towards a visible target location. As laid out in the Introduction, the brain can use non-metric information about target location in order to generate responses in this condition. However, subjects could of course also use metric distance and direction in either head/eye, hand or allocentric coordinates to represent locations and generate their response in the ‘Endpoint’ condition. 
In the metric ‘Head/Eye Centered’ condition, subjects were presented with a black target dot located in the −11° plane on the virtual movement surface. Subjects were instructed to move their hand along the response direction towards a location in space that was located at the same distance from their head or eye as the black dot, but in the direction defined by the 35° plane. It follows that subjects could not direct their response at a visible target location in the ‘Head/Eye Centered’ condition. Consequently, they could not use non-metric information to generate their response, but had to compute the metric distance of the black target dot with respect to their head or eye in order to get it right. Importantly, the stimulus layout was designed such that subjects could not use hand-centered or allocentric metric distance to generate accurate responses: if subjects had used the distance between the hand starting position and the target to program their movement, they would have overshot by 69 mm on average. 
In the metric ‘Hand Centered’ condition, subjects were presented with a black target dot that was located on a line that emanated from the hand starting point and that was oriented 11° counter clockwise. Subjects were instructed to move their hand along the response direction towards a location in space that was located at the same distance from their hand starting point as the black dot. Thus, just as in the ‘Head/Eye Centered’ condition, subjects could not use non-metric locations to generate their response, but had to compute the metric distance of the black target dot with respect to their hand in order to get it right. The stimulus layout prevented subjects from basing their performance on head/eye-centered coordinates: if subjects had used, for example, the distance between their forehead and the target to program their movement, they would have undershot by 23 mm on average. In principle, however, subjects could make use of allocentric coordinates, because the hand starting point could be treated as an allocentric reference. 
In the metric ‘Allocentric’ condition, subjects were presented with a black and a white dot connected by a thin line. Subjects were instructed to move their hand along the response direction towards a location in space whose distance from their hand starting point equaled the distance between the black dot and the white dot. Therefore, subjects could not direct their response at a visible location, but had to compute metric distance in allocentric coordinates to generate their response. The white dot in the ‘Allocentric’ condition was always located 120 mm to the left and 240 mm above the starting point of the hand. Thus, subjects could not use head/eye or hand centered coordinates to perform correctly. 
In all conditions, subjects were asked to move their hand in one smooth movement. Target dots were positioned such that they specified the same set of target magnitudes in each presentation condition, i.e. 100, 130, 160 and 190 mm. Hand starting position was fixed over the course of the experiment and located on the recording surface 460 mm below, 80 mm right, and 115 mm to the front of subjects' eyes. The start position of the hand and the visual targets were small 7-mm circles (white or black) with a small 2-mm dot of the opposite luminance in the center. The start position was visible throughout the experiment. All stimuli were presented on a light gray background covered with small (1 mm) darker gray, randomly positioned points. Random positions were recomputed on every trial. 
It needs to be highlighted that our experimental manipulation affected only the way response magnitude was visually specified, since the direction in which the response was to be made (‘response direction’) was visually specified by the 35° plane in all conditions; i.e. the response was to be made along the line where the 35° plane intersected the virtual movement surface. Thus, we would expect to find larger effects of our manipulations on movement magnitude than on movement direction. 
Predictions
We can dissociate predictions regarding response magnitude (i.e. bias) and response variability amongst the four conditions. As it turns out, an analysis of variability enables us to determine if the brain uses a non-metric representation to represent locations, whereas an analysis of bias enables us to determine if the brain computes a metric representation or if it has access to a non-metric representation only. 
With regard to variability, the prediction is that if subjects represent location non-metrically, then variability of responses in ‘Endpoint’ conditions should not be predictable from the variability in metric conditions. In contrast, if subjects represent locations metrically, we would expect that variability in ‘Endpoint’ conditions should indeed be predictable from variability in metric conditions. The simplest prediction is that variances in any of the three metric conditions match variance in the ‘Endpoint’ condition. This would suggest that the metric coordinate system whose variance matches variance in ‘Endpoint’ conditions is used to represent locations in ‘Endpoint’ conditions. In a more complex scenario, however, subjects might use combinations of metric coordinate systems to represent locations. In that case variance in ‘Endpoint’ conditions would be predicted by a combination of variances in metric conditions. 
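To make the 'combination' scenario concrete (this is our own illustration, not a model taken from the paper): if the response in 'Endpoint' conditions were formed as a weighted average of independent estimates derived from the head/eye-centered, hand-centered and allocentric representations, its variance would be fully determined by the metric variances, e.g.

\hat{x}_{EP} = \sum_{c} w_c \, \hat{x}_c, \qquad \sigma^2_{EP} = \sum_{c} w_c^2 \, \sigma^2_c, \qquad \sum_{c} w_c = 1,

where c indexes the three metric coordinate systems. With optimal (inverse-variance) weights, the lowest variance such a scheme can reach is \sigma^2_{EP,\min} = \left( \sum_{c} 1/\sigma^2_c \right)^{-1}; under this illustrative model, 'Endpoint' variability falling below that bound could not be explained by any combination of the metric representations.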
With regard to response magnitude, i.e. bias, we predicted that if the brain does not have access to any metric representation, but relies exclusively on non-metric structure, then response magnitudes are expected to scale randomly in all metric conditions. In contrast, if the brain has access to a metric representation, then response magnitude should scale metrically with target magnitude in metric conditions. It is important to realize that responses are expected to scale metrically in ‘Endpoint’ conditions regardless of the underlying representation, and in fact, there is plenty of evidence in the literature that this is the case (e.g. Gordon, Ghilardi, & Ghez, 1994; Messier & Kalaska, 1997). However, as stated in the Introduction, metric response scaling in ‘Endpoint’ conditions might be either the consequence of adaptive mechanisms that work in the presence of a non-metric representation or the consequence of an underlying metric representation. It follows that metric response scaling in ‘Endpoint’ conditions by itself is not indicative of the metric or non-metric structure of the underlying representation. 
Subjects
Ten right-handed subjects (7 male, 3 female) participated in Experiment 1. Subjects gave informed consent before the experiment and were paid $10 for their participation. All subjects had self-reported normal or corrected to normal vision. 
Eye movements
To avoid gaze position as a potential experimental confound, subjects were instructed to fixate a target in the virtual movement plane in all conditions. It has been shown that the position of a target on the retina as well as gaze direction affect pointing responses (Bock, 1986; Schlicht & Schrater, 2007). Thus, we selected the location of the fixation target such as to match gaze direction and retinal eccentricity across presentation conditions. In order to match retinal eccentricity of the stimuli, we had to place the fixation target slightly differently in ‘Hand Centered’ compared to the other three presentation conditions. Specifically, in ‘Hand Centered’ conditions, the fixation target was placed 133.5 mm to the right, 365 mm to the front, and 460 mm below the center of the forehead, whereas it was placed 80.5 mm to the right, 375 mm to the front, and 460 mm below the forehead in the other three presentation conditions. Our choice of fixation location ensured that the target dots were located in the lower visual field in all conditions, that the average distance between target dots and fixation target was ∼13 degrees in all conditions and that gaze direction was matched across presentation conditions (i.e. gaze direction was identical in ‘Endpoint’, ‘Head/Eye Centered’ and ‘Allocentric’ conditions and differed only slightly in ‘Hand Centered’ conditions). To confirm that subjects followed our instructions we monitored eye movements with a webcam (Logitech Quickcam Pro9000). Due to the geometry of our apparatus and the mirror-based viewing, we could not use a standard eye tracker to monitor eye movements. 
Procedure
Each trial began with the display of the hand starting position, fixation target, and the black target dot (in ‘Allocentric’ conditions the white reference dot was shown as well). To initiate a trial, subjects moved their hand to the starting position. During this phase, subjects received online feedback on hand position via a pink cursor dot (2 mm diameter) projected onto their real hand position. Once subjects had remained within a 2 mm diameter circle around the starting position for at least 1.8 s, a beep would indicate the start of a trial. Synchronous with the beep, online feedback about hand position would disappear. The other display elements remained visible. Subjects were instructed to move their hand in one smooth movement along the response direction, i.e. along the line where the 35° plane intersected the virtual movement surface. Subjects were told that there was no time pressure and that they should move as accurately as possible. 
A trial was terminated by the computer either if subjects had not started to move after 3 s or if the hand moved less than 10 mm during the last 333 ms. A beep signaled the end of a trial, at which point the target dots for the next trial would appear. After subjects had moved at least 30 mm away from their final hand position, online feedback was restored. Stimulus presentation was blocked with respect to the four presentation conditions (‘Endpoint’, ‘Head/Eye Centered’, ‘Hand Centered’, ‘Allocentric’), yielding four blocks. Within each block, each of the four target magnitudes was presented 8 times in pseudo-random order. An experimental session consisted of three sets of four blocks. Order of blocks within each set was randomized. Each subject participated in one ∼60 minute session and therefore made 24 responses to each stimulus. At the beginning of each session, subjects were given written instructions and they were made familiar with the task in a practice phase, during which they made at least four responses in each of the four presentation conditions (i.e., a total of at least 16 practice trials before beginning the experiment proper). Practice trials were not recorded. Stimulus presentation and data collection were computer-controlled using C/C++ and OpenGL. 
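As a rough sketch of the stopping rule just described (our reading of the text; whether "moved less than 10 mm" refers to path length or net displacement is an assumption, and all names are hypothetical):

```python
import numpy as np

def trial_should_end(xy_mm, t_s, movement_started,
                     start_timeout_s=3.0, window_s=0.333, min_dist_mm=10.0):
    """End the trial if the hand has not started moving within 3 s, or if it
    travelled less than 10 mm (path length) over the last 333 ms.
    xy_mm: (n, 2) touch-panel samples in mm; t_s: sample times in seconds."""
    if not movement_started:
        return t_s[-1] >= start_timeout_s
    recent = t_s >= t_s[-1] - window_s
    steps = np.diff(xy_mm[recent], axis=0)
    travelled = np.linalg.norm(steps, axis=1).sum()
    return travelled < min_dist_mm
```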
Data analysis
For each movement, the equation for a straight line joining movement start and endpoints was computed. Movement Magnitude was computed as the length of that line and movement direction as its angular orientation. For each movement, we could then compute the Movement Direction Error as the angular deviation between the response direction and movement direction. To assess systematic deviations of the responses from the visually specified target magnitude and direction, we computed average movement magnitude and average movement direction error. To assess variability of performance, we computed standard deviations (SD) of movement magnitude and movement direction error for each subject. For the direction data, we computed both linear and circular statistics (Fisher, 1993). Since differences between linear and circular statistics were very small (max. absolute deviation between measures 0.0017°), we report linear statistics only. To characterize Distributions of Movement Endpoints across subjects, we fit minimum variance ellipses to the endpoints of all subjects' hand movements for each target magnitude and presentation condition (Gordon et al., 1994; van Beers, Haggard, & Wolpert, 2004). To remove any contribution of individual differences to this measure, we subtracted each subject's mean endpoint (x̄, ȳ) for each target magnitude in each presentation condition before computing the ellipse. Ellipses were determined by computing the eigenvalues λ and the eigenvectors of the 2 × 2 sample covariance matrix R, whose elements are given by:

R_{jk} = \frac{1}{n} \sum_{i=1}^{n} \delta_{ij} \delta_{ik}, \qquad (1)

where the deviation δ_i = p_i − p̄, p_i is the endpoint of movement i along one of two orthogonal axes (rows and columns j, k ∈ {x, y}) and p̄ is the mean position over n trials. The square root of each eigenvalue corresponds to the standard deviation of movements along the axis specified by the associated eigenvector. The aspect ratio of the ellipse is equal to the ratio of the square roots of the two eigenvalues, i.e. √λ₁/√λ₂. The larger the ratio, the more elongated the ellipse. Ellipse size depends on the magnitude of the eigenvalues, and the SD of movements in the plane is equivalent to the ellipse area:

SD_{2D} = \pi \sqrt{\lambda_1} \sqrt{\lambda_2}. \qquad (2)
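A minimal NumPy sketch of the ellipse computation in Equations 1 and 2 (not the authors' code; variable names are ours):

```python
import numpy as np

def endpoint_ellipse(endpoints):
    """Minimum-variance ellipse statistics for a set of 2D movement endpoints.

    endpoints: (n, 2) array of x, y endpoints for one target magnitude and
    presentation condition, already corrected for each subject's mean."""
    deviations = endpoints - endpoints.mean(axis=0)          # delta_i = p_i - p_bar
    R = (deviations.T @ deviations) / len(endpoints)         # 2 x 2 covariance, Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(R)                      # eigenvalues in ascending order
    lam_small, lam_large = eigvals
    aspect_ratio = np.sqrt(lam_large) / np.sqrt(lam_small)   # elongation of the ellipse
    sd_2d = np.pi * np.sqrt(lam_large) * np.sqrt(lam_small)  # planar SD = ellipse area, Eq. (2)
    return R, eigvecs, aspect_ratio, sd_2d
```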
Variability in movement direction and magnitude, and therefore the distribution of movement endpoints, are affected by Kinematic Parameters such as movement speed, duration and trajectory shape (van Beers et al., 2004). To determine if the shape of the movement trajectories differed across conditions, we determined movement curvature by computing the absolute distance of any point on a movement trajectory to the straight line connecting trajectory start and endpoints, and by dividing the maximum absolute distance by the length of the straight line (Atkeson & Hollerbach, 1985). To represent curvature values in percent, we multiplied this ratio by 100. Movement curvature of 0% corresponds to a straight-line trajectory, whereas movement curvature of 50% would correspond to a half-circular trajectory. Average movement speed, peak movement speed and movement duration were computed by numerical differentiation of smoothed movement trajectories (Butterworth filter with 7 Hz cutoff). 
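For concreteness, the curvature and speed measures could be computed as sketched below (our own sketch; the order of the 7 Hz Butterworth filter is not reported, so second order is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def curvature_percent(trajectory_mm):
    """Max perpendicular distance from the start-end line, divided by the
    length of that line and expressed in percent (Atkeson & Hollerbach, 1985)."""
    start, end = trajectory_mm[0], trajectory_mm[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    unit = chord / chord_len
    rel = trajectory_mm - start
    along = rel @ unit                        # component along the start-end line
    perp = rel - np.outer(along, unit)        # perpendicular component
    return 100.0 * np.linalg.norm(perp, axis=1).max() / chord_len

def tangential_speed(trajectory_mm, fs_hz=150.0, cutoff_hz=7.0, order=2):
    """Speed profile from a low-pass filtered (zero-phase Butterworth)
    trajectory sampled at the touch panel's 150 Hz."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0))
    smoothed = filtfilt(b, a, trajectory_mm, axis=0)
    velocity = np.gradient(smoothed, 1.0 / fs_hz, axis=0)
    return np.linalg.norm(velocity, axis=1)   # mm/s; mean and max give average and peak speed
```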
We excluded outliers for each subject, target magnitude and presentation condition whenever the magnitude, orientation error, curvature, or x or y coordinate of a movement fell below the 25th percentile − 2.5 * IQR or above the 75th percentile + 2.5 * IQR (IQR = inter-quartile range). Using this method, which is robust in the presence of outliers, only 0.94% (n = 36) of all movements were rejected. 
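The exclusion rule amounts to the following (a sketch, assuming the fences are applied to each measure separately within each subject × magnitude × condition cell):

```python
import numpy as np

def iqr_outlier_mask(values, k=2.5):
    """True for trials outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

# Hypothetical usage: reject a trial if any of the screened measures is flagged
# reject = (iqr_outlier_mask(magnitude) | iqr_outlier_mask(direction_error)
#           | iqr_outlier_mask(curvature) | iqr_outlier_mask(end_x) | iqr_outlier_mask(end_y))
```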
Probe stimulus adjustments (Experiment 2)
Stimuli and task
The stimuli were identical to those used in Experiment 1. To perform the response, subjects were instructed to adjust a probe dot, the movement of which was restricted to the response direction that had also been used in Experiment 1, i.e. the line where the 35° plane intersected the virtual movement surface. Location of the probe dot could be adjusted in 1 mm steps using the up- and down-arrow keys on the computer keyboard. Subjects were instructed to adjust the location of the probe dot to match the location of the target or its distance in head/eye, hand, or allocentric coordinates, depending on the presentation condition that was being performed. Note that the start point of the probe dot coincided with the start point used in Experiment 1, and thus with the location of the subject's index finger. 
Predictions
The predictions for Experiment 2 are the same as those for Experiment 1. 
Subjects and apparatus
The same subjects who participated in Experiment 1 also participated in Experiment 2 and they were paid $10 for their participation. Experiments 1 and 2 were performed on separate days and the order in which Experiments 1 and 2 were performed was counterbalanced across subjects. The adjustment task was performed using the same apparatus as in Experiment 1, with the only difference being that the touch screen was replaced with a keyboard. The keyboard was placed such that the midpoint between the up- and down-arrow keys was located at the same point in space as the hand starting position used in Experiment 1. 
Procedure and eye movements
Each trial began with the display of the initial probe dot position, the fixation target and the target dot. To initiate a trial, subjects pressed the up-arrow key. Once subjects pressed the key, the target dot disappeared from view. Thus, subjects saw only the probe dot and could not compare its position directly with the target dot, which was no longer visible. Even though the target dot disappeared, however, subjects could in principle perform the task using some sort of retinal matching or disparity matching or a combination of these two strategies in the ‘Endpoint’ condition. Such matching strategies would give the ‘Endpoint’ condition an unfair advantage over the three metric conditions. To avoid retinal or disparity matching as potential confounds, the fixation target jumped to a new position as soon as the target dot disappeared in the ‘Endpoint’ condition. To eliminate the gaze shift itself as a potential confound, the fixation target jumped in the other conditions as well. To summarize, in all conditions subjects had to shift their gaze by 5.5° between target dot presentation and probe dot adjustment. The fixation target jumped to the same position in all presentation conditions. However, since the initial position of the fixation target differed between ‘Hand Centered’ and the other three presentation conditions (compare Experiment 1), the direction of the jump differed between ‘Hand Centered’ (leftwards jump) and the other three conditions (upwards jump). To prevent subjects from using visual ‘landmarks’ to make their response, the location of the gray dots covering the background shifted randomly between the target dot presentation and probe dot adjustment phases. 
Just as in Experiment 1, stimulus presentation was blocked with respect to the four presentation conditions. Within a block, each of the four target magnitudes was presented 8 times in pseudo-random order. Each subject participated in one ∼40 minute session that contained two sets of four blocks. Thus, every subject gave 16 responses to every stimulus. At the beginning of the session, subjects received written instructions and were made familiar with the task in a practice session, during which they made two responses in each of the four presentation conditions, for a total of 8 practice trials. Practice trials were not recorded. 
Data analysis
Adjusted Magnitude was computed as the overall magnitude that the probe dot was moved on each trial. The mean and standard deviation of these magnitudes were computed for each subject, target magnitude and presentation condition. We excluded outlier trials for each subject, target magnitude and presentation condition whenever the adjusted magnitude fell below the 25th percentile − 2.5 * IQR or above the 75th percentile + 2.5 * IQR. Only 0.3% (n = 8) of responses were rejected. 
Results
Reach-to-point movements (Experiment 1)
Distributions of movement endpoints
Figure 3 shows distributions of movement endpoints for the different target magnitudes and presentation conditions. It is evident from Figure 3 that the ellipses are aligned with the direction of movement and that the areas of the ellipses increase as movement magnitude increases in all presentation conditions. In all presentation conditions, the endpoints of the movements are shifted slightly counter clockwise, i.e. towards the sagittal body midline, and the endpoints of the movements are distributed in roughly the same fan shape. In other words, the angular errors in movement direction were similar across presentation conditions. However, it is also evident that the shapes and areas of the ellipses differ across conditions. Since angular errors appear to be similar across presentation conditions, differences in size and shape of the ellipses are most likely due to differences in movement magnitudes. 
Figure 3
 
Distributions of movement endpoints and variability ellipses for the different experimental conditions in Experiment 1. Ellipse axes denote two SD around the mean. Straight lines in each ellipse denote average movement direction, i.e. the last portion of the vector joining movement start and endpoints. Ellipses were computed based on all subjects' responses after subtracting each subject's mean. Ellipses are positioned on the average movement endpoint across all subjects. Black squares mark the endpoint that would have resulted from a movement executed veridically along the target direction over the target magnitude.
It is important to note that one has to be careful when interpreting what the shape of ellipses might indicate. Although ellipses in ‘Hand Centered’ conditions appear to be narrower than in the ‘Head/Eye Centered’ and ‘Allocentric’ conditions, this is entirely due to the fact that the average movement magnitudes are shorter for ‘Hand Centered’ compared to ‘Head/Eye Centered’ and ‘Allocentric’ conditions. These shorter movement magnitudes, in combination with roughly constant angular scatter, produce a reduction in the short axes of the ellipses in this condition. But because the length of the long axis is roughly similar across ‘Hand Centered’, ‘Head/Eye Centered’ and ‘Allocentric’ conditions, the reduction in the size of the short axis in the ‘Hand Centered’ condition results in more elongated ellipses. The ellipses in the ‘Endpoint’ condition, however, really are quite different from those of the other three conditions. Here the ellipses have shorter long axes because there was much less variability in movement magnitude in this condition compared to the others. On an overall level it is interesting to note that ellipses become less elongated with increasing magnitude in ‘Head/Eye Centered’ and ‘Allocentric’ conditions, but there appears to be no systematic change in ellipse aspect ratio for ‘Hand Centered’ and ‘Endpoint’ conditions. 
Average movements in ‘Endpoint’ and ‘Hand Centered’ conditions are comparably accurate for closer targets, but there is a tendency to undershoot the farthest target. In contrast, subjects tend to overshoot targets in ‘Head/Eye Centered’ and ‘Allocentric’ conditions, but the overshoot is reduced for the farthest target in ‘Allocentric’ conditions. 
Despite systematic errors, subjects' average movement endpoints are fairly accurate with respect to the specified endpoint (max. spatial deviation is 21 mm, average deviation is 12 mm), which suggests that subjects used the coordinate system they were instructed to use to guide their hand in all conditions. 
Movement direction errors
We applied repeated measures ANOVA to both constant (average) and variable (SD) movement direction errors with ‘presentation condition’ and ‘target magnitude’ as factors. The analysis revealed a significant main effect of target magnitude on both the average movement direction error (F(3,27) = 14.302; p = .0001) and the SD of movement direction error (F(3,27) = 4.841, p = .008). Neither the main effect of presentation condition nor the interaction effects were significant. Therefore, we averaged the constant errors (i.e., average movement direction error) and the variable errors (i.e., SD of movement direction error), respectively, across presentation conditions and plotted them as a function of target magnitude (Figure 4). 
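The authors do not say which software they used; as one way to reproduce this kind of two-factor repeated measures ANOVA (column names hypothetical), statsmodels' AnovaRM could be used:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# df: one row per subject x presentation condition x target magnitude,
# containing the per-cell mean (or SD) of movement direction error.
def rm_anova(df: pd.DataFrame, dv: str = "dir_error"):
    """Two-way repeated measures ANOVA with condition and magnitude as
    within-subject factors (10 subjects -> F(3,27) for the main effects)."""
    return AnovaRM(df, depvar=dv, subject="subject",
                   within=["condition", "magnitude"]).fit()

# print(rm_anova(df))  # F and p values for both main effects and the interaction
```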
Figure 4
 
(a) Average movement direction errors (in degrees) averaged across presentation conditions. Positive errors indicate errors towards the sagittal body midline. (b) SD of movement direction errors (in degrees) averaged across presentation conditions. Error bars denote standard errors of the mean between subjects.
As can be seen in Figure 4a, subjects shifted the endpoints of their movements slightly (∼2°) towards their sagittal body midline, although this shift decreased with increasing target magnitude (compare Thaler & Todd, 2009a). This observation is consistent with the depiction of the data in Figure 3 and with the significant effect of target magnitude on average movement direction errors. Similarly, in agreement with the significant effect of target magnitude on SD of movement direction errors, Figure 4b shows that SD of movement direction errors decreases with increasing target magnitude (compare Gordon et al., 1994; Messier & Kalaska, 1997; Thaler & Todd, 2009a). The overall effect is small (∼0.5°). In summary, subjects' movement direction errors were unaffected by the way visual information was presented. As outlined in the Materials and methods section, this was expected since response direction was specified in the same way across all presentation conditions. 
Movement magnitude
As the left-hand column of Figure 5 shows, subjects were equally accurate with respect to target magnitude in the ‘Endpoint’ and ‘Hand Centered’ conditions, but tended to overshoot in ‘Head/Eye Centered’ conditions and to a lesser degree in ‘Allocentric’ conditions. This result is consistent with the depiction of the data in Figure 3. In order to determine the reliability of this effect, we applied repeated measures ANOVA with ‘presentation condition’ and ‘target magnitude’ as factors to the average movement magnitudes. The overall analysis revealed a highly significant main effect of target magnitude (F(3,27) = 337.499, p < .0001), presentation condition (F(3,27) = 11.062; p < .0001) and a significant interaction between these two factors (F(9,81) = 2.274; p = .025). These results confirm our impression that subjects' movements scale metrically with target magnitude in all presentation conditions, but that systematic over- and under-shoots depend on the way visual information was presented to subjects as well as on the magnitude of the specified endpoint. We carried out a series of post-hoc T-tests between average movement magnitudes in the ‘Endpoint’ condition for each of the specified endpoints and the corresponding average movement magnitude for the specified endpoints in each of the other three presentation conditions. Threshold for significance for each test was chosen to be p = .05. Since we computed a total of twelve tests, the degrees of freedom for each test were adjusted using Tukey's HSD procedure in order to control for accumulation of Type-I error. 2 As can be seen in Figure 5, only the movement magnitudes for the two farthest targets in the ‘Head/Eye Centered’ and ‘Allocentric’ presentation conditions differed significantly from those in the ‘Endpoint’ conditions. In summary, average movement magnitude was equally accurate with respect to physical target magnitude in both ‘Endpoint’ and ‘Hand Centered’ conditions, but subjects tended to overshoot the two farthest target magnitudes in ‘Head/Eye Centered’ and ‘Allocentric’ conditions. 
Figure 5
 
Left column: Subjects' average movement magnitude plotted as a function of target magnitude for the different presentation conditions. Diagonal lines indicate veridical performance and asterisks indicate a significant difference in average movement magnitude to the corresponding magnitude in ‘Endpoint’ conditions (* p < .05, ** p < .01). Degrees of freedom for tests of significance were adjusted using Tukey's HSD procedure (for details see text). Right column: Subjects' average SD of movement magnitude plotted as a function of movement magnitude for the different presentation conditions. Diagonal lines indicate the best linear fit to the data. Model parameter and fit statistic (R²) are given in the lower right corner of each plot. Error bars denote standard errors of the mean between subjects.
The SD of movement magnitudes are plotted as a function of average movement magnitude in the right-hand column of Figure 5. In agreement with the depiction of the data in Figure 3, the SD of movement magnitude is lowest in ‘Endpoint’ conditions. It is also evident that SD depends on movement magnitude, but this relationship differs amongst presentation conditions. Specifically, SD increases proportionally to movement magnitude in both ‘Endpoint’ and ‘Hand Centered’ conditions whereas SD decreases slightly with increases in movement magnitude in the ‘Head/Eye Centered’ conditions, i.e. slope is slightly negative. In ‘Allocentric’ conditions, SD increases at first, but drops for the farthest magnitude. The observation that SD of movement magnitude does not increase proportionally with movement magnitudes for both ‘Head/Eye Centered’ and ‘Allocentric’ conditions is consistent with the fact that ellipses in those conditions become rounder as movement magnitude increases (see Figure 3). 
SD of movement magnitude is expected to increase in proportion to movement magnitude, i.e. Fitts' law (Fitts, 1954). Since movements in ‘Head/Eye Centered’ and ‘Allocentric’ conditions were longer than the movements in ‘Endpoint’ or ‘Hand Centered’ conditions, we would therefore expect the SDs of these movements to increase simply as a function of response magnitude. To eliminate movement magnitude as a potential confound, we used linear regression to remove the effects of movement magnitude (see 1 for computational details). The residual SD left after this analysis enabled us to determine those differences in the SD that were free from effects of movement magnitude. Because residual SD is the difference between the SD observed in the data and the SD expected based on the linear relationship between SD and movement magnitude, residual SD can be negative (i.e. SD is lower than expected) or positive (i.e. SD is higher than expected). The sum of all residuals is always zero. 
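The exact procedure is given in the appendix referenced above; a minimal sketch of the idea, assuming an ordinary least-squares fit of SD against movement magnitude, is:

```python
import numpy as np

def residual_sd(mean_magnitude, sd_magnitude):
    """Observed SD minus the SD predicted from a straight-line fit of SD on
    movement magnitude; negative values mean lower variability than expected
    for movements of that length. Residuals of the fit sum to zero."""
    slope, intercept = np.polyfit(mean_magnitude, sd_magnitude, deg=1)
    return np.asarray(sd_magnitude) - (slope * np.asarray(mean_magnitude) + intercept)
```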
The average residual SD was −4.05 mm in the ‘Endpoint’ conditions, 1.12 mm in the ‘Hand Centered’ conditions, 1.49 mm in the ‘Head/Eye Centered’ conditions, and 1.44 mm in the ‘Allocentric’ conditions. To test for possible differences in the residual SDs among the four conditions, we computed T-tests for all possible pairwise comparisons. Threshold for significance was chosen to be p = .05 and degrees of freedom were adjusted using Tukey's HSD procedure (critical t.05;HSD = 3.12; critical t.01;HSD = 4.22). We found that the residual SDs in the ‘Endpoint’ conditions differed from the residual SDs in all the other conditions (Hand Centered: t(9) = 4.3; Head/Eye Centered: t(9) = 4.37; Allocentric: t(9) = 6.01). No other comparisons were significant. 
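Comparisons of this kind can be reproduced with ordinary paired t-tests whose statistics are judged against the HSD-adjusted critical values reported above (a sketch; the adjustment itself is taken as given here):

```python
from scipy.stats import ttest_rel

T_CRIT_05_HSD = 3.12   # critical values as reported in the text
T_CRIT_01_HSD = 4.22

def compare_residual_sds(cond_a, cond_b):
    """Paired t-test over the 10 subjects' residual SDs; significance is
    judged against the Tukey-HSD-adjusted critical t rather than the
    nominal p-value of the test."""
    t, _ = ttest_rel(cond_a, cond_b)
    return t, abs(t) > T_CRIT_05_HSD, abs(t) > T_CRIT_01_HSD
```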
In summary, the results suggest that performance was least variable when subjects moved their hands towards a visible target, the only condition which permits the use of non-metric information about target location. Performance was more variable in the three other presentation conditions, where subjects had to rely on metric information about target magnitude in hand-centered, head/eye centered or allocentric coordinates. 
Kinematic parameters
Table 1 shows averages and standard deviations of kinematic parameters computed across target magnitudes for the four presentation conditions. To determine differences between presentation conditions for each of these measures we applied standard paired samples T-tests (two tailed). Threshold for significance was chosen to be p = .05 and degrees of freedom were adjusted using Tukey's HSD procedure (critical t.05;HSD = 3.12). 
Table 1
 
Averages and Standard Deviations (in parenthesis) of Movement Kinematics for each presentation condition in Experiment 1, computed across subjects and target magnitudes. Statistically significant differences between presentation conditions were determined using T-tests for paired samples, with degrees of freedom adjusted using Tukey's HSD procedure. Significant comparisons (p < .05) are indicated in the right column.
Measure                 Endpoint       Hand Centered   Head/Eye Centered   Allocentric    Significant differences (p < .05)
Curvature (%)           3.4 (1.3)      3.3 (1)         3.4 (1.2)           3.1 (0.9)      none
Average Speed (cm/s)    17 (4.7)       16 (4.7)        17.5 (5.6)          16.5 (5.4)     Endpoint vs. Hand C.
Max. Speed (cm/s)       34.7 (14.9)    31.4 (13.1)     33.8 (13.8)         32.4 (15)      Endpoint vs. Hand C.; Endpoint vs. Allocentric
Duration (ms)           843 (185)      914 (209)       988 (225)           990 (244)      All comparisons, except: Head C. vs. Allocentric; Endpoint vs. Allocentric
It is evident from Table 1 that movement curvature is low for all presentation conditions. In other words, the movement trajectories were almost perfectly straight. This finding is in reasonably good agreement with results from other studies that investigated hand movements in the plane (Brenner, Smeets, & Remijnse-Tamerius, 2002, Figures 3 and 5; Desmurget, Jordan, Prablanc, & Jeannerod, 1997). To test if curvature is larger or smaller for different target magnitudes and presentation conditions, we applied repeated measures ANOVA with ‘target magnitude’ and ‘presentation condition’ as factors. This analysis revealed no significant effects. Although there were significant differences amongst the conditions in average speed, maximum speed and duration, these differences were quite small and unsystematic. Moreover, these differences cannot account for the observed differences in the accuracy and variability in movement magnitude among presentation conditions. For example, one would expect that movement variability would be higher in conditions that have higher movement speed. However, the observed differences in movement speed do not correspond to the observed differences in movement variability. 
Probe stimulus adjustments (Experiment 2)
Adjusted magnitudes
As the left-hand column of Figure 6 shows, averages of adjusted magnitudes in Experiment 2 are very similar to averages of movement magnitudes in Experiment 1 (compare Figure 5). The only noticeable difference between average adjusted and average movement magnitudes is that subjects tend to over-adjust target magnitude in ‘Hand Centered’ conditions. Just as for the reach-to-point data from Experiment 1, a repeated measures ANOVA with ‘presentation condition’ and ‘target magnitude’ as factors revealed a highly significant main effect of target magnitude (F(3,27) = 233.93; p < .0001) and presentation condition (F(3,27) = 18.128; p < .0001). The interaction effect was not significant. Thus, just as was the case for the average movement magnitudes, average adjusted magnitudes scale with target magnitude in all conditions, and systematic over- and under-adjustments vary as a function of the way in which visual information was presented to subjects. We carried out a series of post-hoc T-tests between average adjusted magnitudes in the ‘Endpoint’ condition for each of the specified endpoints and the corresponding average adjusted magnitude for the specified endpoints in each of the other three presentation conditions. Just as in Experiment 1, we chose the threshold for significance for each test to be p = .05 and we adjusted the degrees of freedom for each test using Tukey's HSD procedure (for more details see Footnote 2). As can be seen in Figure 6, adjusted magnitudes in the ‘Endpoint’ condition differ significantly from those in the other conditions, except for the shortest target magnitude in ‘Hand Centered’ and ‘Allocentric’ conditions. In summary, average adjusted magnitude is most accurate in ‘Endpoint’ conditions, but subjects tend to over-adjust the physical target magnitude in ‘Hand Centered’, ‘Head/Eye Centered’, and ‘Allocentric’ conditions. 
Figure 6
 
Left column: Subjects' average adjusted magnitude plotted as a function of target magnitude. Diagonal lines indicate veridical performance and asterisks indicate a significant difference in average adjusted magnitude to the corresponding magnitude in ‘Endpoint’ conditions (* p < .05, ** p < .01). Degrees of freedom for tests of significance were adjusted using Tukey's HSD procedure (for details see text). Right column: Subjects' average SD of adjusted magnitude plotted as a function of adjusted magnitude. Diagonal lines indicate the best linear fit to the data. Model parameter and fit statistic (R²) are given in the lower right corner of each plot. Error bars denote standard errors of the mean between subjects.
The SDs of adjusted magnitudes are plotted as a function of adjusted magnitude in the right-hand column of Figure 6. Just as for the reach-to-point data, it is immediately apparent that SD is lowest in ‘Endpoint’ conditions. It is also evident that SD depends on adjusted magnitude, but that this relationship differs amongst the presentation conditions. Direct visual comparison between Figures 5 and 6 reveals that the relationship between SD of adjusted magnitude and average adjusted magnitude is strikingly similar to the relationship between SD of movement magnitude and average movement magnitude. Specifically, just as for the reach-to-point data from Experiment 1, SD increases proportionally with adjusted magnitude in both ‘Endpoint’ and ‘Hand Centered’ conditions, whereas SD decreases slightly as adjusted magnitude increases in ‘Head/Eye Centered’ conditions (i.e. slope is negative). In ‘Allocentric’ conditions, SD increases at first, but drops for the farthest magnitude. 
SD of adjusted magnitude is expected to increase in proportion to adjustment magnitude, i.e. Weber's law. To remove adjustment magnitude as a potential confound, we analyzed the SD of adjusted magnitude in the same way as the SDs of movement magnitude, i.e. we used linear regression to remove the linear effects of adjusted magnitude on SD (see 1 for computational details). The residual SD left after this analysis enabled us to determine those differences in SD that were free from effects of adjustment magnitude. Because residual SD is the difference between the SD observed in the data and the SD expected based on the linear relationship between SD and adjustment magnitude, residual SD can be negative (i.e. SD is lower than expected) or positive (i.e. SD is higher than expected). The sum of all residuals is always zero. 
The average residual SD was −2.36 mm in ‘Endpoint’ conditions, −0.28 mm in ‘Hand Centered’ conditions, 1.73 mm in ‘Head/Eye Centered’ conditions, and 0.9 mm in ‘Allocentric’ conditions. To test for possible differences in the residual SDs among the four conditions, we computed T-tests for all possible pairwise comparisons. Threshold for significance was chosen to be p = .05 and degrees of freedom were adjusted using Tukey's HSD procedure (critical t.05;HSD = 3.12; critical t.01;HSD = 4.22). We found that residual SD in ‘Endpoint’ conditions differed significantly from residual SD in ‘Head/Eye Centered’ and ‘Allocentric’ conditions (Head/Eye Centered: t(9) = 5.15; Allocentric: t(9) = 3.66). No other comparisons were significant. However, without HSD correction, the comparison between residual SD in ‘Endpoint’ and ‘Hand Centered’ conditions reached significance as well (t(9) = 2.46; p = .036). 
In summary, the results suggest that performance was least variable when subjects adjusted the probe dot in ‘Endpoint’ conditions, the only condition which permits the use of non-metric information about target location. Performance was more variable in the three other presentation conditions, where subjects had to rely on metric information about target distance in hand-centered, head/eye centered or allocentric coordinates. 
Direct comparison between reach-to-point movements (Experiment 1) and probe stimulus adjustments (Experiment 2)
Direct visual comparison between Figures 5 and 6 reveals that the SD of reach-to-point movements is similar to the SD of probe stimulus adjustments, except in the 'Endpoint' conditions, in which the SD of the adjustments appears to be larger. To determine whether the SDs from Experiments 1 and 2 differ significantly from one another, we compared the average SDs as well as the average residual SDs for each of the four presentation conditions across the two experiments using paired t-tests. Using Tukey's HSD procedure to adjust the degrees of freedom to account for multiple comparisons, none of the comparisons were significant at p = .05. Without HSD correction, the comparison between the average SD of reach-to-point movements and probe stimulus adjustments in 'Endpoint' conditions reached significance (t(9) = 2.77; p = .022) and the comparison between the average residual SDs in 'Endpoint' conditions approached but did not reach significance (t(9) = 1.92; p = .087). None of the other comparisons were significant or even showed a tendency to be significant. 
The finding that response variability is equal or higher in the adjustment task is surprising, since the variability of probe adjustments or discrimination judgments made by pressing a response key is typically lower than the variability of reach-to-point or reach-to-grasp movements (DeGraaf, Sittig, & Denier van der Gon, 1991; Franz, Fahle, Buelthoff, & Gegenfurtner, 2001; Gegenfurtner & Franz, 2007; Thaler & Todd, 2009a). One possible explanation for our results is that the saccadic eye movements that subjects were required to perform in Experiment 2 introduced additional variability into the adjustment response. The fact that SD tended to be higher for probe-dot adjustments than for reach-to-point movements only in 'Endpoint' conditions could mean that the saccadic eye movements had a larger impact on responses in 'Endpoint' conditions than on responses in the other presentation conditions. 
To compare reach-to-point movements and probe stimulus adjustments quantitatively, we correlated data from Experiments 1 and 2 both within individual subjects and across the whole group. With regard to average movement and average adjusted magnitude, it is important to realize that within subjects we would expect a high correlation simply because both measures are highly correlated with physical target magnitude. Across the group, we would expect high correlations simply because there were individual differences in performance that were consistent across the two experiments. To test the quantitative correspondence between average movement and adjusted magnitudes more critically, we therefore correlated not only 'raw' average magnitudes, but also the residual average magnitudes that remain after the linear effects of physical target magnitude (and therefore individual differences) are removed from both movement and adjusted magnitude (see Appendix A for computational details). The significance of the correlation coefficients was determined at the group level. Table 2 shows that almost all individual correlations are positive (except two correlations for subject 5) and that all correlations at the group level are significant. We conclude that the adjustment and reach-to-point data correspond well quantitatively. 
Table 2
 
Correlations between various measures of performance for Experiments 1 and 2, for both individual subjects and all subjects together (group). n: number of data points for subject correlation. N: number of data points for group correlation. **: p < .01; ***: p < .001.
| Measure | n / N | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 | Group |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Magnitude | n = 16, N = 160 | 0.87 | 0.99 | 0.95 | 0.92 | 0.96 | 0.96 | 0.89 | 0.9 | 0.94 | 0.94 | 0.8*** |
| Residual Magn. | n = 16, N = 160 | 0.29 | 0.87 | 0.72 | 0.46 | 0.18 | 0.73 | 0.69 | 0.82 | 0.2 | 0.41 | 0.61*** |
| SD | n = 16, N = 160 | 0.12 | 0.8 | 0.65 | 0.6 | −0.2 | 0.54 | 0.48 | 0.61 | 0.35 | 0.19 | 0.32*** |
| Residual SD | n = 16, N = 160 | 0.27 | 0.77 | 0.63 | 0.55 | −0.2 | 0.66 | 0.2 | 0.48 | 0.32 | 0.15 | 0.35*** |
| Average Res. SD | n = 4, N = 40 | 0.4 | 0.82 | 0.95 | 0.88 | 0.08 | 0.92 | 0.31 | 0.37 | 0.3 | 0.69 | 0.47** |
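To make the residual-correlation analysis reported in Table 2 concrete, the sketch below correlates raw and residualized magnitudes for one hypothetical subject; the magnitude arrays and target magnitudes are placeholders, not values from the study.

```python
import numpy as np

# Placeholder data for one subject: 16 average magnitudes
# (4 conditions x 4 target magnitudes) per experiment, in mm.
rng = np.random.default_rng(2)
target_magnitude = np.tile([40.0, 60.0, 80.0, 100.0], 4)
magnitude_exp1 = target_magnitude + rng.normal(0, 5, size=16)
magnitude_exp2 = target_magnitude + rng.normal(0, 5, size=16)

def residualize(y, x):
    """Remove the linear effect of x from y: fit y = a + b*x, return residuals."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

# The 'raw' correlation is inflated because both measures track target magnitude;
# correlating the residuals tests correspondence beyond that shared dependence.
raw_r = np.corrcoef(magnitude_exp1, magnitude_exp2)[0, 1]
residual_r = np.corrcoef(residualize(magnitude_exp1, target_magnitude),
                         residualize(magnitude_exp2, target_magnitude))[0, 1]
print(f"raw r = {raw_r:.2f}, residual r = {residual_r:.2f}")
```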
In summary, the data from Experiment 2 are in good agreement with those from Experiment 1 and they show that performance is least variable and most accurate when subjects can use non-metric information about target location to generate their response. The main difference between probe stimulus adjustments and reach-to-point movements is that average adjusted magnitudes in the adjustment task differed significantly between 'Hand Centered' and 'Endpoint' conditions, whereas they did not differ in the reach-to-point task. Furthermore, the comparison between residual SD in 'Hand Centered' and 'Endpoint' conditions for the probe stimulus adjustments reached significance only without HSD correction. The otherwise good agreement between probe stimulus adjustments and reach-to-point movements is striking, especially since the two tasks differed in a number of other respects. First, to generate a response in the reach-to-point task in Experiment 1, subjects invoke a multitude of steps involved in reach planning and control that recruit visual and proprioceptive feedback and feed-forward mechanisms (e.g. Desmurget, Pelisson, Rossetti, & Prablanc, 1998; Kawato, 1999; Wolpert & Ghahramani, 2000). Except for the processing of the relevant visual information in the two kinds of tasks, we do not see how the same steps that are involved in reach planning and control in Experiment 1 could be involved in the generation of the button presses in Experiment 2. It follows that the differences between presentation conditions that we observed in both Experiments 1 and 2 are independent of the way a response was generated. Second, the adjustment task required a response in only one dimension (adjusted magnitude), whereas the reach-to-point task required a response in two dimensions (movement direction and movement magnitude). The fact that we find the same systematic differences amongst the four conditions with regard to both adjusted and movement magnitude highlights the fact that performance differences amongst the presentation conditions are independent of the dimensionality of the response. Finally, the adjustment task required subjects to move their eyes between the presentation of the target and the generation of the response, whereas the reach-to-point task did not. Even though SD in 'Endpoint' conditions tended to be higher for probe-dot adjustments than for reach-to-point movements, the differences amongst the four presentation conditions are nevertheless strikingly similar between Experiments 1 and 2. This finding suggests that SD differences amongst presentation conditions are present regardless of whether subjects make eye movements or not, and that saccadic eye movements may add more variability to responses in 'Endpoint' conditions than to responses in the other three presentation conditions. 
To summarize, even though the tasks used in Experiments 1 and 2 required subjects to make very different responses, the performances that we observed in the two experiments were remarkably similar. 
Predicting ‘endpoint’ variance based on ‘metric’ variance
In Experiment 1, our manipulations did not affect the directional components of reach-to-point movements (see the Reach-to-point movements (Experiment 1) section). Thus, we decided to limit our predictions about variance to response magnitude for Experiment 1. Since responses in Experiment 2 were limited to magnitude, we also focused only on variance in this dimension (see Footnote 3). 
In our experiment, ‘Endpoint’ conditions provide the same metric information as ‘Head/Eye Centered’ and ‘Hand Centered’ conditions combined (compare Figure 1). It follows that lower response variability in ‘Endpoint’ conditions might be due to the fact that the brain makes use of multiple sources of metric information, not that the brain makes use of non-metric information. The human brain appears to integrate information from multiple sources in a way that can be described using maximum likelihood estimation (MLE) (Ernst & Banks, 2002). Under the assumption that the individual estimates are mutually independent and normally distributed, MLE predicts that the variance of the combined estimate σab2 can be obtained from the variances of the individual estimate σa2 and σb2 using Equation 3: 
σab2=σa2σb2σa2+σb2.
(3)
 
If the lower variance in 'Endpoint' conditions in our experiment is due to the fact that the brain combines the metric information contained in 'Head/Eye Centered' and 'Hand Centered' conditions according to the MLE model, then the variance in 'Endpoint' conditions, σ_Endpoint², should be predictable from the individual variances in 'Head/Eye Centered' and 'Hand Centered' conditions, σ_Head/Eye² and σ_Hand², using Equation 4:

$$\sigma_{Endpoint}^{2} = \frac{\sigma_{Head/Eye}^{2}\,\sigma_{Hand}^{2}}{\sigma_{Head/Eye}^{2} + \sigma_{Hand}^{2}}. \tag{4}$$
In our experiment, we can estimate σ_Head/Eye and σ_Hand using the empirically observed SDs in 'Head/Eye Centered' and 'Hand Centered' conditions, SD_Head/Eye and SD_Hand. In the simplest case, we can then compute a prediction of the SD in 'Endpoint' conditions, σ̂_Endpoint = √(σ̂_Endpoint²), by simply substituting SD_Head/Eye and SD_Hand into Equation 4 for each target magnitude and subject separately. In a next step, we can then compare the observed SD_Endpoint to the predicted σ̂_Endpoint. However, a prediction based on SD_Head/Eye and SD_Hand might be considered inappropriate, because SD_Head/Eye and SD_Hand were observed for different response magnitudes than SD_Endpoint. Thus, a more appropriate σ̂_Endpoint might be obtained by substituting SD_Head/Eye_MR and SD_Hand_MR into Equation 4, where SD_Head/Eye_MR and SD_Hand_MR are the SDs that would be expected in 'Head/Eye Centered' and 'Hand Centered' conditions for responses of the same magnitude as those observed in 'Endpoint' conditions. Accordingly, we computed 'magnitude corrected' SD_Head/Eye_MR and SD_Hand_MR using both linear and quadratic magnitude correction functions (see Appendix B for computational details) and substituted these estimates into Equation 4 in order to compute σ̂_Endpoint in a way that takes differences in response magnitude into account. 
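As a concrete illustration of this substitution step, the sketch below applies Equation 4 to observed SDs per subject and target magnitude; the variable names and numerical values are placeholders of ours, not the authors' data.

```python
import numpy as np

def predict_endpoint_sd(sd_head_eye, sd_hand):
    """Equation 4: combine the two metric variances via MLE and return the
    predicted SD for 'Endpoint' conditions."""
    var_a = np.asarray(sd_head_eye, dtype=float) ** 2
    var_b = np.asarray(sd_hand, dtype=float) ** 2
    return np.sqrt(var_a * var_b / (var_a + var_b))

# Placeholder SDs (mm): rows = subjects, columns = target magnitudes.
sd_head_eye          = np.array([[6.0, 7.5, 9.0, 10.5]])
sd_hand              = np.array([[5.0, 6.5, 8.0, 9.5]])
sd_endpoint_observed = np.array([[4.0, 5.0, 6.0, 7.0]])

prediction_error = sd_endpoint_observed - predict_endpoint_sd(sd_head_eye, sd_hand)
print(prediction_error)  # negative values mean the MLE model over-predicts variability
```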
If the predicted σ̂_Endpoint matches the observed SD_Endpoint, this would be consistent with the idea that the brain uses a combination of the metric information provided in 'Head/Eye Centered' and 'Hand Centered' conditions to perform in 'Endpoint' conditions. However, if the predicted σ̂_Endpoint does not match the observed SD_Endpoint, it would seem that the brain uses information in 'Endpoint' conditions that is not captured by metric distance and direction. In other words, the information would be non-metric. Of course, if the prediction fails one could also question the general validity of the MLE model. But given the current evidence about the way the brain might integrate different kinds of visual information (Knill & Pouget, 2004), MLE appears to be a suitable framework for testing the metric model in the context of our experiments. 
If we substitute the observed SD_Head/Eye and SD_Hand, or the magnitude-corrected SD_Head/Eye_MR and SD_Hand_MR, into Equation 4 in order to compute σ̂_Endpoint, we assume that all variability in responses is due to the underlying representation. It has been argued, however, that motor noise associated with moving the hand is an additional and independent source of response variability (van Beers et al., 2004). In fact, our own work suggests that motor noise contributes ∼40% to overall variability in the kinds of hand movements used in the current experiments, i.e. movements of comparable speed, duration, etc. (Thaler & Todd, 2009b). In order to test the influence of motor noise on our prediction in Experiment 1, we also implemented a metric MLE model that assumes 40% motor noise (see Appendix C for computational details). 
Figure 7 shows the results of our MLE analyses for Experiment 1 (computational details of the analyses are described in Appendices B and C). The different rows show the results obtained using different magnitude correction functions (top row: no correction, second row: linear correction, third row: quadratic correction). Blue and red squares in the left-hand column of Figure 7 show the average prediction error, i.e. the difference between observed and predicted SD in 'Endpoint' conditions, with and without 40% motor noise, respectively. Error bars denote 95% confidence intervals around the mean prediction error. The remaining three columns on the right-hand side show the data used to obtain the prediction error. In these plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because the confidence intervals were computed based on the variability of the difference between observed and predicted SD, whereas the standard errors were computed based on the variability of observed and predicted SD. Observed data and magnitude correction functions (both averaged across subjects) are plotted in black. Please note that the observed data are simply replotted from Figure 3 and are the same in all rows. Red crosses denote the SD values that were substituted for σ_Head/Eye and σ_Hand in Equation 4, also averaged across subjects. Blue and red circles denote σ̂_Endpoint for the 'representation model' with and without 40% motor noise, respectively, also averaged across subjects. Figure 8 shows the data for Experiment 2 plotted in the same format as Figure 7, except that we only plotted predictions for the 'representation model', because there is no motor noise model for probe-dot adjustments. 
Figure 7
 
Results of MLE prediction for Experiment 1. Error bars in the left-hand column denote 95% confidence intervals around the prediction error across subjects. In all other plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because the confidence intervals were computed based on the variability of the difference between observed and predicted SD, whereas the standard errors were computed based on the variability of observed and predicted SD. Appendices B and C describe computational details of the prediction.
Figure 8
 
Results of MLE prediction for Experiment 2. Error bars in the left-hand column denote 95% confidence intervals around the prediction error across subjects. In all other plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because the confidence intervals were computed based on the variability of the difference between observed and predicted SD, whereas the standard errors were computed based on the variability of observed and predicted SD. Appendices B and C describe computational details of the prediction.
It is evident that, for both Experiments 1 and 2, the metric MLE model prediction does not capture the data well, i.e. the confidence intervals around the prediction error do not contain zero for the majority of our predictions. This suggests that the variability of subjects' responses in 'Endpoint' conditions cannot be predicted based on the combined use of the metric information provided in 'Head/Eye Centered' and 'Hand Centered' conditions, and thus that subjects use non-metric information about location to generate responses in 'Endpoint' conditions. Interestingly, the model 'over-predicts' in Experiment 1, i.e. σ̂_Endpoint > SD_Endpoint, and 'under-predicts' in Experiment 2, i.e. σ̂_Endpoint < SD_Endpoint. We think that a likely explanation of this result is that eye movements introduced additional noise into the responses in 'Endpoint' conditions in Experiment 2 and that this noise is absent in Experiment 1. This interpretation is also consistent with the finding that SD in 'Endpoint' conditions tends to be lower in Experiment 1 than in Experiment 2, whereas SDs in the three metric conditions do not show a tendency to differ between Experiments 1 and 2 (compare the Direct comparison between reach-to-point movements (Experiment 1) and probe stimulus adjustments (Experiment 2) section). 
Discussion
The experiments reported here were designed to test if the brain uses non-metric information to represent locations. We observed that the variability of both visually guided reach-to-point movements (Experiment 1) and probe stimulus adjustments (Experiment 2) was lowest when subjects could direct their response at a target, which was the only condition that permitted the use of a non-metric representation. In the other three conditions, subjects showed greater variability, but still performed reasonably well. Variability in 'Endpoint' conditions could not be predicted based on variability in the metric 'Head/Eye Centered' and 'Hand Centered' conditions. Taken together, these results suggest that the brain represents locations non-metrically. The fact that responses scaled metrically in the metric conditions in our experiments suggests that the brain also has access to a metric representation. Before discussing the potential implications of our interpretation, we will address arguments that could be raised against it. 
Ruling out potential alternative explanations of our results
It could be argued that subjects were less variable in the ‘Endpoint’ conditions because they were more familiar with this kind of task. But if practice were an important factor, then one would expect that not only variability in movement magnitude, but also variability in movement direction, should be higher in the other three presentation conditions, at least in Experiment 1. But this was not the case. Importantly, a familiarity-based argument can also not be easily invoked to explain the data from Experiment 2. The button-pressing task was an unfamiliar one and yet again performance in the ‘Endpoint’ condition was better than performance in any of the other conditions. 
The similarity of responses in Experiments 1 and 2 also rules out proprioception and/or movement planning as a potential explanation for improved performance in ‘Endpoint’ conditions. To generate a response in the reach-to-point task in Experiment 1, subjects invoke a multitude of steps involved in reach planning and control that recruit visual and proprioceptive feedback and feed-forward mechanisms (e.g. Desmurget et al., 1998; Kawato, 1999; Wolpert & Ghahramani, 2000). Except for the processing of the relevant visual information, we do not see how the same steps that are involved in reach planning and control in Experiment 1 could be involved in the generation of the button presses in Experiment 2. It follows that the differences between presentation conditions that we observed cannot be explained by proprioceptive processes or differences in movement planning, but that they must be due to the representation of space that subjects use to generate their response. 
One might also argue that our results are limited to the specific response direction (35°) we used or to the specific relative orientation between response direction and target direction (i.e. 35° vs. −11°) that we tested. But we have shown in previous experiments that variability in performance on these kinds of tasks is independent of response direction and the relative orientation between response and target direction (Thaler & Todd, 2009a). Based on these previous results we can rule out the argument that our results do not generalize to other response directions. 
It could also be argued that the greater endpoint scatter along the movement direction in Experiment 1 reflects the fact that distance is coded separately from direction in metric hand-centered coordinates. But Thaler and Todd (2009b) showed that even though it is true that the pattern of movement scatter follows the direction in which a target is approached, this scatter reflects motor noise rather than representation noise. 
One could also argue that performance was better in ‘Endpoint’ conditions, because the other conditions required subjects to mentally transform the position of the target in the actual visual array into some sort of displaced mental anchor point in order to generate the response (i.e. the subjects engaged in mental translation and/or rotation). We think that this ‘transformation’ argument cannot explain our results for empirical and conceptual reasons. 
First, empirically, if transformations were the reason for differences in variability, then the variability of responses should increase as the number of transformations increases. For our experiment, this would mean that we would expect the following ordering of variability: 'Endpoint' conditions should have the lowest variability because they do not require any transformations. 'Head/Eye Centered' and 'Hand Centered' conditions should show more variability than 'Endpoint' conditions, because they require one transformation, i.e. the visual target magnitude has to be rotated. The highest level of variability should be found in 'Allocentric' conditions, because these conditions require two transformations, i.e. the visual target magnitude has to be translated onto the starting point and rotated. We do not find this ordering of variability in our experiment, suggesting that the need for transformations cannot explain the differences in response variability that we observe. 
Second, conceptually, the transformation argument assumes that metric distance d and direction ϕ are not sufficient to generate a response in metric conditions, but that they have to be transformed into a new anchor position P′ first. Since the position P is already given in ‘Endpoint’ conditions, a transformation is not required, and variance is lowest. It is important to realize when raising this argument, however, that existing metric models cannot produce responses based on position P alone. In fact, the only way that current metric models generate a response based on position P is that they transform P into distance d and orientation ϕ either with respect to the eye, head, hand or some other origin, and it is for this reason that P is always represented in a metric Cartesian or spherical coordinate system (e.g. Blohm, Keith, & Crawford, 2009; Buneo & Andersen, 2006; Flanders, Helms-Tillery, & Soechting, 1992; Guenther, Bullock, Greve, & Grossberg, 1994; Rosenbaum, Loukopoulos, Meulenbroek, Vaughan, & Engelbrecht, 1995; Snyder, 2000; Soechting & Flanders, 1989a, 1989b; van Pelt & Medendorp, 2008; for reviews see for example Desmurget & Grafton, 2000; Desmurget et al., 1998; Lacquaniti & Caminiti, 1998; Todorov, 2004). Thus, if one wants to ‘rescue’ the metric model by raising a transformation argument one also has to explain why metric models should need a new anchor point P′ in order to compute distance d and orientation ϕ in metric conditions, and why d and ϕ as they are provided in our metric conditions cannot be used directly to generate a response. 
In conclusion, a transformation argument might be invoked, but it raises the question why subjects should need an anchor point to provide a bridge for the computations in metric conditions in the first place. The current literature does not provide an answer to this question. Consequently, we believe that the most parsimonious explanation for higher variance in metric as compared to ‘Endpoint’ conditions is that the brain can use a non-metric representation in ‘Endpoint’ conditions, but that it cannot do so in the other three conditions. 
One final issue has to be addressed. It is possible that our results could be explained by existing metric models that relate movement variability to differences in the sensory transformations that are made across different reference frames (McGuire & Sabes, 2009; Schlicht & Schrater, 2007; Sober & Sabes, 2003, 2005). In their current form, these models deal only with movement direction. Thus, technically, they cannot be applied to our results, where movement magnitude (Experiment 1) or the probe dot adjustments (in Experiment 2) were the dependent variables. In principle, however, these models could be extended to predict performance for these variables. But even so, they still cannot explain or predict our results. 
Schlicht and Schrater (2007) proposed a model in which uncertainty about target direction is a function of gaze direction, saccade magnitude, and/or retinal eccentricity of the visual target. Using this model, Schlicht and Schrater explain the effects of those three variables on variance and bias of reaching direction (Schlicht & Schrater, 2007, Figure 5C). If the model were extended to explain movement magnitude, then one would expect that these same three variables should explain variance and bias of reaching magnitude. In our current experiments, however, we were careful to match gaze direction and retinal eccentricity across conditions in Experiments 1 and 2, as well as saccade amplitude in Experiment 2 (see section Eye movements for Experiments 1 and 2). Thus, according to Schlicht and Schrater, we should not find any differences in movement direction. Our results are certainly consistent with this prediction, but if the model were extended, then their model would also predict no differences in movement (or probe dot) magnitudes across our different presentation conditions. This was clearly not the case. Thus, it would seem that the Schlicht and Schrater model cannot account for our results. 
Sabes and colleagues have shown that bias and variability in hand movement direction can be explained with Bayesian models that integrate sensory information about hand and target position from both vision and proprioception in order to minimize overall variability (McGuire & Sabes, 2009; Sober & Sabes, 2003, 2005). The most recent model (McGuire & Sabes, 2009) assumes that five sensory signals are potentially available: vision of the fingertip, proprioception of the fingertip, vision of the target, proprioception of the target, and felt gaze position. These signals are modeled as coming from independent Gaussian distributions. When a signal is unavailable, it is modeled as coming from an independent uniform distribution. Prior distributions, which are combined with the sensory signals using Bayes' rule, are assumed to be uniform as well, but other forms are possible. With regard to our experiment, the model predicts that sensory signals about felt gaze position, vision of the fingertip, proprioception of the fingertip and proprioception of the target should follow the same distribution in all conditions, because subjects had exactly the same sensory information about these variables in all the conditions used in our experiment. (There was no proprioceptive information about target position in our experiment, since target position was specified visually, so proprioceptive signals relating to target position should be distributed uniformly.) Thus, according to McGuire and Sabes' model, the only variable that could explain differences in performance across our presentation conditions is the variance of visual signals about target position. Importantly, the magnitude of this variable is a free parameter in the model and the only assumption regarding variance of visual signals about the target is that it should increase linearly with the distance of the target from (felt) gaze center (McGuire & Sabes, 2009; supplementary online methods, 'Model Fitting'). As described above, target eccentricity and gaze direction were matched across conditions in our experiment, and it follows therefore that McGuire and Sabes would predict the same variance in movement direction in all our conditions, just as Schlicht and Schrater (2007) would. Again, our directional data are consistent with this prediction. But if the McGuire and Sabes model were extended to the magnitude of a movement (or even a probe dot adjustment), then the model would only explain our results if one arbitrarily chose a higher value for the (currently) free parameter 'visual target variance' in the metric compared to 'Endpoint' conditions in our experiment. (The same would hold for the choice of parameters for the prior distributions if one were to use these to explain our results.) In summary, our results (and our argument) address a variable in McGuire and Sabes' model which is currently a free parameter and some kind of a 'black box'. Thus, even though McGuire and Sabes' (2009) model is not inconsistent with our results, it does not predict them a priori. It follows that our results underscore those parts of the model that are currently underspecified and that would have to be extended in order to deal with them. 
In conclusion, we are not aware of any metric model that can predict our results, without invoking a go-between target, or without invoking an arbitrarily chosen difference in the visual variance from the start. Neither of these solutions can be easily justified within the context of the current literature. From the point of view of existing metric models, our result is negative; i.e., we show that the models fail, and that to explain our results a radically new model has to be developed. The pattern of results that we report with respect to variance across conditions is complex and is further modulated by differences in eye movements. In addition, it is unclear whether or not metric and non-metric representations are independent from one another. It is also unclear which particular kind of non-metric representation might be used (i.e. topological, ordered, or some other). In short, any model we might come up with at this stage would require too many free parameters to be meaningful. Future work is needed to limit the range of these free parameters. Nevertheless, it should be emphasized once more that our findings cannot be explained by any model that represents locations in a purely metric fashion. It would appear that the best account of our results is likely to come from a model that acknowledges that the brain represents locations non-metrically. 
Implications for computational modeling
The main advantage of a non-metric representation is that it would allow the brain to compute locations of visible targets more efficiently. There are two reasons why this is the case. First, as mentioned in the Introduction, the computation of non-metric structure is better constrained from a mathematical point of view than is the computation of metric structure, which makes it more robust in the presence of sensory uncertainty (e.g. Beardsley et al., 1995; Faugeras, 1995; Koenderink & van Doorn, 1991; Robert et al., 1997). Consistent with this argument, psychophysical studies have shown that people generally have a poor metrical representation of the 3-D structure of the world (for review see Todd, 2004; Todd & Norman, 2003). Second, the brain could use a non-metric representation alongside a metric representation, which creates redundancy gain and would increase reliability according to the MLE model, for example. 
A great deal of research has focused on how visual information about visible target locations is transformed into movement parameters and which neural structures are involved in those computations (e.g. Blohm et al., 2009; Buneo & Andersen, 2006; Flanders et al., 1992; Snyder, 2000; Soechting & Flanders, 1989a, 1989b; for reviews see for example Desmurget & Grafton, 2000; Desmurget et al., 1998; Lacquaniti & Caminiti, 1998; Todorov, 2004). We believe that our results provide a new way of looking at these questions. 
Traditionally, neuronal firing patterns observed in the context of visually guided movements towards visible targets have been interpreted in terms of metric coding. For example, in the context of visually guided saccades, neural activation in the superior colliculus is thought to encode target position metrically in terms of distance and direction with respect to the fovea (Girard & Berthoz, 2005). We think, however, that it is entirely plausible that the coding of visual locations that is used to program the appropriate motor response is topological. To generate the appropriate eye movement based on a topological code, for example, one needs only a link between 'neurons' that represent visual locations and 'neurons' that code the desired state of the ocular apparatus. The same idea can be equally applied to visually guided hand movements. Thus, to perform a goal-directed hand movement one needs only a link between representation neurons that code locations in visual space and neurons that code the desired state of the limb. If a model other than the equilibrium point model (or versions thereof) is entertained, it is also required that the current state of the effector is coded so that the appropriate motor commands can be computed. However, the current state of the effector could be coded in another map which could be linked to the visual representation and output maps. For both eye and hand movements, the link between the non-metric representation of the target and the desired goal state of the effector could be established and continuously calibrated through experience. As stated in the Introduction, these adaptive processes would also be responsible for the emergence of metrically scaled behavior. In the robotics literature, the computational feasibility of topological mappings between visual and motor space for the control of multi-joint robot arms (e.g. Ritter, Martinetz, & Schulten, 1989) as well as the applicability of non-metric spatial representations for visually guided robotic tasks (e.g. Beardsley et al., 1995; Robert et al., 1997) are well established. 
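To make the idea of a direct, learned link concrete, here is a deliberately simple toy sketch in which discrete visual locations are associated with stored effector goal states through a plain lookup table. It is only an illustration of the verbal argument above; the location labels and joint angles are invented, and it is not a model taken from the robotics work cited.

```python
# Toy illustration of a non-metric (purely associative) visuomotor link.
visual_to_goal_state = {}

def calibrate(visual_location, achieved_effector_state):
    """'Experience' step: associate a visual location with the effector state
    that successfully acquired a target at that location."""
    visual_to_goal_state[visual_location] = achieved_effector_state

def plan_movement(visual_location):
    """Response generation: no distance or direction is computed; the stored
    goal state linked to this location is simply retrieved."""
    return visual_to_goal_state[visual_location]

calibrate("upper-left target", {"shoulder_deg": 30, "elbow_deg": 110})
print(plan_movement("upper-left target"))
```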
It is important to note in this context that the use of a topological representation for movement planning towards locations does not require that the equilibrium-point model holds (Bizzi, Accornero, Chapple, & Hogan, 1984; Feldman, 1966, 1986; Polit & Bizzi, 1979), a model whose physiological validity is heavily debated in the literature (Desmurget et al., 1998; Feldman & Levin, 2009; Kistemaker, Van Soest, & Bobbert, 2007). However, the use of a topological representation for movement planning towards locations does require that endpoint coding holds, and the equilibrium-point model is only one of a number of possible endpoint-coding models (De Grave, Brenner, & Smeets, 2004; Thaler & Todd, 2009b). We showed recently that the brain uses endpoint coding to plan movements in conditions such as our ‘Endpoint’ conditions (Thaler & Todd, 2009b). This finding lends empirical plausibility to the idea of a topological representation for movement planning towards locations. Nevertheless, it is important to keep in mind that endpoint coding models are also consistent with the use of a metric head/eye centered representation for movement planning, and this is in fact the representational format we used in previous work (Thaler & Todd, 2009b). 
Clearly, the response-generation process that applies to the generation of reach-to-point movements would not work for our button-pressing task in Experiment 2, because subjects performed this task for the first time. For example, it is unlikely that there is a direct link between the location of the target dot and the number of button presses required to reach that dot. Nonetheless, the button-pressing task could be achieved using a non-metric model if we invoke mechanisms that can retain location information in memory and that can compare the location of the probe dot to the location stored in memory. According to this idea, sensory input during the presentation of the target dot would specify a location and this location would be stored. After the target dot disappears, the probe dot that is now visible creates its own sensory input and thus specifies a location. To perform the adjustment, the location of the probe dot and the location in memory are compared, and upon a match the adjustment is terminated, i.e. the probe dot comes to a halt at its final position. A similar 'memory and match' mechanism could also be employed during tasks such as location discrimination judgments. 
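The 'memory and match' idea can be written down as a very small procedure; the sketch below is only a toy rendering of the verbal description, with an arbitrary step size and tolerance.

```python
def adjust_probe(stored_target_location, probe_start, step=1.0, tolerance=0.5):
    """Toy 'memory and match' loop: move the probe one step per 'button press'
    until its location matches the location held in memory, then stop."""
    probe, presses = probe_start, 0
    while abs(probe - stored_target_location) > tolerance and presses < 1000:
        probe += step if probe < stored_target_location else -step
        presses += 1
    return probe, presses

final_position, n_presses = adjust_probe(stored_target_location=120.0, probe_start=80.0)
print(final_position, n_presses)  # -> 120.0 40
```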
Even though subjects performed most reliably in ‘Endpoint’ conditions, they could still perform the metric tasks and the SDs in these tasks were reasonably low. In fact, in Experiment 1 it is impossible to tell from the movement kinematics which condition subjects were performing. As mentioned earlier, metric response scaling in metric conditions shows that subjects can represent metric visual information—possibly in combination with non-metric representations. This interpretation is consistent with work that suggests that the human brain uses both non-metric and metric representations to navigate large-scale environments (Foo et al., 2005). Taken together, the results suggest that the computational processes used by the visuomotor system are quite flexible. Current and future models of visuomotor planning should be equally flexible, and the four conditions used in our experiments could provide a useful yardstick for testing the validity of these models (see also our discussion of the relationship of our results to existing models in Ruling out potential alternative explanations of our results section). The idea of a computationally flexible visuomotor system is not new (Desmurget et al., 1998; McGuire & Sabes, 2009; Sober & Sabes, 2005; Todorov & Jordan, 2002), but it is new to suggest that the way visual information is presented may affect how movements are planned and controlled (see also Thaler & Todd, 2009b). 
Implications for understanding of visual processing for perception and action
Lastly, our results have interesting implications for the understanding of visual processing for perceptual judgments and motor action. It has been proposed that the anatomical segregation into dorsal and ventral visual streams (Ungerleider & Mishkin, 1982) corresponds to a functional specialization into visual processing for perceptual judgments and motor action (Goodale & Milner, 1992; Milner & Goodale, 1995, 2008). Although Goodale and Milner have claimed that the computations used to localize a target by vision-for-perception and vision-for-action are quite different, they have not been explicit about what the nature of those differences might be, aside from arguing that perception is more 'scene-based' and action is more 'egocentric'. They have been silent about whether metric or non-metric computations are used. The present results suggest that the locations of visible targets are computed using a non-metric format both for well-practiced visually guided reaching movements and for more arbitrary responses, such as the button-pressing task. Only when people have to make a response that is not aimed directly at the target but that still takes its distance into account does the brain appear to use metric computations. Although it seems likely that non-metric computations are used by the visuomotor networks in the dorsal stream, the way in which the distinction between metric and non-metric computations maps onto Goodale and Milner's two-visual-systems proposal is not yet clear. In any case, in developing arguments about how perception and action might differ in the way they use visual information, it is important to take into account the nature of the computations that might be required to localize targets and to generate the response. 
Conclusion
In summary, our results strongly suggest that the brain uses non-metric information to represent locations. More studies are needed to further probe the computational processes and neural structures involved in the various tasks introduced in the current experiments. 
Appendix A
Here we describe how we removed linear effects of response magnitude on the SDs of the response magnitude. For brevity, we describe this procedure only for the reaching responses (Experiment 1) but the effects on the SDs of the button-pressing responses (Experiment 2) can be obtained by making the appropriate substitutions. Similarly, the computations for removing the linear effects of target distance on both the reaching and the button-pressing responses can also be obtained by substituting the appropriate values in the equations. 
In a first step, we used the least squares method to obtain the linear function that predicts SD of movement magnitude based on movement magnitude across presentation conditions and target magnitudes for each subject separately. This linear function has the form ŝ_ijk = a_i + b_i·d_ijk, where ŝ_ijk is the predicted SD of movement magnitude for a particular subject i, presentation condition j and target magnitude k, d_ijk is the movement magnitude for a particular subject, presentation condition, and target magnitude, and a_i and b_i are subject-specific intercept and slope parameters. In a second step, we computed the residual rS_ijk = s_ijk − ŝ_ijk, which is the difference between the observed SD of movement magnitude, s_ijk, and the predicted SD of movement magnitude, ŝ_ijk, for a particular presentation condition, target magnitude, and subject. It follows that the residual rS_ijk is the amount of observed SD of movement magnitude independent of the linear effects of movement magnitude. The average residual for each subject and presentation condition was obtained by averaging residuals across distances for a particular presentation condition and subject, i.e. rS_ij = (1/n) Σ_{k=1}^{n} rS_ijk, where n is the number of distances per presentation condition, which was four in our experiment. The average residual for each presentation condition can be negative or positive. In contrast, the average residual across presentation conditions, i.e. rS_i = (1/m)(1/n) Σ_{j=1}^{m} Σ_{k=1}^{n} rS_ijk, where m is the number of presentation conditions, is by definition always zero. It follows that removing linear effects of movement magnitude also removes subject-specific biases in SD of movement magnitude. 
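A minimal sketch of this residualization for a single subject is given below, assuming the per-condition, per-magnitude means and SDs are already available as arrays (the numbers are placeholders). Note that the grand mean of the residuals is zero by construction, as stated above.

```python
import numpy as np

# One subject: rows = 4 presentation conditions, columns = 4 target magnitudes.
movement_magnitude = np.array([[ 95., 115., 135., 155.],
                               [ 90., 110., 130., 150.],
                               [100., 120., 140., 160.],
                               [ 98., 118., 138., 158.]])
sd_magnitude = np.array([[5.0, 6.0, 7.0, 8.0],
                         [6.0, 7.5, 8.5, 9.5],
                         [8.0, 7.8, 7.5, 7.2],
                         [6.5, 7.0, 8.5, 7.5]])

# Fit one line per subject across all conditions and magnitudes ...
slope, intercept = np.polyfit(movement_magnitude.ravel(), sd_magnitude.ravel(), 1)

# ... and keep the residuals: observed SD minus the SD expected from magnitude alone.
residual_sd = sd_magnitude - (intercept + slope * movement_magnitude)

print(residual_sd.mean(axis=1))       # average residual per presentation condition
print(round(residual_sd.mean(), 12))  # grand mean is ~0 (up to numerical precision)
```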
Appendix B
Here we describe how we computed magnitude-corrected SDs that were substituted in Equation 4. For brevity, we only describe the procedure for SD of movement magnitude for the reaching response (Experiment 1). Computations for SD of the adjusted (button-pressing) magnitude can be obtained by making the appropriate substitutions. 
Linear magnitude correction
In a first step, we used the least squares method to obtain the linear function that predicts SD of magnitude based on response magnitude for each subject and presentation condition separately. This linear function has the form ŝ_ijk = a_ij + b_ij·d_ijk, where ŝ_ijk is the predicted SD of movement magnitude for a particular subject i, presentation condition j and target magnitude k, d_ijk is the movement magnitude for a particular subject, presentation condition and target magnitude, and a_ij and b_ij are subject- and presentation-condition-specific coefficients of the linear polynomial, i.e. intercept and slope parameters. To obtain magnitude-corrected SDs for 'Head/Eye Centered' and 'Hand Centered' conditions, SD_Head/Eye_MR and SD_Hand_MR, we substituted the movement distances obtained in 'Endpoint' conditions, d_i,Endpoint,k, into the equations that predict ŝ_ijk for 'Head/Eye Centered' and 'Hand Centered' conditions. To obtain the MLE prediction, SD_Head/Eye_MR and SD_Hand_MR were then substituted into Equation 4. Predictions were computed for each subject and target magnitude separately. 
Quadratic magnitude correction
For the quadratic magnitude correction, we used the least squares method to obtain the quadratic function that predicts SD of magnitude based on response magnitude for each subject and presentation condition. This quadratic function has the form ŝ_ijk = a_ij + b_ij·d_ijk + c_ij·d_ijk², where ŝ_ijk is the predicted SD of movement magnitude for a particular subject i, presentation condition j and target magnitude k, d_ijk is the movement magnitude for a particular subject, presentation condition and target magnitude, and a_ij, b_ij and c_ij are subject- and presentation-condition-specific coefficients of the polynomial. The remaining computations are identical to the linear case. 
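Both correction variants can be expressed in a few lines; the sketch below fits SD as a polynomial in response magnitude within one condition and evaluates the fit at the 'Endpoint' magnitudes (degree 1 gives the linear correction, degree 2 the quadratic one). All numerical values are placeholders.

```python
import numpy as np

def magnitude_corrected_sd(sd_cond, magnitudes_cond, magnitudes_endpoint, degree=1):
    """Fit SD as a function of response magnitude within one condition and
    evaluate the fit at the magnitudes observed in 'Endpoint' conditions."""
    coeffs = np.polyfit(magnitudes_cond, sd_cond, degree)
    return np.polyval(coeffs, magnitudes_endpoint)

# Placeholder values for one subject (mm).
sd_head_eye  = np.array([8.0, 7.8, 7.5, 7.2])
mag_head_eye = np.array([100., 120., 140., 160.])
sd_hand      = np.array([6.0, 7.5, 8.5, 9.5])
mag_hand     = np.array([90., 110., 130., 150.])
mag_endpoint = np.array([95., 115., 135., 155.])

sd_head_eye_mr = magnitude_corrected_sd(sd_head_eye, mag_head_eye, mag_endpoint, degree=1)
sd_hand_mr     = magnitude_corrected_sd(sd_hand, mag_hand, mag_endpoint, degree=2)
# These magnitude-corrected SDs would then be substituted into Equation 4.
```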
Appendix C
Here we describe how we computed motor noise and how we used it to generate MLE + 40% motor noise predictions for Experiment 1. We implemented these computations only for Experiment 1, because there is no motor noise model for probe-dot adjustments. 
Under the assumption that motor and representation noise are additive, movement variability is the sum of representation and motor noise, i.e. σ² = σ_Representation² + σ_Motor². We have shown previously that SD in 'Endpoint' conditions can be used to estimate motor noise, such that σ̂_Motor² = k(σ_Endpoint²), where k denotes the proportion of motor noise to overall movement variability (Thaler & Todd, 2009b). In the current experiments, we can estimate σ_Endpoint² using SD_Endpoint. It follows that the simplest estimate of motor noise, σ̂_Motor², for each target magnitude can be obtained using Equation C1:

$$\hat{\sigma}_{Motor}^{2} = k\,\left(SD_{Endpoint}^{2}\right). \tag{C1}$$

Previous results suggest that movements with kinematics comparable to those observed in the current experiments have 40% motor noise (Thaler & Todd, 2009b). Thus, we chose k = 0.4 for our simulations. To obtain an estimate of representation noise, σ̂_Representation², in our experiment, we can then simply subtract σ̂_Motor² from SD² for each target magnitude. To generate MLE predictions for our experiments we therefore used 'Endpoint' conditions to estimate σ̂_Motor² for each target magnitude and subject. We then subtracted σ̂_Motor² from SD_Head/Eye² and SD_Hand² for each target magnitude and subject and substituted the remainder into Equation 4 to yield the metric MLE prediction, σ̂_Representation_Endpoint². To obtain the MLE prediction plus motor noise, σ̂_Motor² was added to σ̂_Representation_Endpoint² for each target magnitude and subject, i.e. σ̂_Endpoint = √(σ̂_Representation_Endpoint² + σ̂_Motor²). Ninety-five percent confidence intervals were computed based on the difference between σ̂_Endpoint and SD_Endpoint. 
For the computations described so far, we assume that motor noise for a given target magnitude is the same across presentation conditions. This assumption can be questioned, because motor noise depends on movement magnitude and movement magnitude was actually different amongst the different presentation conditions. 
Fortunately, we can use linear magnitude correction functions to reduce the assumption of equal motor noise to the assumption that motor noise scales proportionally with movement magnitude in the same way across presentation conditions. The latter assumption is justified given that movement kinematics (i.e. speed, duration, maximum speed, curvature) were similar across presentation conditions, and thus motor noise would be expected to scale equally with movement magnitude across those conditions (van Beers et al., 2004). To obtain motor noise predictions that scale proportionally with movement magnitude, we simply computed the best-fitting linear function that predicts SD based on movement magnitude in 'Endpoint' conditions for each target magnitude and subject (see previous sections for details on linear magnitude correction functions). To predict magnitude-corrected motor noise for 'Head/Eye Centered' and 'Hand Centered' conditions, we then substituted the movement magnitudes observed in 'Head/Eye Centered' and 'Hand Centered' conditions into the linear magnitude correction function obtained for 'Endpoint' conditions and substituted the result of this prediction into Equation C1. The result of these computations is the amount of motor noise that is expected for the movement magnitudes observed in 'Head/Eye Centered' and 'Hand Centered' conditions. The remaining computations are identical to those for the non-magnitude-corrected motor noise. We used non-magnitude-corrected motor noise in combination with the non-magnitude-corrected MLE prediction. For the linear and quadratic magnitude-corrected MLE predictions, we used linear magnitude-corrected motor noise. 
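For completeness, the sketch below walks through the non-magnitude-corrected version of this motor-noise variant for one subject, with k = 0.4 and placeholder SD values; the magnitude-corrected version only changes how the motor-noise estimate is obtained.

```python
import numpy as np

K_MOTOR = 0.4  # assumed proportion of motor noise (Thaler & Todd, 2009b)

# Placeholder SDs (mm) for one subject, one value per target magnitude.
sd_endpoint = np.array([4.0, 5.0, 6.0, 7.0])
sd_head_eye = np.array([6.0, 7.5, 9.0, 10.5])
sd_hand     = np.array([5.0, 6.5, 8.0, 9.5])

# Equation C1: estimate motor variance from 'Endpoint' variability.
var_motor = K_MOTOR * sd_endpoint ** 2

# Remove motor noise from the metric conditions to isolate representation noise.
var_head_eye_rep = sd_head_eye ** 2 - var_motor
var_hand_rep     = sd_hand ** 2 - var_motor

# Combine the representation variances via Equation 4, then add motor noise back.
var_rep_endpoint = var_head_eye_rep * var_hand_rep / (var_head_eye_rep + var_hand_rep)
sd_endpoint_predicted = np.sqrt(var_rep_endpoint + var_motor)

print(sd_endpoint_predicted - sd_endpoint)  # prediction error per target magnitude
```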
Acknowledgments
This work was supported by the Natural Sciences and Engineering Research Council of Canada (MGA) and a Postdoctoral Fellowship of the Ministry of Research and Innovation (Ontario) (LT). We thank two anonymous reviewers, Gunnar Blohm and Denise Henriques for helpful comments and discussions regarding a previous version of this manuscript. 
Commercial relationships: none. 
Corresponding author: Lore Thaler. 
Address: Department of Psychology, The University of Western Ontario, Social Science Building, Room 6238, London, Ontario, N6A 5C2, Canada. 
Footnotes
1  In the current paper, we use the term 'metric' to refer to quantitative distance and direction. In the mathematical literature, a metric geometry is defined as a set of points and a distance function d(x, y), which defines the distance between any two points x and y in the set and which satisfies the three metric axioms of (1) isolation, i.e. d(x, y) = 0 if x = y, (2) symmetry, i.e. d(x, y) = d(y, x), and (3) the triangle inequality, i.e. d(x, y) + d(y, z) ≥ d(x, z) (Coxeter, 1969). This definition implies that metric geometries permit the computation of quantitative distance and direction. Thus, in the current paper we use the term 'metric' differently from the way it is used in the mathematical literature, but the way we use it is consistent with the mathematical definition.
2  To adjust the degrees of freedom using Tukey's HSD procedure, we chose df = 9 and k = 6, where k is the number of means to be compared. The reason for choosing k = 6 instead of k = 16, which is the actual number of groups that we have in our experiment, was that we computed only 12 out of all 120 possible post-hoc comparisons. Given a fixed number of groups k, Tukey's HSD test corrects degrees of freedom based on the assumption that all possible comparisons between the k groups are going to be computed, i.e. the number of comparisons is assumed to be k(k − 1)/2. Thus, if we had chosen k = 16, which is the number of groups that we actually have in our experiment, we would have corrected the degrees of freedom assuming that the number of comparisons is 120, which would make our test very conservative. Choosing k = 6 corrects the degrees of freedom assuming that the number of comparisons is 15, which makes our test only slightly more conservative than necessary.
3  The reader might wonder why we predict variance of response magnitude, but not bias. The reason for concentrating on the variance is that a prediction of bias would be an unfair test of the metric model, because bias of response magnitude depends on the direction in which visual information is specified (Thaler & Todd, 2009a). Since the direction in which visual information is specified differs between metric and endpoint conditions in our experiments, it would be expected that metric conditions show different biases in response magnitude from endpoint conditions, and this is what we observe in our data. It follows that prediction of bias in endpoint conditions based on bias in the metric conditions would fail. However, these differences in bias (and therefore the failure of the metric prediction) would be due to differences in the orientation in which the visual information is specified, not due to the fact that the representation that is used in the endpoint condition is non-metric. Luckily, we have shown previously, that variance of response magnitude does not depend on response direction (Thaler & Todd, 2009a). Therefore, prediction of variance provides a fair test of the metric model in our experiments.
References
Andersen R. A. Snyder L. H. Bradley D. C. Xing J. (1997). Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Reviews in Neuroscience, 20, 303–330.
Atkeson C. G. Hollerbach J. M. (1985). Kinematic features of unrestrained vertical arm movements. Journal of Neuroscience, 5, 2318–2330.
Beardsley P. A. Reid I. D. Zisserman A. Murray D. W. (1995). Active visual navigation using non-metric structure. In Proceedings of the 5th International Conference on Computer Vision, Boston (pp. 58–65). IEEE Computer Society Press.
Bizzi E. Accornero N. Chapple W. Hogan N. (1984). Posture control and trajectory formation during arm movement. Journal of Neuroscience, 4, 2738–2744.
Blohm G. Keith G. P. Crawford J. D. (2009). Decoding the cortical transformations for visually-guided reaching in 3D space. Cerebral Cortex, 19, 1372–1393.
Blohm G. Khan A. Z. Ren L. Schreiber K. M. Crawford J. D. (2008). Depth estimation from retinal disparity requires eye and head orientation signals. Journal of Vision, 8(16):3, 1–23, http://journalofvision.org/8/16/3/, doi:10.1167/8.16.3.
Bock O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64, 476–482.
Brenner E. Smeets J. B. J. Remijnse-Tamerius H. C. (2002). Curvature in hand movements as a result of visual misjudgements of direction. Spatial Vision, 15, 393–414.
Buneo C. A. Andersen R. A. (2006). The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44, 2594–2606.
Colby C. L. Goldberg M. E. (1999). Space and attention in parietal cortex. Annual Reviews in Neuroscience, 22, 319–349.
Coxeter H. S. M. (1969). Introduction to geometry. New York: John Wiley & Sons.
DeGraaf J. B. Sittig A. C. Denier van der Gon J. J. (1991). Misdirections in slow goal-directed arm movements and pointer setting tasks. Experimental Brain Research, 84, 434–438.
De Grave D. D. J. Brenner E. Smeets J. B. J. (2004). Illusions as a tool to study the coding of pointing movements. Experimental Brain Research, 55, 56–62.
Desmurget M. Grafton S. (2000). Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Sciences, 4, 423–431.
Desmurget M. Jordan M. Prablanc C. Jeannerod M. (1997). Constrained and unconstrained movements involve different control strategies. Journal of Neurophysiology, 77, 1644–1650.
Desmurget M. Pelisson D. Rossetti Y. Prablanc C. (1998). From eye to hand: Planning goal-directed movements. Neuroscience and Biobehavioral Reviews, 22, 761–788.
Ernst M. O. Banks M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
Faugeras O. (1995). Stratification of three-dimensional vision: Projective, affine, and metric representations. Journal of the Optical Society of America A, 12, 465–484.
Feldman A. G. (1966). Functional tuning of the nervous system during control of movement or maintenance of a steady posture: III. Mechanographic analysis of the execution by man of the simplest motor tasks. Biophysics, 11, 766–775.
Feldman A. G. (1986). Once more on the equilibrium-point hypothesis (λ model) for motor control. Journal of Motor Behavior, 18, 17–54.
Feldman A. G. Levin M. F. (2009). The equilibrium-point hypothesis—Past, present and future. In Sternad D. (Ed.), Progress in motor control—A multidisciplinary perspective (pp. 699–726). New York: Springer.
Fisher N. I. (1993). Statistical analysis of circular data. New York, NY: Cambridge University Press.
Fitts P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47, 381–391.
Flanders M. Helms-Tillery S. I. Soechting J. F. (1992). Early stages in a sensorimotor transformation. Behavioral and Brain Sciences, 15, 309–362.
Foo P. Warren W. H. Duchon A. Tarr M. J. (2005). Do humans integrate routes into a cognitive map? Map- versus landmark-based navigation of novel shortcuts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 195–215.
Franz V. H. Fahle M. Buelthoff H. H. Gegenfurtner K. R. (2001). Effects of visual illusions on grasping. Journal of Experimental Psychology: Human Perception and Performance, 27, 1124–1144.
Gegenfurtner K. R. Franz V. H. (2007). A comparison of localization judgments and pointing precision. Journal of Vision, 7(5):11, 1–12, http://journalofvision.org/7/5/11/, doi:10.1167/7.5.11.
Girard B. Berthoz A. (2005). From brainstem to cortex: Computational models of saccade generation circuitry. Progress in Neurobiology, 77, 215–251.
Goodale M. A. Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Gordon J. Ghilardi M. F. Ghez C. (1994). Accuracy of planar reaching movements: I. Independence of direction and extent variability. Experimental Brain Research, 99, 97–111.
Guenther F. H. Bullock D. Greve D. Grossberg S. (1994). Neural representations for sensory-motor control: III. Learning a body-centered representation of 3-D target position. Journal of Cognitive Neuroscience, 6, 341–358.
Kawato M. (1999). Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9, 718–727.
Kistemaker D. A. Van Soest A. K. Bobbert M. F. (2007). Equilibrium point control cannot be refuted by experimental reconstruction of equilibrium point trajectories. Journal of Neurophysiology, 98, 1075–1082.
Knill D. C. Pouget A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation for perception and action. Trends in Neuroscience, 27, 712–719.
Koenderink J. J. van Doorn A. J. (1991). Affine structure from motion. Journal of the Optical Society of America A, 8, 377–385.
Lacquaniti F. Caminiti R. (1998). Visuo-motor transformations for arm reaching. European Journal of Neuroscience, 10, 195–203.
McGuire L. M. M. Sabes P. N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nature Neuroscience, 12, 1056–1061.
Messier J. Kalaska J. F. (1997). Differential effect of task conditions on errors of direction and extent of reaching movements. Experimental Brain Research, 115, 469–478.
Milner A. D. Goodale M. A. (1995). The visual brain in action. Oxford: Oxford UP.
Milner A. D. Goodale M. A. (2008). Two visual systems re-viewed. Neuropsychologia, 46, 774–785.
Polit A. Bizzi E. (1979). Characteristics of motor programs underlying arm movements in monkeys. Journal of Neurophysiology, 42, 183–194. [PubMed] [PubMed]
Ritter H. Martinetz T. SchuIten K. (1989). Topology conserving maps for learning visuomotor-coordination. Neural Networks, 2, 159–168. [CrossRef]
Robert L. Zeller C. Faugeras O. Hebert M. (1997). Applications of non-metric vision to some visually-guided robotics tasks. In Aloimonos Y. (Ed.), Visual navigation: From biological systems to unmanned ground vehicles, advances in computer vision vol. II (pp. 898–134). Mahwah, New Jersey, USA: Lawrence Erlbaum Associates.
Rosenbaum D. A. Loukopoulos L. D. Meulenbroek R. G. J. Vaughan F. Engelbrecht S. E. (1995). Planning reaches by evaluating stored postures. Psychological Review, 102, 28–67. [PubMed] [CrossRef] [PubMed]
Schlicht E. Schrater P. (2007). Impact of coordinate transformation uncertainty on human sensorimotor control. Journal of Neurophysiology, 97, 4203–4214. [PubMed] [Article] [CrossRef] [PubMed]
Snyder L. H. (2000). Coordinate transformations for eye and arm movements in the brain. Current Opinion in Neurobiology, 10, 747–54. [PubMed] [CrossRef] [PubMed]
Snyder L. H. Grieve K. L. Brotchie P. Andersen R. A. (1998). Separate body- and world-referenced representations of visual space in parietal cortex. Nature, 394, 887–91. [PubMed] [CrossRef] [PubMed]
Sober S. J. Sabes P. N. (2003). Multisensory integration during motor planning. Journal of Neuroscience, 23, 6982–6992. [PubMed] [PubMed]
Sober S. J. Sabes P. N. (2005). Flexible strategies for sensory integration during motor planning. Nature Neuroscience, 8, 490–497. [PubMed] [Article] [PubMed]
Soechting J. F. Flanders M. (1989a). Errors in pointing are due to approximations in sensorimotor transformations. Journal of Neurophysiology, 62, 595–608. [PubMed]
Soechting J. F. Flanders M. (1989b). Sensorimotor representations for pointing to targets in three-dimensional space. Journal of Neurophysiology, 62, 582–594. [PubMed]
Soechting J. F. Flanders M. (1992). Moving in three dimensional space: Frames of reference, vectors and coordinate systems. Annual Reviews in Neuroscience, 15, 167–191. [PubMed] [CrossRef]
Thaler L. Todd J. T. (2009a). The control parameters used by the CNS to guide the hand depend on the visuo-motor task: Evidence from visually guided pointing. Neuroscience, 159, 578–598. [PubMed] [CrossRef]
Thaler L. Todd J. T. (2009b). The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments. Neuropsychologia, 47, 1227–1244. [PubMed] [CrossRef]
Todd J. T. (2004). The visual perception of 3D shape. Trends in Cognitive Sciences, 8, 115–121. [PubMed] [Article] [CrossRef] [PubMed]
Todd J. T. Norman J. F. (2003). The visual perception of 3-D shape from multiple cues: Are observers capable of perceiving metric structure? Perception & Psychophysics, 65, 31–47. [PubMed] [CrossRef] [PubMed]
Todorov E. (2004). Optimality principles in sensorimotor control. Nature Neuroscience, 7, 907–915. [PubMed] [Article] [CrossRef] [PubMed]
Todorov E. Jordan M. I. (2002). Optimal feedback control as a theory for motor coordination. Nature Neuroscience, 5, 1226–1235. [PubMed] [CrossRef] [PubMed]
Ungerleider L. G. Mishkin M. (1982). Two cortical visual systems. In Jgle, D. Goodale, M. A. Mansfield R. J. W. (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
van Beers R. J. Haggard P. Wolpert D. M. (2004). The role of execution noise in movement variability. Journal of Neurophysiology, 91, 1050–1063. [PubMed] [CrossRef] [PubMed]
Van Pelt S. Medendorp W. P. (2008). Updating target distance across eye movements in depth. Journal of Neurophysiology, 99, 2281–2290. [PubMed] [CrossRef] [PubMed]
Wolpert D. M. Ghahramani Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3, 1212–1217. [PubMed] [CrossRef] [PubMed]
Figure 1
 
Sketch of the experimental apparatus. Planes emanated from the forehead and consisted of yarn spun in random orientations around thin, clear plastic frames. Threads were arranged so as not to obstruct the view of the scene for either eye. To eliminate head movements, subjects placed their head in a combined chin-forehead rest (not shown). To eliminate eye movements, subjects directed their gaze at a fixation target. Eye movements were monitored with a webcam (not shown).
Figure 2
 
Illustration of the four presentation conditions used in the experiments. Only one target magnitude is illustrated for each presentation condition. In the actual experiments the hand was unseen, but it is drawn here for illustration. Please see Materials and methods for details regarding stimuli and instructions.
Figure 3
 
Distributions of movement endpoints and variability ellipses for the different experimental conditions in Experiment 1. Ellipse axes denote two SD around the mean. Straight lines in each ellipse denote the average movement direction, i.e. the last portion of the vector joining movement start and endpoints. Ellipses were computed based on all subjects' responses after subtracting each subject's mean. Ellipses are positioned on the average movement endpoint across all subjects. Black squares mark the endpoint that would have resulted from a movement executed veridically along the target direction over the target magnitude.
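The caption above describes the variability ellipses only verbally. Below is a minimal Python sketch of one way to compute such an ellipse: it assumes 2-D endpoint coordinates, subtracts each subject's own mean before pooling (as stated in the caption), and takes the ellipse axes from the eigen-decomposition of the pooled covariance matrix. Function and variable names are illustrative and are not the authors' analysis code.

import numpy as np

def variability_ellipse(endpoints, subject_ids, n_sd=2.0):
    """Compute a 2-SD variability ellipse from 2-D movement endpoints,
    pooling subjects after removing each subject's mean.

    endpoints   : (N, 2) array of x/y endpoint coordinates
    subject_ids : (N,) array identifying the subject of each endpoint
    Returns the grand-mean endpoint (where the ellipse is drawn), the
    semi-axis lengths (n_sd standard deviations along each principal
    axis), and the axis orientations.
    """
    endpoints = np.asarray(endpoints, dtype=float)
    subject_ids = np.asarray(subject_ids)

    # Remove each subject's mean so only within-subject variability remains.
    centered = np.empty_like(endpoints)
    for s in np.unique(subject_ids):
        mask = subject_ids == s
        centered[mask] = endpoints[mask] - endpoints[mask].mean(axis=0)

    cov = np.cov(centered, rowvar=False)        # 2 x 2 covariance of pooled, centered data
    eigvals, eigvecs = np.linalg.eigh(cov)      # principal axes of the ellipse
    axis_lengths = n_sd * np.sqrt(eigvals)      # semi-axes in units of n_sd standard deviations
    grand_mean = endpoints.mean(axis=0)         # ellipse is positioned at the average endpoint
    return grand_mean, axis_lengths, eigvecs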
Figure 4
 
(a) Average movement direction errors (in degrees) averaged across presentation conditions. Positive errors indicate errors towards the sagittal body midline. (b) SD of movement direction errors (in degrees) averaged across presentation conditions. Error bars denote standard errors of the mean between subjects.
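For readers who want to reproduce direction-error measures like those in Figure 4, the following Python sketch computes signed direction errors and their mean and SD. Whether the original analysis used plain arithmetic statistics or circular statistics (cf. Fisher, 1993) is not restated in the caption, so the arithmetic version below is an assumption made for illustration.

import numpy as np

def direction_error_stats(movement_dirs_deg, target_dirs_deg):
    """Signed direction errors (movement minus target direction, in degrees),
    wrapped into [-180, 180), and their mean and SD across trials.
    """
    errors = (np.asarray(movement_dirs_deg, dtype=float)
              - np.asarray(target_dirs_deg, dtype=float))
    errors = (errors + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return errors.mean(), errors.std(ddof=1)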
Figure 5
 
Left column: Subjects' average movement magnitude plotted as a function of target magnitude for the different presentation conditions. Diagonal lines indicate veridical performance and asterisks indicate a significant difference between the average movement magnitude and the corresponding magnitude in the ‘Endpoint’ condition (* p < .05, ** p < .01). Degrees of freedom for tests of significance were adjusted using Tukey's HSD procedure (for details see text). Right column: Subjects' average SD of movement magnitude plotted as a function of movement magnitude for the different presentation conditions. Diagonal lines indicate the best linear fit to the data. The model parameter and fit statistic (R²) are given in the lower right corner of each plot. Error bars denote standard errors of the mean between subjects.
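The right-hand panels of Figures 5 and 6 report a linear fit of response SD against response magnitude together with R². A minimal Python sketch of such a fit is shown below; the one-parameter, through-origin form is an assumption made for illustration, since the exact parameterization used by the authors is not restated in the caption.

import numpy as np

def fit_sd_vs_magnitude(magnitude, sd):
    """Fit SD = k * magnitude (a one-parameter line through the origin)
    by least squares and report the slope k and R^2.
    """
    magnitude = np.asarray(magnitude, dtype=float)
    sd = np.asarray(sd, dtype=float)

    k = np.sum(magnitude * sd) / np.sum(magnitude ** 2)   # least-squares slope
    residuals = sd - k * magnitude
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((sd - sd.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return k, r_squared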
Figure 6
 
Left column: Subjects' average adjusted magnitude plotted as a function of target magnitude. Diagonal lines indicate veridical performance and asterisks indicate a significant difference between the average adjusted magnitude and the corresponding magnitude in the ‘Endpoint’ condition (* p < .05, ** p < .01). Degrees of freedom for tests of significance were adjusted using Tukey's HSD procedure (for details see text). Right column: Subjects' average SD of adjusted magnitude plotted as a function of adjusted magnitude. Diagonal lines indicate the best linear fit to the data. The model parameter and fit statistic (R²) are given in the lower right corner of each plot. Error bars denote standard errors of the mean between subjects.
Figure 7
 
Results of MLE prediction for Experiment 1. Error bars in the left-hand column denote 95% confidence intervals around the prediction error across subjects. In all other plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because the confidence intervals were computed from the variability of the difference between observed and predicted SD, whereas the standard errors were computed from the variability of the observed and predicted SD themselves. Footnotes 2 and 3 describe the computational details of the prediction.
Figure 8
 
Results of MLE prediction for Experiment 2. Error bars in the left-hand column denote 95% confidence intervals around the prediction error across subjects. In all other plots, error bars denote standard errors of the mean across subjects. Note that the 95% confidence intervals are smaller than the standard errors, because the confidence intervals were computed from the variability of the difference between observed and predicted SD, whereas the standard errors were computed from the variability of the observed and predicted SD themselves. Footnotes 2 and 3 describe the computational details of the prediction.
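The MLE predictions in Figures 7 and 8 presumably follow the standard maximum-likelihood cue-combination rule (cf. Ernst & Banks, 2002), in which the variance of an optimally combined estimate equals the reciprocal of the summed reciprocal variances of the contributing estimates. The Python sketch below implements only that generic rule; which metric conditions are combined, and any further terms the authors use, are described in their footnotes and are not reproduced here.

import numpy as np

def mle_predicted_sd(sd_per_condition):
    """Predict the SD of an optimally combined response from the SDs
    measured in the separate contributing conditions, using the standard
    MLE combination rule: 1/var_combined = sum_i (1/var_i).

    sd_per_condition : iterable of SDs, one per contributing condition
    """
    variances = np.asarray(sd_per_condition, dtype=float) ** 2
    combined_variance = 1.0 / np.sum(1.0 / variances)
    return np.sqrt(combined_variance)

# Example with purely illustrative numbers: combining two hypothetical
# metric-condition SDs yields a smaller predicted SD than either alone.
predicted = mle_predicted_sd([1.8, 2.4])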
Table 1
 
Averages and standard deviations (in parentheses) of movement kinematics for each presentation condition in Experiment 1, computed across subjects and target magnitudes. Statistically significant differences between presentation conditions were determined using t-tests for paired samples, with degrees of freedom adjusted using Tukey's HSD procedure. Significant comparisons (p < .05) are indicated in the right column.
Measure | Endpoint | Hand Centered | Head/Eye Centered | Allocentric | Significant differences (p < .05)
Curvature (%) | 3.4 (1.3) | 3.3 (1.0) | 3.4 (1.2) | 3.1 (0.9) | none
Average Speed (cm/s) | 17.0 (4.7) | 16.0 (4.7) | 17.5 (5.6) | 16.5 (5.4) | Endpoint vs. Hand C.
Max. Speed (cm/s) | 34.7 (14.9) | 31.4 (13.1) | 33.8 (13.8) | 32.4 (15.0) | Endpoint vs. Hand C.; Endpoint vs. Allocentric
Duration (ms) | 843 (185) | 914 (209) | 988 (225) | 990 (244) | All comparisons, except Head C. vs. Allocentric and Endpoint vs. Allocentric
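The pairwise comparisons in the right-hand column of Table 1 are paired t-tests across subjects. A minimal SciPy sketch of one such comparison is given below; the duration values are hypothetical placeholders, and the Tukey-style adjustment mentioned in the caption is not reproduced.

import numpy as np
from scipy import stats

# Hypothetical per-subject mean movement durations (ms) in two presentation
# conditions; one entry per subject, values are illustrative only.
duration_endpoint = np.array([820, 850, 870, 810, 845, 830, 860, 825, 840, 880])
duration_hand     = np.array([900, 915, 930, 890, 920, 905, 940, 895, 910, 935])

# Paired t-test across subjects for the Endpoint vs. Hand-centered comparison.
t_stat, p_value = stats.ttest_rel(duration_endpoint, duration_hand)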
Table 2
 
Correlations between various measures of performance for Experiments 1 and 2, for both individual subjects and all subjects together (group). n: number of data points per subject correlation. N: number of data points for the group correlation. **: p < .01; ***: p < .001.
Measure | Subject 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Group
Magnitude (n = 16; N = 160) | 0.87 | 0.99 | 0.95 | 0.92 | 0.96 | 0.96 | 0.89 | 0.90 | 0.94 | 0.94 | 0.80***
Residual Magn. (n = 16; N = 160) | 0.29 | 0.87 | 0.72 | 0.46 | 0.18 | 0.73 | 0.69 | 0.82 | 0.20 | 0.41 | 0.61***
SD (n = 16; N = 160) | 0.12 | 0.80 | 0.65 | 0.60 | −0.20 | 0.54 | 0.48 | 0.61 | 0.35 | 0.19 | 0.32***
Residual SD (n = 16; N = 160) | 0.27 | 0.77 | 0.63 | 0.55 | −0.20 | 0.66 | 0.20 | 0.48 | 0.32 | 0.15 | 0.35***
Average Res. SD (n = 4; N = 40) | 0.40 | 0.82 | 0.95 | 0.88 | 0.08 | 0.92 | 0.31 | 0.37 | 0.30 | 0.69 | 0.47**
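Table 2 reports correlations at two levels: within each subject (n data points per subject) and for the pooled group (N data points). A minimal Python sketch of that two-level computation follows; the variable names and the pairing of measures across Experiments 1 and 2 are assumptions made for illustration.

import numpy as np

def subject_and_group_correlations(x, y, subject_ids):
    """Pearson correlations computed separately for each subject and for the
    pooled group. x and y are matched measures (e.g. corresponding values
    from Experiments 1 and 2); subject_ids assigns each data point to a subject.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    subject_ids = np.asarray(subject_ids)

    per_subject = {}
    for s in np.unique(subject_ids):
        mask = subject_ids == s
        per_subject[s] = np.corrcoef(x[mask], y[mask])[0, 1]   # within-subject correlation

    group = np.corrcoef(x, y)[0, 1]                            # pooled group correlation
    return per_subject, group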