Research Article  |   July 2007
A comparison of localization judgments and pointing precision
Karl R. Gegenfurtner, Volker H. Franz
Journal of Vision, July 2007, Vol. 7(5):11. https://doi.org/10.1167/7.5.11
Abstract

We compared the precision of perceptual localization and manual pointing. A Gaussian blob was presented 6° to the right or left of a central fixation spot on a CRT screen. Vertical lines were displayed above and below the blob. On each trial, the blob was slightly offset to the right or left with respect to the lines. The subjects had to judge whether the blob appeared to the right or to the left of the vertical lines. At the same time, they had to point to the center of the blob with their index finger. Precision for perceived position was significantly better than precision for pointing. Performance in the two tasks correlated highly across subjects. Overall, subjects pointed more leftward on trials where they judged the blob to be to the left of the lines. There was also a significant correlation for each subject between the pointing error and the perceived location error, calculated by partialling out the effect of the physical offset. The results are in agreement with the idea that the signals determining the perceived location of an object are used to guide the motor system in pointing toward it.

Introduction
The localization of visual targets is one of the most important tasks for our visual system. Whenever we want to interact with objects or persons, we first need to localize them. It is not surprising then that the visual system is remarkably good at localization. In fact, certain visual tasks like Vernier acuity can be performed much better than would be expected based on the spacing of the cone photoreceptors. Therefore, the term “hyperacuity” has been coined to characterize such high performance levels. In foveal vision, thresholds for detecting the spatial offset of Vernier targets are as low as several arc seconds (Westheimer, 1979; Westheimer & McKee, 1977). In peripheral vision, thresholds are larger, but they can still be well below the distance of individual cones. However, it is quite clear by now that performance in this type of task is not limited by factors such as cone spacing but that it depends mostly on the properties of oriented receptive fields in the visual cortex (see Wilson, 1986, for review). 
Highly optimized localization performance is certainly not an end in itself. Rather, it is an important prerequisite for nearly all of our interactions with the environment. For example, before we can grasp an object, we first need to determine the exact position of its retinal image. This information then has to be converted from a retinocentric coordinate system into a world-centered coordinate system that is independent of potential movements of our eyes, head, and body (Andersen, Snyder, Li, & Stricanne, 1993; Duhamel, Colby, & Goldberg, 1992). Only then can the motor system devise a program of movements that will eventually lead to our fingers touching the object at the proper position. While such a serial scheme of sensorimotor processing seems intuitive, it has more recently been suggested that there might, in fact, be two parallel visual systems, one mediating conscious perception of objects and the other guiding our actions.
The notion of two visual systems for perception and action is based on a large body of evidence presented by Goodale and Milner (1992), Goodale and Westwood (2004), and Milner and Goodale (1995). They described a double dissociation between shape perception and grasping in two patients. In subsequent work, they observed a similar dissociation when healthy subjects were asked to grasp targets embedded in visual illusions, for example, the Ebbinghaus illusion (Aglioti, DeSouza, & Goodale, 1995). One problem with these studies is that effects of visual illusions occur not only in perception but also in grasping (Franz, Gegenfurtner, Bülthoff, & Fahle, 2000; Pavani, Boscagli, Benvenuti, Rabuffetti, & Farné, 1999; Vishton, Rea, Cutting, & Nuñez, 1999). This led to different interpretations, with some authors opting for two parallel visual systems (Carey, 2001; Haffenden, Schiff, & Goodale, 2001; Milner & Dyde, 2003), whereas others suggested that the data can be explained by a single visual system that was exposed to different task demands in some of the studies (Franz, 2001; Franz, Bülthoff, & Fahle, 2003; Pavani et al., 1999; Vishton et al., 1999).
A safe conclusion from these numerous studies is that comparing the effects of visual illusions on perception and action is difficult. This holds not only for grasping but also for pointing. Yamagishi, Anderson, and Ashida (2001) and Kerzel and Gegenfurtner (2005) showed that manual pointing is affected by a visual illusion first described by De Valois and De Valois (1991): when a Gabor patch drifts within a stationary aperture, its position is typically misperceived as shifted in the direction of motion. Yamagishi et al. observed that this mislocalization was larger for action than for perception. Kerzel and Gegenfurtner showed that perceptual judgments for these stimuli depended heavily on the exact nature of the task: different results were found depending on whether the Gabor patch was compared to other Gabor patches, to stationary lines, or to flashed lines.
We therefore decided to use a different approach and instead compare the precision of manual pointing to the precision of perceptual localization. Rather than dealing with the idiosyncrasies of stimuli for which certain illusions exist, we are concerned here with the factors limiting performance for stimuli for which near-optimal performance can be achieved. A similar approach was used previously in comparing the perception of motion to the properties of smooth pursuit eye movements. For pursuit, there is generally excellent agreement between speed or direction discrimination thresholds and the corresponding variability of smooth pursuit eye movements (Beutter & Stone, 1998, 2000; Braun, Pracejus, & Gegenfurtner, 2006; Gegenfurtner, Xing, Scott, & Hawken, 2003; Kowler & McKee, 1987; Krauzlis & Stone, 1999; Osborne, Lisberger, & Bialek, 2005; Stone, Beutter, & Lorenceau, 2000; Stone & Krauzlis, 2003). However, when the correlation over individual trials was investigated, no such correlation was found for speed changes (Braun et al., 2006; Gegenfurtner et al., 2003). In contrast, Stone and Krauzlis (2003) obtained a significant trial-by-trial agreement between perceived direction judgments and pursuit direction. The interpretation of these results is complicated by the fact that smooth pursuit operates in a closed loop: signals about the pursuit error are used to correct ongoing pursuit, and signals about the ongoing pursuit are available to perception, which makes the comparison between eye speed and perceived speed less direct (see Stone & Krauzlis, 2003).
Here, we investigate the relationship between perceptual and motor precision for a motor subsystem that is distinctly different from the pursuit system. For pointing movements of the hand, visual feedback is thought to play a less important role (Goodale, Pelisson, & Prablanc, 1986; Paillard, 1981; but see Saunders & Knill, 2003, 2004). More complex visuomotor transformations are required, which are achieved in distinct brain regions (see Andersen & Buneo, 2002). Finally, in terms of execution, the dynamics and kinematics of eye and hand movements differ (Soechting, Buneo, Herrmann, & Flanders, 1995; Vetter, Flash, & Wolpert, 2002).
Methods
The precision of perceptual localization and manual pointing was determined using a three-stimulus alignment task. On each trial, subjects pointed to the target and gave a psychophysical judgment of its horizontal position relative to two vertical marker lines above and below the target. From the judgments, we determined psychometric functions whose steepness indicates the precision of perceptual localization. At the same time, we used the landing positions of the finger to construct manometric functions whose steepness indicates the precision of the position information available to the motor system. We also correlated perceptual and pointing responses across individual trials and across subjects to determine the degree of common variation in both tasks. 
Stimulus configuration
Stimuli were displayed on an ELO Touchsystems 17-in. color CRT monitor that was driven by a Cambridge Research VSG 2/4 graphics board at a refresh rate of 120 Hz noninterlaced. The images were generated on the monitor by reading through the picture memory in a raster scan and then interpreting the numbers in each location as a color defined in a 256-element color lookup table. Two 8-bit digital-to-analog converters, which were combined to produce an intensity resolution of 12 bits, were used to control the intensity of each of the three monitor primaries. The luminance of each of the phosphors was measured at various output voltage levels using a Graseby Optronics Model 370 radiometer with a model 265 photometric filter. A smooth function was used to interpolate between the measured points, and lookup tables were generated to linearize the relationship between voltage output and luminance. We also made sure that additivity of the three phosphors held over the range of intensities used in these experiments (Brainard, 1989). All the displays in the present experiments had a space–time-averaged luminance of 26.0 cd/m2. The monitor had a resolution of 800 × 600 pixels and extended 32 cm in width and 24 cm in height. The subject viewed the display at a comfortable reaching distance of 53 cm. This way, 1 pixel corresponded to 0.4 mm on the screen and 2.5 arcmin of visual angle. 
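The linearization step can be made concrete with a short sketch. The following is our own minimal illustration, not the authors' calibration code: the measurement values are hypothetical, and a monotonic spline stands in for the unspecified "smooth function" used to interpolate the radiometer readings.

```python
# Sketch of gamma linearization for one monitor primary (hypothetical data).
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical radiometer readings: 12-bit DAC levels vs. luminance (cd/m^2).
levels = np.array([0, 512, 1024, 2048, 3072, 4095])
luminance = np.array([0.1, 1.6, 5.2, 17.0, 34.0, 52.0])

# Smooth, monotonic interpolation of the measured gamma curve.
gamma_curve = PchipInterpolator(levels, luminance)
fine_levels = np.arange(4096)
fine_lum = gamma_curve(fine_levels)

# Invert the curve: entry i of the 256-element lookup table is the DAC
# level whose luminance is closest to the i-th step of a linear ramp.
targets = np.linspace(luminance[0], luminance[-1], 256)
lut = fine_levels[np.clip(np.searchsorted(fine_lum, targets), 0, 4095)]
```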
Figure 1A shows a typical stimulus display. Two aligned vertical marker lines, 15 arcmin wide and 3° high, were displayed 6° to the right or left of a central fixation spot on a CRT screen. A Gaussian blob was presented between the two marker lines. The blob had a standard deviation of 20 arcmin of visual angle and a peak intensity of 52 cd/m², corresponding to 100% contrast. As shown in Figure 1B, on each trial, the blob was slightly offset to the right or left with respect to the lines. Eleven different offsets were used: −16, −8, −4, −2, −1, 0, 1, 2, 4, 8, and 16 pixels, corresponding to −40, −20, −10, −5, −2.5, 0, 2.5, 5, 10, 20, and 40 arcmin of visual angle. Positive offset values indicate a blob position that is "far", that is, more eccentric, relative to the reference lines. One of the offsets was randomly chosen on each trial. At the beginning of each trial, the subjects kept a large, central button pressed on a specially devised keypad resting just below the monitor at a distance of 20 cm from the center of the screen. A central fixation spot was visible on the screen for a period chosen randomly between 1,000 and 1,500 ms. Then, both the blob and the vertical lines were displayed for 100 ms. After the stimulus appeared on the screen, the subjects first had to point to the center of the blob with their index finger. Then, they had to judge whether the blob appeared to the right or to the left of the vertical lines by pressing one of two buttons on either side of the keypad. No constraints on reaction time or movement time were imposed on the subjects. The landing position of the finger on the screen was measured using an ELO Touchsystems (Menlo Park, CA, USA) IntelliTouch controller. The landing position was calculated as the average of the touchscreen samples indicating contact, weighted by the pressure coordinate of the touch controller. Because the movements were fast and the contacts were brief, this did not differ significantly from the very first contact sample. The touchscreen was calibrated individually for each subject at the beginning of each experimental session. Twenty-four students of Magdeburg University, all naïve with respect to the experiment and all with normal vision, participated in the experiment. Each subject completed 1,000 trials, divided into four sessions of about 30 min duration.
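As a quick sanity check on the display geometry and the pixel-to-arcmin conversion used for the offsets above (a sketch of ours; the variable names are not from the original setup):

```python
# Verify the quoted geometry: 800 pixels across 32 cm viewed at 53 cm
# should give 0.4 mm per pixel and roughly 2.5 arcmin per pixel.
import math

screen_width_mm = 320.0
h_pixels = 800
viewing_distance_mm = 530.0

pixel_mm = screen_width_mm / h_pixels                        # 0.40 mm
pixel_arcmin = math.degrees(math.atan2(pixel_mm, viewing_distance_mm)) * 60
print(f"{pixel_mm:.2f} mm/pixel, {pixel_arcmin:.2f} arcmin/pixel")
# -> 0.40 mm/pixel, 2.59 arcmin/pixel (the text rounds to 2.5 arcmin)
```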
Figure 1
 
Stimulus displays used in this study. Panel A shows the layout of the marker lines and the Gaussian blob on the CRT monitor. The fixation spot shown in the center was extinguished at the moment when the targets became visible. Panel B shows a close-up of the stimuli. In this case, the Gaussian blob was offset to the right of the marker lines.
In a second experiment, subjects wore liquid crystal shutter goggles (PLATO; Translucent Technologies, Toronto, Ontario, Canada). The goggles closed at the moment subjects released their finger from the central button on the keypad to initiate the pointing movement. This way, the subjects saw neither their finger nor the computer screen during the movement. As before, no constraints on reaction time or movement time were imposed. Fifteen students of Giessen University, all naïve with respect to the experiment and all with normal vision, participated in the experiment. Each subject completed 500 trials, divided into two sessions of about 30 min duration. These experiments were performed on the same setup as the earlier experiments.
Psychometric analysis
Eleven different offsets of the blob were used to determine psychometric functions for both perceived position and pointing. At least 35 trials were available for each offset. Figure 2 shows a typical data set together with the psychometric function (Wichmann & Hill, 2001a, 2001b) best fitting the proportion of trials where the observer judged the blob to be "far" with respect to the marker lines. Figure 3 shows the procedure for constructing the equivalent pointing, or manometric, functions. Figure 3A shows the distribution of finger landing positions for 5 of the 11 offsets: −16, −4, 0, 4, and 16 pixels, from left to right. The heavy vertical line indicates the average of all trials where the offset was 0, that is, where the Gaussian blob was exactly aligned with the two vertical marker lines. Pointing psychometric functions were calculated by determining whether the landing position of the index finger on each trial fell on the "near" or "far" side of this mean. Because the landing position varied systematically with the stimulus offset, as shown in Figure 3B, this procedure yields an increasing proportion of landing positions beyond the 0-offset mean. These proportions were used to construct a psychometric function for pointing, as illustrated in Figure 3C. One consequence of this procedure is that the proportion of landing positions beyond the mean is close to 0.5 for the 0-offset condition. Therefore, the manometric function is centered at 0. The true offset of this function can be recovered from the absolute position on the screen of the 0-offset mean. For both functions, psychometric and manometric, the ability to discriminate between different positions is given by the steepness of the function. The steepness, which serves as our measure of precision, is specified here by the standard deviation of the cumulative Gaussian used in the psychometric function fit.
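To make the procedure concrete, here is a minimal sketch of how such psychometric and manometric functions could be fitted. It is not the authors' code: the simulated observer, the array names, and the use of scipy's curve_fit are our assumptions; the original fits followed Wichmann and Hill (2001a, 2001b).

```python
# Fit cumulative-Gaussian psychometric/manometric functions (sketch).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    """P("far" response) as a cumulative Gaussian of the stimulus offset."""
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_precision(offsets, far_responses):
    """Return (bias mu, precision sigma) of the best-fitting function."""
    levels = np.unique(offsets)
    p_far = np.array([far_responses[offsets == o].mean() for o in levels])
    (mu, sigma), _ = curve_fit(cum_gauss, levels, p_far, p0=[0.0, 10.0],
                               bounds=([-60.0, 0.1], [60.0, 100.0]))
    return mu, sigma

# Simulated observer: perceptual noise 10 arcmin, motor noise 18 arcmin.
rng = np.random.default_rng(1)
offsets = rng.choice([-40, -20, -10, -5, -2.5, 0, 2.5, 5, 10, 20, 40], 1000)
judged_far = ((offsets + rng.normal(0, 10, 1000)) > 0).astype(float)
landing_x = offsets + rng.normal(0, 18, 1000)

mu_p, sigma_p = fit_precision(offsets, judged_far)           # psychometric
ref = landing_x[offsets == 0].mean()                         # 0-offset mean
mu_m, sigma_m = fit_precision(offsets, (landing_x > ref).astype(float))
print(f"perceptual sigma {sigma_p:.1f}, pointing sigma {sigma_m:.1f}")
```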
Figure 2
 
Sample psychometric function. The x-axis plots the horizontal offset of the Gaussian blob with respect to the marker lines. The y-axis indicates the percentage of trials where the subjects judged the blob to be further away from fixation than the marker lines. The horizontal position of the function shows a bias toward seeing these targets more eccentric than the marker lines. The slope of the function indicates the precision of localization judgments. The colored symbols indicate data for fixed offsets of −40 arcmin (red), −10 arcmin (magenta), 0 arcmin (blue), +10 arcmin (cyan), and +40 arcmin (green).
Figure 3
 
Construction of the manometric functions. Panel A shows histograms of the horizontal finger landing positions relative to each subject's average landing position. For illustrative purposes, the data for all 24 subjects were summed. Only 5 of the 11 offsets are shown here, using the same color code as in Figure 2. Panel B relates the stimulus offset (x-axis) to the offset of the average finger landing position (y-axis). The diagonal line indicates a gain of unity. Panel C shows the manometric function constructed from the summary data in Panel A. On each trial, it was determined whether the finger landed on the "near" or "far" side of the particular subject's reference location, which was defined as the average endpoint under the zero-offset condition. The y-axis then plots the proportion of trials for each stimulus offset in which the finger landed further away from fixation than the reference location.
Trial-by-trial variation
The degree of agreement between manual pointing and perceptual judgments was determined across individual trials. Some agreement is to be expected based on the stimulus offsets alone: if the blob is presented far to the left, then subjects will mostly judge the stimulus as being to the left, and their pointing will also mostly land on the left side. Therefore, we used the methods established by Stone and Krauzlis (2003; see also Britten, Shadlen, Newsome, & Movshon, 1992) to calculate the percentage of agreement ("% Same") between perceptual and motor judgments. In particular, let $p_{\text{point}}$ denote the percentage of trials, for a particular subject and offset, in which the subject pointed "far" relative to the marker lines, and let $p_{\text{perception}}$ denote the percentage of trials where the stimulus was judged as lying on the "far" side of the marker lines. Then, the percentage of trials where agreement is expected just by chance is (see Stone & Krauzlis, 2003):
$$\%\,\text{Same(chance)} = p_{\text{point}}\, p_{\text{perception}} + (1 - p_{\text{point}})(1 - p_{\text{perception}}). \tag{1}$$
Under conditions where $p_{\text{point}}$ and $p_{\text{perception}}$ are both close to 0.5, the expected chance agreement is also close to 0.5. When both are close to 1 or 0, the expected chance agreement is close to 1. The latter is mostly the case for large stimulus offsets, and those trials are, therefore, less informative.
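As a minimal sketch (our own illustration, with hypothetical response vectors), Equation 1 and the observed agreement can be computed as follows:

```python
# Chance-expected vs. observed agreement between pointing and perception
# for one subject at one stimulus offset (Equation 1; sketch).
import numpy as np

def same_chance(p_point, p_perception):
    """Proportion of agreement expected by chance (Equation 1)."""
    return p_point * p_perception + (1.0 - p_point) * (1.0 - p_perception)

def same_observed(point_far, percept_far):
    """Proportion of trials where both responses are "far" or both "near"."""
    point_far = np.asarray(point_far, dtype=bool)
    percept_far = np.asarray(percept_far, dtype=bool)
    return float(np.mean(point_far == percept_far))

# Hypothetical binary response vectors for the trials at one offset:
point_far = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=bool)
percept_far = np.array([1, 0, 1, 0, 0, 1, 1, 1], dtype=bool)
excess = same_observed(point_far, percept_far) - \
    same_chance(point_far.mean(), percept_far.mean())
```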
Results
Figure 4 shows the results for 6 typical subjects out of the 24 in Experiment 1. In each panel, the blue symbols indicate the psychophysical judgments and the red symbols indicate the pointing results. The slopes for the pointing functions are lower than those for the psychometric functions, which means that the precision of pointing is lower than that of perception. For the exemplary subjects shown in Figure 4, the precisions were nearly equal in the top row (Panels A and B), slightly different in the middle row (Panels C and D), and quite distinct in the bottom row (Panels E and F). 
Figure 4
 
Psychometric and manometric functions for six typical subjects in Experiment 1. Each panel plots psychometric (blue) and manometric (red) functions constructed according to the principles illustrated in Figure 3. Panels A and B show two cases where precision for perception and pointing was roughly equal. Panels C and D show two cases where the pointing precision was only slightly lower than perceptual precision, and Panels E and F show two cases where pointing precision was vastly lower than perceptual precision.
In Figure 5A, summary data are shown for all 24 observers; the 6 observers from Figure 4 are indicated by red squares. All the data points lie on or above the diagonal, indicating that the slopes of the manometric functions are shallower than those of the psychometric functions. The mean precision for perception is 9.8 arcmin, compared to 17.8 arcmin for pointing (t = 8.51, df = 23, p < .001). There is also a significant correlation between precision in perception and pointing across the different observers (ρ = .66, t = 4.11, df = 22, p < .005): observers who are better at localizing the target perceptually are also more precise in pointing to it. Perception and action thus share a substantial proportion (43%) of the variance between observers. This correlation was not caused by the timing of our subjects' movements. Although there is, in general, a correlation between movement speed and pointing precision (Fitts, 1954), speed was not a significant factor in our unconstrained movements. Reaction times were between 195 and 351 ms, and movement times were between 365 and 506 ms. Correlations between pointing precision and reaction time, movement time, or their sum were low (<0.1) and not significant.
Figure 5
 
(A) Pointing versus perceptual precision. The black symbols show the precision of perceptual localization (x-axis) versus pointing precision (y-axis) in the experiment with visual feedback for 24 subjects. All data points lie above the diagonal, indicating better perceptual than pointing precision. The red symbols show data from the six observers shown in Figure 4. (B) Bias for perception and pointing. The x-axis shows perceptual bias as determined by the shift of the psychometric function. The y-axis shows the pointing bias, which was determined as the landing location of the finger relative to the markers for the 0-offset condition.
Another trend noticeable in Figure 4 is that all the psychometric functions are shifted slightly to the left. This means that the targets were perceived to be farther from the fixation point than the vertical marker lines. When the Gaussian blob was exactly aligned with the markers, the subjects judged the blob to be more eccentric than the markers in 70% to 80% of all trials. This bias has been observed before and is related to the exact spatiotemporal properties of the marker lines and the target stimulus (Müsseler, van der Heijden, Mahmud, Deubel, & Ertsey, 1999). The manometric functions were all constructed to be centered at 0, but the exact landing positions of the finger can be recovered by calculating the average finger position on the screen. The results of this calculation are shown in Figure 5B. Subjects tended to undershoot the peripherally presented target. The undershoot was much larger than for perception, and the pointing offsets were not correlated with the perceptual offsets. Because we observed a similar motor undershoot in a different set of experiments where only single targets were presented (White, Kerzel, & Gegenfurtner, 2006), this undershoot is most likely caused by the gain of the motor system being less than unity under the particular circumstances of our setup. Because these factors cannot be disentangled here, we will not pursue the bias further.
Basically, these results are in line with a serial processing scheme, where the motor system acts on noisy sensory estimates of position and, in the process, adds its own noise. An alternative explanation could be that the perceptual system uses feedback from the motor system to refine its own estimate. We therefore ran a second experiment where visual feedback was eliminated. 
Figure 6 shows the results from this experiment, in which 15 different observers wore shutter goggles that closed as soon as the hand movement started. Under these conditions, perceptual precision was 9.6 arcmin, nearly the same as in the earlier experiment without the shutter goggles. Pointing precision, however, was worse with the shutter goggles (26.02 arcmin) than with a free view of the hand (17.8 arcmin). For seven observers, it was difficult to construct manometric functions at all because the proportion of "far" finger landing locations varied only within a small range, from 0.4 to 0.6, so the slopes of the manometric functions were extremely shallow. The fact that the shutter goggles worsened performance in the motor task makes it clear that the pointing task used here is not simply open loop: at some point during the pointing movement, corrections are made based on visual feedback, and this feedback is not available with the shutter goggles. Our results agree well with recent findings by Ma-Wyatt and McKee (2007), who also found that precision deteriorated when shutter goggles were used.
Figure 6
 
Pointing versus perceptual precision in the experiment with shutter goggles. The large filled circles show the precision of perceptual localization (x-axis) versus pointing precision (y-axis) in the experiment without visual feedback for 15 subjects. The small open symbols indicate the same data shown in Figure 5A, in the experiment with a free view of the hand during pointing.
Because our observers judged the stimuli to which they pointed on each trial, we can also calculate a trial-by-trial correlation for each observer. It is, however, fairly clear that there is a high correlation between offset position and judgment. When the offset is far to the right, observers will most likely judge the stimulus as being to the right of the markers. This is, in fact, what the psychometric functions are based on. The more interesting question is whether there is a correlation between judgment and pointing at each fixed offset. There are two ways to investigate this issue. 
We can use the method of partial correlation (e.g., Hays, 1981, p. 471) to determine exactly this correlation between judgment and pointing. Here, the effect of stimulus offset is taken into account by first making linear predictions of the finger landing positions and of the judgments from the offsets and then computing the correlation between the residuals. Figure 7 shows the full and partial correlations for all subjects. The partial correlations ranged from .13 to .55, with a mean of .28, and all were significant because the number of trials (500) was fairly high. While significant partial correlations could be due to a common processing mechanism for perception and action, they could also arise from slow common trends in the data: if the subjects drifted slowly leftward or rightward over the course of the experiment, this could show up as a significant correlation. Therefore, we detrended our data using the following procedure. First, we computed the error for perception and pointing on each trial relative to the best linear prediction from the target offset; this is the same prediction as is used in the partial correlation. Then, the slow trend for each trial was computed as the running mean over 30 trials centered on the current trial, separately for perception and pointing. The detrended data are simply the difference between each error series and its running mean. The correlations between perception and pointing were not affected by this procedure, indicating that they are not due to common slow trends.
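In code, the partial correlation and the detrending control might look like the following sketch (our illustration, reusing the simulated offsets, landing_x, and judged_far arrays from the fitting sketch in the Methods section; the running mean is a simple convolution, approximately centered on each trial):

```python
# Partial correlation between pointing and judgment, with target offset
# partialled out, plus a running-mean detrending control (sketch).
import numpy as np

def residuals(y, x):
    """Residuals of y after the best linear prediction from x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

def running_mean(y, window=30):
    """Running mean over `window` trials, approximately centered."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

# Partial correlation: correlate the offset-free residuals.
r_point = residuals(landing_x, offsets)
r_percept = residuals(judged_far, offsets)
partial_r = np.corrcoef(r_point, r_percept)[0, 1]

# Detrended version: subtract each residual series' slow trend first.
d_point = r_point - running_mean(r_point)
d_percept = r_percept - running_mean(r_percept)
detrended_r = np.corrcoef(d_point, d_percept)[0, 1]
```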
Another way to look at this correlation is to plot the average pointing position separately for the two types of judgment ("near" vs. "far") as a function of offset. At each offset, the pointing positions for "far" and "near" judgments are separated by about 20 arcmin (Figure 8). This shows that, on average, there is agreement between pointing and perception. It is of interest, then, to determine whether the differences are large enough to predict one response from the other on individual trials. Because the separation of 20 arcmin roughly corresponds to the standard deviation of pointing (see Figure 2), we expect the agreement on individual trials to be above chance.
Figure 7
 
Full and partial correlations between perception and pointing for all 24 subjects. For each subject, the correlation between finger endpoints and perceptual judgments across trials is shown by the open bars. The filled bars indicate partial correlations between finger endpoint and judgment, using linear regression to partial out predictions based on target offset.
Figure 8
 
Finger landing position (y-axis) as a function of target offset (x-axis), separately for the two types of judgment. The downward-pointing triangles indicate trials where the subject judged the stimulus to be further away from fixation than the markers. The upward-pointing triangles indicate trials where the subject judged the stimulus to be nearer to fixation than the markers. The two lines are the best-fitting regression lines for all subjects.
Figure 9 shows the results of our analysis of the trial-by-trial agreement. The proportion of agreement expected by chance is plotted against the proportion of agreement that was observed. Most informative is the left part of the graph, where % Same (chance) is close to 0.5. The actual agreement varies between 46% and 83% for different observers and different stimulus offsets, but the mean is at around 60%, which is significantly above the chance value. These values are in line with those observed by Stone and Krauzlis (2003) in a task where pursuit direction was compared to perceived motion direction; their observers showed agreements of 68% and 73%.
Figure 9
 
Proportion of trials where the pointing and perceptual responses agree (y-axis) as a function of the proportion of agreement expected by chance (see the Methods section). Separate data points are for 24 different observers and 11 different stimulus offsets. The red curve illustrates an average of y values binned for x values with a spacing of 0.05.
Discussion
Summary
We compared the precision of manual pointing movements to that of perceptual localization. Our results show that precision is better for perception than for pointing. Because we also observed a strong correlation between perception and pointing, both across subjects and across trials, this is probably indicative of common processing mechanisms for both tasks, with additional noise added in the motor system.
Comparison to other studies on localization
Both perceptual and motor localization have been investigated numerous times, but there are relatively few studies so far in which the two measures were compared. One exception is the experiments by Anderson and Yamagishi (2000). These authors measured localization precision both with manual pointing movements and with perceptual judgments relative to the location of a precursor target. They used two different types of stimuli, chosen to preferentially activate the magno- versus parvocellular geniculostriate processing pathways: a motion stimulus to activate the M pathway and an isoluminant color stimulus to activate the P pathway. They found roughly equal location errors for pointing of about 1.3° (80 arcmin) at an eccentricity of 11°, independently of the type of stimulus. This is in agreement with a recent experiment by White et al. (2006), who also found similar precision in two motor tasks (pointing and saccades) to luminance and isoluminant targets of matched cone contrasts. For perceptual localization, Anderson and Yamagishi found that precision improved to 0.5° for the motion stimulus. This parallels our result that perceptual localization was better than pointing.
In two recent studies, Ma-Wyatt and McKee (2006, 2007) performed a detailed analysis of the visual signals for pointing and perceptual precision. They found that at eccentricities larger than 5°, the precision for pointing and perception is about equal (Ma-Wyatt & McKee, 2006). Nearer to the fovea, perception has a major advantage over pointing (Prablanc, Echallier, Jeannerod, & Komilis, 1979; Prablanc, Echallier, Komilis, & Jeannerod, 1979; White, Levi, & Aitsebaomo, 1992). When shutter goggles that closed at various times after movement onset were used (Ma-Wyatt & McKee, 2007), precision deteriorated with decreasing visual exposure. This means that visual information is essential both during the planning phase and during the execution of the movement. Our results confirm those of Ma-Wyatt and McKee (2006, 2007) and extend them by showing a significant correlation between perception and pointing, both across observers and across individual trials.
Relative versus absolute localization
This raises the question of whether performance in these two tasks can be compared at all. In the pointing task, the stimulus needs to be localized relative to the observer, in egocentric coordinates; the marker lines are not really relevant for the task. For perceptual localization, the absolute location is irrelevant, and only the location relative to the marker lines matters. This problem is difficult to avoid because perceptual judgments always have to be made relative to some reference, which can be physically present or held in memory. In fact, this is one of the arguments for why the visual system for action should be separate from the one underlying perception (Goodale & Milner, 1992). Consequently, relative localization is often confounded with perceptual responses, and absolute localization is often confounded with motor responses. Results might then show a dissociation between perception and action that is, in fact, due to different task demands. This might explain, for example, why dissociations between perceptual judgments and pointing responses can be found at the time of saccades (Burr, Morrone, & Ross, 2001). Similarly, it has been shown that the finding of a dissociation between perceptual judgments and grasping (Aglioti et al., 1995) might be better explained by different task demands than by a neuronal dissociation between two systems (Franz et al., 2000; Pavani et al., 1999; Vishton et al., 1999). For the present results, the slight differences in task demands imply that our estimate of common processing is conservative.
The fact that confounds between response mode and task demands were often found has led some authors to a pessimistic view of whether a comparison of perception and action is possible at all (e.g., Smeets & Brenner, 2001). An interesting recent study showed, however, that it is possible to disentangle the usual confound of relative perceptual and absolute motor tasks: Schenk (2006) independently varied task demands (relative vs. absolute) and response mode (perceptual vs. motor) and showed that the well-known patient D.F. might have a deficit in relative localization rather than in perception. D.F. was able to perform perceptual and motor tasks if the information required was absolute, and she failed in both tasks if the information required was relative. This suggests that D.F.'s deficits reflect a dissociation between relative and absolute processing rather than a dissociation between perception and action. Future research should show whether this new explanation is strong enough to account for all the deficits found in this patient.
With respect to our results, one would not expect the degree of covariation that we observed if entirely different mechanisms underlay performance in the perceptual and motor tasks. Our finding that pointing was less precise than perception can certainly be explained by two stages of processing occurring in succession: in early visual processing, the targets are localized with a certain amount of noise, and further noise is added in the transformations required for motor processing and during motor output. This way, even if action and perception were based on the same internal estimate of an object's position, the two responses would still often disagree, simply because of the additional noise in the motor system. The agreement that we observed is evidence that there is a large degree of common processing in pointing and perception, at least for localization.
Comparing sensory and motor noise
Our results are in excellent agreement with a study by Stone and Krauzlis (2003) who compared the direction of smooth pursuit eye movements with judgments of the direction of motion. Stone and Krauzlis found a significant agreement between oculometric and psychometric precisions, as had been observed in all other studies making such a comparison (Beutter & Stone, 1998, 2000; Braun et al., 2006; Gegenfurtner et al., 2003; Kowler & McKee, 1987; Krauzlis & Stone, 1999; Osborne et al., 2005; Stone et al., 2000; Stone & Krauzlis, 2003). They also showed a significant level of trial-by-trial covariation between pursuit direction and motion direction, similar to the agreement found in this study. It is not entirely straightforward that the results from pursuit should generalize to other types of motor tasks (e.g., Soechting et al., 1995). As pointed out above, hand movements are less reliant on visual feedback. They also require a more complex visuomotor transformation than eye movements and are represented in different cortical regions (for review, see Andersen & Buneo, 2002). In terms of execution, hand movements have a higher number of degrees of freedom and different dynamics and kinematics (Vetter et al., 2002). Our findings, therefore, strengthen the argument that common processing for perception and action may be a general principle of sensorimotor function. 
Conclusions
We conclude that perception and action share a large degree of processing for localization. Although the demands in perceptual and motor tasks are somewhat different, we observed a significant agreement both across different observers and across trials for individual observers. 
Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft Forschergruppe 560 “Perception and Action”. We would like to thank Doris Braun for valuable discussions of these experiments and Miriam Spering for helpful comments on a previous version of the manuscript. We are particularly grateful to Sören Krüger for help with running the experiments with the shutter goggles. These results were first presented in abstract form at ARVO 2001 (Gegenfurtner & Franz, 2001). 
Commercial relationships: none. 
Corresponding author: Karl R. Gegenfurtner. 
Email: gegenfurtner@uni-giessen.de. 
Address: Abteilung Allgemeine Psychologie, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany. 
References
Aglioti, S., DeSouza, J. F., & Goodale, M. A. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology, 5, 679–685.
Andersen, R. A., & Buneo, C. A. (2002). Intentional maps in posterior parietal cortex. Annual Review of Neuroscience, 25, 189–220.
Andersen, R. A., Snyder, L. H., Li, C. S., & Stricanne, B. (1993). Coordinate transformations in the representation of spatial information. Current Opinion in Neurobiology, 3, 171–176.
Anderson, S. J., & Yamagishi, N. (2000). Spatial localization of colour and luminance stimuli in human peripheral vision. Vision Research, 40, 759–771.
Beutter, B. R., & Stone, L. S. (1998). Human motion perception and smooth eye movements show similar directional biases for elongated apertures. Vision Research, 38, 1273–1286.
Beutter, B. R., & Stone, L. S. (2000). Motion coherence affects human perception and pursuit similarly. Visual Neuroscience, 17, 139–153.
Brainard, D. H. (1989). Calibration of a computer controlled color monitor. Color Research & Application, 14, 23–34.
Braun, D. I., Pracejus, L., & Gegenfurtner, K. R. (2006). Motion aftereffect elicits smooth pursuit eye movements. Journal of Vision, 6(7):1, 671–684, http://journalofvision.org/6/7/1/, doi:10.1167/6.7.1.
Britten, K. H., Shadlen, M. N., Newsome, W. T., & Movshon, J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12, 4745–4765.
Burr, D. C., Morrone, M. C., & Ross, J. (2001). Separate visual representations for perception and action revealed by saccadic eye movements. Current Biology, 11, 798–802.
Carey, D. P. (2001). Do action systems resist visual illusions? Trends in Cognitive Sciences, 5, 109–113.
De Valois, R. L., & De Valois, K. K. (1991). Vernier acuity with stationary moving Gabors. Vision Research, 31, 1619–1626.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47, 381–391.
Franz, V. H. (2001). Action does not resist visual illusions. Trends in Cognitive Sciences, 5, 457–459.
Franz, V. H., Bülthoff, H. H., & Fahle, M. (2003). Grasp effects of the Ebbinghaus illusion: Obstacle avoidance is not the explanation. Experimental Brain Research, 149, 470–477.
Franz, V. H., Gegenfurtner, K. R., Bülthoff, H. H., & Fahle, M. (2000). Grasping visual illusions: No evidence for a dissociation between perception and action. Psychological Science, 11, 20–25.
Gegenfurtner, K. R., & Franz, V. (2001). A comparison of localization and pointing accuracy in peripheral position judgments. Investigative Ophthalmology & Visual Science, 42 (ARVO abstract).
Gegenfurtner, K. R., Xing, D., Scott, B. H., & Hawken, M. J. (2003). A comparison of pursuit eye movement and perceptual performance in speed discrimination. Journal of Vision, 3(11):19, 865–876, http://journalofvision.org/3/11/19/, doi:10.1167/3.11.19.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25.
Goodale, M. A., Pelisson, D., & Prablanc, C. (1986). Large adjustments in visually guided reaching do not depend on vision of the hand or perception of target displacement. Nature, 320, 748–750.
Goodale, M. A., & Westwood, D. A. (2004). An evolving view of duplex vision: Separate but interacting cortical pathways for perception and action. Current Opinion in Neurobiology, 14, 203–211.
Haffenden, A. M., Schiff, K. C., & Goodale, M. A. (2001). The dissociation between perception and action in the Ebbinghaus illusion: Nonillusory effects of pictorial cues on grasp. Current Biology, 11, 177–181.
Hays, W. L. (1981). Statistics. New York: CBS College Publishing.
Kerzel, D., & Gegenfurtner, K. R. (2005). Motion-induced illusory displacement reexamined: Differences between perception and action? Experimental Brain Research, 162, 191–201.
Kowler, E., & McKee, S. P. (1987). Sensitivity of smooth eye movement to small differences in target velocity. Vision Research, 27, 993–1015.
Krauzlis, R. J., & Stone, L. S. (1999). Tracking with the mind's eye. Trends in Neurosciences, 22, 544–550.
Ma-Wyatt, A., & McKee, S. P. (2006). Initial visual information determines endpoint precision for rapid pointing. Vision Research, 46, 4675–4683.
Ma-Wyatt, A., & McKee, S. P. (2007). Visual information throughout a reach determines endpoint precision. Experimental Brain Research, 179, 55–64.
Milner, D., & Dyde, R. (2003). Why do some perceptual illusions affect visually guided action, when others don't? Trends in Cognitive Sciences, 7, 10–11.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford University Press.
Müsseler, J., van der Heijden, A. H., Mahmud, S. H., Deubel, H., & Ertsey, S. (1999). Relative mislocalization of briefly presented stimuli in the retinal periphery. Perception & Psychophysics, 61, 1646–1661.
Osborne, L. C., Lisberger, S. G., & Bialek, W. (2005). A sensory source for motor variation. Nature, 437, 412–416.
Paillard, J. (1981). The contribution of peripheral and central vision to visually guided reaching. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Advances in the analysis of visual behavior (pp. 367–385). Cambridge, MA: MIT Press.
Pavani, F., Boscagli, I., Benvenuti, F., Rabuffetti, M., & Farné, A. (1999). Are perception and action affected differently by the Titchener circles illusion? Experimental Brain Research, 127, 95–101.
Prablanc, C., Echallier, J. E., Komilis, E., & Jeannerod, M. (1979). Optimal response of eye and hand motor systems in pointing at a visual target: I. Spatio-temporal characteristics of eye and hand movements and their relationships when varying the amount of visual information. Biological Cybernetics, 35, 113–124.
Prablanc, C., Echallier, J. E., Jeannerod, M., & Komilis, E. (1979). Optimal response of eye and hand motor systems in pointing at a visual target: II. Static and dynamic visual cues in the control of hand movement. Biological Cybernetics, 35, 183–187.
Saunders, J. A., & Knill, D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152, 341–352.
Saunders, J. A., & Knill, D. C. (2004). Visual feedback control of hand movements. Journal of Neuroscience, 24, 3223–3234.
Schenk, T. (2006). An allocentric rather than perceptual deficit in patient D.F. Nature Neuroscience, 9, 1369–1370.
Smeets, J. B. J., & Brenner, E. (2001). Action beyond our grasp. Trends in Cognitive Sciences, 5, 287.
Soechting, J. F., Buneo, C. A., Herrmann, U., & Flanders, M. (1995). Moving effortlessly in three dimensions: Does Donders' law apply to arm movement? Journal of Neuroscience, 15, 6271–6280.
Stone, L. S., Beutter, B. R., & Lorenceau, J. (2000). Visual motion integration for perception and pursuit. Perception, 29, 771–787.
Stone, L. S., & Krauzlis, R. J. (2003). Shared motion signals for human perceptual decisions and oculomotor actions. Journal of Vision, 3(11):7, 725–736, http://journalofvision.org/3/11/7/, doi:10.1167/3.11.7.
Vetter, P., Flash, T., & Wolpert, D. M. (2002). Planning movements in a simple redundant task. Current Biology, 12, 488–491.
Vishton, P. M., Rea, J. G., Cutting, J. E., & Nuñez, L. N. (1999). Comparing effects of the horizontal–vertical illusion on grip scaling and judgment: Relative versus absolute, not perception versus action. Journal of Experimental Psychology: Human Perception and Performance, 25, 1659–1672.
Westheimer, G. (1979). The spatial sense of the eye: Proctor lecture. Investigative Ophthalmology & Visual Science, 18, 893–912.
Westheimer, G., & McKee, S. P. (1977). Spatial configurations for visual hyperacuity. Vision Research, 17, 941–947.
White, B. J., Kerzel, D., & Gegenfurtner, K. R. (2006). Visually guided movements to color targets. Experimental Brain Research, 175, 110–126.
White, J. M., Levi, D. M., & Aitsebaomo, A. P. (1992). Spatial localization without visual references. Vision Research, 32, 513–526.
Wichmann, F. A., & Hill, N. J. (2001a). The psychometric function: I. Fitting, sampling and goodness of fit. Perception & Psychophysics, 63, 1293–1313.
Wichmann, F. A., & Hill, N. J. (2001b). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63, 1314–1329.
Wilson, H. R. (1986). Responses of spatial mechanisms can explain hyperacuity. Vision Research, 26, 453–469.
Yamagishi, N., Anderson, S. J., & Ashida, H. (2001). Evidence for dissociation between the perceptual and visuomotor systems in humans. Proceedings of the Royal Society B: Biological Sciences, 268, 973–977.