Research Article  |   June 2010
Discrimination contours for the perception of head-centered velocity
Journal of Vision June 2010, Vol.10, 14. doi:10.1167/10.6.14
Rebecca A. Champion, Tom C. A. Freeman
Abstract

There is little direct psychophysical evidence that the visual system contains mechanisms tuned to head-centered velocity when observers make a smooth pursuit eye movement. Much of the evidence is implicit, relying on measurements of bias (e.g., matching and nulling). We therefore measured discrimination contours in a space dimensioned by pursuit target motion and relative motion between target and background. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented parallel to the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity—discrimination ellipses were significantly oriented away from the cardinal axes, toward the main negative diagonal. However, ellipse orientation was considerably less steep than predicted by a pure combination of components. This suggests that observers used a mixture of two strategies across trials, one based on individual components and another based on their sum. We provide a model that simulates this type of behavior and is able to reproduce the ellipse orientations we found.

Introduction
There is a large literature concerning the ability of the visual system to compensate for the retinal effects of smooth pursuit eye movement. This literature emphasizes perceptual errors that arise when the visual system estimates location, speed, and direction during tracking eye movements, as well as more complex judgments such as depth and heading. Examples include the misperception of flashed locations during pursuit (Brenner, Smeets, & Van den Berg, 2001; Mitrani, Dimitrov, Yakimoff, & Mateef, 1979), the changes in perceived speed that occur when moving objects are tracked (Dichgans, Wist, Diener, & Brandt, 1975; Sumnall, Freeman, & Snowden, 2003), the illusory motion of stationary backgrounds over which the eye movement is made (Freeman & Sumnall, 2002; Mack & Herman, 1978), and the misperception of object direction when a separate target is pursued (Hansen, 1979; Souman, Hooge, & Wertheim, 2005). One general account of these effects starts with the idea that estimates of retinal position or motion are added to estimates of eye position or velocity to transform images into a head-centered frame (Freeman, 2001; Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001; Rotman, Brenner, & Smeets, 2005; Souman & Freeman, 2008; Souman, Hooge, & Wertheim, 2006; Turano & Massof, 2001; Wertheim, 1994). The mistakes exhibited by observers are then either a function of different errors associated with retinal and eye-based inputs or lie within the combination stage itself. An important implication of this general account is that relatively early on in the processing pathway there exist mechanisms tuned to position and velocity in a head-centered frame. Unfortunately, the psychophysical evidence for this is somewhat indirect, largely based on the measurement of bias (e.g., matching and nulling). 
Here we make a more direct test of this claim by measuring discrimination contours in a 2D motion space. The discrimination-contour paradigm has been widely used in color vision to probe the tuning of chromatic processing mechanisms (Gegenfurtner & Hawken, 1995; Noorlander, Heuts, & Koenderink, 1980; Poirson, Wandell, Varner, & Brainard, 1990; Wandell, 1985). This paradigm has been used to explore space–time separability in speed perception (Lappin, Bell, Harm, & Kottas, 1975; Reisbeck & Gegenfurtner, 1999) and bears a close relationship to the investigation of redundant cue combination in the 3D shape literature (e.g., Hillis, Ernst, Banks, & Landy, 2002; Landy, Maloney, Johnston, & Young, 1995). Discrimination contours are measured in a 2D space parameterized by the dimensions of interest (Figure 1). The subsequent orientation of the contours allows inferences to be drawn about the way the dimensions are combined by the observer. As discussed in more detail below, discrimination thresholds are determined in a number of directions away from a “standard” stimulus. If the observer combines dimensions—for instance, combines space and time with a mechanism that yields speed—then the discrimination contour will be oriented with respect to the combination of interest (e.g., red oblique ellipse labeled “combination” in Figure 1). If the observer is unable to combine the dimensions but instead uses the individual components, then the subsequent orientation of the discrimination contour will run parallel to one of the axes of the space (blue ellipse labeled “components” in Figure 1). 
Figure 1
 
Predicted orientation of discrimination contours for a space dimensioned by pursuit target motion and relative motion. The oblique red ellipse labeled “combination” shows predicted thresholds if observers combine the two dimensions to yield head-centered velocity. The blue ellipse labeled “components” shows predicted thresholds for observers that use individual components only. See text for details.
We used this paradigm to seek direct psychophysical evidence for mechanisms tuned to head-centered velocity. The stimulus consisted of a moving background over which the observer tracked a moving pursuit target. The background's head-centered velocity is the sum of retinal motion and eye velocity (H = R + E). The most obvious dimensions to use are therefore R and E. However, two recent studies suggest that observers do not rely on these motion cues, but rather on the relative motion between pursued target and background and on the motion of the pursued target itself. Using a speed discrimination task, Freeman, Champion, Sumnall, and Snowden (2009) showed that observers do not have direct access to retinal motion when making discrimination judgments during pursuit: instead, observers use the relative motion between pursuit target and background object, even when feedback concerning absolute retinal motion is explicitly provided. In the case of the pursued target, Welchman, Harris, and Brenner (2009) showed that observers summed eye velocity information with retinal slip information when discriminating the motion-in-depth of a target tracked by a vergence eye movement (we have found similar evidence in unpublished investigations of speed and direction discrimination for pursued stimuli moving in the fronto-parallel plane). 
These results suggest that perceived head-centered velocity of background objects is separable into relative motion (Rel) and pursuit target motion (T). We do not mean this to imply that retinal motion and eye velocity are ignored by the observer—rather, retinal motion and eye velocity are incorporated into the estimates of relative motion and target velocity. Importantly, relative motion by itself does not tell the observer how a background object is moving with respect to the head—it simply informs the observer how two objects are moving with respect to one another. To determine the head-centered velocity of the background stimulus, the observer must add relative motion to velocity of the pursuit target (H = Rel + T). The current paper therefore asks whether the visual system contains relatively low-level mechanisms explicitly tuned to H, or whether it is inferred by some more circuitous route. 
Figure 1 shows in more detail how the discrimination-contour paradigm relates to the judgment of head-centered velocity. The figure describes a space spanned by pursuit target motion and relative motion. At any point in this space, we can define a standard stimulus (T_s, Rel_s) and a test stimulus (T_t, Rel_t), where T_t = T_s + ΔT and Rel_t = Rel_s + ΔRel. The variation in head-centered velocity of the background object is therefore ΔH = ΔT + ΔRel, such that a line of constant ΔH has a slope of −1 (the negative diagonal in Figure 1 defines ΔH = 0). Suppose we are able to obtain thresholds for discriminating test from standard for a set of directions θ. The blue ellipse labeled “components” in Figure 1 describes the expected threshold contour if observers base judgments on individual components rather than their combination (the figure assumes that sensitivity to relative motion is greater than to pursuit target motion, which is why the major axis of the “components” ellipse is horizontal). Along the cardinal axes, only one motion component conveys any useful information, so in these directions thresholds are limited by one component alone (dotted lines). In all other directions, however, useful information is conveyed by both components, so observers may gain a statistical advantage due to probability summation (e.g., Alais & Burr, 2004). 
The red ellipse labeled “combination” in Figure 1 describes the threshold contour expected if observers combine T and Rel to yield H. Ideally, if observers based their judgments on head-centered velocity alone, the threshold contour would consist of two lines parallel to the negative diagonal. In this case, observers would find it particularly difficult to differentiate any pair of stimuli that lie along lines of constant ΔH because these form head-centered “metamers”. In practice, however, for relatively extreme values of ΔT and ΔRel, observers are likely to be able to differentiate standard and test on the basis of individual components (see Hillis et al., 2002). Hence, the resulting thresholds will produce a closed contour oriented with respect to the negative diagonal. 
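A head-centered “metamer” pair can be illustrated with a minimal numeric sketch (the velocity values below are hypothetical, chosen only for illustration):

```python
# Head-centered velocity is the sum of pursuit-target motion and
# relative motion: H = T + Rel.
def head_centered(T, Rel):
    return T + Rel

standard = (4.0, 4.0)   # (T, Rel) in deg/s -> H = 8 deg/s
test = (5.5, 2.5)       # different components, same sum -> H = 8 deg/s

# A mechanism tuned only to H cannot tell these apart, even though
# a component-based mechanism easily could.
assert head_centered(*standard) == head_centered(*test) == 8.0
```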
Some authors have found that the mapping of retinal motion information onto a head-centered frame is more likely to occur when pursuit and retinal motion are in opposite directions (Brenner & van den Berg, 1994; Morvan & Wexler, 2009; Tong, Aydin, & Bedell, 2007; Tong, Patel, & Bedell, 2006; Turano & Heidenreich, 1996). A possible reason for the asymmetry is that the world is predominantly stationary, so pursuit eye movement is more likely to produce retinal motion in the opposite direction (Tong et al., 2006). We therefore measured discrimination contours for pursuit target and relative motion in “same” and “opposite” directions. In Figure 1, the “same” conditions lie in the upper right and lower left quadrants of the depicted space. The “opposite” conditions lie in the lower right and upper left quadrants. 
Methods
Stimuli
Stimuli were generated using OpenGL and controlled by a Radeon 9800 Pro graphics card. Stimuli were presented on a ViewSonic P225f monitor at a frame rate of 100 Hz and viewed binocularly from 70 cm in a completely darkened room. A red gel placed over the monitor screen helped eliminate phosphor glow and dot trails. An Eyelink 1000 eye tracker recorded eye movements at a sampling rate of 1000 Hz. 
Stimuli consisted of a circular pursuit target at the center of a random dot pattern against a black background. The pursuit target had a diameter of 0.2°; the random dot pattern was composed of dots with a diameter of 0.1° with a density of 1 dot/deg2. The random dot pattern appeared in an annulus window with inner and outer radii of 1° and 8°, respectively. The movement of the window was yoked to the pursuit target. The target, dot pattern, and window moved horizontally in all conditions investigated. 
The pursuit target was stationary for the first 500 ms of each interval. Its speed was then ramped to the desired value over a mean duration of 250 ms and then continued moving at this value for the rest of the interval (mean 500 ms). Random perturbations of ±50 ms were added to these two time periods. Each interval therefore lasted for a mean duration of 1250 ms. The random dot pattern was presented at the end of the ramp and remained visible until the pursuit target disappeared. The start position of the pursuit target was displaced from the center of the screen by half the distance of the full sweep plus a random perturbation of ±1°. The perturbations in time and space were designed to encourage judgments of motion not position. 
Procedure
To determine thresholds, we used a three-interval forced-choice oddity task, consisting of two standard intervals and one test interval. These were presented in random order on each trial and observers were required to judge which interval was the odd one out. In the “same” condition, standard intervals consisted of a pursuit target and random dot stimulus moving in the same direction. Hence (T_s, Rel_s) = (+4, +4)°/s, with the dot stimulus moving at 8°/s on the screen. In the “opposite” condition, (T_s, Rel_s) = (+4, −4)°/s. The dot stimulus in this case was always stationary on the screen. 
Test stimuli had velocities (T_t, Rel_t) = (T_s + ΔT, Rel_s + ΔRel), where the increments ΔT = g T_s cos θ and ΔRel = g Rel_s sin θ. The parameter g defines the step size, and θ is the direction of the discrimination task in T–Rel space (see Figure 1). Here T_s and Rel_s denote speeds, so test stimuli could not flip phase through 180° for any given θ. The increments ΔT and ΔRel were controlled by a 3-down 1-up staircase. Along any direction θ, the ratio of ΔT to ΔRel was constant, with the staircase changing the distance between test and standard in steps D = g(T_s² cos²θ + Rel_s² sin²θ)^1/2. Staircases were terminated after 9 reversals, with the step size before the first reversal set to g = 0.2 and all subsequent step sizes set to g = 0.1. Sixteen directions in T–Rel space were investigated (θ = 0° to 337.5° in increments of 22.5°). Within one experimental session, 4 of these directions were investigated, each assigned one staircase. Staircases were randomly interleaved. In total, three replications of each staircase were completed. Staircases for “same” and “opposite” were blocked, with observers S1–S3 completing “same” blocks first, and S4 and S5 completing “opposite” blocks first. 
The direction of the standard was alternated on each trial, i.e., (−T_s, −Rel_s). By definition, this also flips the test, such that both test and standard rotate 180° about the origin in Figure 1. For the “same” condition, trials therefore alternated between the upper right and lower left quadrants; for the “opposite” condition, trials alternated between upper left and lower right quadrants. Data were collapsed within these quadrant pairings. 
Observers were instructed to maintain fixation on the pursuit target at all times. Following the completion of each trial, observers indicated which interval contained the odd one out. No feedback was given. 
Eye-movement analysis
Eye movements were recorded using an Eyelink 1000 eye tracker, with samples recorded at 1000 Hz. Eye-position data were low-pass filtered and a time derivative was taken. A region of interest was defined as the period of time during which the dot pattern was presented. Any saccades occurring within this region of interest were detected using a velocity threshold of 40 deg/s and these trials were discarded (mean = 7.4%). The mean eye velocity was computed over this region of interest and the mean retinal slip (target velocity − average eye velocity) was calculated for each interval. Retinal slip estimates were then averaged across intervals to summarize each observer's pursuit accuracy for a given condition. 
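This per-interval analysis can be sketched roughly as follows (the boxcar filter and its width are our assumptions since the low-pass filter is unspecified; the 40 deg/s saccade criterion is from the text):

```python
import numpy as np

def interval_slip(eye_pos, target_vel, fs=1000.0, saccade_thresh=40.0):
    """Differentiate filtered eye position; discard the interval (return
    None) if any sample exceeds the saccade velocity criterion, otherwise
    return mean retinal slip (target velocity - mean eye velocity), deg/s."""
    k = 21
    # Boxcar smoothing as a stand-in for the unspecified low-pass filter.
    smoothed = np.convolve(eye_pos, np.ones(k) / k, mode="same")
    eye_vel = np.gradient(smoothed) * fs        # deg/s
    core = eye_vel[k:-k]                        # trim filter edge artifacts
    if np.any(np.abs(core) > saccade_thresh):   # saccade detected -> discard
        return None
    return target_vel - core.mean()
```

With pursuit at exactly the target velocity the slip comes out near zero, whereas an abrupt position jump trips the saccade criterion and the interval is discarded.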
Psychophysical analysis
Error rates were plotted as a function of D (as defined above), which corresponds to distance along a direction θ. Error rates from each direction condition θ were concatenated with those from the condition θ + 180° (with D negated for the latter). A Gaussian was then fit to the data using maximum-likelihood estimation, with the standard deviation and a lapse-rate parameter free to vary and the mean fixed at D = 0. Lapse rate was constrained to be less than 6% (Wichmann & Hill, 2001). Threshold was defined as the standard deviation of the Gaussian. We also estimated 95% confidence limits by bootstrapping 999 thresholds. In some cases, the lower and upper confidence intervals are unequal because the distribution of bootstrapped standard deviations was sometimes asymmetric. Trials on which a saccade was detected were excluded from the analysis. 
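A sketch of such a fit is given below. The exact parameterization of the error function is our assumption (chance error for a three-interval oddity task is 2/3, decaying to the lapse rate at large D); bootstrapped confidence limits would be obtained by resampling trials and refitting.

```python
import numpy as np
from scipy.optimize import minimize

def fit_gaussian_threshold(D, n_err, n_tot, chance=2/3, max_lapse=0.06):
    """Binomial maximum-likelihood fit of a zero-mean Gaussian error rate:
    p_err(D) = lapse + (chance - lapse) * exp(-D^2 / (2 sigma^2)).
    Returns (sigma, lapse); threshold is taken as sigma."""
    D, n_err, n_tot = (np.asarray(a, float) for a in (D, n_err, n_tot))

    def nll(params):
        sigma, lapse = params
        p = lapse + (chance - lapse) * np.exp(-D**2 / (2 * sigma**2))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        # Negative binomial log-likelihood over error/correct counts.
        return -np.sum(n_err * np.log(p) + (n_tot - n_err) * np.log(1 - p))

    res = minimize(nll, x0=[max(float(np.std(D)), 1e-2), 0.02],
                   bounds=[(1e-3, None), (0.0, max_lapse)])
    return res.x[0], res.x[1]
```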
Following Noorlander et al. (1980), Poirson et al. (1990), Reisbeck and Gegenfurtner (1999), Wandell (1985), and others, ellipses were fit to the data to quantify the orientation of the discrimination contour. To do this, we used an iterative method that minimized the geometric distance between data points and curve (Brown, 2007). Orientation was defined as the angle of the major axis swept out anti-clockwise from the right-hand horizontal (for example, the ellipse labeled “combination” in Figure 1 has an orientation of 135°). 
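The orientation convention can be made concrete with a simple algebraic conic fit (a stand-in for the geometric-distance method of Brown, 2007, which instead iteratively minimizes point-to-curve distance):

```python
import math
import numpy as np

def conic_fit(x, y):
    """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)          # smallest singular vector of A
    return Vt[-1]

def major_axis_deg(coeffs):
    """Major-axis angle swept anticlockwise from the right-hand
    horizontal, in [0, 180) degrees, as in the convention above."""
    a, b, c = coeffs[:3]
    if a + c < 0:                        # fix the sign ambiguity of the fit
        a, b, c = -a, -b, -c
    M = np.array([[a, b / 2], [b / 2, c]])
    evals, evecs = np.linalg.eigh(M)
    v = evecs[:, np.argmin(evals)]       # smaller eigenvalue -> major axis
    return math.degrees(math.atan2(v[1], v[0])) % 180.0
```

An ellipse elongated along the negative diagonal then reports an orientation near 135°, matching the “combination” ellipse in Figure 1.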
Observers
Five observers took part in the experiment, including the two authors (S1 = RAC, S2 = TCAF) and three naive observers (S3–S5). All were experienced observers and wore their appropriate optical correction. 
Results
Figure 2 shows the pattern of thresholds obtained for each of the five observers in the “same” condition (top row) and “opposite” condition (bottom row). The open points show thresholds with confidence intervals >20°/s (in these cases, the confidence intervals have not been plotted for clarity). The red curves show the best-fitting ellipses to the closed symbols. Note that the scales for naive observers S4 and S5 are larger than S1–S3, indicating that they were generally less sensitive to the speed differences displayed. 
Figure 2
 
Discrimination contours for (top) “same” and (bottom) “opposite” conditions. Each column shows a different observer's data. Error bars correspond to 95% confidence intervals (error bars >20°/s are not plotted for clarity; associated data points are shown as open symbols). The red curves show best-fitting ellipses to the remaining data (closed symbols). Note that the axes for S4 and S5 have different scales to S1–S3.
The results show that ellipses are oriented close to the cardinal axes in each case, suggesting that observers based their judgments on individual motion cues. However, in all cases, except for S3 in the “same” condition, there is a small but consistent deviation of the ellipse orientation away from cardinal, toward the negative oblique. This was confirmed statistically. In both “same” and “opposite” conditions, orientations were significantly different from cardinals (mean difference from cardinals: “same” = 15.7° (SE = 6.0°), t(4) = 2.61, p < 0.05, one-tailed; “opposite” = 17.0° (SE = 4.4°), t(4) = 3.81, p < 0.01, one-tailed). Hence our results lie somewhere in between the predictions based on the use of individual cues (ellipses oriented along cardinal) and the prediction based on the use of head-centered cues (oriented along negative diagonal). Such a pattern of results suggests that discrimination was based on a mixture of two strategies: on some trials, observers combined components and based their judgments on head-centered motion, whereas on other trials they used the individual components. Below we present a model that simulates such a strategy and is able to produce ellipse orientations like those found here. 
Figure 2 also suggests that there is little difference in ellipse orientation for the “same” and “opposite” conditions. Again this was confirmed statistically (mean orientation “same” = 158.0° (SE = 10.9°), “opposite” = 157.6° (SE = 9.4°), t(4) = 0.004, p = 0.95, two-tailed). Hence, the lack of asymmetry between “same” and “opposite” conditions contrasts with work in other areas that report an anisotropy (e.g., motion smear: Tong et al., 2006; though see Morvan & Wexler, 2009). We note that we have previously failed to find this asymmetry in analogous experiments on retinal speed discrimination during pursuit and have discussed this finding in more detail elsewhere (Freeman et al., 2009). 
The results of the eye-movement analysis are shown in Figure 3. The top panel shows the mean retinal slip of the target (target velocity − eye velocity) across observers as a function of the direction θ. For the “opposite” condition, eye movements were quite accurate—average retinal slip is close to 0. For the “same” condition, pursuit tended to be faster than required. The influence of the direction of background motion on pursuit is well documented and explains the differences found here (Lindner & Ilg, 2006; Spering, Gegenfurtner, & Kerzel, 2006; Yee, Daniels, Jones, Baloh, & Honrubia, 1983). Closer inspection also revealed an influence of interval order. The lower panel of Figure 3 shows that eye speed decreased from intervals one to three in the “opposite” condition, whereas eye speed remained more or less the same in the “same” condition. These differences may reflect an influence of background motion on pursuit interacting with certain judgment strategies based on appearance. For instance, if the first two intervals appeared the same, observers could have decided that the final interval was the odd one out before it was displayed. In this case, the final interval could be ignored, perhaps leading to lower eye speeds. 
Figure 3
 
Pursuit accuracy (retinal slip of pursuit target) averaged across the five observers for “same” (closed symbols) and “opposite” conditions (open symbols). The top panel shows accuracy as a function of direction θ (see Figure 1 for definition). The bottom panel shows accuracy as a function of interval order. Error bars represent ±1 SE.
To reiterate, the main results in Figure 2 suggest that observers used a mixture of two strategies across trials to make their judgments. Hence, we constructed the following model to investigate whether a mixture of “individual-components” and “combination” strategies could produce ellipses oriented between the cardinal axes and the negative diagonal. 
Model
In the model, each interval was defined by three motion signals: Rel_i, T_i, and H_i, where H_i = Rel_i + T_i and i = interval 1, 2, or 3. Rel_i and T_i were corrupted by Gaussian noise with a mean of 0 and standard deviations σ_Rel and σ_T, respectively. The precision of H_i was therefore assumed to be fully determined by noise at the input stage. For the “combination” strategy, the odd interval was taken as the H_i that was most different from the mean of the other two intervals. For the “individual-components” strategy, an odd interval was identified separately for Rel_i and T_i, using the method described for H_i. This potentially yields two different candidate odd intervals on each trial, one determined by Rel_i and one determined by T_i. In these cases, the signal corresponding to the “most different” odd one out was chosen. 
We used a parameter “k” to determine the probability on each trial of using the “combination strategy” or “individual-components strategy.” The parameter k therefore set the weighting or mixture between strategies. For instance, with k = 0, the model's choice was determined entirely by the individual-components strategy. Conversely, with k = 1, the model's choice was determined entirely by the combination strategy. With k = 0.5, the probability of using either strategy was the same on each trial. In the latter case, the resulting threshold therefore comprised a mixture of judgments based on combination and individual-component strategies. To determine discrimination-ellipse orientation as a function of k, a series of simulations was run for a range of directions θ through Rel–T space. Each simulation sampled the underlying psychometric function by running 10,000 trials at 7 equally spaced steps along the given direction θ. Step size “g” was set to 0.5 and values of ΔRel and ΔT were calculated as described in the Methods section. Thresholds and ellipse orientation were then derived using the fitting procedures also described in the Methods section. 
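A compact sketch of one simulated trial of this model (our reconstruction; names and seed are illustrative):

```python
import numpy as np

def simulate_trial(rng, T_s, Rel_s, dT, dRel, sig_T, sig_Rel, k):
    """One simulated 3-interval oddity trial. Interval 0 is the test;
    returns True if the model picks it. With probability k the model
    uses the combination (H) strategy, otherwise individual components."""
    T = np.array([T_s + dT, T_s, T_s]) + rng.normal(0, sig_T, 3)
    Rel = np.array([Rel_s + dRel, Rel_s, Rel_s]) + rng.normal(0, sig_Rel, 3)
    H = T + Rel

    def odd(x):
        # Interval most different from the mean of the other two.
        d = np.abs(x - (x.sum() - x) / 2)
        return int(d.argmax()), d.max()

    if rng.random() < k:                     # combination strategy
        choice, _ = odd(H)
    else:                                    # individual-components strategy
        iT, mT = odd(T)
        iR, mR = odd(Rel)
        choice = iT if mT > mR else iR       # "most different" signal wins
    return choice == 0
```

Note that with k = 1 and ΔT = −ΔRel (a head-centered metamer pair), the model performs at chance no matter how large the component differences are.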
Figure 4A shows the simulated ellipse orientations as a function of the weighting k between strategies. We investigated five different levels of relative noise between Rel and T, corresponding to the five lines in the figure (dashed lines represent the same ratio σ_Rel : σ_T as the solid lines, but with a factor of 10 decrease in noise). Figure 4B provides examples of the ellipses returned by the model at three of these levels of relative noise. We did not investigate a full range of noise values; the simulations shown in Figure 4 are simply meant to demonstrate proof of principle. In Figure 4A, the red lines show the results for σ_Rel < σ_T and the green lines for σ_Rel > σ_T (corresponding to the upper and lower rows, respectively, in Figure 4B). When k = 0, the model always uses the individual-components strategy, so the ellipse's major axis is oriented parallel to the axis of the less precise motion cue (see the first column of Figure 4B for examples). When k = 1, the model makes choices based on the combination strategy alone, so the thresholds lie parallel to the oblique. The “ellipse” in this case is not closed (see Figure 4B, end column). Between these values of k, the orientation of the ellipse rotates away from the cardinal axis toward the oblique. The mean deviation from cardinal across observers was 16.4°, suggesting that they used the combination strategy between 10% and 20% of the time (assuming that the internal noises of our observers fall within the range used in the simulations). 
Figure 4
 
Model results. (A) Simulated ellipse orientation as a function of k for five levels of σ_Rel and σ_T. Note that when σ_Rel = σ_T (blue line) ellipse orientation is undefined for k = 0. (B) Sample ellipses returned by the model at four values of k and three different levels of σ_Rel and σ_T. Each row illustrates one of three cases: (top) σ_Rel < σ_T, (middle) σ_Rel = σ_T, and (bottom) σ_Rel > σ_T.
The blue line in Figure 4A shows the results for σ_Rel = σ_T. Using individual components in this case (i.e., k = 0) produces a circle because the underlying input signals are equally precise. Hence orientation is undefined at this value of k. As k increases, the circle becomes stretched along the negative oblique (as shown in Figure 4B). Thus, for cases where input noises are equal, the defining feature of mixing the two strategies is a change in shape but not orientation. 
The simulations demonstrate that mixing strategies across trials produces ellipses that rotate away from the cardinal axes toward the negative oblique, so long as the input noises are unequal. The model suggests that the judgments of all observers we tested were dominated by the use of individual components, but that they also switched to a combination strategy on a smaller proportion of trials. Because ellipses were stretched along the T axis for most observers (S1–S4), the motion signals associated with the pursuit target are less precise than those associated with the relative motion of the background. 
Discussion
The data presented here provide a more direct test of the idea that the visual system contains mechanisms tuned to head-centered velocity, one that avoids the more implicit inferences drawn from the measurement of bias. We measured discrimination contours in a space dimensioned by relative motion and target motion, which allowed us to investigate whether observers had independent access to these two motion components or their head-centered sum. Within this space, lines of constant head-centered motion are parallel to the main negative diagonal, so judgments dominated by mechanisms that combine individual components should produce contours with a similar orientation. Conversely, contours oriented along the cardinal axes of the space indicate judgments based on individual components. The results provided evidence for mechanisms tuned to head-centered velocity—discrimination ellipses were oriented away from the cardinal axes and directed toward the main negative diagonal. However, ellipse orientation was closer to the cardinal axes than predicted by a strict head-centered combination of relative motion and pursuit target motion. We proposed that this pattern of data can be explained by observers switching from trial to trial between judgments based on head-centered velocity and individual components, with judgments dominated by the latter. Numerical simulation supported this idea, demonstrating that a discrimination ellipse can be oriented between the negative diagonal and the cardinal axes when observers switch between strategies. The model also showed that this type of switching behavior is most easily seen when the internal noises associated with each dimension are unequal. Lastly, the model showed that ellipses are oriented toward the axis that represents the less precise signal. 
For most of our observers, we found that ellipses were oriented toward the horizontal axis, indicating that the estimates of target motion are more variable than estimates of relative motion. 
The difference in signal precision suggested by our data agrees with recent data from Freeman, Champion, and Warren (2010). Using a more standard 2AFC speed discrimination task, they found that pursued stimuli were more difficult to discriminate than fixated stimuli. It is tempting to attribute this result to differences in the reliability of retinal and extra-retinal motion signals. However, as we have argued here, some care needs to be taken with this conclusion, primarily because moving targets are rarely pursued accurately (especially in older observers: Kolarik, Margrain, & Freeman, 2010). The residual retinal slip is therefore a viable cue to pursuit target motion, so higher thresholds associated with pursuit target motion may reflect less reliable retinal motion signals associated with retinal slip. Alternatively, observers may combine retinal-slip information with extra-retinal estimates of eye velocity. This latter strategy seems more likely because, as Welchman et al. (2009) have shown, psychometric functions are better described by head-centered motion than by either retinal slip or eye velocity on their own. Further support comes from Krukowski, Pirog, Beutter, Brooks, and Stone (2003), who examined direction discrimination for a single moving point viewed with and without pursuit. They found that direction discrimination was unaffected by the ratio of eye velocity to retinal slip. This led them to suggest that the limiting noise for direction discrimination arises at the combination stage, a claim bolstered by the fact that they found similar thresholds with and without pursuit, including a similar size of oblique effect in the two conditions. Of course, it is difficult to differentiate noise at the inputs to the combination stage from noise at the combination stage itself, especially as Krukowski et al. investigated only a single speed. 
Nevertheless, their evidence, when combined with ours, suggests that the combination stage limits direction discrimination more so than speed discrimination. If so, then measuring discrimination contours in a space spanned by the direction of relative motion and pursuit target motion may provide clearer evidence of mechanisms tuned to head-centered motion. 
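The mixture-of-strategies account can be illustrated with a small simulation. The sketch below is a minimal reconstruction of the idea, not the fitting code used here: it assumes a 2AFC task in which, on a proportion k of trials, the observer judges the summed (head-centered) signal and otherwise judges the more discriminable individual component, with Gaussian component noises σRel and σT. The function and parameter names (pc_2afc, mixture_pc, threshold, target criterion) are our own illustrative choices.

```python
import math

def pc_2afc(dprime):
    """Proportion correct in a 2AFC task given sensitivity d'."""
    return 0.5 * (1.0 + math.erf(dprime / 2.0))

def mixture_pc(r, theta, k, s_rel, s_t):
    """Proportion correct for a probe at radius r, direction theta, in the
    (relative motion, target motion) space, under a trial-by-trial mixture
    of a combination strategy (weight k) and a components strategy."""
    d_rel = r * math.cos(theta)   # relative-motion component of the probe
    d_t = r * math.sin(theta)     # target-motion component of the probe
    # Combination strategy: judge the summed (head-centered) signal;
    # its noise is the two component noises added in quadrature.
    dp_comb = abs(d_rel + d_t) / math.hypot(s_rel, s_t)
    # Components strategy: judge whichever single component is more
    # discriminable (one plausible reading of a components-based rule).
    dp_comp = max(abs(d_rel) / s_rel, abs(d_t) / s_t)
    return k * pc_2afc(dp_comb) + (1.0 - k) * pc_2afc(dp_comp)

def threshold(theta, k, s_rel, s_t, target=0.70, r_max=100.0):
    """Bisect for the radius giving the target proportion correct.
    target is kept below the ceiling 1 - k/2 that the mixture imposes
    along the null direction; returns r_max when the criterion is
    unreachable (pure combination probed along the negative diagonal)."""
    if mixture_pc(r_max, theta, k, s_rel, s_t) < target:
        return r_max
    lo, hi = 0.0, r_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mixture_pc(mid, theta, k, s_rel, s_t) < target:
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    neg, pos = 3 * math.pi / 4, math.pi / 4   # negative / positive diagonal
    for k in (0.0, 0.5, 1.0):
        print(f"k={k:.1f}: negative diagonal {threshold(neg, k, 1, 1):6.2f}, "
              f"positive diagonal {threshold(pos, k, 1, 1):.2f}")
```

With equal component noises, the components strategy alone predicts equal thresholds along both diagonals, whereas any weight on the combination strategy inflates the negative-diagonal threshold, tilting a fitted contour toward the main negative diagonal, qualitatively in line with the behavior of the model summarized in Figure 4.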
Acknowledgments
The work was supported by the Wellcome Trust. The authors would like to thank two anonymous reviewers for their comments and suggested improvements. 
Commercial relationships: none. 
Corresponding author: T. C. A. Freeman. 
Email: freemant@cardiff.ac.uk. 
Address: School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, CF10 3AT, UK. 
References
Alais D. Burr D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185–194.
Brenner E. Smeets J. B. J. Van den Berg A. V. (2001). Smooth eye movements and spatial localisation. Vision Research, 41, 2253.
Brenner E. van den Berg A. V. (1994). Judging object velocity during smooth-pursuit eye-movements. Experimental Brain Research, 99, 316–324.
Brown R. (2007). FITELLIPSE: Least squares ellipse fitting demonstration. http://www.mathworks.com/matlabcentral/fx_files/15125/1/content/demo/html/ellipsedemo.html.
Dichgans J. Wist E. Diener H. C. Brandt T. (1975). The Aubert–Fleischl phenomenon: A temporal frequency effect on perceived velocity in afferent motion perception. Experimental Brain Research, 23, 529–533.
Freeman T. C. A. (2001). Transducer models of head-centred motion perception. Vision Research, 41, 2741–2755.
Freeman T. C. A. Champion R. A. Sumnall J. H. Snowden R. J. (2009). Do we have direct access to retinal image motion during smooth pursuit eye movements? Journal of Vision, 9(1):33, 1–11, http://www.journalofvision.org/content/9/1/33, doi:10.1167/9.1.33.
Freeman T. C. A. Champion R. A. Warren P. A. (2010). Bayesian analysis of perceived speed during smooth eye pursuit. Current Biology, 20, 757–762.
Freeman T. C. A. Sumnall J. H. (2002). Motion versus position in the perception of head-centred movement. Perception, 31, 603–615.
Gegenfurtner K. R. Hawken M. J. (1995). Temporal and chromatic properties of motion mechanisms. Vision Research, 35, 1547–1563.
Haarmeier T. Bunjes F. Lindner A. Berret E. Thier P. (2001). Optimizing visual motion perception during eye movements. Neuron, 32, 527–535.
Hansen R. M. (1979). Spatial localization during pursuit eye movements. Vision Research, 19, 1213–1221.
Hillis J. M. Ernst M. O. Banks M. S. Landy M. S. (2002). Combining sensory information: Mandatory fusion within but not between senses. Science, 298, 1627–1630.
Kolarik A. J. Margrain T. H. Freeman T. C. A. (2010). Precision and accuracy of ocular following: Influence of age and type of eye movement. Experimental Brain Research, 201, 271–282.
Krukowski A. E. Pirog K. A. Beutter B. R. Brooks K. R. Stone L. S. (2003). Human discrimination of visual direction of motion with and without smooth pursuit eye movements. Journal of Vision, 3(11):16, 831–840, http://www.journalofvision.org/content/3/11/16, doi:10.1167/3.11.16.
Landy M. S. Maloney L. T. Johnston E. B. Young M. (1995). Measurement and modelling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412.
Lappin J. S. Bell H. H. Harm O. J. Kottas B. (1975). On the relation between time and space in the visual discrimination of velocity. Journal of Experimental Psychology: Human Perception and Performance, 1, 383–394.
Lindner A. Ilg U. J. (2006). Suppression of optokinesis during smooth pursuit eye movements revisited: The role of extra-retinal information. Vision Research, 46, 761–767.
Mack A. Herman E. (1978). The loss of position constancy during pursuit eye movements. Vision Research, 18, 55–62.
Mitrani L. Dimitrov G. Yakimoff G. Mateef S. (1979). Oculomotor and perceptual localization during smooth pursuit eye movements. Vision Research, 19, 609–612.
Morvan C. Wexler M. (2009). The nonlinear structure of motion perception during smooth eye movements. Journal of Vision, 9(7):1, 1–13, http://www.journalofvision.org/content/9/7/1, doi:10.1167/9.7.1.
Noorlander C. Heuts M. J. G. Koenderink J. J. (1980). Influence of the target size on the detection threshold for luminance and chromaticity contrast. Journal of the Optical Society of America, 70, 1116–1121.
Poirson A. B. Wandell B. A. Varner D. C. Brainard D. H. (1990). Surface characterizations of color thresholds. Journal of the Optical Society of America A, 7, 783–789.
Reisbeck T. E. Gegenfurtner K. R. (1999). Velocity tuned mechanisms in human motion processing. Vision Research, 39, 3267–3285.
Rotman G. Brenner E. Smeets J. B. J. (2005). Flashes are localised as if they were moving with the eyes. Vision Research, 45, 355–364.
Souman J. L. Freeman T. C. A. (2008). Motion perception during sinusoidal smooth pursuit eye movements: Signal latencies and non-linearities. Journal of Vision, 8(14):10, 1–14, http://www.journalofvision.org/content/8/14/10, doi:10.1167/8.14.10.
Souman J. L. Hooge I. T. C. Wertheim A. H. (2005). Vertical object motion during horizontal ocular pursuit: Compensation for eye movements increases with presentation duration. Vision Research, 45, 845–853.
Souman J. L. Hooge I. T. C. Wertheim A. H. (2006). Frame of reference transformations in motion perception during smooth pursuit eye movement. Journal of Computational Neuroscience, 20, 61–76.
Spering M. Gegenfurtner K. R. Kerzel D. (2006). Distractor interference during smooth pursuit eye movements. Journal of Experimental Psychology: Human Perception and Performance, 32, 1136–1154.
Sumnall J. H. Freeman T. C. A. Snowden R. J. (2003). Optokinetic potential and the perception of head-centred speed. Vision Research, 43, 1709–1718.
Tong J. Aydin M. Bedell H. E. (2007). Direction and extent of perceived motion smear during pursuit eye movement. Vision Research, 47, 1011–1019.
Tong J. Patel S. S. Bedell H. E. (2006). The attenuation of perceived motion smear during combined eye and head movements. Vision Research, 46, 4387–4397.
Turano K. A. Heidenreich S. M. (1996). Speed discrimination of distal motion during smooth pursuit eye motion. Vision Research, 36, 3507–3517.
Turano K. A. Massof R. W. (2001). Nonlinear contribution of eye velocity to motion perception. Vision Research, 41, 385–395.
Wandell B. A. (1985). Colour measurement and discrimination. Journal of the Optical Society of America A, 2, 62–71.
Welchman A. E. Harris J. M. Brenner E. (2009). Extra-retinal signals support the estimation of 3D motion. Vision Research, 49, 782–789.
Wertheim A. H. (1994). Motion perception during self-motion—The direct versus inferential controversy revisited. Behavioral and Brain Sciences, 17, 293–311.
Wichmann F. A. Hill N. J. (2001). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293.
Yee R. D. Daniels S. A. Jones O. W. Baloh R. W. Honrubia V. (1983). Effects of an optokinetic background on pursuit eye-movements. Investigative Ophthalmology & Visual Science, 24, 1115–1122.
Figure 1
 
Predicted orientation of discrimination contours for a space dimensioned by pursuit target motion and relative motion. The oblique red ellipse labeled “combination” shows predicted thresholds if observers combine the two dimensions to yield head-centered velocity. The blue ellipse labeled “components” shows predicted thresholds for observers that use individual components only. See text for details.
Figure 2
 
Discrimination contours for (top) “same” and (bottom) “opposite” conditions. Each column shows a different observer's data. Error bars correspond to 95% confidence intervals (error bars >20°/s are not plotted for clarity; associated data points are shown as open symbols). The red curves show best-fitting ellipses to the remaining data (closed symbols). Note that the axes for S4 and S5 have different scales to S1–S3.
Figure 3
 
Pursuit accuracy (retinal slip of pursuit target) averaged across the five observers for “same” (closed symbols) and “opposite” conditions (open symbols). The top panel shows accuracy as a function of direction θ (see Figure 1 for definition). The bottom panel shows accuracy as a function of interval order. Error bars represent ±1 SE.
Figure 4
 
Model results. (A) Simulated ellipse orientation as a function of k for five levels of σRel and σT. Note that when σRel = σT (blue line), ellipse orientation is undefined at k = 0. (B) Sample ellipses returned by the model at four values of k and three different levels of σRel and σT. Each row illustrates one of three cases: (top) σRel < σT, (middle) σRel = σT, and (bottom) σRel > σT.