Open Access
Article  |   February 2016
Tracking the changing feature of a moving object
Journal of Vision February 2016, Vol.16, 22. doi:10.1167/16.3.22
Julian De Freitas, Nicholas E. Myers, Anna C. Nobre; Tracking the changing feature of a moving object. Journal of Vision 2016;16(3):22. doi: 10.1167/16.3.22.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The mind can track not only the changing locations of moving objects, but also their changing features, which are often meaningful for guiding action. How does the mind track such features? Using a task in which observers tracked the changing orientation of a rolling wheel's spoke, we found that this ability is enabled by a highly feature-specific process which continuously tracks the orientation feature itself—even during occlusion, when the feature is completely invisible. This suggests that the mental representation of a changing orientation feature and its moving object are continuously transformed and updated, akin to studies showing continuous tracking of an object's boundaries alone. We also found a systematic error in performance, whereby the orientation was reliably perceived to be further ahead than it truly was. This effect appears to occur because during occlusion the mental representation of the feature is transformed beyond the veridical position, perhaps in order to conservatively anticipate future feature states.

Object tracking
Moving objects are ubiquitous in the world, and so it is perhaps not surprising that much research has focused on how the mind keeps track of object locations, even through visual interruptions such as occlusion (e.g., a cyclist riding behind a car or a person walking behind a crowd). Although a moving object is invisible during this period, it is perceived as having a continuous, persisting identity, rather than as jumping from one location to the next (for a review, see Scholl & Flombaum, 2010). Two paradigms that are commonly used to study how the mind tracks occluded objects are production tasks using a time-to-contact manipulation, in which observers have to press a button when they think an occluded object reaches the other end of the occluder (e.g., Rosenbaum, 1975), and discrimination tasks, in which the object disappears and then reappears further along its trajectory at either the correct or incorrect time, and observers have to discriminate between these possibilities in a two-alternative forced-choice task (DeLucia & Liddell, 1998). Such studies find that observers are fairly accurate at judging when an object will reach a certain location (Battaglini, Campana, & Casco, 2013; Benguigui & Bennett, 2010; Benguigui, Broderick, & Ripoll, 2004; DeLucia & Liddell, 1998; Makin & Poliakoff, 2011; Peterken, Brown, & Bowman, 1991; Rosenbaum, 1975). This is because even during occlusion the mind continues to track the object as if it were still there (the tracking hypothesis; DeLucia & Liddell, 1998), using the smooth pursuit oculomotor system to continuously allocate visuospatial attention to the invisible object's location (DeLucia, Tresilian, & Meyer, 2000; de'Sperati & Deubel, 2006; de'Sperati & Santandrea, 2005; Gilden, Blake, & Hurst, 1995; Lyon & Waag, 1995; Makin & Poliakoff, 2011). 
Observers do not, however, appear to employ the equally viable strategy of discretely estimating when an object will reach a certain location based on the visual information that came before occlusion (Hecht & Savelsburgh, 2004; Lee, 1976; Tresilian, 1995, although see Benguigui & Bennett, 2010). 
Feature tracking
The mind can also track changing features, since features often carry useful information that guides action (e.g., a revolving speedometer gauge or a changing facial expression). Several studies have investigated the perception of motion through feature space (Blaser, Pylyshyn, & Holcombe, 2000; Blaser & Sperling, 2008; Sheth, Nijhawan, & Shimojo, 2000), and two recent studies have begun looking into how this is accomplished during occlusion, studying changes in number, color, spatial accumulation (dots increasingly filling a grid), and orientation (Makin & Bertamini, 2014; Makin & Chauhan, 2014). Since similar performance levels were found across these feature dimensions and for the tracking of spatial locations, the authors concluded that tracking of these various features may rely on a common rate control mechanism (Makin & Bertamini, 2014; Makin & Chauhan, 2014). This common rate controller may then be responsible for guiding the rate of attentional allocation to the tracked item. 
Tracking the changing feature of a moving object
The existing studies on feature tracking have mainly investigated feature changes at a static location (e.g., a static circle with a rotating clock hand). But in our dynamic world, the mind must often deal with the more complex task of tracking changing features of objects that are themselves moving (e.g., tracking the changing luminance or size of an approaching vehicle, or the changing orientation of a gymnast or diver traveling through the air). How does the mind accomplish this feat? This question, to our knowledge, remains unanswered. There is some empirical evidence suggesting that the mind can do this even for a complex stimulus: Baker et al. (2001) discovered neurons that appear to be selective for representing occluded biological motion. Such a task could entail continuously keeping track of the nested features of a stimulus that is certainly more complex than the single bounded objects typically used in tracking experiments. Other studies have looked at rolling motion, although they studied observers' conscious impressions of the motion—for example, by asking them to draw the trajectory that they thought the end of a wheel traced (Isaak & Just, 1995; Proffitt, Kaiser, & Whelan, 1990). Here we ask how the changing feature of a moving object is processed, both when visible and when occluded. 
Object-specific or feature-specific tracking?
To get a grip on the problem, one would like to know how the feature is tracked in relation to its object over time. Tracking the changing feature of a moving object is an especially interesting case of tracking, since the mind must somehow reconcile changing feature information with changing object locations. It is well known that cognitive processes often operate over entire, feature-bound object representations (e.g., Kahneman & Henik, 1981; O'Craven, Downing, & Kanwisher, 1999), such as in memory (Luck & Vogel, 1997; Alvarez & Cavanagh, 2004), spatial attention (Egly, Driver, & Rafal, 1994; Scholl, 2001), and temporal attention (De Freitas, Liverence, & Scholl, 2014). On the other hand, there are also cases in which cognitive processes operate over features alone (Blaser, Pylyshyn, & Holcombe, 2000; Blaser & Sperling, 2008; Howard & Holcombe, 2008; Nobre, Rao, & Chelazzi, 2006; Sheth et al., 2000). Does tracking the changing feature of a moving object rely on object-specific or feature-specific processing? 
Discrete or continuous feature transformations?
Relatedly, one would like to know what happens to the mental representation of both the changing feature and the changing location of the surrounding object boundaries during tracking. Since the recent work by Makin and colleagues (Makin & Bertamini, 2014; Makin & Chauhan, 2014) shows that tracking ability for a changing feature at a static location closely resembles that for tracking changing object locations, this suggests that feature tracking, too, may rely on a continuous tracking mechanism. But, aside from this correlational evidence, no existing experiments have been able to speak to this possibility directly. Does the mind simulate continuous featural change during occlusion, dynamically transforming and updating a representation of the feature? And can it manage this while also tracking the changing location of the object as a whole? 
The current studies
We see the current studies as a step toward understanding more complex, dynamic tracking in the real world, where both feature and object information can change simultaneously. To this end, we created a new paradigm using orientation as a case study, in which observers were required to keep track of the changing orientation of a rolling wheel as it went behind an occluder. The wheel then reemerged, but only partially, so that observers could still not see its feature. At this point, observers provided an exact estimate of the feature's orientation. The increased sensitivity of this continuous recall measure allowed us to detect whether there were any systematic biases in observers' responses. Furthermore, a modeling approach in our analyses enabled us to measure the precision of tracking across the various experimental manipulations. Finally, eye-tracking measures offered a window into observers' mental representations during occlusion, when the stimulus was invisible. 
Six experiments addressed the extent to which tracking the dynamic feature of a moving object is feature- or object-specific, as well as whether it is continuous or discrete. Experiments 1A and 1B found that observers were able to track the changing feature of a moving object at various speeds of motion and, furthermore, that they demonstrated a significant feature displacement effect—they perceived the feature to be further along its trajectory than it truly was. We found similar feature-tracking performance for stationary objects (Experiment 2A) and when the feature rotated in the direction opposite to the one implied by the object's motion (Experiment 2B), suggesting that the tracking process employed in these experiments was highly feature specific (not influenced by the object's spatial behavior). Finally, by manipulating expectations (Experiment 3A) and using eye movements as a window into observers' mental representations (Experiment 3B), we discovered that both the changing orientation of the wheel and its changing location were continuously transformed mentally during occlusion, even though they were invisible during this period. 
Experiment 1A: Tracking the changing feature of a moving object
Experiment 1A investigated how observers track the changing feature of a moving object, and Experiment 1B replicated these results while looking at various motion speeds. 
Method
Paradigm
The paradigm consisted of a wheel-like object with a “spoke” in its wheel that rolled across a computer monitor display, changing orientation as it did so, and then went behind an occluder. The wheel then emerged from the opposite end of the occluder, but only partially, so that observers could still not see the orientation of the spoke at that location (Figure 1A). Observers had to predict its true orientation when the wheel stopped by manually adjusting the orientation of a randomly oriented spoke that appeared soon after the wheel stopped. 
Figure 1
 
(A) Depiction (not to scale) of the feature tracking task in Experiment 1A. (B) Mixture model fits of performance for each observer, showing the mean probability of their prediction errors, which tended toward forward displacement. The histogram shows the distribution of responses across all observers.
Observers
Thirteen observers (mean age = 24 years; eight female) with normal or corrected visual acuity completed a 60-min session in exchange for £10 payment. In the absence of any previous studies using the paradigm created for the present experiments, we began with the heuristic assumption that the required sample size would be comparable to those of previous studies on object persistence (e.g., Liverence & Scholl, 2015; Scholl & Pylyshyn, 1999). In all studies, observers gave written informed consent before testing. All experimental protocols were approved by the University of Oxford Central University Research Ethics Committee and were carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki (pre-2013 version). 
Apparatus and stimuli
We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study, in line with the recommendation by Simmons, Nelson, and Simonsohn (2012). The sizes of all the stimuli were identical across experiments. Stimuli were created in MATLAB using the Psychophysics Toolbox libraries (Brainard, 1997; Pelli, 1997) and presented on a Dell personal computer. Observers sat in a dimly lit booth at 74 cm from the monitor (22-in. Samsung SyncMaster 2233; resolution: 1680 × 1050 pixels; refresh rate: 60 Hz; screen width: 47 cm). A chin rest was used to stabilize observers' heads. Gaze locations were continuously recorded with a video-based eye tracker at 500 Hz (EyeLink 1000, SR Research, Ontario, Canada). On each trial, a 2.69° green disc with a randomly oriented blue spoke centered on it moved rightwards from the mid-left edge of the screen (with the disc initially centered at 1.08° on the x-axis) at 3.90°/s for 6.82 s. The spoke consisted of a 1.53° × 0.15° bar with a 0.34° × 0.76° ellipse attached to its end. As the disc moved, the spoke rotated at 120°/s around the center of the circle, giving the impression of a rolling wheel. The wheel moved for 4.95 s before touching the left edge of a gray rectangle (7.08° × 9.69°; RGB = 150/150/150) whose left border was located at 21.47° on the x-axis. It was then progressively occluded by the rectangle. Once completely occluded, the wheel remained occluded for 1.13 s before emerging from the other end of the occluder, stopping so that only 7.06% of its surface was visible. As such, observers could see where the wheel stopped but not its spoke's current orientation. After 0.5 s, the entire wheel popped up in front of the occluder, but without its spoke. 
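The wheel's kinematics reduce to a constant translation plus a constant rotation. The following is a minimal, hypothetical Python sketch of that update rule (the original stimuli were generated in MATLAB with the Psychophysics Toolbox; the function and variable names here are our own):

```python
# Hypothetical sketch of the rolling-wheel kinematics described above.
# Parameter values are taken from the text; names are ours.
TRANSLATION_SPEED = 3.90  # disc translation, deg of visual angle per second
ROTATION_SPEED = 120.0    # spoke rotation, deg per second
START_X = 1.08            # initial x position of the disc center, deg

def wheel_state(t, start_angle=0.0):
    """Return (disc-center x position, spoke orientation) t seconds after onset."""
    x = START_X + TRANSLATION_SPEED * t
    angle = (start_angle + ROTATION_SPEED * t) % 360.0
    return x, angle
```

On these values, at the moment the wheel reaches the occluder (t = 4.95 s) the disc center sits at roughly 20.4° and the spoke has turned 594° (1.65 full revolutions) from its random starting orientation.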
Observers then pressed the spacebar key to make a randomly oriented spoke appear inside the wheel, which they could then adjust using the mouse until the orientation matched that which they thought was the true orientation of the wheel when it stopped. They clicked the left mouse button to submit their response. Each observer completed 300 experimental trials, preceded by three practice trials. Observers were instructed to “just say what you see,” and not to think of the task as a math problem nor to use any special strategies. 
Results
We excluded one observer who responded uniformly (Rayleigh's test, p = 0.420) and one observer whose mean angular error was 3 SDs beyond the group mean. This left 11 observers. For all trials, we calculated the displacement angle between the response orientation and the true orientation at the end of occlusion, which we term the bias. We then used MemToolbox (Suchow, Brady, Fougnie, & Alvarez, 2013) to fit each observer's responses with a mixture model with bias. The model treats responses as drawn from a mixture of two distributions: the probability of correctly recalling the orientation (with Gaussian error) and the probability of randomly guessing the orientation. The model's bias term ensures that the central tendency of the data is not fixed at zero. For each observer, we extracted their mean bias, as well as their proportion of guesses and precision, in line with recent efforts to understand the precision of the visual system (Holcombe, 2009; Linares, Holcombe, & White, 2009). The resulting parameter estimates were then compared using traditional statistical tests. We note that after reporting the bias in degrees of the circle, we always report in parentheses the amount of time that the bias corresponds to (obtained by dividing the bias by the rotation speed in degrees per second, then multiplying by 1,000). Finally, we also note that using just the mean or mode of errors showed the same qualitative and statistical outcomes (i.e., all tests reported here remained significant). 
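Concretely, the bias computation and the degrees-to-milliseconds conversion described above amount to the following (a minimal Python sketch of the arithmetic; the actual model fitting used MemToolbox in MATLAB, and the function names here are ours):

```python
ROTATION_SPEED = 120.0  # deg/s spoke rotation in Experiment 1A

def displacement_error(response_deg, true_deg):
    """Signed angular difference in [-180, 180); positive values indicate
    forward displacement (overrotation) of the reported orientation."""
    return (response_deg - true_deg + 180.0) % 360.0 - 180.0

def bias_to_ms(bias_deg, rotation_speed=ROTATION_SPEED):
    """Express an angular bias as the equivalent amount of tracked time, in ms."""
    return bias_deg / rotation_speed * 1000.0
```

For example, `bias_to_ms(33.08)` returns approximately 275.67, matching the millisecond value reported for Experiment 1A.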
Only one out of 12 observers responded randomly, indicating that observers could perform the task. At the same time, a one-sample t test (two-tailed) found that observers systematically overrotated the spoke relative to its true orientation, displacing it forwards by 33.08° (or 275.67 ms) on average, t(10) = 2.64, p = 0.025, d = 1.12 (Figure 1B); nine out of 11 observers showed this positive bias. This result is consistent with that of time-to-contact production tasks with occlusion durations greater than 1 s, which find that observers perceive an object to have completed its occlusion earlier than it has in reality (Benguigui et al., 2004; Gray & Thornton, 2001; Makin & Bertamini, 2014; Makin & Chauhan, 2014; Peterken et al., 1991; Tresilian, 1995). The mean guess rate was 0.124, which is similar to the guess rates typically found in high-load working memory experiments (e.g., Bays, Catalao, & Husain, 2009; Myers, Stokes, Walther, & Nobre, 2014; Zhang & Luck, 2009). The mean precision was 50.62° (421.83 ms), which is unsurprisingly larger than the error of merely estimating when a moving textured object that is invisible for ∼1 s will reach a given location (∼300 ms; Oberfeld, Hecht, & Landwehr, 2011). One may question whether the forward displacement that accompanies this relatively high imprecision and proportion of guesses serves a useful purpose. However, there may be a number of systematic reasons for these values, including the dynamic nature of the stimulus (which has not been investigated in previous visual working memory experiments), the >0.5-s memory retention interval (although this duration is no longer than that of standard visual working memory experiments), and the fact that memory is poorer for rotation than for other kinds of motion (Price & Gilden, 2000). Furthermore, to foreshadow our findings, the bias, precision, and guess rates are quite consistent across the six experiments. 
Future work on feature tracking can determine whether the parameter values observed in the current experiments are typical for tracking of various kinds of features. 
Experiment 1B: Various motion speeds
Method
Observers
Twenty-nine new observers (mean age = 29 years; 16 female) with normal or corrected visual acuity completed a 60-min session in exchange for £10 payment. Although the effect size in Experiment 1A was very large, the current experiment tripled the number of conditions, so we conservatively doubled the sample size of Experiment 1A. 
Apparatus and stimuli
This experiment was identical to Experiment 1A, except as noted here. Randomized on each trial, the speeds of the disc's translation and the spoke's rotation were either half as fast (translation: 1.95°/s; rotation: 60°/s around the center of the circle), the same as in Experiment 1A (translation: 3.90°/s; rotation: 120°/s), or 5/3 times as fast (translation: 6.50°/s; rotation: 200°/s), with the wheel moving for a total of 13.64 s (slow), 6.82 s (medium), or 4.09 s (fast). The wheel moved for 9.90 s (slow), 4.95 s (medium), or 2.97 s (fast) before touching the left edge of the rectangle and proceeding to be occluded by it. Once completely occluded, it remained occluded for another 2.26 s (slow), 1.13 s (medium), or 0.68 s (fast) before emerging from the other end of the occluder, stopping so that only 7.06% of its surface was visible. As such, observers could see where the wheel stopped but not the current spoke orientation. 
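As a check on these design values (our own arithmetic, not a claim made in the text), multiplying each condition's rotation speed by its occlusion duration shows that the amount of veridical spoke rotation during occlusion was approximately matched across conditions:

```python
# Rotation speed (deg/s) and occlusion duration (s) per condition, from the text.
conditions = {
    "slow":   (60.0,  2.26),
    "medium": (120.0, 1.13),
    "fast":   (200.0, 0.68),
}

# Veridical rotation of the invisible spoke while the wheel is occluded:
# slow and medium give 135.6 deg; fast gives 136.0 deg.
occluded_rotation = {name: speed * dur for name, (speed, dur) in conditions.items()}
```

Matching the occluded rotation across speeds means that condition differences in angular error are unlikely to reflect differences in how far the spoke actually turned while hidden.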
Each observer saw 80 trials of each speed, preceded by two practice trials at the medium speed. Relative to Experiment 1A, all stimuli were shifted down vertically by 5.02° in order to make space for a central fixation cross, which was not task-relevant (utilized in a later experiment). 
Results
We excluded four observers for responding uniformly in any of the individual conditions (Rayleigh's test, ps = 0.786, 0.588, 0.228, and 0.138) and one observer whose mean angular error was 3 SDs beyond the group mean. This left 24 observers. We fit the same mixture model from Experiment 1 to each of the speed conditions for each observer. 
First, we wanted to determine the speed of mental rotation across the different conditions. To do this, we calculated (a) the actual amount of rotation during occlusion (true speed × occlusion duration), (b) the amount mentally rotated (error + actual amount of rotation), (c) the mental rotation speed (amount mentally rotated / occlusion duration), and (d) the speed difference between the mental speed and the actual speed (mental rotation speed − actual speed). A repeated-measures analysis of variance (ANOVA) revealed a linear increase in this speed difference across conditions: 18.51°/s (slow), 44.48°/s (medium), and 91.43°/s (fast), F(1, 23) = 35.91, p = 4 × 10−6, ηp² = 0.610. There was also a linear increase in proportional speed (mental rotation speed / actual rotation speed): 1.31 (slow), 1.37 (medium), and 1.46 (fast), F(1, 23) = 14.03, p = 0.001, ηp² = 0.379.  
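Steps (a) through (d) form a short chain of arithmetic, sketched here in hypothetical Python (not the original analysis code; the names are ours):

```python
def mental_rotation_stats(true_speed, occlusion_dur, mean_error_deg):
    """Derived quantities (a)-(d) for one speed condition.

    true_speed     -- actual spoke rotation speed (deg/s)
    occlusion_dur  -- occlusion duration (s)
    mean_error_deg -- mean signed displacement error from the mixture model (deg)
    """
    actual_rotation = true_speed * occlusion_dur        # (a)
    mental_rotation = mean_error_deg + actual_rotation  # (b)
    mental_speed = mental_rotation / occlusion_dur      # (c)
    speed_difference = mental_speed - true_speed        # (d)
    proportional_speed = mental_speed / true_speed
    return speed_difference, proportional_speed
```

Plugging in the medium condition (120°/s rotation, 1.13-s occlusion) with its 50.26° mean displacement recovers a speed difference of ≈44.5°/s and a proportional speed of ≈1.37, consistent with the values reported here.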
One-sample t tests (two-tailed) revealed that the displacement effect replicated in every speed condition—slow: 41.83° (or 697.09 ms), t(23) = 4.18, p = 3.59 × 10−4, d = 1.21; medium: 50.26° (418.83 ms), t(23) = 4.94, p = 5.48 × 10−5, d = 1.42; and fast: 62.17° (310.86 ms), t(23) = 5.78, p = 6.82 × 10−6, d = 1.67—and a repeated-measures ANOVA found that the effect scaled linearly with speed, F(1, 23) = 14.22, p = 0.001, ηp² = 0.382 (Figure 2). However, performing the same analysis on the corresponding time values for each degree error (thereby controlling for the rotational speed) actually reveals a linear decrease in error as speed condition increases, F(1, 23) = 9.85, p = 0.005, ηp² = 0.300, which could also be due to the shorter occlusion durations for higher speed conditions.  
Figure 2
 
(A–C) Depiction (not to scale) of the feature tracking task in the different speed conditions of Experiment 1B (note the difference in the angular position of the spoke among conditions). (D) Displacement errors extracted from the mixture model scaled linearly with the speed of the stimulus.
A repeated-measures ANOVA found no difference in precision across the different speed conditions—slow: 44.86° (747.73 ms), medium: 51.61° (430.07 ms), and fast: 47.93° (239.64 ms); F(1, 23) = 0.80, p = 0.379, ηp² = 0.034. However, comparing the corresponding time values for each degree error (thereby controlling for the rotational speed) actually revealed an increase in precision as speed condition increased, F(1, 23) = 78.44, p = 7.17 × 10−9, ηp² = 0.773, which again could also be due to the decreasing occlusion durations as speed condition increased. A repeated-measures ANOVA revealed no significant differences across speeds in guess rate—0.119 (slow), 0.101 (medium), and 0.150 (fast); F(1, 23) = 0.79, p = 0.384, ηp² = 0.033—and once again the guess rates were reasonable given the typical values found in memory research.  
In sum, longer occlusion intervals (or slower speeds) corresponded to larger errors and decreased precision (controlling for rotational speed), although it is hard to definitively know whether this pattern was driven by the slower mental rotation rates or the longer occlusion intervals that were intrinsic to the slower speed conditions. Future work should directly tease these factors apart by parametrically varying occlusion duration over a wide range while keeping speed constant, then measuring final spoke orientation estimates as a function of occlusion duration. Most importantly for the current purposes, however, these results suggest that the bias is not uniform, since it flexibly scaled with speed. 
Experiment 2: Object- or feature-specific tracking?
Having found that observers are able to track the dynamic feature of a moving object (albeit with a systematic forward displacement error), we next wanted to understand the extent to which this behavior is the result of processing the stimulus as a feature-bound object versus in a more feature-specific manner that tracks orientation and location in a separable fashion. This investigation would enable us to understand how the different aspects of the stimulus—location and orientation—are prioritized during tracking. 
To this end, we first compared tracking performance for moving versus stationary objects (Experiment 2A), reasoning that if participants represent the stimulus as an integrated moving object, then the double dose of forward displacement (rotation and translation) should lead to a greater forward bias in the moving condition relative to the stationary condition. In contrast, if rotation and translation are represented in a more separable fashion, then tracking performance should not differ between the moving and stationary conditions, since tracking of translation should not influence tracking of rotation. 
We then compared tracking performance for the standard moving object with that for an object that moved in the direction opposite to the one implied by its (now anticlockwise) feature rotation (Experiment 2B). We reasoned that indistinguishable performance between these conditions would provide especially strong evidence for a separable tracking process, since it would mean that even conflicting motion between feature orientations and object locations does not reduce the forward feature displacement effect (which we would now see occur in the opposite, anticlockwise direction). 
Experiment 2A: Moving versus stationary objects
Method
Observers
Twenty-seven new observers (mean age = 28 years; 12 female) with normal or corrected visual acuity completed a 60-min session in exchange for £10 payment. This sample size was chosen to match that of Experiment 1B and can be justified post hoc based on the results of that experiment. 
Apparatus and stimuli
This experiment was identical to Experiments 1A and 1B, except as noted here. All objects traveled at the same speed (3.90°/s). Each observer completed two practice trials, followed by two blocks of 150 trials each, in counterbalanced order between observers. In the standard block, the wheel moved behind the occluder (as in the previous experiments). In the stationary wheel block, the occluder moved over the wheel, which rotated at 120°/s around the center of the circle at a fixed coordinate location centered at 27.14° on the x-axis (i.e., the stopping location of the wheel in the moving wheel condition). 
The moving occluder had the same starting position, translation speed of 3.90°/s, and travel duration of 6.82 s, as did the traveling wheel in the standard block. The occluder moved from the mid-left edge of the screen, then occluded the rotating wheel for the same occlusion duration as in the standard moving wheel condition, continuing to move until the wheel then reappeared partially (and by the same amount as in the standard block) from the left border of the overlain occluder, which then stopped (Figure 3). Thus in both conditions a moving entity translated for 4.95 s before touching a stationary entity and proceeding to either become occluded by or to occlude it. Once completely occluded, the wheel remained occluded for 1.13 s before appearing from the other end of the occluder, with only 7.06% of its surface visible once the moving entity finished translating. 
Figure 3
 
(A, B) Depiction (not to scale) of the feature tracking task in the different motion conditions of Experiment 2A. (C) Displacement errors extracted from the mixture model did not differ significantly between conditions. Error bars indicate 95% confidence intervals (CIs).
Results
We excluded two observers for responding uniformly in any of the individual conditions (Rayleigh's test, ps = 0.116, 0.179) and one observer whose mean angular error was 3 SDs beyond the group mean. This left 24 observers. The mixture model was fit to each of the movement conditions for each observer. Observers' answers to the debriefing question "Do you think you over- or underrotated more in each condition?" led us to expect carryover effects between blocks. Specifically, only observers who saw the moving block first and the stationary block second reported that they overrotated in the moving block but not in the subsequent stationary block. Confirming this expectation, a 2 (condition) × 2 (block order) repeated-measures ANOVA revealed a potential trend toward a condition by block order interaction, F(1, 22) = 3.38, p = 0.079, ηp² = 0.133. Paired t tests (two-tailed) revealed that when the traveling wheel block was presented first, the displacement effect was reduced in the subsequent stationary wheel block—28.46° (or 237.17 ms) versus 2.99° (24.92 ms), t(11) = 2.89, p = 0.015, d = 0.74—whereas when the stationary wheel block was presented first, its displacement effect remained unaffected—38.87° (323.92 ms) versus 34.55° (287.92 ms), t(11) = 0.58, p = 0.572, d = 0.09. None of these effects were accompanied by differences in precision. Although we are not sure what caused the block order effect found for 83% of observers in the moving → stationary block order, one tentative possibility is that adaptation to the moving wheel somehow improved orientation discrimination in the subsequent stationary wheel block (see Clifford et al., 2001, 2002; Clifford & Wenderoth, 1999).  
In order to factor out this unanticipated order effect, our main comparison focused on only the first block presented to an observer, comparing condition between observers. One-sample t tests (two-tailed) found that the effect replicated in both the traveling wheel condition—28.46° (237.17 ms), t(11) = 2.89, p = 0.015, d = 1.18—and stationary wheel condition—34.55° (287.92 ms), t(11) = 2.21, p < 0.05, d = 0.90. An independent t test (two-tailed) found that observers were able to track the feature in both conditions, and that they displaced the feature to a similar extent in both conditions—28.46° (237.12 ms) versus 34.55° (287.92 ms), t(22) = −0.33, p = 0.745, d = −0.13 (Figure 3C). Independent t tests (two-tailed) also found no significant differences between conditions in guess rate—0.194 versus 0.094, t(22) = 0.97, p = 0.343, d = 0.40—or precision—45.43° (378.58 ms) versus 48.52° (404.33 ms), t(22) = −0.41, p = 0.685, d = −0.17. 
The fact that we did not find any differences between conditions on any of our measures shows that dynamic feature tracking can extend to different kinds of occlusion events, and that the perceptual system was consistently able to home in on feature information, forming predictions based on this information even as the object moved. Furthermore, the fact that a forward displacement effect also occurred in the stationary location condition suggests that the forward bias is one of feature tracking per se. Finally, although these results provide some evidence that orientation and spatial information were tracked in a separable fashion, Experiment 2B provided an even stronger test of this possibility. 
Experiment 2B: Object-congruent versus -incongruent feature changes
Experiment 2B directly pitted object and feature behavior against each other, by sometimes having the direction of the wheel's rotation (now anticlockwise) conflict with the direction of the object's motion (left to right, which would normally entail clockwise rotation). If dynamic feature tracking is dependent on the behavior of the surrounding boundaries of the object (entailing an integral, object-specific process), then we should expect a reduction in the feature displacement effect in this incongruent condition. By contrast, if dynamic feature tracking is more feature-specific (entailing a more separable process), then we should expect feature tracking and the associated feature displacement effect to be robust even across such an incongruity—that is, in the incongruent condition, observers should still be able to track the feature and should now exhibit forward displacement in the opposite (anticlockwise) direction. 
Method
Observers
Twenty-six new observers (Mage = 23 years; 19 female) with normal or corrected visual acuity completed a 90-min session in exchange for £15 payment. This sample size was chosen to match that of Experiments 1B and 2A and can be justified post hoc based on the results of those experiments. 
Apparatus and stimuli
The experiment was identical to Experiment 2A, except as noted here. The wheel moved behind the occluder on all trials. Each observer completed two practice trials, followed by two blocks of 150 trials each, in counterbalanced order between observers. In the congruent block, the wheel moved from left to right at 3.90°/s and went behind the occluder, with the feature rotating clockwise at 120°/s around the center of the circle as the object moved. In the incongruent block, the object moved from left to right in exactly the same way at 3.90°/s, except that the feature rotated at 120°/s around the center of the circle anticlockwise (rather than clockwise) as the object moved. Aside from the reversed direction of feature rotation, all other aspects of the trial were matched between conditions (Figure 4). That is, in both conditions the wheel moved for 4.95 s before touching the left edge of the gray rectangle and proceeding to be occluded by it. Once completely occluded, the wheel remained occluded for 1.13 s before emerging from the other end of the occluder, stopping so that only 7.06% of its surface was visible. 
Figure 4
 
(A, B) Depiction (not to scale) of the feature tracking task in the different congruency conditions of Experiment 2B (note the angular difference in spoke position between conditions). (C) Displacement errors from the mixture model (clockwise in the congruent condition, and anticlockwise in the incongruent condition) did not differ significantly between conditions. Error bars indicate 95% CIs.
For both blocks, observers had the same task as in the previous experiments: to predict the wheel's final orientation when it stopped.1 
Results
We excluded one observer for responding uniformly in any of the individual conditions (Rayleigh's test, p = 0.188), and one observer whose mean angular error was three standard deviations beyond the group mean. This left 24 observers. The mixture model was fit to each of the movement conditions for each observer. To facilitate easy comparison of the magnitude of effects between conditions, the following analyses report the absolute value of the error in the direction of rotation for each condition. For example, a bias of −21° in the incongruent condition gets reported as a bias of 21° in the anticlockwise direction, whereas a bias of 21° in the congruent condition gets reported as a bias of 21° in the clockwise direction. Observers were able to track the changing feature in both conditions, and one-sample t tests (two-tailed) found that the positive displacement effect in the direction of rotation was replicated in both the congruent condition (i.e., clockwise displacement; 41.31° [or 344.25 ms], t[23] = 3.04, p = 0.006, d = 0.88) and incongruent condition (i.e., anticlockwise displacement; 34.34° [286.17 ms], t[23] = 2.63, p = 0.015, d = 0.76). Furthermore, a paired t test (two-tailed) found no significant difference between conditions in the magnitude of displacement—41.31° (344.25 ms) versus 34.34° (286.17 ms), t(23) = 0.38, p = 0.708, d = 0.11 (Figure 4C). Therefore, we conclude that the orientation and location information were tracked in a highly separable fashion, since feature-tracking performance in the incongruent condition remained unaffected by the conflicting behavior of the stimulus. 
In line with this interpretation, a 2 (condition) × 2 (block order) repeated-measures ANOVA revealed no significant condition by block order interaction—F(1, 22) = 1.96, p = 0.175, ηp² = 0.082. Paired t tests (two-tailed) also found no significant differences between conditions in guess rate—congruent: 0.083 versus incongruent: 0.067, t(23) = 0.83, p = 0.417, d = 0.10—or precision—congruent: 51.54° (429.50 ms) versus incongruent: 53.51° (445.92 ms), t(23) = −0.60, p = 0.557, d = −0.09. 
Experiment 3: Continuous or discrete feature tracking?
Having established that tracking the changing feature of a moving object relies on a highly separable process, we next wanted to understand the manner in which the feature is tracked. It is known that the changing location of an object is continuously tracked even during occlusion (DeLucia et al., 2000; Gilden et al., 1995; Lyon & Waag, 1995; Makin & Poliakoff, 2011), but is a similar continuous tracking process applied to separable features? Although this conjecture would follow from the feature specificity of our results in the previous experiments, it is certainly not obvious whether during occlusion observers continuously rotate a dynamic representation of the moving wheel. In order to determine whether this was happening, Experiment 3A subtly (i.e., without changing the visible width of the occluder) manipulated how long observers thought the wheel was occluded, by making the wheel emerge from occlusion prematurely in one condition. If observers continuously rotate a mental representation of the wheel, then they should do so to a lesser extent when it emerges prematurely. This is because their estimates should reflect the rotation rate multiplied by the (shortened) occlusion duration. In contrast, if observers' estimates only take into account the visible information before occlusion, then they should not be influenced by whether the wheel emerges prematurely rather than at the correct time, since all the visible information before and during occlusion (including the visible width of the occluder) will have remained identical between the two conditions. 
Finally, Experiment 3B studied observers' eye movements as a clearer window into the nature of their mental representations during occlusion. 
Experiment 3A: Manipulating expectations
Method
Observers
Twenty-five new observers (Mage = 22 years; 17 female) with normal or corrected visual acuity completed a 75-min session in exchange for £15 payment. This sample size was chosen to match those of Experiments 1B, 2A, and 2B and can be justified post hoc based on the results of those experiments. 
Apparatus and stimuli
The experiment was identical to Experiment 2B, except as noted here. The wheel rotated in a manner consistent with its direction of motion (i.e., clockwise) on all trials. In one block, the object reappeared at the correct time, as in the previous experiments. In the other block, the object reappeared 350 ms prematurely (Figure 5). Thus, in both conditions the wheel moved at 3.90°/s for 4.95 s before touching the left edge of the gray rectangle and proceeding to be occluded by it. Once completely occluded, the wheel remained occluded for 1.13 s (on time) or 0.78 s (premature) before emerging from the other end of the occluder, stopping so that only 7.06% of its surface was visible. As such, in both conditions observers could see where the wheel stopped but not its spoke's current orientation. A series of increasingly specific debriefing questions confirmed that not a single observer noticed the difference between the premature and on-time conditions. Block order was counterbalanced between observers. 
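The logic of the premature manipulation follows directly from the trial parameters above: under continuous tracking, the expected rotation during occlusion is the 120°/s rotation rate multiplied by the occlusion duration. A minimal check (variable names are ours):

```python
rate = 120.0       # deg/s, feature rotation rate
on_time = 1.13     # s, full occlusion duration
premature = 0.78   # s, occlusion duration when emerging 350 ms early

rotation_on_time = rate * on_time        # expected rotation, on-time trials
rotation_premature = rate * premature    # expected rotation, premature trials
shortfall = rotation_on_time - rotation_premature  # 42 deg less rotation
```

A continuous tracker should therefore expect roughly 42° less rotation in the premature condition, whereas an account based only on pre-occlusion visible information predicts no difference between conditions.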
Figure 5
 
(A, B) Depiction (not to scale) of the feature tracking task in the different conditions of Experiment 3A. (C) Displacement errors extracted from the mixture model differed significantly between conditions. Error bars indicate 95% CIs.
Results
We excluded one observer whose mean angular error was 3 SDs beyond the group mean, leaving 24 observers. We fit the mixture model to each of the conditions for each observer. A one-sample t test (two-tailed) found that the displacement effect replicated when the object emerged on time—31.76° (264.67 ms), t(23) = 3.82, p = 8.69 × 10−4, d = 1.10—but the error was reduced in the premature condition relative to what its orientation would have been had the object emerged from occlusion on time—2.13° (17.75 ms), t(23) = 0.21, p = 0.835, d = 0.06 (Figure 5C). Furthermore, a paired t test (two-tailed) found that the magnitude of the effect (or lack thereof) differed significantly between the on-time and premature conditions—31.76° (264.67 ms) versus 2.13° (17.75 ms), t(23) = 3.39, p = 0.003, d = 0.65. These results provide evidence that the feature was dynamically tracked throughout occlusion, since observers' estimates were sensitive to how long the feature was occluded, rather than being constant across conditions. Paired t tests (two-tailed) found no differences between conditions in the proportion of guesses—on time: 0.087 versus premature: 0.130, t(23) = −1.23, p = 0.233, d = −0.24—or precision of responses—on time: 48.13° (401.08 ms) versus premature: 46.75° (389.58 ms), t(23) = 0.33, p = 0.742, d = 0.08. Although one might have expected higher precision in the premature condition (since decreasing precision accompanied increasing occlusion durations in Experiment 1B), it is possible that the premature emergence in the current experiment added some error of its own (if the tracking system had to reconcile the unexpected emergence of the feature with its current mental representation of that same feature). 
Thus, we conclude that observers mentally rotated the wheel during occlusion for as long as it was invisible, resulting in relatively less rotation for the premature condition compared to the on-time condition. This result also suggests that the main factor that matters to the feature tracking mechanism is the occlusion's temporal duration, rather than the visible width of the occluder (which indicates the spatial distance traveled). This behavior does not cohere strictly with that of an ideal observer, whose estimates should equally take into account both the distance traveled and the time spent behind the occluder, accelerating the rate of mental rotation to compensate for early arrivals in the premature condition. It might be that the continuous tracking system is set up to make predictions based on the assumption that an object will continue along at its current rate, as opposed to changing speed while still completely occluded (an event for which there is no direct visual input, and one which may be less frequent in the dynamic world). 
It should be noted that if we instead calculate the errors for the premature condition relative to the fictional orientation of a wheel whose occlusion period truly is 350 ms shorter (i.e., as if the width of the occluder truly were shortened), then a one-sample t test (two-tailed) finds that the displacement effect replicates as usual relative to this corrected, earlier arrival—44.13° (367.75 ms), t(23) = 4.37, p = 2.25 × 10−4, d = 1.26. Furthermore, a paired-sample t test (two-tailed) finds that the extent of displacement for this corrected condition does not differ significantly from that in the on-time condition—44.13° (367.75 ms) versus 31.76° (264.67 ms), t(23) = 1.42, p = 0.170, d = 0.27. 
Ideally, though, one would like to have a clearer window into how the mental representation evolves throughout the entire occlusion period. Furthermore, although the above results suggest a continuous feature tracking mechanism, they do not actually settle whether observers' mental representations tracked changes in both rotation and location, or whether observers simply rotated a static mental representation of the feature until it reemerged. 
Experiment 3B: Eye movements
Experiment 3B employed eye tracking as a clearer window into observers' mental representations during the entire tracking period. We were curious whether we would find converging evidence for a continuous tracking mechanism, whereby eye positions track both the location and orientation of the rotating, moving stimulus even during occlusion. Such a result would agree with the continuous tracking results from Experiment 3A. Previous work has found that eye movements during visual imagery resemble those during actual viewing of the same visual scene (Laeng & Teodorescu, 2002), suggesting that eye positions serve as a kind of spatial index for an internal visual image (Kosslyn, Thompson, Kim, & Alpert, 1995; Mast & Kosslyn, 2002; Pinker, 1999). It is also well-known that visual imagery is used during rotation tasks, since the amount of time it takes to discriminate the orientation of two objects increases as the orientation disparity between the objects increases—as if observers are mentally rotating the images until they match (Shepard & Cooper, 1986; Shepard & Metzler, 1971)—and this sort of mental chronometry is also found in other types of visual imagery tasks (e.g., Finke, 1989; Finke & Shepard, 1986; Kosslyn, 1973, 1994; Kosslyn, Ganis, & Thompson, 2001). Furthermore, a recent study found that the eyes follow the mental trajectory of rotation of an invisible object that was previously visible at a static location (Xu & Franconeri, 2015). 
Therefore, if visual imagery is employed when tracking the changing feature of a moving object, and if the orientation and location are tracked continuously (as suggested by Experiment 3A), then we should find that eye movements follow the changing feature of the moving object even during occlusion. Finally, we were also curious whether eye movements would provide any evidence for why we had been finding a forward feature displacement in the previous five experiments. 
Method
Observers
Twenty-seven new observers (Mage = 25 years; 12 female) with normal or corrected visual acuity completed a 75-min session in exchange for £15 payment. This sample size was chosen to match those of Experiments 1B, 2A, 2B, and 3A and can be justified post hoc based on the results of those experiments. 
Apparatus and stimuli
This experiment was identical to Experiment 3A, except as noted here. The wheel arrived on time for all trials. In one block, observers were free to move their eyes (as in the previous experiments). In the other block, they were required to fixate on a central fixation cross while performing the task in the visual periphery. Block order was counterbalanced between observers. Eye movements were recorded with a desktop mount eye tracker at 500 Hz (EyeLink 1000, SR Research, Ontario, Canada), using the Eyelink Toolbox extensions for Matlab (Cornelissen, Peters, & Palmer, 2002). Drift correction was performed before every trial. Eye-tracking data were preprocessed offline for eye blink correction. For each trial, vertical eye position was median-corrected. Eye blinks and other artifacts were then identified as time periods with missing samples, high velocity (larger than 500°/s), or high acceleration (larger than 375°/s²). Samples during artifacts (and 50 ms before and after the artifact) were removed and linearly interpolated (based on the last sample before and first sample after the removed period). Interpolation had no effect on the results, as repeating our analyses with data omitting these periods showed the same qualitative and statistical outcomes (i.e., all tests reported here remained significant). 
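The artifact-rejection and interpolation steps described above can be sketched as follows. This is a simplified reimplementation, not the authors' code; the 500 Hz sampling rate, the velocity/acceleration thresholds, and the 50 ms padding follow the text, while the function name is ours:

```python
import numpy as np

FS = 500                 # Hz, EyeLink sampling rate
PAD = int(0.05 * FS)     # 50 ms padding on each side of an artifact

def clean_trace(y, vel_thresh=500.0, acc_thresh=375.0):
    """Flag missing, high-velocity (deg/s), or high-acceleration (deg/s^2)
    samples (plus 50 ms padding) and linearly interpolate across them."""
    y = np.asarray(y, float)
    vel = np.gradient(y) * FS            # sample-to-sample velocity, deg/s
    acc = np.gradient(vel) * FS          # acceleration, deg/s^2
    bad = np.isnan(y) | (np.abs(vel) > vel_thresh) | (np.abs(acc) > acc_thresh)
    # Pad each flagged sample by 50 ms on both sides
    for i in np.flatnonzero(bad):
        bad[max(0, i - PAD):i + PAD + 1] = True
    good = ~bad
    # Linear interpolation from the surrounding clean samples
    y[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), y[good])
    return y
```

For example, a blink that leaves a run of missing samples in an otherwise smooth trace is replaced by a straight-line segment between the last clean sample before and the first clean sample after the padded gap.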
Results
Behavior
Individual trials in the fixation condition were excluded if fixations fell beyond 2° of the screen fixation point for more than 10% of fixation samples in a trial, and we excluded one observer because all their trials failed this criterion. We also excluded two observers for responding uniformly in any of the individual conditions (Rayleigh's test, ps = 0.995, 0.370). This left 24 observers, all of whom were able to provide nonrandom predictions in both conditions. We fit the mixture model to each of the conditions for each observer. A one-sample t test (two-tailed) found that the feature displacement effect replicated in the free eye movement condition—37.74° (314.50 ms), t(23) = 2.83, p = 0.009, d = 0.82. However, the same test revealed that the effect did not replicate in the fixation condition—6.40° (53.33 ms), t(23) = 0.48, p = 0.634, d = 0.14 (Figure 6A)—with the magnitude of the displacement effect (or lack thereof) differing significantly between free eyes and fixation conditions—37.74° (314.50 ms) versus 6.40° (53.33 ms), t(23) = 3.65, p = 0.001, d = 0.48. Paired t tests (two-tailed) found no significant differences between conditions in the proportion of guesses—free eyes: 0.095 versus fixation: 0.072, t(23) = 0.78, p = 0.443, d = 0.18—although responses were marginally more precise in the free eye movement condition—free eyes: 46.58° (388.17 ms) versus fixation: 52.28° (435.67 ms), t(23) = −2.05, p = 0.052, d = −0.31. The increased accuracy (i.e., lower bias) found in the fixation condition suggests that the typical forward displacement relies on dynamic eye movements. We investigated the nature of these eye movements in the following eye tracking analysis. 
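The fixation-compliance criterion above can be expressed directly. This is a sketch under the stated thresholds (gaze coordinates in degrees of visual angle; the function name is ours):

```python
import numpy as np

def exclude_fixation_trial(gaze_x, gaze_y, fix_x=0.0, fix_y=0.0,
                           radius_deg=2.0, max_frac=0.10):
    """Flag a fixation-condition trial for exclusion if gaze falls beyond
    2 deg of the fixation point on more than 10% of samples."""
    dist = np.hypot(np.asarray(gaze_x) - fix_x, np.asarray(gaze_y) - fix_y)
    return np.mean(dist > radius_deg) > max_frac
```

A trial with occasional small drifts passes, whereas a trial in which the observer looked away for more than a tenth of the samples is discarded.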
Figure 6
 
(A) Depiction of results from Experiment 3B. Feature displacement was only observed during free eye movements, not fixation. A heat map of eye positions shows that during occlusion, observers continued to track both the object's location (B) and the feature's orientation (C). Note that at the beginning of each trial, gaze is always held at the fixation point, which is above the location of the rotating object. Therefore, object tracking always starts from a positive value (see narrow red stripe at left edge of the plot). (D) Leading of the feature by the eyes during occlusion (i.e., the circular distance between the location of the eyes and that of the feature) correlated positively with the mean displacement effect for the 20 out of 24 observers who showed a significant effect of starting feature position on eye position.
Eye tracking
In addition to testing whether an observer's gaze followed the object (see Figure 6B), we were most interested in whether observers tracked the revolving feature during both presentation and occlusion. To this end, we tested whether the feature's starting angle on each trial affected vertical eye position (i.e., the component of eye motion that was orthogonal to the horizontal movement of the object, and therefore could only be influenced by feature motion within the object). At each time point, we used linear–circular regression (Jupp & Mardia, 1980) to measure the sensitivity of vertical eye position to the starting angle. If observers continuously fixated the rotating feature, we would expect vertical eye position to be highest when the feature was oriented vertically (pointing up). In this case, the preferred feature angle would be 0° (i.e., vertical feature position and vertical eye position are in perfect alignment). However, eye gaze could also consistently lead or lag the rotating feature. In such cases, the preferred angle is unknown, because it depends on the degree of leading/lagging (i.e., if eye gaze consistently leads the feature orientation by 10°, then the preferred angle would be phase-shifted to +10°). Because the preferred angle at each time point was not known a priori, our measure of sensitivity was derived from regressing both the sine and cosine of the starting angle onto the (z-scored) eye position using a general linear model (in analogy to the use of sine and cosine in linear–circular correlations, for example). In other words, at each time point t in the trial, we solved the following general linear model: y(t) = b1·sin(θ) + b2·cos(θ) + ε, where y(t) is the vector of vertical eye positions (across all trials) at time point t, θ is the vector of feature starting angles (across all trials), and b1 and b2 are the two regression coefficients of the sine and the cosine regressor, respectively. 
Sensitivity S was then calculated as the square root of the sum of squared regression coefficients: S = √(b1² + b2²). Therefore, the sensitivity at each time point t measured the degree to which vertical eye movements were predicted by the vertical position of the spoke at t. We next generated a shuffling distribution (2,000 permutations) of regression amplitudes by randomly permuting starting angles of the spoke across trials, and calculating sensitivity to starting angle for each permutation. The (within-observer) p value of the real effect was then calculated as its rank within the shuffling distribution. We transformed p values into z scores (using the inverse of the normal cumulative distribution function, with M = 0 and SD = 1), and tested z scores against 0 using t tests to measure the strength of the effect at the group level. We expected sensitivity to be significantly higher when the relationship between starting angle and eye position on each trial was preserved, compared to removing that relationship through random permutation. The regression also allowed us to estimate the preferred angle at each time point (using the inverse tangent of the regression weights for the sine and cosine of the starting angle; see Gould, Nobre, Wyart, & Rushworth, 2012). By estimating the change in preferred angle over time (throughout the trial, and during the occlusion period alone), we were able to track the angular velocity of the sinusoidal component of vertical eye position (by calculating the mean change per unit time). Our aim was to test whether angular velocity (thus defined) predicted feature displacement. As an alternative to this regression approach, we also used circular–linear correlations between feature angle (the circular variable) and vertical eye position (the linear variable). Again, we used permutation tests to assess significance (by permuting the feature angles with respect to the eye position 2,000 times). 
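The sensitivity measure and its permutation test can be sketched as follows. This is a simplified reimplementation based on the description above (one time point, trials pooled; function names are ours, not the authors' code):

```python
import numpy as np

def angle_sensitivity(eye_y, theta):
    """Regress z-scored vertical eye position onto sin and cos of the feature
    angle; return sensitivity S = sqrt(b1^2 + b2^2) and the preferred angle
    atan2(b1, b2)."""
    y = (eye_y - eye_y.mean()) / eye_y.std()
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    b1, b2, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.hypot(b1, b2), np.arctan2(b1, b2)

def permutation_p(eye_y, theta, n_perm=2000, seed=0):
    """Rank the observed sensitivity within a shuffling distribution obtained
    by permuting feature angles across trials."""
    rng = np.random.default_rng(seed)
    s_obs, _ = angle_sensitivity(eye_y, theta)
    s_null = np.array([angle_sensitivity(eye_y, rng.permutation(theta))[0]
                       for _ in range(n_perm)])
    return (np.sum(s_null >= s_obs) + 1) / (n_perm + 1)
```

If eye position genuinely follows the feature (e.g., y ≈ sin(θ) plus noise), the observed sensitivity exceeds essentially every value in the shuffling distribution, giving a small p value; shuffling destroys the angle–position relationship and collapses S toward zero.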
In the baseline period (the 1000 ms of visible feature rotation prior to the object first touching the border of the occluder), vertical eye position strongly depended on starting angle (one-sample t test [two-tailed] on z-scored sensitivity: t(23) = 34.36, p = 2.86 × 10−21, d = 14.33; circular–linear correlation coefficient rho = 0.60 ± 0.04, permutation test p = 3.1 × 10−21; Figure 6C), indicating that observers followed the rotating feature (p < 10−5 at every time point in the 5 s leading up to occlusion onset). This effect persisted during the occlusion period, during which the entire stimulus was invisible (one-sample t test [two-tailed] on z-scored sensitivity: t(23) = 13.75, p = 1.40 × 10−12, d = 5.73; rho = 0.37 ± 0.03, p = 9.8 × 10−12), although it was reduced compared to the baseline period (18.4% ± 5.1% reduction, t(23) = 3.49, p = 0.002, d = 0.73; see Figure 6C and Figure 7). Additionally, within 20 out of 24 individual observers, we saw a significant effect of starting orientation on vertical eye position during the occlusion interval (as measured against the shuffling distribution, p < 0.05 one-sided), indicating that this was a pervasive phenomenon. In these 20 observers, our estimate of the angular velocity during the baseline interval (116.1°/s ± 3.6°/s) roughly matched that of the feature (120°/s), even during occlusion (114.0°/s ± 9.5°/s; compared to baseline: t(19) = −0.199, p = 0.845, d = −0.09). Furthermore, the magnitude of the displacement effect significantly correlated with the angular velocity of the eye during occlusion (Spearman's r = 0.699, p = 0.0008), but not during the baseline period (Spearman's r = 0.039, p = 0.871). 
Figure 7
 
Vertical eye position tracks vertical feature position on single trials. Each panel in the top row shows the average vertical eye position (gray line, with shading showing SEM across 24 participants) for a different starting feature angle (90°, 180°, 270°, or 360°, with colored lines indicating vertical feature position on the screen over the course of the trial). The eye position follows the feature position closely over the course of the trial, even when the stimulus is occluded (0–1 s, gray box). The bottom row shows individual participants' vertical eye positions for the same trials (thin gray lines). The individual traces indicate that participants generally used smooth pursuit to track the feature, interrupted by occasional saccades.
Next we tested whether eye position led the feature during tracking. Instead of using the feature angle at the beginning of the trial, we recalculated the linear–circular regression using the current feature angle at each time point. If observers' eyes tracked the feature exactly, then the average preferred angle should be 0°. Systematic deviations from zero would indicate that the eyes were leading the feature (values > 0°) or lagging it (values < 0°). We found that eye position began leading the feature during the baseline period—(M ± SEM) leading: 15.4° ± 4.3°, t(23) = 3.23, p = 0.004, d = 1.35—and this lead persisted during occlusion—21.4° ± 8.2°, t(23) = 2.74, p = 0.012, d = 1.14. As with the angular velocity analysis, the amount of eye leading during occlusion correlated significantly with mean displacement (Spearman's r = 0.534, p = 0.008), but there was no correlation with the amount of eye leading during the baseline period (Spearman's r = 0.11, p = 0.60). Restricting the correlation to the 20 observers who showed a significant effect of feature starting angle on eye position led to the same result—occlusion: r = 0.522, p = 0.020; baseline: r = 0.152, p = 0.521 (Figure 6D). 
These results show that, in an analogous manner to how the smooth pursuit system allocates attention to entire object locations before and during occlusion (Barborica & Ferrera, 2003, 2004; Makin & Poliakoff, 2011; Makin, Poliakoff, & El-Deredy, 2009; Orban de Xivry, Missal, & Lefèvre, 2008; Xiao, Barborica, & Ferrera, 2007), the changing feature of a moving object is continuously mentally transformed in a manner that closely resembles the veridical behavior of a visible stimulus, even when the stimulus is occluded. These results agree with the recent study by Xu and Franconeri (2015), which found that observers continue to mentally rotate an object that was previously seen rotating at a stationary location. Furthermore, although the authors do not discuss this aspect of their results, their results are also consistent with a forward displacement error, since on the majority of trials, observers in their experiment moved their eyes beyond the true final position of the invisibly rotating object. Finally, we found that the eyes led the feature during occlusion, and that the amount of leading during occlusion (and only during occlusion) predicted the extent of forward displacement, suggesting that observers' responses were, in a sense, read from the eyes. We discuss the implications of these results for shedding light on the mechanism in the General Discussion below. 
General discussion
The current experiments investigated how the mind tracks the changing feature of a moving object. Experiments 1A and 1B found that observers were capable of doing this, although they exhibited a forward displacement error (Experiment 1A) that scaled with the speed of motion (Experiment 1B): observers reliably misperceived the feature to be further ahead than it truly was. Experiments 2A and 2B then investigated the extent to which the tracking process is object- versus feature-specific, finding that tracking was not impaired for moving versus stationary objects (Experiment 2A), nor for object-incongruent versus -congruent feature changes (Experiment 2B).2 We next investigated whether this separable tracking process is discrete or continuous. Using both a manipulation of temporal expectations (Experiment 3A) and eye tracking as a window into observers' mental representations (Experiment 3B), we found that the feature was continuously tracked, even during occlusion. 
Our forward displacement results are consistent with previous work on representational momentum (wherein the last perceived location of a suddenly disappearing object is extrapolated in the direction of motion; Freyd & Finke, 1984) and the flash lag effect (wherein an object flashed next to a moving object is perceived to lag the moving object, which is perceptually displaced in the direction of motion; Hazelhoff & Wiersma, 1924). And as in the current studies, those effects also increase with speed. There are multiple existing accounts of how representational momentum-like effects arise (e.g., Eagleman & Sejnowski, 2000; Hubbard, 2015; Kerzel, 2003; Nijhawan, 1994; Whitney & Cavanagh, 2000), and arguably the most prominent one is that these effects compensate for a ∼100 ms delay in the neural relay of motion information from photoreceptors to early visual processing regions (Berry, Brivanlou, Jordan, & Meister, 1999; Nijhawan, 1994). Such a compensatory mechanism might have played a partial role in the present feature displacement effect, although it could not fully account for our results, since there was still a significant displacement effect after we deducted 12° (= 100 ms displacement error) from each observer's average displacement in the identical conditions across the six experiments (one-sample t test [two-tailed] against zero)—26.45°, t(130) = 5.83, p = 4.20 × 10−8, d = 0.72. 
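The degree-to-millisecond conversions used throughout (including the 12° ≈ 100 ms neural-delay correction above) follow directly from the 120°/s feature rotation rate; a minimal check (the helper name is ours):

```python
RATE = 120.0  # deg/s, feature rotation rate used in the experiments

def deg_to_ms(deg):
    """Convert an angular displacement error into its temporal equivalent."""
    return deg / RATE * 1000.0

delay_ms = deg_to_ms(12.0)       # the ~100 ms neural transmission delay
residual_ms = deg_to_ms(26.45)   # ~220 ms of displacement remains after correction
```

At this rate, a 28.46° displacement corresponds to the 237.17 ms reported in Experiment 2A, and the 26.45° residual displacement corresponds to roughly 220 ms, more than double what neural delay compensation alone could explain.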
The current results also resemble findings from sensorimotor synchronization studies, in which participants must synchronize their actions with a predictable external event (for a review, see Repp, 2005). These studies often find a negative mean asynchrony, whereby participants hyper-anticipate the arrival of the external event (Miyake, 1902; Woodrow, 1932). Our results similarly resemble findings from interception studies (for a review, see Zago, McIntyre, Senot, & Lacquaniti, 2009), which find hyper-anticipatory behavior when people intercept or avoid collision with moving objects, as well as studies of motion tracking in sports, which find that athletes from various ball sports (e.g., tennis, baseball, cricket, squash) make predictive saccades to the locations where they expect the ball to land next (Bahill & LaRitz, 1984; Hayhoe, McKinney, Chajka, & Pelz, 2012; Land & Furneaux, 1997; Land & McLeod, 2000). Furthermore, akin to the representational momentum literature, all three literatures often attribute these effects to neural compensation for the delay of transforming sensory signals into timed motor responses. 
Since the forward displacement effect was only observed during free eye movements but not during fixation, and since the amount of eye leading during occlusion predicted the extent of displacement, we conclude that eye positions served as a spatial index for an internal visual image (Kosslyn et al., 1995; Mast & Kosslyn, 2002; Pinker, 1999) that was dynamically rotated (Shepard & Cooper, 1986; Shepard & Metzler, 1971). Thus, it may be that a central rate controller (Makin & Bertamini, 2014; Makin & Chauhan, 2014) is responsible for driving representational momentum of the eyes, whose positions are then used to make predictions about the feature. Such an account would also predict a lack of forward displacement in the fixation condition, as was observed, since in this condition the eyes could not move. 
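The eye-lead measure invoked here amounts to a signed circular distance between the eye's angular position on the wheel and the feature's angular position. As a hypothetical illustration (not the authors' analysis code), such a distance can be computed by wrapping the raw difference into (−180°, 180°], so that positive values mean the eyes lead the feature:

```python
def signed_angular_diff_deg(eyes_deg, feature_deg):
    """Signed circular difference eyes - feature, wrapped to (-180, 180].

    Positive values mean the eyes lead the feature along the rotation;
    negative values mean they trail it.
    """
    d = (eyes_deg - feature_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# The wrap handles the 360 -> 0 crossing correctly:
signed_angular_diff_deg(10.0, 350.0)   # eyes 20 deg ahead
signed_angular_diff_deg(350.0, 10.0)   # eyes 20 deg behind
```

Averaging this quantity over the occlusion period would give a per-observer eye-lead score that can be correlated with the displacement effect.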
Potential mechanism
There appear to be at least four possible explanations for how the forward displacement effect arises: 
  1.  
    The dynamic representation of the rolling wheel accelerates during occlusion, perhaps due to a speeding up of a central rate controller that controls the eyes (Makin & Bertamini, 2014; Makin & Chauhan, 2014). This interpretation seems unlikely, since (i) controlling for speed, errors were not multiplicatively larger with the increasing occlusion durations of Experiment 1B, and (ii) we found no evidence that angular velocity accelerates during occlusion (Experiment 3B).
  2.  
    There is a uniform bias, whereby displacement is constant regardless of the occlusion duration. This interpretation appears to be ruled out, since (i) the premature condition of Experiment 3A showed a reduced forward displacement for a shorter occlusion duration, and (ii) longer occlusion durations in Experiment 1B were associated with larger forward displacements.
  3.  
    A rate controller (Makin & Bertamini, 2014; Makin & Chauhan, 2014) runs too fast during mental tracking. This interpretation seems most consistent with our forward displacement results, including the larger errors found for longer occlusion periods (Experiments 1B and 3A), with a linear relationship in Experiment 1B. It is the combination of positive evidence for this account (and negative evidence for the others) that leads us to favor it, rather than any single piece of evidence on its own. Furthermore, this account is also consistent with previous studies that have used production tasks with a time-to-contact manipulation, which find a linear relationship between response time and occlusion duration. That is, in those studies observers increasingly overanticipate when an occluded object will reach a given point, as occlusion duration increases (e.g., Benguigui et al., 2004; Makin & Bertamini, 2014; Makin & Chauhan, 2014; Tresilian, 1995), a result that has also been explained as a type of representational momentum (Gray & Thornton, 2001). That said, in order to definitively arbitrate among the three accounts above, future studies could parametrically vary occlusion over a wide range while keeping speed constant (e.g., as in Makin & Chauhan, 2014), then measure the final spoke orientation estimates as a function of occlusion duration.
  4.  
    The eyes continue to pursue the feature along its circular trajectory after it disappears, overshooting it for 200–250 ms (Kerzel, 2006). While such overshooting effects cannot provide an explanation for the full range of representational momentum effects (Hubbard, 2005, 2015), they are still directionally consistent with the current results. However, the fact that in the current experiments the eyes led the feature throughout the occlusion period (not just at the end) makes an overshooting explanation seem unlikely. Furthermore, typical overshooting studies (e.g., Kerzel, 2003) involve unpredictable disappearances of an object, in which case it is not surprising that the eyes naturally overshoot the last seen object location. In contrast, the stopping location of the wheel in the current studies was held constant across trials, making it fully predictable. Nonetheless, one might still argue that some overshooting might have occurred in feature space. Although in the current studies we stopped recording eye movements at the point of disocclusion, future work could extend the eye tracking period beyond this point to assess whether any overshooting occurs in feature space, and if so, to what extent it can account for the magnitude of forward displacement. We expect that any such overshooting would only account for a portion of the bias, although this of course remains an empirical question.
Speculations about adaptive benefits
Why does the perceptual system track a changing feature continuously? One possibility is that continuous tracking places the system in a better position to react swiftly to unexpected deviations in a target's behavior, such as a sudden change in a featural state (which is more likely when tracking animate agents). There might also be a computational and/or memory-storage advantage to tracking dynamic features continuously rather than intermittently, and future work could investigate this possibility, perhaps using dual-task and memory recall paradigms. A related question is whether tracking also behaves this way in real-world environments, where additional information might be incorporated to guide prediction. The fact that observers employed the continuous tracking mechanism in our experiments despite repetitive task parameters suggests that it may, but ultimately this possibility would need to be tested in more ecological settings. 
We have observed a forward displacement in feature tracking. This naturally raises a further question: Why would this effect occur in the first place? On a conservative prediction account, observers benefit from overanticipating future states along a dynamic feature trajectory, especially when those features guide action (see also Hubbard, 2015). One adaptive way to gain this benefit would be to see (in the mind's eye) such a future state before it truly emerges. 
Acknowledgments
The research was supported by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre based at Oxford University Hospitals Trust and Oxford University, by a Wellcome Trust Senior Investigator Award (ACN) 104571/Z/14/Z, and by the Rhodes Trust. We thank Brian Scholl, Charles Spence, Nicholas Yeung, George Alvarez, Jordan Suchow, Yaffa Yeshurun, and especially Alexis Makin and Alex Holcombe for helpful suggestions. J. De Freitas and A. C. Nobre developed the study concept, and contributed to the study design. Testing, data collection, and data analysis were performed by J. De Freitas. N. E. Myers contributed to data analysis and interpretation. J. De Freitas and N. E. Myers drafted the manuscript under the supervision of A. C. Nobre. All authors approved the final version of the manuscript for submission. 
Commercial relationships: none. 
Corresponding author: Anna C. Nobre. 
Email: kia.nobre@ohba.ox.ac.uk. 
Address: Department of Experimental Psychology, Oxford Centre for Human Brain Activity, and Department of Psychiatry, University of Oxford, Oxford, United Kingdom. 
References
Alvarez G. A., Cavanagh P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15, 106–111.
Bahill A. T., LaRitz T. (1984). Why can't batters keep their eyes on the ball? American Scientist, 72, 249–253.
Baker C., Keysers C., Jellema T., Wicker B., Perrett D. (2001). Neuronal representation of disappearing and hidden objects in temporal cortex of the macaque. Experimental Brain Research, 140, 375–381.
Barborica A., Ferrera V. P. (2003). Estimating invisible target speed from neuronal activity in monkey frontal eye field. Nature Neuroscience, 6, 66–74.
Barborica A., Ferrera V. P. (2004). Modification of saccades evoked by stimulation of frontal eye field during invisible target tracking. Journal of Neuroscience, 24, 3260–3267.
Battaglini L., Campana G., Casco C. (2013). Illusory speed is retained in memory during invisible motion. i-Perception, 4, 180–191.
Bays P. M., Catalao R. F. G., Husain M. (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9 (10): 7, 1–11, doi:10.1167/9.10.7. [PubMed] [Article]
Benguigui N., Bennett S. J. (2010). Ocular pursuit and the estimation of time-to-contact with accelerating objects in prediction motion are controlled independently based on first-order estimates. Experimental Brain Research, 202, 327–339.
Benguigui N., Broderick M., Ripoll H. (2004). Age differences in estimating arrival-time. Neuroscience Letters, 369, 197–202.
Berry M. J., II, Brivanlou I. H., Jordan T. A., Meister M. (1999). Anticipation of moving stimuli by the retina. Nature, 398, 334–338.
Blaser E., Pylyshyn Z. W., Holcombe A. O. (2000). Tracking an object through feature space. Nature, 408, 196–199.
Blaser E., Sperling G. (2008). When is motion “motion”? Perception, 37, 624–627.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Clifford C. W. G. (2002). Perceptual adaptation: Motion parallels orientation. Trends in Cognitive Sciences, 6, 136–143.
Clifford C. W., Wenderoth P. (1999). Adaptation to temporal modulation can enhance differential speed sensitivity. Vision Research, 39, 4324–4331.
Clifford C. W. G., Wyatt A. M., Arnold D. H., Smith S. T., Wenderoth P. (2001). Orthogonal adaptation improves orientation discrimination. Vision Research, 41, 151–159.
Cornelissen F., Peters E., Palmer J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34, 613–617.
De Freitas J., Liverence B. M., Scholl B. (2014). Attentional rhythm: A temporal analogue of object-based attention. Journal of Experimental Psychology: General, 143, 71–76.
DeLucia P. R., Liddell G. W. (1998). Cognitive motion extrapolation and cognitive clocking in prediction motion tasks. Journal of Experimental Psychology: Human Perception and Performance, 24, 901–914.
DeLucia P. R., Tresilian J. R., Meyer L. E. (2000). Geometrical illusions can affect time-to-contact estimation and mimed prehension. Journal of Experimental Psychology: Human Perception and Performance, 26, 552–567.
de'Sperati C., Deubel H. (2006). Mental extrapolation of motion modulates responsiveness to visual stimuli. Vision Research, 46, 2593–2601.
de'Sperati C., Santandrea E. (2005). Smooth pursuit-like eye movements during mental extrapolation of motion: The facilitatory effect of drowsiness. Cognitive Brain Research, 25, 328–338.
Eagleman D. M., Sejnowski T. J. (2000). Motion integration and postdiction in visual awareness. Science, 287, 2036–2038.
Egly R., Driver J., Rafal R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123, 161–177.
Finke R. A. (1989). Principles of mental imagery. Cambridge: MIT Press.
Finke R. A., Shepard R. N. (1986). Visual functions of mental imagery. In Boff K. R. Kaufman L. Thomas J. P. (Eds.) Handbook of perception and human performance (pp. 37.1–37.55). New York: Wiley.
Fougnie D., Alvarez G. A. (2011). Object features fail independently in visual working memory: Evidence for a probabilistic feature-store model. Journal of Vision, 11 (12): 3, 1–12, doi:10.1167/11.12.3. [PubMed] [Article]
Freyd J. J., Finke R. A. (1984). Representational momentum. Journal of Experimental Psychology: Learning, Memory, & Cognition, 10, 126–132.
Garner W. R. (1974). The processing of information and structure. Potomac, MD: Lawrence Erlbaum.
Gilden D., Blake R., Hurst G. (1995). Neural adaptation of imaginary visual-motion. Cognitive Psychology, 28, 1–16.
Gould I. C., Nobre A. C., Wyart V., Rushworth M. F. (2012). Effects of decision variables and intraparietal stimulation on sensorimotor oscillatory activity in the human brain. The Journal of Neuroscience, 32, 13805–13818.
Gray R., Thornton I. M. (2001). Exploring the link between time to collision and representational momentum. Perception, 30, 1007–1022.
Hayhoe M. M., McKinney T., Chajka K., Pelz J. B. (2012). Predictive eye movements in natural vision. Experimental Brain Research, 217, 125–136.
Hazelhoff F., Wiersma H. (1924). Die Wahrnehmungszeit [Translation: The time of perception]. Zeitschrift für Psychologie, 96, 171–188.
Hecht H., Savelsbergh G. (2004). Theories of time-to-contact judgment. In Hecht H. Savelsbergh G. (Eds.) Time-to-contact (pp. 1–11). Amsterdam, The Netherlands: Elsevier.
Holcombe A. O. (2009). Seeing slow and seeing fast: Two limits on perception. Trends in Cognitive Sciences, 13, 216–221.
Howard C. J., Holcombe A. O. (2008). Tracking the changing features of multiple objects: Progressively poorer perceptual precision and progressively greater perceptual lag. Vision Research, 48, 1164–1180.
Hubbard T. L. (2005). Representational momentum and related displacements in spatial memory: A review of the findings. Psychonomic Bulletin & Review, 12, 822–851.
Hubbard T. L. (2015). The varieties of momentum-like experience. Psychological Bulletin, 141, 1081–1119.
Isaak M. I., Just M. A. (1995). Constraints on the processing of rolling motion: The curtate cycloid illusion. Journal of Experimental Psychology: Human Perception and Performance, 21, 1391–1408.
Jupp P. E., Mardia K. V. (1980). A general correlation coefficient for directional data and related regression problems. Biometrika, 67, 163–173.
Kahneman D., Henik A. (1981). Perceptual organization and attention. In Kubovy M. Pomerantz J. (Eds.) Perceptual organization (pp. 181–211). Hillsdale, NJ: Erlbaum.
Kerzel D. (2003). Centripetal force draws the eyes, not memory of the target, toward the center. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29, 458–466.
Kerzel D. (2006). Why eye movements and perceptual factors have to be controlled in studies on “representational momentum.” Psychonomic Bulletin & Review, 13, 166–173.
Kosslyn S. M. (1973). Scanning visual images: Some structural implications. Cognitive Psychology, 8, 441–480.
Kosslyn S. M. (1994). Image and brain. Cambridge: Harvard University Press.
Kosslyn S. M., Ganis G., Thompson W. L. (2001). Neural foundations of imagery. Nature Reviews Neuroscience, 2, 635–642.
Kosslyn S. M., Thompson W. L., Kim I. J., Alpert N. M. (1995). Topographic representations of mental images in primary visual cortex. Nature, 378, 496–498.
Kwon O. S., Tadin D., Knill D. C. (2015). Unifying account of visual motion and position perception. Proceedings of the National Academy of Sciences, 112, 8142–8147.
Laeng B., Teodorescu D. S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26, 207–231.
Land M. F., Furneaux S. (1997). The knowledge base of the oculomotor system. Philosophical Transactions of the Royal Society B: Biological Sciences, 352, 1231–1239.
Land M. F., McLeod P. (2000). From eye movements to actions: how batsmen hit the ball. Nature Neuroscience, 3, 1340–1345.
Lee D. N. (1976). Theory of visual control of braking based on information about time-to-collision. Perception, 5, 437–459.
Linares D., Holcombe A. O., White A. L. (2009). Where is the moving object now? Judgments of instantaneous position show poor temporal precision (SD = 70 ms). Journal of Vision, 9 (13): 9, 1–14, doi:10.1167/9.13.9. [PubMed] [Article]
Liverence B. M., Scholl B. J. (2015). Object persistence enhances spatial navigation: A case study in smartphone vision science. Psychological Science, 26, 955–963.
Luck S., Vogel E. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281.
Lyon D. R., Waag W. L. (1995). Time-course of visual extrapolation accuracy. Acta Psychologica, 89, 239–260.
Makin A. D. J., Bertamini M. (2014). Do different types of dynamic extrapolation rely on the same mechanism? Journal of Experimental Psychology: Human Perception and Performance, 40, 1566–1579.
Makin A. D. J., Chauhan T. (2014). Memory-guided tracking through physical space and feature space. Journal of Vision, 14 (13): 10, 1–15, doi:10.1167/14.13.10. [PubMed] [Article]
Makin A. D. J., Poliakoff E. (2011). Do common systems control eye movements and motion extrapolation? The Quarterly Journal of Experimental Psychology, 64, 1327–1343.
Makin A. D. J., Poliakoff E., El-Deredy W. (2009). Tracking visible and occluded targets: Changes in event related potential during motion extrapolation. Neuropsychologia, 47, 1128–1137.
Mast F. W., Kosslyn S. M. (2002). Eye movements during visual mental imagery. Trends in Cognitive Sciences, 6, 271–272.
Miyake I. (1902). Researches on rhythmic activity. Studies From the Yale Psychological Laboratory, 10, 1–48.
Myers N. E., Stokes M. G., Walther L., Nobre A. C. (2014). Oscillatory brain state predicts variability in working memory. Journal of Neuroscience, 34, 7735–7743.
Nijhawan R. (1994). Motion extrapolation in catching. Nature, 370, 256–257.
Nobre A. C., Rao A., Chelazzi L. (2006). Selective attention to specific features within objects: Behavioral and electrophysiological evidence. Journal of Cognitive Neuroscience, 18, 539–561.
Oberfeld D., Hecht H., Landwehr K. (2011). Effects of task-irrelevant texture motion on time-to-contact judgments. Attention, Perception, & Psychophysics, 73, 581–596.
O'Craven K., Downing P., Kanwisher N. (1999). fMRI evidence for objects as the units of attentional selection. Nature, 401, 584–587.
Orban de Xivry J.-J., Missal M., Lefèvre P. (2008). A dynamic representation of target motion drives predictive smooth pursuit during target blanking. Journal of Vision, 8 (15): 6, 1–13, doi:10.1167/8.15.6. [PubMed] [Article]
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics transforming numbers into movies. Spatial Vision, 10, 437–442.
Peterken C., Brown B., Bowman K. (1991). Predicting the future position of a moving target. Perception, 20, 5–16.
Pinker S. (1999). How the mind works. New York: Oxford University Press.
Price C. M., Gilden D. L. (2000). Representations of motion and direction. Journal of Experimental Psychology: Human Perception and Performance, 26, 18–30.
Proffitt D. R., Kaiser M. K., Whelan S. M. (1990). Understanding wheel dynamics. Cognitive Psychology, 22, 342–373.
Repp B. H. (2005). Sensorimotor synchronization: A review of the tapping literature. Psychonomic Bulletin & Review, 12, 969–992.
Rosenbaum D. A. (1975). Perception and extrapolation of velocity and acceleration. Journal of Experimental Psychology: Human Perception and Performance, 1, 395–403.
Scholl B. J. (2001). Objects and attention: The state of the art. Cognition, 80, 1–46.
Scholl B. J., Flombaum J. I. (2010). Object persistence. In Goldstein B. (Ed.) Encyclopedia of perception, Volume 2 (pp. 653–657). Thousand Oaks, CA: Sage.
Scholl B. J., Pylyshyn Z. W. (1999). Tracking multiple items through occlusion: Clues to visual objecthood. Cognitive Psychology, 38, 259–290.
Shepard R. N., Cooper L. A. (1986). Mental images and their transformations. Cambridge: MIT Press.
Shepard R. N., Metzler J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
Sheth B. R., Nijhawan R., Shimojo S. (2000). Changing objects lead briefly flashed ones. Nature Neuroscience, 3, 489–495.
Simmons J. P., Nelson L. D., Simonsohn U. (2012). A 21 word solution. Dialogue, The Official Newsletter of the Society for Personality and Social Psychology, 26, 4–7.
Suchow J. W., Brady T. F., Fougnie D., Alvarez G. A. (2013). Modeling visual working memory with the MemToolbox. Journal of Vision, 13 (10): 9, 1–8, doi:10.1167/13.10.9. [PubMed] [Article]
Tresilian J. R. (1995). Perceptual and cognitive processes in time-to-contact estimation: Analysis of prediction-motion and relative judgment tasks. Perception & Psychophysics, 57, 231–245.
Whitney D., Cavanagh P. (2000). The position of moving objects. Science, 289, 1107.
Woodrow H. (1932). The effect of rate of sequence upon the accuracy of synchronization. Journal of Experimental Psychology, 15, 357–379.
Xiao Q., Barborica A., Ferrera V. P. (2007). Modulation of visual responses in macaque frontal eye field during covert tracking of invisible targets. Cerebral Cortex, 17, 918–928.
Xu Y., Franconeri S. L. (2015). Capacity for visual features in mental rotation. Psychological Science, 26, 1241–1251.
Zago M., McIntyre J., Senot P., Lacquaniti F. (2009). Visuo-motor coordination and internal models for object interception. Experimental Brain Research, 192, 571–604.
Zhang W., Luck S. J. (2009). Sudden death and gradual decay in visual working memory. Psychological Science, 20, 423–428.
Footnotes
1  Another unpublished study, albeit with older adults, confirmed that the forward displacement effect also occurs for leftward motion with (congruent) anticlockwise rotation and that the magnitude of this effect is indistinguishable from that found for rightward motion with (congruent) clockwise rotation. This makes it unlikely that any potential difference between conditions would be due to using anticlockwise motion per se.
2  It would be interesting to investigate the conditions under which a changing feature might be processed in an integral rather than separable manner, and whether this is a balance that can be shifted (see also Fougnie & Alvarez, 2011; Garner, 1974). As an example, the system might switch between separable and integral processing depending on the extent to which object positions are reliable (see Kwon, Tadin, & Knill, 2015).
Figure 1
 
(A) Depiction (not to scale) of the feature tracking task in Experiment 1A. (B) Mixture model fits of performance for each observer, showing the mean probability of their prediction errors, which tended toward forward displacement. The histogram shows the distribution of responses across all observers.
Figure 2
 
(A–C) Depiction (not to scale) of the feature tracking task in the different speed conditions of Experiment 1B (note the difference in the angular position of the spoke among conditions). (D) Displacement errors extracted from the mixture model scaled linearly with the speed of the stimulus.
Figure 3
 
(A, B) Depiction (not to scale) of the feature tracking task in the different motion conditions of Experiment 2A. (C) Displacement errors extracted from the mixture model did not differ significantly between conditions. Error bars indicate 95% confidence intervals (CIs).
Figure 4
 
(A, B) Depiction (not to scale) of the feature tracking task in the different congruency conditions of Experiment 2B (note the angular difference in spoke position between conditions). (C) Displacement errors from the mixture model (clockwise in the congruent condition, and anticlockwise in the incongruent condition) did not differ significantly between conditions. Error bars indicate 95% CIs.
Figure 5
 
(A, B) Depiction (not to scale) of the feature tracking task in the different conditions of Experiment 3A. (C) Displacement errors extracted from the mixture model differed significantly between conditions. Error bars indicate 95% CIs.
Figure 6
 
(A) Depiction of results from Experiment 3B. Feature displacement was only observed during free eye movements, not fixation. A heat map of eye positions shows that during occlusion, observers continued to track both the object's location (B) and the feature's orientation (C). Note that at the beginning of each trial, gaze is always held at the fixation point, which is above the location of the rotating object. Therefore, object tracking always starts from a positive value (see narrow red stripe at left edge of the plot). (D) Leading of the feature by the eyes during occlusion (i.e., the circular distance between the location of the eyes and that of the feature) correlated positively with the mean displacement effect for the 20 out of 24 observers who showed a significant effect of starting feature position on eye position.
Figure 7
 
Vertical eye position tracks vertical feature position on single trials. Each panel in the top row shows the average vertical eye position (gray line, with shading showing SEM across 24 participants) for a different starting feature angle (90°, 180°, 270°, or 360°, with colored lines indicating vertical feature position on the screen over the course of the trial). The eye position follows the feature position closely over the course of the trial, even when the stimulus is occluded (0–1 s, gray box). The bottom row shows individual participants' vertical eye positions for the same trials (thin gray lines). The individual traces indicate that participants generally used smooth pursuit to track the feature, interrupted by occasional saccades.