Research Article  |  April 2009
‘Zigzag motion’ goes in unexpected directions
Stuart Anstis
Journal of Vision April 2009, Vol. 9(4):17. https://doi.org/10.1167/9.4.17
Abstract

In a novel ‘zigzag motion’ display, random dots made alternate long and short jumps, 10 mm downward and 1 mm to the right. The zigs and zags were either at right angles (differing by 90°) or in opposite directions (180°). Result: The perceived direction of motion varied with the viewing distance or spatial scale. During close-up [or distant] viewing the display appeared to move in the direction of the short [or long] jumps. When the motion was stopped after 30 s, a motion aftereffect (MAE) was seen, driven by the short jumps but not the long jumps. Therefore, the perceived direction of motion was dissociated from its aftereffect. A picture rotated alternately 5° clockwise (CW) and 1° counterclockwise (CCW) and appeared to rotate jerkily CW. When stopped, a clockwise MAE was seen, appropriate to the small 1° jumps. If the test field contained blurred, dynamic visual noise, the MAE was now CCW, appropriate to the large 5° jumps; the large jumps drove the perceived motion direction and dynamic MAE, but the small jumps drove the static MAE. Conclusion: Winner-take-all competition between pathways tuned to fast and slow movements. Their independent adaptation gave opposite static and dynamic MAEs.

Introduction
In the natural world, motions are usually smooth. When an object passes through three nearby equally spaced points P, Q, R, the position of R is typically in line with PQ, or deviates from such a line by only a small angle. Velocity also tends to vary smoothly; except under rapid acceleration, an object usually takes approximately the same time to get from P to Q as from Q to R. It is true that bounces, ricochets, and car crashes provide exceptions to these rules of thumb, but in most cases these generalizations are reliable enough to form the basis for ‘smoothness constraint’ heuristics in computational models of motion perception (Hildreth, 1984; Hildreth & Koch, 1987). 
However, this paper introduces a special kind of non-smooth apparent movement, which we shall call “zigzag motion”. In this section we shall describe our informal observations on zigzag motion, and afterwards we shall present five experiments. 
Our basic building block is a three-frame movie in which a spot jumps vertically down from P to Q, then jumps to the right to point R (Figure 1). The distance from P to Q is typically three to ten times the distance from Q to R, so the motion is L-shaped with two unequal arms. This motion pattern repeats over space, with a whole field of sparse random dots moving rigidly along the same path. It also repeats over time indefinitely, with the dots making a sequence of horizontal and vertical motions that follow a steep downward staircase. The dots did not move smoothly but were stationary for 50 ms at each position before jumping to their next position. 
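To make the geometry concrete, here is a minimal NumPy sketch of this trajectory (a reconstruction from the description above, not the authors' Adobe Director code). The 10:1 ratio of long to short jumps follows the text, each row corresponds to one 50-ms frame, and the same offsets would be applied rigidly to every dot in the field.

import numpy as np

def zigzag_path(n_jumps=20, long_jump=10.0, short_jump=1.0):
    # Even-numbered jumps go down by long_jump; odd-numbered jumps go right by short_jump.
    pos = np.zeros((n_jumps + 1, 2))          # columns: x (rightward), y (downward)
    for i in range(n_jumps):
        dx, dy = (0.0, long_jump) if i % 2 == 0 else (short_jump, 0.0)
        pos[i + 1] = pos[i] + [dx, dy]
    return pos

print(zigzag_path()[:5])   # (0,0) -> (0,10) -> (1,10) -> (1,20) -> (2,20): a steep downward staircase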
Figure 1
 
(a) Trajectory of a rigid random-dot field that makes alternate long jumps downward and short jumps to the right. Each arrow is one movie frame. (b) If the long jumps exceed Dmax, then the dots appear to drift to the right. (c) Identical display at a smaller spatial scale appears (d) to be drifting downward.
What does this motion look like? One might predict a perceived motion tangent to the staircase, with some added jitter. Instead, we found in Experiment 1 that the perceived direction of motion depended critically upon the spatial scale or (which comes to the same thing) the viewing distance. Movies 1–3 illustrate this. Seen from close up (Movies 2 and 3), the short horizontals predominate and the dots appear to move to the right. Seen from further away (Movie 1), the long verticals predominate and the dots appear to move downward. At intermediate distances, one does not see a vector that swings around from horizontal through oblique to vertical; instead one perceives two transparent motions, with the horizontal fading out and the vertical gaining in strength as one moves away from the screen. The motion can also look noisy and ambiguous at these intermediate distances (Movie 2).
 
Movies 1–3
 
These movies are identical, at respective magnifications of ×1, ×2, ×4. Yet #1 seems to move downward, #3 to the right, and #2 in between. View them in Loop mode from different distances. Also, fixate a point and adapt, then notice that they all give motion aftereffects (MAE) to the left.
Like Goldilocks, the visual system ignores motion paths that are ‘too long’ or ‘too short’ but responds to motion paths that are ‘just right.’ We shall conclude that parts of the visual system are critically tuned to preferred jump sizes. On the other hand, Boulton and Hess (1990) measured the optimal spatial displacement for detecting the apparent motion of a narrow-band spatial stimulus. This optimum was equivalent to 1/6 of the spatial wavelength of the stimulus for low-contrast stimuli and 1/5 of the spatial wavelength for higher contrast stimuli, suggesting that the spatial subunits of motion detectors may be separated by less than 1/4 of a spatial wavelength. Our optimum, however, was not tied to the spatial wavelength. If it were, it would have remained constant under changes in magnification, but in fact it varied systematically with the magnification. We do not know why our findings differed from theirs, except that their stimuli and procedures were very different from ours. 
In Experiment 2 we increased the angle between the long and short jumps from 90° to 180°. Thus the random-dot field repetitively jumped to the right through (say) 10 mm and then back to the left through 1 mm, so the net motion was 9 mm to the right. Results were similar. From close up, the short jumps predominated and the dot field appeared to drift to the left. Seen from further away, the long jumps predominated and the dot field appeared to drift to the right. 
Motion aftereffect
If one adapts for 30 s to a moving pattern, and the motion is then stopped, a negative aftereffect of motion (MAE) is seen in the opposite direction. For a brief review, see Anstis, Verstraten, and Mather (1998), and for a more comprehensive survey, see the book edited by Mather, Verstraten, and Anstis (1998). In most cases, the MAE is in a direction opposite to the perceived direction of the adapting motion. However, there are exceptions, particularly with successive presentations (Riggs & Day, 1980; Verstraten, Fredericksen, Grüsser, & van de Grind, 1994), with transparent adapting motions (Shioiri & Matsumiya, 2006; van der Smagt, Verstraten, & van de Grind, 1999), and with selective attention (Culham, 2003). Zigzag motion provides another exception. When we adapted to zigzag motion (here, short jump left, long jump right) of a sparse random-dot field, the dots did shift to the right at a mean rate of (say) 9 mm per two timeframes, and when viewed from a long viewing distance, they did appear to drift to the right. However, when the motion was stopped, an MAE was seen to the right, in the same direction as the perceived adapting motion. Seen from close up, the same adapting stimulus now appeared to drift to the left; and it also gave an MAE to the right. In Experiments 3 and 4 below, we shall also show that these effects are not limited to translating random dots. Instead, we used a picture (Botticelli's Birth of Venus) that rotated back and forth. To anticipate, we found that static and dynamic test fields elicited MAEs in opposite directions, appropriate to the short and long jumps, respectively. Thus, MAE direction can be dissociated from the perceived direction of the adapting movement (Verstraten, Fredericksen, & van de Grind, 1994). Finally, Experiment 5 showed that perceived motion coherence could also vary as a function of viewing distance. 
There are many studies on motion aftereffects (MAEs) following adaptation to more than one direction. After adaptation to motion in one direction, a static test field generally shows a motion aftereffect (MAE) in the opposite direction. This led Sutherland (1961) to propose an opponent-motion model, in which motion detectors for opposite directions feed into a common path. He suggested that ratios of firing rates of motion detectors tuned to opposite directions determine the direction of the MAE. This model was called into question by various studies of MAEs from double motions. If two fields of random dots slide transparently over each other in opposite directions, both motions are seen, but when the motion is stopped no MAE is seen. If the two fields move transparently at right angles to each other, the resulting MAE is opposite to the vector sum of the two adapting motions. For instance, adaptation to two separate fields that move up-right and up-left, either transparently or in temporal alternation (Riggs & Day, 1980; Verstraten et al., 1994), yields a single downward MAE. If the two patterns move at different speeds, MAE direction can be predicted by an inverse vector average, using the observer's motion sensitivity to each individual pattern as vector magnitudes (Alais, Verstraten, & Burr, 2005; Verstraten, Fredericksen et al., 1994). In addition, following adaptation to two superimposed fields that drift orthogonally at different speeds, the direction of the MAE depends upon the nature of the test field (van der Smagt, Verstraten, & van de Grind, 1999). On a static test field the direction of the MAE is mainly opposite to the slower adapting speed, and on a dynamic twinkling test field like a detuned TV, the direction of the MAE is mainly opposite to the faster adapting speed. The two MAEs can differ in direction by as much as 50°. This suggests that fast and slow neural channels can adapt independently, followed by an additive gain control stage. 
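One plausible reading of this inverse-vector-average rule (our illustration, not code from Alais et al.) is sketched below: unit vectors along each adapting direction are weighted by the observer's sensitivity to that pattern and summed, and the MAE is predicted to point opposite to the sum. The sensitivity values here are made up purely for illustration.

import numpy as np

def predicted_mae_direction(directions_deg, sensitivities):
    # Directions in degrees, 0 = up, 90 = right (the convention used in the Results below).
    a = np.radians(np.asarray(directions_deg, dtype=float))
    s = np.asarray(sensitivities, dtype=float)
    vx, vy = np.sum(s * np.sin(a)), np.sum(s * np.cos(a))   # sensitivity-weighted vector sum
    adapt = np.degrees(np.arctan2(vx, vy))                  # direction of that sum
    return (adapt + 180.0) % 360.0                          # MAE points the opposite way

# Up-left (315°) and up-right (45°) with equal sensitivities -> 180°, a downward MAE,
# matching the Riggs & Day / Verstraten et al. result described above.
print(predicted_mae_direction([315, 45], [1.0, 1.0]))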
Thus the two adapting motions give rise to a single direction in the MAE. Sutherland's model cannot explain this, and it has been superseded by Mather's model (Mather, 1980; Mather & Harris, 1998; Mather & Moulden, 1980), in which motion detectors in all directions around a clock converge on a common path, providing a balanced (zero) output when a static field is viewed. Adaptation to the up-left and up-right motions adapts the two corresponding motion detectors, leading again to an unbalanced output that is experienced subjectively as an MAE. 
Qian, Andersen, and Adelson (1994) introduced limited-life ‘locally paired dots’ (LPD), in which a single dot splits into two dots that move in different directions. If they move in opposite directions, no motion is seen, only directionless flicker. If they move at relative angles lying between 45° and 120°, observers report not two transparent motions but a single motion in the vector-sum direction. Thus, the mere presence of two populations of dots moving in different directions does not guarantee transparency—rather, motion integration and surface segmentation depend crucially on the local spatiotemporal relationships among the different motion vectors (Braddick, 1997). Local integration mechanisms result in loss of transparency when these different local motion directions originate from the same spatiotemporal position. The subsequent MAE is in the direction opposite to this vector sum (Curran & Braddick, 2000). This shows that the two motions were combined rather than interfering destructively. Vidnyánszky, Blaser, and Papathomas (2002) point out in their valuable review that the MAE after adaptation to bivectorial, transparent motion is not itself transparent for the same reason that LPD motion is not transparent: for each position, local mechanisms integrate the different motion signals into a common direction. 
Nishida and Sato (1995) found that first-order motion gave an MAE on a static test field, while second-order motion primarily gave an MAE on a flickering test field that comprised a counterphasing grating. This suggests that static MAEs reflect adaptation of a low-level mechanism, while the flicker MAE reveals higher-level motion processing where both first- and second-order motion signals are available (Shioiri & Matsumiya, 2006). 
Verstraten, van der Smagt, and van de Grind (1998) adapted to two superimposed random-dot patterns, one moving in one direction at low speed and the other moving in the opposite direction at high speed. They found that for exactly the same adaptation conditions (oppositely directed transparent motion at different speeds), the aftereffect direction differed by 180° depending on the test pattern. The motion aftereffect was opposite to the slowly moving pattern when the test pattern was static and opposite to the high-speed pattern for a dynamic test pattern. This suggests the presence of at least two sub-populations of motion detectors. 
Hirahara (2006) displaced a grating of equally spaced parallel lines slightly in a direction perpendicular to the lines. Low-speed motion in the direction of the displacement is perceived, but the stimulus embodies both a visible, low-speed component and an unseen, high-speed component in opposite directions (phase shifts < 180° and > 180°). Hirahara measured coherence thresholds for random-dot test motion following adaptation to the low-speed motion of the equally spaced parallel lines. The results depended on the test speeds. At low test speeds, the coherence thresholds for the direction perceived during the adaptation phase increased and the coherence thresholds for the opposite direction decreased. At high test speeds, the same adaptation produced the opposite effect. This suggests that a high-speed processing channel was adapting to the unperceived high-speed component. 
Experiment 1: Jump size
We have noted that the perceived direction of zigzag motion varies with viewing distance. In Experiment 1 we measured this phenomenon, together with the direction of the resulting motion aftereffects (MAEs). 
Methods
We varied the jump sizes (10, 20, 40, 80 dot diameters) and also the magnification of the entire screen (×1, ×2, ×4, ×8). The vertical jumps were always ten times the horizontal jumps. We set the vertical jump to a value of 0.125°, 0.25°, 0.5°, 1°, 2°, 4°, or 8° of visual angle, and the horizontal jump correspondingly ranged from 0.0125° (0.75 min arc) to 0.8°. The jump rate was 20 Hz, so each jump took 0.05 s. This meant that the equivalent velocity ranged from 2.5°/s for the shortest vertical jump of 0.125°, up to 20°/s for a vertical jump of 1°. Note that these conversions from jump sizes to equivalent speeds are for convenience only; the discrete jumps of apparent motion give a much weaker stimulus than smooth motion, and when the jump size becomes large, motion energy in the opposite direction increases, a factor that will reduce the strength of MAEs in Experiments 1–3.
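As a quick check of this conversion (a worked example only, using the figures quoted above):

jump_interval_s = 0.05                      # 20 Hz jump rate
for jump_deg in (0.125, 0.25, 0.5, 1.0):    # vertical jump sizes in degrees
    print(f"{jump_deg}° jump -> {jump_deg / jump_interval_s}°/s")
# 0.125° -> 2.5°/s, ..., 1.0° -> 20.0°/s, the range of equivalent velocities quoted above.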
Stimuli were programmed in Adobe Director MX 2004 and presented on the screen of an iMac computer that refreshed at 60 Hz. The display size was 20° wide × 15° high in visual angle, with a small fixation point in the middle of the window. The dots were black (2.7 cd/m²) on a white surround (132 cd/m²) and filled 15% of the area. Five observers viewed the display from a distance of 57 cm in a dimly lit room. Each adapting duration was 30 s. 
The observer's task was to report the perceived direction of drift. In pilot work we provided an arrow that the observer could rotate via the mouse to indicate perceived directions, but we soon found that observers much preferred to report the perceived direction verbally with reference to a clock face (‘6 o'clock,’ ‘2:30,’ and so on). This proved accurate enough for our purposes. 
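These clock-face reports convert directly into the angular convention used in the Results (0° = upward, 90° = rightward, 180° = downward). The helper below is only an illustration of that conversion, not part of the original procedure.

def clock_to_degrees(report: str) -> float:
    # '6' -> 180.0 (straight down); '2:30' -> 75.0. One clock hour = 30°.
    hours, _, minutes = report.partition(':')
    return (float(hours) % 12 + (float(minutes) / 60.0 if minutes else 0.0)) * 30.0

print(clock_to_degrees('6'), clock_to_degrees('2:30'))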
Results
Results are shown in Figures 2a and 2b. In Figure 2a, the x-axis shows the size of the long vertical jumps in min arc; the short rightward jumps (not shown) were one-tenth the size. The lower and upper lines show the perceived direction of the drifting dots and of their motion aftereffect, respectively. Usually, the perceived direction of a moving display is unaffected by viewing distance or size, so that the graph should show a flat horizontal line. However, in Figure 2a the lower line slopes down to the right. This means that as the display size increased, the mean perceived direction of motion shifted from 180° (downward, in the direction of the large jumps) to 90° (to the right, in the direction of the small jumps). So small magnifications favored the long vertical jumps, while large magnifications favored the short rightward jumps. Observers reported that as the magnification was gradually increased they saw two transparent motions of changing efficiencies, with the vertical motion decreasing and the horizontal motion increasing in strength, and their readings struck a balance between these. They never reported an oblique vector-sum motion that gradually swung around in direction. 
Figure 2
 
(a) Perceived direction of drift (lower curve) and resulting MAE (upper curve) as a function of jump size. (Mean of 5 Os: vertical bars show ±1 SE.) At small spatial scales (left-hand part of lower curve), the long vertical jumps drive the perceived direction downward (180°), but at large scales (right-hand part) the short horizontal jumps drive it to the right (90°). (b) MAE is expected to differ by 180° (top edge of graph) from the direction of the adapting motion. It does so for large jumps, but not for small, because long jumps drive perceived motion direction, but the orthogonal small jumps, one-tenth the size, drive the MAE.
Now look at the motion aftereffect (MAE) data, shown in the upper line of Figure 2a. For any normal display, the MAE is always opposite in direction to the adapting drift. Thus the two lines in the graph would be expected to be parallel and to differ in direction by 180°. This is clearly not the case. In fact, the lines diverge toward the right, being only 140° apart for small jumps (left side of graph), gradually expanding to 180° apart for large jumps (right side of graph). Thus the direction of the MAE was not always opposite to the perceived direction of the adapting drift. 
Figure 2b replots the vertical gap between the two lines plotted in Figure 2a to show the discrepancy between the expected and observed directions of the MAE. The expected direction is always 180° from the adapting direction. It will be seen that this is true for large jumps, but for small jumps the MAE differs by up to 40° from its expected direction. This means that at long viewing distances, or small spatial scales (left side of Figure 2b), although the long jumps drove the perceived direction of the adapting motion, it was the small jumps that adapted visual pathways tuned to slow motions, which yielded the MAE. 
There is nothing magic about the 10:1 ratio of long to short jumps. We obtained similar results (not shown here) when this ratio was reduced to 3:1. 
Experiment 2: Long and short jumps in opposite directions
Experiment 1 was repeated with minor modifications. Again a random-dot field made alternate long and short jumps, but instead of the jumps being orthogonal they were now in opposite directions. So, instead of long vertical jumps alternating with short horizontal jumps as they had done in Experiment 1, the random-dot field now jumped alternately to the left through (say) 1 mm, then jumped back to the right through ten times the distance, in this example 1 cm, so that it moved 9 mm to the right on every two movie frames. This cycle repeated indefinitely. We used the same ranges of jump sizes and magnifications as before. 
Results
Results are shown in Figures 3a and 3b. The x-axis shows the amplitude of the long jump in min arc. The short jump was always one-tenth of the long jump. The y-axis shows the perceived direction of the adapting motion (falling line) and of the resulting MAE (rising line). As before, the perceived direction of motion depended upon the viewing distance. From far away, or at a small spatial scale, the long jumps predominated and the dots appeared to drift rightward toward 3 o'clock. From closer up, or at a large spatial scale, the short jumps predominated and the dots appeared to drift leftward toward 9 o'clock. Thus the direction of drift appeared to reverse when the observer moved toward or away from the screen. 
Figure 3
 
(a) Random-dot field made small jumps to the left (upper horizontal line of short arrows) interspersed with large jumps to the right (lower horizontal line of long arrows). x = long-jump size, y = perceived direction of adapting drift (falling blue curve) and of MAE (rising pink curve). At small spatial scales (x < 80), drift direction was determined by the long jumps, but at large spatial scales (x > 80) by the short jumps. (b) Deviation of MAE directions from the expected 180°. These deviations were maximum when the long jumps varied between 20 and 200 min arc.
When the motion was stopped, a motion aftereffect (MAE) was seen. As before, the direction of this MAE was not always opposite to the adapting motion. Figure 3b shows a V-shaped region, centered on a long-jump size of ∼80 min arc, for which the MAE deviated from the expected opposite direction. So as the long jump increased from 20 to 200 min arc, the direction of both the adapting motion and the MAE gradually reversed. Although gradual, this changeover was still much more abrupt than the changes in direction shown in Figure 2a.
In sum, the results resembled those for Experiment 1, except that the transition of perceived direction through 180° (from leftward to rightward motion) was more perceptually abrupt than the previous transition through 90° (from vertical to horizontal). So, as before, the MAEs were not always opposite to the perceived direction of the adapting motion. This suggests that the adapting visual pathways were tuned to jumps ranging from 20 to 200 min arc. 
Experiment 3: Opposed aftereffects from fast and slow rotations of Botticelli
The zigzag motion effects that we found for translating random-dot fields can also be replicated on a rotating structured field, in this case a Botticelli painting. This display appeared to rotate clockwise and yet gave either a clockwise or a counterclockwise MAE, depending on the nature of the test field that followed the adapting motion. 
Methods
In Movie 4, Botticelli's Birth of Venus rotates alternately 5° clockwise and then back 1° counterclockwise on successive frames. (These are degrees of rotation, not of visual angle.) So on two movie frames it rotates a net 4° clockwise, and in 90 frames it completes a 360° rotation, taking 12 s. The effective frame rate of 7.5 fps was set by the computer's plotting speed. The observers fixated on the central red spot while the picture made one complete rotation, which took 12 s. The rotating picture was concealed by a gray masking screen (96 cd/m²) except for an annular aperture that had inner and outer radii of 7° and 10°, so the annulus was 3° thick. In addition, we used two different test fields. 
 
Movie 4
 
Botticelli's Birth of Venus rotates in 5° clockwise steps, alternating with 1° counterclockwise steps. Result: It appears to rotate clockwise; but on a static test field it gives a clockwise MAE, and on a blurred twinkling test field it gives a counterclockwise MAE. We attribute these MAEs to adaptation of visual pathways tuned to slow and fast movements, respectively.
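For concreteness, a small sketch of this alternating rotation schedule (a reconstruction from the description above, not the original Director code):

def rotation_angles(n_frames):
    # Cumulative orientation in degrees of rotation, CW positive:
    # +5° on odd-numbered frames, -1° on even-numbered frames, i.e. a net 4° CW per pair.
    angle, angles = 0.0, []
    for frame in range(n_frames):
        angle += 5.0 if frame % 2 == 0 else -1.0
        angles.append(angle)
    return angles

print(rotation_angles(6))   # [5.0, 4.0, 9.0, 8.0, 13.0, 12.0] -- a jerky clockwise drift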
Five naive observers viewed this annular rotating display on the screen of a Macintosh iMac computer from a distance of 57 cm and fixated a small point at the center of the annulus. After 20 s the motion was stopped and all observers reported a clockwise motion aftereffect, in the same direction as the perceived adapting motion. The picture was again rotated for 20 s and then stopped. But this time the annular test field consisted of blurred, randomly twinkling noise that was refreshed at a frame rate of 7.5 fps. The mean luminance of the noise was 96 cd/m². Now all observers reported a counterclockwise MAE. Thus one and the same adapting motion produced MAEs in opposite directions, depending solely upon the test field. The cycle of rotation–stationary picture–rotation–dynamic noise was continued indefinitely, and on every occasion the MAEs were CW on the stationary picture and CCW on the dynamic noise. 
In our interpretation, the large CW jumps (5° of rotation) determined the perceived direction of the adapting motion, while the direction of the small CCW jumps (1° of rotation) was barely noticed and simply made the CW motion look jerky. We believe that the large jumps adapted visual motion channels tuned to fast velocities, while the small CCW jumps adapted pathways tuned to slow velocities. Following these opposed adaptations, the static test field stimulated primarily the slow pathways. Since these had adapted to CCW motion (1° jumps), they gave a CW MAE. Conversely, the twinkling dynamic test field stimulated primarily the fast pathways. Since these had adapted to CW motion (5° jumps), they gave a CCW MAE. 
Verstraten et al. (1998) obtained similar results from two transparently superimposed random-dot fields that moved over each other in opposite directions. One dot field moved slowly upward at 2°/s and the other field moved rapidly downward at 32°/s. Following adaptation to this transparent motion, a static test field elicited a downward MAE, appropriate to the slow adapting motion, whereas a dynamic test field of twinkling noise elicited an upward MAE, appropriate to the fast adapting motion. If the fast and slow adapting motions were at right angles to each other, then so were the MAEs. They noted that the adapting velocity for static MAEs peaks at about 3°/s, but that dynamic MAEs can be elicited from adapting velocities that are three times as high as for static MAEs. They believe that the static MAEs reflect adaptation of neural pathways tuned to slow speeds, while dynamic MAEs reflect adaptation of neural pathways tuned to fast speeds. We believe the same was true for our MAEs. 
Our MAEs were like those of Verstraten et al. (1998), but our adapting stimuli were very different. Their drifting dots looked like two transparent motions sliding over each other. Our Botticelli stimulus in Experiment 3 looked like a single, jerkily rotating field, and our random-dot fields in Experiment 1 looked like a single motion whose perceived direction was determined by the short jumps (at large scales) or by the long jumps (at small scales). Following Verstraten et al. (1998), we believe that in Experiment 3 our MAEs on a static test field arose from adaptation to slow motion (short jumps), and our MAEs on a dynamic test field arose from adaptation to fast motion (long jumps). Others have reported MAEs in two different directions on a static and a dynamic test field, following adaptation to two transparent streams of motion (Alais et al., 2005; van der Smagt, Verstraten, & van de Grind, 1999; Verstraten, Fredericksen et al., 1994). In this experiment, however, the Botticelli stimulus offers not two but only one perceptual motion stream, namely jerky clockwise motion, yet the visual system effectively filters this into a fast and a slow motion that selectively adapt different motion pathways. 
We noticed that these two MAEs looked subjectively different (Hiris & Blake, 1992); the CW MAE on the stationary test picture looked like a slow powerful heave, while the CCW MAE on the noise appeared to race around at great speed. We conjecture that the static MAE looks slow because it reveals adaptation of ‘slow’ detectors, whereas the dynamic MAE looks fast because it reveals adaptation of ‘fast’ detectors. When Alais et al. (2005) adapted to transparent, orthogonal motions, they varied the temporal frequency of their dynamic test field by filtering it with five different octave-band filters. This filtering could alter the direction of the MAE by up to 90°. The temporal frequency of our dynamic test field was fixed at a frame rate of 7.5 fps, by replacing the random-dot noise with fresh random dots on every frame. But we did make one small but useful technical improvement. Following Shioiri and Matsumiya (2006), we blurred the twinkling dynamic noise, on the grounds that at high temporal frequencies, which are presumably involved in responding to high speeds, the visual system is very sensitive to low spatial frequencies (Kelly, 1979). Pilot work suggested that this blurring of the test field greatly increased the visibility of the CCW MAE. 
Experiment 4: Separating out the MAEs from long and short jumps
In Experiment 2 a field of random dots made long jumps to the right, alternating with short jumps to the left. In some conditions, although the field was perceived as moving bodily to the right, determined by the long jumps, the motion aftereffect was also seen to the right, driven by the short jumps. We speculated that small jumps are more effective at adapting neural motion detectors (given a static test field). To test this hypothesis, we compared the ability of long and short jumps to generate an MAE when the jumps were separated out and applied to two different dot fields. It is difficult to measure the strength or duration of MAEs reliably (Mather et al., 1998), so we used a method of paired comparisons, in which Os simply decided on each trial which of the two MAEs was stronger. Thus, instead of a single random-dot field that made alternate long jumps to the right and short jumps to the left, we split the display into two panels of random dots, one panel above the other, and moved the upper panel to the right in a continuous series of long jumps, while the lower panel moved to the left in a continuous series of short jumps. Movie 5 illustrates this process. Two fields of sparse random dots were viewed, in the form of wide horizontal strips one above the other. We systematically varied the jump size applied to the upper and lower fields, over a 32-fold range. These jumps were randomly selected from the following range: 1.5, 3, 6, 12, 24, or 48 min arc. The effective frame rate was 22 fps, so the corresponding velocities of the dot fields were 0.57, 1.13, 2.25, 4.5, 9, or 18°/s. 
 
Movie 5
 
Demonstration that slow movement (upper panel) can give stronger MAEs than fast movement (lower panel). (Due to size changes during reproduction, etc., this is not an exact simulation of a particular condition in Experiment 4.)
Using a paired-comparison method, two different jump sizes were randomly selected for each trial and were assigned one to each field. Every possible combination of jump sizes was presented five times and results were averaged, generating a 6 × 6 matrix of stimulus conditions. On each trial the observer fixated a point at the junction of the two fields for 20 s. The motion was then stopped and the observer judged which field gave the stronger MAE. For instance, if on a given trial a 3 min arc jump gave a stronger MAE than a 24 min arc jump, this scored one for 3 min arc and zero for 24 min arc. 
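The scoring can be summarized in a few lines (with made-up trial outcomes for illustration, not the real data): each jump size is credited with the percentage of its comparisons in which its field gave the stronger MAE.

from collections import defaultdict

# Each record: (jump size in field A, jump size in field B, size judged to give the stronger MAE), in min arc.
trials = [(3, 24, 3), (1.5, 48, 1.5), (12, 6, 6), (3, 48, 3)]

wins, comparisons = defaultdict(int), defaultdict(int)
for a, b, winner in trials:
    comparisons[a] += 1
    comparisons[b] += 1
    wins[winner] += 1

for size in sorted(comparisons):
    print(f"{size} min arc won {100.0 * wins[size] / comparisons[size]:.0f}% of its comparisons")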
Results
Results are graphed in Figure 4. Each datum point shows the percentage of trials (y) on which each jump size (x) beat the other jump sizes with which it was compared. Results show a clear monotonic decline in MAE strength when plotted against log velocity, with slower stimulus speeds reliably giving greater MAEs than faster speeds. A single line was fitted to the points from the mean of six observers, with an R² of 0.988. This is consistent with our idea that the MAEs on static test fields that we found in Experiments 1–4 arise from adapting ‘slow’ visual pathways. It is also consistent with the results of van de Grind, Verstraten, and Zwamborn (1994), who found that speeds greater than 10–20°/s generated little or no MAE. 
Figure 4
 
MAEs were greatest for the slowest adapting speed of 0.57°/s and smallest for the highest adapting speed of 18°/s. (Means of 6 observers × 5 trials).
Experiment 5: Perceived coherence varies with spatial scale
We have shown that during alternations between long and short jumps, the perceived direction depends upon the spatial scale, with large magnifications favoring the short jumps and small magnifications favoring the long jumps. This is true when the long and short jumps are at either 90° or 180° to each other. In our final experiment, the long jumps are always vertical but the short jumps are in random directions (or vice versa), and we examine how the spatial scale affects the degree of perceived motion coherence. The logic is similar to Experiment 1, where long vertical jumps were pitted against short horizontal jumps, and one or the other prevailed at different magnifications. Here we pitted long vertical jumps against short jumps in random directions, or vice versa. At the magnifications for which the size of the random jumps prevailed, the motion appeared to be incoherent. 
We are generally good at seeing motion under low signal/noise conditions, a skill that could be useful in discerning predators approaching in the jungle. Motion coherence is often studied with a field of limited-lifetime random dots. Some percentage of the dots all move coherently in the same direction. The remaining dots move in random directions. The threshold for just-detectable coherence can be as low as 6% (Newsome & Paré, 1988). In our experiment, however, we used a rigid field of random dots that moved along a partly random path. On every odd-numbered movie frame the dots moved downward through a distance S (for Signal). On the even frames they moved through a distance N (for Noise) in a random direction. So their mean motion was downward, plus some random jitter. There were two stimulus conditions. In the S > N condition the vertical jump size S was ten times the random jump size N. In the N > S condition the reverse was true, and the random jump size N was ten times the vertical jump size S. Sample paths are shown in Figure 5. Since the mean downward velocity in the S > N condition was ten times that in N > S, and the signal/noise ratio was 100 times as great, one might reasonably expect that the motion would always look more coherent for S > N than for N > S. But such was not the case. 
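The two trajectory types can be sketched as follows (a reconstruction of the logic, not the original stimulus code); downward is the positive y direction, and swapping the two jump lengths turns the S > N condition into the N > S condition.

import numpy as np

rng = np.random.default_rng(0)

def signal_noise_path(n_pairs=10, signal=1.0, noise=0.1):
    # Alternate a downward Signal jump of length `signal` with a Noise jump of
    # length `noise` in a random direction (signal=1, noise=0.1 gives S > N;
    # signal=0.1, noise=1 gives N > S).
    pos = [np.zeros(2)]
    for _ in range(n_pairs):
        pos.append(pos[-1] + [0.0, signal])                               # Signal: straight down
        theta = rng.uniform(0.0, 2.0 * np.pi)                             # Noise: random direction
        pos.append(pos[-1] + noise * np.array([np.cos(theta), np.sin(theta)]))
    return np.array(pos)

path_S_gt_N = signal_noise_path(signal=1.0, noise=0.1)
path_N_gt_S = signal_noise_path(signal=0.1, noise=1.0)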
Figure 5
 
Sample trajectories of the random-dot field as it jumped alternately downward and in random directions. The directions of the random motions are the same in both sequences, but in (a) the S > N condition, the vertical Signal jumps were ten times as long as the random Noise jumps. In (b) the N > S condition, the random jumps were ten times as long as the vertical jumps. Surprisingly, the N > S condition looked more perceptually coherent on half the trials, namely at high magnifications when the correspondence problem failed for the long random jumps.
The zoom or magnification of the stimulus was set over a four-octave range so that the long jumps were 0.25°, 0.5°, 1°, 2°, or 4° of visual angle in length. The short jumps were always one-tenth the length of the long jumps. The displays were presented in a window of fixed size (16° H × 21° W of visual angle). Thus there were ten conditions (5 magnifications × 2 stimuli, namely S > N and N > S) and each condition was presented twice in random order. There were five observers, four of them naive, and the observer's task was to assign a verbal coherence rating from 1 to 10 for each presentation, from 1 = ‘random noise’ to 10 = ‘fully coherent motion.’ Results are shown in Figure 6.
Figure 6
 
Coherence Ratings as a function of display size (mean of 5 Os. Vertical bars show SEs). X = size of large jumps. Small jumps (not shown) were 1/10th as large. The S > N condition (green squares) looked coherent for small displays but looked progressively less coherent as the display size increased. Conversely, the N > S condition (red circles) looked incoherent for small displays but paradoxically looked more coherent than the S > N condition at large display sizes. See text.
Instead of the S > N condition giving the highest ratings, as one might expect, it was judged more coherent than N > S only at small magnifications, in the left-hand half of Figure 6. As the magnification increased, the coherence ratings for S > N fell steadily and were overtaken at the three highest magnifications, for which N > S looked far more coherent than S > N. The reason is that at small magnifications the small random jumps are too tiny to be significant, so one sees predominantly the vertical motions in S > N; whereas at high magnifications the large vertical jumps in S > N exceed Dmax and get ‘lost in the noise,’ so the correspondence process fails and no motion is computed for them. In the N > S condition, by contrast, it is the long random jumps that exceed Dmax and drop out at high magnifications, leaving the short vertical jumps to drive a coherent downward percept; hence N > S gives more perceptual coherence. Dmax is the maximum distance across which apparent motion can be seen in a random-dot array (Baker & Braddick, 1985a, 1985b; Cleary & Braddick, 1990; Nakayama & Silverman, 1984). 
The observers' subjective reports are of some interest. At low magnifications observers reported that in the S > N condition the dots seemed to be streaming down together along a common, slightly jittery vertical path, plus some twinkling noise (that was not physically there); while in the N > S condition the dots all seemed to be jittering around together along a common random path, with some added twinkling noise (which again was not really there). Apart from the illusory noise, these results are not surprising. But at high magnifications the results were truly paradoxical, and observers' reports were reversed. Now it was in the N > S stimulus that all the dots appeared to stream down together, plus some illusory noise; and it was in the S > N condition that the dots all seemed to jitter around along a common random path, plus some illusory noise. 
These results show that observers were responding neither to the signal/noise ratio, nor to the mean velocity, nor to the motion power or energy. These quantities were always far greater in the S > N than in the N > S condition, yet in this experiment they were almost irrelevant. Instead, the visual system, again acting like Goldilocks, ignored motion jumps that were ‘too big’ or ‘too small’ and based the coherence judgments upon jumps that were ‘just right.’ This invariant spatial property lies not in the stimulus, whose size varied over a 16-fold range, but in the visual system. Our results give an index of the search space that the visual system examines to solve the correspondence problem, that is, the decision of which dot in frame #2 should be matched up with a given dot in frame #1. A simple possible strategy would be to match up nearest neighbors. But this is not what the visual system generally does (Ullman, 1979). The dimensions of the visual search space constrain Dmax, which has been measured experimentally a number of times. 
Discussion
What can zigzag motion tell us about visual motion processing? Zigzag motion requires the visual system to combine two motions that alternate over time, namely long jumps in one direction interspersed with short jumps in a different direction. The resulting percept of global motion is aligned with one, but only one, of the local motions—a series of vertical and horizontal jumps leads to a percept of either horizontal motion or vertical motion, but not both, except over a limited changeover range of magnifications when both directions of motion are seen transparently. Oblique motion in a vector-sum direction was virtually never reported. We shall now consider different ways of integrating moving random dots over space or time. 
Vector summation versus transparency
There are a number of studies of ‘double motion,’ in which random dots move alternately in two different directions, or at two different speeds. In general, when the dots all change their speed or direction in synchrony, observers integrate the motion signals into a single perceived direction. When the dots change at different times—asynchronously—observers segregate the motion signals into the percept of two transparently moving surfaces. 
Our random dots moved as a single rigid sheet, alternately to the right and downward. Thus they all changed direction in synchrony. Observers perceived the whole sheet as moving in one of these two directions, depending on viewing distance, and almost never saw the stimulus break up into two transparent motions in orthogonal directions. 
On the other hand, a number of studies have found that random dots that change their speeds or directions asynchronously are perceptually segregated into two surfaces moving over each other transparently. For instance, Kanai, Paffen, Gerbino, and Verstraten (2004) used random dots that oscillated back and forth horizontally along a sawtooth waveform. Half the dots (chosen at random) moved to the left as the other half moved to the right, so the two sets moved synchronously in counterphase. Observers reported bouncing motion back and forth. Kanai et al. now randomized the phases of the oscillating dots, so that they reversed their directions at random times. Result: Observers were now blind to the oscillations and reported transparent motion of two sheets, one moving to the left, the other to the right. Further experiments showed that this blindness was due to the incompatibility of the oscillation representation with the global percept of streaming motion. 
Bravo and Watamaniuk (1995) used a motion display in which each dot moved with two speeds (slow and fast) alternately but in the same direction. In this case, synchronous speed changes gave the percept of a single sheet of dots lurching across the screen, while asynchronous speed changes resulted in the percept of two superimposed sheets of dots moving at different speeds. Similarly, Watamaniuk, Flinn, and Stohr (2003) showed that dots moving in two directions in asynchronous alternation also result in the percept of transparent motion. These examples show a dissociation between the behavior of the individual dots and the global percept. Common to all these stimulus types is that asynchronous alternations in speed or direction are the key to obtaining a clear percept of transparent motion without being disturbed by the changes in the local dots. 
In sum, when all the dots change their speed or direction at the same instant (synchronously) the resulting percept is a vector average of the component speeds or directions, while if the dots change independently at different times (asynchronously) the resulting percept is of two surfaces sliding transparently over each other. Our zigzag display is an exception. Our dots always changed synchronously, but this never gave vector averaging (oblique perceived motion). Instead, the outcome was in one direction or another, decided by the display size, except for a changeover range of magnifications where the longer jump was 10 to 100 min arc (see Figure 2a). Over this range two noisy, transparent motions of changing efficiencies were seen. 
Vector summation and motion integration
A random-dot kinematogram (RDK) comprising dots, each of which takes a random walk in direction over time, can appear to flow in a single direction (Williams & Sekuler, 1984). This has been interpreted as evidence for the existence of a cooperative network linking neurons sensitive to different directions and different spatial locations. Similarly, dots flowing in the same direction but at different speeds can be integrated into a mean perceived velocity (Watamaniuk & Duchon, 1992). Festa and Welch (1997), Snowden and Braddick (1989), and Watamaniuk and Sekuler (1992) showed that performance increased as the number of movie frames was increased (temporal recruitment). Watamaniuk and Sekuler also found that performance improved as the width of the distribution of directions in the kinematogram was reduced and as the area of the motion increased, up to a diameter of 9°, which is approximately the receptive field diameter of a typical MT neuron. 
Barton, Rizzo, Nawrot, and Simpson (1996) found that 3.25 diopters of blur (low-pass spatial filtering) reduced direction discrimination in RDKs for displacements below 16′ but improved discrimination for displacements greater than 21′. Since optical blur attenuates high spatial frequencies, this suggests that high spatial frequencies are important for motion perception when dot displacements are less than 16′ to 21′ but interfere with motion perception at larger dot displacements. 
Smith, Snowden, and Milne (1994) investigated the possibility that global motion perception in such patterns might simply reflect motion energy detection at a coarse spatial scale (such that many dots fall in the receptive field of one energy detector) without the need to encode local dot motions on a fine spatial scale and then integrate their motions over space. They created random-walk RDKs and then spatially high-pass filtered them to remove low spatial frequencies. Perception of global motion was unimpaired for both direction and speed random walks, showing that the phenomenon is not reliant on low spatial frequencies and must, therefore, involve integration of local motion signals across space, as originally postulated. 
All these studies emphasize the importance of spatial and temporal averaging in the perception of random-dot motion, and these averaging processes, together with the hysteresis in motion thresholds discovered by Williams and Sekuler (1984), point to the existence of cooperative connections between direction-selective neurons. 
Vector summation or winner take all?
Zohary, Scase, and Braddick (1996) examined the two principal models of motion integration: the vector summation model, which suggests that the responses of neurons encoding all directions of motion are weighted and pooled to obtain an accurate estimate of the mean direction of motion; and the winner-take-all model, which is based on a competition between different direction-specific channels, so that decisions are cast in favor of the channel generating the strongest directional signal. They concluded that the perceptual judgment of direction of motion is not based on any rigid algorithm generating a single-valued output. Rather, human observers are able to judge different aspects of the distribution of activity in a cortical area depending on the task requirements. 
Vidnyánszky et al. (2002) discuss the special case of locally paired dots (LPD; Qian et al., 1994). Usually, when two sets of random dots move in opposite or orthogonal directions, one sees two transparent motions. However, if the positions of the dots are carefully matched, with each dot pair at virtually the same location, transparency is lost. The bivectorial motion is seen no more; instead, if the dots move orthogonally one sees motion in the vector-sum direction, and if they move in opposite directions one sees only directionless flicker. Vidnyánszky et al. argue that the MAE resembles LPD in that it stimulates different motion detectors that are essentially in the same place, so MAEs are not transparent but follow a vector-sum direction. In our experiments, the MAE did follow a vector sum, but this vector sum, unlike the perceived direction of the adapting motion, was heavily weighted toward the shorter of the two jumps. This meant that the MAE was not always at 180° to the adapting motion. 
The display that most resembles ours was the ‘split motion’ of Anstis and Ramachandran (1982). Each dot presented at time t1 split into two dots at time t2. One of these dots jumped down through a large distance dy, and the other dot jumped to the right through a smaller distance dx. (Thus a complete random-dot field split into two random-dot fields, and at time t2 there were twice as many dots on the screen as at time t1.) The dots jumped back and forth between these two states, forming an endlessly repeating two-frame movie of two transparent, orthogonal back-and-forth movements of different amplitudes. Observers were asked to report the subjective axis along which the dots appeared to jump back and forth. As with zigzag motion, the perceived direction varied systematically with the viewing distance, favoring the long jumps at longer viewing distances and the short jumps at shorter viewing distances. Hildreth (1984) explained this split-motion phenomenon with her model of smoothness constraints in motion. 
Finally, it is clear which of the two principal models fits zigzag motion better. Our results suggest that zigzag motion is based not on averaging, vector summation, or cooperativity, but on the actions of independent pathways, perhaps aided by inhibitory interactions. We found that when the jumps in one direction were 3 to 10 times bigger than jumps in the competing direction, then at low magnifications the long jumps completely dominated the perceived motion and the short jumps completely lost their influence. Conversely, at high magnifications the short jumps completely dominated. Over the intermediate range studied in Experiment 1, the competing long and short jumps were both transparently visible, with dominance gradually switching over as the magnification changed. Careful inspection shows that as one reduces the retinal size of the display (by backing away from the screen) the percept changes gradually from individual dots streaming to the right, to clumps or galaxies of dots streaming downward. One can see both percepts at once, like two transparent motions at right angles, but one never sees anything moving obliquely down to the right. Thus there is no perceptual vector averaging. A summed collection of reported sudden switches between horizontal and vertical motions can give a spurious impression of a gradual swing around of the perceived motion, when the data are actually the sum of a set of jittered steps. 
The changes in perceived direction that accompany simple changes in magnification cannot be explained by anything in the stimulus but must reflect properties of the visual system—probably, a changeover from ‘slow’ to ‘fast’ visual motion channels. The fact that this changeover is complete within less than a twofold change in viewing distance, that is, less than a one-octave change in spatial frequency content, suggests that the fast and slow channels have steep roll-off functions less than one octave wide where they overlap. Alternatively, some mutual inhibition may be sharpening up the changeover, leading to a winner-take-all outcome further away from the changeover. 
Acknowledgments
This work was supported by a grant from the UCSD Senate. The author would like to thank V. S. Ramachandran, his co-author on split motion (1982), which was the ancestor of zigzag motion; Erica Hughes, Da Jung, Michael Nuñez, Sarah Shubert, and especially Wesley Sauret, for their assistance in collecting and analyzing the data; and two anonymous referees. 
Commercial relationships: none. 
Corresponding author: Stuart Anstis, Ph.D. 
Email: sanstis@ucsd.edu. 
Address: Department of Psychology, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0109, USA. 
References
Alais, D. Verstraten, F. A. Burr, D. C. (2005). The motion aftereffect of transparent motion: Two temporal channels account for perceived direction. Vision Research, 45, 403–412. [PubMed] [CrossRef] [PubMed]
Anstis, S. M. Ramachandran, V. S. (1982). Anomalous movement of random-dot fields. ARVO Conference, Sarasota, FL.
Anstis, S. M. Verstraten, F. A. J. Mather, G. (1998). The motion aftereffect: A review. Trends in Cognitive Sciences, 2, 111–117. [CrossRef] [PubMed]
Baker, Jr., C. L. Braddick, O. J. (1985a). Eccentricity-dependent scaling of the limits for short-range apparent motion perception. Vision Research, 25, 803–812. [PubMed] [CrossRef]
Baker, Jr., C. L. Braddick, O. J. (1985b). Temporal properties of the short-range process in apparent motion. Perception, 14, 181–192. [PubMed] [CrossRef]
Barton, J. J. Rizzo, M. Nawrot, M. Simpson, T. (1996). Optical blur and the perception of global coherent motion in random dot cinematograms. Vision Research, 36, 3051–3059. [PubMed] [CrossRef] [PubMed]
Boulton, J. C. Hess, R. F. (1990). The optimal displacement for the detection of motion. Vision Research, 30, 1101–1106. [PubMed] [CrossRef] [PubMed]
Braddick, O. (1997). Local and global representations of velocity: Transparency, opponency, and global direction perception. Perception, 26, 995–1010. [PubMed] [CrossRef] [PubMed]
Bravo, M. J. Watamaniuk, S. N. (1995). Evidence for two speed signals: A coarse local signal for segregation and a precise global signal for discrimination. Vision Research, 35, 1691–1697. [PubMed] [CrossRef] [PubMed]
Cleary, R. Braddick, O. J. (1990). Masking of low frequency information in short-range apparent motion. Vision Research, 30, 317–327. [PubMed] [CrossRef] [PubMed]
Culham, J. (2003). Attention-grabbing motion in the human brain. Neuron, 40, 451–452. [PubMed] [Article] [CrossRef] [PubMed]
Curran, W. Braddick, O. J. (2000). Speed and direction of locally-paired dot patterns. Vision Research, 40, 2115–2124. [PubMed] [CrossRef] [PubMed]
Festa, E. K. Welch, L. (1997). Recruitment mechanisms in speed and fine-direction discrimination tasks. Vision Research, 37, 3129–3143. [PubMed] [CrossRef] [PubMed]
Hildreth, E. C. (1984). The measurement of visual motion (ACM Distinguished Dissertation). Cambridge, MA: MIT Press.
Hildreth, E. C., & Koch, C. (1987). The analysis of visual motion: From computational theory to neuronal mechanisms. Annual Review of Neuroscience, 10, 477–533.
Hirahara, M. (2006). Reduction in the motion coherence threshold for the same direction as that perceived during adaptation. Vision Research, 46, 4623–4633.
Hiris, E., & Blake, R. (1992). Another perspective on the visual motion aftereffect. Proceedings of the National Academy of Sciences of the United States of America, 89, 9025–9028.
Kanai, R., Paffen, C. L., Gerbino, W., & Verstraten, F. A. (2004). Blindness to inconsistent local signals in motion transparency from oscillating dots. Vision Research, 44, 2207–2212.
Kelly, D. H. (1979). Motion and vision. II. Stabilized spatio-temporal threshold surface. Journal of the Optical Society of America, 69, 1340–1349.
Mather, G. (1980). The movement aftereffect and a distribution-shift model for coding the direction of visual movement. Perception, 9, 379–392.
Mather, G., & Harris, J. (1998). Theoretical models of the motion aftereffect. In G. Mather, F. Verstraten, & S. Anstis (Eds.), The motion aftereffect: A modern perspective (pp. 157–188). Cambridge, MA: MIT Press.
Mather, G., & Moulden, B. (1980). A simultaneous shift in apparent direction: Further evidence for a “distribution-shift” model of direction coding. Quarterly Journal of Experimental Psychology, 32, 325–333.
Mather, G., Verstraten, F., & Anstis, S. (Eds.) (1998). The motion aftereffect: A modern perspective. Cambridge, MA: MIT Press.
Nakayama, K., & Silverman, G. H. (1984). Temporal and spatial characteristics of the upper displacement limit for motion in random dots. Vision Research, 24, 293–299.
Newsome, W. T., & Paré, E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211.
Nishida, S., & Sato, T. (1995). Motion aftereffect with flickering test patterns reveals higher stages of motion processing. Vision Research, 35, 477–490.
Qian, N., Andersen, R. A., & Adelson, E. H. (1994). Transparent motion perception as detection of unbalanced motion signals. I. Psychophysics. Journal of Neuroscience, 14, 7357–7366.
Riggs, L. A., & Day, R. H. (1980). Visual aftereffects derived from inspection of orthogonally moving patterns. Science, 208, 416–418.
Shioiri, S., & Matsumiya, K. (2006). High spatial frequency of motion aftereffect [Abstract]. Journal of Vision, 6(6):1086.
Smith, A. T., Snowden, R. J., & Milne, A. B. (1994). Is global motion really based on spatial integration of local motion signals? Vision Research, 34, 2425–2430.
Snowden, R. J., & Braddick, O. J. (1989). The combination of motion signals over time. Vision Research, 29, 1621–1630.
Sutherland, N. S. (1961). Figural aftereffects and apparent size. Quarterly Journal of Experimental Psychology, 13, 222–228.
Ullman, S. (1979). The interpretation of visual motion. Cambridge, MA: MIT Press.
van de Grind, W. A., Verstraten, F. A., & Zwamborn, K. M. (1994). Ensemble models of the movement aftereffect and the influence of eccentricity. Perception, 23, 1171–1179.
van der Smagt, M. J., Verstraten, F. A., & van de Grind, W. A. (1999). A new transparent motion aftereffect. Nature Neuroscience, 2, 595–596.
Verstraten, F. A., Fredericksen, R. E., Grüsser, O. J., & van de Grind, W. A. (1994). Recovery from motion adaptation is delayed by successively presented orthogonal motion. Vision Research, 34, 1149–1155.
Verstraten, F. A., Fredericksen, R. E., & van de Grind, W. A. (1994). Movement aftereffect of bi-vectorial transparent motion. Vision Research, 34, 349–358.
Verstraten, F. A., van der Smagt, M. J., & van de Grind, W. A. (1998). Aftereffect of high-speed motion. Perception, 27, 1055–1066.
Vidnyánszky, Z., Blaser, E., & Papathomas, T. V. (2002). Motion integration during motion aftereffects. Trends in Cognitive Sciences, 6, 157–161.
Watamaniuk, S. N., & Duchon, A. (1992). The human visual system averages speed information. Vision Research, 32, 931–941.
Watamaniuk, S. N., Flinn, J., & Stohr, R. E. (2003). Segregation from direction differences in dynamic random-dot stimuli. Vision Research, 43, 171–180.
Watamaniuk, S. N., & Sekuler, R. (1992). Temporal and spatial integration in dynamic random-dot stimuli. Vision Research, 32, 2341–2347.
Williams, D. W., & Sekuler, R. (1984). Coherent global motion percepts from stochastic local motions. Vision Research, 24, 55–62.
Zohary, E., Scase, M. O., & Braddick, O. J. (1996). Integration across directions in dynamic random dot displays: Vector summation or winner take all? Vision Research, 36, 2321–2331.
Figure 1
(a) Trajectory of a rigid random-dot field that makes alternate long jumps downward and short jumps to the right. Each arrow is one movie frame. (b) If the long jumps exceed Dmax, then the dots appear to drift to the right. (c) Identical display at a smaller spatial scale appears (d) to be drifting downward.
Figure 2
(a) Perceived direction of drift (lower curve) and resulting MAE (upper curve) as a function of jump size. (Mean of 5 Os; vertical bars show ±1 SE.) At small spatial scales (left-hand part of lower curve), the long vertical jumps drive the perceived direction downward (180°), but at large scales (right-hand part) the short horizontal jumps drive it to the right (90°). (b) MAE is expected to differ by 180° (top edge of graph) from the direction of the adapting motion. It does so for large jumps, but not for small, because long jumps drive perceived motion direction, but the orthogonal small jumps, one-tenth the size, drive the MAE.
Figure 3
(a) Random-dot field made small jumps to the left (upper horizontal line of short arrows) interspersed with large jumps to the right (lower horizontal line of long arrows). x = long-jump size, y = perceived direction of adapting drift (falling blue curve) and of MAE (rising pink curve). At small spatial scales (x < 80), drift direction was determined by the long jumps, but at large spatial scales (x > 80) by the short jumps. (b) Deviation of MAE directions from the expected 180°. These deviations were maximum when the long jumps varied between 20 and 200 min arc.
Figure 4
MAEs were greatest for the slowest adapting speed of 0.57°/s and smallest for the highest adapting speed of 18°/s. (Means of 6 observers × 5 trials.)
Figure 5
Sample trajectories of the random-dot field as it jumped alternately downward and in random directions. The directions of the random motions are the same in both sequences, but in (a) the S > N condition, the vertical Signal jumps were ten times as long as the random Noise jumps. In (b) the N > S condition, the random jumps were ten times as long as the vertical jumps. Surprisingly, the N > S condition looked more perceptually coherent on half the trials, namely at high magnifications when the correspondence problem failed for the long random jumps.
Figure 6
Coherence ratings as a function of display size (mean of 5 Os; vertical bars show SEs). x = size of large jumps. Small jumps (not shown) were 1/10th as large. The S > N condition (green squares) looked coherent for small displays but looked progressively less coherent as the display size increased. Conversely, the N > S condition (red circles) looked incoherent for small displays but paradoxically looked more coherent than the S > N condition at large display sizes. See text.