Studies of shape perception have typically focused on static shapes, and studies of motion perception have mainly investigated speed and direction; neither line of research has addressed how well observers can judge the shape of moving objects. We investigated this question by measuring the discrimination of geometric angles under various dynamic conditions (translation, rotation, and expansion). Angles formed part of imaginary triangles defined by three vertex dots. Compared with performance for static angles, we find no significant decline in the precision of angle judgments for any of the three motion types, up to speeds high enough to impair target visibility. Additional experiments provide evidence against a uniform mechanism, relying on “snapshots”, underlying both static and dynamic performance; rather, we find support for distinct mechanisms. Firstly, adding noise dots to the display affects rotating and expanding angles substantially more than translating or static ones. Secondly, the ability to judge angles is unaffected when vertex dots are occluded for short periods. Given that the dot trajectories depend on the overall triangle motion, precisely extrapolating the future position of a dot requires distinct computations for translating, expanding, and rotating shapes.

*and* object motion is important when living in, and navigating through, a dynamic environment, and it is not clear whether data from studies of static shapes can be used to predict performance when shapes are in motion.

(*r*) describes the normalised luminance value at each pixel in a polar coordinate system, *r* is the radius in degrees of visual angle with respect to the center of the location of the dot, *σ* (in degrees of visual angle) determines its peak spatial frequency, and *c* denotes contrast. In all experiments, the mean distance between the center of the apex dot and each peripheral dot (“*l*_{1}” and “*l*_{2}”) subtended 1.75 degrees. The distance between dots was fixed for each stimulus presentation but was varied randomly and independently (by up to ±30% of the mean distance) between presentations. The lengths of *l*_{1} and *l*_{2} were defined by the following equations:

*l*_{1} = *l*_{mean} · (0.7 + 0.6 · rand)

*l*_{2} = *l*_{mean} · (0.7 + 0.6 · rand)

where *l*_{mean} is the average distance and “rand” is a random number from a uniform distribution [0–1], drawn independently for each length. This randomization prevents observers from using the length of the invisible side of the triangle opposite the apex angle (i.e., the distance between the two peripheral dots) as a cue to angular magnitude (Regan, Gray, & Hamstra, 1996).
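The side-length randomization can be sketched as follows (a minimal Python sketch; the function and variable names are ours, not from the original experiment code):

```python
import random

def jittered_length(l_mean=1.75, jitter=0.3):
    """Return a side length drawn uniformly from l_mean * [1 - jitter, 1 + jitter].

    For jitter = 0.3 this is equivalent to l_mean * (0.7 + 0.6 * rand),
    with rand uniform in [0, 1].
    """
    return l_mean * (1.0 - jitter + 2.0 * jitter * random.random())

# l1 and l2 are drawn independently, so the distance between the two
# peripheral dots is not a reliable cue to the apex angle.
l1 = jittered_length()
l2 = jittered_length()
```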

cd·m^{−2}. Subjects viewed the stimuli binocularly under dim room illumination, and a chin-and-forehead rest was used to maintain a constant viewing distance of 120 cm. At this distance, each pixel subtended 0.0177 deg. To avoid reference cues, the monitor frame was covered with a white cardboard mask with a circular aperture subtending 9 deg in diameter. Movies were calculated in MATLAB prior to the experiments, and the patterns were displayed using custom-written Pascal code within the CodeWarrior environment.
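For reference, the quoted pixel subtense follows from the viewing geometry. The sketch below (our illustration; the 120 cm distance and 0.0177 deg per pixel are from the text, the millimetre pixel pitch is derived) uses the standard visual-angle relation:

```python
import math

def visual_angle_to_size(angle_deg, distance_mm):
    """Physical extent (mm) that subtends a given visual angle at a given viewing distance."""
    return 2.0 * distance_mm * math.tan(math.radians(angle_deg) / 2.0)

# 0.0177 deg per pixel at 1200 mm corresponds to a pixel pitch of ~0.37 mm.
pixel_mm = visual_angle_to_size(0.0177, 1200.0)
```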

180 deg·s^{−1} for rotation (i.e., the triangle underwent half of a full rotation per second) and 3.7 deg·s^{−1} for translation and expansion; reference angles were 60°, and the peak spatial frequency of the dots was 8 c·deg^{−1}.

900 deg·s^{−1} (2.5 rotations per second). Thresholds increase with rotational speed, but performance deteriorates noticeably only for very fast rotations of more than 480 deg·s^{−1} (Figure 4A); speeds of 600 deg·s^{−1} or more are required for performance to decrease markedly. At the highest speed (900 deg·s^{−1}), observers' performance was below 75% correct even for the largest angle increments tested (±40°), making it impossible to quantify thresholds. This pattern of results also holds for more acute (30°) and more obtuse (120°) reference angles and is therefore independent of the reference angle (Figure 4B).

*per se*. If this were the case, decreasing the stimulus spatial frequency might improve performance at high speeds. To investigate this possibility, angle discrimination was next measured for a reduced dot spatial frequency of 2 c·deg^{−1} at rotational speeds of 0, 180, 600, 720, and 900 deg·s^{−1}.

Thresholds for both dot spatial frequencies (2 and 8 c·deg^{−1}) are shown in Figure 5 as a function of rotational speed. For static stimuli and for speeds up to at least 180 deg·s^{−1}, performance is independent of both dot spatial frequency and speed. Importantly, for higher rotational speeds (≥600 deg·s^{−1}), performance is better for the lower spatial frequency. Defining the speed of a grating as its temporal frequency divided by its spatial frequency, we can calculate equivalent temporal frequencies for the D4 stimuli in our experiments. This calculation yields temporal frequencies well outside the window of visibility for the higher frequency dots, and values close to the limit for the lower frequency dots. This suggests that the reduction in discrimination performance at high rotational speeds is, at least in part, due to reduced dot visibility. We are therefore left with the striking result that the precision of angle judgments remains high even when the stimulus dots move at speeds that put them close to their spatio-temporal limit of visibility.
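The equivalent-temporal-frequency calculation can be sketched as follows. As an illustrative assumption (not a value from the text), we take a vertex dot at eccentricity 1.75 deg from the rotation center, matching the mean apex-to-peripheral distance; its linear speed is ω·r with ω in radians per second, and the equivalent temporal frequency is that speed times the dot's peak spatial frequency:

```python
import math

def equivalent_temporal_frequency(omega_deg_s, radius_deg, sf_cpd):
    """Equivalent temporal frequency (Hz) for a dot rotating at omega_deg_s (deg/s)
    at eccentricity radius_deg, with peak spatial frequency sf_cpd (c/deg)."""
    linear_speed = math.radians(omega_deg_s) * radius_deg  # deg of visual angle per second
    return linear_speed * sf_cpd

# At 600 deg/s rotation and eccentricity 1.75 deg:
tf_high = equivalent_temporal_frequency(600, 1.75, 8)  # ~147 Hz
tf_low = equivalent_temporal_frequency(600, 1.75, 2)   # ~37 Hz
```

Under this assumed geometry, the 8 c·deg^{−1} dots reach temporal frequencies far above typical flicker-visibility limits, while the 2 c·deg^{−1} dots stay near them, consistent with the pattern of results described above.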

*n* frames, such a mechanism will not be able to distinguish the stimulus dots from the animated background dots if the background dots have a lifetime of *n* frames or more.

*position* of the three dots and is indifferent to their motion (Figure 8A). A physiologically plausible way of achieving this would be to read out the activation of cells within a retinotopic map, for example the activation of simple cells in V1 (Hubel & Wiesel, 1962). Such cells respond to the triangle dots whether the dots are static or moving, so the same performance should result for static stimuli and for any kind of dynamic condition. This mechanism would, however, predict that dynamic angles could not be discriminated when static background dots are added, contrary to the results shown in Figure 6C. Moreover, dynamic angle discrimination remains good even when the background dots generate transient signals (by being replaced every frame). This implies that a single mechanism based on directionally non-specific channels is insufficient to explain dynamic angle discrimination.
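Such a “snapshot” readout amounts to computing the apex angle from the three dot positions in a single frame, irrespective of motion. A minimal sketch of that computation (our own illustration, not the authors' model):

```python
import math

def apex_angle(apex, p1, p2):
    """Angle (deg) at `apex` between the two sides running to p1 and p2,
    computed from a single positional 'snapshot' of the three vertex dots."""
    v1 = (p1[0] - apex[0], p1[1] - apex[1])
    v2 = (p2[0] - apex[0], p2[1] - apex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# The computation is identical for static and moving dots, which is why a
# pure snapshot mechanism predicts equal performance for all motion types --
# but also why it cannot tell target dots apart from added background dots.
```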

*shape* of moving objects was the main objective, why would a system do anything but take “snapshots”? In other words, what are the advantages of employing different computations depending on the type of motion? One advantage of motion-specific mechanisms is that they can be more resistant to noise or occlusion than a uniform computation. The fact that performance is unaffected by short “blank periods” (of up to 60 ms) suggests that the actual position of the dots during occlusion is available, rather than merely the location of each dot before it disappeared (see Figure 7B). This can only be achieved through accurate extrapolation. The visual system appears able to use the motion signal generated by one vertex dot before the blank period to predict the position of that dot after the blank, when it can be combined with the position and/or motion of the second vertex dot. Given that the same robustness against occlusion is seen for translation, expansion, and rotation, and given that the dots in these conditions move along very different trajectories, this again suggests that distinct computations underlie the three types of motion.
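This extrapolation argument implies a different prediction rule for each motion type. The sketch below schematizes the three computations (our illustration; it assumes the motion parameters are already known, whereas the visual system would have to estimate them from the pre-occlusion motion signal):

```python
import math

def extrapolate_translation(p, velocity, dt):
    """Linear extrapolation: position shifts by velocity * dt."""
    return (p[0] + velocity[0] * dt, p[1] + velocity[1] * dt)

def extrapolate_rotation(p, center, omega_deg_s, dt):
    """Rotate p about center by omega * dt degrees."""
    a = math.radians(omega_deg_s * dt)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + dx * math.cos(a) - dy * math.sin(a),
            center[1] + dx * math.sin(a) + dy * math.cos(a))

def extrapolate_expansion(p, center, rate_per_s, dt):
    """Scale p radially away from center by a factor (1 + rate * dt)."""
    s = 1.0 + rate_per_s * dt
    return (center[0] + (p[0] - center[0]) * s,
            center[1] + (p[1] - center[1]) * s)
```

Each rule predicts a different trajectory for the same dot over the same 60 ms blank, which is why robustness to occlusion in all three conditions points to distinct, motion-type-specific computations rather than a single uniform one.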