Temporal differences in visual information processing between the eyes can cause dramatic misperceptions of motion and depth. Processing delays between the eyes cause the Pulfrich effect: oscillating targets in the frontal plane are misperceived as moving along near-elliptical motion trajectories in depth (Pulfrich, 1922). Here, we explain a previously reported but poorly understood variant: the anomalous Pulfrich effect. When this variant is perceived, the illusory motion trajectory appears oriented left- or right-side back in depth, rather than aligned with the true direction of motion. Our data indicate that this perceived misalignment is due to interocular differences in neural temporal integration periods, as opposed to interocular differences in delay. For oscillating motion, differences in the duration of temporal integration dampen the effective motion amplitude in one eye relative to the other. In a dynamic analog of the Geometric effect in stereo-surface-orientation perception (Ogle, 1950), the different motion amplitudes cause the perceived misorientation of the motion trajectories. Forced-choice psychophysical experiments, conducted both with different spatial frequencies and with different onscreen motion damping in the two eyes, show that the perceived misorientation in depth is associated with the eye having greater motion damping. A target-tracking experiment provided more direct evidence that the anomalous Pulfrich effect is caused by interocular differences in temporal integration and delay. These findings highlight the computational hurdles posed to the visual system by temporal differences in sensory processing. Future work will explore how the visual system overcomes these challenges to achieve accurate perception.

*p* < 0.01), 2 cpd versus 4 cpd (*p* < 0.001), and 3 cpd versus 6 cpd (*p* < 0.001), but not 1 cpd versus 2 cpd (*p* = 0.25). At the individual observer level, the proportion of “right-side-back” responses was significantly higher when the right eye was presented the lower spatial frequency for observers S1 (*p* < 0.001) and S3 (*p* = 0.021), but not S2 (*p* = 0.119) and S4 (*p* = 0.063). These results trend in a direction consistent with the experimental hypothesis: that the effective motion signals associated with higher spatial frequencies have amplitudes that are damped due to longer temporal integration periods. But stronger, more direct tests of this hypothesis are possible.

*p* < 0.001 for all spatial frequency combinations) as well as at the individual level (*p* < 0.001 for all observers).

*r* = 0.90, *p* < 0.04; Figure 6A). The group average shows a similar trend (*r* = 0.98, *p* = 0.02; Figure 6B). For all observers but one, the same qualitative pattern exists: sensory-perceptual- and tracking-based estimates of motion damping increase together. However, the slopes of the best-fitting lines vary substantially across observers (Figure 6C). On an observer-by-observer basis, it will therefore be difficult to predict estimates of visual motion damping in the forced-choice task from the magnitude of the visuomotor motion damping in the tracking task, or vice versa (see Discussion). Nevertheless, the results are largely consistent—at the group level and at the individual observer level—with the hypothesis that effective motion damping underlies anomalous Pulfrich effects.

². After light loss due to mirror reflections, the maximum luminance was 93.9 cd/m². The gamma function of each monitor was linearized using custom software routines. A single AMD FirePro D500 graphics card with 3 GB GDDR5 VRAM controlled both monitors to ensure that the left- and right-eye images were presented simultaneously. To overcome bandwidth limitations of the monitor cables, custom firmware was written so that a single color channel drove each monitor; the red channel drove the left monitor and the green channel drove the right monitor. The single-channel drive to each monitor was then split to all three channels for grayscale presentation.
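The two display-calibration steps described above (inverse-gamma linearization and splitting a single-channel drive to three channels for grayscale presentation) can be sketched in code. This is a minimal illustration, not the authors' custom routines; the 8-bit depth and the power-law gamma of 2.2 are assumptions of the sketch.

```python
import numpy as np

def make_linearizing_lut(gamma=2.2, levels=256):
    """Inverse-gamma lookup table: the display maps a normalized command
    value v to luminance proportional to v**gamma, so applying v**(1/gamma)
    first makes luminance linear in the commanded gray level.
    The gamma value here is an assumed placeholder, not a measurement."""
    v = np.arange(levels) / (levels - 1)
    return np.round((v ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

def split_to_gray(single_channel, lut):
    """Linearize a single-channel drive signal, then replicate it across
    R, G, and B for grayscale presentation."""
    linearized = lut[single_channel]
    return np.stack([linearized] * 3, axis=-1)
```

In practice the lookup table would be built from the measured gamma function of each monitor rather than an assumed power law.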

*E*_L and *E*_R are the left- and right-eye motion amplitudes in degrees of visual angle, Δ*t* is the onscreen delay between the left- and right-eye target images, ω is the temporal frequency of the target movement, ϕ_0 is the starting phase, and *t* is time in seconds.

ϕ_0 was randomly chosen on each trial to equal either 0 or π, which forced the stimuli to start either to the left or to the right of the center.
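Under the definitions above, the left- and right-eye target positions can be sketched as sinusoids with independent amplitudes, a shared temporal frequency and starting phase, and an onscreen interocular delay. The equation itself is not reproduced in this excerpt, so the sign convention for the delay (applied here to the right eye) is an assumption of the sketch.

```python
import numpy as np

def target_positions(t, E_L, E_R, omega, delta_t, phi0):
    """Horizontal target position in each eye (deg of visual angle):
    sinusoids with amplitudes E_L and E_R, temporal frequency omega
    (rad/s), starting phase phi0 (0 or pi on each trial), and onscreen
    delay delta_t applied to the right-eye image (assumed convention)."""
    x_L = E_L * np.sin(omega * t + phi0)
    x_R = E_R * np.sin(omega * (t - delta_t) + phi0)
    return x_L, x_R
```

With equal amplitudes and zero delay the two eyes receive identical motion; interocular amplitude differences implement the onscreen motion damping manipulation.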

σ_x and σ_y are the standard deviations in X and Y of the Gaussian envelope, *f* is the frequency of the carrier, and ϕ is the phase. Five Gabor targets with different carrier frequencies were used: 1 cpd, 2 cpd, 3 cpd, 4 cpd, and 6 cpd. All had the same spatial size because all had the same Gaussian envelope (σ_x = 0.39 and σ_y = 0.32). The octave bandwidths thus equaled 1.5, 0.7, 0.46, 0.35, and 0.23, and the orientation bandwidths equaled 60 degrees, 32 degrees, 22 degrees, 16 degrees, and 11 degrees, respectively. The phase of the carrier frequency was equal to 0.0 for all Gabor stimuli (i.e. all Gabors were in cosine phase).
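A minimal sketch of how such a Gabor could be generated, together with the octave bandwidth implied by a Gaussian envelope at a given carrier frequency. The sampling grid and the vertical carrier orientation are assumptions of this sketch, not specifications from the text.

```python
import numpy as np

def gabor(sigma_x=0.39, sigma_y=0.32, f=1.0, phi=0.0, extent=1.0, n=257):
    """Gabor: Gaussian envelope times a vertically oriented cosine carrier.
    Units are degrees of visual angle; grid size and carrier orientation
    are assumptions of this sketch."""
    x = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(x, x)
    envelope = np.exp(-(X**2 / (2 * sigma_x**2) + Y**2 / (2 * sigma_y**2)))
    return envelope * np.cos(2 * np.pi * f * X + phi)

def octave_bandwidth(f, sigma):
    """Half-amplitude octave bandwidth implied by a Gaussian envelope of
    width sigma (deg) at carrier frequency f (cpd)."""
    df = np.sqrt(2 * np.log(2)) / (2 * np.pi * sigma)
    return np.log2((f + df) / (f - df))
```

With σ_x = 0.39, `octave_bandwidth` gives roughly 1.5 and 0.7 octaves at 1 cpd and 2 cpd, consistent with the values reported above.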

*p*, of the observed response proportions is given by Equation 4, where *n* is the number of trials in a given condition, π_o is the probability of the observer responding “right-side back” in each of the two matched conditions under the null hypothesis (i.e. the observed mean of the two matched conditions), and *k* is the difference in the number of “right-side back” responses between the two matched conditions. We computed this probability (Equation 4) both at the group level (with trials combined across observers) and at the individual observer level (with trials combined across spatial frequency conditions, i.e. combining all conditions in which the left eye was presented the lower spatial frequency, and combining all conditions in which the right eye was presented the lower spatial frequency).
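Equation 4 itself is not reproduced in this excerpt. One way to compute a probability of this kind, shown here only as an illustrative construction and not necessarily the paper's exact statistic, is to evaluate the chance that two independent binomial counts with shared null rate π_o differ by at least |*k*|.

```python
from math import comb

def binom_pmf(n, p, k):
    """Probability of exactly k successes in n Bernoulli trials with rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def matched_difference_p(n, pi_o, k):
    """Probability that the 'right-side back' counts in two matched
    conditions (n trials each, shared null rate pi_o) differ by at least
    |k|. An illustrative sketch, not necessarily the paper's Equation 4."""
    pmf = [binom_pmf(n, pi_o, i) for i in range(n + 1)]
    return sum(
        pmf[i] * pmf[j]
        for i in range(n + 1)
        for j in range(n + 1)
        if abs(i - j) >= abs(k)
    )
```

Small observed differences yield probabilities near one, and large differences yield small probabilities, as a significance test of this form requires.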

*N* is the number of observers and *s* is the standard error of the motion damping estimates (as determined by 68% bootstrapped confidence intervals). Reliability-weighted averaging takes into consideration differences in the reliability of damping estimates across observers. These differences in reliability arise because some observers are more sensitive to onscreen motion damping than others. It is well known from signal detection theory that greater sensitivity in a task is associated with more reliable estimates of the point of subjective equality (here, estimates of motion damping).
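Reliability-weighted averaging of this kind is conventionally implemented as an inverse-variance weighted mean. Assuming the paper's equation takes that standard form, a minimal sketch:

```python
def reliability_weighted_average(estimates, standard_errors):
    """Inverse-variance weighted mean: observers with smaller standard
    errors (more reliable damping estimates) receive proportionally more
    weight. Assumes the standard inverse-variance form of the equation."""
    weights = [1.0 / s**2 for s in standard_errors]
    return sum(w * d for w, d in zip(weights, estimates)) / sum(weights)
```

When all standard errors are equal this reduces to the ordinary mean; an unreliable observer contributes almost nothing.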

*N* is the total number of conditions for an observer, \(\hat{D}\) is the experiment-derived estimate of motion damping for a given condition, \(\bar{D}\) is a free parameter indicating the expected amount of motion damping for a given condition, σ is the standard error of the motion damping estimate for a given condition (as determined by 68% bootstrapped confidence intervals), *a* is the y-intercept of the best-fitting line, and *b* is the slope of the best-fitting line.
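Assuming the fit minimizes the σ-weighted squared differences between the experiment-derived damping estimates and a line (a standard chi-square line fit; the exact objective is not reproduced in this excerpt), the intercept and slope have the closed form sketched below. Pairing tracking-based estimates (here `x`) with forced-choice estimates (here `d_hat`) is likewise an assumption of the sketch.

```python
import numpy as np

def weighted_line_fit(x, d_hat, sigma):
    """Best-fitting line d = a + b*x minimizing the sigma-weighted squared
    error sum(((d_hat - (a + b*x)) / sigma)**2), via the closed-form
    weighted least-squares solution."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(d_hat, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    xm = np.sum(w * x) / np.sum(w)   # weighted mean of x
    ym = np.sum(w * y) / np.sum(w)   # weighted mean of d_hat
    b = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    return ym - b * xm, b  # (intercept a, slope b)
```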

σ_x = 0.39 degrees and σ_y = 0.32 degrees), and subtended approximately 2.0 degrees × 2.0 degrees of visual angle (i.e. five sigma). Hence, in the five conditions, the octave bandwidths equaled 1.5, 0.7, 0.46, 0.35, and 0.23, and the orientation bandwidths equaled 60 degrees, 32 degrees, 22 degrees, 16 degrees, and 11 degrees, respectively. Data were collected in five intermixed blocks of 20 runs each for a total of 20 runs per condition.

*t* + 1 were generated as follows

_x is a random sample of Gaussian noise and *Q* is the drift variance. The random sample determines the change in target position between the current and the next time step. The drift variance determines the expected magnitude of the position change on each time step, and hence the overall variance of the random walk. The variance of the walk positions across multiple walks, σ²(*t*) = *Qt*, is equal to the product of the drift variance and the number of elapsed time steps. The value of the drift variance in our task (0.8 mm per time step) was chosen to be as large as possible such that each walk would traverse as much ground as possible while maintaining the expectation that fewer than one walk in 500 (i.e. fewer than one per human observer throughout the experiment) would escape the horizontal extent of the gray background area (176 × 131 mm) before the 11-second trial completed.
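A sketch of such a walk, treating the 0.8 mm figure as the per-step standard deviation (the text does not make explicit whether it refers to the standard deviation or the variance, so that interpretation is an assumption):

```python
import numpy as np

def random_walk(n_steps, step_sd=0.8, rng=None):
    """1D position random walk (units: mm): each step adds zero-mean
    Gaussian noise, so the position variance after t steps is Q*t with
    Q = step_sd**2. Interpreting 0.8 mm as the per-step standard
    deviation is an assumption of this sketch."""
    rng = np.random.default_rng(rng)
    steps = rng.normal(0.0, step_sd, size=n_steps)
    return np.concatenate([[0.0], np.cumsum(steps)])
```

Simulating many walks and checking the variance of the final positions against *Qt* is a quick way to verify the σ²(*t*) = *Qt* relation stated above.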

*h*(*t*) is a temporal impulse response function corresponding to a specific frequency. Convolving the target velocities with the impulse response function gives the velocities of the effective target images. Integrating these velocities across time gives the effective target positions.
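The two steps described here (convolve onscreen velocities with the temporal impulse response, then integrate) can be sketched for sampled signals as follows; the discrete approximation of the continuous convolution and integral by sums scaled by the sample period is an implementation choice of the sketch.

```python
import numpy as np

def effective_positions(velocities, h, dt):
    """Velocities of the effective target images are the onscreen target
    velocities convolved with a sampled temporal impulse response h
    (sample spacing dt); integrating the effective velocities
    (cumulative sum times dt) gives the effective target positions."""
    v_eff = dt * np.convolve(velocities, h)[: len(velocities)]
    return np.cumsum(v_eff) * dt
```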

*A* is the amplitude, and *m*, *s*, and *d* are the parameters determining the shape and scale of the fit. The mode (i.e. peak) of the function is given by *ms*. The full-width at half-height is used as a measure of the temporal integration period and is computed via numeric methods. The damping associated with a given fitted function is given by the value of the normalized amplitude spectrum at the temporal frequency of the stimulus, which in the current experiments is one cycle per second.
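The functional form of the fit is not fully reproduced in this excerpt, but the two derived quantities, the full-width at half-height and the damping at the stimulus frequency, can be computed numerically for any sampled impulse response, e.g.:

```python
import numpy as np

def full_width_at_half_height(t, y):
    """Numeric full-width at half the maximum of a sampled, single-peaked
    function, with linear interpolation at the half-height crossings."""
    half = np.max(y) / 2.0
    above = np.where(y >= half)[0]
    first, last = above[0], above[-1]

    def cross(i0, i1):  # interpolate the time at which y crosses half
        return t[i0] + (half - y[i0]) * (t[i1] - t[i0]) / (y[i1] - y[i0])

    left = t[0] if first == 0 else cross(first - 1, first)
    right = t[-1] if last == len(y) - 1 else cross(last, last + 1)
    return right - left

def damping_at(t, y, freq=1.0):
    """Normalized amplitude spectrum of a sampled impulse response at a
    given temporal frequency (1 cps in the experiments): |H(f)| / |H(0)|."""
    dt = t[1] - t[0]
    H = abs(np.sum(y * np.exp(-2j * np.pi * freq * t)) * dt)
    return H / abs(np.sum(y) * dt)
```

As a check on the numerics, for a unit Gaussian these routines recover the analytic full-width at half-height, 2√(2 ln 2) ≈ 2.355, and the analytic Gaussian amplitude spectrum.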

*Scientific Reports*, 6, 35805.

*Vision Research*, 39(6), 1143–1170.

*Journal of Neuroscience*, 24(33), 7305–7323.

*Vision Research*, 38(2), 187–194.

*Perception & Psychophysics*, 26, 53–68.

*Journal of Vision*, 15(3), 1–16.

*Journal of Neurophysiology*, 118(3), 1515–1531.

*Spatial Vision*, 10(4), 433–436.

*Annual Review of Vision Science*, 6, 491–517.

*bioRxiv Preprint*. http://doi.org/10.1101/2020.08.05.238642.

*Proceedings of the National Academy of Sciences*, 108(40), 16849–16854.

*Journal of Vision*, 14(2), 1–18.

*Nature Communications*, 6, 7900.

*Journal of Vision*, 16(13), 2.

*Current Biology*, 29(15), 2586–2592.e4.

*Journal of Neuroscience*, 40(4), 864–879.

*Perception & Psychophysics*, 51(4), 319–327.

*Journal of Neurophysiology*, 91(6), 2607–2627.

*Perception & Psychophysics*, 2, 438–440.

*eLife*, 7, e31448.

*PLoS Computational Biology*, 16(6), e1007947–e1008026.

*Proceedings of the National Academy of Sciences*, 115, E10486–E10494.

*Neuron*, 90(1), 165–176.

*Investigative Ophthalmology & Visual Science*, 18(7), 714–725.

*The American Journal of Psychology*, 62(2), 159–181.

*Vision*, 4(20), 1–13.

*Researches in binocular vision*. Philadelphia, London: W. B. Saunders.

*Nature*, 437(7057), 412–416.

*Die Naturwissenschaften*, 10(35), 553–564.

*Journal of Neurophysiology*, 94(2), 1541–1553.

*Scientific Reports*, 7(1), 5587.

*Scientific Reports*, 10, 1–16.

*PLoS One*, 5(10), e13775.

*Pflügers Archiv für die gesamte Physiologie des Menschen und der Tiere*, 257(1), 48–69.

*Vision Research*, 42(7), 851–864.

*Ophthalmologica*, 128(6), 380–388.

*The American Journal of Psychology*, 82(3), 350–358.

*Vision Research*, 33(10), 1421–1430.

*Network: Computation in Neural Systems*, 14(3), 371–390.