Three-dimensional motion aftereffects reveal distinct direction-selective mechanisms for binocular processing of motion through depth
Journal of Vision September 2011, Vol.11, 18. doi:https://doi.org/10.1167/11.10.18
Thaddeus B. Czuba, Bas Rokers, Kyle Guillet, Alexander C. Huk, Lawrence K. Cormack; Three-dimensional motion aftereffects reveal distinct direction-selective mechanisms for binocular processing of motion through depth. Journal of Vision 2011;11(10):18. https://doi.org/10.1167/11.10.18.
Abstract

Motion aftereffects are historically considered evidence for neuronal populations tuned to specific directions of motion. Despite a wealth of motion aftereffect studies investigating 2D (frontoparallel) motion mechanisms, there is a remarkable dearth of psychophysical evidence for neuronal populations selective for the direction of motion through depth (i.e., tuned to 3D motion). We compared the effects of prolonged viewing of unidirectional motion under dichoptic and monocular conditions and found large 3D motion aftereffects that could not be explained by simple inheritance of 2D monocular aftereffects. These results (1) demonstrate the existence of neurons tuned to 3D motion as distinct from monocular 2D mechanisms, (2) show that distinct 3D direction selectivity arises from both interocular velocity differences and changing disparities over time, and (3) provide a straightforward psychophysical tool for further probing 3D motion mechanisms.

Introduction
There is a wealth of psychophysical and physiological evidence for the existence of neurons tuned to roughly frontoparallel (2D) directions of motion in the primate visual system (e.g., as presented on a plane perpendicular to the observer's line of sight; Born & Bradley, 2005; Burr & Thompson, 2011). In contrast, there is relatively little evidence for the existence of neurons tuned to 3D motion (e.g., toward or away from the observer; Akase, Inokawa, & Toyama, 1998; Cynader & Regan, 1982; Maunsell & Van Essen, 1983; Poggio & Talbot, 1974; Regan & Cynader, 1982; Toyama et al., 1985; Zeki, 1981). Further, prior psychophysical work has revealed that such 3D motion processing relies both on estimating changes in binocular disparity over time and on comparing different monocular velocities across the two eyes (Harris, Nefs, & Grafton, 2008; Regan & Gray, 2009). It remains unclear whether such computations are explicitly represented by later processing stages that are directionally selective for 3D motion. Indeed, given the scant electrophysiological evidence for 3D tuning in individual neurons compared to the widespread occurrence of 2D tuning throughout the visual cortex, one might wonder whether such disparity- and velocity-based inferences are not explicitly represented by 3D direction-selective neural populations but are instead extracted by cognitive and motor circuits that only “read out” 3D direction when required for task performance or action.
In the first experiment, we employed the motion aftereffect (MAE) to test the hypothesis that the visual system contains neural populations tuned to 3D directions of motion. Following the prior logic of 2D MAEs, we reasoned that prolonged viewing of unidirectional motion toward an observer would make subsequently viewed stimuli more likely to appear to be moving away (and vice versa). Such an aftereffect could be interpreted as the result of a post-adaptation imbalance of responses of neurons tuned to motion toward (weaker after adaptation toward) versus those tuned to motion away (unaffected by adaptation toward; Anstis, Verstraten, & Mather, 1998; Barlow & Hill, 1963; Mather, 1980). Alternatively, the lack of an MAE would suggest that neurons tuned to 3D motion do not exist and rather that later stages of cognitive and motor processing, which presumably do not adapt, are involved in a less explicit process of inferring 3D motion from the responses of a population of neurons that are themselves not selective for 3D motion but code the relevant building blocks. 
Although such “if you can adapt it, it's there” logic (Mollon, 1974) has repeatedly been applied to the case of 2D motion, the interpretation of 3D motion aftereffects requires additional care. One major interpretive challenge is due to the fact that 3D motion processing depends at least in part on exploiting the fact that objects moving toward or away from an observer project different horizontal velocities to the two eyes (Brooks & Stone, 2004; Czuba, Rokers, Huk, & Cormack, 2010; Rokers, Cormack, & Huk, 2009; Shioiri, Saisho, & Yaguchi, 2000). Thus, to interpret a 3D MAE as unambiguous evidence for the existence of mechanisms tuned to 3D direction, one must distinguish 3D motion adaptation per se from the inherited adaptation effects of the monocular 2D mechanisms that send signals to the putative 3D mechanism. We addressed this issue by separately measuring the 2D monocular MAEs and then testing whether they could quantitatively account for the magnitude of the 3D MAE. 
Such a directionally selective representation of 3D motion could be based on binocular mechanisms specific to processing motion through depth. In the second experiment, we addressed the two primary binocular motion cues that could contribute to the 3D MAE: a disparity-based, changing disparity (CD) cue and a velocity-based, interocular velocity difference (IOVD) cue.
The CD cue, changing disparity over time, can be computed by taking the time derivative of horizontal binocular disparity (i.e., comparing an object's changing position in depth over time; Cumming & Parker, 1994; Gray & Regan, 1996; Regan & Gray, 2009). The CD cue has traditionally received a great deal of attention because of its compelling ability to generate 3D motion percepts through purely cyclopean pathways, i.e., from stimuli that are completely devoid of coherent 2D (monocular) motion signals (Julesz, 1960). This is achieved by dynamically relocating stimulus elements on a plane frontoparallel to the observer on successive display frames, while presenting a series of steadily changing binocular disparities that correspond to motion toward or away from the observer.
The IOVD cue, interocular velocity difference, takes advantage of the geometry of binocular viewing, wherein an object moving through depth creates different (and often opposite) velocities of motion in the two eyes. The direction of 3D motion can, therefore, be computed by directly comparing monocular velocity signals in corresponding regions of the two retinae. Although the IOVD cue was first proposed by Beverley and Regan (1973), it was not critically addressed until two decades later (Brooks & Mather, 2000; Cumming & Parker, 1994; Portfors-Yeomans & Regan, 1996) and, until recently, has been thought to make little or no contribution to 3D motion processing (Harris et al., 2008; Regan & Gray, 2009). There is, however, a growing body of evidence suggesting the IOVD cue plays an important, if not primary, role in a variety of ecologically plausible viewing conditions (see General discussion section). 
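The relation between the two cues can be made explicit with a little notation (our own, not the authors'). Writing x_L(t) and x_R(t) for the horizontal retinal positions of a binocularly matched feature, the two computations differ only in the order of differencing and differentiating:

```latex
% Generic notation (not taken from the original paper):
% x_L(t), x_R(t) are horizontal retinal positions of a matched feature in the two eyes.
\begin{align*}
  \delta(t)     &= x_L(t) - x_R(t) && \text{(binocular disparity)}\\
  \text{CD}     &= \frac{d\delta}{dt} = \frac{d}{dt}\bigl[x_L(t) - x_R(t)\bigr]
                && \text{(difference first, then differentiate)}\\
  \text{IOVD}   &= v_L(t) - v_R(t) = \frac{dx_L}{dt} - \frac{dx_R}{dt}
                && \text{(differentiate first, then difference)}
\end{align*}
```

For a perfectly matched feature the two quantities are numerically identical, but they demand different front-end machinery (a disparity signal versus eye-specific velocity signals), which is why they can be dissociated with the cue-isolating stimuli described in the Methods.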
In the second experiment, we examined the relative contributions of the cues potentially underlying the 3D MAE. We approached this using cue-isolating adaptation stimuli to differentially measure the contributions of the velocity-based and disparity-based cues to the full 3D MAE. Further, by adapting to isolated CD or IOVD stimuli and testing with an identical test stimulus (containing both cues), we were able to examine the relative contribution of each binocular cue to the representation of 3D directions of motion. 
In summary, we performed a series of psychophysical experiments identifying a distinct 3D MAE, using methods that allowed us to consider the contributions of monocular adaptation and to quantify the relative contribution of the CD and IOVD cues. Our results reveal a surprisingly large 3D MAE, provide psychophysical evidence for the existence of neurons tuned to 3D direction of motion, and demonstrate that the motion aftereffect can be used to probe the mechanisms of 3D motion perception. 
Methods and materials
Observers
Data were collected in three psychophysical observers (three of the authors, males aged 27–47), all with good stereopsis and normal or corrected-to-normal vision. Experiments were undertaken with the written consent of each observer, and all procedures were approved by the UT-Austin Institutional Review Board. A total of 7776 trials were collected across the 3 observers. The nature of motion adaptation experiments required the use of highly experienced observers capable of maintaining continuous fixation for the entire duration of each experimental session (17–20 min at a time, 17.8 hours total). 
General procedure
We measured the magnitude of the motion aftereffect using a motion nulling paradigm (Blake & Hiris, 1993). Following adaptation to unidirectional 2D or 3D motion, we presented a series of test stimuli that contained variable motion coherence in the same or opposite direction as adaptation, interleaved with brief top-up adaptation stimuli (see General stimuli section). On each trial, observers reported the perceived direction of test stimulus motion in a 2-alternative forced choice task, responding either leftward/rightward or toward/away, depending on condition. No feedback was provided. Using the method of constant stimuli, direction discrimination sensitivity was measured across a range of motion coherences by adjusting the ratio of signal dots to noise dots.
General stimuli
Observers stereoscopically viewed (via mirror stereoscope; see Apparatus and displays section) moving random dot displays in which 80 dark (0.55 cd/m²) or light (124.25 cd/m²) binocularly paired dots were presented on a mid-gray (61.40 cd/m²) background. In each monocular half-image, half the dots were dark and half the dots were light (Figure 1, screenshot of stimulus spanning two monitors). Individual dots subtended a visual angle of 9 arcmin (0.15°) and were anti-aliased to achieve subpixel position accuracy. Stimulus dots were uniformly distributed within a volume spanning 2.5–8° in eccentricity and ±72 arcmin disparity. Observers fixated on a single, static, bright stimulus dot in the center of a small central square (0.5°) with horizontal (black) and vertical (red) nonius lines located in the center of each monocular half-image. To further aid fixation and confirm proper binocular alignment, a static 1/f noise texture surround was presented around fixation marks (0–0.75° eccentricity, 0 arcmin disparity) and beyond the stimulus annulus (≥9.8° eccentricity, 0 arcmin disparity).
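As a concrete illustration of this stimulus geometry, the sketch below shows one way such a binocularly paired dot cloud could be generated and split into left- and right-eye half-images. This is our own illustrative Python (the actual stimuli were generated in MATLAB with the Psychophysics Toolbox); the uniform-area sampling and the disparity sign convention are assumptions, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DOTS = 80
ECC_RANGE_DEG = (2.5, 8.0)      # annulus around fixation
DISP_LIMIT_DEG = 72.0 / 60.0    # +/-72 arcmin of disparity

def make_dot_cloud(n=N_DOTS):
    """Sample binocularly paired dots over the annulus and disparity volume."""
    # Assumed: uniform density over the annulus area (hence sqrt of uniform in r^2).
    ecc = np.sqrt(rng.uniform(ECC_RANGE_DEG[0] ** 2, ECC_RANGE_DEG[1] ** 2, n))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    x, y = ecc * np.cos(theta), ecc * np.sin(theta)        # cyclopean position (deg)
    disparity = rng.uniform(-DISP_LIMIT_DEG, DISP_LIMIT_DEG, n)
    polarity = np.where(np.arange(n) < n // 2, -1, 1)      # half dark, half light dots
    return x, y, disparity, polarity

def half_images(x, y, disparity):
    """Offset each eye's half-image by half the disparity (sign convention is arbitrary)."""
    return (x + disparity / 2.0, y), (x - disparity / 2.0, y)

x, y, disp, pol = make_dot_cloud()
(left_x, left_y), (right_x, right_y) = half_images(x, y, disp)
```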
Figure 1
 
Screen capture of the basic stimulus. In the actual experiments, the right and left halves were split between two monitors and viewed through a mirror stereoscope. Nonetheless, free-fusing will give a reasonable impression of the experimental percepts. We found that the 1/f texture in the center and surround greatly facilitated fusion and held stable vergence (which subjects could monitor with the horizontal and vertical nonius lines at fixation).
Signal dots moved at a monocular speed of 0.6°/s. Upon reaching the edge of the stimulus volume, dots were wrapped to the opposite end of the volume and were randomly relocated to minimize apparent motion during the wrap. Thus, dot lifetimes were constrained by the duration of travel through the stimulus volume. Because this constraint was most severe in 3D motion conditions ([depth of volume] / [disparity change per frame] = 120 frames = 2.0 s at 60 Hz), identical adaptation dot lifetimes of 2.0 s were imposed for frontoparallel and monocular motion stimuli.
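As a quick check of this constraint (a back-of-the-envelope calculation using the parameters reported above, with our own arithmetic):

```latex
% Per-eye displacement per frame: 0.6 deg/s / 60 Hz = 0.01 deg = 0.6 arcmin.
% In the 3D conditions the two eyes move in opposite directions, so disparity
% changes by 2 x 0.6 = 1.2 arcmin per frame across a volume of 2 x 72 = 144 arcmin:
\[
  \frac{144\ \text{arcmin}}{1.2\ \text{arcmin/frame}} = 120\ \text{frames}
  \;=\; \frac{120\ \text{frames}}{60\ \text{Hz}} = 2.0\ \text{s}.
\]
```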
While monocular motion speeds of 0.6°/s are relatively slow compared to those used in most 2D motion research, when stimuli are moving in opposite directions between the two eyes, even relatively slow monocular speeds correspond to brisk 3D motion speeds. Furthermore, at similar speeds and retinal eccentricities, direction discrimination sensitivities are approximately equivalent for the two primary binocular 3D motion cues (Czuba et al., 2010). 
After an initial adaptation period (100 s), observers were presented with a series of test stimuli (1 s) moving in the same or opposite direction as adaptation followed by brief top-up adaptation stimuli (4 s; 1.25-s interstimulus interval) to maintain steady-state adaptation. Test stimuli were similarly distributed throughout the stimulus volume, but had brief dot lifetimes (15 frames, 250 ms), and were presented in a range of motion coherence levels (ranging from 0 to 95% coherence). Short dot lifetimes were selected to reduce perceptual segregation of real and illusory motion, while still providing a clear motion percept and a useful dynamic range in resulting psychometric functions (Lankheet & Palmen, 1998; Watamaniuk, McKee, & Grzywacz, 1995). Observers reported the perceived direction of test stimulus motion with a left or right mouse click. 
Manipulation of 3D motion coherence
In the test stimulus of all experiments, 3D motion coherence, defined as the ratio of signal dots to noise dots (e.g., Newsome & Paré, 1988), was randomly varied on a trial-by-trial basis according to the method of constant stimuli. Signal dots moved coherently and uniformly in the same or opposite direction as the adapter, while noise dots moved in random walks along that same dimension. Regardless of their signal/noise designation, all dots (excluding those exceeding the stimulus volume) were displaced 0.01° either toward or away from the observer on every display frame. 
Based on pilot experiments, motion coherence levels were selected to span the dynamic range of observers' responses before and after adaptation. For 3D test conditions (3D, IOVD, CD, and 3D-planar; see Methods: 2D vs. 3D MAE section and Methods: Binocular cue contribution section for detailed description), direction discrimination was measured at coherence levels of ±5, 20, 50, 80, and 95%. Figure 2 shows an illustrative gradient of coherence for frontoparallel (2D) motion stimuli; each coherence panel depicts a single eye's monocular half-image (in this case, the right eye). By arbitrary convention, we define leftward/away motion coherence as negative coherence and rightward/toward motion as positive coherence. For most 2D adaptation conditions, a narrower range of ±5, 20, and 50% coherence provided sufficient coverage for convergence of psychometric fits.
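A minimal sketch of how this signal/noise coherence manipulation can be implemented (illustrative Python with our own names; the assumption that the 0.01° per-frame displacement refers to each eye's half-image, consistent with 0.6°/s at 60 Hz, is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

MONOCULAR_STEP_DEG = 0.01        # per-eye displacement per frame (0.6 deg/s at 60 Hz)

def assign_signal_dots(n_dots, coherence):
    """Designate a |coherence| fraction of dots as signal; the rest are noise."""
    signal = np.zeros(n_dots, dtype=bool)
    n_signal = int(round(abs(coherence) * n_dots))
    signal[rng.choice(n_dots, size=n_signal, replace=False)] = True
    return signal

def step_depth(disparity, signal, coherence):
    """Advance one frame: signal dots step coherently toward/away (sign of coherence);
    noise dots take a random-walk step along the same depth axis."""
    step = 2.0 * MONOCULAR_STEP_DEG      # disparity change per frame
    direction = np.where(signal,
                         np.sign(coherence),
                         rng.choice([-1, 1], size=disparity.size))
    return disparity + direction * step

# Example: one frame of a -50% coherence (away) test stimulus with 80 dots.
disparity = rng.uniform(-1.2, 1.2, 80)   # deg (approximately +/-72 arcmin)
signal = assign_signal_dots(80, -0.5)
disparity = step_depth(disparity, signal, -0.5)
```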
 
Figure 2
 
Movie 1. A (2D) sampling of various coherence levels as used in our method of constant stimuli. Each panel represents a single monocular half-image across a range of increasing rightward motion coherence levels (5, 20, 50, 80, and 95%). The reader should readily appreciate a continuum of motion strength.
Motion coherence was pseudorandomized across trials within a run. Each run consisted of a single adaptation direction with 12 to 24 trials per coherence level (6 to 10 coherence levels, depending on condition), for a total of 120 to 144 trials per run. Each observer completed 2 runs per condition in randomized order, with a minimum of 30 min between consecutive runs. When feasible, data from compatible conditions (e.g., monocular MAE from left and right eyes) were combined across ocular pairs (Table 1). 
Table 1
 
Adaptation and test motion condition matrix.
| MAE condition | 〈Adapt, Test〉 | Adapt directions | Response | Number of coherences | Total trials | Data in |
|---|---|---|---|---|---|---|
| 3D | 〈3D, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 5 |
| 2D | 〈2D, 2D〉 | Left∣Right | Left∣Right | 6 | 432 | Figure 6 |
| Monocular | 〈mono, mono_same〉 | Left∣Right | Left∣Right | 10 | 1440* | Figure 7 |
| 3D-mono | 〈3D, mono〉 | Toward∣Away | Left∣Right | 6 | 864* | Figure 8 |
| Interocular transfer | 〈mono, mono_opposite〉 | Left∣Right | Left∣Right | 6 | 864* | Figure 8 |
| IOVD | 〈IOVD, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| CD | 〈CD, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| 3D-planar | 〈3D, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| Unadapted 3D | 〈–, 3D〉 | – | Toward∣Away | 6 | 432 | Figure 5 |
| Unadapted 2D | 〈–, 2D〉 | – | Left∣Right | 6 | 432 | Figure 6 |
| Unadapted monocular | 〈–, mono〉 | – | Left∣Right | 6 | 432 | Figure 7 |

*Combined across ocular pairs.

Apparatus and displays
Stimuli were presented on a pair of linearized 19″ CRT monitors (Viewsonic G90; 60-Hz progressive scan, 1024 × 768 pixel resolution per display) viewed through a mirror stereoscope. The monitors were driven by a Mac Pro computer equipped with an NVIDIA GeForce 8800 GT video card.
Monocular half-images were presented separately on the two monitors, with a septum and various baffles positioned to ensure that each monitor was visible only to the corresponding eye. Viewed through the 90-cm optical path length of the stereoscope, each monocular half-image subtended 22° of visual angle. The displays were driven using a dual-monitor-spanning video splitter (Matrox DualHead2Go) to ensure frame-locked temporal synchrony between the two displays. All stimuli were generated using the Psychophysics Toolbox (Brainard, 1997) and MATLAB (2007a, The Mathworks, Natick, MA).
Data analysis
Data were analyzed by computing observers' proportion of toward/rightward responses as a function of test motion coherence. For each condition, we combined data across multiple runs for each subject and fit psychometric functions (2-parameter logistic; Equation 1) to data collected before and after adaptation: 
\[
f(x) = \frac{1}{1 + e^{\,2\alpha(x + \beta)}}. \tag{1}
\]
Because the fitted logistic parameters, β (shift) and α (slope), directly quantify the effect of adaptation on the perceived direction and on the sensitivity of observers' direction discrimination, all analyses were performed directly on these fitted parameters. Bootstrapped confidence intervals on the logistic parameters were computed by resampling (with replacement) the binomial responses from each subject to create 1000 repetitions of the experiment and fitting a psychometric function to each resampled experiment. To improve estimates of the slopes of adapted psychometric functions, a single slope was fit for each observer and motion condition [i.e., the raw data for each adaptation direction were shifted by the β of an initial logistic fit, and a single slope (α) was then fit to the combined data]. Finally, fitted logistic parameters were averaged across individual observers to yield a single bootstrapped distribution of psychometric fits for each motion condition. Plotted psychometric functions correspond to the median fit parameters (after checking that the median values were very similar to the means). Individual data points are derived from raw data averaged across observers and are presented to provide a sense of variability across observers. The magnitude of motion aftereffects was estimated from the motion coherence level at which observers were equally likely to report seeing leftward/toward or rightward/away motion [the point of subjective equality (PSE)]. All error bars represent 95% confidence intervals on the bootstrapped distribution (corresponding to approximately ±2 SEM for distributions that are roughly Gaussian).
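For concreteness, the sketch below shows a SciPy-based version of this fitting and bootstrapping procedure. It is an illustrative reimplementation under our own conventions (the authors used MATLAB), and the sign conventions for α and β simply follow Equation 1 as written above.

```python
import numpy as np
from scipy.optimize import minimize

def logistic(x, alpha, beta):
    """Two-parameter logistic of Equation 1 (50% point at x = -beta under this form)."""
    return 1.0 / (1.0 + np.exp(2.0 * alpha * (x + beta)))

def fit_logistic(coherence, n_toward, n_trials):
    """Maximum-likelihood fit of (alpha, beta) to binomial response counts."""
    def neg_log_likelihood(params):
        p = np.clip(logistic(coherence, *params), 1e-6, 1.0 - 1e-6)
        return -np.sum(n_toward * np.log(p) + (n_trials - n_toward) * np.log(1.0 - p))
    return minimize(neg_log_likelihood, x0=[1.0, 0.0], method="Nelder-Mead").x

def bootstrap_fits(coherence, n_toward, n_trials, n_boot=1000, seed=0):
    """Resample binomial responses at each coherence level and refit n_boot times."""
    rng = np.random.default_rng(seed)
    fits = np.empty((n_boot, 2))
    for i in range(n_boot):
        resampled = rng.binomial(n_trials, n_toward / n_trials)
        fits[i] = fit_logistic(coherence, resampled, n_trials)
    return fits   # columns: alpha, beta; percentiles of each column give the 95% CIs
```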
Experiment 1
Methods: 2D vs. 3D MAE
In the first experiment, we sought to determine whether the visual system explicitly represents 3D directions of motion by adapting observers to dot stereograms moving directly toward or away from the observer (opposite horizontal motions in the two eyes; Figure 3). The resulting 3D MAE was compared against 2D (frontoparallel) MAEs that were induced by presenting moving dot stereograms that contained the same, rather than opposite, horizontal motion in the two eyes (Figure 4). This produced a percept of a 3D cloud of dots moving leftward or rightward, frontoparallel to the observer. By manipulating only the relative monocular motions in the two eyes, this method ensured that the overall dot density, distribution of disparities, and monocular stimulation were identical across the two conditions.
 
Figure 3
 
Movie 2. Depiction of the 3D motion-through-depth stimulus during a series of top-up adaptation and test presentations. The right panel shows a faithful rendition of the stereoscopic stimuli used (fusible stereopair), and the left panel shows a perspective view of the same stimulus sequence.
 
Figure 4
 
Movie 3. Same as Figure 3 but showing the frontoparallel motion stimulus.
Dissociation of 3D MAE from inherited, monocular 2D MAEs
As noted previously, the presence of a 3D MAE is not sufficient evidence for neurons tuned to 3D directions of motion without further distinguishing it from a simple inheritance of monocular MAEs. To distinguish the 3D MAE from plausible combinations of inherited aftereffects, we measured adaptation to the monocular components of the 2D and 3D motion conditions. Monocular adaptation and test stimuli were exactly the same as in the previous MAE conditions, except that one monocular half-image was replaced with a mean gray field. With disparity information thus effectively removed, the stimulus appeared as a single, monocularly visible plane of dots moving leftward or rightward.
When considering whether the 3D MAE only reflects a combination of monocular aftereffects, one must also take into account the binocular interaction that occurs during adaptation to opposite monocular motion in the two eyes. We addressed this with two additional control conditions that measured: (1) the frontoparallel monocular MAE following adaptation to motion toward or away from the observer, and (2) the interocular transfer of monocular MAE. 
Results: Existence of a 3D MAE and comparison to 2D MAE
We observed a strong 3D motion aftereffect resulting from prolonged adaptation to unidirectional motion toward or away from the observer (Figure 5). Figure 5A depicts the two adaptation conditions: binocularly presented 3D motion toward or away from the observer, followed by binocularly presented test stimuli moving in either the same or opposite direction as adaptation. Data plotted in Figure 5B show the proportion of “toward” responses as a function of test stimulus coherence (averaged across 3 experienced observers, 72 trials per point, 720 trials total). Increasingly away motion coherence corresponds to negative values on the x-axis and increasingly toward motion coherence to positive values. A value of 0.5 on the y-axis corresponds to the point at which an observer is equally likely to report either toward or away on a given trial and indicates an observer's point of subjective equality. The black line shows a logistic fit to subjects' responses prior to adaptation, essentially the observer's 3D direction discrimination sensitivity. As expected, this line is centered on zero motion coherence and has a moderate sensitivity (α⁻¹ = 0.153, CI95 = [0.109, 0.197]). Following adaptation to toward-direction motion (green), observers were much more likely to report noisy 3D motion stimuli as moving away (β = 0.356, CI95 = [0.307, 0.403]). Similarly, after adaptation to away-direction motion (red), observers were more likely to report directionally ambiguous 3D motion as moving toward (β = −0.557, CI95 = [−0.601, −0.508]).
Figure 5
 
(A) A schematic of the basic experimental paradigm; subjects adapted to equal and opposite motion in the two eyes (producing a 3D motion percept of dots moving either toward or away from the observer), and then judged the perceived direction of motion in depth of a test stimulus with a coherence that varied from trial to trial. (B) The psychometric functions (parametrically combined across observers; see Methods section) mapping the coherence of the test stimulus (x-axis) to the percent of trials judged as “toward” the observer (y-axis). The green curve is a “toward” adapter, the red curve is an “away” adapter, and the black curve is a reference curve collected without any adaptation. Gray error bars show bootstrapped 95% confidence intervals. The abscissa corresponding to the 0.5 ordinate on each curve represents the point of subjective equality for each condition, and bootstrapped 95% confidence intervals are shown by the black horizontal bars. Clearly, a substantial 3D MAE is present.
Across a range of 3D motion coherences, these 3D MAEs shifted the psychometric functions leftward or rightward (relative to an unadapted control condition) and, thus, could be quantified in terms of relative displacement along the x-axis, i.e., in units of the test stimulus motion coherence. Three-dimensional MAEs were equivalent to ∼45% (CI95 = [42.4, 48.9]) motion coherence. This effect struck us as surprisingly large: for the test stimuli to be judged as having no net motion on average, approximately half of the dots had to move in the direction opposite that of adaptation.
As a basis for comparison, we measured conventional 2D frontoparallel MAEs for the same observers under stereoscopic viewing conditions (Figure 6A; dots moved in the same direction in both eyes but were otherwise identical to those used to generate 3D MAEs). We performed the same analysis, except that the directions of motion were leftward or rightward, instead of toward or away. The y-axis now represents the proportion of rightward responses; negative values on the x-axis correspond to increasingly leftward motion coherence, and positive values correspond to increasingly rightward motion coherence (Figure 6B). Unadapted direction discrimination sensitivity (black) is again centered on zero motion coherence but with a noticeable ∼2.5-fold improvement in sensitivity (α⁻¹ = 0.054, CI95 = [0.016, 0.076]) over the 3D motion case. Increased direction discrimination sensitivity for 2D motion relative to 3D motion is unsurprising given the previously mentioned stereomotion suppression effect (Tyler, 1971). Further, Welchman, Lam, and Bülthoff (2008) have shown that greater 2D sensitivity (as measured by increment thresholds) is a consequence of a Bayesian model in which the visual system incorporates a low-speed prior; their model thus predicts this aspect of our data. What is more interesting, however, is the relative magnitude of the 2D and 3D MAEs. The shift in psychometric function following 2D motion adaptation was equivalent to approximately 18% motion coherence (β = 0.183, CI95 = [0.165, 0.205]). Given that previous studies on 2D MAEs using similar dynamic test stimuli have observed effects of similar magnitudes (Blake & Hiris, 1993; van Wezel & Britten, 2002), this confirms our initial impression that 3D MAEs are uniquely large.
Figure 6
 
(A) Schematic of and (B) data from the frontoparallel motion condition. The aftereffect is much smaller than when the adaptation stimulus moved through depth, and the magnitude is also consistent with what has been reported previously for similar experiments (see text for references).
Dissociation of 3D MAE from inherited, monocular 2D MAEs
One rationale for why the 3D MAE is so large could be that it reflects multiple stages of adaptation: a 2D monocular stage (that processes the individual direction of motion for each eye, which are opposite between the two eyes) and a later 3D cyclopean stage (which extracts motion through depth after binocular integration). We therefore measured 2D monocular MAEs to assess the amount of adaptation that occurred in the earlier stages. If the magnitude of the MAEs from early stages could completely account for the 3D MAEs we observed, this would suggest that the 3D MAE was simply the result of inherited adaptation. Of course, given the fact that the 3D MAE is larger than the stereoscopically viewed 2D MAE, this possibility struck us as extremely unlikely—but we still wished to quantify the relative contribution of the 2D stage. 
We measured monocular MAEs by presenting adaptation and test stimuli in only one eye, as observers performed the same left–right direction discrimination task as in the stereoscopic 2D MAE experiment (Figure 7B). These “pure” monocular MAEs were equivalent to approximately 20% motion coherence (Figure 7B; β = 0.198, CI95 = [0.189, 0.208]). Although monocular MAEs were considerably smaller than the 3D MAE, their magnitudes were similar to previously reported (binocular) 2D MAEs, suggesting that monocular adaptation can fully account for the 2D frontoparallel MAE.
Figure 7
 
(A) Schematic of and (B) data from the monocular motion condition (monocular adaptation, monocular test presented to the same eye). The magnitude of the MAE is not statistically different from that of the frontoparallel condition shown in Figure 6.
If, in the 3D MAE, the two eyes' (monocular) channels adapted independently of one another, the amount of monocular adaptation in the 3D MAE could be assessed simply by adapting and testing each eye independently. However, motion processing channels exhibit significant binocular crosstalk, evidenced by varying degrees of reported interocular transfer of monocular MAEs (Anstis & Duncan, 1983; Grunewald & Mingolla, 1998; Lehmkuhle & Fox, 1976). Monocular MAEs assessed after dichoptic 3D adaptation were weaker than either 2D or 3D MAEs, equivalent to approximately 9% motion coherence (Figure 8A; β = 0.093, CI95 = [0.082, 0.104]). The associated interocular transfer of monocular MAEs is shown in Figure 8B (β = 0.151, CI95 = [0.137, 0.164]). The amount of monocular adaptation following dichoptic viewing of the 3D motion stimulus likely reflects a combination of direct and indirect monocular MAEs, i.e., direct monocular adaptation partially canceled by interocular transfer of the opposite direction of adaptation from the other eye. Clearly, neither scenario of monocular 2D motion adaptation could straightforwardly account for the magnitude of the 3D MAE. The relative sizes of these effects suggest that the majority of the 3D MAE arises de novo, after the adaptation of monocular mechanisms sensitive to the 2D patterns of motion falling upon each eye.
Figure 8
 
Data and psychometric functions from the (A) 3D adapt, monocular test condition (3D-mono) and (B) the interocular transfer condition (IOT). Under 3D adaptation conditions, one would expect the MAE resulting from monocular adaptation in the tested eye to be partially canceled by the interocular transfer of the (opposing) adaptation in the untested eye. The magnitude of the 3D-mono MAE is smaller than either the 2D or 3D MAEs, confirming this expectation. (B) The data resulting from monocular adaptation in one eye and testing in the other eye (i.e., a direct measurement of the interocular transfer). Note that the reversal in the direction of the shift for the “toward” and “away” curves is as expected.
Experiment 2
Methods: Binocular cue contribution
The inability to explain the 3D motion aftereffect by a simple combination of monocular adaptation effects suggests that the 3D MAE must be the result of adaptation of an additional mechanism. Such a mechanism would be selective for 3D motion per se and could compute 3D motion information based on changing disparities over time, interocular velocity differences, or both. This second experiment explored how binocular motion cues contribute to 3D adaptation by measuring adaptation to the two primary binocular 3D motion cues: changing disparity (CD) and interocular velocity difference (IOVD), using adaptation stimuli that have been shown to effectively isolate binocular motion cues. By adapting observers to stimuli that isolated either the CD or IOVD cue and measuring the resulting MAE using a test stimulus identical to that of the first experiment (containing both cues), we were able to examine the contribution that adaptation to each binocular cue makes to the full 3D MAE.
The general adaptation and test procedure in the second experiment was largely identical to the first experiment. After a sustained initial adaptation to unidirectional 3D motion either toward or away from the observer, subjects were presented with a series of variable motion coherence test stimuli moving in either the same or opposite direction as adaptation. The time course of test and top-up adaptation stimuli was identical to the first experiment. The only manipulation was that adaptation stimuli were adjusted to isolate either the velocity- or disparity-based 3D motion cue. 
Isolated cue conditions: CD, IOVD, and 3D-planar
The CD cue was isolated by presenting temporally uncorrelated dots that maintained steadily changing disparities (Figure 9). This was achieved by relocating stimulus dots within the frontoparallel plane on every display frame (60 Hz) while presenting steadily changing disparities (Braddick, 1974; Cumming & Parker, 1994; Julesz, 1971), creating a percept similar to a plane of TV snow moving through depth.
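A sketch of this frame-update rule (illustrative Python; a square rather than annular dot region is used for brevity, and all names and the disparity sign convention are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

DISPARITY_STEP_DEG = 0.02   # assumed disparity change per frame (2 x 0.01 deg monocular equivalent)

def cd_frame(disparity, n_dots=80, extent_deg=8.0):
    """One frame of a CD-isolating stimulus: dot positions are redrawn from scratch
    on every frame (so no dot carries a coherent monocular motion signal), while the
    disparity assigned to the dots steps steadily through depth."""
    x = rng.uniform(-extent_deg, extent_deg, n_dots)
    y = rng.uniform(-extent_deg, extent_deg, n_dots)
    return (x + disparity / 2.0, y), (x - disparity / 2.0, y)

disparity = -0.9            # deg; start at one depth limit of the +/-54 arcmin volume
for frame in range(90):     # 90 frames x 0.02 deg/frame traverses the 1.8 deg volume
    left, right = cd_frame(disparity)
    disparity += DISPARITY_STEP_DEG
```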
 
Figure 9
 
Movie 4. Same format as the previous stimulus movies (Figures 3 and 4), but showing the CD-isolating stimulus.
The IOVD stimulus was created by pairing each bright dot in one eye with a dark dot in the other, and vice versa (i.e., anti-correlated dot contrast between the two eyes; Figure 10). Anti-correlation has been shown to disrupt disparity-based position-in-depth information while maintaining monocular velocity information necessary for 3D motion percepts (Rokers, Cormack, & Huk, 2008). This creates a curious percept of a stimulus that moves through depth, yet does not appear to possess a coherent position in depth. Rokers et al. found that the ability to discriminate position in depth from disparities decreased monotonically with decreasing contrast correlation; we used full anti-correlation in order to maximally disrupt the perception of depth from disparities. Nevertheless (and unlike the CD cue), interocular velocity differences cannot be geometrically isolated from disparities. Therefore, anti-correlated displays cannot be said to truly “isolate” the IOVD cue but to strongly bias observers toward using the IOVD cue. For the remainder of the paper, we will simply refer to the CD and IOVD stimuli with the “isolated” and “biased” qualifiers understood.
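The contrast manipulation itself is simple; a minimal sketch under the same caveats (illustrative Python, names ours):

```python
import numpy as np

rng = np.random.default_rng(3)

n_dots = 80
# Each dot gets a luminance polarity in the left eye (+1 = light, -1 = dark)...
polarity_left = rng.choice([-1, 1], size=n_dots)
# ...and its binocular partner gets the opposite polarity in the right eye.
# This disrupts the disparity-based match while leaving each eye's motion
# signal (and hence the interocular velocity difference) intact.
polarity_right = -polarity_left
```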
 
Figure 10
 
Movie 5. Same as the previous figure but showing the anti-correlated stimulus we used to bias the observers toward using the IOVD cue. Note the relative contrast of dots presented to the left and right eyes in the fusible stereopair (right side).
Because our disparity-isolating stimulus requires a planar configuration, all cue-isolating adaptation stimuli were subsequently arranged in a frontoparallel planar annulus around fixation (2.5–8° eccentricity). This planar stimulus geometry was designed to provide the smoothest, most continuous 3D motion by: (1) splitting the stimulus plane into a 4-quadrant planar annulus around fixation (centered on 45, 135, 225, and 315°) with alternating quadrants distributed in depth; (2) restricting the stimulus volume to ±54 arcmin disparity; and (3) introducing a temporal contrast ramp to the near and far limits of the stimulus volume. Quadrant pairs were distributed evenly in depth (27 arcmin, or 50% of stimulus volume, between pairs) so that at any given moment 50% of the stimulus area contained a full-contrast stimulus moving in the direction of adaptation. Relative disparity of the pinwheel pairs (e.g., quadrants 1 and 3 and quadrants 2 and 4) was randomized on each adaptation stimulus presentation to avoid monocular cues to 3D motion direction. Each quadrant pair progressed through the entire volume at a monocular velocity (or disparity equivalent) of 0.6°/s, wrapping to the opposite side after reaching the depth limit. The stimulus volume spanned ±54 arcmin (0.9°) of disparity in order to maximize the duty cycle within Panum's fusion area and avoid diplopia. A temporal contrast ramp (1/4 wave sinusoid) was applied to the outer 18 arcmin of near and far depth limits to soften the apparent motion due to the stimulus wrap. This generated percepts of alternating pinwheels of stimulus dots moving continuously toward or away from the observer. To ensure that this stimulus geometry did not have an unintended effect on motion adaptation, we also measured a matched planar-wedge version of the original 3D motion stimulus that contained both binocular motion cues (Figure 11). To avoid confusion with the first experiment, we refer to this as the 3D-planar condition. However, the only distinction between this condition and the 3D MAE measured in the first experiment is the planar geometry; both conditions contain the same IOVD and CD cues present in naturally occurring stimuli.
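The contrast ramp can be written compactly; the sketch below assumes contrast scales as a quarter-cycle sinusoid of distance from the nearer depth limit (an interpretation of the description above, with our own parameter names):

```python
import numpy as np

VOLUME_LIMIT_ARCMIN = 54.0   # +/- depth limit of the planar stimulus
RAMP_EXTENT_ARCMIN = 18.0    # contrast ramps over the outer 18 arcmin at each limit

def depth_contrast(disparity_arcmin):
    """Contrast multiplier in [0, 1]: full contrast in the central region, a
    quarter-wave sinusoidal ramp to zero at the near and far limits, softening
    the apparent motion when dots wrap between depth limits."""
    distance_from_limit = VOLUME_LIMIT_ARCMIN - np.abs(disparity_arcmin)
    fraction = np.clip(distance_from_limit / RAMP_EXTENT_ARCMIN, 0.0, 1.0)
    return np.sin(0.5 * np.pi * fraction)

print(depth_contrast(np.array([0.0, 36.0, 45.0, 54.0])))   # -> [1.0, 1.0, ~0.71, 0.0]
```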
 
Figure 11
 
Movie 6. Same as the previous figure but showing the planar wedge stimulus configuration used in Experiment 2.
For all conditions, test stimuli were identical to Experiment 1, consisting of a 3D cloud of dots randomly located throughout the stimulus volume that contained both binocular motion cues. This allowed direct comparison of MAE magnitude across adaptation conditions and experiments. 
Results: 3D MAEs from isolated binocular cues
Adaptation to the planar stimulus containing both binocular cues (3D-planar; Figure 12A) generated 3D MAE magnitudes of approximately 44% motion coherence (β = 0.440, CI95 = [0.405, 0.474]), which is nearly identical to the original 3D MAE, confirming that the change in stimulus geometry had no effect on adaptation. Isolated velocity cue adaptation (IOVD; Figure 12B) yielded large 3D motion aftereffects equivalent to approximately 41% motion coherence (β = 0.413, CI95 = [0.377, 0.448]), very similar to the magnitude of the 3D MAE. On the other hand, disparity cue adaptation (CD; Figure 12C) produced a markedly weaker MAE, equivalent to only 19% motion coherence (β = 0.191, CI95 = [0.164, 0.219]). Although the magnitude of the CD MAE is similar to the previous 2D MAE, the explicit lack of coherent monocular motion in our CD stimuli makes it unlikely that they share an underlying locus of adaptation.
Figure 12
 
(A) Essentially, a replication of the main data from Experiment 1 (i.e., Figure 5, right panel). The close agreement between the experiments indicates that the specific geometry of the stimulus was of little importance. (B) The data from the IOVD-biased adaptation stimulus; crucially, the resulting MAE is nearly identical to the standard 3D MAE. (C) The data from the CD-isolating adaptation stimulus. Despite generating a clear depth percept during adaptation, this stimulus produced a surprisingly weak MAE.
The magnitudes of the isolated velocity-cue and disparity-cue MAEs suggest two main conclusions. First, the similarity of the velocity-based MAE to the full-cue 3D MAE implies that the 3D MAE could be fully accounted for by adaptation of a velocity-based 3D motion mechanism, without the need to consider a disparity-based contribution. Second, the mere existence of a significant (albeit smaller than the others) isolated disparity-cue MAE demonstrates that a 3D MAE can be generated using stimuli that do not contain coherent monocular motions but that do define cyclopean stereomotion. 
The overall pattern of MAE magnitudes is not only present in the combined data but also apparent on the individual observer level. In the upper row of Figure 13, bar graphs of MAE magnitudes for each individual observer (first three columns) reveal moderate interobserver variability in MAE magnitudes, though the relative pattern of results across conditions seen in the combined data (fourth column) is still clearly evident on the individual observer level. The relative pattern of results is further evident when normalizing MAE magnitudes to each observer's 3D MAE (Figure 13, bottom row). The magnitude of MAEs generated by stimuli that contain interocular velocity cues [both 3D MAE conditions (red and orange) and IOVD condition (gray)] is distinct from all other conditions that do not contain IOVD cues. 
Figure 13
 
Bar graphs depicting MAE magnitudes from individual observers (first 3 columns) for all 8 motion conditions as well as the combined data shown in the previous figures (last column). The first row shows MAE magnitudes with bootstrapped 95% confidence intervals. The second row shows the same data normalized to each observer's 3D MAE magnitude.
General discussion
In summary, we observed a 3D motion aftereffect following adaptation to 3D motion toward or away from the observer that was substantially larger than the corresponding 2D MAE. We isolated the effects of monocular adaptation and interocular transfer and found that these could not account for the magnitude of the 3D MAE. This implies the existence of a 3D motion stage that is itself direction-selective. 
We then separated and isolated the two primary binocular cues to 3D motion (the disparity-based cue, changing disparity over time, and the velocity-based cue, interocular velocity differences) and compared the magnitudes of these cue-isolated 3D MAEs to a standard 3D MAE elicited by stimuli that contained both cues in concert. We found that the velocity-based 3D MAE was as large as the standard 3D MAE, confirming the central role of the velocity-based cue in 3D motion processing. We also observed a smaller disparity-based 3D MAE, demonstrating that direction-selective mechanisms can be engaged (and adapted) by stimuli that do not contain coherent monocular motions but that do specify cyclopean stereomotion. 
Our two sets of experiments demonstrated that (1) the 3D MAE cannot be accounted for solely by positing “inherited” adaptation effects from earlier, monocular (2D) direction-selective mechanisms and (2) both the velocity-based and the disparity-based binocular cues can generate 3D MAEs, although the velocity-based MAE is approximately twice as large as the disparity-based MAE and is more similar—almost identical actually—to the standard 3D MAE that contains both cues in concert. 
Quantitative comparison of all 3D and 2D MAEs
We will now analyze the results of both of our experiments together and consider both the MAE magnitudes (i.e., shift of the psychometric functions) as well as the underlying sensitivity (i.e., slope of the psychometric functions) in each condition. This analysis further clarifies the relations and distinctions between the constituent mechanisms of 3D motion (monocular 2D direction selectivity, interocular transfer, disparity-based “cyclopean stereomotion,” and interocular velocity differences). 
Figure 14 summarizes the results of all our experimental conditions (averaged over participants). For each condition, this plot shows the MAE magnitude (i.e., motion coherence necessary to perceptually null the MAE) on the x-axis and the sensitivity to change in coherence (i.e., inverse slope of the psychometric function) on the y-axis. These are the two parameters from the logistic fits used to characterize observers' psychometric functions in all of the experiments. The ellipses indicate bootstrapped error ranges (Mahalanobis distances; Mahalanobis, 1936) on these two parameters (68% and 95%, corresponding to ±1 and ±2 SEMs, respectively). The plot presents these distributions of bootstrapped parameter fits for each adaptation condition (collapsed across observer and direction) and reveals more about the relations between these conditions than was clear in the separate analyses of MAE magnitudes already discussed.
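One common recipe for such ellipses is sketched below; the authors do not spell out their exact construction, so this is an assumption: a Gaussian approximation to the bootstrapped (β, α⁻¹) samples, with the 68%/95% levels taken from the 2D chi-square quantile.

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(samples, level=0.95, n_points=200):
    """Ellipse of constant Mahalanobis distance enclosing `level` probability of a
    2D Gaussian fit to bootstrapped (MAE magnitude, sensitivity) samples.
    `samples` is an (n_boot, 2) array; returns an (n_points, 2) array of coordinates."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    radius = np.sqrt(chi2.ppf(level, df=2))        # Mahalanobis radius for this level
    eigvals, eigvecs = np.linalg.eigh(cov)
    angles = np.linspace(0.0, 2.0 * np.pi, n_points)
    circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return mean + radius * circle @ np.diag(np.sqrt(eigvals)) @ eigvecs.T
```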
Figure 14
 
Parametric plot summarizing the psychometric functions across all conditions in both experiments. Specifically, the steepness of the psychometric function (threshold sensitivity = α⁻¹) is plotted on the y-axis as a function of the MAE magnitude (β) on the x-axis. The solid and dashed contours show bootstrapped 68% and 95% confidence intervals across all subjects. The most striking observation is that adaptation containing IOVDs produced similarly large MAEs (3D, 3D planar, and IOVD), while adaptation lacking IOVDs (CD and all frontoparallel conditions) yielded comparatively small MAEs.
The first thing that leaps to the eye from this parameter space plot is that the 3D MAEs [both Experiment 1 (“3D”) and Experiment 2 (“3D-planar”)] cluster near one another and overlap substantially with the isolated velocity-based (“IOVD”) MAE (red, orange, and gray conditions, respectively). Although our initial discussion of Experiment 1 already emphasized that the magnitudes of the 3D and IOVD MAEs were larger than those for other conditions (x-axis in this plot), it is also apparent that there is a substantial sensitivity difference (y-axis), i.e., sensitivity to 3D, 3D-planar, CD, and IOVD stimuli is lower than for the various 2D MAE conditions [e.g., “2D” (binocular), “2D-mono” (monocular), “IOT” (interocular transfer), and “3D-mono” (dichoptic 3D adaptation, monocular 2D test); black, green, purple, and blue, respectively]. This is indeed visible in the slopes of the psychometric functions previously shown (e.g., compare slopes in Figures 5 and 12 to Figures 6–8).
Once again, both the magnitude and sensitivity of the 3D MAE (red) are clearly distinct from the 2D MAE (black). While parameters of the dichoptic 2D MAE can be fully accounted for by monocular adaptation (2D-mono, green), the monocular MAE resulting from 3D motion adaptation (3D-mono, blue) cannot account for either the magnitude or the sensitivity of the full 3D MAE. In fact, it is clear that both sensitivity and magnitude of the 3D MAE [in either “3D” (red) or “3D-planar” (orange)] can be fully explained by isolated velocity cue adaptation (IOVD, gray). Adaptation to the disparity-isolating stimulus (CD, cyan) produces similar direction discrimination sensitivity to other 3D MAEs, but with a much smaller MAE magnitude, on the order of the 2D MAE. Such a sensitivity difference is broadly consistent with the phenomenon of stereomotion suppression (Tyler, 1971). The phenomenon of “two eyes being less sensitive than one” has been explained as the result of interocular motion averaging before the 3D motion computation (Harris & Rushton, 2003), which results in eye-specific motion signals that are somewhat reduced in amplitude relative to noise levels. It has also been hypothesized to result from a bias or prior in the visual system for low retinal velocities. Welchman et al. (2008) have shown that such a prior affects z-axis motion more, resulting in a relative decrease in sensitivity for 3D motion.
The location of the CD MAE in this parameter space plot is also informative. Again, as previously discussed, the CD MAE is smaller than the 3D and IOVD MAEs. However, the existence of any MAE from CD-isolating adaptation is noteworthy, because it implies the existence of a 3D direction-selective mechanism that can be driven (and adapted) by stimuli that themselves contain no monocular motions. In the 2D motion literature, the ability of stimuli that do not contain retinal motion signals to create motion aftereffects has also been shown to occur through dichoptic combination (Carney & Shadlen, 1993) and even the mere implication of motion (Winawer, Huk, & Boroditsky, 2008). Such adaptation effects are more often taken to reflect indirect stimulation of existing motion processing mechanisms rather than evidence for independent mechanisms specialized for each stimulus case. More directly, Patterson et al. (1994; see also Bowd, Donnelly, Shorter, & Patterson, 2000) have shown that frontoparallel MAEs induced with disparity-isolating (cyclopean) and luminance stimuli exhibit cross-domain adaptation effects. This raises the question of whether the disparity-based 3D MAE might more accurately be thought of as indirectly stimulating an underlying velocity-based 3D motion mechanism rather than uniquely contributing to the representation of 3D directions of motion. 
A subtler point that can also be gleaned from this plot is that sensitivity (slope) to the test stimulus (which contained both binocular cues to 3D motion) was similar for all 3D MAE conditions (3D, 3D-planar, IOVD, and CD), regardless of their binocular cue content. The fact that the CD MAE exhibits the same sensitivity yet lacks the distinctive magnitude of other 3D MAEs further rules out the notion that the 3D MAE is simply larger due to a decreased 3D motion sensitivity relative to 2D motion. The tension between these two conclusions (i.e., the velocity cue can fully account for the 3D MAE, but the disparity cue in isolation can also generate a 3D MAE) is intriguing and should motivate further work about the nature of the mechanisms that combine the two binocular cues to 3D motion. 
The 2D conditions also follow an interpretable pattern in this parameter space. Note that they all fall at about the same sensitivity level, regardless of whether they involved binocular or monocular 2D test stimuli. Thus, the ordering of MAE magnitudes supports a simple set of interpretations. When viewing 2D motion in one or both eyes (and then testing in the same or both eyes), MAEs are similar regardless of whether one or both eyes are tested (“2D mono” and “2D”). This is consistent with a mechanism that is effectively cyclopean, i.e., past the point of binocular combination, so that it can be effectively and similarly driven by either eye or both. However, when one eye is adapted to 2D motion and test stimuli are presented to the other eye (standard interocular transfer, “IOT”), the MAE magnitude was slightly smaller, revealing some degree of monocularity of the 2D direction-selective mechanisms. 
Such IOT results have been previously reported (e.g., Mitchell, Reardon, & Muir, 1975; Raymond, 1993; Tao, Lankheet, Van De Grind, & van Wezel, 2003), but the more interesting result is what happens when the observer adapted to dichoptic 3D motion and then viewed a monocular 2D test stimulus. MAE magnitude was smaller than for the other 2D conditions but can be explained by considering the 2D-mono and IOT conditions. For each eye, there was same-eye adaptation in one direction, and (because the adapter was standard dichoptic 3D motion) there was also other-eye adaptation in the opposite direction. Thus, one might expect the resulting MAE to reflect the same-eye adaptation minus the IOT and, thus, approximately equal to the 2D-mono MAE magnitude minus the IOT MAE magnitude, which seems to be roughly the case. 
These results help unpack the hierarchy of motion processing stages before and after the point of binocular combination or comparison (i.e., monocularly biased and purely cyclopean stages). In addition to demonstrating that a fundamentally cyclopean direction-selective processing stage exists (i.e., which cannot be accounted for by the inputs of monocular stages and which can be adapted using purely cyclopean stereomotion), the interocular transfer results also invite speculation about the functional importance of interactions between the two monocular motion circuits. Although interocular transfer is typically viewed as an experimental convenience for assessing the degree of binocularity of a processing stage, our results also suggest that crosstalk between the left and right eyes' monocular motion stages may have important consequences for the perception of dynamic 3D scenes. Under many conditions, when one views 3D motion, opposite directions of horizontal motion are projected onto corresponding locations in the two retinae. Because there is partial (≳50%) interocular transfer, this effectively means that monocularly biased 2D mechanisms are not adapted as strongly by 3D motion as they would be by monocular or binocular viewing of frontoparallel (2D) motion. It is tempting to speculate that interocular transfer therefore serves a computational purpose akin to conventional motion opponency, suppressing the 2D mechanisms while still allowing 3D mechanisms to be strongly engaged. Future work is, of course, required to test this proposed antagonism between the two stages, and we expect 3D MAEs to be a useful psychophysical tool in this pursuit.
Relation to past work
Our results also speak to the relation between the two primary binocular cues, IOVD and CD. As we observed in a previous psychophysical study measuring 3D motion sensitivity across a wide range of speeds and eccentricities (Czuba et al., 2010), the IOVD cue is sufficient to explain the pattern of direction discrimination sensitivity when both cues are present. However, our MAE studies also demonstrate that the CD cue, although far weaker in our experimental conditions, is also a directional signal of some sort. Although a previous fMRI study by our laboratory implicated human MT+ as responsive to both the CD and IOVD cues (Rokers et al., 2009), cross-cue adaptation experiments should more directly assess whether the two binocular cues are integrated by a common 3D motion mechanism. 
Other lines of work have investigated related, but possibly distinct, motion mechanisms. Patterson et al. have elicited frontoparallel MAEs using stimuli that contain the motion of disparity-defined patterns (Patterson, Bowd, Phinney, Fox, & Lehmkuhle, 1996; Patterson et al., 1994). It is not clear whether such stereoscopic/cyclopean frontoparallel motions are processed by the same changing disparity or 3D motion mechanisms we have studied. We can only conclude that it now seems both important and experimentally feasible to begin studying the relations between 3D motion (toward/away), 2D motion (frontoparallel), and binocular combination into cyclopean signals, despite the current convention of studying these mechanisms in isolation. Additionally, work on optic flow has demonstrated MAEs and vestibular modulations of these effects (Bunday & Bronstein, 2008; Harris, Morgan, & Still, 1981). It is also not yet clear how optic flow (classically defined as the full-field 2D pattern of retinal velocities specifying observer motion) relates to 3D motion (in our experiments, typically implemented with stereoscopic information in more localized portions of the visual field). Future investigations of the spatial scale of MAEs, and the relative contributions of monocular and binocular information to flow and 3D MAEs, may be helpful in understanding whether distinct 3D (object) motion and optic flow mechanisms exist. 
Finally, some prior work has investigated 3D motion adaptation under conditions more similar to our study. In fact, some of the earliest explorations of the presence of multiple binocular 3D motion cues (Beverley & Regan, 1973) were based on the effects that 3D adaptation had on motion detection thresholds. More recently, monocular and binocular motion adaptation paradigms have been used to assess the contribution of interocular velocity signals to 3D motion processing (Brooks, 2002a, 2002b; Fernandez & Farell, 2005, 2006; Shioiri, Kakehi, Tashiro, & Yaguchi, 2009). Preliminary results from Sakano et al. (conference abstracts: Sakano, Allison, & Howard, 2005; Sakano, Allison, Howard, & Sadr, 2006) also reported 3D motion aftereffects from binocularly paired and unpaired (unmatched between the two eyes) random dot stimuli. Consistent with our diminished disparity-based MAE, they also reported a compelling lack of a 3D MAE following adaptation to disparity-isolating 3D motion stimuli. Our results complement this body of work by dissecting monocular and binocular contributions to the representation of 3D motion, and we expect direction-selective adaptation and MAE experiments to enjoy a fruitful extension from 2D to 3D applications. 
Conclusions
The 3D MAE provides compelling evidence for a distinct neural representation of 3D directions of motion. Our assessment of binocular cue contributions shows that adaptation of the velocity-based (IOVD) mechanism alone generates a comparably large 3D MAE, capable of fully accounting for aftereffects generated under normal conditions, in which both IOVD and CD cues are present. These results paint an interesting picture of 3D motion processing in which the visual system explicitly represents 3D directions of motion, distinct from 2D monocular motion components, yet does so primarily based on a mechanism that compares monocular velocity signals. These MAE experiments also provide a basic experimental framework for further study of 3D motion mechanisms. Just as the MAE has been used to investigate 2D motion processing, future work can investigate factors like the spatial and temporal tuning of adaptation to further characterize this mechanism. Ideally, such studies can dovetail with similar adaptation protocols in human fMRI experiments (Rokers et al., 2009; conference abstract: Czuba, Huk, & Cormack, 2011) and ultimately be evaluated for correspondence with single-neuron recordings. 
Acknowledgments
ACH is supported by R01-EY017366 from the National Eye Institute, and CAREER Award BCS-078413 from the National Science Foundation. LKC is supported by IIS-0917175 from the National Science Foundation. 
Commercial relationships: none. 
Corresponding author: Thaddeus B. Czuba. 
Email: czuba@utexas.edu. 
Address: Center for Perceptual Systems, Department of Psychology, and Section of Neurobiology, The University of Texas at Austin, 1 University Station A8000, Austin, TX 78712-0187, USA. 
Footnotes
1. Sampling resolution of monocular MAEs (mono-adapt, mono-test) was increased to 5, 12.5, 20, 25, and 50% coherence.
2. This was desirable because of explosive slope fits in a single instance: observer TBC, rightward 2D motion adaptation; 1 in 54 total psychometric fits on the observer-motion-direction level of analysis.
3. In order to provide a more direct comparison between conditions plotted in Figure 8 (3D adapt-monocular test and interocular transfer) and relevant 3D motion conditions, data were combined across adaptation direction/eye corresponding to 3D motions toward and away from the observer. Therefore, “toward” test presentations are composed of monocular test stimuli moving nasally (i.e., leftward in the right eye and rightward in the left eye), and vice versa.
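To make the eye/direction pooling described in footnote 3 concrete, here is a minimal illustrative sketch. This is not the analysis code used in the study; the function name and labels are our own.

```python
# Sketch of the eye/direction-to-3D-direction mapping described in footnote 3
# (labels are illustrative, not from the study's analysis code).
def monocular_to_3d_label(eye: str, direction: str) -> str:
    """Nasalward monocular motion pools as 'toward'; temporalward as 'away'."""
    nasal = (eye == "right" and direction == "left") or \
            (eye == "left" and direction == "right")
    return "toward" if nasal else "away"

# Leftward motion in the right eye pools with "toward" presentations;
# leftward motion in the left eye pools with "away" presentations.
assert monocular_to_3d_label("right", "left") == "toward"
assert monocular_to_3d_label("left", "left") == "away"
```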
References
Akase E. Inokawa H. Toyama K. (1998). Neuronal responsiveness to three-dimensional motion in cat posteromedial lateral suprasylvian cortex. Experimental Brain Research, 122, 214–226. [CrossRef] [PubMed]
Anstis S. Duncan K. (1983). Separate motion aftereffects from each eye and from both eyes. Vision Research, 23, 161–169. [CrossRef] [PubMed]
Anstis S. Verstraten F. A. Mather G. (1998). The motion aftereffect. Trends in Cognitive Sciences, 2, 111–117. [CrossRef] [PubMed]
Barlow H. B. Hill R. M. (1963). Evidence for a physiological explanation of the waterfall phenomenon and figural after-effects. Nature, 200, 1345–1347. [CrossRef] [PubMed]
Beverley K. I. Regan D. (1973). Evidence for the existence of neural mechanisms selectively sensitive to the direction of movement in space. The Journal of Physiology, 235, 17–29. [CrossRef] [PubMed]
Blake R. Hiris E. (1993). Another means for measuring the motion aftereffect. Vision Research, 33, 1589–1592. [CrossRef] [PubMed]
Born R. T. Bradley D. C. (2005). Structure and function of visual area MT. Annual Review of Neuroscience, 28, 157–189. [CrossRef] [PubMed]
Bowd C. Donnelly M. Shorter S. Patterson R. (2000). Cross-domain adaptation reveals that a common mechanism computes stereoscopic (cyclopean) and luminance plaid motion. Vision Research, 40, 331–339. [CrossRef] [PubMed]
Braddick O. (1974). A short-range process in apparent motion. Vision Research, 14, 519–527. [CrossRef] [PubMed]
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Brooks K. Mather G. (2000). Perceived speed of motion in depth is reduced in the periphery. Vision Research, 40, 3507–3516. [CrossRef] [PubMed]
Brooks K. R. (2002a). Interocular velocity difference contributes to stereomotion speed perception. Journal of Vision, 2, (3):2, 218–231, http://www.journalofvision.org/content/2/3/2, doi:10.1167/2.3.2. [PubMed] [Article] [CrossRef]
Brooks K. R. (2002b). Monocular motion adaptation affects the perceived trajectory of stereomotion. Journal of Experimental Psychology: Human Perception and Performance, 28, 1470–1482. [CrossRef]
Brooks K. R. Stone L. S. (2004). Stereomotion speed perception: Contributions from both changing disparity and interocular velocity difference over a range of relative disparities. Journal of Vision, 4, (12):6, 1061–1079, http://www.journalofvision.org/content/4/12/6, doi:10.1167/4.12.6. [PubMed] [Article] [CrossRef]
Bunday K. L. Bronstein A. M. (2008). Visuo-vestibular influences on the moving platform locomotor aftereffect. Journal of Neurophysiology, 99, 1354–1365. [CrossRef] [PubMed]
Burr D. Thompson P. (2011). Motion psychophysics: 1985–2010. Vision Research, 51, 1431–1456. [CrossRef]
Carney T. Shadlen M. N. (1993). Dichoptic activation of the early motion system. Vision Research, 33, 1977–1995. [CrossRef] [PubMed]
Cumming B. G. Parker A. J. (1994). Binocular mechanisms for detecting motion-in-depth. Vision Research, 34, 483–495. [CrossRef] [PubMed]
Cynader M. Regan D. (1982). Neurons in cat visual cortex tuned to the direction of motion in depth: Effect of positional disparity. Vision Research, 22, 967–982. [CrossRef] [PubMed]
Czuba T. B. Huk A. C. Cormack L. K. (2011). Isolation of binocular 3D motion cues in human visual cortex [Abstract]. Journal of Vision.
Czuba T. B. Rokers B. Huk A. C. Cormack L. K. (2010). Speed and eccentricity tuning reveal a central role for the velocity-based cue to 3D visual motion. Journal of Neurophysiology, 104, 2886–2899. [CrossRef] [PubMed]
Fernandez J. M. Farell B. (2005). Seeing motion in depth using inter-ocular velocity differences. Vision Research, 45, 2786–2798. [CrossRef] [PubMed]
Fernandez J. M. Farell B. (2006). Motion in depth from interocular velocity differences revealed by differential motion aftereffect. Vision Research, 46, 1307–1317. [CrossRef] [PubMed]
Gray R. Regan D. (1996). Cyclopean motion perception produced by oscillations of size, disparity and location. Vision Research, 36, 655–665. [CrossRef] [PubMed]
Grunewald A. Mingolla E. (1998). Motion after-effect due to binocular sum of adaptation to linear motion. Vision Research, 38, 2963–2971. [CrossRef] [PubMed]
Harris J. M. Nefs H. T. Grafton C. E. (2008). Binocular vision and motion-in-depth. Spatial Vision, 21, 531–547. [CrossRef] [PubMed]
Harris J. M. Rushton S. K. (2003). Poor visibility of motion in depth is due to early motion averaging. Vision Research, 43, 385–392. [CrossRef] [PubMed]
Harris L. R. Morgan M. J. Still A. W. (1981). Moving and the motion after-effect. Nature, 293, 139–141. [CrossRef] [PubMed]
Julesz B. (1960). Binocular depth perception of computer-generated patterns. Bell System Technical Journal, 39, 1125–1163. [CrossRef]
Julesz B. (1971). Foundations of cyclopean perception (p. 406). University of Chicago Press.
Lankheet M. Palmen M. (1998). Stereoscopic segregation of transparent surfaces and the effect of motion contrast. Vision Research, 38, 659–668. [CrossRef] [PubMed]
Lehmkuhle S. W. Fox R. (1976). On measuring interocular transfer. Vision Research, 16, 428–430. [CrossRef] [PubMed]
Mahalanobis P. (1936). On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, 2, 49–55.
Mather G. (1980). The movement aftereffect and a distribution-shift model for coding the direction of visual movement. Perception, 9, 379–392. [CrossRef] [PubMed]
Maunsell J. Van Essen D. (1983). Functional properties of neurons in middle temporal visual area of the macaque monkey: II. Binocular interactions and sensitivity to binocular disparity. Journal of Neurophysiology, 49, 1148–1167. [PubMed]
Mitchell D. E. Reardon J. Muir D. W. (1975). Interocular transfer of the motion after-effect in normal and stereoblind observers. Experimental Brain Research, 22, 163–173. [CrossRef] [PubMed]
Mollon J. (1974). After-effects and the brain. New Scientist, 61, 479–482.
Newsome W. T. Paré E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211. [PubMed]
Patterson R. Bowd C. Phinney R. Fox R. Lehmkuhle S. (1996). Disparity tuning of the stereoscopic (cyclopean) motion aftereffect. Vision Research, 36, 975–983. [CrossRef] [PubMed]
Patterson R. Bowd C. Phinney R. Pohndorf R. Barton-Howard W. J. Angilletta M. (1994). Properties of the stereoscopic (cyclopean) motion aftereffect. Vision Research, 34, 1139–1147. [CrossRef] [PubMed]
Poggio G. F. Talbot W. H. (1981). Mechanisms of static and dynamic stereopsis in foveal cortex of the rhesus monkey. The Journal of Physiology, 315, 469–492. [CrossRef] [PubMed]
Portfors-Yeomans C. V. Regan D. (1996). Cyclopean discrimination thresholds for the direction and speed of motion in depth. Vision Research, 36, 3265–3279. [CrossRef] [PubMed]
Raymond J. E. (1993). Complete interocular transfer of motion adaptation effects on motion coherence thresholds. Vision Research, 33, 1865–1870. [CrossRef] [PubMed]
Regan D. Cynader M. (1982). Neurons in cat visual cortex tuned to the direction of motion in depth: Effect of stimulus speed. Investigative Ophthalmology & Visual Science, 22, 535–550.
Regan D. Gray R. (2009). Binocular processing of motion: Some unresolved questions. Spatial Vision, 22, 1–43. [CrossRef] [PubMed]
Rokers B. Cormack L. K. Huk A. C. (2008). Strong percepts of motion through depth without strong percepts of position in depth. Journal of Vision, 8, (4):6, 1–10, http://www.journalofvision.org/content/8/4/6, doi:10.1167/8.4.6. [PubMed] [Article] [CrossRef] [PubMed]
Rokers B. Cormack L. K. Huk A. C. (2009). Disparity- and velocity-based signals for three-dimensional motion perception in human MT. Nature Neuroscience, 12, 1050–1055. [CrossRef] [PubMed]
Sakano Y. Allison R. S. Howard I. P. (2005). Aftereffects of motion in depth based on binocular cues [Abstract]. Journal of Vision, 5, (8):732, 732a, http://www.journalofvision.org/content/5/8/732, doi:10.1167/5.8.732. [CrossRef]
Sakano Y. Allison R. S. Howard I. P. Sadr S. (2006). Aftereffect of motion-in-depth based on binocular cues: No effect of relative disparity between adaptation and test surfaces [Abstract]. Journal of Vision, 6, (6):626, 626a, http://www.journalofvision.org/content/6/6/626, doi:10.1167/6.6.626. [CrossRef]
Shioiri S. Kakehi D. Tashiro T. Yaguchi H. (2009). Integration of monocular motion signals and the analysis of interocular velocity differences for the perception of motion-in-depth. Journal of Vision, 9, (13):10, 1–17, http://www.journalofvision.org/content/9/13/10, doi:10.1167/9.13.10. [PubMed] [Article] [CrossRef] [PubMed]
Shioiri S. Saisho H. Yaguchi H. (2000). Motion in depth based on inter-ocular velocity differences. Vision Research, 40, 2565–2572. [CrossRef] [PubMed]
Tao R. Lankheet M. J. M. Van De Grind W. A. van Wezel R. J. A. (2003). Velocity dependence of the interocular transfer of dynamic motion aftereffects. Perception, 32, 855–866. [CrossRef] [PubMed]
Toyama K. Komatsu Y. Kasai H. Fujii K. Umetani K. (1985). Responsiveness of Clare-Bishop neurons to visual cues associated with motion of a visual stimulus in three‐dimensional space. Vision Research, 25, 407–414. [CrossRef] [PubMed]
Tyler C. W. (1971). Stereoscopic depth movement: Two eyes less sensitive than one. Science, 174, 958–961. [CrossRef] [PubMed]
van Wezel R. J. A. Britten K. H. (2002). Motion adaptation in area MT. Journal of Neurophysiology, 88, 3469–3476. [CrossRef] [PubMed]
Watamaniuk S. McKee S. Grzywacz N. (1995). Detecting a trajectory embedded in random-direction motion noise. Vision Research, 35, 65–77. [CrossRef] [PubMed]
Welchman A. Lam J. Bülthoff H. (2008). Bayesian motion estimation accounts for a surprising bias in 3D vision. Proceedings of the National Academy of Sciences of the United States of America, 105, 12087–12092. [CrossRef] [PubMed]
Winawer J. Huk A. C. Boroditsky L. (2008). A motion aftereffect from still photographs depicting motion. Psychological Science, 19, 276–283. [CrossRef] [PubMed]
Zeki S. M. (1974). Cells responding to changing image size and disparity in the cortex of the rhesus monkey. The Journal of Physiology, 242, 827–841. [CrossRef] [PubMed]
Figure 1. Screen capture of the basic stimulus. In the actual experiments, the right and left halves were split between two monitors and viewed through a mirror stereoscope. Nonetheless, free-fusing will give a reasonable impression of the experimental percepts. We found that the 1/f texture in the center and surround greatly facilitated fusion and held stable vergence (which subjects could monitor with the horizontal and vertical nonius lines at fixation).
Figure 5. (A) Schematic of the basic experimental paradigm: subjects adapted to equal and opposite motion in the two eyes (producing a 3D motion percept of dots moving either toward or away from the observer) and then judged the perceived direction of motion in depth of a test stimulus whose coherence varied from trial to trial. (B) Psychometric functions (parametrically combined across observers; see Methods section) mapping the coherence of the test stimulus (x-axis) onto the percentage of trials judged as “toward” the observer (y-axis). The green curve corresponds to a “toward” adapter, the red curve to an “away” adapter, and the black curve is a reference collected without any adaptation. Gray error bars show bootstrapped 95% confidence intervals. The abscissa corresponding to the 0.5 ordinate on each curve represents the point of subjective equality for each condition; bootstrapped 95% confidence intervals on these points are shown by the black horizontal bars. Clearly, a substantial 3D MAE is present.
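For readers who wish to reproduce this style of analysis, a minimal sketch follows. It assumes a cumulative-Gaussian parameterization of the psychometric function and uses fabricated placeholder data; the study's actual fitting and bootstrapping procedures are described in its Methods section.

```python
# Minimal sketch of a psychometric fit and PSE estimate, assuming a
# cumulative-Gaussian form P("toward") = Phi((c - pse) / sigma).
# The coherence levels and response proportions below are fabricated
# placeholders, not data from the study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def pf(coherence, pse, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf((coherence - pse) / sigma)

# Signed test coherence (negative = "away", positive = "toward") and the
# proportion of "toward" responses at each level (hypothetical values).
coh = np.array([-50, -25, -10, 0, 10, 25, 50], dtype=float)
p_toward = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

(pse, sigma), _ = curve_fit(pf, coh, p_toward, p0=[0.0, 15.0])
print(f"PSE ≈ {pse:.1f}% coherence; slope parameter ≈ {sigma:.1f}")
```

In this framing, the MAE magnitude corresponds to the shift of the fitted PSE away from zero coherence following adaptation.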
Figure 6. (A) Schematic of and (B) data from the frontoparallel motion condition. The aftereffect is much smaller than when the adaptation stimulus moved through depth, and its magnitude is consistent with what has been reported previously for similar experiments (see text for references).
Figure 7. (A) Schematic of and (B) data from the monocular motion condition (monocular adaptation, monocular test presented to the same eye). The magnitude of the MAE is not statistically different from that in the frontoparallel condition shown in Figure 6.
Figure 8. Data and psychometric functions from (A) the 3D adapt, monocular test condition (3D-mono) and (B) the interocular transfer condition (IOT). (A) Under 3D adaptation conditions, one would expect the MAE resulting from monocular adaptation in the tested eye to be partially canceled by the interocular transfer of the (opposing) adaptation in the untested eye. The magnitude of the 3D-mono MAE is smaller than either the 2D or 3D MAEs, confirming this expectation. (B) Data resulting from monocular adaptation in one eye and testing in the other eye (i.e., a direct measurement of interocular transfer). Note that the reversal in the direction of the shift for the “toward” and “away” curves is as expected.
Figure 12. (A) Essentially a replication of the main data from Experiment 1 (i.e., Figure 5, right panel). The close agreement between the experiments indicates that the specific geometry of the stimulus was of little importance. (B) Data from the IOVD-biased adaptation stimulus; crucially, the resulting MAE is nearly identical to the standard 3D MAE. (C) Data from the CD-isolating adaptation stimulus. Despite generating a clear depth percept during adaptation, this stimulus produced a surprisingly weak MAE.
Figure 13. Bar graphs depicting MAE magnitudes from individual observers (first 3 columns) for all 8 motion conditions as well as the combined data shown in the previous figures (last column). The first row shows MAE magnitudes with bootstrapped 95% confidence intervals. The second row shows the same data normalized to each observer's 3D MAE magnitude.
Figure 14. Parametric plot summarizing the psychometric functions across all conditions in both experiments. Specifically, the steepness of the psychometric function (threshold sensitivity = 1/α) is plotted on the y-axis as a function of the MAE magnitude (β) on the x-axis. The solid and dashed contours show bootstrapped 68% and 95% confidence intervals across all subjects. The most striking observation is that adaptation containing IOVDs produced similarly large MAEs (3D, 3D planar, and IOVD), whereas adaptation lacking IOVDs (CD and all frontoparallel conditions) yielded comparatively small MAEs.
Table 1. Adaptation and test motion condition matrix.

| MAE condition | 〈Adapt, Test〉 | Adapt directions | Response | Number of coherences | Total trials | Data in |
| --- | --- | --- | --- | --- | --- | --- |
| 3D | 〈3D, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 5 |
| 2D | 〈2D, 2D〉 | Left∣Right | Left∣Right | 6 | 432 | Figure 6 |
| Monocular | 〈mono, mono_same〉 | Left∣Right | Left∣Right | 10 | 1440* | Figure 7 |
| 3D-mono | 〈3D, mono〉 | Toward∣Away | Left∣Right | 6 | 864* | Figure 8 |
| Interocular transfer | 〈mono, mono_opposite〉 | Left∣Right | Left∣Right | 6 | 864* | Figure 8 |
| IOVD | 〈IOVD, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| CD | 〈CD, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| 3D-planar | 〈3D, 3D〉 | Toward∣Away | Toward∣Away | 10 | 720 | Figure 12 |
| Unadapted 3D | 〈–, 3D〉 | – | Toward∣Away | 6 | 432 | Figure 5 |
| Unadapted 2D | 〈–, 2D〉 | – | Left∣Right | 6 | 432 | Figure 6 |
| Unadapted monocular | 〈–, mono〉 | – | Left∣Right | 6 | 432 | Figure 7 |

*Combined across ocular pairs.
