Open Access
Article  |   August 2016
Early, local motion signals generate directional preferences in depth ordering of transparent motion
Alexander C. Schütz, Pascal Mamassian
Journal of Vision August 2016, Vol.16, 24. doi:10.1167/16.10.24
Alexander C. Schütz, Pascal Mamassian; Early, local motion signals generate directional preferences in depth ordering of transparent motion. Journal of Vision 2016;16(10):24. doi: 10.1167/16.10.24.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Superposition of two dot clouds moving in different directions results in the perception of two transparent layers. Despite the ambiguous depth order of the layers, there are consistent preferences to perceive the layer moving rightward or downward in front of the other layer. Here we investigated the origin of these depth order biases. For this purpose, we measured the interaction with stereoscopic disparity and the influence of global and local motion properties. Motion direction and stereoscopic disparity were equally effective in determining depth order at a disparity of one arcmin. Global motion properties, such as the aperture location in the visual field or the aperture's motion direction, did not affect directional biases. Local motion properties, however, were effective. When the moving elements were oriented lines rather than dots, the directional biases were shifted towards the direction orthogonal to the lines rather than the actual motion direction of the lines. Therefore, depth order was determined before the aperture problem was fully resolved. Varying the duration of the stimuli, we found that the time constant of the aperture problem was much lower for depth order than for perceived motion direction. Altogether, our results indicate that depth order is determined in one shot on the basis of an early motion signal, whereas perceived motion direction is continuously updated. Thus, depth ordering in transparent motion appears to be a surprisingly fast process that relies on early, local motion signals and precedes high-level motion analysis.

Introduction
The spatial overlay of two motion directions leads to an illusory impression of depth (Wallach & O'Connell, 1953). Depending on the distribution of dots and their exact motion profile, either two transparent layers (transparent motion) or a rotating three-dimensional object (structure-from-motion) can be seen. In such a situation, the depth order of the layers and/or the rotation direction of the object are ambiguous and the perceptual system has to commit to one interpretation. Intriguingly, these decisions are not completely random because the layer moving rightward or downward is seen more often in front (Mamassian & Wallace, 2010), and a rotating cylinder is more often seen with its front surface moving downward (Schütz, 2014). These directional preferences are observable as biases in the initial percept with short presentation durations, or as biases in the reversal probability during prolonged viewing. These directional preferences are not just random fluctuations, but represent stable individual traits because they remain constant over several weeks (Mamassian & Wallace, 2010; Schütz, 2012, 2014; Wexler, Duyck, & Mamassian, 2015). 
The directional preferences in depth ordering point towards an interaction in the processing of motion and depth, but their origin and their ecological function are completely unknown. Motion analysis is a complex process that can be divided into different stages, according to the type of stimuli and the underlying computations (for review see Burr & Thompson, 2011; Nishida, 2011). This is also reflected in the brain, where a wide range of different areas have been associated with the analysis of motion (for review see Culham, He, Dukelow, & Verstraten, 2001). Since each neuron in early visual areas only receives input from a small area on the retina, it can only signal the motion direction orthogonal to the orientation of an elongated stimulus that spans its receptive field (Wallach, 1935). It has been shown that, over time, this one-dimensional (1D) motion signal is refined by signals from the corners of the stimulus to represent the actual two-dimensional (2D) motion direction (Lorenceau, Shiffrar, Wells, & Castet, 1993; Pack & Born, 2001; Born, Pack, Ponce, & Yi, 2006; Huang, Albright, & Stoner, 2007). Moreover, depending on the type of spatio-temporal changes, first-order motion defined by luminance changes can be distinguished from second-order motion defined by contrast changes (Lu & Sperling, 2001). These different motion types are supported by partially different networks (Smith, Greenlee, Singh, Kraemer, & Hennig, 1998; Vaina, Cowey, & Kennedy, 1999; Vaina & Soloviev, 2004). All of these examples concern linear motion on the retina, which is typically caused by the translation of objects in our environment. However, linear motion is only one specific class of motion. Self-motion of the observer through the environment creates characteristic radial flow patterns on the retina (Gibson, 1950; Lappe, Bremmer, & van den Berg, 1999). These optic flow patterns are also analyzed in specialized brain areas (Morrone et al., 2000). 
Here we use psychophysical techniques to specify how illusory depth signals from transparent motion interact with genuine depth signals from stereoscopic disparity, and to narrow down the level of motion processing at which the directional biases in depth ordering arise. This will provide new insights about the interaction of motion and depth perception, the perception of motion transparency, and the potential ecological function of directional biases. 
Overview of experiments
In seven experiments, we measured how directional biases in depth ordering are modulated by disparity signals and different global and local motion properties. In Experiments 1 and 2, we investigated the relationship between directional biases in depth ordering and stereoscopic disparity as a genuine signal for distance in depth. Experiment 1 studied how the magnitude of directional biases is affected by the presence of disparity signals. Experiment 2 compared the speed of processing of directional biases and disparity signals by varying the stimulus duration. In Experiments 3 and 4, we tested how global properties of the transparent motion display affect depth order preferences. Experiment 3 tested whether the preferred direction seen in front varies across different locations in the visual field. Experiment 4 studied whether the preferred direction seen in front depends on the local motion direction of the dots or on the global motion direction of the aperture. In Experiments 5 and 6, we investigated whether directional preferences are based on 1D or 2D motion signals. In Experiment 7 we assessed how quickly depth order is determined by changing the motion direction at different points in time during the trial. In the following section we report general methods and the general distribution of directional biases. 
General methods
Observers
Students from Giessen University participated in the study as naïve observers and received monetary compensation or partial course credit. All experiments were in accordance with the principles of the Declaration of Helsinki and approved by the local ethics committee LEK FB06 at Giessen University (Proposal Number 2009-0008). Observers gave informed consent prior to the experiments. 
Visual stimuli
Random-dot kinematograms (RDKs) were composed of black and white dots (0.14 × 0.14 degrees of visual angle [dva]) on a gray background. The dots moved at a speed of 10 dva/s and had a limited lifetime of 200 ms. The initial lifetime was randomized for each dot separately. Each RDK was composed of two layers, moving in opposite directions and with opposite luminance polarities. Dots overlapped each other and half of the elements of each layer were drawn on top, so that occlusion was not an informative cue for depth ordering. The overall dot density of the RDK was one dot/dva2. Motion was displayed within stationary, circular apertures with a radius of 5 dva. A red crosshair was presented as fixation target throughout the whole trial (Thaler, Schütz, Goodale, & Gegenfurtner, 2013). 
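As a minimal sketch of the dot kinematics described above (Python/NumPy; the function name and the rule of redrawing expired dots at a random aperture position are illustrative assumptions, not the authors' code):

```python
import numpy as np

def update_dots(pos, age, direction_deg, dt=0.01, speed=10.0,
                lifetime=0.2, radius=5.0, rng=None):
    """Advance one transparent-motion layer by one frame.

    pos: (N, 2) dot positions in dva; age: (N,) seconds since (re)birth,
    initialized randomly per dot so that lifetimes are desynchronized.
    Dots drift at `speed` dva/s along `direction_deg`; after `lifetime`
    seconds a dot is redrawn at a random position inside the aperture.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.deg2rad(direction_deg)
    pos = pos + dt * speed * np.array([np.cos(theta), np.sin(theta)])
    age = age + dt
    dead = age >= lifetime
    n_dead = int(dead.sum())
    if n_dead:
        # Rebirth: uniform sampling inside the circular aperture.
        r = radius * np.sqrt(rng.random(n_dead))
        phi = 2 * np.pi * rng.random(n_dead)
        pos[dead] = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
        age[dead] = 0.0
    return pos, age
```

The two layers would be two such dot sets with opposite `direction_deg` and opposite luminance polarity, drawn in interleaved order so that occlusion is uninformative.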
Experimental procedure
Observers had to fixate the red crosshair and press an assigned button to start a trial. After the button-press the transparent motion display was shown for 1 s. After the extinction of the display, observers had to indicate the luminance polarity of the layer perceived in front (i.e., black or white). 
Data analysis and modeling
The motion direction of the transparent layers was varied and the proportion of front choices was calculated as a function of motion direction. We used a cosine model that relates a particular motion direction with the strength to perceive this direction in front (Mamassian & Wallace, 2010; Schütz, 2014). Variants of this model are described below but they always contain at least two free parameters: the preferred direction (θm) and the magnitude of directional preferences (bdir). The preferred direction corresponds to the direction that is seen most often in front. Compared to a psychometric function in classical psychophysics, this parameter is analogous to the location of the psychometric function. The magnitude of this directional bias specifies how much depth ordering depends on motion direction. Compared to a psychometric function in classical psychophysics, this parameter is analogous to the slope of the psychometric function. The model was fit to each observer and each experimental condition separately.    
The magnitude of directional biases was expressed on an exponential scale, because its distribution was closer to a normal distribution on an exponential than on a linear scale (Schütz, 2014). The model responses were transformed into proportions of choices using a logit model:
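In schematic form, the model and the logit transformation can be written as follows. This is a reconstructed sketch consistent with the parameter definitions above and with the parameterization in Schütz (2014); the printed equations may differ in detail:

```latex
% Strength to see the layer moving in direction \theta in front:
s(\theta) = e^{b_{\mathrm{dir}}} \cos(\theta - \theta_m)
% Logit (logistic) link from model response to choice probability:
P(\text{front} \mid \theta) = \frac{1}{1 + e^{-s(\theta)}}
```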
Individual preference functions are shown in Figures 2A, 2C, 4A, and 6A. As in a previous study (Schütz, 2014), the model accounted for about 95% of the variability in depth order choices (Figure 1A). As descriptive statistics, average values and standard deviations across observers are reported throughout the manuscript. 
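To make the fitting concrete, here is a small Python sketch of a maximum-likelihood fit of the two-parameter model. The cosine-logistic form matches the description above, but the optimizer, starting points, and function names (`p_front`, `fit`) are assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

def p_front(theta_deg, theta_m, b_dir):
    """Assumed cosine-logistic model: front strength e^b_dir * cos(theta - theta_m),
    passed through a logistic link to give a choice probability."""
    s = np.exp(b_dir) * np.cos(np.deg2rad(theta_deg - theta_m))
    return 1.0 / (1.0 + np.exp(-s))

def fit(theta_deg, n_front, n_total):
    """Maximum-likelihood fit of (theta_m, b_dir) to front-choice counts,
    using several starting points to avoid local minima on the circle."""
    def nll(params):
        p = np.clip(p_front(theta_deg, *params), 1e-9, 1 - 1e-9)
        return -np.sum(n_front * np.log(p) + (n_total - n_front) * np.log(1 - p))
    best = min((minimize(nll, x0, method="Nelder-Mead")
                for x0 in [(-90.0, 0.0), (90.0, 0.0), (0.0, 0.0)]),
               key=lambda r: r.fun)
    return best.x  # theta_m in degrees, b_dir on the log scale
```

Parameter recovery can be checked by generating choice proportions from a known (θm, bdir) pair and refitting.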
Figure 1
 
Overview of experiments and individual directional biases in depth order. (A) Distribution of R2 of the model for the datasets in (B). The vertical line indicates the median R2 of 0.95. (B) The graph shows the preferred directions seen in front (θm) and the magnitude of these directional biases (bdir) in the baseline conditions of all experiments. Not all of the data points represent independent measurements, because some observers participated in more than one experiment and consequently will appear several times in the graph. The following baseline conditions were selected from each experiment: Experiment 1: condition with zero disparity; Experiment 2: duration of 600 ms; Experiment 3: aperture located at the center of the screen; Experiment 4: local and global motion in the same direction; Experiment 5: lines moving orthogonal to their orientation; Experiment 6: lines moving orthogonal to their orientation, duration of 400 ms; Experiment 7: no change in motion direction.
Equipment for Experiments 1 and 2
Stimuli were displayed on two 19-in. LCD Dell UltraSharp 1907FP monitors driven by a Nvidia Quadro NVS 285 with a refresh rate of 75 Hz. At a viewing distance of 55.5 cm, the active screen area subtended 39 dva horizontally and 31 dva vertically. With a spatial resolution of 1280 × 1024 pixels, this resulted in 33 pixels/dva. The luminance of white, gray, and black pixels was 97.9, 30.3, and 0 cd/m2 (below the sensitivity of a Photo Research PR 650), respectively. Stimulus presentation was controlled by Matlab, using the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Stimuli for the left eye were presented on the left monitor screen and stimuli for the right eye on the right monitor screen. A Wheatstone mirror stereoscope, consisting of two first surface mirrors (169 × 194 mm), was used to bring the two views into alignment. 
Equipment for Experiments 3 through 7
Stimuli were displayed on a 21-in. SONY GDM-F520 CRT monitor driven by a Nvidia Quadro NVS 290 graphics board with a refresh rate of 100 Hz. At a viewing distance of 47 cm, the active screen area subtended 45 dva horizontally and 36 dva vertically. With a spatial resolution of 1280 × 1024 pixels, this resulted in 28 pixels/dva. The luminance of white, gray, and black pixels was 94, 48, and 1 cd/m2, respectively. 
Eye position signals of the right eye were recorded with a video-based eye tracker (EyeLink 1000; SR Research, Ottawa, Ontario, Canada) and were sampled at 1000 Hz. The eye tracker was driven by the Eyelink Toolbox (Cornelissen, Peters, & Palmer, 2002). 
Trials were excluded from further analysis if the eye position deviated by more than 2 dva from the central fixation (between 1% and 7% of trials in Experiments 3 through 7). 
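A sketch of this exclusion criterion (assuming the 2-dva limit is applied to every gaze sample within a trial; `valid_trial` is an illustrative name):

```python
import numpy as np

def valid_trial(eye_xy, fix_xy=(0.0, 0.0), max_dev=2.0):
    """Return False if gaze (in dva, as an (n_samples, 2) array) ever
    deviates more than `max_dev` dva from the fixation target."""
    dev = np.hypot(eye_xy[:, 0] - fix_xy[0], eye_xy[:, 1] - fix_xy[1])
    return bool(np.all(dev <= max_dev))
```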
General results
To provide an overview of the directional preferences in depth ordering and individual differences in these preferences, we report the measured biases from the neutral conditions (without disparity or conflicts between local and global motion properties) of all seven experiments. Replicating previous studies (Mamassian & Wallace, 2010; Wexler et al., 2015), we found directional biases in depth ordering of transparent motion (Figure 1B). Downward and rightward motion directions were most often seen in front. Together with the previous evidence for directional biases, these results show clearly that the visual system uses motion direction as a cue to overcome the ambiguous depth order of transparent motion. In the following experiments, we studied how the preferred direction seen in front, and the magnitude of this bias are modulated by disparity signals and by global and local motion signals. 
Experiments 1 and 2: Motion direction versus stereoscopic disparity
Of course, directional preferences are only one of several potential cues to depth order in transparent motion. Previous research distinguished surface features that are effective cues to depth order, such as speed (Moreno-Bote, Shpiro, Rinzel, & Rubin, 2008; Mamassian & Wallace, 2010; Schütz, 2011), from surface features that do not affect depth order, such as dot density (Moreno-Bote et al., 2008; Schütz, 2011). While directional preferences and surface features are merely ambiguous cues to depth order, stereoscopic disparity is an unambiguous cue to distance in depth. Since the effect of stereoscopic disparity on perceived depth in transparent motion has not been tested so far, we investigated whether disparity signals interact with directional preferences or whether they completely abolish the effect of directional preferences. 
We were interested in the time course of directional preferences and disparity signals in particular, because the temporal properties of disparity signals are currently debated (Caziot, Valsecchi, Gegenfurtner, & Backus, 2015). On one hand, the upper temporal frequency of stereopsis is about 10 Hz (Richards, 1972; Kane, Guan, & Banks, 2014), suggesting that disparity is processed slowly. On the other hand, disparity signals can be effective even at short presentations (Uttal, Fitzgerald, & Eskin, 1975). A recent study showed that the speed–accuracy trade-off is similar for luminance and disparity signals (Caziot et al., 2015), suggesting that the processing of disparity is not particularly slow compared to other visual features. Depending on the relative processing speed of motion direction and stereoscopic disparity, it might be that initial depth ordering is dominated by directional preferences, disparity or an interaction of both. To distinguish between these alternatives, we varied the amount of stereoscopic disparity and the duration of motion in two experiments. 
Methods
Observers
Nine and seven naïve observers participated in Experiments 1 and 2, respectively. All observers had normal stereo vision and stereo thresholds below 40 arcsec assessed by a Stereo Optical graded circle test (Stereo Optical, Chicago, IL). 
Visual stimuli
The motion duration was 600 ms in Experiment 1 and varied in Experiment 2. In Experiment 2, the stimuli were immediately followed by a 600-ms mask in which the motion direction (−180° to 180°) and the disparity (±1 arcmin) of each dot were randomized separately. This mask was used to effectively limit visual processing to the specified presentation duration. 
Design
The motion direction of the black dots was varied in 24 steps of 15° from −180° to 165°. In Experiment 1 the stereoscopic disparity was varied in six logarithmic levels at 0, 0.3, 0.53, 0.95, 1.68, and 3 arcmin. Each condition was repeated eight times, leading to a total of 1,152 trials. In Experiment 2 the stereoscopic disparity was fixed at 1 arcmin and the motion duration was varied in five levels at 40, 80, 160, 300, and 600 ms. Each condition was repeated 10 times, leading to a total of 1,200 trials. 
Data analysis and modeling
We analyzed the proportion of front choices as a function of the motion direction of the layer with negative disparity. Negative disparity corresponds to near/front. We extended the cosine model to extract the magnitude of the disparity bias (bdisp), in addition to the magnitude of the directional bias (bdir).    
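In sketch form, the extended model adds a constant disparity term to the direction-dependent strength of the negative-disparity layer. This reconstruction is consistent with the parameter definitions in the text, although the printed Equation 3 may differ in detail:

```latex
s(\theta) = e^{b_{\mathrm{dir}}} \cos(\theta - \theta_m) + e^{b_{\mathrm{disp}}},
\qquad
P(\text{front} \mid \theta) = \frac{1}{1 + e^{-s(\theta)}}
```

Because both magnitudes enter through the same exponential scale, the two bias terms contribute to the front strength in directly comparable units.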
Using this formula, the magnitude of the direction bias and the disparity bias can be compared directly to each other. Since disparity was varied independently of motion direction, disparity signals should not lead to a change in preferred motion direction (θm), but to an overall weakening of the magnitude of directional preferences (bdir). Thus, we only compared the magnitude of biases between conditions and fitted the same preferred direction (θm) to all disparity and motion duration conditions, thereby improving the stability of the fits. 
Results
In Experiment 1, we varied the stereoscopic disparity in six steps from zero to three arcmin and measured the joint influence of stereoscopic disparity and motion direction on perceived depth order (Figure 2A, B). The magnitude of the direction bias (bdir in Equation 3) decreased with disparity, F(5, 40) = 12.54, p < 0.001, most notably at larger disparities (0.19 ± 0.24, 0.15 ± 0.27, 0.15 ± 0.19, 0.01 ± 0.43, −0.12 ± 0.75, and −0.81 ± 0.39 for the six disparity levels, respectively). The magnitude of the disparity bias (bdisp in Equation 3) increased with disparity, F(5, 40) = 101.15, p < 0.001, most notably at low disparities (−0.99 ± 0.02, −0.37 ± 0.30, −0.15 ± 0.31, 0.19 ± 0.26, 0.45 ± 0.36, and 0.45 ± 0.20 for the six disparity levels, respectively). This differential effect of stereoscopic disparity on the direction and disparity biases was supported by a significant interaction between stereoscopic disparity and type of bias, F(5, 40) = 69.04, p < 0.001. Interestingly, disparity and directional biases were equal in magnitude at a stereoscopic disparity of about one arcmin. This means that the magnitude of typical directional preferences is equivalent to a stereoscopic disparity of about one arcmin. 
In Experiment 2 (Figure 2C, D), the transparent motion display was presented with a fixed disparity of one arcmin and the presentation duration was varied in five steps (40, 80, 160, 300, and 600 ms). To limit visual processing to the presentation duration, the stimuli were followed by a 600 ms mask in which the motion direction and the disparity of each dot were randomized. The magnitude of the direction bias increased with duration, F(4, 24) = 9.72, p < 0.001, most notably at short durations (−0.46 ± 0.33, −0.32 ± 0.53, 0.33 ± 0.45, 0.17 ± 0.38, and 0.33 ± 0.54 for the five durations, respectively). Similarly, the magnitude of the disparity bias increased with duration, F(4, 24) = 15.29, p < 0.001, (−0.38 ± 0.37, −0.05 ± 0.46, 0.43 ± 0.41, 0.36 ± 0.36, and 0.48 ± 0.52 for the five durations, respectively). There was no significant difference between the two types of biases, F(1, 6) = 1.87, p = 0.221, and there was also no significant interaction between duration and the type of bias, F(4, 24) = 0.47, p = 0.759, suggesting that directional and disparity biases were affected by the presentation duration in the same way. 
Discussion
When stereoscopic disparity was varied in Experiment 1, the magnitude of the direction and disparity biases corresponded at a disparity of about one arcmin. This means that the disambiguating signal from motion direction had the same strength as a stereoscopic disparity of one arcmin. To be clear, our result is not that observers perceive a one arcmin distance in depth in transparent motion, but that the influence of direction and disparity on depth ordering is balanced at a disparity of one arcmin. The value of one arcmin is clearly above typical disparity thresholds of about 30 arcsec in the population (Coutant & Westheimer, 1993) and below 40 arcsec in our observers; this underlines the importance of directional preferences in the perception of depth order in transparent motion. When the motion duration was varied in Experiment 2, both direction and disparity biases increased with increasing motion duration up to about 160 ms. These results point towards a common origin of directional biases and disparity signals. A close connection between the perception of transparent motion and disparity is also supported by the finding that disparity differences between layers facilitate the perception of transparent motion (Hibbard & Bradshaw, 1999; Greenwood & Edwards, 2006). 
A common processing of disparity and motion signals points to several candidate areas for the neural substrate of directional preferences in depth order. Neurons selective for disparity have been found in several visual areas, including the primary visual cortex (V1; Cumming & Parker, 1999), the middle temporal area (MT; Maunsell & Van Essen, 1983b), and the medial superior temporal area (MST; Roy, Komatsu, & Wurtz, 1992). Neurons sensitive to motion direction have been identified in the very same areas: V1 (De Valois, Yund, & Hepler, 1982), MT and MST (Maunsell & Van Essen, 1983a). Hence, the directional biases in transparent motion could potentially arise in any of these areas. Previous research showed that these areas differ with respect to the type of motion processing. Thus, we used motion psychophysics to distinguish between different types of motion signals in the following experiments. 
Figure 2
 
Pitting direction bias against disparity. (A) Depth order preferences as a function of motion direction of the negative disparity layer for one observer. The different colors denote different magnitudes of stereoscopic disparity. (B) Magnitude of direction (bdir) and disparity biases (bdisp) as a function of stereoscopic disparity. (C) Depth order preferences as a function of motion direction of the negative disparity layer for one observer. The different colors denote different motion durations in seconds. (D) Magnitude of direction and disparity biases as a function of presentation duration. (A, C) Symbols represent data and lines represent the fit of the model. (B, D) Direction and disparity biases are displayed in blue squares and magenta triangles, respectively. Error bars indicate 95% confidence intervals.
Experiment 3: Directional preferences in different locations of the visual field
Area MST is one of the candidate areas in which neurons are sensitive to motion direction (Maunsell & Van Essen, 1983a) as well as to stereoscopic disparity (Roy et al., 1992). In particular, area MST is involved in the perception of optic flow (Duffy & Wurtz, 1991; Smith, Wall, Williams, & Singh, 2006): forward or backward locomotion generates a specific field of visual motion on the retina (Gibson, 1950), where motion expands or contracts radially relative to a focal point. 
In Experiment 3, we investigated a potential ecological interpretation of the directional preferences in depth ordering. We hypothesized that the typical motion direction in optic flow is interpreted as background motion and thus is more likely to be perceived in the background with transparent motion. In this case, the preferred axis in depth order judgments should be aligned with the typical direction of optic flow at different locations in the visual field. For instance, in the lower visual field, forward locomotion typically creates downward optic flow, and if this motion is interpreted as background, we should expect an upward depth order bias in this part of the visual field. Of course, this is a gross simplification because the exact pattern of optic flow on the retina depends on the eye and head position relative to the heading direction. However, even a slight imbalance in the distribution of motion directions from optic flow across the visual field might contribute to the directional preferences in transparent motion. To test this hypothesis, we presented transparent motion displays at four different eccentric locations as well as at the center of the visual field (Figure 3A; Movie 1). 
Figure 3
 
Influence of aperture location on direction bias. (A) Illustration of tested locations (see also Movie 1). The dashed lines indicate possible aperture locations. The thin lines indicate the axes of optic flow on the retina based on forward (blue arrows) or backward locomotion (red arrows) towards the fixation location. The gray arrows indicate the motion direction of black and white dots in one trial. (B) Preferred directions for the different aperture locations. Directions of optic flow for backward and forward locomotion are indicated by the red and blue lines, respectively. (C) Visual field gain over R2. Positive values indicate an alignment of direction biases with motion directions caused by forward locomotion; negative values with motion directions caused by backward locomotion. (D) Correlation between the average magnitude of preferences (parameter bdir in Equation 4) and the standard deviation of preferred directions across the five tested locations. The black line represents a linear regression. (B–D) Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals.
Methods
Observers
The author ACS and nine naïve observers participated in these experiments. 
Visual stimuli
The radius of the aperture was three dva. 
Design
The motion direction of the black dots was varied in 16 steps of 22.5° from −180° to 157.5°. The stationary apertures were presented either at the screen center or at an eccentricity of six dva in one of the four cardinal directions (−90, 0, 90, 180°). Each condition was repeated 10 times, leading to a total of 800 trials. 
Data analysis and modeling
To study directional biases, the preference for each motion direction of the black dots was calculated as the proportion of black-in-front choices. We added a parameter (bpol) to account for potential biases to see black or white in front:    
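A sketch of this extension, where θ is the motion direction of the black dots (the additive form of the polarity term is an assumption; the printed Equation 4 may differ in detail):

```latex
s(\theta) = e^{b_{\mathrm{dir}}} \cos(\theta - \theta_m) + b_{\mathrm{pol}},
\qquad
P(\text{black in front} \mid \theta) = \frac{1}{1 + e^{-s(\theta)}}
```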
The polarity bias was added to improve the fitting, but it did not affect the overall results for the directional biases. In this experiment we were mainly interested in the motion direction θm, which is preferentially seen in front, and how this preferred direction varies across the visual field. 
We tested with a simple model whether the optic flow that would be generated by locomotion contributed to the directional preferences in depth ordering. Given that the optic flow direction during forward locomotion differs predictably across eccentric visual field locations (Figure 3A), we calculated whether this motion at one of the four eccentric locations (θfw; 0°, 90°, 180°, and −90° at locations of 0°, 90°, 180°, and −90°, respectively) significantly biased the baseline preferred direction that is identical at all locations (θb):
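One way to write such a model (a sketch; the printed Equation 5 may be parameterized differently) is as a vector-sum combination of the baseline preference and the weighted local flow direction, with the predicted preferred direction at location ℓ given by:

```latex
\hat{\theta}_m(\ell) = \arg\!\left( e^{\,i\theta_b} + b_{\mathrm{fw}}\, e^{\,i\theta_{\mathrm{fw}}(\ell)} \right)
```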
Since the typical optic flow directions during forward and backward self-motion are exactly opposite, the weighting parameter (bfw) could take negative and positive values, indicating an alignment of the preferred direction seen in front with optic flow resulting from backward and forward self-motion, respectively. A value close to zero indicates that preferred directions are not aligned with optic flow directions across the visual field. 
Results
We found that visual field location had very little influence on the directional biases in perceived depth ordering. The average preferred directions were −107° ± 56°, −115° ± 39°, −111° ± 46°, −102° ± 58°, and −122° ± 54° for the central, lower, right, upper, and left locations, respectively (Figure 3B). Hence, on average, motion to the lower left was perceived in front, and this was not related to the typical motion direction in optic flow at different locations in the visual field. Nevertheless, the preferred directions for some individual participants varied between the different locations. To analyze possible effects of typical optic flow direction in more detail at an individual level, we calculated the contribution of the typical optic flow direction during forward self-motion at these locations on top of the baseline preferred direction across all locations (Equation 5). This model could match the variations in preferences across locations for some observers (Figure 3C), but the average weight (0.08 ± 0.58) was not significantly different from zero, t(9) = 0.44, p = 0.670, and the weights for most observers were close to zero. This means that the distribution of preferred directions was not aligned with the direction of optic flow during forward or backward self-motion. 
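Note that preferred directions are circular quantities, so the averages and standard deviations above are best computed with circular statistics. A minimal sketch using the standard vector-sum convention (the paper's exact averaging method is not spelled out here):

```python
import numpy as np

def circ_mean_deg(angles_deg):
    """Circular mean of angles in degrees (direction of the mean resultant vector)."""
    a = np.deg2rad(angles_deg)
    return np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean()))

def circ_std_deg(angles_deg):
    """Circular standard deviation sqrt(-2 ln R) in degrees, where R is
    the mean resultant length."""
    a = np.deg2rad(angles_deg)
    # Guard against rounding pushing R slightly above 1.
    R = min(np.hypot(np.sin(a).mean(), np.cos(a).mean()), 1.0)
    return np.rad2deg(np.sqrt(-2.0 * np.log(R)))
```

Unlike an arithmetic mean, this handles samples straddling the ±180° wrap correctly.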
Furthermore, there was a negative correlation between the standard deviation of preferred directions across the five locations and the average magnitude of preferences (parameter bdir in Equation 4), r(9) = −0.67, p = 0.036: participants with weaker preferences showed more variability across locations (Figure 3D). This indicates that differences in preferences between locations were more likely if these preferences were weak in the first place. 
Discussion
Directional preferences were not related to the typical direction of optic flow in different locations of the visual field. Thus, an ecological explanation that the depth order preferences in transparent motion are caused by the typical distribution of retinal motion directions can be dismissed. This finding is consistent with psychophysical results showing that optic flow does not allow the perception of transparent motion: the combination of a translating dot field and an optic flow pattern does not lead to an overlay of two layers. Instead, both fields are perceived as one layer, causing a shift in the perceived focus of expansion of the optic flow (Duffy & Wurtz, 1993). These findings also suggest that visual areas which process optic flow, such as MST (Duffy & Wurtz, 1991; Morrone et al., 2000), are most likely not involved in the depth order preferences in transparent motion. 
Although there was no consistent pattern of directional biases across different locations, there were some minor idiosyncratic variations of preferred directions. We note that similar idiosyncratic variations of perceptual biases in different parts of the visual field have been found in onset binocular rivalry (Carter & Cavanagh, 2007) and in the perception of facial features (Afraz, Pashkam, & Cavanagh, 2010). In the next experiment, we remain interested in the spatial properties of the motion display, but focus this time on its size rather than location. 
Experiment 4: Global versus local motion signals
Along the hierarchy of motion processing, motion signals are integrated over larger areas, leading to a global motion percept (Burr & Thompson, 2011; Nishida, 2011). Moving from V1 to MT, the size of receptive fields increases and neurons integrate motion over larger areas of the visual field (Movshon & Newsome, 1996). 
Theoretically, directional preferences could be present in both local and global motion computations. Local and global motion can be disentangled by moving the aperture in a direction that is not necessarily consistent with the motion of the elements inside the aperture (Hedges et al., 2011). Therefore, in the following experiment, the apertures of the RDKs moved independently of the local motion direction of the dots (Movie 2). We used an aperture of 5 dva radius, which is much larger than the typical receptive field size in V1 (Dumoulin & Wandell, 2008). Hence, the aperture motion could not easily be detected by neurons in V1. 
Methods
Observers
Eight naïve observers participated in this experiment. The data of one additional observer had to be excluded from further analysis because that observer had weak directional preferences, which made it difficult to determine the preferred direction precisely. 
Visual stimuli
The apertures of the two opposite motions were separated and either moved or remained stationary. If they moved, the start locations of the apertures were chosen such that the borders of the apertures touched each other at the beginning of the trial. At the end of the trial, the apertures touched each other on the opposite side. In two control conditions, the apertures were stationary at the center of the screen, overlapping either completely or to 25%. An overlap of 25% was chosen because it corresponds to the average overlap across one trial in the moving conditions. 
Design
The motion direction of the black dots was varied in 16 steps of 22.5° from −180° to 157.5°. Four conditions with moving apertures (−90°, 0°, 90°, and 180° rotation relative to local dot motion) and two control conditions with stationary apertures (overlapping completely or to 25%) were tested. Each condition was repeated 10 times, leading to a total of 960 trials. Data from the two stationary control conditions and the 0° condition were combined because there were no systematic differences. 
Data analysis and modeling
To quantify the relative gain of global over local motion direction, we calculated preferred directions in terms of local motion direction separately for the four different offsets between global and local motion direction and the 0°/stationary conditions (Figure 4A, B). Data were fitted using Equation 4. For each global motion direction θi, we calculated the difference between the preferred direction in this condition and the preferred direction in the 0°/stationary conditions (Figure 4C). The global motion gain was defined as the slope of a linear regression of these differences on the actual offsets between global and local motion (Figure 4D). A gain of unity is obtained when depth order is solely determined by global motion, and it is zero when depth order is solely determined by local motion. 
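This gain computation can be sketched in a few lines (a minimal illustration with our own variable names; the authors do not publish code): the shifts of the preferred direction are regressed on the imposed offsets, and the slope is taken as the global motion gain.

```python
import numpy as np

def motion_gain(offsets_deg, preferred_shifts_deg):
    """Slope of a least-squares regression of preferred-direction
    shifts on the offsets between global and local motion direction.
    A slope of 1 means depth order follows global motion; a slope of
    0 means it follows local motion only."""
    slope, _intercept = np.polyfit(offsets_deg, preferred_shifts_deg, 1)
    return slope

# Offsets between global (aperture) and local (dot) motion direction
offsets = np.array([-180.0, -90.0, 90.0])

# If depth order follows local motion, the preferred directions
# (expressed relative to local motion) barely shift with the offset:
gain_local = motion_gain(offsets, np.array([2.0, -1.0, 1.0]))
# If depth order followed global motion, shifts would equal the offsets:
gain_global = motion_gain(offsets, offsets)
```

The same slope-of-shifts logic recurs in the later experiments (line orientation gain, initial motion gain), only with different offset variables.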
Figure 4
 
Influence of aperture motion on direction bias. (A) Depth order preferences as a function of local motion direction for one observer. The different colors denote different separations between local and global motion direction. Symbols represent data and lines represent the fit of the model. (B) Preferred motion direction as a function of the difference between global and local motion direction. Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals. (C) Shift of preferred motion direction relative to the preference in the 0° condition. The horizontal line indicates a global motion gain of zero and the diagonal line, a global motion gain of unity. (D) Gain of global motion direction.
Results
The preferred directions in terms of local motion were quite stable over the different global motion directions (Figure 4). The relative gain of global versus local motion (see definition above in Methods) was close to zero for the tested rotations of −180° (−0.02 ± 0.02), −90° (0.02 ± 0.06), and 90° (−0.03 ± 0.04). Thus, the perceived depth order was determined only by the local motion direction of the dots and was completely independent of the global motion direction of the aperture. This is quite surprising: the motion direction of the dots is hard to see, as it is subjectively masked by the salient motion of the aperture (Movie 2), and yet it is this dot motion that determined depth order. 
Discussion
The finding that global properties, such as the motion direction of the aperture, have no influence on the directional preferences for depth ordering is interesting with respect to the computations underlying motion transparency. It has been demonstrated that the segmentation of two overlapping motion patterns into two transparent motion layers depends on locally unbalanced signals, such that local elements can be distinguished by location, disparity, or spatial frequency (Qian, Andersen, & Adelson, 1994). Locally balanced signals lead to the perception of flicker instead of transparent motion. While direction-selective neurons in V1 are similarly activated by locally balanced and unbalanced motion, neurons in MT are more activated by unbalanced motion (Qian & Andersen, 1994). Even though motion transparency has been shown to be influenced by local motion signals, the segregation of two layers in depth reflects an integration of these local signals. Indeed, the percept of two overlapping layers covering the whole display, instead of a patchy collection of depths in the neighborhood of the moving dots, suggests that depth order in motion transparency is the result of a global integration process of depth information. Our finding that this global depth percept relies on local but not on global motion signals is therefore remarkable. This emphasis on local properties of the motion motivated us to investigate further the fine characteristics of local motion. 
Experiments 5 and 6: 1D versus 2D motion signals
The results of Experiments 3 and 4 showed no evidence that global properties of the stimuli, such as aperture location or aperture motion, are involved in the directional biases of depth ordering. Instead, the biases might arise at an early, local stage of motion processing. However, even within early local motion processing, different processing steps can be distinguished. Since individual neurons only receive input from stimuli within their small receptive field, they cannot differentiate between different types of motion of a bar traversing their receptive field (Wallach, 1935). As a result, they initially represent the 1D motion signal orthogonal to the bar's orientation. Over time, this 1D signal is transformed into the actual 2D signal as 2D motion information from the line endings propagates through the system. This transition has been measured previously for perception of motion direction (Lorenceau et al., 1993), for neural activity in area MT (Pack & Born, 2001), and for smooth pursuit eye movements (Pack & Born, 2001; Masson & Stone, 2002; Born et al., 2006). Here, we studied whether the biases in depth ordering are driven by the 1D or the 2D motion signal; that is, whether they arise before (1D) or after (2D) the aperture problem is solved. By using long, low-contrast lines, we weakened the 2D motion signal from the line endings, and by rotating the lines' orientation relative to the 2D motion direction, we created different offsets between the 1D and 2D signals. 
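The 1D signal referred to above can be made concrete with a small vector sketch (our own illustration, with hypothetical function names): a detector that sees only the line's interior recovers only the velocity component orthogonal to the line's orientation.

```python
import math

def one_d_direction(true_dir_deg, line_orientation_deg):
    """Direction (deg, in [0, 360)) of the 1D motion signal: the
    component of the true 2D velocity along the normal of the line.
    A line moving orthogonal to its orientation yields identical
    1D and 2D directions."""
    normal = line_orientation_deg + 90.0
    # Signed length of the unit 2D velocity projected onto the normal
    amp = math.cos(math.radians(true_dir_deg - normal))
    # A negative projection means the 1D signal points the other way
    return normal % 360.0 if amp >= 0 else (normal + 180.0) % 360.0

# A vertical line (orientation 90 deg) moving rightward (0 deg):
# the 1D and 2D directions coincide.
assert one_d_direction(0.0, 90.0) == 0.0
# Tilting the line's orientation by 45 deg shifts the 1D signal to
# 315 deg (i.e., -45 deg), orthogonal to the line rather than along
# the true 2D direction.
assert one_d_direction(0.0, 45.0) == 315.0
```

This is exactly the offset manipulated in Experiments 5 and 6: rotating line orientation relative to the 2D direction rotates the 1D signal by the same amount.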
Methods
Observers
The author ACS and 10 naïve observers participated in Experiment 5. The author ACS and seven naïve observers participated in Experiment 6. The data of one additional observer in Experiment 5 and three additional observers in Experiment 6 had to be excluded from further analysis because they had weak directional preferences that made it difficult to determine the preferred direction precisely. 
Visual stimuli
Instead of dots, the single elements were lines of 2.5 dva length, 0.07 dva width, and 20% contrast. Long, low contrast lines were used to increase the potential effect of 1D signals (Lorenceau et al., 1993; Born et al., 2006). We used lines as stimuli because they contain 1D signals from the line centers and 2D signals from the line endings at the same time, such that perceived depth order could rely on both signals. Although more complex stimuli have recently been introduced to study the interaction between 1D and 2D signals, in particular the multiple-aperture stimulus (Amano, Edwards, Badcock, & Nishida, 2009), we refrained from using these stimuli for two main reasons. First, we were not interested in the global pooling of motion directions across space that these multiple-aperture stimuli have been specifically designed for. Second, in contrast to these stimuli, our moving lines stimulus seems to create a stronger percept of two overlapping, transparent surfaces and apparent depth, presumably because of the grouping by similarity of the identical line orientations. 
The contrast polarity of the lines was randomized across trials, but all lines had the same polarity within a trial to avoid depth cues from occlusion. The apertures were stationary at the screen center and had a radius of 5 dva. The line density was 0.5 lines/dva2. The orientation of the lines was varied relative to their motion direction. The standard condition was 0°, in which the lines moved orthogonal to their orientation, so that the 1D and 2D motion signals were identical. Motion duration was 1 s in Experiment 5 and was varied in Experiment 6. 
Experimental procedure
In order to measure perceived motion direction and perceived depth order at the same time, observers had to rotate a response line such that it matched the perceived motion direction of the layer they perceived as being in front. 
Design
The motion direction of one of the layers was varied in 12 steps of 15° from 0° to 165°. In Experiment 5, the lines' orientation relative to the motion direction was varied in seven levels: −45°, −30°, −15°, 0°, 15°, 30°, and 45°. Each condition was repeated eight times, leading to a total of 672 trials. In Experiment 6, the lines' orientation relative to the motion direction was varied in three levels: −45°, 0°, and 45°. The motion duration was varied at 100, 200, and 400 ms. Each condition was repeated six times, leading to a total of 648 trials. 
Data analysis and modeling
We transformed the settings of the observers into two measurements. First, we classified the RDK's motion direction that was closest to the direction of the response line as the motion seen in front to obtain a binary depth order judgment. Second, we calculated the deviation of perceived motion direction from the actual 2D motion direction as the difference between the motion direction seen in front and the direction of the response line. 
Based on the depth order judgments, we analyzed the depth order preferences for directions from 0° to 165° as the proportion of front-choices in these directions. The depth order preferences for the other directions, from −180° to −15°, were calculated as 1 minus the proportion of their opposite direction. Since the preferences for half of the directions were derived from the preferences for the opposite directions, the average proportion of choices across all directions has to be 0.5, and constant biases are impossible. Since all lines had the same luminance polarity, it was not necessary to fit a polarity bias (bpol), and we used Equation 1 instead of Equation 4 to fit the data. 
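These two steps can be illustrated as follows (a sketch under our own naming, not the authors' analysis code): each response-line setting is classified against the two opposite layer directions, and preferences for the unsampled half of directions are derived from their opposites.

```python
import numpy as np

def signed_circ_dist(a, b):
    """Signed circular difference a - b in degrees, in [-180, 180)."""
    return (a - b + 180.0) % 360.0 - 180.0

def front_judgment(response_deg, dir_a, dir_b):
    """Classify which of the two opposite layer directions the response
    line is closer to (the layer seen in front), and return it together
    with the deviation of the response from that layer's 2D direction."""
    closer_a = (abs(signed_circ_dist(response_deg, dir_a))
                <= abs(signed_circ_dist(response_deg, dir_b)))
    front = dir_a if closer_a else dir_b
    return front, signed_circ_dist(response_deg, front)

def fold_preferences(p_front_0_to_165):
    """Given front-choice proportions for the 12 sampled directions
    (0 to 165 deg in 15-deg steps), derive the opposite directions
    (-180 to -15 deg) as 1 minus the opposite's proportion. The grand
    mean is then 0.5 by construction."""
    p = np.asarray(p_front_0_to_165)
    return np.concatenate([1.0 - p, p])

# A response at 170 deg is classified as the 180-deg layer in front,
# with a direction deviation of -10 deg.
front, dev = front_judgment(170.0, 0.0, 180.0)
```

The folding step makes explicit why constant biases cancel out: every proportion enters the average once as p and once as 1 − p.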
To quantify the relative gain of 1D over 2D motion, we calculated preferred directions and perceived motion directions for the different line orientations separately, relative to the 2D motion direction. We then divided the differences between the 0° condition and all other conditions by the line orientations and defined the line orientation gain as the slope of the fitted linear regression (Figure 5A). In other words, a line orientation gain of unity is obtained when depth order is solely determined by 1D motion, and it is zero when depth order is solely determined by 2D motion. The decay of line orientation gain with increasing motion duration in Experiment 6 was also quantified by linear regression. 
Figure 5
 
Contrasting 1D versus 2D motion signals. (A) Change in direction relative to the condition in which the line is moving orthogonal to its orientation (0°). The horizontal black line indicates data if only the 2D motion direction matters and line orientation has no influence on perception. The diagonal black line indicates data if only 1D motion direction matters. Perceived motion direction and depth order are displayed in red diamonds and blue squares, respectively. (B) Line orientation gain. A value of unity means that only the 1D motion direction matters; a value of zero means that only the 2D motion direction matters. (C) Line orientation gain as a function of motion duration. The lines show linear fits to the data. The black decay function represents the average decay of 1D gain of pursuit eye movements in response to diamond stimuli, taken from Masson & Stone (2002). This function is shifted leftwards by 100 ms (as an estimate of smooth pursuit latency) to account for the delay in motor execution. (D) Intercepts from the linear fits from (C). (E) Slopes from the linear fits from (C). (A–E) Data of individual observers are plotted in light colors and the average across observers in dark colors. Error bars indicate 95% confidence intervals.
Results
To disentangle 1D and 2D motion signals, lines with different orientations relative to their motion direction were used as individual elements (Figure 5A, B; Movie 3). Please note that the line orientation itself cannot create a depth order preference, because the lines in both layers had the same orientation. As in the previous experiments, the layers were constituted by opposite motion directions, such that motion direction (either 1D or 2D) was the only potential cue for depth ordering. We calculated the relative gain of 1D and 2D signals as the impact of line orientation on perceived motion direction and depth order preferences (see definition above in Methods). We found that line orientation affected perceived motion direction and perceived depth order differently, F(6, 60) = 9.06, p < 0.001. The line orientation gain was significantly lower, t(10) = 4.85, p < 0.001, for perceived motion direction (0.12 ± 0.11) than for perceived depth order (0.45 ± 0.26). This indicates that depth order was much more influenced by the early 1D signal than perceived motion direction. Given that 2D motion signals have a slower time course than 1D signals, these differences could be due to differences in the timing of perception of depth order and motion direction. By presenting the RDKs for different durations in Experiment 6, we aimed to quantify the timing of depth order and motion direction judgments (Figure 5C, D). In fact, motion duration affected the line orientation gain for perceived motion direction and perceived depth order differently, F(2, 14) = 7.10, p = 0.007. The baseline orientation gain was significantly lower, t(7) = 2.38, p = 0.05, for perceived motion direction (0.60 ± 0.09) than for perceived depth order (0.71 ± 0.07). More importantly, the decay of line orientation gain over time was significantly faster, t(7) = 3.92, p = 0.01, for perceived motion direction (−1.07 ± 0.38) than for perceived depth order (−0.31 ± 0.25). 
This confirms our result above that depth ordering was determined by an early, 1D motion signal and shows that this depth order was kept afterward, even for long motion durations. Perceived motion direction, however, was updated continuously as the motion representation shifted from 1D to 2D motion signals. 
As can be seen from Figure 5C, an even faster decay was reported for smooth pursuit eye movements in another study (Masson & Stone, 2002), but this difference is likely caused by differences in the stimuli. Since we used long lines at low contrast, the 1D motion signal is comparatively strong, which typically slows down the transition from 1D to 2D motion (Lorenceau et al., 1993; Born et al., 2006). 
Discussion
Using fields of moving lines, we demonstrated that perceived depth order is dominated by 1D motion signals orthogonal to the orientation of the moving lines. This means that depth order is based on an early motion signal, when the aperture problem (Wallach, 1935) is not yet completely resolved. Neurophysiological studies in monkeys showed that non–end-stopped neurons in area V1 (Pack, Livingstone, Duffy, & Born, 2003) and early responses of neurons in area MT (Pack & Born, 2001) are subject to the aperture problem. End-stopped neurons in V1 and later responses of neurons in MT are independent of line orientation and signal the true 2D motion direction. Since depth order was subject to the aperture problem, it is possible that non–end-stopped neurons in V1 or early neural responses in MT form the basis of the directional biases in depth ordering. 
Experiment 7: Changes in motion direction
To further test the claim that depth order is determined quickly based on motion direction and then kept for the first dominance period, we used dots as individual elements and changed their motion direction by 45° at various delays after stimulus onset (Movie 4). 
Methods
Observers
The author ACS and 11 naïve observers participated in this experiment. The data of one additional observer had to be excluded from further analysis because that observer had weak directional preferences, which made it difficult to determine the preferred direction precisely. 
Design
The motion direction of the black dots was varied in 16 steps of 22.5° from −180° to 157.5°. The motion direction of the dots either changed 20, 40, or 80 ms after motion onset by 45° clockwise or counterclockwise, or it remained identical throughout the trial in a control condition. Each condition was repeated eight times, leading to a total of 896 trials. 
Data analysis and modeling
Here, depth order preferences were calculated as a function of the second motion direction. Data were fitted using Equation 4. For each experimental condition, we calculated the difference between the preferred direction in this condition and the preferred direction when the motion direction remained identical throughout the trial (Figure 6B). The initial motion gain was defined as the slope of a linear regression of these differences on the offsets between the first and second motion direction (Figure 6C). In other words, an initial motion gain of unity is obtained when depth order is solely determined by the initial motion direction, and it is zero when depth order is solely determined by the motion after the change of direction. 
Results
We calculated the initial motion gain as the probability that depth order is determined by the motion direction before the change (see definition above in Methods). The initial motion gain increased from 0.22 (± 0.16) to 0.45 (± 0.21) and 0.52 (± 0.24) at 20, 40, and 80 ms, respectively, F(2, 22) = 29.70, p < 0.001 (Figure 6C). Even at the shortest duration of 20 ms, the gain of the first motion direction was significantly larger than zero, t(11) = 4.63, p = 0.001. At the same time, the magnitude of directional biases decreased only slightly from 0.98 (± 0.40) when motion direction did not change to 0.89 (± 0.40), 0.77 (± 0.24), and 0.77 (± 0.22) when motion direction changed after 20, 40, and 80 ms, respectively, F(3, 33) = 3.31, p = 0.032 (Figure 6D). This means that, quite remarkably, an extremely brief motion impulse is sufficient to trigger depth order preferences and even a change of motion direction by 45° does not completely reset the perceived depth order. 
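The probability interpretation can be pictured with a toy latch model (entirely our own sketch, with an assumed exponential decision-time distribution and an assumed 80-ms time constant; the paper fits no such model): if depth order is latched at a random internal decision time, the initial motion gain is simply the probability that this decision precedes the direction change.

```python
import numpy as np

rng = np.random.default_rng(0)

def initial_motion_gain(change_ms, tau_ms=80.0, n=100_000):
    """Toy latch model: depth order is fixed at a random decision time
    (exponential, mean tau_ms). The initial motion gain then equals the
    probability that the decision precedes the direction change."""
    decision_times = rng.exponential(tau_ms, n)
    return float(np.mean(decision_times < change_ms))

# Gains rise with the delay of the direction change, qualitatively
# matching the observed increase from 20 to 80 ms.
gains = [initial_motion_gain(t) for t in (20, 40, 80)]
```

The quantitative values depend entirely on the assumed distribution; only the rising trend with change delay is the point of the sketch.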
Discussion
The results of Experiments 5 to 7 showed that depth order is determined very quickly and kept for one dominance period, even if the physical or perceived motion direction changes. This is consistent with previous results on structure-from-motion showing that depth ordering is preserved even when motion direction is changed (Pastukhov & Braun, 2013). The rapid determination of depth order, as shown in Experiments 5 to 7, means that initial depth ordering will only be affected by surface features that are processed fast enough. However, this requirement on the speed of processing presumably does not apply to the reversal rate during prolonged viewing, because by the time of the first or second reversal there has been enough time to fully analyze the stimulus. This dissociation might explain why some surface cues, such as dot size, have been found to be effective for reversal rates during prolonged viewing (Moreno-Bote et al., 2008), but not for biases in initial depth ordering (Schütz, 2011). 
Figure 6
 
Influence of early motion direction on direction bias. (A) Depth order judgments as a function of the direction of the second motion for one observer. The different colors denote different durations of the first motion direction. The different symbols denote different offsets between the first and second motion direction. Symbols represent data and lines represent the fit of the model. Vertical lines indicate preferred directions seen in front. (B) Change in preferred direction relative to the preferred direction when motion direction does not change as a function of the offset between first and second motion. The horizontal black line indicates data if only the second motion direction matters and the first motion direction has no influence on depth ordering. The diagonal black line indicates data if only the first motion direction matters for depth ordering. (C) Gain of first motion direction as a function of duration of first motion direction. A value of unity means that only the first motion direction matters; a value of zero means that only the second motion direction matters. (D) Magnitude of directional biases as a function of duration of first motion direction. Data are averaged across different offsets between the first and second motion direction. (C, D) Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals.
General discussion
Replicating previous studies (Mamassian & Wallace, 2010; Schütz, 2011, 2012, 2014; Wexler et al., 2015), we found directional preferences in depth ordering in transparent motion (Figure 1). Here we investigated how these directional preferences interact with stereoscopic disparity and from which motion signals they originate. When stereoscopic disparity was added, directional preferences and disparity were equally effective at about one arcmin, and the depth ordering based on directional preferences and stereoscopic disparity depended in the same way on presentation duration (Figure 2), which points towards a common processing of directional preferences and stereoscopic disparity. Furthermore, we found no evidence that global signals, such as the aperture location in the visual field (Figure 3) or the motion direction of the apertures (Figure 4), affect depth ordering. When the individual elements were lines rather than dots, directional preferences were shifted in the 1D motion direction, orthogonal to the line orientation, rather than in the actual 2D motion direction (Figure 5). At the same time, the perceived motion direction was affected by line orientation to a lesser degree. When the motion duration was varied, it became evident that depth order is determined in one shot, based on an early local motion signal; this should be contrasted with perceived motion direction, which is continuously updated (Debono, Schütz, & Gegenfurtner, 2012). In the same way, depth ordering was affected by the initial motion direction, even when it changed after 20 to 80 ms of motion (Figure 6). These results suggest that the directional preferences in depth ordering are caused by early local motion signals and persist afterwards, thereby avoiding rapid alternations in the interpretation of depth order. 
Neural basis of depth order in transparent motion
Our finding that depth order preferences were determined by an early local motion signal when the aperture problem was not yet resolved points toward V1 and area MT as potential neural substrates. This is compatible with existing knowledge about the common processing of motion and depth in area MT. It has been shown that neurons in the middle temporal area (MT) are selective for motion direction (Maunsell & Van Essen, 1983a) and disparity (Maunsell & Van Essen, 1983b), and are causally involved in the perception of motion direction (Salzman, Murasugi, Britten, & Newsome, 1992) and depth (DeAngelis, Cumming, & Newsome, 1998). Consequently, the apparent depth organization of structure-from-motion can be decoded from activity in MT (Bradley, Chang, & Andersen, 1998; Dodd, Krug, Cumming, & Parker, 2001) and its human homologue (Brouwer & van Ee, 2007). More specifically, the two motion directions in transparent motion are only represented by pattern cells but not by component cells, which signal the motion average of the two motion directions (McDonald, Clifford, Solomon, Chen, & Solomon, 2014). Based on these observations, it is likely that the directional preferences originate in area MT. For instance, an imbalance in the joint representation of motion direction and depth in area MT could result in the observed directional preferences. Alternatively, neurons in MT could inherit directional preferences from neurons in V1. 
Magnitude of directional preferences and direction tuning
The magnitude of directional preferences provides a measurement of the potential impact on perception and might also be revealing about the underlying mechanism leading to these preferences. 
The directional preferences corresponded to a stereoscopic disparity of about one arcmin in Experiment 1, which is well above stereoscopic thresholds (Coutant & Westheimer, 1993). However, in this experiment, the average magnitude of directional preferences (parameter bdir in Equation 1) at zero disparity (0.19) was considerably smaller than the average magnitude of preferences in Experiments 3 to 7 (0.87). Hence, the directional preferences in the latter experiments might have corresponded to even more than one arcmin of disparity. The difference in magnitudes might be due to different setups, different populations of observers, or to the fact that in the disparity experiments motion direction was pitted against disparity within a block, which by itself could weaken the trust in the direction cue. However, these differences do not affect the conclusions drawn from these experiments. 
The average magnitude of directional preferences across Experiments 3 to 7 (0.87) can also be compared to typical sensitivity to motion direction. On the basis of the first derivative of the preference function, we can calculate the bandwidth of the direction tuning, resulting in a full width at half height of about 27° or a just-noticeable difference (JND) of about 8.6°. For comparison, the direction tuning bandwidths in macaque V1 and MT average 68° and 91°, respectively (Albright, 1984), while JNDs for direction discrimination in humans can be as low as 1.8° (De Bruyn & Orban, 1988). The minimal separation in direction that is necessary to perceive transparent motion depends on the task and stimulus conditions and ranges between 25° and 45° (Smith, Curran, & Braddick, 1999; Braddick, Wishart, & Curran, 2002; Greenwood & Edwards, 2007). Thus, depth ordering is narrowly tuned to specific directions, but its tuning does not reach the lower limit set by direction discrimination thresholds.
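As a minimal numerical sketch of how a full width at half height can be read off a tuning profile, the following assumes a Gaussian sensitivity profile with an arbitrarily chosen width (a simplification for illustration; the paper's actual preference function and JND convention may differ):

```python
import numpy as np

def full_width_half_height(theta, profile):
    """Width of the region where a single-peaked profile exceeds
    half of its maximum, measured on a dense direction grid."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return theta[above[-1]] - theta[above[0]]

# Hypothetical Gaussian sensitivity profile with sigma = 11.5 deg;
# its analytic FWHM is 2*sqrt(2*ln 2)*sigma, i.e. about 27 deg,
# in the range of the bandwidth reported above.
theta = np.linspace(-90.0, 90.0, 3601)   # 0.05-deg grid
sigma = 11.5
profile = np.exp(-0.5 * (theta / sigma) ** 2)
fwhm = full_width_half_height(theta, profile)
```

The grid-based estimate agrees with the analytic FWHM to within the grid spacing; for a non-Gaussian preference function the same numerical procedure applies, but the mapping from bandwidth to JND depends on the assumed decision model.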
Consequences of depth order preferences in early motion analysis
Although depth order in transparent motion does not represent actual depth, and although the observed directional biases merely reflect an attempt to resolve an ambiguous depth order, they have clear consequences for perception and action. First, our results show that the directional biases correspond to a stereoscopic disparity of about one arcmin. This value is clearly above stereoscopic thresholds for our observers and across the population (Coutant & Westheimer, 1993), which indicates that directional preferences have a considerable impact on the perception of depth order. Second, depth order in transparent motion can affect the perception of surface features, for instance by leading to an overestimation of numerosity in the back layer and an underestimation in the front layer (Schütz, 2012). Third, depth order also affects actions; for instance, it biases the initial direction of smooth pursuit eye movements (Schütz, 2011). Fourth, the directional biases might constrain the effectiveness of surface cues in determining depth order; for instance, the impact of the relative speed of the two layers is constrained by the strength of the directional biases (Mamassian & Wallace, 2010).
The observed directional preferences in transparent motion are stable over several weeks (Mamassian & Wallace, 2010; Schütz, 2012) and can be rotated by an attention task only to a certain degree (Chopin & Mamassian, 2011). This suggests that the preferences for these types of stimuli are stable and difficult to modify (Wexler et al., 2015). However, directional biases seem to be more flexible for the ambiguous rotation of a Necker cube. For this kind of stimulus, biases can be created rather quickly by repeated presentation of unambiguous stimuli (Haijiang, Saunders, Stone, & Backus, 2006). Interestingly, this procedure can even lead to stable preferences in the long term (Harrison & Backus, 2014). This difference in adaptability could be either due to differences in stimuli or due to differences in the manipulation of biases. 
Acknowledgments
We thank Rosalie Böhme, Annelie Göhler and Svenja Orthen for help with data collection. ACS was supported in part by DFG grant SFB/TRR 135. PM was supported in part by ANR-10-LABX-0087 IEC and ANR-10-IDEX-0001-02 PSL*. 
Commercial relationships: none. 
Corresponding author: Alexander C. Schütz. 
Email: a.schuetz@uni-marburg.de. 
Address: Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Gutenbergstr. 18, 35032 Marburg, Germany. 
References
Afraz A, Pashkam M. V, Cavanagh P. (2010). Spatial heterogeneity in the perception of face and form attributes. Current Biology, 20, 2112–2116.
Albright T. D. (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. Journal of Neurophysiology, 52, 1106–1130.
Amano K, Edwards M, Badcock D. R, Nishida S. (2009). Adaptive pooling of visual motion signals by the human visual system revealed with a novel multi-element stimulus. Journal of Vision, 9 (3): 4, 1–25. doi:10.1167/9.3.4.
Born R. T, Pack C. C, Ponce C. R, Yi S. (2006). Temporal evolution of 2-dimensional direction signals used to guide eye movements. Journal of Neurophysiology, 95, 284–300.
Braddick O. J, Wishart K. A, Curran W. (2002). Directional performance in motion transparency. Vision Research, 42, 1237–1248.
Bradley D. C, Chang G. C, Andersen R. A. (1998). Encoding of three-dimensional structure-from-motion by primate area MT neurons. Nature, 392, 714–717.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10 (4), 433–436.
Brouwer G. J, van Ee R. (2007). Visual cortex allows prediction of perceptual states during ambiguous structure-from-motion. Journal of Neuroscience, 27, 1015–1023.
De Bruyn B, Orban G. A. (1988). Human velocity and direction discrimination measured with random dot patterns. Vision Research, 28 (12), 1323–1335.
Burr D, Thompson P. (2011). Motion psychophysics: 1985–2010. Vision Research, 51, 1431–1456.
Carter O, Cavanagh P. (2007). Onset rivalry: Brief presentation isolates an early independent phase of perceptual competition. PLoS One, 2, e343.
Caziot B, Valsecchi M, Gegenfurtner K. R, Backus B. T. (2015). Fast perception of binocular disparity. Journal of Experimental Psychology: Human Perception and Performance, 41, 909–916.
Chopin A, Mamassian P. (2011). Usefulness influences visual appearance in motion transparency depth rivalry. Journal of Vision, 11 (7): 18, 1–8. doi:10.1167/11.7.18.
Cornelissen F. W, Peters E. M, Palmer J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34 (4), 613–617.
Coutant B. E, Westheimer G. (1993). Population distribution of stereoscopic ability. Ophthalmic and Physiological Optics, 13 (1), 3–7.
Culham J, He S, Dukelow S, Verstraten F. A. (2001). Visual motion and the human brain: What has neuroimaging told us? Acta Psychologica (Amsterdam), 107 (1–3), 69–94.
Cumming B. G, Parker A. J. (1999). Binocular neurons in V1 of awake monkeys are selective for absolute, not relative, disparity. Journal of Neuroscience, 19, 5602–5618.
De Valois R. L, Yund E. W, Hepler N. (1982). The orientation and direction selectivity of cells in macaque visual cortex. Vision Research, 22, 531–544.
DeAngelis G. C, Cumming B. G, Newsome W. T. (1998). Cortical area MT and the perception of stereoscopic depth. Nature, 394, 677–680.
Debono K, Schütz A. C, Gegenfurtner K. R. (2012). Illusory bending of a pursuit target. Vision Research, 57, 51–60.
Dodd J. V, Krug K, Cumming B. G, Parker A. J. (2001). Perceptually bistable three-dimensional figures evoke high choice probabilities in cortical area MT. Journal of Neuroscience, 21, 4809–4821.
Duffy C. J, Wurtz R. H. (1991). Sensitivity of MST neurons to optic flow stimuli. I. A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology, 65, 1329–1345.
Duffy C. J, Wurtz R. H. (1993). An illusory transformation of optic flow fields. Vision Research, 33, 1481–1490.
Dumoulin S. O, Wandell B. A. (2008). Population receptive field estimates in human visual cortex. NeuroImage, 39, 647–660.
Gibson J. J. (1950). The perception of the visual world. Boston, MA: Houghton Mifflin.
Greenwood J. A, Edwards M. (2006). Pushing the limits of transparent-motion detection with binocular disparity. Vision Research, 46, 2615–2624.
Greenwood J. A, Edwards M. (2007). An oblique effect for transparent-motion detection caused by variation in global-motion direction-tuning bandwidths. Vision Research, 47, 1411–1423.
Haijiang Q, Saunders J. A, Stone R. W, Backus B. T. (2006). Demonstration of cue recruitment: Change in visual appearance by means of Pavlovian conditioning. Proceedings of the National Academy of Sciences, USA, 103, 483–488.
Harrison S. J, Backus B. T. (2014). A trained perceptual bias that lasts for weeks. Vision Research, 99, 148–153.
Hedges J. H, Gartshteyn Y, Kohn A, Rust N. C, Shadlen M. N, Newsome W. T, Movshon J. A. (2011). Dissociation of neuronal and psychophysical responses to local and global motion. Current Biology, 21, 2023–2028.
Hibbard P. B, Bradshaw M. F. (1999). Does binocular disparity facilitate the detection of transparent motion? Perception, 28, 183–191.
Huang X, Albright T. D, Stoner G. R. (2007). Adaptive surround modulation in cortical area MT. Neuron, 53, 761–770.
Kane D, Guan P, Banks M. S. (2014). The limits of human stereopsis in space and time. Journal of Neuroscience, 34, 1397–1408.
Lappe M, Bremmer F, van den Berg A. V. (1999). Perception of self-motion from visual flow. Trends in Cognitive Sciences, 3, 329–336.
Lorenceau J, Shiffrar M, Wells N, Castet E. (1993). Different motion sensitive units are involved in recovering the direction of moving lines. Vision Research, 33, 1207–1217.
Lu Z. L, Sperling G. (2001). Three-systems theory of human visual motion perception: Review and update. Journal of the Optical Society of America A, 18, 2331–2370.
Mamassian P, Wallace J. M. (2010). Sustained directional biases in motion transparency. Journal of Vision, 10 (13): 23, 1–12. doi:10.1167/10.13.23.
Masson G. S, Stone L. S. (2002). From following edges to pursuing objects. Journal of Neurophysiology, 88, 2869–2873.
Maunsell J. H, Van Essen D. C. (1983a). Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. Journal of Neurophysiology, 49, 1127–1147.
Maunsell J. H, Van Essen D. C. (1983b). Functional properties of neurons in middle temporal visual area of the macaque monkey. II. Binocular interactions and sensitivity to binocular disparity. Journal of Neurophysiology, 49, 1148–1167.
McDonald J. S, Clifford C. W. G, Solomon S. S, Chen S. C, Solomon S. G. (2014). Integration and segregation of multiple motion signals by neurons in area MT of primate. Journal of Neurophysiology, 111, 369–378.
Moreno-Bote R, Shpiro A, Rinzel J, Rubin N. (2008). Bi-stable depth ordering of superimposed moving gratings. Journal of Vision, 8 (7): 20, 1–13. doi:10.1167/8.7.20.
Morrone M. C, Tosetti M, Montanaro D, Fiorentini A, Cioni G, Burr D. C. (2000). A cortical area that responds specifically to optic flow, revealed by fMRI. Nature Neuroscience, 3, 1322–1328.
Movshon J. A, Newsome W. T. (1996). Visual response properties of striate cortical neurons projecting to area MT in macaque monkeys. Journal of Neuroscience, 16, 7733–7741.
Nishida S. (2011). Advancement of motion psychophysics: Review 2001–2010. Journal of Vision, 11 (5): 11, 1–53. doi:10.1167/11.5.11.
Pack C. C, Born R. T. (2001). Temporal dynamics of a neural solution to the aperture problem in visual area MT of macaque brain. Nature, 409, 1040–1042.
Pack C. C, Livingstone M. S, Duffy K. R, Born R. T. (2003). End-stopping and the aperture problem: Two-dimensional motion signals in macaque V1. Neuron, 39, 671–680.
Pastukhov A, Braun J. (2013). Structure-from-motion: Dissociating perception, neural persistence, and sensory memory of illusory depth and illusory rotation. Attention, Perception, & Psychophysics, 75, 322–340.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10 (4), 437–442.
Qian N, Andersen R. A. (1994). Transparent motion perception as detection of unbalanced motion signals. II. Physiology. Journal of Neuroscience, 14, 7367–7380.
Qian N, Andersen R. A, Adelson E. H. (1994). Transparent motion perception as detection of unbalanced motion signals. I. Psychophysics. Journal of Neuroscience, 14, 7357–7366.
Richards W. (1972). Response functions for sine- and square-wave modulations of disparity. Journal of the Optical Society of America, 62, 907.
Roy J. P, Komatsu H, Wurtz R. H. (1992). Disparity sensitivity of neurons in monkey extrastriate area MST. Journal of Neuroscience, 12, 2478–2492.
Salzman C. D, Murasugi C. M, Britten K. H, Newsome W. T. (1992). Microstimulation in visual area MT: Effects on direction discrimination performance. Journal of Neuroscience, 12, 2331–2355.
Schütz A. C. (2011). Motion transparency: Depth ordering and smooth pursuit eye movements. Journal of Vision, 11 (14): 21, 1–19. doi:10.1167/11.14.21.
Schütz A. C. (2012). There's more behind it: Perceived depth order biases perceived numerosity/density. Journal of Vision, 12 (12): 9, 1–16. doi:10.1167/12.12.9.
Schütz A. C. (2014). Interindividual differences in preferred directions of perceptual and motor decisions. Journal of Vision, 14 (12): 16, 1–17. doi:10.1167/14.12.16.
Smith A. T, Curran W, Braddick O. J. (1999). What motion distributions yield global transparency and spatial segmentation? Vision Research, 39, 1121–1132.
Smith A. T, Greenlee M. W, Singh K. D, Kraemer F. M, Hennig J. (1998). The processing of first- and second-order motion in human visual cortex assessed by functional magnetic resonance imaging (fMRI). The Journal of Neuroscience, 18, 3816–3830.
Smith A. T, Wall M. B, Williams A. L, Singh K. D. (2006). Sensitivity to optic flow in human cortical areas MT and MST. European Journal of Neuroscience, 23, 561–569.
Thaler L, Schütz A. C, Goodale M. A, Gegenfurtner K. R. (2013). What is the best fixation target? The effect of target shape on stability of fixational eye movements. Vision Research, 76, 31–42.
Uttal W. R, Fitzgerald J, Eskin T. E. (1975). Parameters of tachistoscopic stereopsis. Vision Research, 15, 705–712.
Vaina L. M, Cowey A, Kennedy D. (1999). Perception of first- and second-order motion: Separable neurological mechanisms? Human Brain Mapping, 7 (1), 67–77.
Vaina L. M, Soloviev S. (2004). First-order and second-order motion: Neurological evidence for neuroanatomically distinct systems. Progress in Brain Research, 144, 197–212.
Wallach H. (1935). Über visuell wahrgenommene Bewegungsrichtung. Psychologische Forschung, 20 (1), 325–380.
Wallach H, O'Connell D. N. (1953). The kinetic depth effect. Journal of Experimental Psychology, 45 (4), 205–217.
Wexler M, Duyck M, Mamassian P. (2015). Persistent states in vision break universality and time invariance. Proceedings of the National Academy of Sciences, USA, 112, 14990–14995.
Figure 1
 
Overview of experiments and individual directional biases in depth order. (A) Distribution of R2 of the model for the datasets in (B). The vertical line indicates the median R2 of 0.95. (B) The graph shows the preferred directions seen in front (θm) and the magnitude of these directional biases (bdir) in the baseline conditions of all experiments. Not all of the data points represent independent measurements, because some observers participated in more than one experiment and consequently will appear several times in the graph. The following baseline conditions were selected from each experiment: Experiment 1: condition with zero disparity; Experiment 2: duration of 600 ms; Experiment 3: aperture located at the center of the screen; Experiment 4: local and global motion in the same direction; Experiment 5: lines moving orthogonal to their orientation; Experiment 6: lines moving orthogonal to their orientation, duration of 400 ms; Experiment 7: no change in motion direction.
Figure 2
 
Pitting direction bias against disparity. (A) Depth order preferences as a function of motion direction of the negative disparity layer for one observer. The different colors denote different magnitudes of stereoscopic disparity. (B) Magnitude of direction (bdir) and disparity biases (bdisp) as a function of stereoscopic disparity. (C) Depth order preferences as a function of motion direction of the negative disparity layer for one observer. The different colors denote different motion durations in seconds. (D) Magnitude of direction and disparity biases as a function of presentation duration. (A, C) Symbols represent data and lines represent the fit of the model. (B, D) Direction and disparity biases are displayed in blue squares and magenta triangles, respectively. Error bars indicate 95% confidence intervals.
Figure 3
 
Influence of aperture location on direction bias. (A) Illustration of tested locations (see also Movie 1). The dashed lines indicate possible aperture locations. The thin lines indicate the axes of optic flow on the retina based on forward (blue arrows) or backward locomotion (red arrows) towards the fixation location. The gray arrows indicate the motion direction of black and white dots in one trial. (B) Preferred directions for the different aperture locations. Directions of optic flow for backward and forward locomotion are indicated by the red and blue lines, respectively. (C) Visual field gain over R2. Positive values indicate an alignment of direction biases with motion directions caused by forward locomotion; negative values with motion directions caused by backward locomotion. (D) Correlation between the average magnitude of preferences (parameter bdir in Equation 4) and the standard deviation of preferred directions across the five tested locations. The black line represents a linear regression. (B–D) Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals.
Figure 4
 
Influence of aperture motion on direction bias. (A) Depth order preferences as a function of local motion direction for one observer. The different colors denote different separations between local and global motion direction. Symbols represent data and lines represent the fit of the model. (B) Preferred motion direction as a function of the difference between global and local motion direction. Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals. (C) Shift of preferred motion direction relative to the preference in the 0° condition. The horizontal line indicates a global motion gain of zero and the diagonal line, a global motion gain of unity. (D) Gain of global motion direction.
Figure 5
 
Contrasting 1D versus 2D motion signals. (A) Change in direction relative to the condition in which the line is moving orthogonal to its orientation (0°). The horizontal black line indicates data if only the 2D motion direction matters and line orientation has no influence on perception. The diagonal black line indicates data if only 1D motion direction matters. Perceived motion direction and depth order are displayed in red diamonds and blue squares, respectively. (B) Line orientation gain. A value of unity means that only the 1D motion direction matters; a value of zero means that only the 2D motion direction matters. (C) Line orientation gain as a function of motion duration. The lines show linear fits to the data. The black decay function represents the average decay of 1D gain of pursuit eye movements in response to diamond stimuli, taken from Masson & Stone (2002). This function is shifted leftwards by 100 ms (as an estimate of smooth pursuit latency) to account for the delay in motor execution. (D) Intercepts from the linear fits from (C). (E) Slopes from the linear fits from (C). (A–E) Data of individual observers are plotted in light colors and the average across observers in dark colors. Error bars indicate 95% confidence intervals.
Figure 6
 
Influence of early motion direction on direction bias. (A) Depth order judgments as a function of the direction of the second motion for one observer. The different colors denote different durations of the first motion direction. The different symbols denote different offsets between the first and second motion direction. Symbols represent data and lines represent the fit of the model. Vertical lines indicate preferred directions seen in front. (B) Change in preferred direction relative to the preferred direction when motion direction does not change as a function of the offset between first and second motion. The horizontal black line indicates data if only the second motion direction matters and the first motion direction has no influence on depth ordering. The diagonal black line indicates data if only the first motion direction matters for depth ordering. (C) Gain of first motion direction as a function of duration of first motion direction. A value of unity means that only the first motion direction matters; a value of zero means that only the second motion direction matters. (D) Magnitude of directional biases as a function of duration of first motion direction. Data are averaged across different offsets between the first and second motion direction. (C, D) Data of individual observers are plotted in gray and the average across observers in black. Error bars indicate 95% confidence intervals.