**To perceive multiple overlapping surfaces in the same location of the visual field (transparency), the visual system must determine which surface elements belong together and should be integrated, and which should be kept apart. Spatial relations between surfaces, such as depth order, must also be determined. This article details two experiments examining the interaction of motion direction and disparity cues on the perception of depth order and surface segmentation in transparency. In Experiment 1, participants were presented with random-dot stereograms in which transparent planes were defined by differences in motion direction and disparity. Participants reported the direction of motion of the front surface. Results revealed marked effects of motion direction on perceived depth order. These biases interacted with disparity in an additive manner, suggesting that the visual system integrates motion direction with other available cues to surface segmentation. This possibility was tested further in Experiment 2. Participants were presented with two intervals: one containing motion- and disparity-defined transparent planes, the other containing a volume of moving dots. Interplane disparity was varied to find thresholds for the correct identification of the transparent interval. Thresholds depended on motion direction: they were lower when disparities and directions in the transparency interval matched participants' preferred depth order than when disparity and direction were in conflict. These results suggest that motion direction influences the judgment of depth order even in the presence of other visual cues, and that the assignment of depth order may play an important role in segmentation.**


*A* or *B*, and obtained the proportion of trials on which surface *A* was perceived as being in front. For manipulations of disparity, we defined positive values as those that moved surface *A* closer to, and surface *B* farther from, the observer, increasing the probability that surface *A* would be perceived as in front of surface *B*. Negative disparity values were those that moved surface *A* farther from, and surface *B* closer to, the observer, increasing the probability that surface *A* would be perceived as behind surface *B* (Figure 1d). This encoding of the data allowed for the plotting of six psychometric functions, describing the effects of disparity on perceived depth order for motion orientations from −90° (surface *A* moving to the left), through 180° (surface *A* moving downward), to 120° (surface *A* moving down and to the right). These functions are shown, for an example observer, in Figure 2a. Note that functions for the complementary designation of surfaces *A* and *B* may be obtained by reversing the sign on the disparity axis and subtracting each proportion-in-front score from 1.
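The complementary encoding described above can be sketched numerically. This is a hypothetical illustration, not the authors' analysis code; the disparity and proportion values are invented:

```python
# Hypothetical illustration of the complementary encoding described above:
# proportion-in-front scores for surface B follow from those for surface A
# by reversing the sign on the disparity axis and subtracting each score from 1.

def complementary_function(disparities, p_a_front):
    """Return (disparities, proportions) for surface B given data for surface A."""
    flipped = [(-d, 1.0 - p) for d, p in zip(disparities, p_a_front)]
    flipped.sort(key=lambda pair: pair[0])   # keep the disparity axis ascending
    disp_b = [d for d, _ in flipped]
    p_b = [p for _, p in flipped]
    return disp_b, p_b

disp = [-2.0, -1.0, 0.0, 1.0, 2.0]       # signed interplane disparities (arbitrary units)
p_a = [0.05, 0.20, 0.60, 0.85, 0.95]     # made-up proportion "A in front" scores

disp_b, p_b = complementary_function(disp, p_a)
```

For these made-up scores, the surface-*B* function mirrors the surface-*A* function about the point (0, 0.5), as the text implies.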

*a* and *b* describing, respectively, the scale and direction of the bias. Functions describing these cosine relationships are given for zero disparity response biases in Equation 1, and for changes in PSE in Equation 2.

*a* ≤ 1 and −*π* ≤ *b* ≤ *π*, where *x* defines the direction of motion in radians. For PSEs, *a* ≥ 0. From these equations, the directional bias may be defined as the peak of the function for zero disparity responses, and as the function minimum for PSEs. The directional bias is thus the direction most likely to be perceived as in front for zero disparity responses, and, for PSEs, the direction for which the greatest opposing (i.e., negative) disparity must be added to counteract the directional bias.
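Equations 1 and 2 are not reproduced in this excerpt, so the following sketch assumes a plain cosine form — zero-disparity bias *a* cos(*x* − *b*), and the PSE function obtained by replacing *b* with *π* + *b* — which reproduces the property described above: the bias direction *b* is the peak of the first function and the minimum of the second. Parameter values are invented:

```python
import math

# Assumed forms (the exact Equations 1 and 2 are not shown in this excerpt):
#   zero-disparity bias: f(x) = a * cos(x - b)          -> peak at x = b
#   PSE:                 g(x) = a * cos(x - (pi + b))   -> minimum at x = b

def cosine_bias(x, a, b):
    return a * math.cos(x - b)

def cosine_pse(x, a, b):
    return a * math.cos(x - (math.pi + b))

a, b = 0.4, math.pi / 3                         # hypothetical scale and bias direction
grid = [2 * math.pi * k / 3600 - math.pi for k in range(3600)]

peak = max(grid, key=lambda x: cosine_bias(x, a, b))    # direction most seen in front
trough = min(grid, key=lambda x: cosine_pse(x, a, b))   # direction needing most opposing disparity
# Both extrema recover the fitted bias direction b (within grid resolution).
```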

*a* is a scaling parameter indicating the strength of the bias, *b* is the direction of the bias, *c* is a constant used to shift the curve on the *y*-axis, and *d* alters the shape of the curve, allowing for differences in observer sensitivity to small changes in motion orientation. Note that, unlike the cosine fit, the strength of directional bias for the logit fit is determined by a combination of the *a* and *d* parameters. For PSEs, *b* is replaced by *π* + *b*, as in Equation 2. Logit fits for zero disparity and PSE measures are shown, alongside cosine fits, in Figure 2b and c.

*a* (summarized in Table 1 for both zero disparity responses and PSEs).

*θ* and *n*. Under this definition, for any cue *i*, *θ*_{i} is the expected value of the distribution for that cue, and *n*_{i} is the assigned cue weight; as *n*_{i} increases in value, so too does cue reliability. The *θ* and *n* parameters together define the standard *α* and *β* parameters of the beta distribution, where *α* = *nθ* and *β* = *n*(1 − *θ*). Note that, contrary to more typical weighted averaging models of signal integration (Landy, Maloney, Johnston, & Young, 1995), cue weights in the MBE models need not sum to one.

*α* and *β* values are summed across each of *N* available cues, resulting in a posterior probability distribution *π*(*θ*), equivalent to a linear weighted average of available information sources (see Equation 4, or equation 1 from Backus, 2009).
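The (*θ*, *n*) parameterization and the summation across cues can be made concrete with a short sketch. The cue values are invented; the point is only that summing *α* and *β* across cues yields a posterior whose mean is the *n*-weighted average of the individual *θ* values:

```python
# Illustrative sketch of the beta-distribution parameterization described above.
# Each cue i contributes alpha_i = n_i * theta_i and beta_i = n_i * (1 - theta_i);
# summing alphas and betas across cues gives a posterior whose expected value
# is the n-weighted average of the theta values (made-up cue values below).

def beta_params(theta, n):
    return n * theta, n * (1.0 - theta)

cues = [(0.7, 20.0), (0.4, 10.0)]    # hypothetical (theta_i, n_i) pairs

alpha = sum(beta_params(t, n)[0] for t, n in cues)
beta = sum(beta_params(t, n)[1] for t, n in cues)

posterior_mean = alpha / (alpha + beta)
weighted_average = sum(n * t for t, n in cues) / sum(n for _, n in cues)
# Both equal 0.6 (up to floating point) for these values.
```

Because the weights *n*_{i} need not sum to one, a cue with a larger *n* simply pulls the posterior mean harder toward its own *θ*, without any renormalization step.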

*θ*, with *M* = 0.5 and *SD* = 0.05, and a fixed *n* value of *n* = 10. The posterior probability distribution *π*(*θ*) for the MBE model then becomes the sum of *N* available cues, including the random noise term. Here, we use the same deterministic decision rule as Backus (2009) to select between perceptual choices: the expected value of the posterior distribution *θ̂* is calculated on each trial, where *θ̂* = *α* / (*α* + *β*), and judged against the criterion of *θ̂* ≥ 0.5.
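The decision rule, together with the noise cue, can be sketched as a small simulation. All cue values here are hypothetical, and Backus (2009) should be consulted for the original formulation:

```python
import random

# Sketch of the deterministic decision rule described above, including the
# random noise cue (theta drawn with mean 0.5, SD 0.05, and fixed n = 10).
# The signal cue values are invented for illustration.

def decide_front(cues, rng):
    # Noise cue: a random theta near 0.5 with fixed weight n = 10.
    noise_theta = rng.gauss(0.5, 0.05)
    all_cues = cues + [(noise_theta, 10.0)]
    alpha = sum(n * t for t, n in all_cues)
    beta = sum(n * (1.0 - t) for t, n in all_cues)
    theta_hat = alpha / (alpha + beta)   # expected value of the posterior
    return theta_hat >= 0.5              # criterion from the text

rng = random.Random(1)
# A cue strongly favoring "surface A in front" should dominate the noise term:
trials = [decide_front([(0.9, 40.0)], rng) for _ in range(1000)]
p_front = sum(trials) / len(trials)
```

With a weak or conflicting signal cue, the noise term instead produces trial-to-trial variability in the response, which is what lets a deterministic criterion generate graded psychometric functions.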

*θ*. Changes in direction *x* were restated as sinusoidal changes in *θ*_{ori}, following Equation 1, fitting parameters *a*_{ori} and *b*_{ori} to define *θ*_{ori}. The parameter *n*_{ori} was also fitted to allow for a full definition of the beta distribution, conditional on orientation, *π*(*θ*_{ori} | *x*) (Equations 5 and 6). *R*^{2} values for these fits range from 0.88 to 0.93, with *M* = 0.91, indicating that the model of directional bias provides an excellent fit to the data (typically better than either cosine or logit fits). Note that, like the logit fit model, but unlike the standard cosine fit, fits derived from a beta-distributed representation of directional preference are able to account for variations in the extent to which small changes in orientation affect depth ordering. Summaries of the fitted parameters for the beta-distributed model of directional preference are provided in Table 1.

*δ* were restated as values of *θ*_{disp}, under the assumption that the two measures were linearly related, such that *θ*_{disp} = *a*_{disp}*δ* + 0.5. The full MBE model was then fit to each participant's data, using the previously fitted *a*_{ori}, *b*_{ori} and *n*_{ori} parameters for directional bias, together with free parameters *a*_{disp} and *n*_{disp}. Although free to vary between participants, *a*_{disp} and *n*_{disp} values were fixed across experimental conditions within participants. The results of this fitting process are summarized in Figure 4a through c. The full MBE model provided an excellent fit to participants' responses across all conditions and to measured PSEs. Bootstrapped 95% confidence intervals (CIs) for the *R*^{2} values for these fits ranged from 0.948 to 0.963, with *M* = 0.955, for proportional responses, and from 0.898 to 0.968, with *M* = 0.942, for PSEs. Although performance was somewhat poorer on this measure, the full MBE model also provided a good fit to the slopes for each fitted psychometric function. *R*^{2} values for this measure ranged from 0.493 to 0.802, with *M* = 0.690.

*R*^{2} values for MBE model responses were, on average, 0.072 higher than for the threshold model (95% CIs ranged from 0.057 to 0.097). The threshold model also did a noticeably poorer job of fitting observer PSEs and slopes. *R*^{2} values for fitted PSEs were, on average, 0.628 higher for the MBE model (95% CIs ranged from 0.679 to 0.874), while values for fitted slopes were an average of 0.595 higher (95% CIs ranged from 0.311 to 0.783). As a further analysis, Akaike Information Criteria (AIC) and associated Akaike weights were calculated for each model (Akaike, 1974; Burnham & Anderson, 2002). AIC values were calculated using mean squared error values on the model fits, where *AIC* = *n* ln *σ*^{2} + 2*k*, *k* is the number of model parameters, *σ*^{2} the mean squared error, and *n* the number of points at which model and observed responses are compared. Akaike weights for the MBE model approached 1, where each model's weight is defined as exp(−Δ/2), normalized over the models under comparison, with Δ that model's AIC difference from the minimum AIC (Burnham & Anderson, 2002).

*T* and *X* junctions in image parsing (Adelson, 1993; Dresp, Durand, & Grossberg, 2002; Kawabe & Miura, 2006; Metelli, Da Pos, & Cavedon, 1985), but has not typically been considered in the processing of motion transparency, where explanations have concerned themselves with the integration of local motion signals differing in direction, orientation, and spatial frequency (Curran, Hibbard, & Johnston, 2007; Kanai, Paffen, Gerbino, & Verstraten, 2004; Qian et al., 1994b; Raudies & Neumann, 2010; Smith, Curran, & Braddick, 1999; Snowden & Verstraten, 1999). Interestingly, Schütz (2012) found that perceived numerosity in transparency stimuli is affected by depth order, but not by disparity magnitude. As with our findings, this suggests that the assignment of depth order is itself a critical processing step in the segmentation of multiple surfaces. Further research is required on the mechanisms for, and consequences of, depth order assignment.

*Nature Neuroscience*, 7 (10), 1057–1058.

*Science*, 262, 2042–2044.

*Nature*, 300, 523–525.

*IEEE Transactions on Automatic Control*, AC-19, 716–723.

*Vision Research*, 47, 919–924.

*Nature*, 392, 714–717.

*Spatial Vision*, 10, 433–436.

*Journal of Neuroscience*, 30 (21), 7269–7280.

*Model selection and multimodel inference: A practical information-theoretic approach*. New York: Springer-Verlag.

*Perception*, 35, 1219–1232.

*Spatial Vision*, 4, 103–129.

*Proceedings of the Royal Society B: Biological Sciences*, 274, 1049–1056.

*Spatial Vision*, 15, 255–276.

*Vision Research*, 49, 660–670.

*Vision Research*, 46, 2615–2624.

*Vision Research*, 46, 1440–1449.

*Proceedings of the National Academy of Sciences of the United States of America*, 103 (2), 483–488.

*Vision Research*, 50, 1905–1911.

*Perception*, 28, 183–191.

*Vision Research*, 44, 2207–2212.

*Psychological Research*, 70, 375–383.

*Vision Research*, 33, 2479–2489.

*Perception*, 36, ECVP Abstract Supplement 14.

*Current Biology*, 23, 1454–1459.

*Vision Research*, 35, 389–412.

*Journal of Physiology*, 250, 347–366.

*Perception & Psychophysics*, 38, 354–366.

*Philosophical Transactions of the Royal Society B: Biological Sciences*, 357, 1053–1062.

*Spatial Vision*, 10, 437–442.

*Journal of Neuroscience*, 14 (12), 7357–7366.

*Journal of Neuroscience*, 14 (12), 7381–7392.

*Journal of Physiology – Paris*, 104, 71–83.

*Vision Research*, 39, 1121–1132.

*Trends in Cognitive Sciences*, 3 (10), 369–377.

*Proceedings of the National Academy of Sciences of the United States of America*, 112 (48), 14990–14995.

*Visual Neuroscience*, 11, 1205–1220.