**Early vision proceeds through distinct ON and OFF channels, which encode luminance increments and decrements respectively. It has been argued that these channels also contribute separately to stereoscopic vision: observers perform better on a noisy disparity discrimination task when the stimulus is a random-dot pattern consisting of equal numbers of black and white dots (a “mixed-polarity stimulus,” argued to activate both ON and OFF stereo channels) than when it consists of all-white or all-black dots (“same-polarity,” argued to activate only one). However, it is not clear how this theory can be reconciled with our current understanding of disparity encoding. Recently, a binocular convolutional neural network replicated the mixed-polarity advantage shown by human observers, even though it was based on linear filters and contained no mechanisms that would respond separately to black or white dots. Here, we show that a subtle feature of the way the stimuli were constructed in all these experiments can explain the results: the interocular correlation between left and right images is actually lower for same-polarity stimuli than for mixed-polarity stimuli with the same amount of disparity noise applied to the dots. Because current theories hold that stereopsis is based on a correlation-like computation in primary visual cortex, this difference in correlation can explain why performance was better for the mixed-polarity stimuli. We conclude that there is currently no evidence supporting separate ON and OFF channels in stereopsis.**

Consider an ideal observer that averages the disparities of \(N\) dots on either side of the step-edge. Despite the noise, if observers averaged enough dots on each side of the boundary, they would correctly judge the sign of the step. Harris and Parker (1995) worked out what \(N\) would have to be for this ideal observer to match the performance of their observers. They found that the implied number of dots was around twice as large for mixed-polarity stimuli as for same-polarity stimuli, representing a doubling of statistical efficiency. Harris and Parker related this to the problem of stereo correspondence. Before the disparity of a dot can be identified, it has to be successfully matched up with the corresponding dot in the other eye. When all the dots are the same color, each dot in one eye could potentially be the correct match for any dot in the other eye. But if the stereo system only matches black dots with black dots and white dots with white ones, the number of false matches is halved for mixed-polarity stimuli. This could enable more dots to be successfully matched up, increasing the number \(N\) of disparities which can be averaged on each side of the step-edge and so improving performance. They suggested that the independent processing of black and white dots could be mediated by the separate ON and OFF channels that are well established early in the visual system (Jiang, Purushothaman, & Casagrande, 2015; Schiller, 1992, 2010).

We quantify the similarity of the two eyes' images with the sample Pearson correlation coefficient \(r\). Let \(L_j\), \(R_j\) be the values of the \(j\)th pixel in the left and right eye respectively. Then the sample Pearson correlation coefficient is

\( r = \dfrac{\langle LR \rangle - \langle L \rangle \langle R \rangle}{\sqrt{\left(\langle L^2 \rangle - \langle L \rangle^2\right)\left(\langle R^2 \rangle - \langle R \rangle^2\right)}} \)  (1)

where the angle brackets denote averages over all pixels \(j\). Equation 1 describes the correlation at zero disparity, but a similar expression holds for any uniform disparity if \(\langle LR \rangle\) is computed between appropriately displaced pixels. In Figure 2 and Figure 3, we plot this correlation coefficient for zero-disparity images. In Figure 6, we plot it as a function of image displacement for disparate images.
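As a concrete illustration (not the paper's own code), the computation in Equation 1 can be sketched in Python. The helper `pearson_r` is hypothetical, and displacement is applied here with wrap-around `np.roll` for simplicity rather than the cropping appropriate for real stimuli:

```python
import numpy as np

def pearson_r(left, right):
    """Sample Pearson correlation (cf. Equation 1) over all pixel pairs."""
    L, R = left.ravel().astype(float), right.ravel().astype(float)
    cov = np.mean(L * R) - L.mean() * R.mean()
    var = (np.mean(L**2) - L.mean()**2) * (np.mean(R**2) - R.mean()**2)
    return cov / np.sqrt(var)

rng = np.random.default_rng(0)
left = rng.choice([-1.0, 0.0, 1.0], size=(128, 128))  # mixed-polarity dots on gray
right = np.roll(left, 3, axis=1)                      # uniform 3-pixel disparity

print(pearson_r(left, right))                         # near 0 at zero displacement
print(pearson_r(np.roll(left, 3, axis=1), right))     # ≈ 1.0 when displaced to match
```

Plotting this quantity against the displacement traces out a correlogram like those in Figure 6.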

The \(x\) and \(y\) coordinates of each dot were chosen from a uniform random distribution across the image. The appropriate disparity was then applied to the \(x\) coordinate. In the Overlap condition, the dot was then simply drawn at the resulting location, overwriting any pixels belonging to existing dots. In the No-overlap condition, we checked to see if this dot would overwrite any pixels belonging to existing dots. If it did, the dot was abandoned and a new one was chosen. This process was repeated until the desired number of dots had been placed.
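A minimal sketch of the two placement procedures, simplified to square dots drawn into one eye's image (`place_dots` and its parameters are illustrative, not the actual stimulus code):

```python
import numpy as np

def place_dots(im_size, dot_size, n_dots, allow_overlap, rng):
    """Scatter square dots at uniform random positions, per the Overlap /
    No-overlap procedures (real stimuli also apply a disparity shift)."""
    img = np.zeros((im_size, im_size))          # 0 = gray background
    occupied = np.zeros_like(img, dtype=bool)
    placed = 0
    while placed < n_dots:
        x = rng.integers(0, im_size - dot_size)
        y = rng.integers(0, im_size - dot_size)
        patch = np.s_[y:y + dot_size, x:x + dot_size]
        if not allow_overlap and occupied[patch].any():
            continue                            # abandon this dot, draw a new one
        img[patch] = rng.choice([-1.0, 1.0])    # mixed-polarity: black or white
        occupied[patch] = True
        placed += 1
    return img

rng = np.random.default_rng(1)
img = place_dots(241, 6, 100, allow_overlap=False, rng=rng)
print((img != 0).mean())   # exactly 100*36/241**2 ≈ 0.062 when no dots overlap
```

With overlap forbidden, the covered area is exactly the number of dots times the dot area; with overlap allowed, later dots occlude earlier ones and the same dot count covers less area.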

Each pixel then has probability \(d\) of being covered by a dot. If none of the dots overlap, then the number of dots required is \(d A_{im}/A_{dot}\), where \(A_{im}/A_{dot}\) is the ratio of the area of the whole image to the area of a single dot. If dots are allowed to overlap, obviously more dots are required in order to achieve the same probability. It can be shown that the required number is then \(\ln(1-d)/\ln(1 - A_{dot}/A_{im})\).

Each image was then normalised to zero mean and unit variance. This does not affect the correlation coefficient \(r\) (Equation 1), which is unchanged when a constant luminance offset is added to both images or when both images are scaled. The normalisation of variance ensures that all images have the same contrast energy. Without this step, same-polarity stimuli would have lower contrast than mixed-polarity. We wished to study the response of model neurons to correlation differences in the stimuli, unconfounded by changes in contrast.
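The offset- and scale-invariance of \(r\) is easy to confirm numerically (a sketch with an arbitrary test pattern, not the actual stimuli):

```python
import numpy as np

def pearson_r(left, right):
    """Sample Pearson correlation over pixels (cf. Equation 1)."""
    L, R = left.ravel().astype(float), right.ravel().astype(float)
    cov = np.mean(L * R) - L.mean() * R.mean()
    var = (np.mean(L**2) - L.mean()**2) * (np.mean(R**2) - R.mean()**2)
    return cov / np.sqrt(var)

rng = np.random.default_rng(2)
L = rng.choice([0.0, 1.0], size=(64, 64))               # same-polarity pattern
R = np.where(rng.random((64, 64)) < 0.9, L, 1 - L)      # partially decorrelated copy

r0 = pearson_r(L, R)
r_offset = pearson_r(L + 0.3, R + 0.3)   # constant luminance offset
r_scaled = pearson_r(2.0 * L, 2.0 * R)   # contrast scaling
print(r0, r_offset, r_scaled)            # all three agree
```

This is why the normalisation can equate contrast energy across conditions without altering the interocular correlations under study.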

In the experiments of Read et al. (2011), the dots, scattered without overlap, occupied 0.28 of the stimulus area; the step size was ∼3 arcmin and the noise ∼2 arcmin (precise values differed between observers, depending on what was needed to bring their performance to around 75% correct on average). In our simulation, the images were 241 × 241 pixels, and the dot density was the same as in those experiments, i.e., \(d = 0.28\) in the No Overlap condition and correspondingly lower in the Overlap condition. We took 1 pixel to represent 0.5 arcmin and made the dot size 6 pixels, stimulus disparity 6 pixels, and disparity noise 4 pixels.

The receptive-field wavelength \(λ\) was 128 pixels or 1.1°. For a model neuron with position disparity \(x_0\), we computed the inner product of each of these receptive fields, appropriately shifted, with the monocular images, yielding \(v_{Le}\) and \(v_{Re}\) for the even-symmetric receptive field in the left and right eyes, and correspondingly \(v_{Lo}\) and \(v_{Ro}\) for the odd-symmetric receptive field. We considered various possible model V1 neurons, as follows:

- ODF TE: \(R = \left( v_{Le} + v_{Re} \right)^2\)
- ODF TI: \(R = \left( v_{Le} - v_{Re} \right)^2\)
- ODF ODD: \(R = \left( v_{Le} + v_{Ro} \right)^2\)
- RPC TE: \(R = \left( \left\lfloor v_{Le} \right\rfloor + \left\lfloor v_{Re} \right\rfloor \right)^2\)
- RPC ODD: \(R = \left\lfloor \left\lfloor v_{Lo} \right\rfloor - \left\lfloor v_{Re} \right\rfloor \right\rfloor^2\)

where \(\lfloor x \rfloor = x\) if \(x > 0\) and is 0 otherwise. Within each neuron class we simulated a population of neurons with different position disparities \(x_0\) (from −20 to +20 pixels in steps of 1 pixel). The tuning curves shown in Figure 7 represent each neuron's mean response to 10,000 different random-dot patterns with the same disparity, normalised by that neuron's mean response to binocularly uncorrelated stimuli.
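These model neurons can be sketched in 1-D as follows. The Gabor parameters, the sign conventions, and the helper names (`gabor`, `model_responses`) are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def gabor(x, x0, sigma=16.0, lam=128.0, phase=0.0):
    """1-D Gabor receptive field centred on x0 (illustrative parameters)."""
    return np.exp(-(x - x0)**2 / (2 * sigma**2)) * np.cos(2 * np.pi * (x - x0) / lam + phase)

def model_responses(L, R, x0):
    x = np.arange(L.size, dtype=float)
    c = L.size / 2
    v_le = gabor(x, c) @ L                       # even-symmetric field, left eye
    v_lo = gabor(x, c, phase=np.pi / 2) @ L      # odd-symmetric field, left eye
    v_re = gabor(x, c + x0) @ R                  # right-eye fields shifted by x0
    v_ro = gabor(x, c + x0, phase=np.pi / 2) @ R
    pos = lambda v: max(v, 0.0)                  # half-wave rectification ⌊·⌋
    return {
        "ODF TE": (v_le + v_re)**2,
        "ODF TI": (v_le - v_re)**2,
        "ODF ODD": (v_le + v_ro)**2,
        "RPC TE": (pos(v_le) + pos(v_re))**2,
        "RPC ODD": pos(pos(v_lo) - pos(v_re))**2,
    }

rng = np.random.default_rng(3)
L = rng.choice([-1.0, 1.0], size=256)
R = np.roll(L, 4)                                # uniform 4-pixel disparity
r = model_responses(L, R, x0=4)                  # position disparity matches stimulus
print(r["ODF TE"] > r["ODF TI"])                 # True: TE excited, TI suppressed
```

Sweeping `x0` over a range of values for many random patterns, and normalising by the response to uncorrelated stimuli, would produce tuning curves of the kind shown in Figure 7.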

There are nine possible combinations of pixel values \((L, R)\). For matched pixel-pairs (A), only three of the nine are possible: pixels may be background in both eyes (color-coded gray in the figure), or have dots of the same polarity in both eyes (green). For unmatched pixel-pairs (B), we also find pixel-pairs that have a dot in one eye but not the other (pink) and pairs that have dots of opposite polarity (blue), as well as pairs that have dots of the same polarity by chance (green). By inspection, the matched pixels have correlation 1, and the unmatched pixels have correlation 0.

We define \(d\) to be the probability that any given pixel in a monocular image is covered by a dot, as opposed to being part of the background. In general, this will be different for matching versus nonmatching pixel pairs. We thus define \(d_m\) to be the probability that a given pixel in the left eye is covered by a dot, *given* that this pixel belongs to a matched pair (n.b. a pixel covered by the background in both eyes is a matched pair). Similarly, we define \(d_u\) to be the probability that a given pixel in the left eye is covered by a dot, *given* that this pixel belongs to an unmatched pair. We define \(m\) to be the probability that any given pixel-pair is matched, and \(u = 1 - m\) the converse probability that it is unmatched. By the law of total probability, \(d = m d_m + u d_u\).

Consider a given pixel-pair. With probability \(m\), the pixel-pair is matched; and then with probability \(d_m\) it contains a dot in the left eye. Then, since it is matched, it also contains a dot in the right eye. Thus on average a fraction \(m d_m\) of pixel-pairs are matched pairs with dots in both eyes. Similarly, a fraction \(m(1 - d_m)\) are matched pairs with background in both eyes.

Now consider unmatched pairs. An unmatched pair contains a dot in the left eye with probability \(d_u\). Let \(x\) be the probability that, given this, there is also a dot in the right eye. Then \(d_u x\) is the probability that an unmatched pair has a dot in both eyes, and \(d_u(1 - x)\) is the probability that an unmatched pair has a dot in the left eye and background in the right. By symmetry, this must also be the probability that an unmatched pair has a dot in the right eye and background in the left. These are the only three possibilities (an unmatched pair cannot be background in both eyes, since that would make it a matched pair), so their probabilities must sum to 1: \(d_u x + 2 d_u (1 - x) = 1\), and thus \(x d_u = 2 d_u - 1\). So, a fraction \(u(2 d_u - 1)\) of pixel-pairs are unmatched pairs with dots in both eyes, and \(2u(1 - d_u)\) are unmatched pairs with a dot in one eye and background in the other. The precise values of \(d_m\), \(d_u\) depend on how the patterns are generated, but note that \(d_u \geq 0.5\) to avoid negative probabilities.
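The bookkeeping above can be checked by Monte Carlo, drawing pixel-pair categories directly from the stated probabilities (the values of \(m\), \(d_m\), \(d_u\) below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
m, d_m, d_u = 0.6, 0.3, 0.55     # illustrative; any d_u >= 0.5 is admissible
u = 1.0 - m
x = (2 * d_u - 1) / d_u          # P(dot in right eye | unmatched, dot in left)

n = 1_000_000
matched = rng.random(n) < m
r2 = rng.random(n)
# Matched pairs: dot in both eyes with prob d_m, else background in both.
# Unmatched pairs: dot in the left eye with prob d_u*x + d_u*(1 - x) = d_u.
left_dot = np.where(matched, r2 < d_m, r2 < d_u)
print(left_dot.mean(), m * d_m + u * d_u)   # law of total probability, ≈ equal
```

The empirical left-eye dot density agrees with \(m d_m + u d_u\) to within sampling error, and the three unmatched-case probabilities sum to 1 by construction.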

First consider mixed-polarity patterns, taking the background to have luminance 0 and the dots luminance ±1. Since black and white dots are equally probable, \(\langle L \rangle = \langle R \rangle = 0\). The mean of the square depends on the dot density: \(\langle L^2 \rangle = \langle R^2 \rangle = d\). To compute \(\langle LR \rangle\), pairs where either pixel is background don't contribute to the sum, so we need only consider the situation where both pixels are covered by dots. And for unmatched pairs, the dots are as often opposite-luminance as same, so these also contribute nothing on average. We need only consider the matched pairs, so \(\langle LR \rangle = m d_m\). Putting all these into Equation 1, then, we find

\( r_{mixed} = \dfrac{m d_m}{d} \)

For same-polarity patterns, taking the background to have luminance 0 and the dots luminance 1, we have \(\langle L \rangle = \langle R \rangle = \langle L^2 \rangle = \langle R^2 \rangle = d\), and when computing \(\langle LR \rangle\) we now also need to consider the unmatched pairs which have a dot in both eyes, so \(\langle LR \rangle = m d_m + u(2 d_u - 1)\). From Equation 1 we find

\( r_{same} = \dfrac{m d_m + u(2 d_u - 1) - d^2}{d(1 - d)} \)
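Both expressions can be checked by simulating pixel-pairs directly from the probability model (illustrative parameter values; a sketch, not the paper's simulation code):

```python
import numpy as np

def pearson(L, R):
    """Sample Pearson correlation (Equation 1)."""
    cov = np.mean(L * R) - L.mean() * R.mean()
    var = (np.mean(L**2) - L.mean()**2) * (np.mean(R**2) - R.mean()**2)
    return cov / np.sqrt(var)

rng = np.random.default_rng(5)
m, d_m, d_u = 0.6, 0.3, 0.55          # illustrative values
u = 1.0 - m
d = m * d_m + u * d_u                 # overall dot density

n = 2_000_000
matched = rng.random(n) < m
r2, r3 = rng.random(n), rng.random(n)
# Matched: dot in both eyes with prob d_m. Unmatched: dot-both with prob
# 2*d_u - 1; dot-left-only and dot-right-only each with prob 1 - d_u.
dot_L = np.where(matched, r2 < d_m, r3 < d_u)
dot_R = np.where(matched, r2 < d_m, (r3 < 2 * d_u - 1) | (r3 >= d_u))

pol_L = rng.choice([-1.0, 1.0], size=n)          # dot polarities (mixed stimuli)
pol_R = np.where(matched, pol_L, rng.choice([-1.0, 1.0], size=n))

r_mixed = pearson(np.where(dot_L, pol_L, 0.0), np.where(dot_R, pol_R, 0.0))
r_same = pearson(dot_L.astype(float), dot_R.astype(float))
print(r_mixed, m * d_m / d)
print(r_same, (m * d_m + u * (2 * d_u - 1) - d**2) / (d * (1 - d)))
```

For these parameter values the empirical correlations land on the predicted values to within sampling error, with the mixed-polarity correlation the larger of the two.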

We can now derive values for \(d_u\), \(d_m\) and thus for the correlations when dots are scattered at random, occluding other dots where they overlap. Consider an unmatched pair. The probability that it has a dot in the left eye is \(d_u\). But when dots are scattered truly at random, the probability that the corresponding pixel in the right eye is also a dot is just \(d\), the overall dot probability. Thus the probability that an unmatched pixel-pair has a dot in both eyes, which earlier we saw was \(2 d_u - 1\), is just \(d_u d\). This means that \(d_u = 1/(2 - d)\), and hence, from the law of total probability, \(d_m = (d - u d_u)/m\). Substituting into \(r_{mixed}\) and \(r_{same}\), we find

\( r_{mixed,Ov} = r_{same,Ov} = 1 - \dfrac{u}{d(2 - d)} \)  (3)
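A quick grid check that the substitution \(d_u = 1/(2-d)\) makes the mixed- and same-polarity expressions coincide (the parameter ranges are arbitrary; the identity is algebraic, so it holds even where the parameters are outside the physically realisable range):

```python
import numpy as np

d = np.linspace(0.05, 0.6, 12)[:, None]   # overall dot density
u = np.linspace(0.05, 0.6, 12)[None, :]   # proportion of unmatched pairs
m = 1.0 - u
d_u = 1.0 / (2.0 - d)                     # random scattering with occlusion
d_m = (d - u * d_u) / m                   # from the law of total probability

r_mixed = m * d_m / d
r_same = (m * d_m + u * (2 * d_u - 1) - d**2) / (d * (1.0 - d))
print(np.max(np.abs(r_mixed - r_same)))   # zero up to rounding error
```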

_{same}*d*= 0.5 and

_{u}*d*= (

_{m}*d*−

*u*/2)/

*m*. Then

*r*<

_{same,limNoOv}*r*. Note that these expressions cannot be directly compared with those for Overlap, Equation 3, since the way the pattern is generated may change the proportion of unmatched pixel-pairs,

_{mixed,limNoOv}*u*, even if the overall dot density is held constant.
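The inequality can be confirmed over a parameter grid; algebraically the gap works out to \(u/(2(1-d))\), which is positive whenever \(u > 0\) and \(d < 1\) (grid ranges below are arbitrary):

```python
import numpy as np

d = np.linspace(0.05, 0.95, 19)[:, None]   # overall dot density
u = np.linspace(0.05, 0.95, 19)[None, :]   # proportion of unmatched pairs
m = 1.0 - u
d_m = (d - u / 2) / m                      # no-overlap limit, with d_u = 0.5

r_mixed = m * d_m / d
r_same = (m * d_m - d**2) / (d * (1.0 - d))
gap = r_mixed - r_same                     # equals u / (2*(1 - d)) algebraically
print(np.min(gap) > 0, np.max(np.abs(gap - u / (2 * (1 - d)))))
```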

When dots are not allowed to overlap, the probability that pixels adjacent to an existing dot are themselves covered is reduced below \(d\). This is reflected in the autocorrelation of monocular images. When overlap is allowed, the autocorrelation function is a triangle function (reflecting the autocorrelation function of a single dot), as shown in Figure 6B. But when overlap is not allowed, there are regions of negative correlation at displacements just larger than the dot width, caused by the reduced probability of dots there, as shown by the black dashed lines in Figure 6F. Note however that in mixed-polarity images the autocorrelation function is still a triangle function (pink curves in Figure 6F). This is because, while pixels adjacent to an existing dot are more likely to be gray, the probability of their being white or black is reduced equally, so that the mean product is unaffected.
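The triangle-function claim for mixed-polarity patterns can be checked in a 1-D analogue (square dots on a long row; `scatter_1d` and the sizes are illustrative; the negative lobes of the same-polarity case are not reproduced here since they depend on the full geometry of the exclusion zones):

```python
import numpy as np

def scatter_1d(n_pix, dot_w, n_dots, rng):
    """Scatter mixed-polarity dots of width dot_w without overlap on a 1-D image."""
    img = np.zeros(n_pix)
    placed = 0
    while placed < n_dots:
        x = rng.integers(0, n_pix - dot_w)
        if img[x:x + dot_w].any():
            continue                      # would overwrite an existing dot: abandon
        img[x:x + dot_w] = rng.choice([-1.0, 1.0])
        placed += 1
    return img

def autocorr(v, max_lag):
    """Normalised autocorrelation at lags 0..max_lag-1."""
    v = v - v.mean()
    var = np.mean(v * v)
    return np.array([np.mean(v[:v.size - k] * v[k:]) for k in range(max_lag)]) / var

rng = np.random.default_rng(6)
img = scatter_1d(500_000, 6, 22_500, rng)    # ~0.27 coverage, no overlap
ac = autocorr(img, 8)
print(np.round(ac, 2))   # close to the triangle 1 - k/6 for k <= 6, then ~0
```

Because dot polarities are independent, the cross-dot products average to zero even though the dots' positions are anticorrelated, leaving only the triangular within-dot term.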

*PLoS Computational Biology*, 7(8): e1002142.

*Vision Research*, 31, 2195–2207.

*Annual Review of Neuroscience*, 24, 203–238.

*Nature*, 389, 280–283.

*Frontiers in Computational Neuroscience*, 8, 127, https://doi.org/10.3389/fncom.2014.00127.

*Journal of Vision*, 13(13): 26, 1–25, https://doi.org/10.1167/13.13.26. [PubMed] [Article]

*Journal of Vision*, 11(3): 1, 1–16, https://doi.org/10.1167/11.3.1. [PubMed] [Article]

*Journal of Vision*, 9(1): 8, 1–18, https://doi.org/10.1167/9.1.8. [PubMed] [Article]

\(d_{max}\) for stereopsis and motion in random dot displays. *Vision Research*, 38(6), 925–935.

*Current Biology*, 27(10), 1403–1412.e8, https://doi.org/10.1016/j.cub.2017.03.074.

*PLoS Computational Biology*, 12(5), e1004906, https://doi.org/10.1371/journal.pcbi.1004906.

*Journal of Neuroscience*, 36(34), 8967–8976.

*Philosophical Transactions of the Royal Society B: Biological Sciences*, 371(1697): 20150255, 1–12, https://doi.org/10.1098/rstb.2015.0255.

*Journal of Neurophysiology*, 114(5), 2816–2829, https://doi.org/10.1152/jn.00560.2015.

*Current Opinion in Neurobiology*, 8(4), 509–515.

*Science*, 249, 1037–1041.

*Nature Reviews Neuroscience*, 8(5), 379–391.

*Journal of Neurophysiology*, 87, 209–221.

*Journal of Neurophysiology*, 87, 191–208.

*Vision Research*, 37(13), 1811–1827.

*Neuroscience*, https://doi.org/10.1016/j.neuroscience.2014.05.036.

*Progress in Biophysics and Molecular Biology*, 87(1, Special issue), 77–108.

*Journal of Neurophysiology*, 91, 1271–1281.

*Current Biology*, 27(12), R594–R596, https://doi.org/10.1016/j.cub.2017.05.013.

*Visual Neuroscience*, 19, 735–753.

*Journal of Vision*, 11(12): 4, 1–14, https://doi.org/10.1167/11.12.4. [PubMed] [Article]

*The Journal of Neuroscience*, 27(44), 11820–11831.

*Trends in Neurosciences*, 15(3), 86–92, https://doi.org/10.1016/0166-2236(92)90017-3.

*Proceedings of the National Academy of Sciences, USA*, 107(40), 17087–17094, https://doi.org/10.1073/pnas.1011782107.

*Vision Research*, 18(1), 101–105, https://doi.org/10.1016/0042-6989(78)90083-4.