Open Access
Perspectives  |   November 2023
What's special about horizontal disparity
Author Affiliations
  • Bart Farell
    Institute for Sensory Research, Department of Biomedical and Chemical Engineering, Syracuse University, Syracuse, NY, USA
    bfarell@gmail.com
Journal of Vision, November 2023, Vol. 23(13), Article 4. https://doi.org/10.1167/jov.23.13.4
Abstract

Horizontal disparity has been recognized as the primary signal driving stereoscopic depth since the invention of the stereoscope in the 1830s. It has a unique status in our understanding of binocular vision. The direction of offset of the eyes gives the disparities of corresponding image point locations across the two retinas a strong horizontal bias. Beyond the retina, other factors give shape to the effective disparity direction used by visual mechanisms. The influence of orientation is examined here. I argue that horizontal disparity is an inflection point along a continuum of effective directions, and its role in stereo vision can be reinterpreted. The pointwise geometric justification for its special status neglects the oriented structural elements of spatial vision, its physiological support is equivocal, and psychophysical support of its special status may partially reflect biased stimulus sampling. The literature shows that horizontal disparity plays no particular role in the processing of one-dimensional stimuli, a reflection of the stereo aperture problem. The resulting depth is non-veridical, even non-transitive. Although one-dimensional components contribute to the stereo depth of visual objects generally, two-dimensional stimuli appear not to inherit the aperture problem. However, a look at the two-dimensional stimuli that predominate in experimental studies shows regularities in orientation that give a new perspective on horizontal disparity.

Introduction
Our eyes look out on the world from two different locations, and this difference in perspective creates disparities between the left and right retinal images. In the literature on stereoscopic vision, “disparity” is generally understood to mean “horizontal disparity.” This shift in the positions of retinal image features varies in size and direction with changes in parameters of both the observer and environmental sources of the image. Stereopsis is the major qualitative dimension of vision that emerges from these disparities. From flat retinal images, stereopsis fashions the perception of objects in relief within a three-dimensional space. It measurably enhances performance in tasks such as aligning and grasping (Melmoth, Finlay, Morgan, & Grant, 2009; O'Connor, Birch, Anderson, Draper, & FSOS Research Group, 2010; Sheedy, Bailey, Buri, & Bass, 1986), can profoundly affect visual experience when recovered after years of deprivation (Barry, 2009; Bridgeman, 2014), and is becoming an integral part of advanced display technologies. The role of horizontal disparity in extracting relative depth was recognized with the invention of the stereoscope close to 200 years ago (Wheatstone, 1838). Horizontal disparity arises from the direction of offset of the eyes and is considered the dominant disparity signal in normal viewing conditions and the signal of primary importance for stereo depth perception (Ogle, 1952; Ogle, 1953; Serrano-Pedraza, Brash, & Read, 2013). As Poggio and Poggio (1984) said about stereopsis, “its sole basis is the horizontal disparity between the two retinal images.”
Yet, this should be understood in a particular sense. What is generally required for stereo depth is a horizontal component of disparity. In typical viewing conditions, most disparities in the visual field are not strictly horizontal. Helmholtz (1925) pointed out that the ratio of horizontal to vertical disparity varies with viewing distance and with retinal eccentricity and direction. Usually, horizontal disparity is dominant in the central retina and is the mean of the distribution of disparity directions. But, over a great expanse of retina, especially in close viewing, most disparities are typically oblique, containing both horizontal and vertical components. This has implications for how disparities are detected and how their components are extracted and used for various purposes. On a large scale, these are the issues addressed here. On a more hands-on scale, the issue involves the interplay of stimulus orientation and disparity direction.
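Helmholtz's point can be illustrated with a minimal numerical sketch. This assumes an idealized pinhole-eye model with symmetric convergence and no torsion; the function names and the particular distances are mine, chosen for illustration only. For eyes verged on a near fixation point, an eccentric point generally yields a disparity vector with both horizontal and vertical components.

```python
import numpy as np

def project(point, eye, fixation):
    """Perspective-project a 3-D point (cm; y up, z away) onto a pinhole
    'eye' at position `eye` whose gaze is aimed at `fixation`.
    Returns image coordinates for a unit focal length."""
    f = fixation - eye
    f = f / np.linalg.norm(f)                      # forward (gaze) axis
    r = np.cross(np.array([0.0, 1.0, 0.0]), f)
    r = r / np.linalg.norm(r)                      # rightward axis
    u = np.cross(f, r)                             # upward axis
    v = point - eye
    return np.array([v @ r, v @ u]) / (v @ f)

# Symmetrically converged eyes (6.5-cm interocular) fixating 40 cm away:
left  = np.array([-3.25, 0.0, 0.0])
right = np.array([+3.25, 0.0, 0.0])
fixation = np.array([0.0, 0.0, 40.0])

# An eccentric point, off both meridians and nearer than fixation:
p = np.array([15.0, 12.0, 30.0])
dx, dy = project(p, left, fixation) - project(p, right, fixation)
# Both components are nonzero: away from fixation, under near viewing,
# the disparity vector is oblique rather than strictly horizontal.
```

The fixated point itself has zero disparity in this model; it is the combination of eccentricity and near viewing that makes the disparity field oblique over most of the visual field.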
Horizontal disparity has often been associated with vertical contours, vertical being the orientation that optimally carries the horizontal disparity signal. Other orientations have disparities, too. Whether these orientations belong to edges, lines, bars, or one-dimensional (1-D) components of two-dimensional (2-D) stimuli, their disparities have been studied comparatively little and have contributed only modestly to our understanding of stereo depth. This Perspective looks into the role of disparity direction in stereo vision primarily through its connection to stimulus and receptive field orientation. It considers some interpretations of the data and examines the implications for the processing of horizontal disparity. 
The orientation and disparity connection
The horizontal direction is important not only because disparity components in that direction are largest, most common, sufficient, and usually necessary for stereo depth perception, but also for strictly geometric reasons. Ideally, given a pair of fixating eyes, one retinal image of a point in the world will have a counterpart on the other retina that lies within a highly constrained set of positions. Superimposing the retinas would show that the corresponding points lie somewhere along a particular line. Where along this line they lie depends on the point's location in depth; the distance and direction between the image locations vary continuously with the point's depth relative to the plane of fixation. Thus, this epipolar line, usually identified as “horizontal,” specifies possible true-match retinal locations. In theory, this is important in making retinal correspondence a one-dimensional problem, greatly simplifying the search process aimed at finding a true match. Still, epipolar lines vary with eye position. In practice, therefore, imperfect knowledge of eye position limits the benefits of the epipolar constraint (Backus, Banks, van Ee, & Crowell, 1999; Erkelens & Collewijn, 1985; Regan, Erkelens, & Collewijn, 1986; Schreiber, Crawford, Fetters, & Tweed, 2001; van Ee & van Dam, 2003) and is one source of vertical disparities. Other sources include ocular misalignment and differential perspective caused by the greater image size on the retina nearest the image source (Howard & Rogers, 2002). Vertical disparities have been interpreted both as noise and as subserving ancillary viewing-related functions. These functions include not only ocular alignment but also those that constructively mold our sense of depth.
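The one-dimensional character of the search can be sketched for the idealized case of parallel gaze (a hypothetical pinhole model with unit focal length; the helper names and numbers are illustrative, not from the article). Every depth along one left-eye visual direction maps to the same horizontal line in the right image, so only the horizontal coordinate of the match is unknown.

```python
import numpy as np

b = 6.5   # interocular separation, cm (assumed value)

def image(point, eye_x):
    """Image location of a 3-D point in an eye at (eye_x, 0, 0) with
    parallel gaze down the z-axis and unit focal length."""
    X, Y, Z = point
    return np.array([(X - eye_x) / Z, Y / Z])

# Points at several depths along one left-eye visual direction (x=0.3, y=0.15):
candidates = [np.array([-b / 2 + 0.3 * Z, 0.15 * Z, Z]) for Z in (30.0, 50.0, 80.0)]
left_pts  = [image(p, -b / 2) for p in candidates]
right_pts = [image(p, +b / 2) for p in candidates]

# All candidates share one left-image location, and their right-eye images
# differ only in x: the epipolar line is horizontal, so the search for the
# true match is one-dimensional.
```

With converged rather than parallel eyes, the epipolar lines rotate with eye position, which is where the practical limits noted above come in.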
Theoretically, this is a matter of compensating horizontal disparities for viewing geometry, most notably in the perception of surface slant and the extraction of distance information (Backus et al., 1999; Banks & Backus, 1998; Banks, Hooge, & Backus, 2001; Bishop, 1989; Brenner, Smeets, & Landy, 2001; Gårding, Porrill, Mayhew, & Frisby, 1995; Gillam & Lawergren, 1983; Longuet-Higgins, 1982; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). Vertical disparity by itself can produce the perception of stereoscopic depth (Matthews, Meng, Xu, & Qian, 2003; Ogle, 1938; Westheimer & Pettet, 1992), but only under restricted conditions; in general, it does not (Ogle, 1964). The exceptions are mediated by stimuli or detectors that are functionally one dimensional and able to convey oblique disparities. 
The epipolar constraint is a standard theoretical underpinning of biological vision models and artificial vision algorithms. Its application to biological vision, however, might be an over-idealization. The epipolar constraint is built on the geometry of points. A true match along an epipolar line is a match between corresponding geometrical points, but geometric points have limited utility for understanding visual processes. They neglect object structure and image redundancies whose recognition has had major influences on the study of stereo vision (Julesz, 1971; Marr & Poggio, 1976). The geometry of lines and edges is more pertinent (e.g., McKee, 1983), although in some ways it complicates the picture. With lines and edges comes orientation, and with orientation, either of the stimulus or the receptive field, comes the stereo aperture problem (Farell, 1998; Morgan & Castet, 1997). One point on a line or edge is much the same as another. Changes in disparity in a direction parallel to the orientation of the line or edge are therefore detectable at the endpoints but not in between. This makes alternative pointwise binocular correspondences possible, blurring the distinction between true and false matches and introducing uncertainty in the direction and amplitude of disparity: the aperture problem. Which correspondence is the effective one in a particular case of biological vision becomes an important question for which the horizontal match is not the only answer.
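The ambiguity can be written down directly, as a sketch under the usual linear-geometry idealization (the function name and the particular values are mine, for illustration): a local detector registers only the projection of the disparity vector onto the contour's normal, so globally different disparities can be locally identical.

```python
import numpy as np

def visible_component(d, theta_deg):
    """Disparity component a local mechanism can measure for a contour
    oriented theta_deg from horizontal: the projection of the disparity
    vector d onto the contour's normal. Shifts parallel to the contour
    leave the local image unchanged and so are invisible."""
    th = np.radians(theta_deg)
    return d @ np.array([-np.sin(th), np.cos(th)])

theta = 135.0                          # an oblique line
d_horiz = np.array([0.2, 0.0])         # horizontal disparity
d_vert  = np.array([0.0, 0.2])         # equal-magnitude vertical disparity

# The two stimuli project to the same normal component, so within an
# aperture they are indistinguishable:
a = visible_component(d_horiz, theta)
b = visible_component(d_vert, theta)

# A shift along the contour itself is locally invisible:
along = 0.3 * np.array([np.cos(np.radians(theta)), np.sin(np.radians(theta))])
zero = visible_component(along, theta)
```

Any disparity vector whose normal projection equals the measured value is consistent with the local image, which is exactly the constraint-line ambiguity discussed below.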
Figures 1 and 2 illustrate the issues. When cross-fused, the stereogram in Figure 1A shows what is seen of an oblique line segment positioned behind a segmented occluder. The disparity of the line segment as a whole is horizontal. However, the portion visible through any of the occluder's apertures has a disparity direction that is determined by the aperture's orientation, as seen in the overlaid stereo images in Figure 1B. Figure 1C shows the overlaid images of another stereogram, one in which the line has a different disparity direction, vertical in this case. Within the apertures, Figures 1B and 1C are identical. A mechanism performing local binocular matching within an aperture would detect no difference between the line with horizontal disparity and the line with vertical disparity. 
Figure 1.
 
Aperture disparities. (A) Stereogram of an oblique line segment with horizontal disparity behind a segmented occluder. (B) Combined left- and right-eye views show that the portion of the line segment visible within each aperture has a disparity direction determined by the orientation of the aperture. (C) Combined images from a different stereogram can produce identical within-aperture patterns from a line segment with an arbitrarily different disparity direction, in this case vertical rather than horizontal.
Figure 2.
 
Stereo plaids. Two schematic plaids with horizontal disparity are shown with left- and right-eye images superimposed. The oblique lines represent sinusoidal components; the perpendicular separation between the adjacent parallel lines of each eye gives the wavelength. Given the component orientations, equal component phase disparities (ϕ) across the two plaids correspond to horizontal pattern disparities with a ratio of 1:1.93. This approximates the ratio found for horizontal disparity thresholds for the two plaids.
The amplitude of these aperture disparities is a function of the line's orientation, the aperture orientation, and the disparities of the target stimulus and the occluder (Farell, 1998; Farell & Li, 2004). The overall unoccluded horizontal disparity of the line segment is carried by the line's endpoints, but elsewhere the line's disparity is locally ambiguous, being compatible with disparity directions spanning 180°. This ambiguity arises whether the disparity of the stimulus is sampled by an occluder or a neuron's receptive field. The line's disparity functions as a constraint line, which defines the set of consistent local disparities. 
In Figure 1, only the match made within the horizontal aperture would be counted as a true match in a conventional inventory of image statistics. The others would be counted as false matches—matches between similar image features that arise from different environmental sources. The properties of aperture disparities are different from those of conventional true-match disparities. They are unconstrained in amplitude and direction, have a flat retinal location distribution, and can have a horizontal disparity component with the opposite polarity from that of the global stimulus. Aperture disparities are generally non-veridical depth cues. Stimuli that are effectively one dimensional at a local level are most favorable for the occurrence of fusible aperture disparities.1 In cluttered environments, such as the arboreal habitats of many primate species, locally 1-D stimuli—from rod-like shapes such as branches, from the edges of objects, from shadows and their edges—are common and, presumably, so too are aperture disparities of the sort shown in Figure 1 (see Mitsudo, Sakai, & Kaneko, 2013). 
2-D patterns are a related source of aperture-like disparities. This can be demonstrated by optically summing a pair of sinusoidal gratings with not-too-different spatial frequencies. A vertical (90°) grating with horizontal disparity might be added to a grating oriented off-vertical by 30°, say. The second grating might have zero disparity, so vertical disparity is found in neither stimulus. Despite their disparity difference, the gratings will not be seen in separate depth planes. What will be seen instead is a depth-coherent plaid (Adelson & Movshon, 1984; Delicato & Qian, 2005; Farell, 1998; Quaia, Sheliga, Optican, & Cumming, 2013) having a disparity direction of +60° or –120°, depending on the polarity of the disparity of the vertical grating (Farell & Li, 2004).
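This intersection-of-constraints outcome can be checked numerically (a minimal sketch; `ioc` and the numerical values are mine). Each grating's constraint line is the set of 2-D disparities that share its measured normal component, and the two lines intersect at a single oblique disparity.

```python
import numpy as np

def normal(theta_deg):
    th = np.radians(theta_deg)
    return np.array([-np.sin(th), np.cos(th)])

def ioc(theta1, d1, theta2, d2):
    """Intersection of constraints: the unique 2-D disparity consistent
    with the normal disparity components of two differently oriented gratings."""
    N = np.vstack([normal(theta1), normal(theta2)])
    c = np.array([d1 @ normal(theta1), d2 @ normal(theta2)])
    return np.linalg.solve(N, c)

d = 0.1
pattern = ioc(90.0, np.array([d, 0.0]),    # vertical grating, horizontal disparity d
              60.0, np.array([0.0, 0.0]))  # grating 30 deg off vertical, zero disparity
direction = np.degrees(np.arctan2(pattern[1], pattern[0]))
# direction comes out at +60 deg, with amplitude 2d: the coherent plaid's
# disparity is oblique although neither grating carries any vertical disparity.
```

Flipping the sign of the vertical grating's disparity flips the solution to the –120° direction, matching the polarity dependence described above.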
Figure 2 illustrates another influence of 1-D components on 2-D pattern disparity. It sketches a pair of 2-D stimuli (plaids, each composed of two schematic gratings, with one pair oriented at 75° and 105° and the other at 30° and 150°). Each appears doubled, with left- and right-eye views superimposed. This shows that both plaids, like the line segment in Figure 1A, have horizontal disparity, D and 1.93D in these cases. We could measure threshold disparity for these plaids, the smallest amplitude that can be reliably distinguished from zero. If we scaled the disparities of the two stimuli proportionally, we might expect to find a point where the plaid on the right was seen in depth relative to the background, whereas the plaid on the left would not differ perceptibly from the background depth. Threshold might be 1.5 times D, say; one plaid's disparity would be above threshold and the other's below. As will be discussed later, this point would not be found. Both plaids would be at threshold when their horizontal disparities differ by nearly a factor of two, as illustrated in the figure (Farell, 2003). What is constant at threshold is not horizontal disparity, but rather ϕ, the phase disparity of the 1-D components (and not because the disparity of only one component had been detected). These component threshold disparities are equivalent to the thresholds of the individual gratings that make up the plaids. Horizontal disparity does matter for stereo depth of 2-D stimuli at suprathreshold levels. At threshold, though, the question is, in light of Figure 1, what is the disparity of 2-D stimuli? 
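The factor of 1.93 follows from projecting a horizontal shift onto each component's normal: a grating at orientation θ from horizontal with wavelength λ acquires phase disparity φ = 2πD sin θ/λ from a horizontal pattern disparity D. A small check, assuming equal component wavelengths (the function name and the sample φ are illustrative):

```python
import numpy as np

def horiz_disparity_for_phase(phi, theta_deg, wavelength=1.0):
    """Horizontal pattern disparity D giving a grating at theta_deg (from
    horizontal) the component phase disparity phi = 2*pi*D*sin(theta)/wavelength."""
    return phi * wavelength / (2 * np.pi * np.sin(np.radians(theta_deg)))

phi = 0.3                                          # any common phase disparity
D_steep   = horiz_disparity_for_phase(phi, 75.0)   # 75/105-deg plaid components
D_shallow = horiz_disparity_for_phase(phi, 30.0)   # 30/150-deg plaid components

ratio = D_shallow / D_steep
# ratio equals sin(75)/sin(30), about 1.93: equal component phase disparities
# map onto horizontal pattern disparities differing by almost a factor of two.
```

This is why scaling the two plaids' horizontal disparities proportionally never finds a point where one is above threshold and the other below: threshold tracks φ, not D.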
Figure 1 uses 1-D contours to illustrate the aperture problem in stereo vision. Figure 2 raises the question of the influence of 1-D components and their aperture problem on the processing of 2-D stimulus disparity. The aperture problem highlights the potential ambiguity of disparity direction and its orientation specificity, despite the epipolar constraint. Psychophysical data considered below suggest that both perpendicular and horizontal disparities are used in computations of relative disparity, and their use differs between 1-D and 2-D stimuli. In what follows we explore these issues and how they impact the horizontal-centric view of disparity processing. We look into this from the perspective of psychophysical and physiological evidence. The close connection with motion processing—in particular, the component versus pattern motion distinction (Adelson & Movshon, 1982; Rust, Mante, Simoncelli, & Movshon, 2006)—will be evident. Motion is largely isomorphic with stereo (e.g., Marr, 1982; Qian & Andersen, 1997), the crucial difference being that frontoparallel motion is basically isotropic and disparity-derived depth is decidedly anisotropic. This difference shapes how disparity is processed. 
Together with data to be discussed later, the illustrations in Figures 1 and 2 point to the potential of orientation of both stimuli and receptive fields to influence the effective disparity direction. Quite a number of studies, which we will consider in overview first, address the contribution of receptive field orientation. To preview, despite this number, physiological data converge on limited trends rather than a consensus. 
Receptive field orientation and disparity direction
Early psychophysical data and intuitions about stereo vision and the computations behind it have made lasting impressions on subsequent approaches. The focus on horizontal disparity as the essential signal for stereo vision implied that binocular correspondence could be investigated as a 1-D process and suggested that mediators of the correspondence process would ideally be isotropic or vertically oriented (e.g., Howard, 1982; Serrano-Pedraza et al., 2013). Some physiological recordings in primary visual cortex have offered support for the gist of this argument by showing, for example, that vertical disparity could depress the neuronal response to horizontal disparity without eliciting a tuned response of its own (Gonzalez, Justo, Bermudez, & Perez, 2003; Poggio, 1995), very possibly reflecting a correspondence failure. However, other studies, beginning with early reports of orientationally tuned disparity detectors in cat (Barlow, Blakemore, & Pettigrew, 1967; Blakemore, 1970) and monkey (Hubel & Wiesel, 1970), suggest that a strictly 1-D approach to disparity direction is insufficient. This raises questions about the connection between the two sensitivities. Overall, three broad trends are discernible in the physiological data, which rely heavily on the tuning of single cells: responses to horizontal matches only, isotropic responses to matches irrespective of direction, and a compromise between the two—responses to all matching directions anisotropically, with the coding of large disparity magnitudes concentrated in the horizontal direction. In addition, receptive field orientation has been found to interact in various ways with each of these trends. That the data lack consistency may not be surprising given the variation across studies in procedure, species, stimuli, neuronal sampling, and experimental artifacts. 
Understandably, given the early representation of orientation and disparity, most of the data were recorded from areas V1 and V2 (areas 17 and 18) in monkey and cat, some from MT/V5, and roughly equally from awake and anesthetized animals. Binning the data with respect to these categories does not appear to systematically reduce between-study variability; therefore, these differences are for the most part ignored here.
Neural responses to the horizontal disparity of vertical or unoriented stimuli are ubiquitous in the physiological literature, and these are often the only conditions tested. More informative is evidence of the coding of disparity from neurons with non-vertical receptive fields or as a function of stimulus disparity direction. Disparity direction and stimulus and receptive-field orientation are connected in theory, but data relevant to their empirical relationship, though quite plentiful, have been inconsistent. There is, first, considerable evidence for coding of various directions of disparity (Barlow et al., 1967; Durand, Zhu, Celebrini, & Trotter, 2002; Ferster, 1981; Gonzalez, Relova, Perez, Acuna, & Alonso, 1993; Hubel & Wiesel, 1970; Joshua & Bishop, 1970; LeVay & Voigt, 1988; Maunsell & Van Essen, 1983; Nikara, Bishop, & Pettigrew, 1968; Pettigrew, Nikara, & Bishop, 1968; Trotter, Celebrini, & Durand, 2004; Tsao, Conway, & Livingstone, 2003; von der Heydt, Adorjani, Haenny, & Baumgartner, 1978). Whether this directional response is related to stimulus or receptive-field orientation is consequential, for standard versions of the disparity energy model imply a perpendicular relation between receptive-field orientation and direction of highest resolution disparity tuning. A number of studies have interpreted the data as supporting such a relation (Barlow et al., 1967; Bishop & Pettigrew, 1986; Durand, Celebrini, & Trotter, 2007; Ferster, 1981; Hubel & Livingstone, 1987; Hubel & Wiesel, 1970; LeVay & Voigt, 1988; Maske, Yamane, & Bishop, 1984; Nelson, Kato, & Bishop, 1977; Nieder & Wagner, 2001; Ohzawa, DeAngelis, & Freeman, 1990; Poggio, Motter, Squatrito, & Trotter, 1985; Sasaki, Tabuchi, & Ohzawa, 2010; Tsao et al., 2003; von der Heydt et al., 1978).
Consistent with this and more specific with regard to receptive field organization are data supporting the coding of phase disparity, whereby disparity is registered via interocular phase shifts rather than position shifts between receptive fields (DeAngelis, Ohzawa, & Freeman, 1991; DeAngelis, Ohzawa, & Freeman, 1995; DeAngelis & Uka, 2003; Maske, Yamane, & Bishop, 1986; Ohzawa, DeAngelis, & Freeman, 1996; Ohzawa & Freeman, 1986). The implication is that receptive field orientation determines the direction of greatest disparity responsivity. Yet subpopulations of phase- and spatial-disparity coding cells have been found within the same animal (Anzai, Ohzawa, & Freeman, 1997) and both coding strategies can coexist within individual cells (Tsao et al., 2003). 
The orientation tuning of cortical neurons is typically non-uniform but effectively continuous. Whether disparity tuning shows a similar distribution is much less certain. The coding of vertical disparity is the biggest issue. Aside from the dissent of studies previously mentioned (Gonzalez et al., 2003; Poggio, 1995), there is rather solid support for some degree of tuning to vertical disparity (Cumming, 2002; DeAngelis et al., 1991; DeAngelis et al., 1995; Durand et al., 2002; Gonzalez et al., 1993; Maunsell & Van Essen, 1983; Nieder & Wagner, 2001; Ohzawa et al., 1996; Trotter et al., 2004). (It is possible theoretically, too, for vertical disparity signals to be coded via the modulation of horizontal disparity detectors [Read, 2010]. Of course, this strategy could hold, again theoretically, for detectors of any particular direction of disparity.) Response as a function of disparity direction frequently shows a relative attenuation around vertical. Usually it is the number of cells responsive to vertical disparity that is diminished (DeAngelis et al., 1991; DeAngelis et al., 1995; Hubel & Wiesel, 1970; Ohzawa et al., 1996; Poggio & Fischer, 1977; von der Heydt et al., 1978). Yet, in other studies, comparable tuning to horizontal and vertical disparities, at least in peripheral retina, has been seen (e.g., Maunsell & Van Essen, 1983; Trotter et al., 2004), in some cases as a component of an isotropic population tuning (e.g., complex cells; see Ohzawa, DeAngelis, & Freeman, 1997). 
Another common form of disparity anisotropy appears in the range of disparity tuning. Disparity preferences can usually be found from small to large values within a population of cells. In some studies, this full range is expressed primarily by cells tuned to near-vertical orientations. Those preferring orientations closer to horizontal tend to be tuned to smaller disparities. The relation approximates a sine function of orientation, as in DeAngelis et al. (1991) for simple cells in anesthetized cat striate cortex (Figure 3), with weaker examples, perhaps diluted by other subpopulations of cells, found elsewhere (Anzai, Ohzawa, & Freeman, 1999a; Cumming, 2002). This has been described as a specialization of vertical receptive fields for detecting horizontal disparities (DeAngelis et al., 1991). As with many generalizations, however, consistency is not the rule. For example, this relation between receptive field orientation and disparity range was seen in simple cells in one study that found the reverse in complex cells (Anzai, Ohzawa, & Freeman, 1999b), whereas other studies have found no correlation between the two variables (e.g., Ohzawa et al., 1997).
Figure 3.
 
Preferred phase disparity as a function of receptive field orientation (DeAngelis et al., 1991). Each dot gives the peak of the Gaussian fit to single-cell responses to disparate drifting sinusoidal gratings in anesthetized cat striate cortex. The closer the orientation preference of the cell to horizontal, the more confined is the tuning of the cells to small values of phase disparity. The solid line is a sinusoid indicating the relative maximum phase disparity for component orientations of a broadband pattern with horizontal disparity.
Indeed, more often than not, orientation tuning is reported to have an inconsistent relation to disparity direction preference. A number of studies looking at different cell types, cortical areas, and stimuli have failed to find a correlation (Chino, Smith, Hatta, & Cheng, 1997; DeAngelis & Newsome, 1999; Gonzalez et al., 2003; Gonzalez, Bermudez, Vicente, & Romero, 2010; Pack, Born, & Livingstone, 2003; Prince, Pointon, Cumming, & Parker, 2002; Smith, Chino, Ni, Ridder, & Crawford, 1997). Typically in these studies, whether horizontal was the only disparity direction tested or various directions were tested, responses were uncorrelated with the cells' orientation preferences. Either way, finding no linkage between disparity tuning and the internal structure of the receptive field does not accord with expectations based on efficiency or the disparity energy model (but of course failure to find an effect could result from inappropriate methods, etc., as noted by, for example, Durand et al., 2007).
Neither does the energy model predict the results of Cumming (2002), who found a third pattern. Cells in monkey V1 collectively responded to a greater range of horizontal disparities than oblique or vertical disparities in this study, and this pattern held to a considerable extent regardless of orientation preference. In effect, these cells showed the disparity tuning expected of cells with horizontal receptive fields. This is a clear specialization for coding horizontal disparity. However, the disparity sensitivity of these cells was nevertheless two dimensional and had highest resolution over the restricted range of vertical disparities. This, too, is a specialization. The direction of maximum response—predominantly horizontal—was also independent of orientation preference in this study; this is not expected of cells with horizontal receptive fields. (A subset of Cumming's cells more closely followed the expected correlation between orientation and disparity tunings.) Another study using comparable methods showed a related compression of the range of vertical disparity sensitivity, but only in the fovea (Durand et al., 2007). Some of this difference can be traced to a change in the orientation-tuning distribution between central and peripheral retina, the former showing a vertical bias. The study by Durand et al. (2007) also showed a strong correlation between the direction of greatest disparity sensitivity and the direction orthogonal to the cell's preferred orientation, in line with energy model predictions.
Considering the prevalence of orientation selectivity among cortical neurons, it would be surprising if disparity tuning and orientation preference were unrelated. The inconsistency in the data does not mean that current ideas are wrong. Rather, it is possible that individual neurons by themselves are not particularly transparent about the information they provide. Their functional significance is often taken to be self-evident, as if they were detectors of features identified by maximal-response triggers (Geisler & Albrecht, 1995), but neurons operate in networks whose functions are determined at the network level (Yuste, 2015). Data do not provide their own interpretation; the emphasis on horizontal disparity and vertical orientation in the physiological literature (as elsewhere) is understandable, but it is not assumption free. So, too, is the emphasis on vertical as an instance of non-horizontal disparity. What this leaves out of consideration is the issue behind Figures 1 and 2. Individual receptive fields in early cortical areas are local, and responses coming from them are subject to the aperture problem, among other ambiguities. Their signals might contribute a constraint line rather than report a disparity. For some functions the network might contribute to, such as solving the aperture problem or computing relative disparity, single-cell signals may be ambiguous, their decoding may be intrinsically relational, and a response index may be an accurate reading of an irrelevant network property. 
There are several ways in which a population of such neurons, shaped by environmental constraints, might be expected to show a jointly distributed tuning to orientation and disparity. Some trends emerge from the data. Most notable are the many studies showing neural responses to a broad range of disparity directions. The distribution tends to be non-uniform. In particular, the data show a sensitivity to a larger range of horizontal than vertical disparities, especially in central retina, with responsiveness to larger vertical disparities increasing with eccentricity. And, despite contrary evidence, the expected correlation between tuning for orientation and disparity does turn up in a number of studies, as indicated above, and sometimes displays a receptive-field orientation gradient in the disparity range. The disparity sensitivity of vertically tuned neurons tends to have a wider range of preferred disparity amplitudes than do obliquely tuned neurons, which are progressively limited to smaller phase shifts as their orientation preference approaches horizontal. This distribution is seen most clearly in the data of DeAngelis et al. (1991) and with various degrees of noisiness in other studies (Anzai et al., 1999a; Barlow et al., 1967; Cumming, 2002; DeAngelis et al., 1995; Ohzawa et al., 1996). The distribution has been interpreted as a specialization for the coding of horizontal disparities by neurons with a vertical orientation tuning (DeAngelis et al., 1991), but this ignores the contribution of most of the neural population to the coding of horizontal disparity. Ideally, neurons with such a joint orientation- and disparity-tuned distribution all code for the same range of horizontal disparities (see sine curve in Figure 3). They do so by responding differentially to the disparities of oriented stimulus components, the same principle found in motion processing (Bischof & Di Lollo, 1991).
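The idea that the whole oriented population codes one horizontal disparity can be sketched in the same phase-disparity terms (an illustrative computation, not a model from the article; the values are arbitrary). Each unit's registered phase shift scales with the sine of its preferred orientation, so dividing the phase signal by that factor recovers a common horizontal disparity from every unit.

```python
import numpy as np

D, wavelength = 0.05, 1.0     # one horizontal pattern disparity, shared wavelength
orientations = np.array([30.0, 60.0, 90.0, 120.0, 150.0])   # preferred orientations

# Phase disparity each oriented unit registers for this horizontal disparity
# (the sine curve of Figure 3: largest at vertical, vanishing toward horizontal):
phases = 2 * np.pi * D * np.sin(np.radians(orientations)) / wavelength

# Scaling each phase signal by 1/sin(orientation) recovers the same D from
# every unit, so differently oriented detectors agree on the pattern disparity:
decoded = phases * wavelength / (2 * np.pi * np.sin(np.radians(orientations)))
```

On this reading, the orientation gradient in preferred phase disparity is not a specialization of vertical units alone but a consistent population code for horizontal disparity across orientations.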
Neurons tuned to oblique orientations can code the same horizontal disparity as vertically tuned neurons. They do so using a smaller range of phase disparities (provided, of course, that the stimulus has contrast energy at these orientations). This has the virtue of finding support in the psychophysical data, considered next. The discussion considers relative disparity processing in three cases distinguished by stimulus dimensionality: relative disparity between a pair of 1-D stimuli, between a 1-D stimulus and a 2-D stimulus, and between a pair of 2-D stimuli. Also considered is the role of 1-D components in the processing of the disparity of 2-D stimuli. 
Psychophysical evidence: 1-D stimuli
Stereoacuity
There has been a long tradition of thinking about stereo correspondence as a 1-D matching problem with spatial limits. This view invites the assumption that the stimuli and the detectors pertinent to disparity coding are vertical (e.g., Howard, 1982; Read & Eagle, 2000) or unoriented (e.g., Bishop, 1970; Julesz, 1971; Marr & Hildreth, 1980; Mayhew & Frisby, 1978; Mayhew & Frisby, 1979; Mayhew & Frisby, 1980). Most stereo psychophysics has been conducted using such stimuli: vertical lines, bars, and gratings; disks, spots, and random-dot stereograms (RDSs). Given the place of horizontal disparity in theory, these stimuli have the advantage of appearing optimal or at least neutral; however, as seen above, physiological evidence offers only modest support for expectations about the role of orientation in stereo processing. 
Psychophysical evidence for unoriented detectors (Julesz 1971; Mayhew & Frisby, 1978; Mayhew & Frisby, 1979) has been questioned, mainly because of subtle stimulus artifacts (see review by Mansfield & Parker, 1993; also, Howard & Rogers, 2002), most involving orientation differences between corresponding stimuli. More reliable methods, as in masking experiments (e.g., Mansfield & Parker, 1993), have revealed an orientation-specific component in the extraction of disparity signals. This evidence points to a conflict. The widely held view that vertical contours are optimal stimuli for stereopsis implies that orientation matters. But the generic depth-from-disparity expectation is that stimuli having equal horizontal disparities carry the same depth signal; orientation is not a factor. 
Figure 4 explores this issue. The experiment measured disparity thresholds for a target Gabor patch relative to an annular Gabor reference stimulus having zero disparity (Farell, 2006) (Figure 4A). No other usable reference stimulus was visible. Figure 4B shows threshold phase disparities for discriminating the depth polarity between the stimuli. The two left-most bars give thresholds for baseline conditions. Threshold is low when both stimuli are vertical (left). Next, the threshold for the same vertical Gabor patch rises fivefold when the reference stimulus is removed, a measure of absolute rather than relative disparity. This ratio of absolute to relative threshold is similar to the values ranging between 2 and 5 found elsewhere (e.g., McKee, Levi, & Bowne, 1990), although under some viewing conditions it can be higher (Chopin, Levi, Knill, & Bavelier, 2016; Erkelens & Collewijn, 1985; Regan et al., 1986; Westheimer, 1984). Next, for the threshold shown in the center, the vertical reference grating is again present, but the target has been rotated 45° from vertical. The threshold disparity rises by a factor of about 3.5 above the level when both stimuli were vertical. Next, simultaneously rotating the reference stimulus 45° in the opposite direction makes the stimulus orientations orthogonal and raises the elevation factor to 4, bringing threshold close to the absolute disparity level. These last two reductions in sensitivity might be due to the replacement of vertical stimuli by less-effective oblique stimuli. However, when both stimuli have the same oblique orientation (bar on right), threshold is as low as when both are vertical. Thus, the increases in disparity threshold are caused by changes in relative, not absolute, orientation. 
The implications are that absolute disparity threshold is the same across the orientations covered (Chopin et al., 2016), that relative disparity is computed from a joint encoding of disparity and orientation (contrary to a fairly large subset of the physiology reviewed above), and that horizontal disparity plays no special role in the computation (contrary to general expectations). Even small orientation differences elevate thresholds (Figure 5). The increase surpasses the reciprocal-cosine prediction (dashed line in Figure 5) that scales threshold inversely with respect to the amplitude of the horizontal component disparity. 
Figure 4.
 
Disparity threshold and relative orientation. (A) Stereogram showing central target and surrounding reference grating patches. The disparity threshold was measured using a constant stimulus procedure to vary the phase disparity of the target, keeping the disparity of the reference grating at zero. (B) Target phase disparity at threshold. Means and standard errors for four observers for the five conditions of the experiment are shown from left to right: (1) target and reference vertically oriented (as in panel A); (2) target with no reference stimulus; (3) target oriented at 45°, reference at 90°; (4) target oriented at 45°, reference at 135°; and (5) target oriented at 45°, reference at 45°. Equivalent spatial disparities appear on the right ordinate. (Data from Farell, 2006.)
Figure 5.
 
Disparity thresholds as a function of small orientation differences. Mean thresholds and standard errors for two observers for center-surround grating stimuli are shown. The stimuli were similar to those of Figure 4 but had an orientation difference no larger than 30°. The dashed line extending from the 0° threshold for observer S2 gives the horizontal disparity prediction. (From Farell, 2006.)
These results generalize a well-known connection between orientation and stereoacuity for lines. Measured in terms of horizontal disparity, stereoacuity is highest with vertical lines and drops with a rotation from vertical; it generally falls with the cosine of the angular distance of the line from vertical (Blake, Camisa, & Antoinetti, 1976; Davis, King, & Anoskey, 1992; Ebenholtz & Walchli, 1965; see also Remole, Code, Matyas, & McLeod, 1992; for citations of earlier work, see Ogle, 1955). Several explanations for this line orientation effect have been proposed: A rotation away from vertical might raise the disparity threshold by reducing the vertical extent of the line, thereby decreasing the stereo-relevant line length (Blake et al., 1976); by reducing the perpendicular disparity between the lines, thereby inhibiting the effectiveness of the horizontal disparity of the line (Ebenholtz & Walchli, 1965); or by reducing the magnitude of the horizontal component of the line's perpendicular disparity (Arditi, 1982). In each case, the depth cue affected by rotation of the line is horizontal disparity. 
An alternative explanation is that horizontal disparity is not conserved at threshold because it is not the signal used for detection. What is conserved at threshold is the line's perpendicular disparity (as in the two parallel-stimulus conditions of Figure 4B). A constant perpendicular disparity means that horizontal disparity increases as the reciprocal of the cosine of the line's angular distance from vertical. The constancy holds until the orientation reaches about 60° to 75° from vertical. Beyond that, sensitivity drops.2 
Support for this interpretation comes from a study by Morgan and Castet (1997). They found that stereoacuity for gratings and Gabor patches, judged relative to their square apertures, approximated a constant disparity phase angle over most of the orientation range, from vertical to within about 15° of horizontal. By varying both stimulus orientation and disparity direction, they showed the binocular matching direction to be best described as perpendicular to the orientation of these stimuli, consistent with the line studies just mentioned. For isotropic stimuli (Gaussian blobs), however, the matching direction was horizontal. Interocular matching was possible over a considerable range around these optimal directions for both stimulus types, as shown by elevated but measurable thresholds. This suggests that oriented versus unoriented is not an all-or-none stimulus distinction; line length, for example, trades off the weight of the line, which is oriented, with that of its terminators, which are not (e.g., van Ee & Schor, 2000). I prefer to think of the contrast as “effectively one dimensional” versus “effectively two dimensional.” This maps more naturally onto the stereo aperture problem and the blurring of the true-match/false-match distinction intrinsic to 1-D patterns. The distinction has proved productive in studies of motion (e.g., Adelson & Movshon, 1982). It will also come up in later discussions of the linkage between the disparity processing of 1-D and 2-D stimuli. 
Perceived depth
Horizontal disparity is the conventional metric for measuring stereoacuity. Yet, the data just reviewed suggest that, for 1-D stimuli, perpendicular disparity rather than horizontal disparity might be the effective stereoacuity metric. It is perpendicular disparity amplitude that is conserved at threshold across stimulus orientations, and the greater the angular difference between disparity directions, the higher the threshold. For 2-D stimuli, by contrast, stereoacuity has been uncomplicatedly a measure of horizontal disparity. A few studies have explored the relationship between perpendicular and horizontal disparity measures. They have done this by finding the disparities that give 1-D and 2-D stimuli equal perceived depths. 
van Ee and Schor (2000) measured the disparity that gave a disk the same apparent depth as an oblique line with a fixed disparity. They found that the depth-matching disparity varied with line length, as expected. For the longest line, the two stimuli appeared at the same depth when the disparity direction of the disk was roughly midway between horizontal and the perpendicular direction of the line. This was taken to be equal to the line's effective disparity direction; the equivalence followed from the assumption that equal perceived depth implies equal horizontal disparity.3 
Evidence for a very different role for horizontal disparity—that it is not a factor in depth-from-disparity calculations involving 1-D stimuli—comes from studies of the phase disparities producing a depth match between a simultaneously presented Gabor patch and plaid (Chai & Farell, 2009; Farell, Chai, & Fernandez, 2009; Farell & Ng, 2014). The plaid had a fixed disparity magnitude and direction. The studies measured how the Gabor's orientation affected the disparity that would establish a perceptual depth match between the two stimuli. Figure 6A shows an oblique (45°) Gabor paired with a symmetrical plaid from one experiment (Farell et al., 2009). In the other condition, the Gabor was vertical. The plaids had the same non-horizontal disparity in both cases. Psychometric functions appear in Figure 6B for two observers. The vertical arrows show the depth-matching Gabor disparities. The interesting feature of these disparities is their polarities: One is positive, and the other is negative. This is true of their horizontal components, as well. 
Figure 6.
 
Psychometric functions for grating-plaid pairs. (A) Target grating and reference plaid. The plaid was symmetrical and had a fixed disparity with a direction of 60°. The grating had an orientation of 45° (shown here) or 90° and a disparity that varied under control of a constant-stimulus procedure. Stimuli were presented for 150 ms in trials blocked by conditions. (B) Psychometric functions for two observers; θ gives the grating orientation; the plaid was identical for the two grating orientations. Vertical arrows point to grating disparity values that yielded a perceived depth match with the plaid. Unblocked conditions, including multiple plaid disparities presented in random order, produced similar data. (From Farell et al., 2009.)
For much of the data from the series of experiments, the matching phase disparity of the Gabor, D1, could be approximated by Equation 1:  
D1 = D2 sin(θ − ϕ)  (1)
where θ (0° ≤ θ < 180°) is the Gabor's orientation, ϕ (0° ≤ ϕ < 360°) is the disparity direction of the plaid, and D2 is its disparity amplitude. D2 is positive; D1 can be positive or negative, depending on the relative values of θ and ϕ. 
Figure 7 illustrates examples of depth matches between a Gabor and a plaid (as in Figure 6A) predicted by Equation 1. The solid line gives the orientation of the Gabor. Arrows depict disparity vectors. Figures 7A and 7B show the Gabor with the disparity vectors (blue arrow) that produce a depth match with the plaid (whose disparity vector is given by the crimson arrow; its direction is +45°). Note that in Figure 7A the two disparity vectors have horizontal components with the same polarity. In Figure 7B they have opposite polarities (as in Figure 6B), yet both Gabors match the same plaid in depth. This is a consequence of the absence of a horizontal disparity term (or any other non-stimulus–specific disparity direction) in Equation 1. The equation is a version of the intersection-of-constraints (IOC) construction often used in the context of pattern motion (Adelson & Movshon, 1982). The dashed lines in Figure 7 are disparity constraint lines. D1 in the equation can also be described as the amplitude of the projection of the vector (D2, ϕ) onto the axis perpendicular to the 1-D stimulus. Also absent from Equation 1 are terms for the orientations of the components of the plaid. These determine the scale of image features as a function of direction and the similarity between the orientations of the components and the grating. Over a wide range, these variables do not affect the perceived depth between the stimuli (Chai & Farell, 2009). 
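Equation 1 and its projection reading can be checked in a few lines of Python (a sketch of my own; the function and variable names are not from the cited studies). The two functions below agree for any θ and ϕ, which is just the statement that Equation 1 is an IOC construction:

```python
import math

def matching_disparity(d2, phi_deg, theta_deg):
    """Equation 1: disparity amplitude D1 of a 1-D stimulus (orientation
    theta) that depth-matches a 2-D stimulus with disparity (D2, phi)."""
    return d2 * math.sin(math.radians(theta_deg - phi_deg))

def perpendicular_projection(d2, phi_deg, theta_deg):
    """The same quantity read geometrically: the projection of the
    vector (D2, phi) onto the axis perpendicular to the 1-D stimulus."""
    perp = math.radians(theta_deg - 90.0)
    vx = d2 * math.cos(math.radians(phi_deg))
    vy = d2 * math.sin(math.radians(phi_deg))
    return vx * math.cos(perp) + vy * math.sin(perp)

# Plaid disparity direction 60 deg (as in Figure 6): the depth-matching
# D1 flips sign between a 45 deg Gabor and a vertical (90 deg) Gabor.
d2, phi = 1.0, 60.0
assert matching_disparity(d2, phi, 45.0) < 0 < matching_disparity(d2, phi, 90.0)
```

Because only the angle θ − ϕ enters, no privileged (e.g., horizontal) disparity axis appears anywhere in the computation.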
Figure 7.
 
Predicted 1-D/2-D depth matches. (AD) The disparity vector of a 2-D stimulus is represented by a magenta arrow. The disparity vector of a 1-D stimulus that produces a perceptual depth match according to the IOC construction is represented by a blue arrow. The thick line shows the orientation of the 1-D stimulus, the thin line is the perpendicular orientation, and the dashed line is the disparity constraint line. All predicted relative disparities except (D) agree with experimentally observed values. (E) Superimposed disparity vectors of the 1-D stimuli, shown scaled up, reproduce the disparity vector of the 2-D stimulus, similarly scaled, as an IOC result. The depth-matching 1-D disparities therefore would be component disparities of the 2-D stimulus.
The plaid in Figure 7C has a disparity direction different from the others: vertical. Equation 1 predicts that an oblique Gabor with the illustrated disparity should produce a depth match. It does (Chai & Farell, 2009; see also Ito, 2005). The plaid disparity in Figure 7D is the same as in Figures 7A and 7B. The Gabor patch, which is horizontal, is shown with the predicted disparity for a depth match. However, the observed points of subjective equality are highly variable across observers and psychometric functions are shallow (Chai & Farell, 2009). This suggests that horizontal 1-D patterns fall into two classes. In one class, they appear as stand-alone stimuli; in the other, they appear as components of 2-D stimuli. The former has a vertical disparity direction in Figure 7D and makes little or no contribution to depth-from-disparity computations. The latter functions like other components of a 2-D stimulus, even supporting the IOC-predicted stereo depth when all other components have zero disparity (Chai & Farell, 2009). In Figure 7E, the Gabor disparities of Figures 7A, 7B, and 7D are scaled up and superimposed. As 1-D components, their disparities, in IOC combination, recreate the disparity of the reference plaid, scaled proportionately (crimson arrow). 
Figures 6 and 7 show that a 1-D stimulus with positive horizontal disparity can appear to match the depth of a 2-D stimulus that matches the depth of another 1-D stimulus having negative horizontal disparity. Such non-transitive triplets are easy to generate (Farell & Ng, 2014). If stimulus A is seen as far relative to stimulus B, and stimulus B is seen as far relative to stimulus C, A may yet be seen as near relative to C. This and other examples shown here strongly suggest that there are two depth-from-disparity computations. One is associated with 1-D stimuli. Its output is orientation specific and generally non-veridical. The other is associated with 2-D stimuli, conventionally computed on horizontal disparity, to which we turn next. Why there are two computations, how they are related, and what this implies more generally about depth from disparity will be taken up at the end of the paper.4 
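The opposite-polarity matches behind such triplets are easy to verify numerically. In this sketch (my construction, following Equation 1), gratings A (45°) and C (vertical) are each given the disparity that depth-matches plaid B, yet the horizontal components of their disparity vectors have opposite signs, so any comparison based on horizontal disparity alone would place A and C at different depths:

```python
import math

def eq1_d1(d2, phi_deg, theta_deg):
    # Equation 1: perpendicular disparity amplitude of a 1-D stimulus
    # (orientation theta) that depth-matches a 2-D stimulus (D2, phi).
    return d2 * math.sin(math.radians(theta_deg - phi_deg))

def horizontal_component(d1, theta_deg):
    # The 1-D disparity vector lies along theta - 90 deg, so its
    # horizontal component is d1 * sin(theta).
    return d1 * math.sin(math.radians(theta_deg))

d2, phi = 1.0, 60.0   # plaid B: disparity direction 60 deg, as in Figure 6
h_a = horizontal_component(eq1_d1(d2, phi, 45.0), 45.0)  # grating A, 45 deg
h_c = horizontal_component(eq1_d1(d2, phi, 90.0), 90.0)  # grating C, vertical
# A depth-matches B and C depth-matches B, yet their horizontal
# disparities have opposite signs:
assert h_a < 0 < h_c
```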
Psychophysical evidence: 2-D stimuli
Horizontal disparities have long been recognized as the primary source of stereoscopic information. Vertical disparities, by contrast, have been regarded either as signal or as noise, making it advantageous to detect them or to tolerate them, respectively. 2-D patterns are generally considered to have unambiguous disparities; they do not have an aperture problem. Because of this, disparities that deviate from horizontal might be expected to have a greater impact on the stereoacuity and depth estimates of two-dimensional stimuli than of one-dimensional ones. 2-D stimuli drawn from a diverse set of categories have been used experimentally, yet the effects of disparity direction on stereo vision are largely independent of this diversity and reasonably well agreed upon. 
Stereoacuity
Two general methods have been used to study disparity sensitivity as a function of disparity direction. One is to measure thresholds for particular directions (usually horizontal and vertical) and to compare them. A second method measures the effect on stereoacuity of the addition of vertical disparity. This is usually framed as a tolerance measurement, with the implication that the vertical disparity is noise. For 2-D stimuli, stereoacuity consistently worsens with increases in vertical disparity (Farell, 2003; Farell, 2006; Lothridge, 1953; McKee et al., 1990; Mitchell, 1970; Nielsen & Poggio, 1984; Ogle, 1955; Ogle, 1958; Stevenson & Schor, 1997; van Ee & Schor, 2000; Westheimer, 1984). How rapidly stereoacuity falls varies considerably across studies. Stimulus size, retinal location, spatial frequency content, and duration contribute to this variation (Howard & Rogers, 2002). A similar set of factors influences the onset of vertical-disparity–induced diplopia (Duwaer, 1982; Duwaer & Brink, 1982; Mitchell, 1970; Prazdny, 1985; Schor & Tyler, 1981). All of this points to mechanisms capable of modulating their responses to vertical disparities, even if their tuning is, by other criteria, strictly to horizontal disparity (see Read, 2010). However, as noted earlier, depth cannot be derived directly from vertical disparity. The high detection threshold for vertical disparity is not a perceived depth threshold, but a threshold for perceived fuzziness, ocular motor sensations, or diplopia (e.g., McKee et al., 1990). For stereo depth, vertical is the null direction. 
One can expect some tolerance for small vertical disparities from mechanisms that make strictly horizontal interocular matches. The vertical spatial extent of horizontal disparity detectors, plus image blurring, both optical and neural, provide buffers against vertical disparity noise. But, as shown below, some studies show a high tolerance, and there is no agreed-upon point beyond which tolerance for vertical disparity becomes evidence for mechanisms tuned to non-horizontal disparities. 
Two contrasting studies, both using random-dot displays, illustrate the variability of tolerance estimates and of the conclusions drawn from them. The stimuli used by Nielsen and Poggio (1984) were static, small (1.5° diameter), and fovea centered. Observers’ depth discrimination failed to reach threshold (75% correct) when vertical disparity was 3.6′ or larger in one stimulus condition and 7.2′ or larger in another. Small vertical disparities (∼2′) barely affected performance relative to the horizontal-disparity-only level. Nielsen and Poggio (1984) took the data as evidence for a 1-D matching process with a tolerance for vertical disparities. The tolerance range marginally exceeded the vertical amplitudes that can be reliably canceled by divergent eye movements. Image blurring and averaging across matching mechanisms were hypothesized to underlie the tolerance, in addition to a global shifter circuit functioning similar to eye movements. 
Stevenson and Schor (1997) used larger dynamic RDSs to measure the range of combinations of horizontal and vertical disparities that permitted discrimination of depth between abutting 12° diameter semicircles. They were able to measure depth discrimination at threshold when disparities in the vertical direction were as large as 30′. With a horizontal disparity of 5′, their observers could reach threshold polarity discrimination performance when the disparity direction was less than 10° from vertical. Reducing the stimulus diameter by a factor of two restricted the ranges of both horizontal and vertical disparity tolerance roughly proportionally. When the observers’ task was changed from depth discrimination to correlation detection, tolerance for vertical disparities increased. Stevenson and Schor (1997) took the data as evidence for a 2-D interocular matching process. Even with the considerable tolerance to vertical disparity found in this study, though, the disparity operating range was clearly anisotropic, horizontal exceeding vertical by a factor of about two. 
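The two threshold figures reported by Stevenson and Schor (1997) are mutually consistent, as a quick back-of-envelope check shows (a sketch with my variable names, not the authors' analysis):

```python
import math

# A fixed 5 arcmin horizontal disparity with the disparity direction
# 10 deg from vertical implies a vertical component of 5 / tan(10 deg).
horizontal = 5.0               # arcmin
angle_from_vertical = 10.0     # deg
vertical = horizontal / math.tan(math.radians(angle_from_vertical))
# vertical comes out near 28 arcmin, in line with the ~30 arcmin
# vertical range the study measured directly.
```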
Perceived depth
Just as stereoacuity falls as vertical disparity is added to the stimulus, the amplitude of perceived depth diminishes if enough vertical disparity is added. Beyond a certain limit, correspondence will fail entirely. However, within a moderate range and especially if applied locally, vertical disparity has surprisingly little effect on perceived depth. Lothridge (1953), for example, found an effect of vertical disparity on the variability of equal-depth settings between a line segment and a disk but little effect on mean depth estimates. Ogle (1955) found the same using point light sources (see also Howard & Kaneko, 1994; Ogle, 1962; van Ee & Erkelens, 1995). As mentioned above, in addition to sparing depth up to a point, vertical disparity can also modulate perceived depth and slant by providing estimates of viewing parameters (Bishop, 1989; Gillam & Lawergren, 1983; Longuet-Higgins, 1982; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). 
A contrasting study by Friedman, Kaye, and Richards (1978) is perhaps the strongest demonstration of the detrimental effect of vertical disparity on perceived depth. Yet, the proper interpretation is unclear. The data show that the presence of vertical disparity sharply lowered the perceived depth separation between a disk and a fixation point, relative to the depth expected from the horizontal disparity component alone. The data were normalized by the depth seen when the vertical disparity was zero; how much depth was seen in this case was not reported. Indeed, how much depth could have been seen is unclear. The disparity gradient in the experiment was infinite and the disk would be expected to have appeared diplopic in all conditions. How much depth is derived from such displays is problematic even in the absence of vertical disparity. 
Another study measured the perceived relative depth as a function of the difference in the disparity directions of a pair of stereo plaids (Farell et al., 2010). In one depth-matching task, the disparity amplitude of one of the plaids was varied across trials. The experiment measured the amplitude that delivered a perceptual depth match with the other plaid, a reference stimulus with a fixed disparity magnitude and direction. In the other task, which measured depth interval estimates, the disparities of both plaids were fixed within a condition. Each trial consisted of two intervals, one containing the plaids and the other containing a pair of vertical gratings whose disparity difference varied across trials. In both tasks, depth matches occurred when the horizontal disparities, between stimuli in one condition and between stimulus pairs in the other, were similar. Disparity magnitudes increased with vertical disparity, but the horizontal component of the depth-matching disparity remained equal. This invariance extended over a 120° range of disparity direction differences between the plaids. 
Overall, 2-D stimuli show stereoacuity to be optimal when disparity is strictly horizontal. They also show that the depth derived from horizontal disparity is surprisingly similar to the depth derived from oblique disparities with the same horizontal component. The role played by horizontal components distinguishes the disparity computations of 1-D and 2-D stimuli. One possible implication is that 2-D stimuli might produce multiple disparity signals, as suggested by Figure 2. A pair of disparity signals processed across two stages will be proposed in a later section. 
The role of 1-D components of 2-D stimuli
Stereoacuity
Figure 2 showed two plaid patterns with equal component phase disparities and unequal horizontal spatial disparities. A component analysis of disparity relates the functions of the two measures. Components play a role in modeling the disparity energy processing of 2-D stimuli, yet modeling efforts often consider the case of a single (nominally vertical) detector orientation and the perpendicular (horizontal) disparity (exceptions include Mikaelian & Qian, 2000; Patel, Bedell, & Sampath, 2006; Patel et al., 2003). A question to start with, then, is whether what we know about the processing of the disparity of 1-D stimuli is informative about the role of 1-D components in the analysis of 2-D stimulus disparity. 
Threshold measurements provide some evidence. Horizontal disparity thresholds are not equal for sinusoidal plaids with the component orientations pictured in Figure 2. The thresholds differ by about a factor of two. What are equal at threshold are the component phase disparities of the plaids. These component thresholds are comparable to the thresholds for the gratings presented individually (Farell, 2003). By itself, such a result is ambiguous: The plaids' horizontal phase disparities at threshold are also equal (Morgan & Castet, 1997). That is, at threshold the two plaids have horizontal disparities that are an equal proportion of their periods in the horizontal direction. Additional sources of information are required to adjudicate. Both probability summation between the components' disparities and the effect of a component's horizontal orientation imply that the disparity determining threshold is that of the components, not the plaid (Farell, 2003). This is why the scaling of 2-D image features between the plaids in Figure 2 does not affect the horizontal disparity gain and why such scaling cannot drive threshold below what is required to detect component disparities. van Ee, Anderson, and Farid (2001) showed a similar invariance for spatially intersecting broadband stimuli (bars, letters) that comprise the components of conglomerations of overlapping objects.5 
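The relation between component disparity and plaid horizontal disparity follows from the IOC geometry. In this sketch (my own illustration; the exact component orientations of Figure 2 are not given here, so the values below are hypothetical), a symmetric plaid whose components lie ±θ from vertical and each carry perpendicular disparity d has horizontal disparity d / cos(θ), so equal component disparities can yield horizontal disparities differing by a factor of about two:

```python
import math

def plaid_horizontal_disparity(d_perp, theta_from_vertical_deg):
    """IOC horizontal disparity of a symmetric plaid whose components,
    oriented +/- theta from vertical, each carry perpendicular
    spatial disparity d_perp."""
    return d_perp / math.cos(math.radians(theta_from_vertical_deg))

# Equal component disparities, two hypothetical component orientations:
d = 1.0
narrow = plaid_horizontal_disparity(d, 20.0)  # components near vertical
broad = plaid_horizontal_disparity(d, 62.0)   # more oblique components
# Same component disparity at threshold, about twice the horizontal
# disparity at threshold:
assert 1.9 < broad / narrow < 2.1
```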
Component-limited disparity thresholds are also seen in compound gratings, where neither visible surface features nor luminance gradients determine stereoacuity (Heckmann & Schor, 1989; Levinson & Blake, 1979). Binocular fusion is similarly limited by the spatial frequency of the components of compound gratings (Levinson & Blake, 1979; Schor, Heckmann, & Tyler, 1989). Initial disjunctive eye movements, too, are made in response to component disparities, not 2-D pattern disparities (Quaia et al., 2013). Reinforcing this picture are the size-disparity correlation (Blakemore & Hague, 1972; Farell, Li, & McKee, 2004a; Farell, Li, & McKee, 2004b; Levinson & Blake, 1979; Prince & Eagle, 1999; Schor, Wood, & Ogawa, 1984; Smallman & MacLeod, 1994; Tyler, 1974), scale-dependent depth percepts (Julesz & Miller, 1975; Mayhew & Frisby, 1980), and component adaptation effects (Felton, Richards, & Smith, 1972). 
Considering their foundational status in spatial vision generally, it is hardly surprising that 1-D components limit sensitivity to 2-D stimulus disparity. But few studies have looked beyond disparity detection to see whether 1-D components make a discernible contribution to the processing of suprathreshold disparities of 2-D stimuli. These studies are considered next. 
Perceived depth
Patel et al. (2003) tested the hypothesis that only vertically oriented disparity detectors contribute to stereo depth. The stimuli were filtered RDSs with zero disparity everywhere except within a central square region. Within the square, disparity was added to components within two bands of the orientation spectrum. The bands were obliquely and symmetrically oriented (Figure 8A). Phase disparity was equal across the two bands and fixed at 90° for all spatial frequencies. The disparate oblique components sufficed for the central square to appear in depth. The amount of depth, measured with an adjustable probe, varied with the orientation of these components, depth being least when the orientation was near vertical. As the orientation was made more horizontal, the components' perpendicular disparity became more vertical and perceived depth increased as an inverse cosine function of disparity direction, consistent with an IOC result (Figure 8B). Analogous results are seen with motion signals (Prince, Offen, Cumming, & Eagle, 2001). A disparity energy model equipped with obliquely oriented detectors responded strongly in simulations to the disparity of similarly oriented stimulus components, while a model with only vertically oriented detectors responded feebly and could not account for observers’ depth judgments (Patel et al., 2003). 
Figure 8.
 
Depth from oblique disparities. (A) Schematic example of the stimuli used by Patel et al. (2003). The central squares within the RDSs have oblique disparity within two 30°-wide directional bands (θ). The centers of these bands are ±30° in the upper RDS and ±45° in the lower RDS. Oblique arrows represent 90° phase disparities at one spatial scale for components at the extremes of these bands (i.e., disparities perpendicular to central random-dot components with orientations between 105° and 135° and between 45° and 75° in the upper RDS). These component disparities would be found in a RDS having the horizontal disparity given by the IOC lines and proportionally covering the range of the gray arrow. Other components at a larger or smaller scale, also with 90° phase disparities, would extend this range to greater and smaller values. All components outside the disparate orientation range had zero disparity, as did the entire surrounding RDS; disparities are not drawn to scale. The observers in the study by Patel et al. (2003) matched the perceived depth of the inner square of the RDS by adjusting the disparity of a simultaneously displayed and overlapping 3′ × 3′ square probe. (B) Mean depth matching disparities plotted as a function of the central direction of the component disparities; a disparity direction of –75° comes from components centered on an orientation of 15°. The inverse cosine of orientation is pinned to the data point for θ of 30° and center disparity direction of 15°. Both the center orientation of the disparate bands and their bandwidth affect perceived depth, which dissipates as the disparity direction nears vertical. (Panel B was adapted from Patel et al., 2003.)
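The inverse-cosine growth of matched depth in Figure 8B can be sketched numerically. The function below is a minimal illustration of the IOC prediction, not the authors' model; the names and the convention of measuring disparity direction from horizontal are assumptions.

```python
import math

def ioc_horizontal_disparity(perp_disparity, direction_deg):
    """Horizontal disparity consistent, via IOC, with a component whose
    perpendicular disparity has the given magnitude and points
    direction_deg away from horizontal (illustrative sketch, not code
    from Patel et al., 2003)."""
    return perp_disparity / math.cos(math.radians(direction_deg))

# Rotating the component disparity direction from horizontal (0 deg)
# toward vertical increases the IOC-consistent horizontal disparity,
# mirroring the rise in depth matches as the direction nears vertical.
```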
In a related study, Patel et al. (2006) blurred obliquely oriented bands in RDSs, raising the threshold for a target defined by horizontal disparity. Both studies make a frequently neglected point: Oblique disparities contribute to the stereo depth of orientationally broadband stimuli whose disparity is horizontal. For any particular spatial frequency, the relation between component and object disparities can be expected to follow Equation 1 (see Figure 7E). When the disparity of the object is horizontal, its vertically oriented components have the largest phase disparity. More horizontally oriented components have progressively smaller phase disparities, as reflected in the single-cell recordings of Figure 3. This relation between object disparity direction and component disparity magnitude is, of course, general: The component with the largest disparity will have an orientation orthogonal to whatever happens to be the disparity direction of the object. 
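The relation between object disparity direction and component phase disparity described above can be written out as a small function. This is a hedged paraphrase of Equation 1; the angle conventions (component orientation measured from vertical, object disparity direction from horizontal) and the names are assumptions.

```python
import math

def component_phase_disparity(obj_disp_deg, obj_dir_deg, comp_orient_deg, sf_cpd):
    """Phase disparity (radians) of a 1-D component embedded in an object.
    comp_orient_deg: component orientation, degrees from vertical (its
    perpendicular axis therefore lies comp_orient_deg from horizontal).
    obj_disp_deg, obj_dir_deg: object disparity magnitude (deg of visual
    angle) and direction (degrees from horizontal). sf_cpd: spatial
    frequency in cycles/deg. Illustrative sketch only."""
    # Project the object disparity onto the component's perpendicular axis.
    perp_disp = obj_disp_deg * math.cos(math.radians(obj_dir_deg - comp_orient_deg))
    # Convert the positional disparity to phase at this spatial frequency.
    return 2.0 * math.pi * sf_cpd * perp_disp
```

With a horizontal object disparity, a vertical component (orientation 0°) receives the full disparity, and the phase disparity shrinks as the component orientation tilts toward horizontal.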
Adaptation is another way of exposing the role of components in the analysis of object disparities. Figure 9A shows a grating with “near” depth relative to the fixation disk and Figure 9B shows a plaid with “far” depth when cross-fused. The grating is a component of the plaid, whose other component has zero disparity. Figure 9C shows qualitatively the depths at which the two gratings would appear relative to the fixation disk if each was presented individually. Suppose we presented these two gratings, differing in orientation, simultaneously and superimposed. If the gratings had the proper orientations and were fairly similar in spatial frequency and contrast, the observer would see nothing in front of or in the plane of fixation, but would see a coherent plaid behind it. Indeed, the disparity of the plaid as a whole has a positive (“far”) horizontal component. This disparity might be detected directly, or it might be calculated from disparities extracted from the components. 
Figure 9.
 
Adapting the disparity of a reversed-depth plaid. (A) Sinusoidal gratings with negative (“near”) disparity when cross-fused. (B) Sinusoidal plaid with positive (“far”) disparity when cross-fused. The plaid is composed of the grating in panel A and a zero-disparity grating with a different orientation. (C) The depth order of the two gratings, as each would appear if displayed individually along with the fixation disk. One grating would appear in front of fixation and the other in the fixation plane. If they were spatially and temporally aligned and given appropriate orientations, they would appear as a coherent plaid behind fixation. Nothing would appear on the near side or in the fixation plane. The “far” horizontal disparity of the plaid could be detected directly or calculated from the disparities of the plaid's 1-D components. An adaptor with “far” disparity would affect the perceived depth of the plaid in the former case; one with “near” disparity would do so in the latter case. Adapting at a “near” disparity with a stimulus having an orientation similar to that of the “near” grating is essential for influencing the perceived “far” depth of the plaid (Farell, 1998).
If judgments of the plaid's depth could be nudged by prior adaptation, where in depth should we put the adaptor to make this happen? Behind the plane of fixation, where the plaid is seen but no disparity energy resides? Or in front, where nothing is seen but where a component has disparity energy in a particular direction and amplitude (Figure 9C)? Adapting in front of fixation was found to be effective, but only when the orientation of the adaptor was similar to that of the disparate component of the plaid (Farell, 1998). Adapting behind fixation, where the plaid was perceived, had no effect unless the adaptor, like the plaid, had the “near” component's disparity and orientation; adaptors with the plaid's disparity but different component orientations were ineffective. This is consistent with object disparity deriving from component disparity and component disparity being orientation specific. 
To summarize, 1-D signals propagate to 2-D stimuli. Relative orientation affects stereoacuity not only for 1-D stimuli (Figure 4) but also for 2-D patterns (Farell, 2006). Oriented components set threshold limits for 2-D patterns with horizontal disparity. They contribute to the perception of depth of 2-D patterns having either horizontal disparity (Patel et al., 2003; Patel et al., 2006) or non-horizontal disparity (Farell, 1998). The correspondence between oriented components results in a 2-D field of interocular matches, which is compatible with physiological evidence of perpendicular disparity coding. It will be suggested below that orientation links the relative disparity computations used for 1-D and 2-D stimuli. 
Why two computations?
Horizontal disparity has a special place in the stereo vision literature as the locus of true matches and the primary signal behind binocular depth perception. A horizontal metric is built into the depth-from-disparity circuitry of most models of stereo vision. Non-horizontal disparities either are tolerated in the extraction of horizontal disparity or disrupt it (e.g., Nielsen & Poggio, 1984; Prazdny, 1985). Vertical components will cancel (e.g., Patel et al., 2003) or be used to support subsequent calculations (e.g., Backus et al., 1999; Rogers & Bradshaw, 1993). Relative disparity is calculated directly from individual horizontal disparities by a differencing operation. 
However, as reviewed here, horizontal disparity does not always correlate with stereoacuity or perceived depth. The visual system seems to apply a different metric to 1-D stimuli. The effective disparity direction in these cases varies with stimulus orientation, rendering the depth-from-disparity computation non-veridical. It is not even qualitatively veridical. A stimulus with positive horizontal disparity can match the depth of a stimulus with negative horizontal disparity, and the depth order between stimuli can be non-transitive. A horizontal disparity computation is available but apparently not used when the stimulus is 1-D. Despite this, there are connections between the disparity processing of 1-D and of 2-D stimuli. Sensitivity to 1-D component disparity limits stereoacuity for 2-D stimuli, and the disparity of 1-D components contributes to the perceived depth of 2-D stimuli. Yet, as seen above, 1-D stimulus disparity is analyzed along a stimulus-specific disparity axis, while for 2-D stimuli the effective direction for obtaining depth from disparity is horizontal, even when the stimulus has non-horizontal disparity. Why, then, are there two depth-from-disparity computations? 
In a component analysis of disparity, an intersection of constraints or a similar construction operating on 1-D components specifies the overall object disparity (Figure 7E). The IOC in this case operates on multiple superimposed components. Other cases are examined in Figure 10. The figure shows examples of how the depth-from-disparity computations play out for the three pairwise cases of stimulus comparisons that have been considered here: between a pair of 1-D stimuli, between a 1-D stimulus and a 2-D stimulus, and between a pair of 2-D stimuli. 
Figure 10.
 
Relative depth of 1-D and 2-D stimuli. (A) 1-D stimulus pairs (Gabors), shown with disparity vectors. The disparity constraint line of one stimulus appears on the left and that of the other stimulus appears on the right. Each constraint line exceeds the disparity magnitude of the other stimulus, indicating that IOC provides an inconsistent relative disparity. (B) A 1-D stimulus (Gabor) and a 2-D stimulus (plaid) with constraint and projection lines giving consistent relative disparity measures. (C) A pair of 2-D stimuli (plaids) with unequal horizontal disparities (left) and with the same disparity amplitudes after rotation of one disparity (center) has reversed the relative horizontal disparities. The same non-parallel disparities from plaids with differing orientations can be compared along the RDA, restoring the original relative disparity ordering (right).
Figure 10A shows two non-overlapping Gabor patches, their orientations differing. They are shown twice, along with a sketch of their disparity vectors. On the left, they appear with the disparity constraint line for the patch at top; the disparity vector for the patch at bottom falls short of this constraint line. On the right, they are shown with the constraint line for the patch at bottom; this time, the disparity vector for the patch at top falls short. The choice of which patch to consider the “target” and which the “reference” could determine which constraint line the disparity comparison is based on, and therefore which patch is judged “far” relative to the other. Performance in this simple task would not be expected to be a stable function of the disparities alone, and the data bear this out (Farell & Ng, 2016). 
A small stimulus change produces stability. In Figure 10B, with the addition of a second sinusoidal component one of the stimuli becomes two dimensional. The disparity comparison between the stimuli can be described in two equivalent ways, shown here. The constraint line in Figure 10B shows that the Gabor patch will be judged to be greater in depth than the plaid. The projection of the plaid's disparity vector onto the Gabor's disparity axis shows the same thing. In neither case does horizontal disparity play a role. For the experimental task (“Say whether the 1-D target is near or far relative to the 2-D reference stimulus”), the algorithm is non-veridical and potentially non-transitive. One may ask where this apparently maladaptive computation comes from. There is a task for which the algorithm provides a veridical answer: “Say whether the 1-D target has greater or less disparity than it would have if it were a component of the 2-D stimulus.” The connection between the two tasks has been pointed out earlier (Figure 7E): When applied to the components of a single object, the IOC algorithm gives the object's disparity; when applied between objects in support of relative depth judgments, it is aberrant, as seen here in Figures 10A and 10B. A plausible reason for its use in relative depth judgments is that there is only one disparity calculation the visual system performs on 1-D stimuli and this is IOC. Its overwhelmingly most common usage—to derive object disparities when 2-D patterns are viewed binocularly—is veridical and adaptive. At the point when coherent depth emerges from the IOC combination of component disparities (Figure 9), object properties, including the apparent horizontal disparity metric, displace component properties (Chai & Farell, 2009). 
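The IOC combination of component disparities amounts to intersecting constraint lines. The sketch below is generic, not code from any of the cited studies: each component, with its orientation and perpendicular disparity, constrains the object disparity D to a line n · D = d, where n is the unit normal to the component; two non-parallel components pin down D uniquely.

```python
import math

def ioc(orient1_deg, d1, orient2_deg, d2):
    """Object disparity (Dx, Dy) from two 1-D components via the
    intersection of constraints. Each component, oriented orient_deg
    from vertical, constrains the object disparity D to the line
    n . D = d, where n is the component's unit normal and d its
    perpendicular disparity. Generic sketch; conventions are assumed."""
    def normal(orient_deg):
        a = math.radians(orient_deg)  # normal lies orient_deg from horizontal
        return math.cos(a), math.sin(a)
    (ax, ay), (bx, by) = normal(orient1_deg), normal(orient2_deg)
    det = ax * by - ay * bx
    if abs(det) < 1e-12:
        # Parallel components leave the aperture problem unresolved.
        raise ValueError("parallel components: object disparity underdetermined")
    # Solve the 2x2 linear system n1.D = d1, n2.D = d2 by Cramer's rule.
    return (d1 * by - d2 * ay) / det, (ax * d2 - bx * d1) / det
```

With these conventions, a vertical and a horizontal component simply recover the object's horizontal and vertical disparities; for oblique pairs, as in the plaid of Figure 9, the object disparity is constrained by each component's perpendicular disparity but predictable from neither alone.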
In the last of the dimensional pairings examined, both stimuli are two dimensional (Figure 10C). Their disparity directions are both horizontal in the case shown on the left and horizontal and oblique in the middle case. For both cases and unlike cases involving 1-D stimuli, relative disparity means relative horizontal disparity and stimulus orientation is not a factor. 
Yet, the issue of 2-D stimulus orientation has not been examined closely. An observation about laboratory stimuli helps to link horizontal and orientation-specific disparity computations. The stimuli from which our experimental knowledge of stereo processing derives are both one dimensional and two dimensional, oriented and unoriented—disks, dots, edges, bars, outlined figures, Gabor patches, gratings, plaids, fields of random dots. Despite the diversity, across studies these stimuli share, with very few exceptions, certain properties. They have been either 
  • isotropic, or
  • symmetrical about a vertical axis, or
  • vertically oriented, either individually or on average over the pooled ensemble
In each case, the average component orientation is vertical and the average perpendicular disparity direction is horizontal. For isotropic stimuli, such as disks and random-dot patterns, I am taking horizontal and near-horizontal components to be null signals for the purpose of calculating disparity direction; their contribution to the orientation average is discounted or down-weighted, leaving the contribution of the remaining components to tend toward vertical. (Conversely, cyclovergence detection preferentially weights horizontally oriented disparity detectors [Howard, 1991].) This vertical bias likely holds also for stimuli outside the lab, where gravity, the earth/sky contrast, and the left–right equivalence of macroscopic natural objects shape the orientation distribution. Conspicuous peaks in this distribution appear along the cardinal directions in visually relevant sample sizes of natural scenes (Girshick, Landy, & Simoncelli, 2011). Scenes in which the average orientation is oblique are certainly not rare but are often fleeting in time and diverse in angle. 
With vertical as the average of the component orientations, a typical scene, or local region within such a scene, is well suited as input to a stereo system that judges depth from horizontal disparity. (Symmetry detection is another ability that might draw on this vertical-centric organization of visual input.) But, the experimental use of 2-D stimuli has been analogous to studying 1-D stimuli that are constrained to be vertical. Left undetermined is whether the horizontal metric is built into the circuitry that computes depth or is taken from the input itself. For 1-D stimuli, the input supplies the effective disparity axis, which is perpendicular. If we wish to explore the implications of the average orientation of 2-D stimuli, we could start by treating 2-D stimuli as if they were one dimensional—that is, as oriented. Figure 10B can be interpreted as a comparison of disparity components along the axis perpendicular to the 1-D stimulus orientation. In Figure 10C left and center, the stimuli are vertical centric, and the effective disparity direction is horizontal, perpendicular to the average stimulus orientation. Figure 10C right shows a perpendicular direction that is non-horizontal, again based on the average orientation. 
Usually, as in apparently all but a few conditions in studies using 2-D stimuli cited so far (Morgan & Castet, 1997; Patel et al., 2006), the average stimulus orientation is vertical and the perpendicular axis is horizontal. Thus, in the vast majority of cases, depth-from-disparity predictions based on perpendicular disparity do not differ from those based on a horizontal disparity metric. It does not have to be so. A test could examine the joint effects of 2-D stimulus orientation and disparity direction on perceived depth. The horizontal disparity prediction is that the stimuli will match in depth when the horizontal components of their disparities are equal. The stimulus-specific prediction, at its simplest, is that a depth match will occur when the disparities are equal along the axis perpendicular to the overall average orientation, such as the axis shown in Figure 10C right. I will call this the relative disparity axis (RDA). 
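The two predictions can be put in a common form: if depths match when the disparity components along some axis are equal, the target/comparison magnitude ratio at the match follows from a cosine projection. The function below is an illustrative sketch under that assumption, not the analysis code of Farell & Ng (2018).

```python
import math

def match_ratio(target_dir_deg, comp_dir_deg, axis_deg):
    """Predicted target/comparison disparity-magnitude ratio at a depth
    match, assuming the match occurs when the two disparities have equal
    components along the axis axis_deg (degrees from horizontal).
    Equal-projection condition: |Dt| cos(target_dir - axis) =
    |Dc| cos(comp_dir - axis). Names are illustrative."""
    return (math.cos(math.radians(comp_dir_deg - axis_deg)) /
            math.cos(math.radians(target_dir_deg - axis_deg)))
```

The horizontal-metric prediction sets axis_deg = 0; the RDA prediction sets the axis perpendicular to the display-averaged orientation, ±15° from horizontal for the plaid combinations described in the text.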
The RDA is presumably regional, encompassing the stimuli whose relative disparity is to be calculated and perpendicular to the average orientation of the non-horizontal components within the region. How large a spatial area this includes, we do not know, but it appears to receive input from both task-relevant and unattended irrelevant stimuli within this area (Farell & Ng, 2019). A judgment of relative depth would be a report of a comparison of the RDA components of the relevant stimuli. An experiment designed along these lines pitted horizontal disparity predictions against RDA predictions (Farell & Ng, 2018). The stimuli were stereo plaids arranged in center-surround fashion (Figure 11A). Other stimuli within the visual field were task irrelevant and obscured from binocular viewing. The task was to judge the depth of the central target plaid relative to the comparison plaid whose disparity was fixed in amplitude and took on any one of three alternative directions (0°, ±30°). Figure 11B gives the results; stimulus conditions are sketched along the x-axis. The results take the form of the ratio of the target and comparison disparity amplitudes when the two stimuli have the same apparent depth. Also plotted are predicted values. One set of predictions assumes that the horizontal components of the depth-matching disparities will be equal. The other assumes that the RDA components will be equal. 
Figure 11.
 
Depth from horizontal disparity versus RDA. (A) Monocular example of center-surround plaids, with exaggerated contrast. In the experiment the disparity of the annulus was fixed while the disparity of the center plaid was varied in order to determine the point of subjective equality (PSE) for depth. Component orientations of the plaids were 45°/75°, 75°/105°, and 105°/135° in various center-surround combinations. Disparity directions were 0° and ±30°. Stimuli were presented briefly (180 ms) after horizontal and vertical nonius alignment of the eyes and in the absence of irrelevant binocular stimuli (Farell & Ng, 2018). (B) Means and standard errors of depth-match disparity ratios for four observers. Ratios are between target and reference disparity magnitudes at the depth match. Also shown are the ratios predicted by horizontal disparity and RDA metrics. Fit of RDA predictions to the data exceeds that of horizontal predictions (r2 = 0.83, t(2) = 3.08, p < 0.05 and r2 = 0.20, not significant, respectively). Stimulus conditions, defined by plaid orientations and disparity directions, are sketched along the x-axis. Each condition contained two symmetrical stimulus arrangements (sets 1 and 2). The stimulus appearing in panel A is an example of the two left-most stimulus pairs in set 2.
Stimuli similar to these but vertically centric are perceived as matching in depth when the horizontal components of their disparities are equal (Farell, Chai, & Fernandez, 2010). But in this experiment perceiving the target and comparison plaids at the same depth does not depend on their horizontal disparities. Their disparity components in the RDA direction—the direction perpendicular to the display-averaged orientation (+15° or –15° from horizontal)—are strongly correlated (Figure 11B). There are two competing explanations for the fact that 2-D stimuli generally appear at equal depths when they have equal horizontal disparities. One explanation is that an internal horizontal metric based on retinal coordinates is used for relative disparity computations. The other is that a vertically centric orientation structure is typical of laboratory stimuli and relative disparity is calculated along the orthogonal axis. Data from Figure 11B are consistent with this second explanation. 
Other interpretations might work as well, and even if they do not, assumptions behind the RDA predictions are in need of validation. As calculated here, the RDA is the unweighted average of component orientations across two stimuli. This disregards stimulus area, retinal location, contrast, interstimulus spacing, and other potentially influential factors determining pooled orientation estimates (Bocheva, Stefanov, Stefanova, & Genova, 2015; Dakin, 2001). In addition, the RDA should presumably be regional, not global. The inverse distance fall-off suggested by Mitchison and Westheimer (1984) for their salience metric may be related, as might estimates of vertical disparity pooling (e.g., Adams et al., 1996; Gårding et al., 1995; Kaneko & Howard, 1997). Attention, however, appears not to be a factor. Attention has access to computations carried out on the RDA, not to input to the RDA (Farell & Ng, 2019; see also Chopin et al., 2016). 
The argument and a general counterargument
Mechanisms can be designed to detect horizontal disparity from nearly any disparate stimulus. But it is argued here that the horizontal disparities used for stereo depth perception are not those derived from retinal coordinates. For vertically oriented stimuli, disparity computations operate on horizontal disparity signals not because they are horizontal but because they are perpendicular to the stimulus. This stimulus specificity defines a computational strategy over an anisotropic 2-D disparity field. Evidence gathered here shows effects of orientation that are independent of the horizontal content of the disparity signal. The evidence suggests two related computations, one making a clear appearance when the stimuli are one dimensional and the other when stimuli are two dimensional, neither being a horizontal disparity computation. Closely connected to the stimulus dimension difference is the role of 1-D component disparity signals in the construction of 2-D depth-coherent visual objects. 
It might be argued instead that for stereo depth there is only the horizontal disparity computation. It reliably manifests itself with naturalistic stimuli and viewing conditions compatible with the operation of a system built for real-world complexity. Outside these limits, according to this argument, artificial or impoverished stimuli can distort the processing of horizontal disparity or fail to engage all steps of the computation. This could happen in different ways for different classes of stimuli. Support for this interpretation would come from data showing either that stimulus-specific phenomena of the type discussed here do not occur with sufficiently naturalistic stimuli and viewing conditions or that the horizontal disparity computation predicts their occurrence where they are observed. 
Revisiting the aperture problem and physiology
As pictured here, the pathways through which disparity signals depth are shaped by stimulus orientation. Some of the impetus for questioning the unique role of horizontal disparity in stereo depth comes from the implications of the role of orientation in the aperture problem and low-level disparity coding. The aperture problem and physiology were considered early on, before we turned to psychophysics, and aspects of both can now be reexamined in light of the psychophysics. 
1-D stimuli have been considered here in isolation and as components of 2-D stimuli. The stereo aperture problem unfolds in different ways in these two contexts. In the case of a 1-D stimulus, the perpendicular disparity signal appears to be sufficient to predict stereo psychophysics. In effect, there is no aperture problem here, or at least no uncertainty; the disparity vector is specific to the orientation and disparity amplitude of the stimulus. The aperture problem is more apparent in the case of a 1-D component. Here, the perpendicular disparity signal determines the detection threshold for the 2-D stimulus it is a part of and also combines with the perpendicular disparities of other components, via IOC, to specify the 2-D stimulus disparity. The combination resolves the aperture problem with an object disparity vector that is constrained by the disparity of each of the components but not predictable from any one of them. 
In one of these cases there is a self-contained stimulus. In the other, multiple components are in superposition. There is another case that has received considerable attention, but not here. This is the case where local 1-D and 2-D stimulus elements are differentially positioned parts of the same object. A line segment with visible endpoints is the standard example. The ends inform about true-match disparities that in theory can disambiguate disparity signals coming from the linear extent between them. 
Clearly, this ideal is satisfied in the limit as line length approaches zero, but it is an ideal otherwise not always reached. In practice, cue combinations may approach but do not necessarily reach a winner-take-all state. It is commonly assumed that the visual system “solves” the stereo aperture problem by ignoring the disparity signal from the linear portion of the stimulus and filling in the gap by propagating depth signals derived from the disparity of the ends. This would work for both lines of Figure 12, for example, where the two stereo line segments have the same disparity magnitude and direction (horizontal). For the vertical line segment, the perpendicular disparity direction is the same as the disparity direction of the ends. For the oblique line segment, the difference is 68°. However, perceptual contributions from both 1-D and 2-D line cues can be seen in stereo (van Ee & Schor, 2000) as well as motion (Lorenceau, Shiffrar, Wells, & Castet, 1993). van Ee and Schor (2000) found that varying line orientation and length and the eccentricities of the end points in displays like this changes the perceived depth of the line. Both the ends' disparities and the linear portion's perpendicular disparity contribute to perceived depth, so cancellation and filling-in may be misleading metaphors. The view developed here suggests that stimuli like those of Figure 12 contain 1-D and 2-D disparity signals. These signals are applied to a combination task rather than the depth comparison task examined above with spatially separate 1-D and 2-D stimuli. The relative weights applied in combining these signals would vary with line length, line orientation, and end eccentricities, an elastic combination representing an accommodation of the aperture problem more than a solution to it. Disparity end-sensitive cells exist as early as V1 in monkey, although use of their output in the context of the aperture problem must occur in areas downstream (Howe & Livingstone, 2006). 
Data like those of van Ee and Schor imply that this output meets orientation-specific disparity signals wherever these downstream areas might be. There is little evidence that responses to these signals are confined to early cortical areas. 
Figure 12.
 
Stereo lines and endpoints. The vertical and oblique stereo line segments have the same horizontal disparity. The perpendicular disparity of the vertical line is the same as the disparity of the endpoints. The perpendicular and end disparities differ in the oblique line. The aperture problem could be solved, in the sense that the true-match disparity dominates, if the disparity of the two endpoints propagated across the linear extent between them, “filling-in” conflicting perpendicular disparities with the unambiguous disparities of 2-D stimulus features.
It is reasonably straightforward to use the physiological results from low-level cortical areas to build simple qualitative models of depth from horizontal disparity. However, for modeling the effects of relative orientation, particularly the large and dramatic effects produced by 1-D stimuli (e.g., Figures 4 and 5), the constraints are tighter. These effects suggest computations by mechanisms with correlated orientation and disparity direction sensitivities, broadly distributed on both dimensions across a neural population. Phase disparity detectors serve as a model. Within this population, only mechanisms tuned to similar orientations, and hence also to similar disparity directions, would contribute efficiently to the shared metric needed for calculating relative disparity (e.g., Figure 10A). However, connections drawn between physiology and psychophysics are usually indirect, and evidence on both sides can rarely be known to be adequately sampled. An example bearing on the relative orientation effect is the study of primate V1 responses by Cumming (2002). For most of the neurons in this study, disparity tuning functions were horizontally elongated, independently of the cells' orientation preference. There was no evident imprint of receptive field orientation on their disparity response. Therefore, these neurons appear inconsistent with the psychophysical link between stereo depth and the relative orientation of 1-D stimuli. 
Yet, this impression is both reasonable and possibly in error. Cumming (2002) used sinusoidal gratings to establish the cells' orientation tuning but RDSs to map their disparity tuning. In a different study using a similar preparation, V1 neurons in awake, fixating macaques responded to the disparity of 1-D stimuli (lines) with greatest modulation in the direction perpendicular to receptive field orientation (Howe & Livingstone, 2006). The responses were subject to the aperture problem and followed disparity energy model expectations. Presented with RDSs, most of the neurons in the study by Cumming (2002) showed a 2-D disparity tuning elongated in the horizontal—and RDA—direction. If they had been presented with disparate oriented stimuli, Howe and Livingstone's study suggests they would have displayed a different tuning, one compatible with the psychophysical orientation effect. A difference in disparity tuning to 1-D and 2-D stimuli within a single neuron may reflect a pooling or normalization of responses across simultaneously active input neurons with varying receptive field orientations (Read & Cumming, 2004). Such a process would only occur in response to orientationally broadband stimuli.6 
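The energy-model expectation for oriented stimuli can be caricatured in a few lines. In this toy sketch (the function, its parameters, and the idealized cosine tuning are assumptions for illustration, not the full disparity energy model), a detector registers only the component of the stimulus disparity along its receptive-field normal, so disparity applied along the RF axis leaves its response unchanged:

```python
import math

def detector_response(disparity, rf_orientation_deg, preferred_disp, wavelength):
    """Toy oriented phase-disparity detector (hypothetical, idealized).

    The detector sees only the component of the stimulus disparity along
    its receptive-field normal and responds maximally when that component
    matches its preferred disparity -- a cartoon of energy-model behavior.
    """
    theta = math.radians(rf_orientation_deg)     # RF orientation from vertical
    nx, ny = math.cos(theta), -math.sin(theta)   # unit normal to the RF
    d = disparity[0] * nx + disparity[1] * ny
    phase = 2 * math.pi * (d - preferred_disp) / wavelength
    return 0.5 * (1 + math.cos(phase))           # response in [0, 1]

# A vertically oriented detector (0 deg) is modulated by horizontal
# disparity but blind to disparity applied along its own RF axis:
r1 = detector_response((0.2, 0.0), 0, 0.2, 1.0)
r2 = detector_response((0.2, 0.7), 0, 0.2, 1.0)  # extra vertical disparity
print(r1 == r2)  # True: modulation is perpendicular to RF orientation
```

Under this caricature, pooling such responses across input neurons with varying RF orientations, as in the normalization account above, would blur the orientation-specific tuning for broadband stimuli while leaving it intact for a single 1-D stimulus.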
The stereo stimuli in the study by Cumming (2002) were isotropic RDSs. As suggested here, responses to such stimuli might express a tuning specialized for horizontal disparity for either of two reasons, one built-in and reflecting ocular coordinates and the other stimulus-dependent. It would be most interesting if neurons somewhat similar to those of Cumming (though likely located downstream of V1)—neurons that pooled over differently oriented disparity detectors but output a disparity signal having a variable direction—were found to be responsive to the scene's orientation statistics in setting this output direction. 
Conclusions
This paper has reviewed contrasting data derived from 1-D and 2-D stimuli and noted differences between them in the contribution that horizontal disparity makes to measures of stereoacuity and perceived stereo depth. A great deal of data attests to the importance of horizontal disparity in nearly all cases studied since the stereoscope was invented. Relatively few studies have compared the disparity processing of 1-D and 2-D stimuli or examined effects of stimulus orientation. Even among these, the results have been interpreted almost exclusively in terms of horizontal disparity. A review of this literature suggests a role for perpendicular as well as horizontal disparity. The marked contrast between 1-D and 2-D stimuli in the role played by horizontal disparity has been noted before (Morgan & Castet, 1997; van Ee & Schor, 2000). The contribution of oblique components to the perceived depth of 2-D stimuli has also been demonstrated convincingly (Farell, 1998; Patel et al., 2003; Patel et al., 2006), but vertical stimuli, vertical receptive fields, and horizontal disparity remain the dominant backdrop of most discussions of stereo vision. 
An examination of 1-D stimuli, 2-D stimuli, and 1-D components of 2-D stimuli suggests that the distinction between perpendicular and horizontal disparity may be less secure than previously thought. Most striking are data from 1-D stimuli, which violate the textbook connection between horizontal disparity and stereo vision. The questions that arise are why there are two apparently different depth-from-disparity computations, one for 1-D stimuli and another for 2-D stimuli, and what connection, if any, exists between them. Suggestions of a connection exist. The disparity of 1-D components limits stereoacuity for 2-D stimuli and contributes to the perceived stereo depth of these stimuli. Despite this, evidence shows that disparity is analyzed along a stimulus-specific disparity axis for 1-D stimuli, and vastly more evidence finds it to be analyzed along the horizontal disparity axis for 2-D stimuli. 
This puzzle led to a critical look at the stimuli that have been used in stereo-vision studies. There has been no systematic study of the effect of the orientation of 2-D stimuli, and, with few exceptions, the non-horizontal components of the stimuli that have been studied have had vertical as their average orientation. The imprint of horizontal as the typical perpendicular disparity direction can be seen in the range of neuronal disparity tunings as a function of receptive field orientation. For detectors tuned to a near-vertical orientation, this range tends to be broad. As the detectors' orientation preference veers toward horizontal, the range of disparity amplitudes they respond to is increasingly confined to smaller values. Yet, this relation is evident only in some of the physiological data, and the clearest example (DeAngelis et al., 1991; Figure 3) comes from anesthetized cats. The specialization here is not for detecting horizontal disparity signals per se. It is best understood as a specialization for detecting the component disparities of objects whose disparity directions are biased toward the horizontal. 
This gives perspective on what might appear to be a surprising failure to adapt to environmental statistics. Despite the horizontal disparity bias in the binocular input, humans are quite adept at using non-horizontal disparities to detect and discriminate depth. For 1-D stimuli, the surprise is muted because of the directional ambiguity associated with the aperture problem. But little or no training is required to carry out the same tasks with 2-D stimuli (e.g., Farell & Ng, 2018; Farell & Ng, 2019; Stevenson & Schor, 1997), in apparent disregard for years of exposure to contrary statistical regularities. This might be attributed to the generality, flexibility, and robustness of a component analysis of disparity, which in principle is isotropic. Adaptation to natural disparity statistics can be found in the distribution of disparity tuning as a function of receptive field orientation (Figure 3); object disparities, despite their near-horizontal concentration, engage a broad range of component disparity sensitivities. To see effects of this psychophysically, the place to look is in experiments with large component disparity amplitudes in oblique directions, as seen in the anisotropy of disparity tolerance functions (Stevenson & Schor, 1997) and the limits of depth coherence (Farell & Li, 2004). 
Horizontal is viewed here as the effective relative disparity direction for stimuli whose average non-horizontal component orientation is vertical. Changing the average orientation alters the axis along which disparities are compared, independently of the direction of these disparities. For both 1-D and 2-D stimuli, according to this view, relative disparity is computed along a stimulus-specific disparity axis. In neither case is there an explicit role for horizontal disparity. But there are differences. As can be gathered from Figure 10, the 1-D versus 2-D differences are in the computational signal and whether the signal is generated locally or regionally. A 1-D stimulus is processed as if it were a component of a 2-D stimulus. It evokes the IOC calculation that would, in the presence of superimposed components, result in a veridical measure of object disparity (Figure 7E). The calculation of the disparity constraint line is local, independent of the properties and even the existence of other components. Its use is highly adaptive in the context of a multicomponent stimulus but leads to nonveridical depth estimates when applied between stimuli. By contrast, computing the relative disparity among 2-D stimuli uses a non-local orientation estimate, one that encompasses multiple stimuli. The RDA derived from this estimate functions like the horizontal direction in traditional calculations of relative disparity. Its distribution will have a strong horizontal bias under typical viewing conditions. 
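The IOC calculation invoked here can be made concrete. Each 1-D component constrains the object's 2-D disparity to a line in disparity space, n · D = d, where n is the unit normal to the component and d its perpendicular disparity; intersecting two such constraint lines recovers the object disparity. The following is a minimal sketch (the function, component representation, and orientation-from-vertical convention are assumptions of the sketch):

```python
import math

def ioc_disparity(components):
    """Intersection-of-constraints (IOC) estimate of 2-D object disparity.

    Each component is (orientation_deg_from_vertical, perpendicular_disparity);
    its constraint line in disparity space is n . D = d, with n the unit
    normal to the component. Solving two constraints gives D = (dx, dy).
    """
    (t1, d1), (t2, d2) = components
    n1 = (math.cos(math.radians(t1)), -math.sin(math.radians(t1)))
    n2 = (math.cos(math.radians(t2)), -math.sin(math.radians(t2)))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-12:
        raise ValueError("parallel components: IOC is undefined")
    # Solve the 2x2 system [n1; n2] D = [d1; d2] by Cramer's rule.
    dx = (d1 * n2[1] - d2 * n1[1]) / det
    dy = (n1[0] * d2 - n2[0] * d1) / det
    return dx, dy

# Two components of a plaid carrying a purely horizontal object disparity:
# each signals only the projection of that disparity onto its own normal,
# yet IOC recovers the full 2-D disparity.
true_d = (0.3, 0.0)
comps = []
for t in (45, -45):
    n = (math.cos(math.radians(t)), -math.sin(math.radians(t)))
    comps.append((t, true_d[0] * n[0] + true_d[1] * n[1]))
print(ioc_disparity(comps))  # ~ (0.3, 0.0)
```

Applied within a multicomponent stimulus, this computation is veridical; applied between separate stimuli, as the text notes, the same local constraint-line logic yields nonveridical depth.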
This means that, for both 1-D and 2-D stimuli, horizontal disparity is just another direction along a continuum. Because of the direction of separation of the eyes, this continuum is anisotropic. Horizontal is typically the direction of largest disparities, and, due to the predominance of vertical-centric orientations, it is the most common average component disparity direction. In typical scenes, horizontal is the disparity direction that results in depth judgments that are the most veridical, although not necessarily those with the highest resolution. And, of course, horizontal is the direction maximally distant from vertical disparity. In these ways, horizontal stands out among the directions along the continuum. Arguably more “special” in the uniqueness of its properties is vertical disparity. In a larger context, the cases considered here, those of frontal surfaces and simple judgments of their relative disparities, are a constrained subset. But they do provide examples of how disparity is responsive to stimulus orientation and dimensionality. Such a stimulus-framed specification of disparity may be compatible with a generalization to the processing of three-dimensional shapes, the differential structure of surfaces, and the multiple dimensions of non-positional disparities (e.g., Glennerster, McKee, & Birch, 2002; Koenderink & van Doorn, 1976; Lappin & Craft, 1997). Together these approaches make up a treatment of stimulus-property disparities as an alternative to the retinal coordinate framework. 
Acknowledgments
Funding for this project was provided by a grant from the National Science Foundation (NSF BCS-1257096) and the Mabel E. Lewis Fund for Eye Research. Thanks go to Suzanne McKee, Cherlyn Ng, and Brad Motter for stimulating discussions; to Andrew Glennerster for helpful suggestions; and to two anonymous reviewers for comments, suggestions, and views. 
Commercial relationships: none. 
Corresponding author: Bart Farell. 
Email: bfarell@syr.edu, bfarell@gmail.com. 
Address: Institute for Sensory Research, Department of Biomedical and Chemical Engineering, Syracuse University, Syracuse, NY, USA. 
Footnotes
1  It is also possible to obtain aperture disparities from periodic 2-D patterns and, in practice, from any pair of patterns, one for the left eye and one for the right, if the patterns are similar enough to be fused. Most stereo-vision data derive from stereograms of one sort or another. Whether they are presented for free-fusing, as in Figure 1, or via other methods, stereograms consist of a physically distinct image source for each eye, i.e., the left and right half-stereogram. For this reason, the resulting depth percepts are, strictly speaking, based on false matches. The point is that, given the opportunity, the visual system is adept at turning false matches into depth percepts; all else being equal, it makes no distinction between true and false matches.
Footnotes
2  The limiting case of a horizontal line with horizontal disparity is considered a special case because the disparity is confined to the line's ends (Ebenholtz & Walchli, 1965; McKee, 1983). This makes the effect of line length different in kind between horizontal and non-horizontal lines. It is also special in another way. A horizontal line with horizontal disparity has overlapping binocular retinal images. The line's disparate ends are connected by its center, which, being uniform, can be regarded as non-disparate. Other stimulus configurations show a strong effect of such connectivity. The effect is to sharply increase stereo thresholds compared to measurements made with horizontal connecting lines removed (McKee, 1983; Mitchison & Westheimer, 1984; Werner, 1937; Westheimer, 1979).
Footnotes
3  The van Ee and Schor (2000) study justified this assumption by comparing the disparities of disks judged to have the same apparent depth as an oblique line, one disk with a horizontal disparity direction, the other with an oblique disparity direction. The disks' horizontal disparities at the depth match were not significantly different (p > 0.01) and were thereby deemed to be equal, but the horizontal component of the obliquely disparate disk was the smaller of the two. The statistical equality led to the use of a constraint in measurements of the perceived depth of disk versus line: For each of the disk's allowable disparity directions, the only disparity magnitude tested was the one equal to the disparity magnitude of the line in this particular direction. Therefore, the magnitudes of the horizontal and vertical components of the disk's disparity were not independent. The trade-off between them was modulated by the disk's disparity direction.
Footnotes
4  Data from 1-D stimuli reviewed here violate the expectations of the depth-from-horizontal-disparity computation, the standard model for stereopsis (Poggio & Poggio, 1984). Stereo depth judgments consistent with this model are the outcomes of a univariate computation on the magnitudes of horizontal disparities. In practical terms this means that horizontal disparity predicts psychophysical performance in stereo tasks. Stimulus or receptive field orientation is not a relevant factor. If disparity and orientation are jointly coded, only a simple calculation is required to make estimates of horizontal disparity available from this code, given a labelling of the orientation. Therefore, stereo depth judgments that are modulated by disparity but cannot be explained in terms of horizontal disparity are counted here as evidence against the standard model within the parameters of the experimental setup.
Footnotes
5  Clearly, though, component disparity thresholds leave unexplained the increase in disparity thresholds with vertical disparity. A change in component orientation can make the disparity direction of a 2-D pattern go from horizontal (where threshold is low) to oblique (where threshold is elevated). But unless a 1-D stimulus is rotated to a near-horizontal orientation, its phase disparity threshold is independent of orientation. So, in addition to a component disparity threshold, 2-D patterns have a second, higher disparity threshold for non-horizontal disparities (Farell, 2003). Only the horizontal threshold is component-disparity limited. The reason, detailed in a later section, is that, for the 2-D stimuli considered thus far, horizontal is the direction along which relative disparity is computed, regardless of the disparity direction of the pattern.
Footnotes
6  The motion direction tuning of cells in primate cortical area MT shows an analogous dependence on stimulus dimensionality (Majaj, Carandini & Movshon, 2007). Two Gabor patches with variable orientations, presented simultaneously within the cell's RF, generated a directional tuning curve that approximated the sum of their individual tuning curves, but only when the patches were spatially non-overlapping. Superimposing the same Gabors to form a plaid revealed a different directional tuning, one specific for 2-D pattern motion. MT is rich in disparity- as well as motion-sensitive cells (DeAngelis & Newsome, 1999; Maunsell & Van Essen, 1983).
References
Adams, W., Frisby, J. P., Buckley, D., Gårding, J., Hippisley-Cox, S. D., & Porrill, J. (1996). Pooling of vertical disparities by the human visual system. Perception, 25, 165–176, https://doi.org/10.1068/p250165. [CrossRef] [PubMed]
Adelson, E. H., & Movshon, J. A. (1982). Phenomenal coherence of moving visual patterns. Nature, 300, 523–525, https://doi.org/10.1038/300523a0. [CrossRef] [PubMed]
Adelson, E. H., & Movshon, J. A. (1984). Binocular disparity and the computation of two-dimensional motion. Journal of the Optical Society of America A, 1, 1266.
Anzai, A., Ohzawa, I., & Freeman, R. D. (1997). Neural mechanisms underlying binocular fusion and stereopsis: Position vs. phase. Proceedings of the National Academy of Sciences, USA, 94, 5438–5443, https://doi.org/10.1073/pnas.94.10.5438. [CrossRef]
Anzai, A., Ohzawa, I., & Freeman, R. D. (1999a). Neural mechanisms for processing binocular information. I. Simple cells. Journal of Neurophysiology, 82, 891–908, https://doi.org/10.1152/jn.1999.82.2.891. [CrossRef] [PubMed]
Anzai, A., Ohzawa, I., & Freeman, R. D. (1999b). Neural mechanisms for processing binocular information. II. Complex cells. Journal of Neurophysiology, 82, 909–924, https://doi.org/10.1152/jn.1999.82.2.909. [CrossRef] [PubMed]
Arditi, A. (1982). The dependence of the induced effect on orientation and a hypothesis concerning disparity computations in general. Vision Research, 22, 247–256, https://doi.org/10.1016/0042-6989(82)90124-9. [CrossRef] [PubMed]
Backus, B., Banks, M. S., van Ee, R., & Crowell, J. A. (1999). Horizontal and vertical disparity, eye position, and stereoscopic slant perception. Vision Research, 39, 1143–1170, https://doi.org/10.1016/s0042-6989(98)00139-4. [CrossRef] [PubMed]
Banks, M. S., & Backus, B. T. (1998). Extra-retinal and perspective cues cause the small range of the induced effect. Vision Research, 38, 187–194, https://doi.org/10.1016/s0042-6989(97)00179-x. [CrossRef] [PubMed]
Banks, M. S., Hooge, I. T. C., & Backus, B. T. (2001). Perceiving slant about a horizontal axis from stereopsis. Journal of Vision, 1, 55–79, https://doi.org/10.1167/1.2.1. [CrossRef] [PubMed]
Barlow, H. B., Blakemore, C., & Pettigrew, J. D. (1967). The neural mechanism of binocular depth discrimination. Journal of Physiology, 193, 327–342, https://doi.org/10.1113/jphysiol.1967.sp008360. [CrossRef]
Barry, S. R. (2009). Fixing my gaze. New York: Basic Books.
Bischof, W. F., & Di Lollo, V. (1991). On the half-cycle displacement limit of sampled directional motion. Vision Research, 31, 649–660, https://doi.org/10.1016/0042-6989(91)90006-q. [CrossRef] [PubMed]
Bishop, P. O. (1970). Beginning of form vision and binocular depth discrimination in cortex. In Schmitt, F. O. (Ed.), The neurosciences: Second study program. New York: Rockefeller University Press.
Bishop, P. O. (1989). Vertical disparity, egocentric distance and stereoscopic depth constancy: A new interpretation. Proceedings of the Royal Society of London B: Biological Sciences, 237, 445–469, https://doi.org/10.1098/rspb.1989.0059.
Bishop, P. O., & Pettigrew, J. D. (1986). Neural mechanisms of binocular vision. Vision Research, 26, 1587–1600, https://doi.org/10.1016/0042-6989(86)90177-x. [CrossRef] [PubMed]
Blake, R., Camisa, J., & Antoinetti, D. N. (1976). Binocular depth discrimination depends on orientation. Perception and Psychophysics, 20, 113–118, https://doi.org/10.3758/bf03199441. [CrossRef]
Blakemore, C. (1970). The range and scope of binocular depth discrimination in man. Journal of Physiology, 211, 599–622, https://doi.org/10.1113/jphysiol.1970.sp009296. [CrossRef]
Blakemore, C., & Hague, B. (1972). Evidence for disparity detecting neurones in the human visual system. Journal of Physiology, 225, 437–455, https://doi.org/10.1113/jphysiol.1972.sp009948. [CrossRef]
Bocheva, N., Stefanov, S., Stefanova, M., & Genova, B. (2015). Global orientation estimation in noisy conditions. Acta Neurobiologiae Experimentalis, 75, 412–433. [PubMed]
Brenner, E., Smeets, J. B. J., & Landy, M. S. (2001). How vertical disparities assist judgements of distance. Vision Research, 41, 3455–3465, https://doi.org/10.1016/s0042-6989(01)00206-1. [CrossRef] [PubMed]
Bridgeman, B. (2014). Restoring adult stereopsis: A vision researcher's personal experience. Optometry and Vision Science, 91, e135–e139, https://doi.org/10.1097/opx.0000000000000272. [CrossRef]
Chai, Y.-C., & Farell, B. (2009). From disparity to depth: How to make a grating and a plaid appear in the same depth plane. Journal of Vision, 9(10):3, 1–19, https://doi.org/10.1167/9.10.3. [CrossRef] [PubMed]
Chino, Y. M., Smith, E. L., III, Hatta, S., & Cheng, H. (1997). Postnatal development of binocular disparity sensitivity in neurons of the primate visual cortex. Journal of Neuroscience, 17(1), 296–307, https://doi.org/10.1523/jneurosci.17-01-00296.1997. [CrossRef]
Chopin, A., Levi, D., Knill, D., & Bavelier, D. (2016). The absolute disparity anomaly and the mechanism of relative disparities. Journal of Vision, 16(8):2, 1–17, https://doi.org/10.1167/16.8.2. [CrossRef]
Cumming, B. G. (2002). An unexpected specialization for horizontal disparity in primate visual cortex. Nature, 418, 633–636, https://doi.org/10.1038/nature00909. [CrossRef] [PubMed]
Dakin, S. C. (2001). Information limit on the spatial integration of local orientation signals. Journal of the Optical Society of America A, 18, 1016–1026, https://doi.org/10.1364/josaa.18.001016. [CrossRef]
Davis, E. T., King, R. A., & Anoskey, A. M. (1992). Oblique effect in stereopsis? In: Proceedings of SPIE, Volume 1666: Human Vision, Visual Processing, and Digital Display III (pp. 465–475). Bellingham, WA: SPIE.
DeAngelis, G. C., & Newsome, W. T. (1999). Organization of disparity-selective neurons in macaque area MT. Journal of Neuroscience, 19, 1398–1415, https://doi.org/10.1523/jneurosci.19-04-01398.1999. [CrossRef]
DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1991). Depth is encoded in the visual cortex by a specialized receptive field structure. Nature, 352, 156–159, https://doi.org/10.1038/352156a0. [CrossRef] [PubMed]
DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1995). Neural mechanisms underlying stereopsis: How do simple cells in the visual cortex encode binocular disparity? Perception, 24, 3–31, https://doi.org/10.1068/p240003. [CrossRef] [PubMed]
DeAngelis, G. C., & Uka, T. (2003). Coding of horizontal disparity and velocity by MT neurons in the alert macaque. Journal of Neurophysiology, 89, 1094–1111, https://doi.org/10.1152/jn.00717.2002. [CrossRef] [PubMed]
Delicato, L. S., & Qian, N. (2005). Is depth perception of stereo plaids predicted by intersection of constraints, vector average or second-order feature? Vision Research, 45, 75–89, https://doi.org/10.1016/j.visres.2004.07.028. [CrossRef] [PubMed]
Durand, J., Celebrini, S., & Trotter, Y. (2007). Neural bases of stereopsis across visual field of the alert macaque monkey. Cerebral Cortex, 17, 1260–1273, https://doi.org/10.1093/cercor/bhl050. [CrossRef]
Durand, J. B., Zhu, S., Celebrini, S., & Trotter, Y. (2002). Neurons in parafoveal areas V1 and V2 encode vertical and horizontal disparities. Journal of Neurophysiology, 88, 2874–2879, https://doi.org/10.1152/jn.00291.2002. [CrossRef] [PubMed]
Duwaer, A. L. (1982). Non-motor component of fusional response to vertical disparity. Journal of the Optical Society of America, 72, 871–877, https://doi.org/10.1364/josa.72.000871. [CrossRef] [PubMed]
Duwaer, A. L., & van den Brink, G. (1982). Detection of vertical disparities. Vision Research, 22, 467–478, https://doi.org/10.1016/0042-6989(82)90195-x. [CrossRef] [PubMed]
Ebenholtz, S., & Walchli, R. (1965). Stereoscopic thresholds as a function of head- and object-orientation. Vision Research, 5, 455–461, https://doi.org/10.1016/0042-6989(65)90053-2. [CrossRef] [PubMed]
Erkelens, C. J., & Collewijn, H. (1985). Motion perception during dichoptic viewing of moving random-dot stereograms. Vision Research, 25, 583–588, https://doi.org/10.1016/0042-6989(85)90164-6. [CrossRef] [PubMed]
Farell, B. (1998). Two-dimensional matches from one-dimensional stimulus components in human stereopsis. Nature, 395, 689–693, https://doi.org/10.1038/27192. [CrossRef] [PubMed]
Farell, B. (2003). Detecting disparity in two-dimensional patterns. Vision Research, 43, 1009–1026, https://doi.org/10.1016/s0042-6989(03)00078-6. [CrossRef] [PubMed]
Farell, B. (2006). Orientation-specific computation in stereoscopic vision. Journal of Neuroscience, 26, 9098–9106, https://doi.org/10.1523/jneurosci.1100-06.2006. [CrossRef]
Farell, B., Chai, Y.-C., & Fernandez, J. M. (2009). Projected disparity, not horizontal disparity, predicts stereo depth of 1-D patterns. Vision Research, 49, 2209–2216, https://doi.org/10.1016/j.visres.2009.06.013. [CrossRef] [PubMed]
Farell, B., Chai, Y.-C., & Fernandez, J. M. (2010). The horizontal disparity direction vs. the stimulus disparity direction in the perception of the depth of two-dimensional patterns. Journal of Vision, 10(4):25, 1–15, https://doi.org/10.1167/10.4.25. [CrossRef]
Farell, B., & Li, S. (2004). Seeing depth coherence and transparency. Journal of Vision, 4(3):8, 209–223, https://doi.org/10.1167/4.3.8. [PubMed]
Farell, B., Li, S., & McKee, S. P. (2004a). Disparity increment threshold for gratings. Journal of Vision, 4(3):3, 156–168, https://doi.org/10.1167/4.3.3. [PubMed]
Farell, B., Li, S., & McKee, S. P. (2004b). Coarse scales, fine scales, and their interactions in stereo vision. Journal of Vision, 4(6):8, 488–499, https://doi.org/10.1167/4.6.8. [CrossRef] [PubMed]
Farell, B., & Ng, C. J. (2014). Perceived depth in non-transitive stereo displays. Vision Research, 105, 137–150, https://doi.org/10.1016/j.visres.2014.10.012. [CrossRef] [PubMed]
Farell, B., & Ng, C. J. (2016). Perceiving the stereo depth of simple figures isn't simple: The case of gratings. Journal of Vision, 16(12), 832–832, https://doi.org/10.1167/16.12.832. [CrossRef]
Farell, B., & Ng, C. J. (2018). Why is horizontal disparity important for stereo depth? Journal of Vision, 18(10), 991, https://doi.org/10.1167/18.10.991. [CrossRef]
Farell, B., & Ng, C. J. (2019). Attentional selection in judgments of stereo depth. Vision Research, 158, 19–30, https://doi.org/10.1016/j.visres.2018.08.007. [CrossRef] [PubMed]
Felton, T. B., Richards, W., & Smith, R. A., Jr. (1972). Disparity processing of spatial frequencies in man. Journal of Physiology, 225, 349–362, https://doi.org/10.1113/jphysiol.1972.sp009944. [CrossRef]
Ferster, D. (1981). A comparison of binocular depth mechanisms in area 17 and 18 of the cat visual cortex. Journal of Physiology, 311, 623–655, https://doi.org/10.1113/jphysiol.1981.sp013608. [CrossRef]
Friedman, R. B., Kaye, M. G., & Richards, W. (1978). Effect of vertical disparity upon stereoscopic depth. Vision Research, 17, 151–152, https://doi.org/10.1016/0042-6989(78)90172-4.
Gårding, J., Porrill, J., Mayhew, J. E. W., & Frisby, J. P. (1995). Stereopsis, vertical disparity and relief transformations. Vision Research, 35, 703–722, https://doi.org/10.1016/0042-6989(94)00162-f. [CrossRef] [PubMed]
Geisler, W. S., & Albrecht, G. D. (1995). Bayesian analysis of identification performance in monkey visual cortex: Nonlinear mechanisms and stimulus certainty. Vision Research, 35, 2723–2730, https://doi.org/10.1016/0042-6989(95)00029-y. [CrossRef] [PubMed]
Gillam, B., & Lawergren, B. (1983). The induced effect, vertical disparity, and stereoscopic theory. Perception and Psychophysics, 34, 121–130, https://doi.org/10.3758/bf03211336. [CrossRef]
Girshick, A. R., Landy, M. S., & Simoncelli, E. P. (2011). Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics. Nature Neuroscience, 14, 926–932, https://doi.org/10.1038/nn.2831. [CrossRef] [PubMed]
Glennerster, A., McKee, S. P., & Birch, M. D. (2002). Evidence for surface-based processing of binocular disparity. Current Biology, 12, 825–828, https://doi.org/10.1016/s0960-9822(02)00817-5. [CrossRef]
Gonzalez, F., Bermudez, M. A., Vicente, A. F., & Romero, M. C. (2010). Orientation preference and horizontal disparity sensitivity in the monkey visual cortex. Ophthalmic and Physiological Optics, 30, 824–833, https://doi.org/10.1111/j.1475-1313.2010.00781.x. [CrossRef]
Gonzalez, F., Justo, M. S., Bermudez, M. A., & Perez, R. (2003). Sensitivity to horizontal and vertical disparity and orientation preference in areas V1 and V2 of the monkey. NeuroReport, 14, 829–832, https://doi.org/10.1097/00001756-200305060-00010. [CrossRef] [PubMed]
Gonzalez, F., Relova, J. L., Perez, R., Acuna, C., & Alonso, J. M. (1993). Cell responses to vertical and horizontal retinal disparities in the monkey visual cortex. Neuroscience Letters, 160, 167–170, https://doi.org/10.1016/0304-3940(93)90405-a. [CrossRef] [PubMed]
Heckmann, T., & Schor, C. M. (1989). Is edge information for stereoacuity spatially channeled? Vision Research, 29, 593–607, https://doi.org/10.1016/0042-6989(89)90045-x. [CrossRef] [PubMed]
von Helmholtz, H. (1925). Treatise on physiological optics. New York: Dover Press.
Howard, I. P. (1982). Human visual orientation. New York: John Wiley & Sons.
Howard, I. P. (1991). Image cyclorotation, cyclovergence and perceived slant. SAE Transactions, Section 1: Journal of Aerospace, 100(Part 1), 991–998, https://doi.org/10.4271/911392.
Howard, I. P., & Kaneko, H. (1994). Relative shear disparities and the perception of surface inclination. Vision Research, 34, 2505–2517, https://doi.org/10.1016/0042-6989(94)90237-2. [CrossRef] [PubMed]
Howard, I. P., & Rogers, B. J. (2002). Seeing in depth. Vol. 1. Depth perception. Toronto: Porteous.
Howe, P. D. L., & Livingstone, M. S. (2006). V1 partially solves the stereo aperture problem. Cerebral Cortex, 16, 1332–1337, https://doi.org/10.1093/cercor/bhj077. [CrossRef]
Hubel, D. H., & Livingstone, M. S. (1987). Segregation of form, color, and stereopsis in primate area 18. Journal of Neuroscience, 7, 3378–3415, https://doi.org/10.1523/jneurosci.07-11-03378.1987. [CrossRef]
Hubel, D. H., & Wiesel, T. N. (1970). Stereographic vision in macaque monkey. Nature, 225, 41–42, https://doi.org/10.1038/225041a0. [CrossRef] [PubMed]
Ito, H. (2005). Illusory depth perception of oblique lines produced by overlaid vertical disparity. Vision Research, 45, 931–942, https://doi.org/10.1016/j.visres.2004.10.008. [CrossRef] [PubMed]
Joshua, D. E., & Bishop, P. O. (1970). Binocular single vision and depth discrimination. Receptive field disparities for central and peripheral vision and binocular interaction on peripheral single units in cat striate cortex. Experimental Brain Research, 10, 389–416, https://doi.org/10.1007/bf02324766. [CrossRef] [PubMed]
Julesz, B. (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press.
Julesz, B., & Miller, J. E. (1975). Independent spatial frequency tuned channels in binocular fusion and rivalry. Perception, 4, 125–143, https://doi.org/10.1068/p040125. [CrossRef]
Kaneko, H., & Howard, I. P. (1997). Spatial limitation of vertical-size disparity processing. Vision Research, 37, 2871–2878, https://doi.org/10.1016/s0042-6989(97)00099-0. [CrossRef] [PubMed]
Koenderink, J. J., & van Doorn, A. J. (1976). Geometry of binocular vision and a model for stereopsis. Cybernetics, 21, 29–35, https://doi.org/10.1007/bf00326670.
Lappin, J. S., & Craft, W. D. (1997). Definition and detection of binocular disparity. Vision Research, 37, 2953–2974, https://doi.org/10.1016/s0042-6989(97)00091-6. [CrossRef] [PubMed]
LeVay, S., & Voigt, T. (1988). Ocular dominance and disparity coding in cat visual cortex. Visual Neuroscience, 1, 395–414, https://doi.org/10.1017/s0952523800004168. [CrossRef] [PubMed]
Levinson, E., & Blake, R. (1979). Stereopsis by harmonic analysis. Vision Research, 19, 73–78, https://doi.org/10.1016/0042-6989(79)90123-8. [CrossRef] [PubMed]
Longuet-Higgins, H. C. (1982). The role of the vertical dimension in stereoscopic depth. Perception, 11, 377–386, https://doi.org/10.1068/p110377. [CrossRef] [PubMed]
Lorenceau, J., Shiffrar, M., Wells., N., & Castet, E. (1993). Different motion sensitive units are involved in recovering the direction of moving lines. Vision Research, 33, 1207–1217, https://doi.org/10.1016/0042-6989(93)90209-f. [CrossRef] [PubMed]
Lothridge, C. D. (1953). Stereoscopic settings as functions of vertical disparity and target declination. Journal of General Psychology, 49, 241–260, https://doi.org/10.1080/00221309.1953.9710089. [CrossRef]
Majaj, N. J., Carandini, M., & Movshon, J. A. (2007). Motion integration by neurons in macaque MT Is local, not global. Journal of Neuroscience, 27, 366–370, https://doi.org/10.1523/jneurosci.3183-06.2007. [CrossRef]
Mansfield, J. S., & Parker, A. J. (1993). An orientation-tuned component in the contrast masking of stereopsis. Vision Research, 33, 1535–1544, https://doi.org/10.1016/0042-6989(93)90146-n. [CrossRef] [PubMed]
Marr, D. (1982). Vision. Cambridge, MA: MIT Press.
Marr, D., & Hildreth, E. (1980). Theory of edge detection. Proceedings of the Royal Society B: Biological Sciences, 207, 187–217.
Marr, D., & Poggio, T. (1976). Cooperative computation of stereo disparity. Science, 194, 283–287, https://doi.org/10.1126/science.968482. [CrossRef] [PubMed]
Maske, R., Yamane, S., & Bishop, P. O. (1984). Binocular simple cells for local stereopsis: Comparison of receptive field organizations for the two eyes. Vision Research, 24, 1921–1929, https://doi.org/10.1016/0042-6989(84)90026-9. [CrossRef] [PubMed]
Maske, R., Yamane, S., & Bishop, P. O. (1986). End-stopped cells and binocular depth discrimination in the striate cortex of cats. Proceedings of the Royal Society of London B: Biological Sciences, 229, 227–256, https://doi.org/10.1098/rspb.1986.0085.
Matthews, N., Meng, X., Xu, P., & Qian, N. (2003). A physiological theory of depth perception from vertical disparity. Vision Research, 43, 85–99, https://doi.org/10.1016/s0042-6989(02)00401-7. [CrossRef] [PubMed]
Maunsell, J. H., & Van Essen, D. C. (1983). Functional properties of neurons in middle temporal visual area of the macaque monkey. II. Binocular interactions and sensitivity to binocular disparity. Journal of Neurophysiology, 49, 1148–1167, https://doi.org/10.1152/jn.1983.49.5.1148. [CrossRef] [PubMed]
Mayhew, J. (1982). The interpretation of stereo-disparity information: The computation of surface orientation and depth. Perception, 11, 387–403, https://doi.org/10.1068/p110387. [CrossRef] [PubMed]
Mayhew, J. E. W., & Frisby, J. P. (1978). Stereopsis masking in humans is not orientationally tuned. Perception, 7, 431–436, https://doi.org/10.1068/p070431. [CrossRef] [PubMed]
Mayhew, J. E. W., & Frisby, J. P. (1979). Surfaces with steep variations in depth pose difficulties for orientationally tuned disparity filters. Perception, 8, 691–698, https://doi.org/10.1068/p080691. [CrossRef] [PubMed]
Mayhew, J. E. W., & Frisby, J. P. (1980). Spatial frequency tuned channels: Implications for structure and function from psychophysical and computational studies of stereopsis. Philosophical Transactions of the Royal Society B, 290, 95–116, https://doi.org/10.1098/rstb.1980.0085.
Mayhew, J. E. W., & Longuet-Higgins, H. C. (1982). A computational model of binocular depth perception. Nature, 297, 376–378, https://doi.org/10.1038/297376a0. [CrossRef] [PubMed]
McKee, S. P. (1983). The spatial requirements for fine stereoacuity. Vision Research, 23, 191–198, https://doi.org/10.1016/0042-6989(83)90142-6. [CrossRef] [PubMed]
McKee, S. P., Levi, D. M., & Bowne, S. F. (1990). The imprecision of stereopsis. Vision Research, 30, 1763–1779, https://doi.org/10.1016/0042-6989(90)90158-h. [CrossRef] [PubMed]
Melmoth, D. R., Finlay, A. L., Morgan, M. J., & Grant, S. (2009). Grasping deficits and adaptations in adults with stereo vision losses. Investigative Visual Science and Ophthalmology, 50, 3711–3720, https://doi.org/10.1167/iovs.08-3229. [CrossRef]
Mikaelian, S., & Qian, N. (2000). A physiologically-based explanation of disparity attraction and repulsion. Vision Research, 40, 2999–3016, https://doi.org/10.1016/s0042-6989(00)00143-7. [CrossRef] [PubMed]
Mitchell, D. E. (1970). Properties of stimuli eliciting vergence eye movements and stereopsis. Vision Research, 10, 145–162, https://doi.org/10.1016/0042-6989(70)90112-4. [CrossRef] [PubMed]
Mitchison, G. J., & Westheimer, G. (1984). The perception of depth in simple figures. Vision Research, 24, 1063–1073, https://doi.org/10.1016/0042-6989(84)90084-1. [CrossRef] [PubMed]
Mitsudo, H., Sakai, A., & Kaneko, H. (2013). Vertical size disparity and the correction of stereo correspondence. Perception, 42, 385–400, https://doi.org/10.1068/p7387. [CrossRef] [PubMed]
Morgan, M. J., & Castet, E. (1997). The aperture problem in stereopsis. Vision Research, 39, 2737–2744, https://doi.org/10.1016/s0042-6989(97)00074-6.
Nelson, J. I., Kato, H., & Bishop, P. O. (1977) Discrimination of orientation and position disparities by binocularly activated neurons in cat striate cortex. Journal of Neurophysiology, 40, 260–283, https://doi.org/10.1152/jn.1977.40.2.260. [CrossRef] [PubMed]
Nieder, A., & Wagner, H. (2001). Encoding of both vertical and horizontal disparity in random-dot stereograms by Wulst neurons of awake barn owls. Vision Neuroscience, 18, 541–547, https://doi.org/10.1017/s095252380118404x. [CrossRef]
Nielsen, K. R., & Poggio, T. (1984). Vertical image registration in stereopsis. Vision Research, 24, 1133–1140, https://doi.org/10.1016/0042-6989(84)90167-6. [CrossRef] [PubMed]
Nikara, T., Bishop, P. O., & Pettigrew, J. D. (1968). Analysis of retinal correspondence by studying receptive fields of binocular single units in cat striate cortex. Experimental Brain Research, 6, 353–372, https://doi.org/10.1007/bf00233184. [PubMed]
O'Connor, A. R., Birch, E. E., Anderson, S., Draper, H., & Research Group, FSOS. (2010). The functional significance of stereopsis. Investigative Ophthalmology & Visual Science, 51, 2019–2023, https://doi.org/10.1167/iovs.09-4434. [PubMed]
Ogle, K. N. (1938). Induced size effect. 1. A new phenomenon of binocular space perception. AMA Archives of Ophthalmology, 20, 604–623, https://doi.org/10.1001/archopht.1938.00850220076005.
Ogle, K. N. (1952). On the limits of stereoscopic vision. Journal of Experimental Psychology, 44, 253–259, https://doi.org/10.1037/h0057643. [PubMed]
Ogle, K. N. (1953). Precision and validity of stereoscopic depth perception from double images. Journal of the Optical Society of America, 43, 906–913, https://doi.org/10.1364/josa.43.000906.
Ogle, K. N. (1955). Stereopsis and vertical disparity. AMA Archives of Ophthalmology, 53, 495–504, https://doi.org/10.1001/archopht.1955.00930010497006.
Ogle, K. N. (1958). Present status of our knowledge of stereoscopic vision. Archives of Ophthalmology, 20, 755–774, https://doi.org/10.1001/archopht.1958.00940080775019.
Ogle, K. N. (1962). Spatial localization through binocular vision. In Davson, H. (Ed.), The eyes. Visual optics and the optical space sense, Vol. 4 (pp. 271–324). New York: Academic Press.
Ogle K. N. (1964). Researches in binocular vision. New York: Hafner.
Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1990). Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science, 249, 1037–1041, https://doi.org/10.1126/science.2396096. [PubMed]
Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1996). Encoding of binocular disparity by simple cells in the cat's visual cortex. Journal of Neurophysiology, 75, 1779–1805, https://doi.org/10.1152/jn.1996.75.5.1779. [PubMed]
Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1997). Encoding of binocular disparity by complex cells in the cat's visual cortex. Journal of Physiology, 77, 2879–2909, https://doi.org/10.1152/jn.1997.77.6.2879.
Ohzawa, I., & Freeman, R. D. (1986). The binocular organization of simple cells in the cat's visual cortex. Journal of Neurophysiology, 56, 221–242, https://doi.org/10.1152/jn.1986.56.1.221. [PubMed]
Pack, C. C., Born, R. T., & Livingstone, M. S. (2003). Two-dimensional substructure of stereo and motion interactions in macaque visual cortex. Neuron, 37, 525–535, https://doi.org/10.1016/s0896-6273(02)01187-x. [PubMed]
Patel, S. S., Bedell, H. E., & Sampath, P. (2006). Pooling signals from vertical and non-vertically orientation-tuned disparity mechanisms in human stereopsis. Vision Research, 46, 1–13, https://doi.org/10.1016/j.visres.2005.07.011. [PubMed]
Patel, S. S., Ukwade, M. T., Stevenson, S. B., Bedell, H. E., Sampath, V., & Ogmen, H. (2003). Stereoscopic depth perception from oblique phase disparities. Vision Research, 43, 2479–2792, https://doi.org/10.1016/s0042-6989(03)00464-4. [PubMed]
Pettigrew, J. D., Nikara, T., & Bishop, P. O. (1968). Binocular interaction on single units in cat striate cortex: Simultaneous stimulation by single moving slit with receptive fields in correspondence. Experimental Brain Research, 6, 391–410, https://doi.org/10.1007/bf00233186. [PubMed]
Poggio, G. F. (1995). Mechanisms of stereopsis in monkey visual cortex. Cerebral Cortex, 3, 193–204, https://doi.org/10.1093/cercor/5.3.193.
Poggio, G. F., & Fischer, B. (1977). Binocular interaction and depth sensitivity in striate and prestriate cortex of behaving rhesus monkey. Journal of Neurophysiology, 40, 1392–1405, https://doi.org/10.1152/jn.1977.40.6.1392. [PubMed]
Poggio, G. F., Motter, B. C., Squatrito, S., & Trotter, Y. (1985). Responses of neurons in visual cortex (V1 and V2) of the alert macaque to dynamic random-dot stereograms. Vision Research, 3, 397–406, https://doi.org/10.1016/0042-6989(85)90065-3.
Poggio, G. F., & Poggio, T. (1984). The analysis of stereopsis. Annual Review of Neuroscience, 7, 379–412, https://doi.org/10.1146/annurev.ne.07.030184.002115. [PubMed]
Prazdny, K. (1985). Vertical disparity tolerance in random-dot stereograms. Bulletin of the Psychonomic Society, 23, 413–414, https://doi.org/10.3758/bf03330200.
Prince, S. J., & Eagle, R. A. (1999). Size-disparity correlation in human binocular depth perception. Proceedings of the Royal Society of London B: Biological Sciences, 266, 1361–1365, https://doi.org/10.1098/rspb.1999.0788.
Prince, S., Offen, S., Cumming, B. G., & Eagle, R. A. (2001). The integration of orientation information in the motion correspondence problem. Perception, 30, 367–380, https://doi.org/10.1068/p3049. [PubMed]
Prince, S. J. D., Pointon, A. D., Cumming, B. G., & Parker, A. J. (2002). Quantitative analysis of the responses of V1 neurons to horizontal disparity in dynamic random-dot stereograms. Journal of Neurophysiology, 87, 191–208, https://doi.org/10.1152/jn.00465.2000. [PubMed]
Qian, N., & Anderson, R. A. (1997). A physiological model for motion-stereo integration and a unified explanation of Pulfrick-like phenomena. Vision Research, 37, 1683–1698, https://doi.org/10.1016/s0042-6989(96)00164-2. [PubMed]
Quaia, C., Sheliga, B. H., Optican, L. M., & Cumming, B. G. (2013). Temporal evolution of pattern disparity processing in humans. Journal of Neuroscience, 33, 3465–3476, https://doi.org/10.1523/jneurosci.4318-12.2013.
Read, J. C. A. (2010). Vertical binocular disparity is encoded implicitly within a model neuronal population tuned to horizontal disparity and orientation. PLoS Computational Biology, 6, e1000754, https://doi.org/10.1371/journal.pcbi.1000754. [PubMed]
Read, J. C. A., & Cumming, B. G. (2004). Understanding the cortical specialization for horizontal disparity. Neural Computation 16, 1983–2020, https://doi.org/10.1162/0899766041732440. [PubMed]
Read, J. C. A., & Eagle, R. A. (2000). Reversed stereo depth and motion direction with anti-correlated stimuli. Vision Research, 40, 3345–3358, https://doi.org/10.1016/s0042-6989(00)00182-6. [PubMed]
Regan, D., Erkelens, J. C., & Collewijn, H. (1986). Necessary conditions for the perception of motion in depth. Investigative Ophthalmology and Visual Science, 27, 584–597.
Remole, A., Code, S. M., Matyas, C. E., & McLeod, M. A. (1992). Multimeridional apparent frontoparallel plane: Relation between stimulus orientation angle and compensating tilt angle. Optometry and Vision Science, 69, 544–549, https://doi.org/10.1097/00006324-199207000-00006.
Rogers, B. J., & Bradshaw, M. F. (1993). Vertical disparities, differential perspective and binocular stereopsis. Nature, 361, 253–255, https://doi.org/10.1038/361253a0. [PubMed]
Rust, N. C., Mante, V., Simoncelli, E. P., & Movshon, J. A. (2006). How MT cells analyze the motion of visual patterns. Nature Neuroscience, 9, 1421–1431, https://doi.org/10.1038/nn1786. [PubMed]
Sasaki, K. S., Tabuchi, Y., & Ohzawa, I. (2010). Complex cells in the cat striate cortex have multiple disparity detectors in the three-dimensional binocular receptive fields. Journal of Neuroscience, 30, 13826–13837, https://doi.org/10.1523/jneurosci.1135-10.2010.
Schor, C., Heckmann, T., & Tyler, C. W. (1989). Binocular fusion limits are independent of contrast, luminance gradient and component phases. Vision Research, 29, 821–835, https://doi.org/10.1016/0042-6989(89)90094-1. [PubMed]
Schor, C. M., & Tyler, C. W. (1981). Spatio-temporal properties of Panum's fusional area. Vision Research, 21, 683–692, https://doi.org/10.1016/0042-6989(81)90076-6. [PubMed]
Schor, C. M., Wood, I. C., & Ogawa, J. (1984). Spatial tuning of static and dynamic local stereopsis. Vision Research, 24, 573–578, https://doi.org/10.1016/0042-6989(84)90111-1. [PubMed]
Schreiber, K. M., Crawford, J. D., Fetters, M., & Tweed, D. (2001). The motor side of depth vision. Nature, 410, 819–822, https://doi.org/10.1038/35071081. [PubMed]
Serrano-Pedraza, I., Brash, C., & Read, J. C. A. (2013). Testing the horizontal-vertical stereo anisotropy with the critical-band masking paradigm. Journal of Vision, 13(11):15, 1–15, https://doi.org/10.1167/13.11.15.
Sheedy, J. E., Bailey, I. L., Buri, M., & Bass, E. (1986). Binocular vs. monocular task performance. American Journal of Optometry and Physiological Optics, 63, 839–846, https://doi.org/10.1097/00006324-198610000-00008. [PubMed]
Smallman, H. S., & MacLeod, D. I. A. (1994). Size-disparity correlation in stereopsis at contrast threshold. Journal of the Optical Society of America, 11, 2169–2183, https://doi.org/10.1364/josaa.11.002169.
Smith, E. L., III, Chino, Y. M., Ni, J., Ridder, W. H., III, & Crawford, M. L. J. (1997). Binocular spatial phase tuning characteristics of neurons in the macaque striate cortex. Journal of Neurophysiology, 78, 351–365, https://doi.org/10.1152/jn.1997.78.1.351. [PubMed]
Stevenson, S. B., & Schor, C. M. (1997). Human stereo matching is not restricted to epipolar lines. Vision Research, 37, 2717–2723, https://doi.org/10.1016/s0042-6989(97)00097-7. [PubMed]
Trotter, Y., Celebrini, S., & Durand, J. B. (2004). Evidence for implication of primate area V1 in neural 3-D spatial localization processing. Journal of Physiology-Paris, 98, 125–134, https://doi.org/10.1016/j.jphysparis.2004.03.004. [PubMed]
Tsao, D. Y, Conway, B. R., & Livingstone, M. S. (2003). Receptive fields of disparity-tuned simple cells in macaque V1. Neuron, 38, 103–114, https://doi.org/10.1016/s0896-6273(03)00150-8. [PubMed]
Tyler, C. W. (1974). Depth perception in disparity gratings. Nature 251, 140–142, https://doi.org/10.1038/251140a0. [PubMed]
van Ee, R., Anderson, B. L., & Farid, H. (2001). Occlusion junctions do not improve stereoacuity. Spatial Vision, 15, 45–59, https://doi.org/10.1163/15685680152692006. [PubMed]
van Ee, R., & Erkelens, C. J. (1995). Binocular perception of slant about oblique axes relative to a visual frame of reference. Perception, 24, 299–314, https://doi.org/10.1068/p240299. [PubMed]
van Ee, R., & Schor, C. M. (2000). Unconstrained stereoscopic matching of lines. Vision Research, 40, 151–162, https://doi.org/10.1016/s0042-6989(99)00174-1. [PubMed]
van Ee, R., & van Dam, L. C. J. (2003). The influence of cyclovergence on unconstrained stereoscopic matching. Vision Research, 43, 307–319, https://doi.org/10.1016/s0042-6989(02)00496-0. [PubMed]
von der Heydt, R., Adorjani, C. S., Haenny, P., & Baumgartner, G. (1978). Disparity sensitivity and receptive field incongruity of units in cat striate cortex, Experimental Brain Research, 31, 523–545, https://doi.org/10.1007/bf00239810. [PubMed]
Werner, H. (1937). Dynamics in binocular depth perception. Psychological Monographs, 49, 1–127, https://doi.org/10.1037/h0093526.
Westheimer, G. (1979). The spatial sense of the eye. Investigative Ophthalmology and Visual Science, 18, 893–912.
Westheimer, G. (1984). Sensitivity for vertical retinal image differences. Nature, 307, 632–634, https://doi.org/10.1038/307632a0. [PubMed]
Westheimer, G., & Pettet, M. W. (1992). Detection and processing of vertical disparity by the human observer. Proceedings of the Royal Society of London B: Biological Sciences, 250, 243–247, https://doi.org/10.1098/rspb.1992.0155.
Wheatstone, C. (1838). Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society, 2, 371–393, https://doi.org/10.1098/rstl.1838.0019.
Yuste, R. (2015). From the neuron doctrine to neural networks. Nature Reviews Neuroscience, 16, 487–497, https://doi.org/10.1038/nrn3962. [PubMed]
Figure 1.
 
Aperture disparities. (A) Stereogram of an oblique line segment with horizontal disparity behind a segmented occluder. (B) Combined left- and right-eye views show that the portion of the line segment visible within each aperture has a disparity direction determined by the orientation of the aperture. (C) Combined images from a different stereogram can produce identical within-aperture patterns from a line segment with an arbitrarily different disparity direction, in this case vertical rather than horizontal.
Figure 2.
 
Stereo plaids. Two schematic plaids with horizontal disparity are shown with left- and right-eye images superimposed. The oblique lines represent sinusoidal components; the perpendicular separation between the adjacent parallel lines of each eye gives the wavelength. Given the component orientations, equal component phase disparities (ϕ) across the two plaids correspond to horizontal pattern disparities with a ratio of 1:1.93. This approximates the ratio found for horizontal disparity thresholds for the two plaids.
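The 1:1.93 threshold ratio in the Figure 2 caption follows from simple trigonometry: a component's phase (perpendicular) disparity is consistent with a horizontal pattern disparity inversely scaled by the cosine of the angle between the component's normal and horizontal. A minimal sketch, assuming for illustration component normals 15° and 60° from horizontal (an assumed pair of orientations that reproduces the caption's ratio; the actual plaid orientations appear only in the figure):

```python
import math

def horizontal_disparity(phase_deg, wavelength, normal_angle_deg):
    """Horizontal pattern disparity consistent with a sinusoidal component's
    perpendicular (phase) disparity.

    phase_deg:        phase disparity of the component, in degrees
    wavelength:       component wavelength (output is in the same units)
    normal_angle_deg: angle of the component's normal from horizontal
    """
    perp = (phase_deg / 360.0) * wavelength          # perpendicular disparity
    return perp / math.cos(math.radians(normal_angle_deg))

# Equal phase disparities (10 deg of phase, unit wavelength) for two
# hypothetical components whose normals lie 15 deg and 60 deg from horizontal:
d1 = horizontal_disparity(10, 1.0, 15)
d2 = horizontal_disparity(10, 1.0, 60)
print(round(d2 / d1, 2))  # -> 1.93, the ratio cited in the caption
```

The more oblique the component (the farther its normal from horizontal), the larger the horizontal pattern disparity that the same phase disparity implies.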
Figure 3.
 
Preferred phase disparity as a function of receptive field orientation (DeAngelis et al., 1991). Each dot gives the peak of the Gaussian fit to single-cell responses to disparate drifting sinusoidal gratings in anesthetized cat striate cortex. The closer the orientation preference of the cell to horizontal, the more confined is the tuning of the cells to small values of phase disparity. The solid line is a sinusoid indicating the relative maximum phase disparity for component orientations of a broadband pattern with horizontal disparity.
Figure 4.
 
Disparity threshold and relative orientation. (A) Stereogram showing central target and surrounding reference grating patches. The disparity threshold was measured using a constant stimulus procedure to vary the phase disparity of the target, keeping the disparity of the reference grating at zero. (B) Target phase disparity at threshold. Means and standard errors for four observers for the five conditions of the experiment are shown from left to right: (1) target and reference vertically oriented (as in panel A); (2) target with no reference stimulus; (3) target oriented at 45°, reference at 90°; (4) target oriented at 45°, reference at 135°; and (5) target oriented at 45°, reference at 45°. Equivalent spatial disparities appear on the right ordinate. (Data from Farell, 2006.)
Figure 5.
 
Disparity thresholds as a function of small orientation differences. Mean thresholds and standard errors for two observers for center-surround grating stimuli are shown. The stimuli were similar to those of Figure 4 but had an orientation difference no larger than 30°. The dashed line extending from the 0° threshold for observer S2 gives the horizontal disparity prediction. (From Farell, 2006.)
Figure 6.
 
Psychometric functions for grating-plaid pairs. (A) Target grating and reference plaid. The plaid was symmetrical and had a fixed disparity with a direction of 60°. The grating had an orientation of 45° (shown here) or 90° and a disparity that varied under control of a constant-stimulus procedure. Stimuli were presented for 150 ms in trials blocked by conditions. (B) Psychometric functions for two observers; θ gives the grating orientation; the plaid was identical for the two grating orientations. Vertical arrows point to grating disparity values that yielded a perceived depth match with the plaid. Unblocked conditions, including multiple plaid disparities presented in random order, produced similar data. (From Farell et al., 2009.)
Figure 7.
 
Predicted 1-D/2-D depth matches. (A–D) The disparity vector of a 2-D stimulus is represented by a magenta arrow. The disparity vector of a 1-D stimulus that produces a perceptual depth match according to the IOC construction is represented by a blue arrow. The thick line shows the orientation of the 1-D stimulus, the thin line is the perpendicular orientation, and the dashed line is the disparity constraint line. All predicted relative disparities except (D) agree with experimentally observed values. (E) Superimposed disparity vectors of the 1-D stimuli, shown scaled up, reproduce the disparity vector of the 2-D stimulus, similarly scaled, as an IOC result. The depth-matching 1-D disparities therefore would be component disparities of the 2-D stimulus.
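The IOC construction behind these predictions can be written as a pair of linear constraints: each 1-D component, with orientation θ and perpendicular disparity s, pins the 2-D disparity vector d to the constraint line d·n = s, where n is the unit normal to the component. A minimal sketch, with orientation measured counterclockwise from horizontal (an assumed convention, not taken from the figure):

```python
import math

def ioc(theta1_deg, s1, theta2_deg, s2):
    """Intersection-of-constraints (IOC) disparity from two 1-D components.

    Each component, oriented theta_i degrees from horizontal, constrains the
    2-D disparity vector d to the line d . n_i = s_i, where n_i is the unit
    normal to the component and s_i its perpendicular disparity.  Solving the
    two constraints by Cramer's rule gives the 2-D pattern disparity.
    """
    n1 = (-math.sin(math.radians(theta1_deg)), math.cos(math.radians(theta1_deg)))
    n2 = (-math.sin(math.radians(theta2_deg)), math.cos(math.radians(theta2_deg)))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-9:
        raise ValueError("parallel components: disparity is under-constrained")
    dx = (s1 * n2[1] - s2 * n1[1]) / det
    dy = (n1[0] * s2 - n2[0] * s1) / det
    return dx, dy

# Components at 45 deg and 135 deg carrying the perpendicular disparities
# projected from a purely horizontal pattern disparity d = (10, 0):
s45 = 10 * -math.sin(math.radians(45))    # d . n for the 45-deg component
s135 = 10 * -math.sin(math.radians(135))  # d . n for the 135-deg component
dx, dy = ioc(45, s45, 135, s135)          # recovers approximately (10, 0)
```

The under-constrained (parallel) branch is the stereo aperture problem in miniature: a single 1-D component fixes only the perpendicular projection of the disparity, not the vector itself.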
Figure 8.
 
Depth from oblique disparities. (A) Schematic example of the stimuli used by Patel et al. (2003). The central squares within the RDSs have oblique disparity within two 30°-wide directional bands (θ). The centers of these bands are ±30° in the upper RDS and ±45° in the lower RDS. Oblique arrows represent 90° phase disparities at one spatial scale for components at the extremes of these bands (i.e., disparities perpendicular to central random-dot components with orientations between 105° and 135° and between 45° and 75° in the upper RDS). These component disparities would be found in an RDS having the horizontal disparity given by the IOC lines and proportionally covering the range of the gray arrow. Other components at a larger or smaller scale, also with 90° phase disparities, would extend this range to greater and smaller values. All components outside the disparate orientation range had zero disparity, as did the entire surrounding RDS; disparities are not drawn to scale. The observers in the study by Patel et al. (2003) matched the perceived depth of the inner square of the RDS by adjusting the disparity of a simultaneously displayed and overlapping 3′ × 3′ square probe. (B) Mean depth-matching disparities plotted as a function of the central direction of the component disparities; a disparity direction of –75° comes from components centered on an orientation of 15°. The inverse cosine of orientation is pinned to the data point for θ of 30° and center disparity direction of 15°. Both the center orientation of the disparate bands and their bandwidth affect perceived depth, which dissipates as the disparity direction nears vertical. (Panel B was adapted from Patel et al., 2003.)
Figure 9.
 
Adapting the disparity of a reversed-depth plaid. (A) Sinusoidal gratings with negative (“near”) disparity when cross-fused. (B) Sinusoidal plaid with positive (“far”) disparity when cross-fused. The plaid is composed of the grating in panel A and a zero-disparity grating with a different orientation. (C) The depth order of the two gratings, as each would appear if displayed individually along with the fixation disk. One grating would appear in front of fixation and the other in the fixation plane. If they were spatially and temporally aligned and given appropriate orientations, they would appear as a coherent plaid behind fixation. Nothing would appear on the near side or in the fixation plane. The “far” horizontal disparity of the plaid could be detected directly or calculated from the disparities of the plaid's 1-D components. An adaptor with “far” disparity would affect the perceived depth of the plaid in the former case; one with “near” disparity would do so in the latter case. Adapting at a “near” disparity with a stimulus having an orientation similar to that of the “near” grating is essential for influencing the perceived “far” depth of the plaid (Farell, 1998).
Figure 10.
 
Relative depth of 1-D and 2-D stimuli. (A) 1-D stimulus pairs (Gabors), shown with disparity vectors. The disparity constraint line of one stimulus appears on the left and that of the other on the right. The disparity specified by each stimulus's constraint line exceeds the disparity magnitude of the other stimulus, so the IOC yields an inconsistent relative disparity. (B) A 1-D stimulus (Gabor) and a 2-D stimulus (plaid), with constraint and projection lines giving consistent relative-disparity measures. (C) A pair of 2-D stimuli (plaids) with unequal horizontal disparities (left). Rotating one disparity while preserving both amplitudes (center) reverses the relative horizontal disparities. Comparing the same non-parallel disparities along the RDA (right) restores the original relative-disparity ordering.
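The IOC construction in panel A can be made explicit. Each 1-D component constrains the 2-D disparity vector v to satisfy n · v = d, where n is the unit vector in the component's perpendicular-disparity direction and d the signed disparity along it; two non-parallel components therefore determine v through a 2 × 2 linear system. A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def ioc_disparity(phi1_deg, d1, phi2_deg, d2):
    """Intersection-of-constraints (IOC) disparity for two 1-D components.
    phi*_deg: direction of each component's perpendicular disparity
    (degrees from horizontal); d*: signed disparity along that direction.
    Solves n1 . v = d1, n2 . v = d2 by Cramer's rule."""
    n1 = (math.cos(math.radians(phi1_deg)), math.sin(math.radians(phi1_deg)))
    n2 = (math.cos(math.radians(phi2_deg)), math.sin(math.radians(phi2_deg)))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-9:
        raise ValueError("parallel components: IOC is undefined")
    vx = (d1 * n2[1] - d2 * n1[1]) / det
    vy = (n1[0] * d2 - n2[0] * d1) / det
    return vx, vy

# Two components whose perpendicular disparities point +/-30 deg from
# horizontal, each consistent with the same 10-unit horizontal shift:
d = 10 * math.cos(math.radians(30))
vx, vy = ioc_disparity(30, d, -30, d)  # a purely horizontal IOC disparity
```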
Figure 11.
 
Depth from horizontal disparity versus RDA. (A) Monocular example of center-surround plaids, with exaggerated contrast. In the experiment, the disparity of the annulus was fixed while the disparity of the center plaid was varied to determine the point of subjective equality (PSE) for depth. Component orientations of the plaids were 45°/75°, 75°/105°, and 105°/135° in various center-surround combinations. Disparity directions were 0° and ±30°. Stimuli were presented briefly (180 ms) after horizontal and vertical nonius alignment of the eyes and in the absence of irrelevant binocular stimuli (Farell & Ng, 2018). (B) Means and standard errors of depth-match disparity ratios for four observers. Ratios are between target and reference disparity magnitudes at the depth match. Also shown are the ratios predicted by the horizontal-disparity and RDA metrics. The RDA predictions fit the data better than the horizontal-disparity predictions (r2 = 0.83, t(2) = 3.08, p < 0.05, vs. r2 = 0.20, not significant). Stimulus conditions, defined by plaid orientations and disparity directions, are sketched along the x-axis. Each condition contained two symmetrical stimulus arrangements (sets 1 and 2). The stimulus in panel A is an example of the two leftmost stimulus pairs in set 2.
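Both predictions in panel B amount to equating projections of the target and reference disparity vectors onto a comparison axis: the horizontal-disparity metric projects onto 0°, whereas the RDA metric projects onto a stimulus-dependent axis. Under that assumption, the predicted magnitude ratio at a depth match is cos(φref − axis)/cos(φtarget − axis). A hedged sketch (the function name is mine; the specific RDA axes of Farell & Ng, 2018, are not reproduced here):

```python
import math

def predicted_match_ratio(phi_target_deg, phi_ref_deg, axis_deg=0.0):
    """Target/reference disparity-magnitude ratio at a depth match,
    assuming the match equates the two disparities' projections onto
    the comparison axis. axis_deg = 0 is the horizontal prediction."""
    ct = math.cos(math.radians(phi_target_deg - axis_deg))
    cr = math.cos(math.radians(phi_ref_deg - axis_deg))
    if abs(ct) < 1e-9:
        raise ValueError("target disparity orthogonal to comparison axis")
    return cr / ct

# Horizontal metric: a target disparity tilted 30 deg must exceed a
# horizontal reference to produce the same horizontal component.
print(round(predicted_match_ratio(30, 0), 3))  # -> 1.155
```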
Figure 12.
 
Stereo lines and endpoints. The vertical and oblique stereo line segments have the same horizontal disparity. For the vertical line, the perpendicular disparity equals the disparity of the endpoints; for the oblique line, the perpendicular and endpoint disparities differ. The aperture problem could be solved, in the sense that the true-match disparity dominates, if the disparity of the two endpoints propagated across the linear extent between them, "filling in" conflicting perpendicular disparities with the unambiguous disparities of 2-D stimulus features.
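The two disparity measures in the figure are related by a cosine: for a line tilted θ degrees from vertical carrying a horizontal endpoint disparity H, the component of that disparity perpendicular to the line has magnitude H·cos θ. At θ = 0 the two measures coincide, which is the sense in which only the oblique line poses the aperture problem. A minimal numeric sketch (the function name is illustrative):

```python
import math

def perpendicular_disparity(h_disparity, tilt_deg):
    """Magnitude of the disparity component perpendicular to a line
    tilted tilt_deg from vertical, given horizontal endpoint disparity
    h_disparity. Equals the endpoint disparity only for a vertical line."""
    return h_disparity * math.cos(math.radians(tilt_deg))

print(perpendicular_disparity(10.0, 0))             # vertical line -> 10.0
print(round(perpendicular_disparity(10.0, 30), 3))  # oblique line -> 8.66
```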