Two types of neural representations of binocular disparity—correlation-based and match-based representations—underlie stereoscopic depth perception (Cumming & Parker, 1997; Doi, Tanabe, & Fujita, 2011; Janssen, Vogels, Liu, & Orban, 2003; Krug, Cumming, & Parker, 2004; Kumano, Tanabe, & Fujita, 2008; Parker, 2007; Tanabe, Umeda, & Fujita, 2004). These representations are characterized by the disparity tuning functions of the neurons underlying perceptual decisions. In the correlation-based representation, the amplitude and sign of the disparity tuning function follow the cross-correlation between the images projected to the left and right eyes. In the match-based representation, the features matched between the images in the two eyes determine the amplitude of the tuning function. Neurons involved in the two representations have contrasting tuning functions in response to anticorrelated random-dot stereograms (RDSs). In anticorrelated RDSs, the luminance contrast of the dots is reversed between the two eyes: White dots are replaced with black dots, and black dots with white dots, against a gray background. The anticorrelation inverts the sign of the cross-correlation and eliminates the matched features between the two eyes. Accordingly, the correlation-based representation has inverted tuning functions for anticorrelated RDSs relative to those for correlated RDSs (Cumming & Parker,
1997; Krug et al., 2004; Takemura, Inoue, Kawano, Quaia, & Miles, 2001), while the match-based representation has flat (zero- or reduced-amplitude) functions (Janssen et al., 2003; Kumano et al., 2008; Tanabe et al., 2004; Theys, Srivastava, van Loon, Goffin, & Janssen, 2012). The terms correlation-based and match-based representation refer to these distinctive sets of tuning curves; they do not refer to the underlying neuronal mechanisms. Specifically, our correlation-based representation does not necessarily rely on a disparity energy model (Ohzawa, DeAngelis, & Freeman, 1990), and our match-based representation does not imply that the underlying mechanism is feature matching, such as the matching of second-derivative zero crossings in filtered left-eye and right-eye images (Marr & Poggio, 1979).
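The effect of anticorrelation on the two signals can be illustrated with a toy one-dimensional stimulus (a minimal sketch for intuition only, not the stimuli or analyses used in the cited studies): reversing the luminance contrast of every dot flips the sign of the interocular cross-correlation, while leaving no dot whose contrast matches between the two eyes.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-in for a random-dot pattern: dots are +1 (white) or -1 (black)
# against a gray (0) background, which is omitted here for simplicity.
left = rng.choice([-1.0, 1.0], size=1000)

# Correlated RDS: the right-eye image repeats the left-eye dots.
right_corr = left.copy()
# Anticorrelated RDS: every dot's luminance contrast is reversed.
right_anti = -left

def cross_corr(a, b):
    """Normalized interocular cross-correlation at zero disparity offset."""
    return float(np.mean(a * b))

# Correlation-based signal: sign inverts under anticorrelation.
print(cross_corr(left, right_corr))  # 1.0
print(cross_corr(left, right_anti))  # -1.0

# Match-based signal: fraction of dots with identical contrast in both eyes.
matches_corr = float(np.mean(left == right_corr))  # 1.0 (every dot matches)
matches_anti = float(np.mean(left == right_anti))  # 0.0 (no matched features)
print(matches_corr, matches_anti)
```

Consistent with the tuning functions described above, the correlation signal changes sign under anticorrelation, whereas the match signal simply vanishes.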