Reduced peripheral stereoacuity is likely a consequence of either an increase in the bandwidth and size of disparity-tuned mechanisms or a decrease in their number and density across the visual field. Neurophysiological differences between foveal and peripheral vision begin within the retina. The density of cones and ganglion cells in the human retina falls off with eccentricity (Curcio & Allen,
1990; Curcio, Sloan, Kalina, & Hendrickson,
1990), suggesting coarser sampling in the periphery (see Snyder, 1982): for a one-dimensional sampling array with spacing s, the highest recoverable spatial frequency is the Nyquist limit of 1/(2s), so wider receptor spacing directly limits the finest resolvable detail. However, the optics of the human eye remain nearly constant over a large region of around 10° centered on the optical axis (Jennings & Charman,
1981). It is established that the receptive field size of neurons in striate cortex increases with eccentricity, particularly for complex cells (cat: Wilson & Sherman,
1976). Disparity tuning in V1 is coarser and broader in the periphery (Prince, Cumming, & Parker, 2002), with a larger standard deviation in receptive field disparity (Joshua & Bishop, 1970). Natural scene statistics reveal a pattern similar to that of these V1 cells: the standard deviation of the distribution of disparities in natural scenes increases in the periphery relative to a virtual observer (Liu, Bovik, & Cormack,
2008). Together, these results suggest that local estimates of disparity are likely to be noisier in the periphery, so the precision with which each element can be localized in depth, and hence stereoacuity, is reduced. It may also be that there are fewer disparity-tuned mechanisms with peripheral receptive fields; in that case, undersampling of disparity by a coarser representation of space in the peripheral visual field may further increase noise in the estimates.
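The intuition can be made concrete with a minimal simulation, which we sketch below purely for illustration; it is not drawn from the cited studies. It assumes a bank of Gaussian disparity-tuned units corrupted by independent Gaussian response noise, read out with a maximum-likelihood (template-matching) decoder, and all parameter values (unit counts, tuning widths, noise level) are arbitrary assumptions. A dense, narrowly tuned "foveal" population is compared with a sparser, more broadly tuned "peripheral" one.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(disp, centers, sigma):
    """Gaussian disparity tuning curves evaluated at a stimulus disparity."""
    return np.exp(-0.5 * ((disp - centers) / sigma) ** 2)

def decoded_sd(centers, sigma, true_disp=0.1, noise_sd=0.2, n_trials=5000):
    """Trial-to-trial SD of maximum-likelihood disparity estimates.

    Responses are Gaussian tuning curves plus iid Gaussian noise; under
    that noise model the ML estimate is the candidate disparity whose
    template response vector is closest (least squares) to the observed
    population response on each trial.
    """
    grid = np.linspace(-1, 1, 401)                              # candidate disparities
    templates = tuning(grid[:, None], centers[None, :], sigma)  # (n_grid, n_units)
    resp = tuning(true_disp, centers, sigma) \
        + noise_sd * rng.standard_normal((n_trials, centers.size))
    # Squared distance to each template, dropping the per-trial ||resp||^2
    # term, which is constant across candidates and cannot change the argmin.
    dist = (templates ** 2).sum(axis=1) - 2.0 * resp @ templates.T
    return grid[dist.argmin(axis=1)].std()

# "Foveal" population: many units with narrow disparity tuning.
sd_fovea = decoded_sd(centers=np.linspace(-1, 1, 41), sigma=0.1)

# "Peripheral" population: fewer units with broader tuning over the same range.
sd_periphery = decoded_sd(centers=np.linspace(-1, 1, 9), sigma=0.4)

print(f"SD of disparity estimates, fovea:     {sd_fovea:.4f}")
print(f"SD of disparity estimates, periphery: {sd_periphery:.4f}")
```

With these assumed parameters, the peripheral configuration yields a markedly larger trial-to-trial spread of decoded disparities, illustrating how broader tuning and sparser coverage alone, with no change in response noise per unit, are enough to degrade the precision of disparity estimates.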