Stuart Wallis, Mark Georgeson; What is binocular fusion? Multiplicative combination of luminance gradients via the geometric mean. Journal of Vision 2012;12(9):47. doi: https://doi.org/10.1167/12.9.47.
When images in the two eyes are sufficiently similar, they are ‘fused’. Fusion has motor (vergence) and sensory components. When vergence is prevented, sensory ‘fusion’ of disparate images still occurs, but the nature of this fusion has received curiously little attention. Summation of signals from the two eyes is fairly well understood, and seems the obvious basis for fusion. But summation of disparate edges should cause the fused edge to appear more blurred. We tested this by studying the perceived blur of single edges with vertical disparities that spanned fusion and diplopia. Single, horizontal, Gaussian-blurred edges (blur, B=1.6 to 40 minarc) were presented to each eye at various disparities (0 to 4B), or were added together in the same eye (monoptic control). Perceived blur was measured by blur-matching, using a 2-interval forced-choice method. In monoptic conditions, matched blur increased with disparity in the fusional range (0 to 2B) as expected. But, surprisingly, when the two edges were in different eyes (dichoptic), matched blur remained almost constant, and did not increase with disparity. This shows that fusion preserves the sharpness or blur of each eye’s image, and that fusion cannot easily be explained by summation or arithmetic averaging of spatial signals across the eyes. We show that fusion of this kind occurs if (a) each monocular signal is the spatial derivative (gradient profile) of the input edge, and (b) binocular combination is the contrast-weighted geometric mean of these signals. This achieves positional averaging (‘allelotropia’) without blurring or smearing the edge information.
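The blur-preserving property of the geometric-mean rule can be checked numerically. The sketch below (not the authors' code; the Gaussian profiles, blur B, and disparity d are illustrative choices) models each eye's gradient profile as a Gaussian bump at that eye's edge location, then compares geometric-mean and arithmetic-mean combination. The geometric mean of two equally blurred, position-shifted Gaussian bumps is another Gaussian of the same width, centred midway between them (positional averaging), whereas the arithmetic mean is a broader mixture.

```python
import math

B = 4.0    # edge blur: SD of the Gaussian gradient profile (arbitrary units)
d = 6.0    # disparity between the two eyes' edges (within the fusional range)
xs = [i * 0.01 - 40.0 for i in range(8001)]   # spatial sample grid

def grad(x, center, sd):
    """Spatial derivative of a Gaussian-blurred edge: a Gaussian bump
    of SD `sd` centred on the edge location (amplitude normalised to 1)."""
    return math.exp(-(x - center) ** 2 / (2 * sd ** 2))

gL = [grad(x, -d / 2, B) for x in xs]   # left-eye gradient profile
gR = [grad(x, +d / 2, B) for x in xs]   # right-eye gradient profile

geo = [math.sqrt(a * b) for a, b in zip(gL, gR)]   # geometric-mean combination
ari = [(a + b) / 2.0 for a, b in zip(gL, gR)]      # arithmetic-mean combination

def mean_of(g):
    """Centroid of a gradient profile = perceived edge position."""
    return sum(w * x for w, x in zip(g, xs)) / sum(g)

def sd_of(g):
    """SD of a gradient profile treated as a density = apparent edge blur."""
    mu, total = mean_of(g), sum(g)
    return math.sqrt(sum(w * (x - mu) ** 2 for w, x in zip(g, xs)) / total)

# Geometric mean: blur stays at B, edge sits at the mean position (x = 0).
# Arithmetic mean: blur grows to sqrt(B**2 + (d/2)**2), smearing the edge.
print(f"monocular blur       : {sd_of(gL):.3f}")
print(f"geometric-mean blur  : {sd_of(geo):.3f}  at x = {mean_of(geo):.3f}")
print(f"arithmetic-mean blur : {sd_of(ari):.3f}  at x = {mean_of(ari):.3f}")
```

Analytically, sqrt(G(x - d/2) · G(x + d/2)) for two Gaussians of SD B reduces to a Gaussian of the same SD B centred at x = 0 (with reduced amplitude), which is the positional averaging ('allelotropia') without added blur described above; the arithmetic mean instead has SD sqrt(B² + d²/4).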
Meeting abstract presented at VSS 2012