August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   August 2012
What is binocular fusion? Multiplicative combination of luminance gradients via the geometric mean
Author Affiliations
  • Stuart Wallis
    Aston University
  • Mark Georgeson
    Aston University
Journal of Vision August 2012, Vol.12, 47. doi:10.1167/12.9.47
Abstract

When images in the two eyes are sufficiently similar, they are ‘fused’. Fusion has motor (vergence) and sensory components. When vergence is prevented, sensory ‘fusion’ of disparate images still occurs, but the nature of this fusion has received curiously little attention. Summation of signals from the two eyes is fairly well understood, and seems the obvious basis for fusion. But summation of disparate edges should cause the fused edge to appear more blurred. We tested this by studying the perceived blur of single edges with vertical disparities that spanned fusion and diplopia. Single, horizontal, Gaussian-blurred edges (blur, B=1.6 to 40 minarc) were presented to each eye at various disparities (0 to 4B), or were added together in the same eye (monoptic control). Perceived blur was measured by blur-matching, using a 2-interval forced-choice method. In monoptic conditions, matched blur increased with disparity in the fusional range (0 to 2B) as expected. But, surprisingly, when the two edges were in different eyes (dichoptic), matched blur remained almost constant, and did not increase with disparity. This shows that fusion preserves the sharpness or blur of each eye’s image, and that fusion cannot easily be explained by summation or arithmetic averaging of spatial signals across the eyes. We show that fusion of this kind occurs if (a) each monocular signal is the spatial derivative (gradient profile) of the input edge, and (b) binocular combination is the contrast-weighted geometric mean of these signals. This achieves positional averaging (‘allelotropia’) without blurring or smearing the edge information.
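The proposed combination rule can be illustrated numerically. The sketch below is an assumption-laden toy model, not the authors' implementation: it takes the gradient profile of a Gaussian-blurred edge to be a Gaussian of SD B, places the two eyes' edges at ±d/2 (disparity d), and, since the two edges here have equal contrast, drops the contrast-weighting term so the combination reduces to a plain geometric mean. It then compares the recovered blur of the geometric-mean and arithmetic-mean combinations.

```python
import numpy as np

B = 4.0          # edge blur (Gaussian SD of the gradient profile), arbitrary units
d = 2.0 * B      # disparity between the two eyes' edges
x = np.linspace(-40.0, 40.0, 8001)

def gradient_profile(x, centre, blur):
    """Gradient (spatial derivative) of a Gaussian-blurred edge: a Gaussian."""
    return np.exp(-(x - centre) ** 2 / (2.0 * blur ** 2))

g_left = gradient_profile(x, -d / 2.0, B)   # left eye's monocular signal
g_right = gradient_profile(x, +d / 2.0, B)  # right eye's monocular signal

geo = np.sqrt(g_left * g_right)   # geometric-mean combination (equal contrasts)
ari = 0.5 * (g_left + g_right)    # arithmetic averaging, for comparison

def profile_sd(x, g):
    """Blur estimate: SD of the combined gradient profile (uniform grid)."""
    w = g / g.sum()
    mu = (x * w).sum()
    return np.sqrt(((x - mu) ** 2 * w).sum())

print(profile_sd(x, geo))  # ~B: geometric mean keeps the edge as sharp as each eye's
print(profile_sd(x, ari))  # > B: arithmetic averaging broadens (blurs) the edge
```

Analytically, the square root of the product of two unit-SD-B Gaussians at ±d/2 is again a Gaussian of SD B centred at x = 0 (with reduced amplitude), so the fused edge is positionally averaged without any added blur, whereas the arithmetic mean has variance B² + (d/2)².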

Meeting abstract presented at VSS 2012
