Alexander G. Anderson, Kavitha Ratnam, Austin Roorda, Bruno A. Olshausen; High-acuity vision from retinal image motion. Journal of Vision 2020;20(7):34. doi: https://doi.org/10.1167/jov.20.7.34.
A mathematical model and a possible neural mechanism are proposed to account for how fixational drift motion in the retina confers a benefit for the discrimination of high-acuity targets. We show that by simultaneously estimating object shape and eye motion, neurons in visual cortex can compute a higher quality representation of an object by averaging out non-uniformities in the retinal sampling lattice. The model proposes that this is accomplished by two separate populations of cortical neurons — one providing a representation of object shape and another representing eye position or motion — which are coupled through specific multiplicative connections. Combined with recent experimental findings, our model suggests that the visual system may utilize principles not unlike those used in computational imaging for achieving “super-resolution” via camera motion.
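To make the camera-motion analogy concrete, here is a minimal, hypothetical sketch (not the paper's cortical model) of multi-frame "super-resolution" by shift-and-average: a coarse sensor samples a fine signal at many known sub-sample shifts (standing in for fixational drift with known eye position), and averaging the registered frames recovers structure that any single coarse frame misses. All names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-resolution 1D "object": a broad bar plus a thin bar.
hi = np.zeros(400)
hi[150:250] = 1.0
hi[300:310] = 0.5

factor = 8        # coarse sensor keeps every 8th fine-grid point
n_frames = 64     # number of jittered "drift" frames

frames, shifts = [], []
for _ in range(n_frames):
    s = int(rng.integers(0, factor))       # known sub-sample shift (eye position)
    coarse = hi[s::factor][: len(hi) // factor]
    # Each frame is a shifted, noisy, coarse sampling of the object.
    frames.append(coarse + rng.normal(0, 0.05, coarse.shape))
    shifts.append(s)

# Reconstruction: place each coarse sample at its shifted fine-grid
# location and average all contributions landing on the same position.
acc = np.zeros(len(hi))
cnt = np.zeros(len(hi))
for coarse, s in zip(frames, shifts):
    idx = s + factor * np.arange(len(coarse))
    acc[idx] += coarse
    cnt[idx] += 1
recon = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

# Compare against naively upsampling a single coarse frame.
single = np.repeat(frames[0], factor)[: len(hi)]
err_single = np.mean((single - hi) ** 2)
err_recon = np.mean((recon[cnt > 0] - hi[cnt > 0]) ** 2)
print(err_recon < err_single)
```

Because the shifts are known exactly, each fine-grid position accumulates unbiased noisy samples of the true signal, so averaging suppresses noise and removes the blockiness of any single coarse frame. This mirrors the abstract's point that jointly knowing eye motion lets downstream neurons average out sampling non-uniformities; the paper's contribution is estimating shape and motion simultaneously, which this sketch sidesteps by assuming the shifts are given.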