Abstract
Humans are able to navigate through cluttered environments while avoiding the obstacles in their way. How this is accomplished remains unknown despite many years of research. It is well established that the visual image motion at the back of our eyes (the vector flow field) can be used to extract information about our trajectory (e.g., heading) as well as the relative depth of points in the world, but that rotation of the eye, head, or body confounds this extraction process. We have previously shown how local efference signals regarding eye or head movements can be used to compensate for the perturbations in image motion caused by the rotation (Perrone & Krauzlis, JOV, 2008). Movement of the body along curved paths also introduces visual rotation, yet the mechanisms for detecting and compensating for it remain a mystery. The curvilinear signals from the primate vestibular system that have been measured so far lack the precision needed for a direct compensatory role. A curvilinear path generates a flow vector (T+R, θ) made up of a translation component and a rotation component. We need to find (R, ϕ), which gives the body's curvilinear rotation rate and direction. I have discovered a trigonometric relationship linking the flow vector to the curvilinear rate: R = (T+R)sin(α - θ)/sin(α - ϕ), where α, the direction of the translational flow component, is a function of the heading. The body's curvilinear rotation can therefore be found by sampling many vectors and testing a sparse array of heading directions (α). However, this purely visual solution occasionally produces errors in the (R, ϕ) estimates. Model simulations show that broadly tuned vestibular signals for the values of α, R, and ϕ are sufficient to eliminate these errors. Model tests against existing human psychophysical data revealed comparable precision in curvilinear rotation estimation. Combined visual-vestibular signals produce greater accuracy than either signal on its own.
Meeting abstract presented at VSS 2017
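To make the geometry concrete, here is a minimal numerical sketch (Python/NumPy; this is not the neural model described in the abstract, and all function and variable names are hypothetical). It assumes a small field of view so that the rotational flow component can be treated as a single uniform image vector (R cos ϕ, R sin ϕ); substituting u = R cos ϕ and v = R sin ϕ turns the relationship R = (T+R)sin(α - θ)/sin(α - ϕ) into one linear equation per flow vector, which is solved in least squares for each candidate heading, keeping the candidate with the smallest residual. The optional `vestibular` argument illustrates the abstract's second point in toy form: a broadly tuned Gaussian penalty on (R, ϕ) added to each candidate's score.

```python
import numpy as np

def estimate_rotation(pts, flow, heading_candidates, vestibular=None):
    """Estimate curvilinear rotation (R, phi) from a sampled flow field.

    For candidate heading h, the translational flow at image point p points
    along alpha = atan2(p - h). Writing the rotational component as the
    uniform vector (u, v) = (R cos phi, R sin phi) (small-field assumption),
    R = (T+R) sin(alpha - theta) / sin(alpha - phi) becomes, per vector:

        sin(alpha) * u - cos(alpha) * v = F * sin(alpha - theta)

    where (F, theta) is the measured flow vector's magnitude and direction.
    The candidate heading with the smallest least-squares residual wins.
    vestibular = (R0, phi0, sigma) adds a broadly tuned prior to the score.
    """
    F = np.hypot(flow[:, 0], flow[:, 1])        # flow magnitude |T+R|
    theta = np.arctan2(flow[:, 1], flow[:, 0])  # flow direction
    best = None
    for h in heading_candidates:
        alpha = np.arctan2(pts[:, 1] - h[1], pts[:, 0] - h[0])
        A = np.column_stack([np.sin(alpha), -np.cos(alpha)])
        b = F * np.sin(alpha - theta)
        uv, *_ = np.linalg.lstsq(A, b, rcond=None)
        R, phi = np.hypot(*uv), np.arctan2(uv[1], uv[0])
        score = np.sum((A @ uv - b) ** 2)       # visual residual
        if vestibular is not None:              # broad prior rejects gross errors
            R0, phi0, sigma = vestibular
            dphi = np.angle(np.exp(1j * (phi - phi0)))  # wrapped angle difference
            score += ((R - R0) ** 2 + dphi ** 2) / sigma ** 2
        if best is None or score < best[0]:
            best = (score, R, phi, h)
    return best[1:]                             # (R, phi, heading)

# Demo on synthetic flow: radial translational flow scaled by random inverse
# depth, plus the uniform rotational vector for a known (R, phi).
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(60, 2))
h_true = np.array([0.2, -0.1])
R_true, phi_true = 0.3, np.deg2rad(160)
alpha_true = np.arctan2(pts[:, 1] - h_true[1], pts[:, 0] - h_true[0])
inv_depth = rng.uniform(0.5, 2.0, size=len(pts))
flow = inv_depth[:, None] * np.column_stack([np.cos(alpha_true), np.sin(alpha_true)])
flow += R_true * np.array([np.cos(phi_true), np.sin(phi_true)])

# Sparse array of candidate headings, as in the abstract.
grid = [(x, y) for x in np.linspace(-0.5, 0.5, 11) for y in np.linspace(-0.5, 0.5, 11)]
R_est, phi_est, h_est = estimate_rotation(pts, flow, grid)
print(R_est, np.rad2deg(phi_est), h_est)        # ~0.3, ~160 deg, (0.2, -0.1)
```

Adding noise to `flow`, or leaving the true heading off the candidate grid, is a quick way to explore the occasional (R, ϕ) errors of the purely visual solution in this toy setting and to see how the broad vestibular penalty can suppress them.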