Abstract
The time-to-contact (τ, TTC) between an observer and the environment can be derived from visual information and is highly advantageous for obstacle avoidance. In general, we should turn away more quickly from objects that we are likely to hit sooner. If the expansion rate (E) is defined as the relative growth rate of the stimulus in the image plane, then TTC = 1/E. The ViSTARS model (Browning et al., 2009, Neural Networks) demonstrated how motion information may be processed in the brain to detect and avoid obstacles while navigating towards a goal. The steering behavior of ViSTARS is similar to that of humans in simulated obstacle avoidance tasks, albeit in limited environments. However, ViSTARS did not include TTC estimates, so its turn rate was proportional to object size rather than to TTC. I present an updated ViSTARS model that accurately estimates the expansion rate of an object in model MSTd through the same mechanisms used to estimate heading. Model MSTd performs a template match between an aperture-resolved motion estimate in model MT+ and template cells sensitive to global motion in a particular direction. A recurrent competitive field configured as a winner-take-all network then accurately detects the current heading. Including inverse-distance weighting in the templates allows the neural circuit to explain human heading-estimation bias in the presence of independently moving objects (Layton et al., 2011, VSS). The present analysis demonstrates that, if properly configured, this inverse-distance weighting also allows model MSTd to provide accurate expansion, and by extension TTC, estimates when the stimulus fills the cell's RF. Further updates to the template match, compensating for regions containing no motion, provide size-independent expansion responses. When the updated ViSTARS model is presented with frontal-plane approach trajectories, MSTd produces accurate expansion estimates irrespective of RF or stimulus size and provides accurate TTC information to the steering module.
Meeting abstract presented at VSS 2012
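The relation TTC = 1/E stated in the abstract can be sketched numerically. The snippet below is an illustrative sketch only, not part of the ViSTARS model: it assumes an object of fixed physical width approaching at constant speed, so its angular size θ grows over time, and estimates E as the relative growth rate (dθ/dt)/θ from successive image-plane sizes. The function names are hypothetical.

```python
import numpy as np

def expansion_rate(sizes, dt):
    # Relative expansion rate E = (dθ/dt) / θ, estimated with a
    # forward difference over successive angular sizes.
    dtheta = np.diff(sizes) / dt
    return dtheta / sizes[:-1]

def time_to_contact(sizes, dt):
    # TTC = 1/E: smaller values signal more imminent contact,
    # independent of the object's physical size.
    return 1.0 / expansion_rate(sizes, dt)

# Hypothetical approach geometry: width W at distance Z gives θ ≈ W/Z,
# so with Z(t) = Z0 - v*t the true TTC at t = 0 is Z0/v.
Z0, v, W, dt = 10.0, 1.0, 1.0, 0.01
t = np.arange(0.0, 0.05, dt)
sizes = W / (Z0 - v * t)
print(time_to_contact(sizes, dt)[0])  # close to Z0/v = 10 s
```

Note that E, and hence TTC, is the same for a small nearby object and a large distant one on the same collision course; this is the size independence the updated model's expansion responses aim for.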