Abstract
We investigate the perceptual principles responsible for the visual control of human locomotor behaviors by modeling the neural mechanisms of human vision. We have built an empirically grounded perception and action system by coupling a biologically plausible computational model of motion processing in the dorsal stream of the visual cortex (Jhuang et al., ICCV, 2007) to a cognitively valid locomotor control model (Warren & Fajen, In Fuchs & Jirsa, Coordination, 2008). Here we report initial results of this integration and show that the coupled system accounts for human data on steering to a goal while avoiding obstacles (Warren & Fajen, In Fuchs & Jirsa, Coordination, 2008). We further report a recent extension of the vision model to detect motion boundaries, binocular disparity, and shape and color information, in addition to motion energy. We simulate an agent moving through complex virtual environments using a perception-action loop that comprises the vision model, the locomotor control model, and the environment. Visual cues are successfully used as input to the locomotion model to modulate the activation of control laws, leading the agent to avoid obstacles while steering to its goal. As the agent travels through its environment, the virtual optic flow from the agent's perspective changes, which in turn modifies the detected visual cues. The agent is thus coupled to its environment through information and action, and locomotor trajectories are emergent (Warren, Psychological Review, 2006). We use virtual scenes of varying realism and complexity, from a few to many objects and from simple to complex textures, to demonstrate the validity of the approach.
Meeting abstract presented at VSS 2013
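The abstract itself gives no implementation details, but the perception-action loop it describes can be illustrated with a minimal sketch of the goal-attraction/obstacle-repulsion steering dynamics of the kind described in the Warren & Fajen chapter cited above: heading behaves as a damped angular spring attracted to the goal direction and repelled by obstacle directions, with distance-dependent gains. This is a simplified illustration under stated assumptions, not the authors' system: the parameter values are placeholders rather than fitted values, the function names (angular_accel, simulate) are hypothetical, and the "vision" stage is idealized to direct readouts of goal and obstacle angles and distances instead of the neural motion-processing model.

```python
import math

# Placeholder parameters (illustrative, not fitted values)
B, K_G, C1, C2 = 3.25, 7.5, 0.4, 0.4    # damping; goal attraction
K_O, C3, C4 = 198.0, 6.5, 0.8           # obstacle repulsion

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def angular_accel(phi, dphi, goal, obstacles, x, y):
    """Steering law: the goal attracts the heading phi, obstacles
    repel it; both influences decay with distance."""
    gx, gy = goal
    psi_g = math.atan2(gy - y, gx - x)        # goal direction
    d_g = math.hypot(gx - x, gy - y)          # goal distance
    acc = -B * dphi - K_G * wrap(phi - psi_g) * (math.exp(-C1 * d_g) + C2)
    for ox, oy in obstacles:
        psi_o = math.atan2(oy - y, ox - x)    # obstacle direction
        d_o = math.hypot(ox - x, oy - y)      # obstacle distance
        err = wrap(phi - psi_o)
        acc += K_O * err * math.exp(-C3 * abs(err)) * math.exp(-C4 * d_o)
    return acc

def simulate(goal, obstacles, speed=1.0, dt=0.05, steps=400):
    """Perception-action loop: each step, an idealized 'vision model'
    reports goal/obstacle directions and distances from the agent's
    current viewpoint; the control law updates heading; the agent
    moves; the trajectory emerges from this coupling."""
    x, y, phi, dphi = 0.0, 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        ddphi = angular_accel(phi, dphi, goal, obstacles, x, y)
        dphi += ddphi * dt                    # Euler integration
        phi += dphi * dt
        x += speed * math.cos(phi) * dt       # constant travel speed
        y += speed * math.sin(phi) * dt
        path.append((x, y))
        if math.hypot(goal[0] - x, goal[1] - y) < 0.2:
            break                             # goal reached
    return path

if __name__ == "__main__":
    route = simulate(goal=(10.0, 0.0), obstacles=[(5.0, 0.3)])
    print(f"{len(route)} steps, final position {route[-1]}")
```

In the full system described in the abstract, the idealized readouts above would be replaced by cues extracted from the optic flow by the biologically plausible vision model, closing the loop between information and action.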