Abstract
How does parallel processing in the brain lead to visual percepts and object recognition? How do perceptual mechanisms differ from those of movement control? Multiple modeling studies predict how the neocortex is organized into parallel processing streams such that pairs of streams obey complementary computational rules (much as two puzzle pieces fit together) and interactively overcome their complementary weaknesses. For example, visual boundaries and surfaces obey complementary rules in the Interblob and Blob streams from area V1 to V4. These streams interact to generate visible representations of 3D surfaces in which partially occluded objects are mutually separated and completed. Visual boundaries and motion obey complementary computational rules in the Interblob and Magnocellular cortical processing streams through cortical areas V2 and MT. They interactively form representations of object motion in depth. Predictive target tracking (e.g., targets moving relative to a stationary observer) and optic flow navigation (e.g., an observer moving relative to its world) obey complementary computational rules in ventral and dorsal MST. These regions interact to track moving targets and to determine an observer's heading and time-to-contact. Spatially-invariant object recognition and spatially-variant attention obey complementary computational rules in Inferotemporal and Parietal cortex, where they, respectively, learn to stably recognize objects in a changing world and rapidly relearn spatial and action parameters when motor parameters change. Their interaction enables spatially-invariant recognition categories of valued objects (in the What stream) to direct spatial attention and actions (in the Where stream) toward those objects in space. These models quantitatively simulate many experimental data and make surprising predictions.
Supported in part by DARPA, NSF, and ONR.