Abstract
In real life, the human visual system continuously faces the difficulty of obtaining information about objects of interest from the visual scene. Such information is often incomplete, or spatially and/or temporally fragmented. It has been shown that the visual system is highly efficient at contour completion. Such filling-in of missing contours occurs in both static and dynamic situations, but there are cases in which motion becomes critical for the task. A class of phenomena of particular importance in this context is the perception of contours arising from the accretion and deletion of texture caused by the relative motion between two surfaces (spatiotemporal boundary formation; SBF). Here, we present a neuro-computational model that extracts the contours of a moving figure from the accretion and deletion of its texture elements. The model consists of three modules: an early stage that mimics the circular on-off receptive fields of LGN cells and the oriented receptive fields of primary visual cortex cells; a second stage that implements lateral connections among cells, providing spatial facilitation and inhibition constrained by the rules of the association field; and a third stage that implements temporal facilitation among cells, which act as spatiotemporal correlators. These facilitations are modulated by motion signals such that they are maximal in the direction of motion. We performed model simulations using sequences of artificial images. Model performance was estimated by calculating the error in the contour estimate with respect to the ground truth given by the stimuli. Simulations were performed by varying speed, dot density, and shape. Results show that the model can account for most results on SBF found in the literature. We also tested anorthoscopic stimuli in addition to SBF stimuli, and found that the model reconstructs the object shape acceptably well in this condition.
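The three-stage pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a Difference-of-Gaussians kernel for the on-off LGN front end and a leaky accumulator shifted along the motion direction for the spatiotemporal correlator stage; the association-field stage is omitted for brevity, and all function names and parameter values are illustrative.

```python
import numpy as np


def dog_filter(size=9, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel approximating a circular
    on-off LGN receptive field (excitatory center, inhibitory surround).
    Sizes and sigmas are illustrative, not taken from the model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)


def convolve2d(img, kernel):
    """'Same'-size 2-D convolution with zero padding (no SciPy needed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * flipped)
    return out


def temporal_facilitation(responses, motion_dir, gain=0.5):
    """Leaky accumulation of past responses shifted along the motion
    direction: cells behave as spatiotemporal correlators whose
    facilitation is maximal in the direction of motion."""
    dy, dx = motion_dir
    acc = np.zeros_like(responses[0])
    for r in responses:
        # Shift the accumulated trace to the predicted next position,
        # then add the current frame's filter response.
        acc = np.roll(np.roll(acc, dy, axis=0), dx, axis=1)
        acc = r + gain * acc
    return acc
```

As a usage example, a texture dot stepping rightward across frames produces an accumulated response whose peak exceeds any single-frame response, because the shifted traces align with the dot's current position along the motion direction.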
Meeting abstract presented at VSS 2013