Abstract
Transparency presents a difficult problem for motion segmentation because multiple velocities must be represented at each spatial location. We analyze a feedforward/feedback neural model of areas V1 and MT that is capable of solving the motion aperture problem (Bayerl & Neumann, Neural Computation, 16(10), 2004) and propose an extension of it that explains how transparent motion is integrated and segregated in early parts of the dorsal pathway. First, we incorporate an unspecific attentional signal from higher areas that influences the disambiguation process realized by early feedback between model V1 and MT. We demonstrate how such an attentional signal (e.g., priming any rightward motion) achieves the selection of detected motion patterns in the presence of transparency. Second, we employ a pair of figure/ground layers of motion-sensitive cells to represent transparent motion. We show that such a neural network architecture is capable of successfully detecting and representing transparent motion. Without explicit transparency detection, figure motion is segregated from ground motion in the presence of transparency, whereas no such separation occurs for opaque motion. Model experiments with random dot stimuli consisting of horizontal stripes alternately showing leftward and rightward motion yield results consistent with psychophysical findings: for thin stripes, motion cannot be separated spatially and transparent motion estimates are generated, whereas broad stripes are cleanly segregated in space (van Doorn & Koenderink, Exp. Brain Res., 45, 1982). In conclusion, the presented contributions show how simple local mechanisms in conjunction with feedback processing can generate complex behavior in a neural network, consistent with experimental observations. Importantly, the model consists of model areas described by similar mechanisms and thus presents a step towards a modular model of cortical motion processing in the dorsal stream.
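The attentional selection mechanism summarized above can be illustrated with a minimal toy sketch. This is not the published model: the function name, gain parameter, and update rule are illustrative assumptions. It only shows the qualitative idea that a weak, unspecific prime for one direction, applied as a modulatory feedback gain followed by normalization, gradually selects that direction from a transparent (two-velocity) input.

```python
# Toy sketch: attentional biasing of transparent motion responses.
# Illustrative only -- gain value, iteration count, and update rule are
# assumptions, not taken from the Bayerl & Neumann model.

def feedback_iteration(resp, attention, gain=2.0):
    """One feedback pass: modulatory enhancement (1 + gain * attention),
    then divisive normalization across direction channels."""
    enhanced = {d: r * (1.0 + gain * attention.get(d, 0.0))
                for d, r in resp.items()}
    total = sum(enhanced.values())
    return {d: r / total for d, r in enhanced.items()}

# Transparent stimulus: equal evidence for leftward and rightward motion.
resp = {"left": 0.5, "right": 0.5}
# Weak, unspecific attentional prime for any rightward motion.
attention = {"right": 0.2}

for _ in range(5):
    resp = feedback_iteration(resp, attention)

print(resp["right"] > resp["left"])  # prints: True (rightward is selected)
```

The same multiplicative-feedback-plus-normalization motif is what allows the prime to remain weak: it never injects activity on its own, it only amplifies stimulus-driven responses that match the primed direction.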