Abstract
Although much is known about how we perceive a moving object, such as a single leaf, a great deal of ongoing research explores how local motion is combined to perceive global motion, such as the leaves on a swaying tree. Much of this research uses random-dot kinematograms, in which many moving dots are shown: some dots move in one coherent direction (signal) while the others move in random directions (noise), allowing the proportion of signal to noise (coherence) to be manipulated. Most behavioral and computational modeling research using such stimuli is rooted in a framework whereby the brain first determines the motion direction of each dot and then combines these estimates (e.g., by averaging) to compute the global motion direction. Here, however, we test a novel prediction of this framework. Participants identified global motion directions, and a mixture-model analysis was used to measure (1) how often the correct direction was identified and (2) the precision of responses when they were correct. Under the standard framework, the precision of responses should decrease with decreasing coherence. Instead, decreasing coherence lowered the probability that the correct global motion direction was identified, but the precision of correct responses was nearly invariant. Likewise, increasing the stimulus duration changed the probability of identification but had very little effect on precision. These results are not in line with the standard framework, and instead suggest a two-stage model in which a subset of dots is first selected, and the direction of those dots is then precisely determined. This new model requires rethinking previous behavioral results and revising models rooted in the standard framework.
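Mixture-model analyses of this kind are often implemented as a mixture of a von Mises distribution (responses centered on the true direction, whose concentration parameter indexes precision) and a uniform distribution over the circle (responses unrelated to the true direction). The sketch below fits such a mixture by maximum likelihood; the function names, starting values, and bounds are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    # params: p     = probability the correct direction was identified
    #         kappa = von Mises concentration (precision) of correct responses
    p, kappa = params
    # Mixture: von Mises centered on the true direction (error = 0)
    # plus a uniform distribution over the circle (random responses).
    vm = vonmises.pdf(errors, kappa)
    uniform = 1.0 / (2.0 * np.pi)
    likelihood = p * vm + (1.0 - p) * uniform
    return -np.sum(np.log(likelihood))

def fit_mixture(errors):
    # errors: response direction minus true direction, in radians,
    # wrapped to [-pi, pi]. Returns estimated (p, kappa).
    result = minimize(neg_log_likelihood, x0=[0.8, 5.0], args=(errors,),
                      bounds=[(1e-3, 1.0), (1e-3, 100.0)])
    return result.x
```

On this view, the paper's key finding corresponds to the fitted p falling as coherence or duration decreases while the fitted kappa stays roughly constant.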