Abstract
The inverse optics problem presents a significant challenge for any explanation of visual perception. Using the same statistical approach that has successfully rationalized percepts of brightness, color, and form (Purves & Lotto, 2003; Long & Purves, 2003; Howe & Purves, 2005), we test here the hypothesis that the human visual system generates all of its motion percepts probabilistically, on the basis of what a stimulus sequence has most often signified in the accumulated past experience of observers. To this end, we constructed a computer model that uses the principles of projective geometry to generate stimuli on an image plane from simulated objects moving in a virtual environment. From the resulting database of image/source relationships, we repeatedly sampled the image plane with templates configured according to the spatiotemporal parameters of the sequential stimuli traditionally used to investigate motion perception. The cumulative probability distributions derived from these samples allowed us to make quantitative predictions about the motion phenomenology human subjects should perceive when presented with sequential displays of the same configuration as the templates. By comparing these predictions with psychophysical data acquired both with standard CRT displays and in a back-projected three-dimensional virtual reality environment, we were able to account for human motion perception of sequential stimuli. These results indicate that human motion perception is likely generated entirely on an empirical basis.
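The approach summarized above can be illustrated with a minimal sketch: project simulated 3-D sources onto an image plane, then ask which source speeds most often produced a given image-plane speed. All parameters, ranges, and the sampling band below are illustrative assumptions for exposition, not those of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(points, focal=1.0):
    """Pinhole projection of 3-D points (x, y, z) onto the z = focal image plane."""
    return focal * points[:, :2] / points[:, 2:3]

# Simulate sources at random depths moving with random 3-D velocities
# (illustrative parameters, not those of the actual motion database).
n = 100_000
positions = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 10.0], size=(n, 3))
speeds = rng.uniform(0.0, 5.0, size=n)          # 3-D source speeds
directions = rng.normal(size=(n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
dt = 0.01                                       # inter-frame interval

# Image-plane displacement produced by each moving source over one frame.
p0 = project(positions)
p1 = project(positions + speeds[:, None] * directions * dt)
image_speed = np.linalg.norm(p1 - p0, axis=1) / dt

# "Template sampling": collect the source speeds of all simulated points whose
# projected image speed falls in a narrow band, then form the empirical
# distribution of source speeds that could have generated that image sequence.
band = (image_speed > 0.95) & (image_speed < 1.05)
matched_source_speeds = speeds[band]
hist, edges = np.histogram(matched_source_speeds, bins=50, density=True)
predicted_speed = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"most probable source speed for image speed ~1.0: {predicted_speed:.2f}")
```

The mode (or any chosen statistic) of this empirical distribution plays the role of the quantitative prediction compared against psychophysical reports in the abstract.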