Abstract
Accurate perception of motion depends critically on accurate estimation of retinal image motion. Recently, Burge & Geisler (2015) developed an ideal observer for retinal speed estimation based on the statistics of natural image movies and constraints imposed by the visual system's front end. Psychophysical experiments with natural and artificial stimuli showed that this ideal observer closely predicted the detailed shapes of a large set of psychometric functions. However, that work left unresolved whether the close match between human and ideal performance was driven by the properties of the natural stimuli or was instead a remarkable coincidence. Here, we build on the traditional double-pass experimental methodology to estimate the relative influence of internal noise and natural movie structure on human response variability. Five human observers viewed randomly selected, fixed-contrast natural image movies (1 deg, 250 ms) in a two-interval forced-choice (2IFC) paradigm. The task was to select the interval containing the movie with the faster speed. In each pass of the experiment, psychometric functions were measured using the method of constant stimuli across a range of speeds (5 standard speeds x 7 comparison levels per standard x 100 trials per level). Movies were never repeated within a pass, but each pass was identical; eight passes were collected (28,000 trials). Human response agreement was computed for each comparison speed and compared to the response agreement of an ideal observer degraded by different amounts of internal noise. The pattern of human agreements is diagnostic of the importance of natural image variability; for example, responses would agree perfectly across repeats if image variability swamped internal noise. Our analysis shows that external factors have approximately as much impact as internal factors, and that the degraded ideal observer closely predicts the pattern of response agreement.
Future work will examine the particular image movie properties that predict trial-by-trial performance.
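The logic of the double-pass agreement analysis can be sketched in simulation. The snippet below is a minimal illustration, not the study's actual analysis: each trial's decision variable has a stimulus-driven ("external") component that is identical across the two passes, because the same movies are shown, plus fresh internal noise on each pass. Agreement is the fraction of trials on which the two passes yield the same response; it is perfect when internal noise is absent and falls toward chance as internal noise grows. The function name and parameter values are illustrative assumptions.

```python
import random

def double_pass_agreement(n_trials=5000, ext_sd=1.0, int_sd=1.0, seed=0):
    """Simulate a two-pass 2IFC experiment for a noisy observer.

    ext_sd: SD of the stimulus-driven (external) decision-variable
            component, fixed per trial and shared by both passes.
    int_sd: SD of the internal noise, drawn fresh on every pass.
    Returns the proportion of trials with the same response on both passes.
    (Illustrative sketch; parameters are not taken from the study.)
    """
    rng = random.Random(seed)
    # External component: one draw per trial, reused in both passes.
    ext = [rng.gauss(0.0, ext_sd) for _ in range(n_trials)]
    agree = 0
    for e in ext:
        r1 = (e + rng.gauss(0.0, int_sd)) > 0.0  # response, pass 1
        r2 = (e + rng.gauss(0.0, int_sd)) > 0.0  # response, pass 2
        agree += (r1 == r2)
    return agree / n_trials
```

With `int_sd=0` the two passes always agree; when internal and external variance are equal, the standard bivariate-normal result (agreement = 1/2 + arcsin(rho)/pi with rho = 0.5) gives roughly two-thirds agreement, which a large-sample simulation reproduces.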
Meeting abstract presented at VSS 2016