Abstract
An important component of perceptual learning in complex visual environments is the dynamic optimization of eye movements to maximize the acquisition of visual information. Here, we investigate how the temporal evolution of eye movements and perceptual decision weights leads to performance improvements. We used a new classification-image-based method to estimate and visualize how observers dynamically vary the parts of the search stimulus used for perceptual decisions on each trial, and to compare these decision weights with eye movements across trials. Further, a Bayesian model observer that predicts the next saccade location from a history of previous stimuli and responses was used to analyze constraints on (1) memory and (2) spatial sampling. The stimuli were 16 Gabor patches arranged on a virtual ring (radius 5.8 degrees). The contrast of each patch was varied randomly on each trial. The observers’ task was to detect a contrast increment (present on 50% of trials) in one patch. Stimulus duration was 300 ms, allowing for one or two saccades to the target. Observers knew that the target patch remained at the same, randomly chosen, location throughout a learning block of 300 trials. The landing point of the first saccade on every trial was extracted. Decision weights for each location were estimated by a maximum likelihood method applied to the history of stimulus values and responses. In the initial trials of a learning block, observers used exploratory search patterns. Then, typically after 50–100 trials, observers repeatedly fixated a single location that often, but not always, contained the target. The results show a close correspondence between estimated decision weights and saccade locations. A model observer with a rather long trial memory and foveated vision best predicted subsequent saccade locations. Together, these results suggest dynamic and common representations mediating eye-movement and perceptual-decision learning.
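As a concrete illustration of the stimulus design described above, the sketch below simulates one trial of the search display: 16 patch contrasts drawn at random on a ring of radius 5.8 degrees, with a contrast increment added at a fixed target location on 50% of trials. Only the patch count, ring radius, and 50% target-present rate come from the abstract; the contrast range, increment size, and function names are illustrative placeholders.

```python
import numpy as np

N_LOC = 16
RING_RADIUS_DEG = 5.8                                  # ring radius from the abstract
ANGLES = 2 * np.pi * np.arange(N_LOC) / N_LOC          # angular patch positions
PATCH_XY = RING_RADIUS_DEG * np.column_stack([np.cos(ANGLES), np.sin(ANGLES)])

rng = np.random.default_rng(0)

def make_trial(target_loc, base_range=(0.1, 0.3), increment=0.1):
    """One trial of the search display: random per-patch contrasts, plus a
    contrast increment at target_loc on 50% of trials.

    base_range and increment are illustrative values, not taken from the study.
    Returns the 16 patch contrasts and whether the target was present.
    """
    contrasts = rng.uniform(*base_range, size=N_LOC)
    target_present = rng.random() < 0.5
    if target_present:
        contrasts[target_loc] += increment
    return contrasts, target_present
```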
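The abstract specifies only that decision weights were estimated by maximum likelihood from the history of stimulus values and responses; one common way to realize such a classification-image-style fit is logistic regression of the yes/no responses on the 16 per-patch contrasts, sketched below under that assumption. The function name, learning rate, and iteration count are illustrative, not the authors' settings.

```python
import numpy as np

def fit_decision_weights(contrasts, responses, lr=0.05, n_iter=2000):
    """Maximum-likelihood estimate of per-location decision weights under an
    assumed logistic decision model.

    contrasts : (n_trials, 16) array of per-patch contrast values
    responses : (n_trials,) array of 0/1 "absent"/"present" responses

    Fits P("present" | stimulus) = sigmoid(bias + weights . contrasts)
    by gradient ascent on the logistic log-likelihood.
    """
    contrasts = np.asarray(contrasts, dtype=float)
    responses = np.asarray(responses, dtype=float)
    X = np.column_stack([np.ones(len(contrasts)), contrasts])  # prepend bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted P("present")
        w += lr * X.T @ (responses - p) / len(X)   # log-likelihood gradient step
    return w[0], w[1:]                             # bias, 16 decision weights
```

Refitting over a sliding window of trials would yield the trial-by-trial evolution of the weights, which can then be compared with the first-saccade landing points.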
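The Bayesian model observer is likewise only outlined in the abstract. The sketch below assumes one plausible form: a posterior over the 16 possible target locations, updated each trial from contrast evidence whose noise grows with distance from fixation (the spatial-sampling constraint) and subject to exponential forgetting toward a uniform prior (the memory constraint), with the next saccade directed to the maximum-a-posteriori location. All parameter values and the evidence simplification (pedestal-subtracted contrasts) are assumptions for illustration.

```python
import numpy as np

N_LOC = 16
ANGLES = 2 * np.pi * np.arange(N_LOC) / N_LOC   # patch positions on the ring

def noise_sd(fix, sigma0=0.05, k=2.0):
    """Per-location contrast-noise SD, growing with angular distance from the
    fixated patch (foveated vision). sigma0 and k are placeholder values."""
    d = np.abs(np.angle(np.exp(1j * (ANGLES - ANGLES[fix]))))  # wrapped ring distance
    return sigma0 * (1.0 + k * d)

def update_log_posterior(logp, evidence, fix, delta=0.1, decay=0.98):
    """One-trial Bayesian update of the belief over target locations.

    evidence : (16,) pedestal-subtracted contrast observations (a simplification)
    decay    : exponential forgetting toward a uniform prior; decay near 1
               gives a long trial memory, decay near 0 gives almost none.
    """
    sd = noise_sd(fix)
    # Log-likelihood of "target at j", up to a constant shared by all j:
    # evidence[j] ~ N(delta, sd[j]^2) under hypothesis j, N(0, sd[j]^2) otherwise.
    loglik = (delta * evidence - 0.5 * delta**2) / sd**2
    logp = decay * logp + loglik
    return logp - logp.max()                     # renormalize for numerical stability

def next_saccade(logp):
    """Direct the next saccade to the maximum-a-posteriori target location."""
    return int(np.argmax(logp))
```

Varying `decay` and `k` in such a model separates the two constraints named in the abstract: a long-memory, strongly foveated variant is the one reported to best predict subsequent saccade locations.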