Emre Akbas, Kathryn Koehler, Miguel P Eckstein; Learning of eye movements for human and optimal models during search in complex statistical environments. Journal of Vision 2013;13(9):242. doi: https://doi.org/10.1167/13.9.242.
Little is known about how organisms modify their eye movements to optimize perceptual performance. Here, we investigate changes in human eye movements with practice at a visual search task with a complex statistical structure and compare these to a foveated Bayesian ideal learner (FBIL) that uses posterior probabilities from previous trials as priors in subsequent trials to plan saccades.

Methods: Seven participants searched for a vertically aligned Gabor signal (8 cycles/deg; yes/no task with 50% probability of target presence) embedded in spatiotemporal white noise. The image was briefly presented (500 ms) and subtended 22.2 × 29.6° of visual angle. When present, the signal always appeared in one of six locations (with equal probability per location) arranged around a circle with a radius of 4.4° whose center was located 16.6° from initial fixation. Participants were informed that there were six equiprobable target locations, but were given no information about their spatial configuration. A separate study measured each observer's detectability of the target as a function of eccentricity (visibility map).

Results: All but one participant's perceptual performance improved across the 3600 trials (mean Δ proportion correct between first and last 100 trials: 0.19 ± 0.03). For these six observers, the mean distance of their saccade endpoints to the nearest possible target locations diminished from 6.22 ± 0.63° in the first session to 1.68 ± 0.05° in the last session. Based on the human visibility maps, the FBIL predicted that human eye movements should converge to the center of the six possible locations. Instead, observers' learned eye movements converged close to one of the six possible locations, a result that was better predicted by a learning saccadic target model (maximum a posteriori probability, MAP).
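The contrast between the two model predictions can be sketched with a toy Bayesian update over the six candidate locations. This is purely illustrative and not the authors' FBIL implementation: the locations, likelihood values, and update rule below are simplified assumptions, but they show why a posterior-mean strategy aims near the center of the configuration while a MAP strategy aims at a single most probable location.

```python
import numpy as np

# Six hypothetical candidate locations on a circle of radius 4.4 deg,
# centered at the origin for simplicity (the abstract places the actual
# center 16.6 deg from initial fixation).
angles = np.deg2rad(np.arange(0, 360, 60))
locations = 4.4 * np.column_stack([np.cos(angles), np.sin(angles)])

# Uninformed prior over which location holds the target.
prior = np.full(6, 1 / 6)

def bayes_update(prior, likelihood):
    """One Bayesian update: posterior proportional to prior * likelihood."""
    post = prior * likelihood
    return post / post.sum()

# Fabricated per-location evidence favoring location 2 (illustrative only);
# in the task this would come from noisy peripheral detectability.
likelihood = np.full(6, 0.1)
likelihood[2] = 0.9
posterior = bayes_update(prior, likelihood)

# MAP saccade target: aim directly at the most probable location.
map_target = locations[np.argmax(posterior)]

# Posterior-mean target: a probability-weighted average of locations,
# which is pulled toward the center of the configuration.
mean_target = posterior @ locations
```

Under this toy update, `map_target` lands on one of the six locations (matching the observed human behavior), while `mean_target` lies between the locations, closer to the configuration's center (the FBIL-like prediction).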
Conclusion: Humans can learn to strategize eye movements to optimize perceptual performance, but for environments with complex statistical structure they fail to fully learn optimal gaze strategies.
Meeting abstract presented at VSS 2013