Thomas Tanner, Roland Fleming, Heinrich Bülthoff; Eye movements for active learning of objects. Journal of Vision 2007;7(9):22. https://doi.org/10.1167/7.9.22.
We investigated how humans use eye movements to direct their attention to informative features in a categorization task. More specifically, we tested the hypothesis that eye movements are influenced by prior knowledge about a task and by information gathered in previous fixations. Our novel stimuli, each belonging to one of two probabilistic classes, were large circular contours with several regular perturbations whose curvature was varied as a continuous feature dimension. Because of the spatial separation of the individual features, this design generally required several close fixations to support a confident decision about class membership.
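A minimal sketch of such a stimulus-generation scheme, assuming Gaussian class-conditional feature distributions (all parameter values here are illustrative, not those used in the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-conditional distributions: each of the n_features
# curvature perturbations is drawn independently from a Gaussian whose
# mean differs between the two categories (external noise). Features
# with equal means are non-diagnostic.
n_features = 6
mu = np.array([[0.2, 0.5, 0.8, 0.5, 0.3, 0.6],   # category A means
               [0.4, 0.5, 0.2, 0.7, 0.3, 0.6]])  # category B means
sigma_ext = 0.15                                  # external noise s.d.

def sample_stimulus(category):
    """Draw one stimulus: a vector of curvature values, one per perturbation."""
    return rng.normal(mu[category], sigma_ext)

stimulus = sample_stimulus(0)
```

Because the features are drawn independently, their diagnosticity is controlled simply by the separation of the class means relative to the external noise.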
Each feature value varied stochastically from trial to trial according to a characteristic distribution for each category (external noise). The features were independent and varied in diagnosticity. Subjects had to learn the categories from immediate feedback about the true category after each trial (4 subjects, 10 sessions of 250 trials each). We estimated the internal noise, which was much smaller than the external noise, from an independent experiment measuring curvature discrimination performance at different eccentricities (0–12°), which showed an approximately linear decrease in curvature sensitivity with increasing eccentricity.
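For independent, equal-variance Gaussian features, the ideal observer's decision reduces to a log-likelihood ratio that is linear in the observed feature values, with effective noise combining the external and internal components. A hedged sketch (class means and noise values are illustrative assumptions):

```python
import numpy as np

# Illustrative class means and noise levels; effective per-feature
# variance combines external (stimulus) and internal (sensory) noise.
mu = np.array([[0.2, 0.5, 0.8],
               [0.4, 0.5, 0.2]])
sigma_ext, sigma_int = 0.15, 0.05
sigma2 = sigma_ext**2 + sigma_int**2

def posterior_A(x, prior_A=0.5):
    """Ideal-observer posterior probability of category A given features x."""
    # For equal-variance Gaussians the log-likelihood ratio is linear in x;
    # non-diagnostic features (equal means) contribute nothing.
    llr = np.sum((mu[0] - mu[1]) * (x - (mu[0] + mu[1]) / 2)) / sigma2
    return 1.0 / (1.0 + (1 - prior_A) / prior_A * np.exp(-llr))
```

A stimulus at the category-A means yields a posterior above 0.5, one at the category-B means a posterior below 0.5, and one midway between them exactly 0.5.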
The subjects were able to learn to discriminate the categories (average performance: 0.82 for the ideal observer vs. 0.68 for the subjects). Trial-by-trial fluctuations in performance followed those of the ideal observer (MAE 0.32). With increasing expertise, reaction times became shorter and fixations became more focused, possibly reflecting the subjects' beliefs about which features are relevant. We compare the results with Bayesian learner models that take into account the peripheral fall-off in discriminability while directing their attention to the currently most informative features.
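One simple way such a model could select fixations, assuming a linear fall-off of discriminability with eccentricity, is to evaluate each candidate fixation point by the summed squared discriminability (d') it affords across features, a common proxy for expected information gain. All positions, mean differences, and noise constants below are hypothetical:

```python
import numpy as np

# Features placed regularly on a unit circle, mimicking the circular
# contour stimuli; delta_mu gives each feature's |mu_A - mu_B|
# (its diagnosticity), with feature 2 the most diagnostic.
feature_pos = np.array([[np.cos(a), np.sin(a)]
                        for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
delta_mu = np.array([0.2, 0.0, 0.6, 0.2, 0.0, 0.0])
sigma0, k = 0.05, 0.02   # foveal internal noise and fall-off slope (assumed)

def fixation_value(fix):
    """Summed squared discriminability afforded by fixating at point fix."""
    ecc = np.linalg.norm(feature_pos - fix, axis=1)
    sigma_int = sigma0 + k * ecc          # linear fall-off with eccentricity
    dprime = delta_mu / sigma_int
    return np.sum(dprime**2)

# Evaluate candidate fixations at each feature location.
best = max(range(len(feature_pos)), key=lambda i: fixation_value(feature_pos[i]))
```

Under these assumptions the best fixation lands on the most diagnostic feature, since discriminability gained there outweighs the loss at more peripheral features.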