Abstract
Yarbus and others have shown that viewers' eye movements with respect to a particular stimulus image differ according to the viewer's task. We revisited Yarbus' work, giving different subjects different tasks while they viewed the same stimulus images. Tasks ranged from ascertaining the weather, to free viewing, to inferring the thoughts of people depicted in a scene. Subjects controlled their own viewing time and usually viewed an image for just a few seconds. As expected, eye movements differed according to task. However, this divergence was observed within the first several fixations and did not require the 2–3 minutes of viewing time that Yarbus typically used. Eye movement data have usually been visualized by tracing eye movements or plotting fixations on the stimulus image; unfortunately, this obscures the most viewed parts of the image. To depict how visual cortex perceives an image, we developed a visualization in which the foveated parts of an image are rendered most clearly. In our representation, the clarity of each part of an image is determined by its cortical magnification factor in V1, given its eccentricity to the nearest fixation. For representative trials, see http://mplab.ucsd.edu/~jnelson/foveation.html.
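As an illustration only, the sketch below renders a foveated view of this kind under assumed parameters: it uses the standard Horton & Hoyt form of the V1 cortical magnification function, M(E) = M0 / (1 + E/E2) with E2 = 0.75 deg, and blends a small Gaussian blur pyramid so that regions with low relative magnification appear blurred. The function name `foveate`, the parameter values, and the pyramid blending are illustrative assumptions, not necessarily the implementation behind the linked trials.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixations, px_per_deg, E2=0.75, max_sigma=8.0, levels=8):
    """Blur each pixel of a grayscale image according to its eccentricity
    to the nearest fixation, using relative V1 cortical magnification
    M(E)/M0 = 1 / (1 + E/E2) as the clarity weight (M0 cancels out)."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Eccentricity (degrees of visual angle) of each pixel to its nearest fixation.
    ecc = np.full((h, w), np.inf)
    for r, c in fixations:
        ecc = np.minimum(ecc, np.hypot(rows - r, cols - c) / px_per_deg)

    # Relative magnification: 1 at the fovea, falling toward the periphery.
    m_rel = 1.0 / (1.0 + ecc / E2)

    # Precompute a small blur pyramid and blend between adjacent levels,
    # so low-magnification regions draw from the blurrier levels.
    sigmas = np.linspace(0.0, max_sigma, levels)
    pyramid = np.stack([image] + [gaussian_filter(image, s) for s in sigmas[1:]])
    level = (1.0 - m_rel) * (levels - 1)          # fractional pyramid index
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = level - lo
    return (1 - frac) * pyramid[lo, rows, cols] + frac * pyramid[hi, rows, cols]
```

For example, `foveate(img, [(240, 320)], px_per_deg=30.0)` would render a single central fixation on a 480 x 640 image; in practice one frame would be generated per recorded fixation (or fixation cluster) in a trial.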
J. Nelson was funded by an NIMH predoctoral fellowship to the Salk Institute for Neural Computation during this research.