Abstract
Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded during a real-world search task and compared with those made by healthy students and age-matched controls. The patient was unable to recognize 3D forms or line drawings, despite normal acuity and intact peripheral fields. We hypothesized that her deficit would result in less top-down guidance in this task than in normal controls. If visual saliency is computed earlier than, or independently of, object recognition, then saliency would be predicted to influence her eye movements. Furthermore, with reduced top-down biases, her eye movements might correspond more closely to a raw saliency map than those made by normal controls.
The patient's deficit in object recognition was reflected in poor search performance and inefficient scanning: she made longer fixations and smaller saccades than control participants. The low-level saliency of target objects had a greater effect in visual agnosia than in the control groups, and the most salient region in the scene was more likely to capture attention. Further analyses showed a stronger relationship between fixation patterns and saliency in the patient than in control subjects. These findings are discussed in relation to saliency-map models and the balance between high- and low-level factors in eye guidance.