Abstract
Understanding viewing patterns of images is important for inferring how bottom-up and top-down information drive vision. However, it is unknown to what extent viewing patterns depend on specific image features. Here, we examined subjects’ eye movements during encoding and recognition and asked whether they are feature, view, or task specific. During encoding, subjects viewed faces, cars, and corridors in random order; half of the stimuli were shown in the front view and the other half in the side view. After a distractor task, subjects performed a recognition memory task and reported whether each image showed an exemplar they had seen before, regardless of its view. Half of the images contained exemplars presented during encoding, and half were novel. Stimulus view was counterbalanced. In experiment 1 (19 subjects), stimulus views during encoding and recognition were identical. In experiment 2 (21 subjects), stimulus views during encoding and recognition differed. In both experiments, we found that subjects fixate on category-specific features (feature effect: Fs>174, Ps<10^-6, 3-way ANOVA with factors of feature, view, and task): subjects look significantly more at the eyes and nose of faces, the front hood of cars, and the end of corridors. Furthermore, there is a significant interaction between feature and view: for faces, subjects fixate more on the nose in the front view and more on the cheek in the side view (Fs>7, Ps<10^-6); for cars, subjects fixate more on the center of the hood in the front view and on the near front hood in the side view (Fs>315, Ps<10^-6); for corridors, subjects fixate more on the end of the corridor in the front view and on the walls in the side view (Fs>260, Ps<10^-6). These data demonstrate a regular viewing pattern driven by category-specific features that is modulated by the view of the stimulus.
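The 3-way ANOVA described above (within-subject factors of feature, view, and task) can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors’ analysis code; the dependent measure (`dwell`, a hypothetical fixation proportion) and the factor level names are assumptions for demonstration, and the example assumes the `statsmodels` AnovaRM repeated-measures implementation.

```python
# Illustrative sketch (synthetic data, NOT the study's actual analysis):
# a 3-way repeated-measures ANOVA with within-subject factors
# feature, view, and task, as described in the abstract.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(19):  # 19 subjects, as in Experiment 1
    for feat in ["eyes_nose", "hood", "corridor_end"]:   # hypothetical feature levels
        for view in ["front", "side"]:
            for task in ["encoding", "recognition"]:
                # hypothetical dependent measure: proportion of fixations on the feature,
                # with a built-in feature effect so the ANOVA has something to detect
                dwell = rng.normal(loc=0.5 + 0.1 * (feat == "eyes_nose"), scale=0.05)
                rows.append((subj, feat, view, task, dwell))

df = pd.DataFrame(rows, columns=["subject", "feature", "view", "task", "dwell"])

# One observation per subject per cell (balanced design), as AnovaRM requires
res = AnovaRM(df, depvar="dwell", subject="subject",
              within=["feature", "view", "task"]).fit()
print(res.anova_table)  # F, dfs, and p-values for main effects and interactions
```

The resulting table lists F values and p-values for each main effect (e.g. `feature`) and each interaction (e.g. `feature:view`), which is the form in which the feature effect and feature-by-view interaction statistics are reported in the abstract.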
Meeting abstract presented at VSS 2013