Abstract
In 3-D object recognition tasks, the patterns of eye movements during the learning phase differ depending on the task, suggesting that eye movement patterns may reflect encoding strategies tailored to the prespecified task. When the same object is presented repeatedly, the information required to encode it effectively should differ depending on what has already been stored. In this study, we measured eye movements during the learning of 3-D objects to reveal the effects of stored information on spatiotemporal eye movement patterns and to investigate 3-D object encoding processes. An unfamiliar 3-D object was presented for 10 seconds in the study phase, during which participants' eye movements were recorded. This was followed by a recognition test in which a test stimulus was presented for 500 milliseconds from either the same viewpoint as at learning (non-rotation condition) or a different viewpoint (rotation condition). The task was to respond whether or not the stimulus was the same object that had been presented earlier, regardless of rotation, and the same learning objects were presented repeatedly. At the beginning of the experiment, participants fixated on the centers of the objects' components more frequently in the rotation condition, suggesting that objects were encoded more categorically. The proportions of large saccades were initially the same in the two conditions but, after a few trials, diverged depending on the test condition: at the beginning of a trial, the proportion of large saccades was significantly higher in the rotation condition than in the non-rotation condition, and longer fixation durations immediately following a large saccade occurred more often in the rotation condition. These results suggest that participants encode 3-D objects more categorically in the rotation condition and that, after a few trials, they first extract the global shape of the object on the basis of their categorically stored information.