Fixational eye movement patterns have been widely studied in a variety of domains including reading, scene and face perception, object localization, and visual search (Henderson, Brockmole, Castelhano, & Mack, 2007; Land, Mennie, & Rusted, 1999; Liversedge & Findlay, 2000; Mannan, Ruddock, & Wooding, 1997; Rayner, 1995; Renninger, Verghese, & Coughlan, 2007; Underwood, Foulsham, van Loon, Humphreys, & Bloyce, 2006). Surprisingly, beyond two-dimensional (2D) pattern recognition (e.g., Renninger, Coughlan, & Verghese, 2005; Renninger et al., 2007), there have been no detailed analyses of eye movement patterns during three-dimensional (3D) visual object recognition. The ability of the human visual system to rapidly categorize 3D objects across variation in viewing conditions caused by changes in scale, lighting, and viewpoint is a truly remarkable accomplishment for a biological system, one that far surpasses the most advanced computer vision systems in adaptability and robustness. Although everyday object recognition can be accomplished quickly, and often within a single fixation for a distal stimulus, previous studies using 2D stimuli have shown that fixation patterns can be highly informative about shape processing during perception (e.g., Melcher & Kowler, 1999; Renninger et al., 2005, 2007; Vergilino-Perez & Findlay, 2004). For example, Melcher and Kowler (1999) showed that the initial landing position during saccadic localization is driven by a representation of target shape that determines center-of-gravity (COG) landing sites. Recent evidence also suggests that the perception of information about object presence and identity in a scene may be largely restricted to a relatively small region around the current fixation point (Henderson, Williams, Castelhano, & Falk, 2003), although the nature of the shape information processed during fixations and the role of this information in object recognition remain unclear.