Virtual environments (VE) are increasingly relied upon for immersive yet highly controlled experiments across many research domains, because they allow experimenters to control stimuli and their timing while giving subjects a sense of agency and exploration. They also allow behavioral outcomes to be recorded in precise synchrony with events in the VE, which is necessary for computing measures of performance (Bohil, Alicea, & Biocca, 2011; Washburn & Astur, 2003). In both humans and nonhuman primates, eye movements are a critical element of environmental exploration and thus of survival. Their role in vision (Haarmeier & Thier, 1999; Tatler, Hayhoe, Land, & Ballard, 2010), cognition (Di Stasi et al., 2010), and motor control (Watanabe & Munoz, 2011) has been studied extensively. However, most of these studies have used two-dimensional displays in which most objects and the background remain stationary, requiring subjects to respond with a single type of eye movement (e.g., saccades, smooth pursuit, or fixations). During virtual navigation, objects and environmental features that are common targets of eye movements become dynamic as one moves about them. This creates a nontrivial challenge: determining when subjects are foveating an object, for how long, and how they respond to the dynamics of the scene. Although studies in VE have led to insights into spatial working memory (De Lillo & James, 2012) and scene memory (Kit et al., 2014), as well as paradigms to interrogate hippocampal activity in humans (Miller et al., 2013) and nonhuman primates (Hori et al., 2005; Wirth, Baraduc, Planté, Pinède, & Duhamel, 2017), eye movement behavior in VE has not yet been investigated thoroughly. It also remains unclear how such behavior compares with that observed in classical tasks used in vision laboratories.