We scan the world via a series of fast eye movements (“saccades”) that land our eyes on objects of interest ∼4 times a second. Interestingly, even between saccades (“fixational pauses”) our eyes are not completely at rest but execute miniature movements (fixational eye movements [FEMs]) consisting of occasional brief microsaccades, a continuous very-low-amplitude tremor, and a dominant, slow, low-amplitude drift. In a normal scan (
Figure 1), fixational pauses last ∼250 msec (
Gersch, Kowler, Schnitzer, & Dosher, 2008;
Boi, Poletti, Victor, & Rucci, 2017), during which time the visual system transmits, processes, and analyzes a great deal of information; we perceive the whole spatial tapestry in front of us, as well as its many elements. It has long been a truism that the extraordinary ability of the visual system to handle vast amounts of information during these brief pauses stems from its built-in parallel processing mechanisms. The two-dimensional (2D) retinal image is transmitted and processed in parallel, with minimal delays, through the complex retinal circuitry, the multi-layered lateral geniculate nucleus (LGN), and the early stages of the primary visual cortex (V1). Even the dominant physiological view that attributes our object perception to expert cells such as “face cells” and even “Gestalt cells” (see review by
Spillmann et al., 2023) relies on convergence and integration mechanisms emerging from the parallel organization of V1 cells. Despite the intuitive appeal of a 2D image passing through parallel pathways to a percept that retains the parallel nature of the outside world, there is an emerging consensus that during fixational pauses the drifting eye converts the spatial information at each retinal or V1 locus into a space-to-time code (a simplified numerical sketch of this conversion follows this paragraph). This transformation, it is argued, is beneficial and possibly even essential to good visibility (cf.
Rucci, Ahissar, & Burr, 2018). A partial list of putative benefits includes reformatting a static spatial pattern into a spatiotemporal code (
Ahissar & Arieli, 2012;
Rucci et al., 2018;
Rucci, Iovin, Poletti, & Santini, 2007), enhanced sensitivity to high spatial frequencies (SFs) and improved orientation and contrast discrimination (
Rucci et al., 2018;
Boi et al., 2017;
Ahissar & Arieli, 2012;
Rucci et al., 2007), enhancement of feature extraction and estimation (
Kuang, Poletti, Victor, & Rucci, 2012;
Greschner, Bongard, Rujan, & Ammermüller, 2002), improving acuity and hyperacuity (
Ratnam, Domdei, Harmening, & Roorda, 2017;
Anderson, Ratnam, Roorda, & Olshausen, 2020;
Intoy & Rucci, 2020), overcoming retinal inhomogeneity (
Anderson et al., 2020), organizing retinal images (
Lapin & Bell, 2023), and providing efficient coding for neuromorphic vision (
Testa, Sabatini, & Canessa, 2023). Others (
Hohl & Lisberger, 2011;
Pitkow, Sompolinsky, & Meister, 2007), while acknowledging that FEMs generate significant neural responses, suggest mechanisms that allow the visual system to overcome these responses. Some studies suggested that drift-generated responses may be useful in preventing image fading (
Martinez-Conde, Otero-Millan, & Macknik, 2013;
Ahissar & Arieli, 2012;
Engbert, Mergenthaler, Sinn, & Pikovsky, 2011).
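To make the space-to-time conversion concrete, the following minimal sketch (ours, purely illustrative, and not taken from any of the studies cited above) samples a static sinusoidal grating at a single retinal locus while the eye follows a random-walk drift trajectory; the random walk is only a first approximation to real drift, and all parameter values are illustrative. A locus traversing a pattern of spatial frequency k at drift speed v sees temporal modulation at roughly k·v cycles per second, so for fixed drift statistics higher SFs produce stronger temporal modulation, whereas a stabilized image produces none (the fading scenario).

```python
import numpy as np

# Minimal illustrative sketch: a static grating sampled at one retinal locus
# during ocular drift modeled as a random walk. Parameter values are
# illustrative, not fitted to data.

rng = np.random.default_rng(0)

fs = 1000.0                  # sampling rate, Hz (1 ms steps)
t = np.arange(0.0, 0.25, 1.0 / fs)   # one ~250 msec fixational pause
sigma = 0.01                 # drift step s.d. per step, deg (illustrative)

# Random-walk drift trajectory xi(t), in degrees of visual angle
xi = np.cumsum(sigma * rng.standard_normal(t.size))

def locus_signal(k_cpd, trajectory):
    """Luminance seen at a single retinal locus while drifting over a static
    sinusoidal grating of spatial frequency k_cpd (cycles/deg):
    I(t) = cos(2*pi*k*xi(t)), i.e., space recoded as time."""
    return np.cos(2.0 * np.pi * k_cpd * trajectory)

for k in (1, 5, 20):         # low to high spatial frequency, cycles/deg
    s = locus_signal(k, xi)
    print(f"{k:2d} cpd: temporal modulation (s.d.) = {np.std(s):.3f}")

# A perfectly stabilized image (xi = 0) yields a constant locus signal, so a
# temporally band-pass retina transmits nothing: the image-fading scenario.
print("stabilized image: s.d. =", np.std(locus_signal(5, np.zeros_like(xi))))
```

Running the sketch prints a modulation amplitude that grows with spatial frequency and is exactly zero for the stabilized image; only the temporal statistics of the drift and the spatial content of the image enter, which is the sense in which the drifting eye recodes space into time.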