Abstract
Humans produce quick eye movements (saccades) to redirect the fovea and obtain visual information about the environment. Saccades are guided by bottom-up information, such as size or luminance (Deubel et al., 1984), and by top-down information, such as task goals (Herwig et al., 2010). In some cases, when two targets are in close proximity (up to 30° of separation), saccades are directed to an intermediate location (saccade averaging; Coren & Hoenig, 1972), with the highest incidence of averaged movements occurring when reaction times are short (Chou et al., 1999). However, the exact spatial and temporal relationships that contribute to saccade averaging remain unknown. Recently, Haith and colleagues (2015) examined intermediate (averaged) movements during a time-restricted, visually guided reaching task. The target shifted at different times before reach onset, providing limited time to re-prepare the movement plan. When re-preparation time was short, the movement was directed to the initially cued target. However, as this time increased, the authors observed intermediate movements aimed between the two targets. Importantly, this time depended on the spatial distance between competing goals. The authors suggest that these intermediate movements reflect an adaptive behavior of the motor system when the goal of a movement is ambiguous. Here, we applied the same paradigm and framework to saccadic eye movements. We found that the re-preparation time that resulted in intermediate movements increased with the spatial separation of the competing targets (70 ms, 92 ms, and 113 ms for 15°, 30°, and 45° of separation, respectively). In addition, the variability of this relationship changed as a function of the target shift, with the transition between the two targets becoming sharper as their separation increased. Collectively, our results demonstrate that saccade averaging, like reaching movements, depends on the time available to prepare the movement plan and on the spatial separation of competing movement goals.
Meeting abstract presented at VSS 2016