Harold Greene, James Brown, Barry Dauphin; A Visual Field Asymmetry in Pre-saccadic Fixation Durations. Journal of Vision 2014;14(10):1213. doi: 10.1167/14.10.1213.
Many pro-saccadic reaction time studies have revealed a shorter latency to initiate saccades towards the upper visual field than towards the lower visual field. Our concern was temporal processing during free-scanning eye movements. For free-scanning visual exploration, the temporal metric includes not only saccade reaction time, but also the time used for perceptual processing of information at the fixated location. If asymmetries are reliably present in free-scanning exploration of scenes, this must be considered in computational modelling of human fixation durations. Eye movements were monitored as observers engaged in three different free-scanning visual exploration tasks (i.e., two types of visual search tasks, and a Rorschach ambiguous image-interpretation task). Pre-saccadic fixation durations (PSFDs) associated with saccades directed within the visual field were compared for 80 naïve participants (at least 12 per task). For each task, PSFDs were placed in 10-deg direction bins, and analyzed using a 36-level, one-way ANOVA. The analyses indicated that PSFDs were not equally long in different directions (all ps < .01). For post hoc probing, we divided the 360-deg visual field into four 90-deg sections. Orthogonal contrast analyses revealed a vertical field asymmetry for each task, such that PSFDs were shorter by about 50 ms for saccades directed upwards than downwards (all ps < .01). We speculate that the vertical asymmetry observed for PSFDs was more likely related to saccade programming constraints than to the human experience of biasing visual attention towards extra-personal space in the upper visual field. Whatever the correct explanation, the ability to predict PSFDs is important for computational modelling of real-time exploration of visual scenes. The findings make a case for also including directional constraints in computational modelling of when the eyes move.
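The analysis pipeline described above (36 direction bins of 10 deg each, a one-way ANOVA across bins, then an upward-vs.-downward contrast) can be sketched as follows. This is a minimal illustration on simulated data, not the authors' code: the data, the ~50 ms simulated effect, and all function names are assumptions for demonstration; the ANOVA F statistic is computed by hand with NumPy.

```python
import numpy as np

def direction_bins(angles_deg, width=10):
    """Assign each saccade direction (0-360 deg) to one of 360/width bins."""
    return (np.asarray(angles_deg) % 360 // width).astype(int)

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D sample arrays."""
    groups = [np.asarray(g, float) for g in groups if len(g) > 0]
    k = len(groups)                       # number of non-empty groups
    n = sum(len(g) for g in groups)       # total observations
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Simulated PSFDs: ~50 ms shorter for upward-directed saccades (0-180 deg),
# mimicking the vertical asymmetry reported in the abstract.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 360, 5000)
upward = angles < 180
psfd = rng.normal(250, 40, 5000) - 50 * upward

# 36-level one-way ANOVA across 10-deg direction bins.
bins = direction_bins(angles)
F = one_way_anova_F([psfd[bins == b] for b in range(36)])

# Post hoc contrast: mean PSFD for upward vs. downward saccades.
up_mean, down_mean = psfd[upward].mean(), psfd[~upward].mean()
```

With a genuine directional effect in the data, `F` comes out far above 1, and the upward-minus-downward mean difference recovers the simulated ~50 ms asymmetry.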
Meeting abstract presented at VSS 2014