Florian Roehrbein, Ruben Coen-Cagli, Odelia Schwartz; Dynamic scenes vs. static images: Differences in basic gazing behaviors for natural stimulus sets. Journal of Vision 2011;11(11):486. doi: https://doi.org/10.1167/11.11.486.
For psychophysical experiments, truly natural visual input is usually not feasible, so experimental stimuli are simplified, e.g. by employing static images instead of video sequences. Here we present empirical data and modeling results that explicitly compare saccadic eye-movement parameters and the resulting gaze-selected regions under dynamic and static stimulus conditions. Data were collected in eye-tracking experiments with 40 natural grayscale videos (duration 5 s) and 40 corresponding static stimulus versions. First, we analyzed the eye-movement data with respect to various gazing parameters. The most striking effect is a substantial difference in the number of fixations made: natural videos lead to far fewer fixations than the corresponding static versions (clips 1.89 vs. pictures 2.36 fixations/s). A closer analysis revealed that smooth-pursuit eye movements cannot explain this effect, and we additionally analyzed first vs. late fixations and total scan-path length. We performed various controls and also checked, with a second set of 21 semi-natural stimuli, whether the observed differences were due to peculiarities of our data set. This is clearly not the case: fixation frequency is generally increased, but we again find a marked difference between conditions. An evaluation of the fixated image locations revealed that fixation frequency correlates with luminance variance. To further analyze the dependency on image homogeneity, we applied a Gaussian Scale Mixture (GSM) model to the gaze-selected patches and compared them to randomly sampled ones. This generative model measures the degree of statistical homogeneity between center and surround locations, and we identified conditions under which center and surround are more likely to result from image regions that activate the same set of oriented filters.
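The homogeneity idea behind the Gaussian Scale Mixture model can be illustrated with a minimal sketch (this is not the authors' implementation; the Rayleigh mixer prior, sample sizes, and function names are illustrative assumptions). In a GSM, a filter response is modeled as a positive scalar "mixer" variable multiplying a Gaussian variable; when center and surround share one mixer, their response magnitudes are correlated even though the underlying Gaussians are independent, which is the statistical signature of a homogeneous image region:

```python
import numpy as np

rng = np.random.default_rng(0)

def gsm_sample(n, shared_mixer):
    """Draw n (center, surround) filter-response pairs from a Gaussian
    Scale Mixture: response = mixer * gaussian. A shared mixer models a
    statistically homogeneous region; independent mixers model a
    heterogeneous one. The Rayleigh prior on the mixer is an assumption
    made for this sketch."""
    g_c = rng.standard_normal(n)            # Gaussian part, center
    g_s = rng.standard_normal(n)            # Gaussian part, surround
    v_c = rng.rayleigh(1.0, n)              # positive mixer, center
    v_s = v_c if shared_mixer else rng.rayleigh(1.0, n)
    return v_c * g_c, v_s * g_s

def magnitude_correlation(center, surround):
    """Correlation of response magnitudes: clearly positive under a
    shared mixer, near zero under independent mixers."""
    return np.corrcoef(np.abs(center), np.abs(surround))[0, 1]

c_hom, s_hom = gsm_sample(20000, shared_mixer=True)
c_het, s_het = gsm_sample(20000, shared_mixer=False)
print(magnitude_correlation(c_hom, s_hom))  # clearly positive
print(magnitude_correlation(c_het, s_het))  # near zero
```

Inferring whether the shared-mixer or independent-mixer configuration better explains a gaze-selected patch is one way to quantify the center-surround homogeneity the abstract refers to.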
We finally hypothesize that a major reason for the observed differences lies in the risk of missing information during a saccade, a risk which applies only to dynamic, not to static, stimulus conditions.