Journal of Vision
August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Reconstructing pupillary dynamics during free-viewing of movies: the roles of pupil light and orienting responses
Author Affiliations & Notes
  • Yuqing Cai
    Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, The Netherlands
  • Christoph Strauch
    Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, The Netherlands
  • Marnix Naber
    Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, The Netherlands
  • Footnotes
    Acknowledgements: This work was supported by a Chinese Scholarship Council (CSC) scholarship.
Journal of Vision August 2023, Vol.23, 4813. doi:https://doi.org/10.1167/jov.23.9.4813
Citation

Yuqing Cai, Christoph Strauch, Marnix Naber; Reconstructing pupillary dynamics during free-viewing of movies: the roles of pupil light and orienting responses. Journal of Vision 2023;23(9):4813. https://doi.org/10.1167/jov.23.9.4813.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Pupil size dynamically adapts to changes in low-level visual features as well as to cognitive factors. When cognitive factors are manipulated in pupillometric research, low-level visual features are usually strictly controlled and fixation position is required to remain constant. This severely limits the range of options for experimental designs. Instead of controlling for low-level features, the current study attempts to model and predict pupillary dynamics based on complex changes in low-level features. Any unexplained variance can then be attributed only to higher-level factors. Forty healthy participants free-viewed a collection of 60-second movie clips while gaze position and pupil size were recorded. Visual features, namely luminance changes and color changes, were extracted across the movie frames. Following the logic of linear time-invariant systems, visual feature changes were convolved with pupil response functions (PuRFs) for light and orienting processes separately. To find the model that best fitted the actual pupil size recordings, we systematically varied the peak latency, width, and amplitude of the PuRFs. The fitted models demonstrated that pupil responses predicted from luminance changes matched the recorded pupil size changes. The median proportion of explained variance across data from all movie clips (n = 453) was approximately 30%. In addition, this proportion significantly improved to 34% after including transient pupil orienting responses to changes in color space. In conclusion, these results illustrate that our model of the pupil light and orienting response can explain a substantial proportion of the variance in pupil size changes during unconstrained viewing of complex visual stimuli. Extensions of the current model could be used to produce baseline pupil traces that allow researchers to (1) control for confounds of low-level features in experimental designs, (2) discover which low-level visual aspects drive pupillary dynamics, and (3) investigate the effects of higher-order factors such as attention in isolation from confounding factors.
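
As a concrete illustration of the linear time-invariant approach described in the abstract, the sketch below convolves per-frame feature-change signals with gamma-shaped pupil response functions, sums the light and orienting components, and grid-searches one PuRF parameter against a toy pupil trace. The gamma parameterization (in the spirit of Hoeks & Levelt, 1993), all variable names, sampling rates, and parameter values are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def purf(t, latency, width, amplitude):
    """Gamma-shaped pupil response function used as a convolution kernel.
    latency ~ peak time (s), width ~ shape, amplitude ~ gain.
    Illustrative parameterization, not the authors' exact PuRF."""
    h = (t / latency) ** width * np.exp(-width * (t / latency - 1.0))
    return amplitude * h / h.max()

fs = 50.0                        # assumed sampling rate of features and pupil trace (Hz)
t = np.arange(0, 4, 1 / fs)      # 4-second kernel support
n = int(60 * fs)                 # one 60-second movie clip

# Stand-ins for the luminance- and color-change signals extracted from movie frames
luminance_change = np.random.rand(n)
color_change = np.random.rand(n)

# LTI model: each feature-change signal is convolved with its own PuRF
pred_light  = fftconvolve(luminance_change, purf(t, latency=1.0, width=10, amplitude=-1.0))[:n]
pred_orient = fftconvolve(color_change,     purf(t, latency=0.5, width=10, amplitude=0.3))[:n]
predicted_pupil = pred_light + pred_orient   # compared against the recorded pupil trace

# Toy "recorded" pupil trace, for demonstration only
recorded_pupil = pred_light + 0.1 * np.random.randn(n)

# Systematically vary a PuRF parameter (here only peak latency) and keep the best fit
def r2(a, b):
    return np.corrcoef(a, b)[0, 1] ** 2

best = max(
    ((lat, r2(fftconvolve(luminance_change, purf(t, lat, 10, -1.0))[:n], recorded_pupil))
     for lat in np.arange(0.4, 2.0, 0.1)),
    key=lambda x: x[1],
)
print(f"best latency {best[0]:.1f} s, R^2 = {best[1]:.2f}")
```

In a full fit, peak latency, width, and amplitude would be varied jointly for both PuRFs (e.g., with a nonlinear optimizer rather than the one-dimensional grid shown here), and the proportion of explained variance would be computed against the actual pupil recordings.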
