September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Classifying binary decision-outcomes from pupil dilation: A random forest approach
Author Affiliations
  • Christoph Strauch
    Utrecht University
  • Teresa Hirzle
    Ulm University
  • Andreas Bulling
    University of Stuttgart
Journal of Vision September 2021, Vol. 21, 2373.

Pupil dilation may reveal the outcome of binary decisions: pupils dilate more strongly for stimuli that the beholder deems targets than for distractors. These findings, however, are based on pupil dynamics averaged over multiple trials and participants rather than on single trials. Furthermore, the reported differences in pupil size between targets and distractors vary widely across studies, likely due to differences in the proportion of target to distractor stimuli. Previous research has addressed this question with machine-learning techniques, but the respective studies did not control for effects of motor execution on pupil size, largely ignored possible effects of stimulus probability, and used generally less controlled settings with unconstrained gaze position. We reanalyzed pupil sizes from n = 18444 trials, lasting 3 s each, gathered from n = 69 participants, to classify binary decision outcomes with a random forest classifier. Participants were presented with either targets or distractors at a likelihood of 25%, 50%, or 75% per block. In half of the trials, participants had to indicate overtly, via key press, whether the presented letter was a target. Classification performance was best for targets that were rare (25%) relative to distractors, with an AUC of 0.75; AUC was 0.69 at equiprobability and 0.58 when targets were more frequent than distractors (chance level: 0.50). Classification was better without key press, reaching an AUC of up to 0.77 when targets were rare (25%). The first derivative of pupil size provided the most informative features, all of which could be computed within the first second of a trial, suggesting that successful classification can be achieved relatively fast. These results are useful for research in which intentions cannot be communicated overtly. We further discuss possible applications building on pupil-based intention classification.
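The evaluation pipeline described above — first-derivative features taken from the first second of each trial, scored against the 0.50 chance baseline with AUC — can be sketched in plain Python. Everything below is illustrative: the sample rate, the synthetic traces, and the single mean-derivative score (which stands in for the study's full random forest over many features) are assumptions, not the authors' implementation.

```python
def first_derivative_features(trace, sample_rate=50, window_s=1.0):
    """Differences between consecutive pupil samples within the first
    window_s seconds. sample_rate is an assumed value, not the study's."""
    n = int(sample_rate * window_s)
    seg = trace[:n]
    return [b - a for a, b in zip(seg, seg[1:])]

def auc(scores_pos, scores_neg):
    """Area under the ROC curve: the probability that a randomly chosen
    positive (target) outscores a randomly chosen negative (distractor);
    ties count as 0.5. Chance level is 0.50."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in scores_pos for q in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy demo with hypothetical traces: target trials dilate faster.
target = [0.02 * t for t in range(150)]        # steeper dilation slope
distractor = [0.01 * t for t in range(150)]
score = lambda feats: sum(feats) / len(feats)  # stand-in for the forest
print(auc([score(first_derivative_features(target))],
          [score(first_derivative_features(distractor))]))
```

With one perfectly separable target/distractor pair, the printed AUC is 1.0; on real single-trial data the overlap between classes is what pulls the value down toward the reported 0.58 to 0.77 range.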

