September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Online eye tracking with consumer-grade webcams: potential and limits
Author Affiliations
  • Kilian Semmelmann
    Developmental Neuropsychology, Ruhr-Universität Bochum
  • Sarah Weigelt
    Developmental Neuropsychology, Ruhr-Universität Bochum
Journal of Vision August 2017, Vol. 17, 892. https://doi.org/10.1167/17.10.892
Abstract

Over the last decade, online experimentation has become established as a viable supplement to in-lab experimentation. While this endeavor started with online questionnaires, performance- and reaction-time-based paradigms used in vision science have recently been added to the list of reliable instruments for online research. To add another method to this inventory, this study explored the potential and limits of webcam-based online eye tracking using a JavaScript-based gaze-estimation library supported by HTML5. By using consumer-grade webcams to acquire data from participants' homes, we expect to gain the advantages of lower costs, parallel and independent data collection, and easier access to broader or more specific populations. We employed three tasks (fixation, smooth pursuit, and free viewing) in an in-lab and an online setting to establish a first common ground of spatial and temporal accuracy. The fixation task allowed us to identify initial saccades and the spatial offset toward the target. During the smooth pursuit task, the same measures were analyzed, but with a moving stimulus. The third task assessed sensitivity to the semantic interpretation of an image by replicating earlier work on the distribution of attention across regions of interest of a face. Overall, we found a spatial accuracy of around 200 px (4° visual angle) offset for both static and moving stimuli, and we were able to reproduce the finding that the eyes are predominantly fixated when viewing faces. Online data showed no difference in accuracy compared with in-lab data, but exhibited higher variance, a lower sampling rate, and longer experimental durations. These results suggest that web-technology-based eye tracking is suitable for all three tasks, and we are confident that the technique will continue to improve and become available for online experimentation in vision science and beyond.

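The abstract does not name the specific gaze-estimation library, so the sketch below is purely illustrative: it assumes a WebGazer.js-style browser API (an assumption, not necessarily the authors' toolchain) to show how consumer-webcam gaze predictions can be logged and compared with a known target position. The helper functions offsetToTarget and pxToDeg are hypothetical names introduced here; the pixel-size and viewing-distance defaults are likewise assumptions, since remote participants' screen geometry must be estimated or reported.

    <!-- Illustrative only: assumes the open-source webgazer.js library is loaded. -->
    <script src="webgazer.js"></script>
    <script>
      const samples = []; // gaze predictions: {x, y, t} in CSS pixels / ms since start

      webgazer
        .setRegression('ridge')                  // ridge regression calibrated from clicks
        .setGazeListener((data, elapsedTime) => {
          if (data === null) return;             // no prediction available yet
          samples.push({ x: data.x, y: data.y, t: elapsedTime });
        })
        .begin();                                // requests webcam access and starts tracking

      // Euclidean offset (in pixels) between a gaze sample and a known target position.
      function offsetToTarget(sample, target) {
        return Math.hypot(sample.x - target.x, sample.y - target.y);
      }

      // Convert a pixel offset to degrees of visual angle. Pixel size (cm) and viewing
      // distance (cm) are assumed defaults here, not values from the study.
      function pxToDeg(px, pxSizeCm = 0.027, viewingDistanceCm = 60) {
        return (2 * Math.atan((px * pxSizeCm) / (2 * viewingDistanceCm)) * 180) / Math.PI;
      }
    </script>

The mapping from a pixel offset (such as the ~200 px reported above) to degrees of visual angle depends entirely on these screen-geometry values, which is why they must be assumed or estimated when data are collected from participants' own devices.
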
Meeting abstract presented at VSS 2017
