Abstract
Within the last decade, online experimentation has been established as a viable supplement to in-lab experimentation. While this endeavor started with online questionnaires, performance- and reaction-time-based paradigms used in the field of vision science have recently been added to the list of reliable instruments for online research. To add another method to this inventory, this study explored the potential and limits of webcam-based online eye tracking using a JavaScript-based gaze-estimation library built on HTML5. By using consumer-grade webcams to acquire data from participants' homes, we expect to gain the advantages of lower cost, parallel and independent data collection, and easier access to broader or more specialized populations. We employed three tasks (fixation, smooth pursuit, and free viewing) in both an in-lab and an online setting to establish a baseline of spatial and temporal accuracy. The fixation task allowed us to identify initial saccades and the spatial offset from the target. The smooth-pursuit task analyzed the same measures, but with a moving stimulus. The third task assessed sensitivity to the semantic content of an image by replicating earlier work on the distribution of attention across regions of interest of a face. Overall, we found a spatial offset of around 200 px (about 4° of visual angle) for both static and moving stimuli, and we were able to reproduce the finding that the eyes are predominantly fixated when viewing faces. Online data showed no difference in accuracy from in-lab data, but exhibited higher variance, a lower sampling rate, and longer experimental durations. These results suggest that web-technology-based eye tracking is suitable for all three tasks, and we are confident that the technique will continue to improve and become widely available for online experimentation in the field of vision science and beyond.
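For context, the browser-side acquisition described above can be sketched in a few lines of JavaScript. The abstract does not name the specific library; the sketch below assumes a WebGazer.js-style API (the webgazer object, setGazeListener, and begin come from that library), and the sample buffer and field names are illustrative, not the authors' actual code.

```javascript
// Minimal sketch of webcam-based gaze sampling in the browser,
// assuming a WebGazer.js-style gaze-estimation library.
const samples = []; // illustrative buffer for estimated gaze points

webgazer
  .setGazeListener((data, elapsedTime) => {
    if (data === null) return; // no face/eye detected on this frame
    // Store the estimated screen coordinates (px) with a timestamp,
    // e.g. for later comparison against fixation-target positions.
    samples.push({ x: data.x, y: data.y, t: elapsedTime });
  })
  .begin(); // requests webcam access and starts estimation
```

Because estimation runs client-side at the webcam's frame rate, the effective sampling rate and latency vary with each participant's hardware, which is consistent with the higher variance and lower sampling rate reported for the online setting.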
Meeting abstract presented at VSS 2017