Systems have been developed that attempt to bypass the calibration procedure. Some eye trackers minimize the need for calibration by tracking the first and fourth Purkinje reflections (the latter arising from the back surface of the lens), an approach known as the dual-Purkinje method (Crane & Steele, 1985; Sigut & Sidha, 2011). However, because the fourth Purkinje image is weak and very difficult to detect, lighting conditions must be tightly controlled to use these systems. Other systems use multiple light sources (or multiple cameras) to decrease sensitivity to head movements (see the review in Hansen & Ji,
2010). In the area of machine learning, gaze estimation models have been developed that update their parameters online, i.e., learn incrementally, while participants are looking at highly salient images. These statistical approaches include nonlinear approximation (Betke & Kawai,
1999; Chen & Ji,
2011,
2015), and artificial neural networks (Ji & Zhu,
2003; Schneider, Schauerte, & Stiefelhagen,
2014; Stiefelhagen, Yang, & Waibel,
1997). When the head is constrained, statistical models are accurate to within 2° or better (Chen & Ji,
2011), whereas progress in unconstrained gaze estimation has been slower, with measurement errors down to 10° (Zhang, Sugano, Fritz, & Bulling, 2017).
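To illustrate the kind of mapping such statistical approaches estimate, the sketch below fits a second-order polynomial from a two-dimensional eye feature (e.g., the pupil-minus-corneal-reflection offset) to screen coordinates and refines it incrementally with recursive least squares whenever a presumed gaze target is available; the feature expansion, forgetting factor, and use of a salient point as the presumed target are illustrative assumptions rather than details of any of the cited systems.

```python
# Illustrative sketch only: a second-order polynomial mapping from a 2-D eye
# feature to screen coordinates, updated incrementally with recursive least
# squares. The feature values, forgetting factor, and "salient point as gaze
# target" assumption are chosen for illustration, not taken from the cited work.
import numpy as np

def poly_features(ex, ey):
    """Second-order polynomial expansion of a 2-D eye feature."""
    return np.array([1.0, ex, ey, ex * ey, ex**2, ey**2])

class IncrementalGazeMapper:
    def __init__(self, n_features=6, lam=0.99):
        self.W = np.zeros((n_features, 2))   # mapping to (x, y) on screen
        self.P = np.eye(n_features) * 1e3    # inverse-covariance estimate
        self.lam = lam                       # forgetting factor

    def update(self, eye_feature, target_xy):
        """One recursive-least-squares step toward a presumed gaze target."""
        phi = poly_features(*eye_feature)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = np.asarray(target_xy) - phi @ self.W           # prediction error
        self.W += np.outer(k, err)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

    def predict(self, eye_feature):
        return poly_features(*eye_feature) @ self.W

# Usage: whenever a highly salient point is on screen, treat it as the probable
# gaze target and refine the mapping online (sample values are made up).
mapper = IncrementalGazeMapper()
mapper.update(eye_feature=(0.12, -0.05), target_xy=(960, 540))
print(mapper.predict((0.12, -0.05)))
```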
In a novel approach, Pfeuffer, Vidal, Turner, Bulling, and Gellersen (2013) used moving objects and tracked smooth pursuit eye movements, rather than saccades to static targets, and achieved accuracy errors of 1° or less.
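The idea behind such pursuit-based calibration can be sketched as follows (this is not a reimplementation of Pfeuffer et al.'s method): time windows in which an uncalibrated gaze signal correlates strongly with the trajectory of a smoothly moving target are assumed to contain pursuit of that target, and those sample pairs are used to fit a mapping from raw gaze to screen coordinates. The window length, correlation threshold, and linear form of the mapping below are assumptions chosen for illustration.

```python
# Minimal sketch of pursuit-based implicit calibration, assuming synchronously
# sampled raw (uncalibrated) gaze and the on-screen trajectory of a smoothly
# moving target. Window length and correlation threshold are illustrative
# assumptions, not values from Pfeuffer et al. (2013).
import numpy as np

def pursuit_calibration(raw_gaze, target, window=30, min_corr=0.8):
    """Fit a per-axis linear mapping raw -> screen from high-correlation windows.

    raw_gaze, target: arrays of shape (n_samples, 2).
    Returns (gain, offset), each of shape (2,), so that
    screen ~ raw_gaze * gain + offset.
    """
    raw_gaze, target = np.asarray(raw_gaze, float), np.asarray(target, float)
    selected = []
    for start in range(0, len(raw_gaze) - window + 1, window):
        r = raw_gaze[start:start + window]
        t = target[start:start + window]
        # Keep the window only if both axes of the gaze signal follow the target.
        corr_x = np.corrcoef(r[:, 0], t[:, 0])[0, 1]
        corr_y = np.corrcoef(r[:, 1], t[:, 1])[0, 1]
        if corr_x > min_corr and corr_y > min_corr:
            selected.append((r, t))
    if not selected:
        raise ValueError("no windows in which gaze followed the moving target")
    r_all = np.vstack([r for r, _ in selected])
    t_all = np.vstack([t for _, t in selected])
    # Least-squares line fit per axis: screen = gain * raw + offset.
    gain, offset = np.empty(2), np.empty(2)
    for axis in range(2):
        gain[axis], offset[axis] = np.polyfit(r_all[:, axis], t_all[:, axis], 1)
    return gain, offset
```

The recovered gain and offset would then be applied to subsequent raw gaze samples in place of an explicit fixation-based calibration.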