As shown in
Figure 1A, the Purkinje images are the reflections given by the various eye structures encountered by light as it travels toward the retina. Because of the shape of the cornea and lens, the first and fourth Purkinje images,
P1 and
P4, form on nearby surfaces and can be imaged together without the need for complex optics. Because these two reflections originate from interfaces at different spatial locations and with different shapes, they move at different speeds as the eye rotates. Thus, the relative motion between
P4 and
P1 provides information about eye rotation. Below we examine the main factors that need to be taken into account when using this signal to measure eye movements.
As originally noted by
Cornsweet and Crane (1973), the relative motion of
P1 and
P4 follows a monotonic relation with eye rotation. An approximation of this relation can be obtained, for small eye movements, by using a paraxial model (
Cornsweet & Crane, 1973), in which the anterior surface of the cornea and the posterior surface of the lens are treated as a convex and a concave mirror (
MC and
ML in
Figure 2A). Under the assumption of collimated illumination, this model enables analytical estimation of the relative positions,
X1 and
X4, of the two Purkinje images on the image plane
I:
\begin{equation}
  X_4 - X_1 \approx d \sin \Delta,
\end{equation}
where
d represents the distance between the anterior surface of the cornea and the posterior surface of the lens, and Δ is the angle between the incident beam of light and the optic axis of the eye, which here represents the angle of eye rotation (see
Figure 2A).
In this study, we focus on the case of the unaccommodated eye. Inserting in
Equation 1 the value of
d taken from the Gullstrand eye model (
d = 7.7 mm) and assuming light to be parallel to the optical axis of the imaging system (θ = 0 in
Figure 2A), we obtain the blue curve in
Figure 2B. These data indicate that, for small eye movements (i.e., the range of gaze displacements for which the paraxial assumption holds), the relative position of the two Purkinje images varies almost linearly with eye rotation.
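For illustration, Equation 1 can be evaluated directly; the minimal Python sketch below (our own, using the Gullstrand value d = 7.7 mm) reproduces the near-linear small-angle behavior of the blue curve in Figure 2B:

```python
import math

# Paraxial two-mirror approximation (Equation 1): X4 - X1 ≈ d * sin(Δ),
# with d the distance from the anterior cornea to the posterior lens surface.
D_MM = 7.7  # Gullstrand eye model, unaccommodated (mm)

def purkinje_offset_mm(rotation_deg: float) -> float:
    """Predicted separation between P4 and P1 (mm) for a given eye rotation (deg)."""
    return D_MM * math.sin(math.radians(rotation_deg))

for deg in range(-10, 11, 2):  # small rotations, where the paraxial assumption holds
    print(f"{deg:+3d} deg -> {purkinje_offset_mm(deg):+.3f} mm")
# Over this range the offset grows by roughly 0.134 mm per degree (d * pi / 180).
```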
Although the two-mirror approximation of
Figure 2A is intuitive, it holds only for small angular deviations of the line of sight. Furthermore, it ignores several factors involved in Purkinje image formation, including refraction at the cornea and the lens. To gain a deeper understanding of the positions and shapes of the Purkinje images, and of how they move across a wide range of eye rotations, we simulated a more sophisticated eye model in optical design software (Zemax OpticStudio). A primary goal of these simulations was to determine the expected sizes, shapes, and strengths of the Purkinje images as the angle of incident light changes with eye rotation. Because the Purkinje images are reflections, these characteristics depend on both the source of illumination and the optical and geometrical properties of the eye.
We used a well-known schematic eye model (
Atchison & Thibos, 2016), developed on the basis of multiple anatomical and optical measurements of human eyes (
Atchison, 2006). This model has been previously compared with other eye models and validated against empirical data (
Bakaraju, Ehrmann, Papas, & Ho, 2008;
Akram, Baraas, & Baskaran, 2018). Briefly, the model consists of four refracting surfaces (anterior and posterior cornea and lens) and features a lens with a gradient refractive index designed to capture the spatial nonhomogeneity of the human lens. In our implementation, we set the model eye parameters to simulate an unaccommodated emmetropic eye with a pupil of 6 mm, as described in (
Atchison, 2006). Because our goal is to develop a system that can be used in vision research experiments, we used light in the infrared range (850 nm), so that it would not interfere with visual stimulation. Finally, because the Purkinje images are formed via reflection, we applied the reflection coefficients of the eye’s multiple layers (e.g., see Chapter 6 in
Orfanidis, 2016).
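To give an intuition for the relative strengths of the two reflections discussed below, normal-incidence Fresnel reflectances can be estimated from nominal refractive indices. The sketch below uses illustrative schematic-eye values and ignores transmission losses and the lens gradient index, so it is only a rough guide, not the coefficients applied in our simulations:

```python
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance at an interface between media n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Nominal schematic-eye refractive indices (illustrative values only).
n_air, n_cornea, n_lens, n_vitreous = 1.000, 1.376, 1.386, 1.336

r_p1 = fresnel_reflectance(n_air, n_cornea)     # air / anterior cornea  -> P1
r_p4 = fresnel_reflectance(n_lens, n_vitreous)  # posterior lens / vitreous -> P4

print(f"P1 reflectance ~ {r_p1:.4f}")          # ~2.5e-2
print(f"P4 reflectance ~ {r_p4:.5f}")          # ~3.4e-4
print(f"P1/P4 ratio    ~ {r_p1 / r_p4:.0f}")   # several tens, i.e., nearly two orders of magnitude
```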
Under collimated illumination, eye rotations only change the angle of the illuminating beam relative to the optical axis of the eye. We therefore modeled eye movements by keeping the eye model stationary and rotating the illumination and imaging system around the eye's center of rotation, a point on the optical axis 13.8 mm behind the cornea (
Fry & Hill, 1962). As with most eye-trackers, this simplification assumes a gaze shift to consist of a pure rotation of the eye and neglects the small translations of the eye in the orbit that accompany rotations (∼0.1 mm per 30°) (
Demer & Clark, 2019) (see also analysis of the consequences of eye translations for our prototype in
Figure 7). In all cases, we modeled Purkinje images formation on the best focus plane, the plane orthogonal to the optical axis of the eye (
I in
Figure 2A), where
P4 is formed when the illumination source is right in front of the eye.
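As a concrete sketch of this bookkeeping, the fragment below (our own illustration; the coordinate convention and helper names are assumptions and not part of the Zemax model) rotates a rig element about the center of rotation while the eye stays fixed:

```python
import numpy as np

# Center of rotation: on the optical axis, 13.8 mm behind the corneal apex (Fry & Hill, 1962).
# Assumed convention: corneal apex at the origin, optical axis along +z pointing out of the
# eye, rotation confined to the horizontal (x-z) plane.
CENTER_OF_ROTATION = np.array([0.0, 0.0, -13.8])  # mm

def rotate_rig(point_mm, eye_rotation_deg):
    """Position of a rig element (source or camera), expressed in the eye's frame,
    after the eye 'rotates' by eye_rotation_deg. The eye model stays fixed, so the
    rig is rotated by the opposite angle about the center of rotation."""
    a = np.deg2rad(-eye_rotation_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return CENTER_OF_ROTATION + rot @ (np.asarray(point_mm) - CENTER_OF_ROTATION)

# Example: a collimated source placed 100 mm in front of the eye, on-axis.
print(rotate_rig([0.0, 0.0, 100.0], eye_rotation_deg=10.0))
```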
We first examined the impact of the power of the illumination on Purkinje image formation.
Figure 3A and B show one of the primary difficulties of using Purkinje images for eye-tracking: the irradiance of
P1 and
P4 differs considerably, by almost two orders of magnitude. As expected, both vary proportionally with the power of the source and decrease with increasing angle between the optical axis of the eye and the illuminating beam. However, their ratio remains constant as the power of the source varies, and the vast difference in irradiance between the two makes it impossible—in the absence of additional provisions—to image
P4 without saturating
P1.
These data are informative for selecting the power of the illumination. Two primary constraints need to be satisfied: on one hand, the power delivered to the eye needs to be sufficiently low to meet safety standards for prolonged use. On the other, it needs to be sufficiently high to enable reliable detection of the Purkinje images. The latter requirement is primarily determined by the noise level of the camera, which sets a lower bound for the irradiance of
P4, the weaker reflection.
Figure 3A and B show the saturation and noise-equivalent levels of irradiance for the camera used in our prototype (see Section 3). P4 is detectable when its irradiance exceeds the noise level (
Figure 3B), but the eye-tracking algorithm needs to be able to cope with the simultaneous saturation of large portions of
P1 (
Figure 3A).
This model also allows for a more rigorous simulation of how the two Purkinje images move as the eye rotates than the paraxial approximation of
Figure 2A.
Figure 3C shows the intersections of the chief rays with the image plane,
X1 and
X4, for eye rotations of ±40°. Note that both images are displaced almost linearly as a function of eye rotation. However, the rate of change differs between
P1 and
P4 (the different slopes of the curves in
Figure 3C). Specifically,
P4 moves substantially faster and, therefore, travels over a much larger extent than
P1. The data in
Figure 3C indicate that the imaging system should capture approximately 20 mm in the object space of plane
I to enable measurement of eye movements over this range. In practice, with a real eye, the imaged space may be smaller than this, because the occlusion of
P4 by the pupil will limit the range of measurable eye movements, as happens with the analog DPI.
These simulations also allowed examination of how individual differences in eye morphology and optical properties affect the motion of the Purkinje images. This variability is represented by the error bars in
Figure 3C, which were obtained by varying model parameters according to their range measured in healthy eyes (
Atchison, 2006). These data show that the trajectories of both
P1 and
P4 are minimally influenced by individual differences in eye characteristics.
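In the paraxial approximation the only eye-dependent parameter is d, so a quick sensitivity check conveys why such differences matter little; the spread of d used below is hypothetical and chosen purely for illustration:

```python
import math

# Hypothetical spread of the cornea-to-posterior-lens distance across eyes
# (illustrative values only, not a measured population range).
for d_mm in (7.2, 7.7, 8.2):
    slope_mm_per_deg = d_mm * math.pi / 180.0
    print(f"d = {d_mm:.1f} mm -> paraxial gain {slope_mm_per_deg:.3f} mm/deg")
# The offset-versus-rotation curve keeps the same near-linear shape; individual
# differences mainly rescale its gain rather than change its form.
```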
Because both Purkinje images move proportionally to eye rotation, the difference in their positions maintains an approximately linear relation with eye movements. This function closely matches the paraxial two-mirror approximation of
Equation 1 for small eye movements and extends the prediction to a much wider range of eye rotations (the black line in
Figure 2B). In the simulation of this figure, the direction of illumination and the optical axes of both the eye and the imaging system are aligned. However, this configuration is not optimal, because the two Purkinje images overlap when the optic axis of the eye is aligned with the illumination axis. This makes it impossible to detect
P4, which is smaller and weaker than
P1. As shown in
Figure 3D, the specific position of the illuminating beam plays an important role, both in determining departures from linearity and in placing the region where
P4 will be obscured by
P1. In this figure, the offset between the two Purkinje images is plotted for a range of eye rotations (
x-axis) and for various angular positions of the source (distinct curves). In the system prototype described in this article (
Figure 5), we placed the source at ϕ = 20° on both the horizontal and vertical axes. This selection provides a good linear operating range, while placing the region in which
P4 is not visible away from the most common directions of gaze.
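To make the role of the source placement concrete, the paraxial relation of Equation 1 can be extended by noting that the angle Δ between the incident beam and the eye's optic axis is the sum of the gaze rotation and the source offset. The sketch below (a simplification under our own sign convention, not the ray-traced model behind Figure 3D) shows how an offset source moves the overlap region, where X4 − X1 = 0 and P4 is obscured, away from straight-ahead gaze:

```python
import math

D_MM = 7.7  # Gullstrand cornea-to-posterior-lens distance (mm)

def offset_mm(gaze_deg: float, source_deg: float) -> float:
    """Paraxial P4 - P1 separation when the source is offset from straight ahead.
    The angle between the incident beam and the eye's optic axis is taken as
    gaze + source offset (sign convention is ours)."""
    return D_MM * math.sin(math.radians(gaze_deg + source_deg))

for phi in (0.0, 10.0, 20.0):
    zero_crossing = -phi  # gaze direction at which the two images coincide
    print(f"source at {phi:4.1f} deg -> images overlap near gaze {zero_crossing:+.1f} deg, "
          f"offset at straight-ahead gaze = {offset_mm(0.0, phi):+.2f} mm")
```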