October 2019, Volume 19, Issue 12
Article | Open Access
Ray tracing 3D spectral scenes through human optics models
Trisha Lian, Kevin J. MacKenzie, David H. Brainard, Nicolas P. Cottaris, Brian A. Wandell
Journal of Vision October 2019, Vol. 19(12):23. https://doi.org/10.1167/19.12.23
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Scientists and engineers have created computations and made measurements that characterize the first steps of seeing. ISETBio software integrates such computations and data into an open-source software package. The initial ISETBio implementations modeled image formation (physiological optics) for planar or distant scenes. The ISET3d software described here extends that implementation, simulating image formation for three-dimensional scenes. The software system relies on a quantitative computer graphics program that ray traces the scene radiance through the physiological optics to the retinal irradiance. We describe and validate the implementation for several model eyes. Then, we use the software to quantify the impact of several physiological optics parameters on three-dimensional image formation. ISET3d is integrated with ISETBio, making it straightforward to convert the retinal irradiance into cone excitations. These methods help the user compute the predictions of optics models for a wide range of spatially rich three-dimensional scenes. They can also be used to evaluate the impact of nearby visual occlusion, the information available to binocular vision, or the retinal images expected from near-field and augmented reality displays.

Introduction
Vision is initiated by the light rays entering the pupil from three-dimensional (3D) scenes. The cornea and lens (physiological optics) transform these rays to form a two-dimensional (2D) spectral irradiance image at the retinal photoreceptor inner segments. How the physiological optics transforms these rays, and how the photoreceptors encode the light, limits certain aspects of visual perception and performance. These limits vary both within an individual over time and across individuals, according to factors such as eye size and shape, pupil size, lens accommodation, and wavelength-dependent optical aberrations (Wyszecki & Stiles, 1982). 
Vision science and engineering account for the physiological optics with a diverse array of models. In certain cases, the application is limited to central vision and flat (display) screens, and in these cases, the optical transformation is approximated using a simple formula: convolution with a wavelength-dependent point spread function (PSF; Wandell, 1995). Because the PSF changes with eccentricity (Navarro, Artal, & Williams, 1993), this approximation is valid only when the stimulus is within about 5° of the fovea. To generate a retinal image spanning a large range of eccentricities, one needs PSF data for a full range of eccentricities, a method of interpolating them continuously across the field, and a way to apply wavelength-dependent translation to model transverse chromatic aberrations (TCAs). 
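In sketch form, the approximation is a per-wavelength convolution. The MATLAB below is illustrative rather than ISETBio code; 'radiance' and 'psf' are assumed inputs.

```matlab
% Illustrative sketch of the planar-scene approximation (not ISETBio
% code): blur each wavelength plane of the scene radiance with its own
% point spread function. 'radiance' (rows x cols x nWave) and 'psf'
% (support x support x nWave) are assumed inputs.
[rows, cols, nWave] = size(radiance);
irradiance = zeros(rows, cols, nWave);
for ww = 1:nWave
    irradiance(:,:,ww) = conv2(radiance(:,:,ww), psf(:,:,ww), 'same');
end
% Modeling TCA would additionally require a wavelength-dependent
% translation (or magnification) of each plane.
```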
The approximation is also valid only when the stimulus is distant. When viewing nearby 3D objects, the optical transformation is more complex and depends on the 3D geometry and spatial extent of the scene. For example, in the near field, the PSF is depth dependent, and shift-invariant convolution produces incorrect approximations of retinal irradiance at image points near depth occlusions. Accurately calculating the optical transformation for scenes within 1 to 2 m of the eye, which are important for understanding depth perception and vergence, requires more complex formulae and computational power. 
This article describes Image Systems Engineering Toolbox–3D (ISET3d), a set of software tools that simulate the physiological optics transformation from a 3D spectral radiance into a 2D retinal image. The software is integrated with Image Systems Engineering Toolbox–Bio (ISETBio), an open-source package that includes a number of computations related to the initial stages of visual encoding (Brainard et al., 2015; Cottaris, Jiang, Ding, Wandell, & Brainard, 2019; Kupers, Carrasco, & Winawer, 2019). The initial ISETBio implementations modeled image formation for planar or distant scenes with wavelength-dependent PSFs (Farrell, Catrysse, & Wandell, 2012; Farrell, Jiang, Winawer, Brainard, & Wandell, 2014). ISET3d uses quantitative computer graphics to model the depth-dependent effects of the physiological optics and enables the user to implement different schematic eye models for many different 3D scenes. For some of these models, the software can calculate retinal irradiance for a range of accommodative states, pupil sizes, and retinal eccentricities. 
ISETBio uses the retinal irradiance to estimate the excitations in the cone spatial mosaic. These estimated cone excitations can be helpful in understanding the role of the initial stages of vision in limiting visual performance, including accommodation as well as judgments of depth and size. In addition, these calculations can support the design and evaluation of 3D display performance (e.g., light fields, augmented reality, virtual reality), which benefit from a quantitative understanding of how display design parameters affect the retinal image and cone excitations. For example, ISET3d may help develop engineering metrics for novel volumetric displays that render rays as if they arise from a 3D scene or for multiplanar displays that use elements at multiple depths to approximate a 3D scene (Akeley, Watt, Girshick, & Banks, 2004; MacKenzie, Hoffman, & Watt, 2010; Mercier et al., 2017; Narain et al., 2015). In summary, ISET3d is a tool that can serve a range of specialties within vision science and engineering. 
Methods
ISET3d combines two key components: quantitative computer graphics based on ray tracing and a method for incorporating eye models within the computational path from scene to retinal irradiance (Figure 1). The calculations begin with a 3D graphics file that defines the geometry, surfaces, and lighting of a scene. The rendering software traces rays in the scene through a configurable eye model that is defined by a series of curved surfaces and an aperture (pupil). The rays arrive at the surface of the curved retina, where they form the retinal irradiance (optical image). Using ISETBio, the retinal irradiance can be sampled by a simulated cone photoreceptor mosaic, and an expected spatial pattern of cone excitations can be calculated. 
Figure 1
 
The computational pipeline. A three-dimensional scene, including objects and materials, is defined in the format used by Physically Based Ray Tracing (PBRT) software (Pharr et al., 2016). The rays pass through an eye model implemented as a series of surfaces with wavelength-dependent indices of refraction. The simulated spectral irradiance at the curved retinal surface is calculated in a format that can be read by ISETBio (Cottaris, Jiang, et al., 2019). That software computes cone excitations and photocurrent at the cone outer segment membrane, in the presence of fixational eye movements (Cottaris, Rieke, Wandell, & Brainard, 2018).
PBRT integration
We implement rendering computations using the open-source Physically Based Ray Tracing (PBRT) software (Pharr, Jakob, & Humphreys, 2016). This ray-tracing package calculates physically accurate radiance data (renderings) of 3D scenes, which are converted into irradiance by optical modeling. The software incorporates camera models that are defined by multielement lenses and apertures. We extend the PBRT camera models to implement eye models. These are specified by surface position, curvature, and wavelength-dependent index of refraction; aperture position and size; and retinal position and curvature. The eye model in the ISET3d implementation is sufficiently general to incorporate a variety of physiological eye models. 
The software also includes methods to help the user create and programmatically control PBRT scene files that are imported from 3D modeling software (e.g., Blender, Maya, Cinema4D). We implemented ISET3d functions to read and parse the PBRT scene files so that the user can programmatically control scene parameters, including the object positions and orientations, lighting parameters, and eye position (Lian, Farrell, & Wandell, 2018). The scenes used in this article can be read directly into ISET3d. 
The PBRT software and physiological optics extensions are complex and include many library dependencies. To simplify use of the software, we created a Docker container that includes the software and its dependencies; in this way, users can run the software on most platforms without further compilation. To run the software described here, the user installs Docker and the ISET3d and ISETBio MATLAB (MathWorks, Natick, MA) toolboxes. 
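A typical session is short. The sketch below is illustrative; the scene name and the sceneEye property and method names reflect our reading of the toolbox and should be checked against the current ISET3d documentation.

```matlab
% Illustrative session (assumes Docker and the ISET3d/ISETBio MATLAB
% toolboxes are installed). The sceneEye property and method names are
% our reading of the toolbox and may differ in current releases.
thisEye = sceneEye('chessSet');    % load a packaged PBRT scene
thisEye.accommodation = 1/0.28;    % focus at 28 cm, in diopters
thisEye.pupilDiameter = 4;         % pupil diameter in mm
thisEye.fov = 30;                  % horizontal field of view in degrees
oi = thisEye.render;               % PBRT ray trace runs inside Docker
ieAddObject(oi); oiWindow;         % view the rendered retinal irradiance
```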
PBRT modifications
Ray-tracing computations typically cast rays from sample points on the image plane toward the scene; in this application, the image plane is the curved inner segment layer of the retina. Rays from the retina are directed toward the posterior surface of the lens. As they pass through the surfaces and surrounding medium of the physiological optics, they are refracted based on Snell's law and the angle and position of each surface intersection (Snell's Law, 2003). To model these optical effects, we extended the main distribution of PBRT in several ways. 
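The per-surface refraction step can be written compactly in vector form. The sketch below is generic MATLAB (the PBRT implementation itself is C++); it is shown only to make the geometry concrete.

```matlab
function tRay = refractRay(iRay, normal, n1, n2)
% Refract a unit ray direction iRay at a surface with unit normal
% 'normal' (pointing toward the incident side), using the vector form
% of Snell's law; n1 and n2 are the indices of refraction on the
% incident and transmitted sides. Returns [] on total internal
% reflection.
    eta   = n1 / n2;
    cosI  = -dot(normal, iRay);
    sinT2 = eta^2 * (1 - cosI^2);
    if sinT2 > 1, tRay = []; return; end   % total internal reflection
    cosT  = sqrt(1 - sinT2);
    tRay  = eta * iRay + (eta * cosI - cosT) * normal;
end
```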
First, the original lens-tracing implementation was modified to account for the specific surface properties of physiological eye models. We achieved this by converting the flat film plane into the curved retina surface and introducing conic (and biconic) constants to the existing lens surface implementation. Next, each traced ray is assigned a wavelength, which enables the calculation to account for the wavelength-dependent index of refraction of each ocular medium and to produce chromatic aberration effects in the retinal image. Lastly, we implement diffraction modeling, as described in a later section. Once the ray exits the physiological optics and enters the scene, standard path-tracing techniques within PBRT calculate the interaction between the ray, the material properties of the scene assets, and the lighting. 
The modified PBRT calculation returns a multispectral retinal irradiance in relative units. To convert to absolute physical units, we set the spatial mean of the retinal irradiance equal to 1 lux times the pupil area (mm²). The scale factor is applied to the spectral irradiance; its value is chosen so that the mean illuminance has the desired value (e.g., 5 lux for a 5-mm² pupil). This level may be adjusted for other analyses, such as modeling scenes with different illumination levels. There are additional methods for bringing ray-traced scenes into calibrated units (Heasly, Cottaris, Lichtman, Xiao, & Brainard, 2014). 
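A sketch of this normalization follows, assuming the spectral irradiance is in energy units and that the wavelength samples 'wave' (nm), the photopic luminosity function 'vLambda', and 'pupilDiameterMm' are available.

```matlab
% Scale the spectral irradiance so that the spatially averaged
% illuminance equals 1 lux times the pupil area in mm^2 (a sketch).
% 'irradiance' is rows x cols x nWave in W/m^2/nm; 'vLambda' is the
% photopic luminosity function sampled at 'wave'.
pupilArea = pi * (pupilDiameterMm/2)^2;          % mm^2
targetLux = 1 * pupilArea;                       % desired mean illuminance
dWave     = wave(2) - wave(1);                   % nm per sample
luxImage  = 683 * sum(irradiance .* reshape(vLambda,1,1,[]), 3) * dWave;
scaleFactor = targetLux / mean(luxImage(:));
irradiance  = irradiance * scaleFactor;
```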
Lens transmittance
The transmittance of the human lens varies strongly with wavelength (Stockman, 2001). As a result, there is a striking color difference between the rendering of the scene radiance and the retinal irradiance (Figure 2). ISET3d applies the lens transmittance when calculating the retinal irradiance. The assumed lens density is a parameter that can be set. The macular pigment, the other major inert pigment, is not included in the ISET3d representation. Rather, the macular pigment is part of the ISETBio calculation that converts the retinal irradiance into the cone excitations. 
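Applying the transmittance is a per-wavelength scaling; in sketch form, with 'lensDensity' (the lens pigment optical density sampled at 'wave') as an assumed input:

```matlab
% Apply a wavelength-dependent lens transmittance to the retinal
% spectral irradiance (a sketch; 'lensDensity' is an assumed input).
lensTransmittance = 10 .^ (-lensDensity);    % optical density -> transmittance
for ww = 1:numel(wave)
    irradiance(:,:,ww) = irradiance(:,:,ww) * lensTransmittance(ww);
end
```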
Figure 2
 
Lens transmittance. The two rendered images illustrate the impact of lens transmittance on the rendered image. Without including lens transmittance (A), the images are relatively neutral in color; including lens transmittance reduces the short-wavelength photons, and images have a yellow cast (B). The inset at the top is the standard lens transmittance for a 20-year-old.
Diffraction
Diffraction is a significant contributor to physiological optics blur for in-focus points when the pupil diameter is near 2 mm (Wandell, 1995). When using computer graphics software to ray trace or render an image, wave phenomena are typically not modeled, and the effects of diffraction are not included. Hence, we added a diffraction module to the PBRT code, based on the technique of Heisenberg uncertainty ray bending (HURB; Freniere, Groot Gregory, & Hassler, 1999). The HURB approximation of diffraction is computationally efficient and fits well into the ray-tracing framework. HURB is a stochastic model: As each ray passes through the limiting aperture of the optical system, the algorithm randomizes the ray direction. The randomization function is a bivariate Gaussian in which the largest variance (major axis) is oriented in the direction of the closest aperture point. The sizes of the variance terms depend on the distance between the aperture and the ray, as well as the ray's wavelength. The degree of randomization approximates optical blur due to diffraction. As the pupil size decreases, relatively more rays come near the edge of the aperture and are scattered at larger angles, thus increasing rendered optical blur. 
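A sketch of the HURB step follows. The λ/(2πd) scaling and the axis construction reflect our reading of Freniere et al. (1999) and are assumptions, not the exact PBRT implementation.

```matlab
function d = hurbJitter(d, aMajor, aMinor, distMajor, distMinor, lambda)
% Sketch of the HURB direction jitter at the limiting aperture.
% aMajor: unit vector (in the pupil plane) toward the closest aperture
% edge point; aMinor: orthogonal unit vector; distMajor/distMinor:
% ray-to-edge distances along those axes, in the same units as lambda.
% The lambda/(2*pi*dist) scaling is an assumption based on our reading
% of Freniere et al. (1999).
    sMajor = lambda / (2*pi*distMajor);   % more bending near the edge
    sMinor = lambda / (2*pi*distMinor);
    d = d + (sMajor*randn)*aMajor + (sMinor*randn)*aMinor;
    d = d / norm(d);                      % keep a unit direction
end
```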
Results
First, we validate the ISET3d implementation by comparing the computed modulation transfer function of a schematic eye with calculations and measurements from other sources. Second, we show images modeled using three different eye models. Third, we illustrate the impact of pupil size, accommodation, longitudinal chromatic aberration (LCA), and transverse chromatic aberration (TCA) on the retinal image. Finally, we incorporate ISETBio into the calculation to show how image formation affects foveal cone excitations. 
Validation
To validate the ISET3d computations, we compare the on-axis, polychromatic modulation transfer function (MTF) of the Navarro schematic eye (Escudero-Sanz & Navarro, 1999) with the MTF computed using Zemax. The ISET3d MTF is derived by ray tracing a slanted edge, one side black and the other side white, through the Navarro eye model. The irradiance data of the rendered edge are analyzed at each wavelength to calculate the MTF (Williams & Burns, 2014). The wavelength-specific MTFs are combined across wavelengths using the photopic luminosity function weighting. The MTFs calculated using ISET3d and Zemax are very similar (Figure 3). 
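The core of the slanted-edge analysis, for a single wavelength plane, is sketched below; a production implementation (e.g., ISO 12233-style) uses the edge slant to build a finely oversampled edge profile before differentiating. 'edgeImage' is an assumed input with a near-vertical edge.

```matlab
% Sketch of the slanted-edge MTF estimate for one wavelength plane.
edgeProfile = mean(edgeImage, 1);       % pool rows to an edge profile
lsf = diff(edgeProfile);                % line-spread function
lsf = lsf / sum(lsf);                   % normalize to unit area
mtf = abs(fft(lsf));
mtf = mtf(1:floor(end/2)) / mtf(1);     % one-sided, normalized to DC
% Polychromatic MTF: weight each wavelength's MTF by the photopic
% luminosity function (and the stimulus spectrum), sum across
% wavelengths, and normalize by the total weight.
```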
Figure 3
 
On-axis modulation transfer functions (MTF) of the human optics (pupil diameter of 3 mm). (A) A comparison of theoretical and empirical measurements. Two curves were calculated at the point of highest optical acuity for the Navarro eye model using ISET3d and Zemax. The third curve is based on a formula derived by A. B. Watson (see Watson, 2013). The gray-shaded region is the range of estimates derived from adaptive optics measurements along the primary line of sight (Thibos et al., 2002). The spectral radiance of the simulated stimulus is an equal energy polychromatic light, and the curves from different wavelengths were combined using a photopic luminosity function weighting. The differences between the curves from ISET3d, Zemax, and Watson are smaller than the measured individual variation in optical MTFs. (B) The MTFs for the Navarro eye, calculated using ISET3d with diffraction (see the Methods section), are shown separately for different wavelengths.
The polychromatic on-axis MTF also closely matches an analytical formulation of the mean, radial MTF derived from empirical measurements (Watson, 2013). The analytical formula is based on wavefront aberration data collected from 200 eyes and is a function of pupil diameter. To assess whether the small differences between ISET3d, Zemax, and the analytical formula are meaningful, we compare these curves with the range of MTFs computed for 100 randomly generated virtual eyes using a statistical model of the wavefront aberration function (Thibos, Bradley, & Xin, 2002). The ISET3d and Zemax calculations are close to the derived formula. All three are well within the population variation. 
The luminosity-weighted MTF is an incomplete description of the human line-spread function in three important ways. First, it fails to make explicit the wavelength dependence of the MTF, which is quite strong (Figure 3B). Second, the MTF is shown for a circularly symmetric optical system; effects of astigmatism are omitted. Third, the MTF expresses the contrast reduction at every spatial frequency but omits frequency-dependent phase shifts. In model eyes, these phase shifts are often assumed to be zero, but in real eyes they are substantial. Such phase shifts can be modeled by implementing more complex surface shapes in the eye models. 
Eye model comparisons
There are two approaches to defining schematic eye models. Anatomical models define a correspondence between model surfaces and the main surfaces of the eye's optics, such as the anterior or posterior surface of the cornea and lens, and the model surface properties are set to match experimental measurements (Polans, Jaeken, McNabb, Artal, & Izatt, 2015). For example, the Gullstrand No. 1 exact eye models the lens as two nested, homogeneous shells with different refractive indices (Atchison & Thibos, 2016; Gullstrand, 1909). Alternatively, phenomenological models eschew a correspondence between anatomy and model surfaces. Instead, these models are designed to approximate the functional properties of image formation. For example, reduced or simplified eyes such as the Gullstrand-Emsley eye have fewer surfaces than the human eye (Emsley, 1948). 
Both anatomical and phenomenological models are parameterized by the positions along the optical axis, shapes, thicknesses, and wavelength-dependent refractive indices of the series of surfaces. The variation of refractive index with wavelength is called the dispersion curve, and its approximate slope is summarized by the Abbe number; this wavelength-dependent refractive index models LCA, which is the largest of the eye's aberrations (Wandell, 1995; Wyszecki & Stiles, 1982). 
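For example, given a medium's index at the d-line and its Abbe number, a two-term Cauchy curve reproduces the dispersion to first order. This generic sketch is a common approximation, not the dispersion model of any particular schematic eye; 'nD' and 'abbeNumber' are assumed inputs.

```matlab
% Convert an index at the d-line (nD) and an Abbe number into a
% two-term Cauchy dispersion curve, n(lambda) = A + B/lambda^2.
lambdaD = 587.6; lambdaF = 486.1; lambdaC = 656.3;   % nm
B = (nD - 1) / abbeNumber / (1/lambdaF^2 - 1/lambdaC^2);
A = nD - B / lambdaD^2;
n = @(lambda) A + B ./ lambda.^2;    % index at wavelength lambda (nm)
```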
Eye models have been developed and analyzed using powerful optics analysis and design software such as Zemax (Zemax LLC, Kirkland, WA). Model predictions can be compared with human physiological optics for many different measures (Bakaraju, Ehrmann, Papas, & Ho, 2008). These include comparing the Zernike polynomial coefficients that represent the wavefront aberrations, the modulation transfer function, or the wavelength-dependent PSF. Some models seek to match performance near the optical axis, and others seek to account for a larger range of eccentricities. Because of their emphasis on characterizing the optics, such packages have limited image formation capabilities, typically restricting their analyses to points or 2D images. 
The ISET3d implementation builds on these 2D measures by inserting the eye model into the 3D PBRT calculations; this enables us to calculate the impact of the eye model on relatively complicated 3D scene radiances. ISET3d models the physiological optics as a series of curved surfaces with wavelength-dependent indices of refraction, a pupil plane, and a specification of the size and shape of the eye. At present, the implementation specifies surface positions, sizes, and spherical or biconic surface shapes. These parameters are sufficient to calculate predictions for multiple eye models. The parameters for the three model eyes are listed in the Appendix: the Arizona eye (Schwiegerling, 2004), the Navarro eye (Escudero-Sanz & Navarro, 1999), and the Le Grand eye (Atchison, 2017; El Hage & Le Grand, 1980). Figure 4 shows a scene rendered through each of these models, and Figure 5 shows the on-axis, polychromatic MTF calculated using ISET3d, for each model eye at 3-mm and 4-mm pupil diameters. The three schematic eyes perform differently because the authors optimized their models using different data and with different objectives. For example, the Navarro eye was designed to match in vivo measurements from Howcroft and Parker (1977) and Kiely, Smith, and Carney (1982), whereas the Le Grand eye reproduces first-order, Gaussian properties of an average eye (Escudero-Sanz & Navarro, 1999). 
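To illustrate the parameterization, a schematic eye reduces to a short table of surfaces. The values below are the commonly tabulated parameters of the unaccommodated Navarro eye; the authoritative values for this article are in the Appendix.

```matlab
% A schematic eye as a surface list (a sketch). Columns: radius of
% curvature (mm), conic constant, thickness to the next surface (mm),
% and index of refraction of the following medium. These are the
% commonly tabulated unaccommodated Navarro eye values; consult the
% Appendix for the exact parameters used in this article.
%              radius   conic    thickness   n
navarroEye = [  7.72   -0.26      0.55     1.367  ;   % cornea, anterior
                6.50    0         3.05     1.3374 ;   % cornea, posterior (aqueous)
               10.20   -3.1316    4.00     1.42   ;   % lens, anterior
               -6.00   -1.0      16.32     1.336  ];  % lens, posterior (vitreous)
% The retina is modeled as a curved surface (radius about -12 mm).
```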
Figure 4
 
Retinal irradiance calculated using three schematic eye models: (A) Arizona eye (Schwiegerling, 2004), (B) Navarro (Escudero-Sanz & Navarro, 1999), and (C) Le Grand (El Hage & Le Grand, 1980). The letters are placed at 1.4 (0.714), 1.0 (1), and 0.6 (1.667) diopters (meters) from the eye. The eye models are focused at infinity with a pupil size of 4 mm. Variations in the sharpness of the three letters illustrate the overall sharpness and the depth of field. The images are renderings of the spectral irradiance into sRGB format.
Figure 5
 
A comparison of on-axis, polychromatic MTF from three different model eyes, calculated using ISET3d. The top figure corresponds to a 3-mm pupil diameter; the bottom figure corresponds to a 4-mm pupil diameter. The gray-shaded region is the range of estimates derived from on-axis adaptive optics measurements (Thibos et al., 2002). The Arizona and Navarro eye models are mostly within the range of the measurements; the Le Grand eye is outside the range.
For many analyses in this article, we use the Navarro eye model, although the same calculations can be repeated with any model eye that can be described by the set of parameters implemented in the ray tracing. The selection of model might depend on the application; for example, analysis of a wide field-of-view display requires a more computationally demanding model that performs accurately at wide angles. The goal of this article is not to recommend a particular eye model but to provide software tools to help investigators implement, design, evaluate, and use eye models. 
Pupil size: MTF
The optical performance of the eye changes as the pupil size changes. We measure the change by calculating the MTF using a simulation of the slanted-edge test (Williams & Burns, 2014). The MTF is measured from a patch centered on the optical axis (i.e., on-axis) and calculated for an equal-energy spectral power distribution, with the final result weighted across wavelengths by the luminosity function, as in previous sections. The values shown in Figure 6 for the Navarro eye model are qualitatively consistent with the early measurements of the effect of pupil aperture on the width of the line spread function (Campbell & Gubisch, 1966; Wandell, 1995). 
Figure 6
 
The variation of a model eye modulation transfer function (MTF) with pupil diameter. The on-axis MTF was computed using the Navarro eye model. The best performance is for a 3-mm pupil diameter. The smallest natural pupil size is about 2 mm (Watson & Yellott, 2012). The simulations include the HURB calculation (see the Methods section) and show that the irradiance from a 1-mm artificial pupil will be affected by diffraction. The MTF is much lower quality for a large pupil diameter (6 mm).
Decreasing the pupil diameter from 6 mm increases the optical quality up to about 2.5 mm. As the pupil diameter decreases below 2 mm, a large fraction of the rays passes near the pupil edge. In this range, the scattering effects from the HURB model of diffraction dominate, decreasing the image quality. The HURB model approximates the central part of the diffraction effect (Airy disk), but it does not extend to the first ring. 
Pupil size: Depth of field
The sharpness of an object's rendered edge depends on the distance between the object and the accommodation plane. The depth of field is a qualitative property that indicates the range of distances over which objects are in relatively good focus. The smaller the pupil diameter, the larger the depth of field: the depth of field for a large (6 mm) pupil is much smaller than for a small (2 mm) pupil. Figure 7 shows the depth of field for three different pupil diameters, as the eye remains accommodated to a fixed distance. 
Figure 7
 
Variations in depth of field calculated for the Navarro eye model with different pupil diameters. Pupil diameters are (A) 2 mm, (B) 4 mm, and (C) 6 mm. In all cases, the focus is set to a plane at 3.5 diopters (28 cm), which is the depth of the pawn shown in the red box. The pawn remains in sharp focus, whereas the chess pieces in front and behind are out of focus; the depth of field decreases as pupil size increases. The horizontal field of view is 30°.
Accommodation
Schematic eyes model accommodation by changing the lens curvature, thickness, and index of refraction according to the formulae provided by the models (e.g., Navarro or Arizona). These parameters can be simulated in ISET3d; hence, we can compute the effect of accommodation on the retinal irradiance. The impact of accommodation using the Navarro eye with a 4-mm pupil diameter is shown in Figure 8. The focus and defocus of the numbers, presented at 100-, 200-, and 300-mm distances from the eye, change substantially as the accommodation of the model eye changes. 
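Using the sceneEye-style interface sketched in the Methods section, the Figure 8 calculation reduces to a loop over accommodative states (property names again assumed):

```matlab
% Render the scene at the three accommodative states of Figure 8
% (uses the hypothetical sceneEye interface sketched earlier).
targetDistances = [0.100, 0.200, 0.300];              % meters
oi = cell(1, numel(targetDistances));
for ii = 1:numel(targetDistances)
    thisEye.accommodation = 1 / targetDistances(ii);  % diopters
    oi{ii} = thisEye.render;
end
```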
Figure 8
 
Retinal images for the Navarro eye model accommodated to three target distances: (A) 100 mm, (B) 200 mm, and (C) 300 mm. The images are calculated using a 4-mm pupil diameter. The horizontal field of view is 30°.
Longitudinal chromatic aberration
The LCA at the eye's focal plane has been measured several times (Bedford & Wyszecki, 1957; Wald & Griffin, 1947; Wyszecki & Stiles, 1982). The conversion from the LCA measurements (in diopters) to the wavelength-dependent line spread in the focal plane has been worked out (Marimont & Wandell, 1994), and the calculation is implemented in ISETBio for optical models that employ shift-invariant, wavelength-dependent PSFs. 
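For orientation, a widely used analytic approximation to human LCA (the "chromatic eye" of Thibos and colleagues) expresses defocus in diopters as a function of wavelength. The constants below are as commonly quoted; they are an assumption for illustration, not the ISET3d dispersion model.

```matlab
% Defocus (diopters) as a function of wavelength; constants as
% commonly quoted for the "chromatic eye" approximation (assumed).
D = @(lambdaUm) 1.7312 - 0.63346 ./ (lambdaUm - 0.21410);  % lambda in um
% Example: defocus of 450-nm light relative to 589 nm
deltaD = D(0.450) - D(0.589);    % roughly -1 diopter (myopic shift)
```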
ISET3d extends the planar LCA calculation to account for depth-dependent effects. The color fringes at high-contrast edges depend on their distance from the focal plane (Figure 9, middle column). The spread of the wavelengths near an edge varies as the eye accommodates to different depth planes. The wavelength-dependent spread at an edge in the focal plane is large for short wavelengths and moderate for long wavelengths (middle). Accommodating to a more distant plane changes the color fringe at the same edge to red/cyan (top); accommodating to a closer plane changes the chromatic fringe at the edge to blue/orange (bottom). 
Figure 9
 
Longitudinal chromatic aberration. A scene including three letters at 1.8, 1.2, and 0.6 diopters (0.56, 0.83, 1.67 m) is the input (left). The scene is rendered three times through the Navarro model eye (4-mm pupil) to form a retinal image with the accommodation set to the different letter depths. The chromatic aberration at the 0.83 m (letter B) depth plane is rendered, showing how the color fringing changes as the focal plane is varied. The graphs at the right show the spectral irradiance across the edge of the target for several different wavelengths.
In these examples, the middle wavelengths spread somewhere between 5 and 20 min of arc; the short-wavelength light spreads over a larger range, from 10 to 40 min of arc. This spread is large enough to be resolved by the cone mosaic near the central retina, and the information in a single image is sufficient to guide the direction of accommodation needed to bring the front or back edge into focus. Experiments confirm that manipulating such fringes does drive accommodation in the human visual system (Cholewiak, Love, Srinivasan, Ng, & Banks, 2017). 
Transverse chromatic aberration
TCA characterizes the wavelength-dependent magnification of the image (Thibos, Bradley, Still, Zhang, & Howarth, 1990). TCA arises from several optical factors, including the wavelength-dependent refraction of the surfaces and the geometric relationship between the pupil position, scene point, and optical defocus. In any small region of the image, the TCA magnification appears as a spatial displacement between the wavelength components of the irradiance; because the TCA is a magnification, the displacement size increases with eccentricity. 
In the human eye, the wavelength-dependent shift between 430 nm and 770 nm is approximately 6 arcmin at 15° eccentricity (Winter et al., 2016). This shift is larger than the vernier acuity threshold of about 1 to 2 arcmin at the same eccentricity (Whitaker, Rovamo, MacVeigh, & Mäkelä, 1992). Although modest, the TCA displacement is large enough to be perceived in the periphery (Newton, 1984; Thibos et al., 1990). 
Ray tracing through a schematic eye simulates the effect of TCA. This effect can be clearly seen when we calculate the retinal image of a large rectangular grid composed of white lines on a black background for the Navarro eye model (Figure 10A). The rectangular grid is distorted because of the curvature of the retina. The wavelength-dependent magnification (TCA) is evident in the local displacements of different wavelengths. We calculate the shift size between short and long wavelengths at 15° eccentricity to be approximately 10 arcmin. This is only slightly larger than empirically measured TCA, even though the Navarro eye was not specifically designed to match experimental TCA data. A more thorough comparison of the Navarro eye with the available literature on TCA can be found in Escudero-Sanz and Navarro (1999). 
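One way to quantify the shift from the rendered irradiance is a patch cross-correlation between a short- and a long-wavelength plane. In the sketch below, 'rowIdx', 'colIdx', the wavelength indices, and 'arcminPerPixel' are assumed inputs.

```matlab
% Estimate the TCA displacement at one retinal location by
% cross-correlating a short- and a long-wavelength patch of the
% rendered irradiance (a sketch).
patchS = irradiance(rowIdx, colIdx, idx430);   % 430-nm patch at 15 deg
patchL = irradiance(rowIdx, colIdx, idx770);   % 770-nm patch, same place
c = normxcorr2(patchL, patchS);                % normalized cross-correlation
[~, iMax] = max(c(:));
[yPk, xPk] = ind2sub(size(c), iMax);
shiftPix = [yPk, xPk] - size(patchL);          % displacement in pixels
shiftArcmin = shiftPix * arcminPerPixel;       % convert to arcmin
```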
Figure 10
 
Transverse chromatic aberration (TCA) at different eccentricities and pupil positions. (A) A white grid 1 m distant is rendered through the Navarro eye, which is accommodated to the grid plane. The curvature of the retina is seen in the distortion of the grid. (B, C) The images in each row show the TCA at 0°, 8°, and 15° eccentricity. The two rows were calculated after modifying the anterior chamber depth (ACD) of the Navarro eye within anatomical limits (Boehm, Privitera, Schmidt, & Roorda, 2019; Rabsilber, Khoramnia, & Auffarth, 2006). (B) The ACD is set to 3.29 mm. (C) The ACD is set to 2.57 mm. The TCA is larger when the iris and lens are closer to the posterior corneal surface (ACD is smaller).
Cone mosaic excitations
The ISET3d irradiance calculations can be used as input to the ISETBio methods. These methods transform the retinal irradiance into cone excitations. ISETBio can simulate a wide range of cone mosaic parameters including (a) the hexagonal spacing, (b) the size of an S-cone free zone in the central fovea, (c) the variation in cone spacing with eccentricity, (d) the variation in inner segment aperture size and outer segment length with eccentricity, (e) the macular pigment density, (f) the cone photopigment density, and (g) eye movement patterns. 
The rate of cone excitations for a briefly presented light-dark edge at the fovea is shown in Figure 11. In this example, the position of the fovea was aligned with the region of highest optical acuity (smallest point spread). For the schematic eyes used here, the highest acuity is on the optical axis. The overall cone density and aperture sizes change significantly as one moves from the central fovea to just a few tenths of a degree in the periphery. The exclusion of the S-cones in the central fovea is also quite striking, as are the very low absorption rates of the S-cones, shown by the relatively large black dots that represent them (Figure 11D). Their absorption rate is low in large part because the lens absorbs much of the short-wavelength light. Were the S-cone apertures as small as the apertures in the very central fovea, they would receive very few photons. Not shown are the effects of small eye movements, another feature of the visual system that may be simulated by the ISETBio code (Cottaris, Jiang, et al., 2019). 
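The handoff from ISET3d to ISETBio is brief. The sketch below uses ISETBio class and property names as we understand them (a default coneMosaic; coneMosaicHex provides the eccentricity-varying hexagonal mosaic) and should be checked against the toolbox documentation.

```matlab
% From rendered retinal irradiance (oi) to cone excitations (a sketch
% using ISETBio class names as we understand them).
cm = coneMosaic;                 % coneMosaicHex gives the hex mosaic
cm.setSizeToFOV(1);              % 1-deg patch
cm.integrationTime = 0.005;      % 5-ms integration, as in Figure 11
cm.compute(oi);                  % oi: retinal irradiance from ISET3d
excitations = cm.absorptions;    % Poisson-sampled excitation counts
```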
Figure 11
 
Cone mosaic excitations in response to an edge presented briefly at the fovea. (A) Longitudinal chromatic aberration spreads the short-wavelength light substantially. (B) The cone mosaic samples the retinal irradiance nonuniformly, even in the small region near the central fovea. The differences include cone aperture size, changes in overall sampling density, and changes in the relative sampling density of the three cone types. (C) The number of cone excitations per 5 ms for a line spanning the edge and near the center of the image. The variation in the profile is due to Poisson noise and dark noise (250 spontaneous excitations/cone/second). (D) The number of cone excitations per 5 ms across a small patch near the central fovea. The dark spots are the locations of simulated short-wavelength cones.
It is difficult to have a simple intuition about how these many parameters combine to affect cone excitations. We hope the ability to perform these computations will help clarify system performance and assist investigators in developing intuitions. 
Discussion
Eye models implemented in the ISET3d and ISETBio software calculate the retinal image and cone excitations of 3D scenes. Knowledge of this encoding may be useful for basic research about depth perception or for applied research into the image quality of novel displays. Accurate calculations of the retinal image from 3D scenes depend strongly on factors we reviewed, including accommodation, pupil diameter, and chromatic aberration. Accounting for these factors is a necessary starting point for building computational models of depth perception and metrics for image quality of 3D displays. 
Ray tracing is accurate at depth occlusions
The ray-tracing methods we describe accurately capture many features of image formation, but they are compute intensive. In some cases, the retinal irradiance can be calculated using simpler methods. It is worth considering when ray-tracing methods are required and when simpler calculations suffice. 
The simplest alternative to full ray tracing is to calculate the retinal irradiance by summing the light contributed from each visible scene point (Barsky, Bargteil, Garcia, & Klein, 2002). In this approximation, the light from each scene point is spread in the retinal image according to a PSF; the point spread depends on the distance between the scene point and the focal plane. For some scenes, this approach may suffice. An example is a large planar surface, such as a flat-panel display. 
In scenes that include nearby occluding objects, this approach can be inaccurate. For example, consider a scene composed of two planes separated in depth, with the optics in focus for the near plane (Figure 12). We compare retinal images that are computed by convolving each plane separately with the appropriate depth-dependent point spread and summing (Figure 12A) or by ray tracing (Figure 12B). The two results differ significantly near the occlusion (Figure 12C). 
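The layered approximation of Figure 12A can be sketched in a few lines; 'nearPlane', 'farPlane', the visibility mask 'farMask', and the two depth-dependent PSFs are assumed inputs.

```matlab
% Sketch of the layered approximation of Figure 12A: blur each depth
% plane with its own depth-dependent PSF and sum. The sum ignores rays
% from the far plane that the near plane partially occludes, which is
% why it fails near the boundary.
nearBlur = conv2(nearPlane, psfNear, 'same');           % in-focus plane
farBlur  = conv2(farPlane .* farMask, psfFar, 'same');  % visible far region
approx   = nearBlur + farBlur;      % compare with the ray-traced image
```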
Figure 12
 
Retinal image calculations near occluding boundaries. The scene comprises two planes with identical checkerboard patterns, placed at 0.5 m and 2 m from the eye. The eye is accommodated to the near plane. (A) The two planes are imaged separately, using the correct point spread for each depth. The lens diagrams show the rays for each plane before summation. The irradiance data are then summed. (B) The two planes are rendered using ISET3d ray tracing. Note that some rays from the far plane are occluded by the near plane. The rendered retinal irradiance is shown. (C) The A-B monochromatic difference image (absolute irradiance difference, summed across wavelengths) is large near the occluding boundary. The difference arises because some rays from the distant plane are occluded by the near plane.
The difference can be understood by considering the rays arriving at the lens from points on the distant plane near the depth discontinuity. A fraction of these rays is occluded by the near plane, changing the amplitude and position of the PSF from these points. 
The depth-dependent point spread calculation can be approximated in some cases, making it of interest in consumer photography and computer graphics applications (Barsky, Tobias, Chu, & Horn, 2005; Kraus & Strengert, 2007). But the calculations are not always physically accurate because the precise calculation depends on many factors, including the position of the occluding objects, the eye position, viewing direction, pupil diameter, and accommodation. To the extent that physically accurate information at depth boundaries is important for the question under study, ray tracing is preferred. 
Single-subject measurements
Schematic eyes typically represent an average performance from a population of observers. It is possible to personalize the surface parameters of a schematic eye for a single subject from adaptive optics measurements of the PSFs over a range of eccentricities (Navarro, González, & Hernández-Matamoros, 2006). Using optimization methods, the lens thickness or biconicity of the cornea can be adjusted so that the model eye matches the PSF measured in a single subject or for a standard subject (Polans et al., 2015). In this way, an eye model that reflects the properties of an individual subject can be created to estimate a personalized retinal image for 3D scenes. Currently, Zemax is used to perform the optimization from measured aberrations. Once the parameter optimization is complete, the new eye model can be implemented within ISET3d to render a personalized retinal image. 
Computation time
A key limitation of ray-tracing methods is computation time. During development and testing, we work with small, low-resolution images that take roughly a minute to render. A final, high-quality 800 × 800 resolution retinal image can take more than an hour to render on an 8-core machine. We use cloud-scale computing, sending multiple Docker containers in parallel to the cloud, to reduce the total rendering time. We used an associated toolbox, ISETCloud, to implement this parallel rendering for several calculations in this article. 
The accuracy of the modeling is governed by the number of traced rays, pixel samples, or ray bounces. Increasing these values reduces the rendering noise inherent in Monte Carlo ray tracing and increases the realism of the lighting effects. As with other ray-tracing programs, changing these options has a strong effect on computation time. The computational cost of these rendering options and methods for reducing rendering noise have been discussed thoroughly in the computer graphics literature (Pharr et al., 2016; Sen, Zwicker, Rousselle, Yoon, & Kalantari, 2015). 
Wavelength sampling also has an impact on computation time. The PBRT container represents surfaces and lights at 31 wavelengths between 400 and 700 nm, and this is the wavelength sampling of the calculated retinal irradiance. Tracing each of the 31 wavelength samples separately through the lens would make the rendering approximately 31 times slower than tracing one; because of this cost, we often trace only a subset of wavelengths. 
It is important to sample enough wavelengths to model chromatic aberration accurately. The index of refraction is a slowly varying function of wavelength, and thus, a 450-nm ray follows a very similar path through the lens as, say, a 460-nm ray. Hence, we assume that a certain band of wavelengths will follow the same path through the lens. Through experiments, we have found that 8 to 16 bands are enough to accurately capture chromatic aberration effects at the resolution of the cone mosaic. Once the rays begin interacting with the surfaces and lights, all calculations are interpolated to the 31-wavelength sampling density. 
Diffraction is also important for accuracy when the pupil diameter is very small. Simulating diffraction slightly increases the computation time, because additional calculations must be done for each ray passing through the aperture. In addition, the scattering of rays at the aperture may require the user to increase the number of rays sampled to keep rendering noise to a minimum. If the pupil diameter is large and the effect of diffraction is minimal, it is more efficient to turn off the diffraction calculation. 
Other features, such as changing the eye model, do not significantly affect the computation time of the simulation. Hence, implementing new and improved schematic models that better match empirical measurements is an inexpensive way to improve the accuracy of the simulation. 
Related work
We continue to expand the range of optics parameters included in the ISET3d eye models. The main extensions aim to clarify effects that arise from normal variation in the eye and to broaden the range of model eyes that can be implemented. We are currently adding and testing off-axis and tilted lens positions to understand the impact of decentering. We see value in adding the ability to model gradient-index surfaces and also to model a larger set of surface shapes beyond the current group of spherical and biconic. These changes require further modifications to the ray tracer but will allow users to specify a wider variety of surfaces in the lens prescription. One of the questions we consider is whether the extended modeling is likely to have an impact on the spatial mosaic of cone excitations. 
The impact of intraocular scattering on the retinal image can be significant when bright lights are present (Artal, 2014). Although our system does not currently model intraocular scattering, we hope to incorporate it in the future. We are analyzing methods of implementing scattering using the participating media options of PBRT. In this approach, users can specify materials with custom attenuation and scattering coefficients over wavelength, as well as modify the phase function, which determines the angle of scattering. One method of implementing intraocular scattering models (Chen, Jiang, Yang, & Sun, 2012) would be to find coefficients and phase functions that approximate the scattering models for the cornea and lens. We would then implement participating media within the lens-tracing portion of the simulation and associate each ocular medium with its specific scattering parameters. This feature would be a significant addition to the current simulation but is worth exploring. 
Other investigators have used ray tracing to clarify the properties of human image formation, for example, in the context of designing intraocular lenses and considering gradient-index lenses (Einighammer, Oltrup, Bende, & Jean, 2009; Schedin, Hallberg, & Behndig, 2016; Gómez-Correa, Coello, Garza-Rivera, Puente, & Chávez-Cerda, 2016). Those papers describe ideas based on ray-tracing principles, and they illustrate their calculations. A number of authors have also described software for using model eyes to calculate the impact of human optics on 3D scenes (Wei, Patkar, & Pai, 2014; Wu, Zheng, Hu, & Xu, 2011; Mostafawy, Kermani, & Lubatschowski, 1997). However, we have not found papers that provide open-source software tools that account for diffraction and spectral properties and that integrate the estimated retinal irradiance with cone mosaic calculations as we do in our tools. The ideas in the literature do provide valuable additional optics calculations that could be incorporated into the open-source and freely available software we provide (https://github.com/iset3d/wiki). 
Finally, we are committed to linking the ISET3d calculations with the ISETBio software (Brainard et al., 2015; Cottaris, Jiang, et al., 2019; Farrell et al., 2014). That software enables users to convert the retinal irradiance into the spatial pattern of excitations of the cone mosaic and to develop models of the neural circuitry response. We hope to expand the combination of tools to understand the effect of different scene radiances in the retina and brain and to use these analyses to learn more about visual perception and to create tools for visual engineering. 
Acknowledgments
The authors thank Jennifer Maxwell for her work in developing and documenting the software. This research was supported by a donation from Facebook Reality Labs to Stanford's Center for Image Systems Engineering and to the University of Pennsylvania. We thank Austin Roorda for providing us with measurements of TCA. 
Commercial relationships: none. 
Corresponding author: Trisha Lian. 
Address: Department of Electrical Engineering, Stanford University, Palo Alto, CA, USA. 
References
Akeley, K., Watt, S. J., Girshick, A. R., & Banks, M. S. (2004). A stereo display prototype with multiple focal distances. In Marks J. (Ed.). ACM SIGGRAPH 2004 papers (pp. 804–813). New York: Association for Computing Machinery.
Artal, P. (2014). Optics of the eye and its impact in vision: A tutorial. Advances in Optics and Photonics, 6, 340–367.
Artal, P. (Ed.). (2017). Handbook of visual optics, volume one: Fundamentals and eye optics. Boca Raton, FL: CRC Press.
Atchison, D. A. (2017). Schematic eyes. In Artal P. (Ed.). Handbook of visual optics, volume one (pp. 247–260). Boca Raton, FL: CRC Press.
Atchison, D. A., & Smith, G. (2005). Chromatic dispersions of the ocular media of human eyes. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 22, 29–37.
Atchison, D. A., & Thibos, L. N. (2016). Optical models of the human eye. Clinical & Experimental Optometry, 99, 99–106.
Bakaraju, R. C., Ehrmann, K., Papas, E., & Ho, A. (2008). Finite schematic eye models and their accuracy to in-vivo data. Vision Research, 48, 1681–1694.
Barsky, B. A., Bargteil, A. W., Garcia, D. D., & Klein, S. A. (2002). Introducing vision-realistic rendering. In Gibson S. & Debevec P. (Eds.), Proceedings of the 13th Eurographics Workshop on Rendering (pp. 26–28). New York: Association for Computing Machinery.
Barsky, B. A., Tobias, M. J., Chu, D. P., & Horn, D. R. (2005). Elimination of artifacts due to occlusion and discretization problems in image space blurring techniques. Graphical Models, 67, 584–599.
Bedford, R. E., & Wyszecki, G. (1957). Axial chromatic aberration of the human eye. Journal of the Optical Society of America, 47, 564–565.
Boehm, A. E., Privitera, C. M., Schmidt, B. P., & Roorda, A. (2019). Transverse chromatic offsets with pupil displacements in the human eye: Sources of variability and methods for real-time correction. Biomedical Optics Express, 10, 1691–1706.
Brainard, D., Jiang, H., Cottaris, N. P., Rieke, F., Chichilnisky, E. J., Farrell, J. E., & Wandell, B. (2015). ISETBIO: Computational tools for modeling early human vision. Imaging and Applied Optics, IT4A.4.
Campbell, F. W., & Gubisch, R. W. (1966). Optical quality of the human eye. Journal of Physiology, 186, 558–578.
Chen, Y.-C., Jiang, C.-J., Yang, T.-H., & Sun, C.-C. (2012). Development of a human eye model incorporated with intraocular scattering for visual performance assessment. Journal of Biomedical Optics, 17, 075009.
Cholewiak, S. A., Love, G. D., Srinivasan, P. P., Ng, R., & Banks, M. S. (2017). ChromaBlur: Rendering chromatic eye aberration improves accommodation and realism. ACM Transactions on Graphics, 36, 210.
Cottaris, N., Jiang, H., Ding, X., Wandell, B., & Brainard, D. H. (2019). A computational observer model of spatial contrast sensitivity: Effects of wavefront-based optics, cone mosaic structure, and inference engine. Journal of Vision, 19 (4):8, 1–27, https://doi.org/10.1167/19.4.8. [PubMed] [Article]
Cottaris, N., Rieke, F., Wandell, B., & Brainard, D. (2019). Computational observer modeling of the limits of human pattern resolution. In OSA Fall Vision Meeting, Reno.
Einighammer, J., Oltrup, T., Bende, T., & Jean, B. (2009). The individual virtual eye: A computer model for advanced intraocular lens calculation. Journal of Optometry, 2, 70–82.
El Hage, S. G., & Le Grand, Y. (1980). Optics of the eye. In El Hage S. G. & Le Grand Y. (Eds.), Physiological optics (pp. 57–69). Berlin: Springer Berlin Heidelberg.
Emsley, H. H. (1948). Visual optics. London, UK: Hatton Press.
Escudero-Sanz, I., & Navarro, R. (1999). Off-axis aberrations of a wide-angle schematic eye model. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 16, 1881–1891.
Farrell, J. E., Catrysse, P. B., & Wandell, B. A. (2012). Digital camera simulation. Applied Optics, 51, A80–A90.
Farrell, J. E., Jiang, H., Winawer, J., Brainard, D. H., & Wandell, B. A. (2014). 27.2: Distinguished paper: Modeling visible differences: The computational observer model. SID Symposium Digest of Technical Papers, 45, 352–356.
Freniere, E. R., Groot Gregory, G., & Hassler, R. A. (1999). Edge diffraction in Monte Carlo ray tracing. In R. C. Juergens (Ed.), Optical design and analysis software (Vol. 3780, pp. 151–158). Denver, CO: International Society for Optics and Photonics.
Gómez-Correa, J. E., Coello, V., Garza-Rivera, A., Puente, N. P., & Chávez-Cerda, S. (2016). Three-dimensional ray tracing in spherical and elliptical generalized Luneburg lenses for application in the human eye lens. Applied Optics, 55, 2002–2010.
Gullstrand, A. (1909). Appendix II. Handbuch der physiologischen Optik, 1, 351–352.
Heasly, B. S., Cottaris, N. P., Lichtman, D. P., Xiao, B., & Brainard, D. H. (2014). RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research. Journal of Vision, 14 (2): 6, 1–22, https://doi.org/10.1167/14.2.6. [PubMed] [Article]
Howcroft, M. J., & Parker, J. A. (1977). Aspheric curvatures for the human lens. Vision Research, 17, 1217–1223.
Kiely, P. M., Smith, G., & Carney, L. G. (1982). The mean shape of the human cornea. Optica Acta: International Journal of Optics, 29, 1027–1040.
Kraus, M., & Strengert, M. (2007). Depth-of-field rendering by pyramidal image processing. Computer Graphics Forum, 26, 645–654.
Kupers, E. R., Carrasco, M., & Winawer, J. (2019). Modeling visual performance differences with polar angle: A computational observer approach. PLoS Computational Biology, 15 (5): e1007063.
Lian, T., Farrell, J., & Wandell, B. (2018). Image systems simulation for 360° camera rigs. Electronic Imaging, 2018, 353-1–353-5.
MacKenzie, K. J., Hoffman, D. M., & Watt, S. J. (2010). Accommodation to multiple focal plane displays: Implications for improving stereoscopic displays and for accommodation control. Journal of Vision, 10 (8): 22, 1–20, https://doi.org/10.1167/10.8.22. [PubMed] [Article]
Marimont, D. H., & Wandell, B. A. (1994). Matching color images: The effects of axial chromatic aberration. Journal of the Optical Society of America A, 11, 3113–3122.
Mercier, O., Sulai, Y., MacKenzie, K., Zannoli, M., Hillis, J., Nowrouzezahrai, D., et al. (2017). Fast gaze-contingent optimal decompositions for multifocal displays. ACM Transactions on Graphics, 36, 237:1–237:15.
Mostafawy, S., Kermani, O., & Lubatschowski, H. (1997). Virtual eye: Retinal image visualization of the human eye. IEEE Computer Graphics and Applications, 17, 8–12.
Narain, R., Albert, R. A., Bulbul, A., Ward, G. J., Banks, M. S., & O'Brien, J. F. (2015). Optimal presentation of imagery with focus cues on multi-plane displays. ACM Transactions on Graphics, 34, 59:1–59:12.
Navarro, R., Artal, P., & Williams, D. R. (1993). Modulation transfer of the human eye as a function of retinal eccentricity. Journal of the Optical Society of America A, 10, 201–212.
Navarro, R., Gonzalez, L., & Hernández-Matamoros, J. L. (2006). On the prediction of optical aberrations by personalized eye models. Optometry and Vision Science, 83, 371–381.
Newton, I. (1984). The optical papers of Isaac Newton Vol. I: The optical lectures 1670-1672 (A. E. Shapiro., Ed.). Cambridge, UK: Cambridge University Press.
Pharr, M., Jakob, W., & Humphreys, G. (2016). Physically based rendering: From theory to implementation. Burlington, MA: Morgan Kaufmann.
Polans, J., Jaeken, B., McNabb, R. P., Artal, P., & Izatt, J. A. (2015). Wide-field optical model of the human eye with asymmetrically tilted and decentered lens that reproduces measured ocular aberrations. Optica, 2, 124–134.
Rabsilber, T. M., Khoramnia, R., & Auffarth, G. U. (2006, March). Anterior chamber measurements using Pentacam rotating Scheimpflug camera. Journal of Cataract and Refractive Surgery, 32, 456–459.
Schedin, S., Hallberg, P., & Behndig, A. (2016). Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus. Applied Optics, 55, 507–514.
Schwiegerling, J. (2004). Field guide to visual and ophthalmic optics. Bellingham, WA: SPIE Press.
Sen, P., Zwicker, M., Rousselle, F., Yoon, S.-E., & Kalantari, N. K. (2015). Denoising your Monte Carlo renders: Recent advances in image-space adaptive sampling and reconstruction. In ACM SIGGRAPH 2015 Courses (p. 11). New York: Association for Computing Machinery.
Snell's law. (2003, September). Retrieved from https://en.wikipedia.org/wiki/Snell%27s_law (accessed February 17, 2019).
Stockman, A. (2001). Colour and vision research laboratory. Retrieved from http://www.cvrl.org/ (accessed February 19, 2019).
Thibos, L. N., Bradley, A., Still, D. L., Zhang, X., & Howarth, P. A. (1990). Theory and measurement of ocular chromatic aberration. Vision Research, 30, 33–49.
Thibos, L. N., Bradley, A., & Xin, H. (2002). A statistical model of the aberration structure of normal, well-corrected eyes. Ophthalmic & Physiological Optics, 22, 427–433.
Wald, G., & Griffin, D. R. (1947). The change in refractive power of the human eye in dim and bright light. Journal of the Optical Society of America, 37, 321–336.
Wandell, B. A. (1995). Foundations of vision. Sunderland, MA: Sinauer Associates.
Watson, A. B. (2013). A formula for the mean human optical modulation transfer function as a function of pupil size. Journal of Vision, 13 (6): 18, 1–11, https://doi.org/10.1167/13.6.18. [PubMed] [Article]
Watson, A. B., & Yellott, J. I. (2012). A unified formula for light-adapted pupil size. Journal of Vision, 12 (10): 12, 1–16, https://doi.org/10.1167/12.10.12. [PubMed] [Article]
Wei, Q., Patkar, S., & Pai, D. K. (2014, May). Fast ray-tracing of human eye optics on graphics processing units. Computer Methods and Programs in Biomedicine, 114, 302–314.
Whitaker, D., Rovamo, J., MacVeigh, D., & Mäkelä, P. (1992). Spatial scaling of vernier acuity tasks. Vision Research, 32, 1481–1491.
Williams, D., & Burns, P. D. (2014, January). Evolution of slanted edge gradient SFR measurement. In Triantaphillidou S. & Larabi M.-C. (Eds.), Image quality and system performance XI (Vol. 9016, p. 901605). Bellingham, WA: SPIE Press.
Winter, S., Sabesan, R., Tiruveedhula, P., Privitera, C., Unsbo, P., Lundström, L., & Roorda, A. (2016, November). Transverse chromatic aberration across the visual field of the human eye. Journal of Vision, 16 (14): 9, 1–10, https://doi.org/10.1167/16.14.9. [PubMed] [Article]
Wu, J., Zheng, C., Hu, X., & Xu, F. (2011). Realistic simulation of peripheral vision using an aspherical eye model. In Avis N. & Lefebvre S. (Eds.), Eurographics 2011 – short papers. Geneva, Switzerland: Eurographics Association.
Wyszecki, G., & Stiles, W. S. (1982). Color science (Vol. 8). New York: Wiley.
Footnotes
1  Docker packages software with its dependencies into a container that can run on most platforms. See www.docker.com.
Appendix
These tables (Tables A1–A3) show the parameters of the three eye models described in the text.
Table A1
 
Navarro eye model (Escudero-Sanz & Navarro, 1999).
Table A2
 
Arizona eye model (Schwiegerling, 2004).
Table A3
 
Le Grand full theoretical eye (Artal, 2017; Atchison & Smith, 2005).
The surfaces used in ISETBio and ISET3d are defined by the biconic surface sag (Einighammer et al., 2009). The sag gives the distance, z (measured parallel to the optical axis), of a curved surface from its vertex plane as a function of the x and y position relative to the vertex.
\begin{equation}\tag{1}
z = \frac{c_x x^2 + c_y y^2}{1 + \sqrt{1 - (1 + k_x)\,c_x^2 x^2 - (1 + k_y)\,c_y^2 y^2}}
\end{equation}
 
Here \(c_x\) and \(c_y\) are the surface curvatures in the x and y directions, and \(k_x\) and \(k_y\) are the conic constants in those directions. The surfaces used in the following models are aspheric but not biconic; therefore, \(k_x = k_y\) and \(c_x = c_y\).
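For readers who want to evaluate the sag numerically, a minimal sketch in Python follows (the ISET3d implementation itself is in MATLAB; the function name and the example curvature and conic values are illustrative, loosely modeled on an anterior corneal surface):

```python
import numpy as np

def biconic_sag(x, y, cx, cy, kx, ky):
    """Equation 1: surface sag z, the distance along the optical axis
    from the vertex plane, for curvatures (cx, cy) and conic constants
    (kx, ky). x, y, and z share units; curvatures are their reciprocal."""
    root = 1.0 - (1.0 + kx) * cx**2 * x**2 - (1.0 + ky) * cy**2 * y**2
    return (cx * x**2 + cy * y**2) / (1.0 + np.sqrt(root))

# Aspheric (rotationally symmetric) case used by the models here:
# kx == ky and cx == cy. Values are illustrative only.
c = 1.0 / 7.72   # curvature (1/mm) for a 7.72-mm radius of curvature
z = biconic_sag(x=1.0, y=0.0, cx=c, cy=c, kx=-0.26, ky=-0.26)
```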
Figure 1
 
The computational pipeline. A three-dimensional scene, including objects and materials, is defined in the format used by Physically Based Ray Tracing (PBRT) software (Pharr et al., 2016). The rays pass through an eye model implemented as a series of surfaces with wavelength-dependent indices of refraction. The simulated spectral irradiance at the curved retinal surface is calculated in a format that can be read by ISETBio (Cottaris, Jiang, et al., 2019). That software computes cone excitations and photocurrent at the cone outer segment membrane, in the presence of fixational eye movements (Cottaris, Rieke, Wandell, & Brainard, 2019).
Figure 2
 
Lens transmittance. The two rendered images illustrate the impact of lens transmittance on the rendered image. Without including lens transmittance (A), the images are relatively neutral in color; including lens transmittance reduces the short-wavelength photons, and images have a yellow cast (B). The inset at the top is the standard lens transmittance for a 20-year-old.
Figure 3
 
On-axis modulation transfer functions (MTFs) of the human optics (pupil diameter of 3 mm). (A) A comparison of theoretical calculations and empirical measurements. Two curves were calculated at the point of highest optical acuity for the Navarro eye model using ISET3d and Zemax. The third curve is based on a formula derived by A. B. Watson (see Watson, 2013). The gray-shaded region is the range of estimates derived from adaptive optics measurements along the primary line of sight (Thibos et al., 2002). The spectral radiance of the simulated stimulus is an equal-energy polychromatic light, and the curves from different wavelengths were combined using a photopic luminosity function weighting. The differences between the curves from ISET3d, Zemax, and Watson are smaller than the measured individual variation in optical MTFs. (B) The MTFs for the Navarro eye, calculated using ISET3d with diffraction (see the Methods section), are shown separately for different wavelengths.
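The wavelength combination described in panel A can be sketched as follows. This is a hedged illustration rather than the paper's code; it assumes the per-wavelength MTF curves are stacked in an array and that samples of the photopic luminosity function at the same wavelengths are available:

```python
import numpy as np

def polychromatic_mtf(mtf_by_wavelength, luminosity):
    """Combine per-wavelength MTF curves into one polychromatic curve
    using photopic luminosity weights.
    mtf_by_wavelength: array of shape (n_wavelengths, n_frequencies).
    luminosity: V(lambda) samples, shape (n_wavelengths,)."""
    weights = luminosity / luminosity.sum()  # normalize so weights sum to 1
    return weights @ mtf_by_wavelength       # weighted sum over wavelengths
```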
Figure 4
 
Retinal irradiance calculated using three schematic eye models: (A) Arizona eye (Schwiegerling, 2004), (B) Navarro (Escudero-Sanz & Navarro, 1999), and (C) Le Grand (El Hage & Le Grand, 1980). The letters are placed at 1.4 (0.714), 1.0 (1), and 0.6 (1.667) diopters (meters) from the eye. The eye models are focused at infinity with a pupil size of 4 mm. Variations in the sharpness of the three letters illustrate the overall sharpness and the depth of field. The images are renderings of the spectral irradiance into sRGB format.
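The paired values in this caption (and in later ones) follow from the reciprocal relation between vergence in diopters and distance in meters; for example:

```python
def diopters_to_meters(d):
    """Distance in meters is the reciprocal of vergence in diopters."""
    return 1.0 / d

print([round(diopters_to_meters(d), 3) for d in (1.4, 1.0, 0.6)])
# [0.714, 1.0, 1.667]
```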
Figure 5
 
A comparison of on-axis, polychromatic MTFs from three different model eyes, calculated using ISET3d. The top figure corresponds to a 3-mm pupil diameter; the bottom figure corresponds to a 4-mm pupil diameter. The gray-shaded region is the range of estimates derived from on-axis adaptive optics measurements (Thibos et al., 2002). The Arizona and Navarro eye models are mostly within the range of the measurements; the Le Grand eye is outside the range.
Figure 6
 
The variation of a model eye modulation transfer function (MTF) with pupil diameter. The on-axis MTF was computed using the Navarro eye model. The best performance is for a 3-mm pupil diameter. The smallest natural pupil size is about 2 mm (Watson & Yellott, 2012). The simulations include the HURB calculation (see the Methods section) and show that the irradiance from a 1-mm artificial pupil will be affected by diffraction. The MTF is substantially degraded for a large pupil diameter (6 mm).
Figure 7
 
Variations in depth of field calculated for the Navarro eye model with different pupil diameters. Pupil diameters are (A) 2 mm, (B) 4 mm, and (C) 6 mm. In all cases, the focus is set to a plane at 3.5 diopters (28 cm), which is the depth of the pawn shown in the red box. The pawn remains in sharp focus, whereas the chess pieces in front and behind are out of focus; the depth of field decreases as pupil size increases. The horizontal field of view is 30°.
Figure 8
 
Retinal images for the Navarro eye model accommodated to three target distances: (A) 100 mm, (B) 200 mm, and (C) 300 mm. The images are calculated using a 4-mm pupil diameter. The horizontal field of view is 30°.
Figure 9
 
Longitudinal chromatic aberration. A scene including three letters at 1.8, 1.2, and 0.6 diopters (0.56, 0.83, 1.67 m) is the input (left). The scene is rendered three times through the Navarro model eye (4-mm pupil) to form a retinal image with the accommodation set to the different letter depths. The chromatic aberration at the 0.83 m (letter B) depth plane is rendered, showing how the color fringing changes as the focal plane is varied. The graphs at the right show the spectral irradiance across the edge of the target for several different wavelengths.
Figure 10
 
Transverse chromatic aberration (TCA) at different eccentricities and pupil positions. (A) A white grid 1 m distant is rendered through the Navarro eye, which is accommodated to the grid plane. The curvature of the retina is seen in the distortion of the grid. (B, C) The images in each row show the TCA at 0°, 8°, and 15° eccentricity. The two rows were calculated after modifying the anterior chamber depth (ACD) of the Navarro eye within anatomical limits (Boehm, Privitera, Schmidt, & Roorda, 2019; Rabsilber, Khoramnia, & Auffarth, 2006). (B) The ACD is set to 3.29 mm. (C) The ACD is set to 2.57 mm. The TCA is larger when the iris and lens are closer to the posterior corneal surface (smaller ACD).
Figure 11
 
Cone mosaic excitations in response to an edge presented briefly at the fovea. (A) Longitudinal chromatic aberration spreads the short-wavelength light substantially. (B) The cone mosaic samples the retinal irradiance nonuniformly, even in the small region near the central fovea. The differences include cone aperture size, changes in overall sampling density, and changes in the relative sampling density of the three cone types. (C) The number of cone excitations per 5 ms for a line spanning the edge and near the center of the image. The variation in the profile is due to Poisson noise and dark noise (250 spontaneous excitations/cone/second). (D) The number of cone excitations per 5 ms across a small patch near the central fovea. The dark spots are the locations of simulated short-wavelength cones.
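The variability described in panels C and D can be reproduced schematically as mean excitations plus dark noise, followed by Poisson sampling. A minimal sketch, assuming an array of noise-free mean excitation counts and using the 250 excitations/cone/second dark rate stated in the caption (ISETBio's actual implementation is in MATLAB and is more detailed):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def noisy_excitations(mean_excitations, dt=0.005, dark_rate=250.0):
    """Draw cone excitations for one integration interval.
    mean_excitations: expected stimulus-driven counts per cone in dt.
    dt: integration time in seconds (5 ms here).
    dark_rate: spontaneous excitations/cone/second."""
    return rng.poisson(mean_excitations + dark_rate * dt)
```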
Figure 12
 
Retinal image calculations near occluding boundaries. The scene comprises two planes with identical checkerboard patterns, placed at 0.5 m and 2 m from the eye. The eye is accommodated to the near plane. (A) The two planes are imaged separately, using the correct point spread for each depth. The lens diagrams show the rays for each plane before summation. The irradiance data are then summed. (B) The two planes are rendered using ISET3d ray tracing. Note that some rays from the far plane are occluded by the near plane. The rendered retinal irradiance is shown. (C) The A-B monochromatic difference image (absolute irradiance difference, summed across wavelengths) is large near the occluding boundary. The difference arises because rays from the distant plane are occluded by the near plane.
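Panel C is computed as the absolute irradiance difference summed across wavelengths. A minimal sketch of that comparison, assuming the two retinal irradiances are hyperspectral arrays of shape (rows, cols, n_wavelengths):

```python
import numpy as np

def occlusion_difference(irradiance_convolved, irradiance_raytraced):
    """Monochromatic difference image: absolute per-wavelength
    difference between the two renderings, summed over wavelength."""
    return np.abs(irradiance_convolved - irradiance_raytraced).sum(axis=-1)
```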