Ocular accommodation is the process of adjusting the eye's crystalline lens so as to bring the retinal image into sharp focus. The major stimulus to accommodation is therefore retinal defocus, and in essence, the job of accommodative control is to send a signal to the ciliary muscle which will minimize the magnitude of defocus. In this article, we first provide a tutorial introduction to control theory to aid vision scientists without this background. We then present a unified model of accommodative control that explains properties of the accommodative response for a wide range of accommodative stimuli. Following previous work, we conclude that most aspects of accommodation are well explained by dual integral control, with a “fast” or “phasic” integrator enabling response to rapid changes in demand, which hands over control to a “slow” or “tonic” integrator which maintains the response to steady demand. Control is complicated by the sensorimotor latencies within the system, which delay both information about defocus and the accommodation changes made in response, and by the sluggish response of the motor plant. These can be overcome by incorporating a Smith predictor, whereby the system predicts the delayed sensory consequences of its own motor actions. For the first time, we show that critically-damped dual integral control with a Smith predictor accounts for adaptation effects as well as for the gain and phase for sinusoidal oscillations in demand. In addition, we propose a novel proportional-control signal to account for the power spectrum of accommodative microfluctuations during steady fixation, which may be important in hunting for optimal focus, and for the nonlinear resonance observed for low-amplitude, high-frequency input. Complete Matlab/Simulink code implementing the model is provided at https://doi.org/10.25405/data.ncl.14945550.

*commanded accommodation changes* on the visual input. The evidence that the system predicts *changes in stimulus demand* is equivocal, and our model simply assumes that demand does not change over the timescale of the latency.

*not* by the output of the Smith predictor.

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – Gall's Law (Gall, 1977).

*accommodative demand*, corresponding to the vergence of light rays from the object we wish to look at. This is measured in diopters; the demand in diopters corresponds to the reciprocal of the distance in meters from the eye. For an infinitely far object, the demand is 0D; for an object at 50 cm, the demand is 2D.

*ocular accommodation*. When the eye is correctly accommodated, the accommodation will be equal to the demand so that the image is in focus on the posterior receptor layer of the retina.

*Defocus* is the difference between the accommodative demand and the ocular accommodation, both measured in diopters. It acts as an error signal to the model. As discussed in the Introduction, we assume that defocus is a single, signed value which is somehow computed by the visual system from the retinal image (e.g. using blur, higher-order aberrations, or longitudinal chromatic aberration; Burge & Geisler, 2011; Cholewiak, Love, & Banks, 2018; Fincham, 1951; Kruger, Mathews, Aggarwala, & Sanchez, 1993; Seidemann & Schaeffel, 2002; Wilson, Decker, & Roorda, 2002) and represented as a neural error signal; how this is achieved is beyond the scope of this article. In our sign convention, positive defocus error means that the eye is not accommodating enough (i.e., the eye is focusing on a point more distant than the object of interest, so the ocular image is focused behind the retina). Positive defocus error should therefore stimulate an increase of accommodation. The accommodative control system takes the defocus error as input and uses it to compute a neural control signal (blue block in Figure 2). This neural signal is then fed into the ocular plant block in Figure 2. This block, corresponding physiologically to the ciliary muscle, lens, and other components, converts the neural signal into the optical power of the lens (i.e., the ocular accommodation). This in turn affects the defocus error, because defocus is demand minus accommodation. The accommodative control system is designed to adjust accommodation so as to minimize the defocus error signal (Toates, 1972). Thus this is a negative feedback system.
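The negative-feedback loop just described can be sketched in a few lines of code. This is a toy illustration, not the article's model: it assumes a simple integral controller, an instantaneous unit-gain plant, and no latencies, all of which are refined later in the article.

```python
# Toy discrete-time sketch of the negative-feedback loop described above.
# (Illustrative assumptions: integral controller, instantaneous unit-gain
# plant, no sensorimotor latencies.)

def simulate(demand, k=0.2, steps=200):
    """Drive accommodation toward the demand by integrating defocus error."""
    a = 0.0          # ocular accommodation (D)
    control = 0.0    # neural control signal (D)
    for _ in range(steps):
        e = demand - a        # defocus error: demand minus accommodation
        control += k * e      # controller accumulates (integrates) the error
        a = control           # unit-gain plant: accommodation = control signal
    return a

# A 2D demand (object at 50 cm) is matched almost exactly at steady state.
print(round(simulate(2.0), 3))
```

Because the controller keeps integrating any remaining error, this toy loop eliminates steady-state error entirely; the article's model deliberately does not, which is why leaky integrators appear later.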

*d*_{1}(*t*) elicited accommodation response *a*_{1}(*t*), and demand *d*_{2}(*t*) elicited *a*_{2}(*t*), the response to a new demand made up of a weighted sum of these two timecourses, *w*_{1}*d*_{1}(*t*) + *w*_{2}*d*_{2}(*t*), would be *w*_{1}*a*_{1}(*t*) + *w*_{2}*a*_{2}(*t*). A time-invariant system is one where the same input, delayed by a time *T*, will always elicit the same response, also delayed by a time *T*. Thus if demand *d*_{1}(*t*) elicited accommodation response *a*_{1}(*t*), demand *d*_{1}(*t* − *T*) would elicit accommodation response *a*_{1}(*t* − *T*).

*s*. We assume that all signals are zero for times before *t* = 0 and write the Laplace transform of a signal *f*(*t*) as *F*(*s*), where \(F(s) = \int_0^\infty f(t)\,e^{-st}\,dt\) (Equation 1). Throughout, if a lower-case letter denotes a signal as a function of *t*, the corresponding upper-case denotes its Laplace transform as a function of *s*. The Laplace transform is closely related to the Fourier transform, with which vision scientists are typically more familiar, with *s* representing a complex version of angular temporal frequency: *s* = *jω* (where we use *j* for the square root of −1 throughout).

*transfer function H*(*s*). As discussed in more detail below, a transfer function *H*(*s*) is a kind of gain, since it is the ratio of the output to the input, for each frequency *s*. For example, consider a transport delay block, whose effect is to delay the input signal by a latency *T*, and which thus shifts the phase of each frequency. If the input signal is *i*(*t*), the output after delay is *o*(*t*) = *i*(*t* − *T*). Substituting this into Equation 1, we find that *O*(*s*) = exp(−*sT*)*I*(*s*), given that *i*(*t*) = 0 for *t* < 0. Thus the transfer function of a transport delay block is *H*(*s*) = exp(−*sT*). Constant signals are unaffected (*H*(0) = 1); time-varying signals undergo a shift in phase proportional to their temporal frequency.

*f*(0) = 0, we see that the Laplace transform of the derivative of a function is *s* times the Laplace transform of the original function. This means that differentiation can be represented very simply in Laplace space by multiplication by *s*, and integration by 1/*s*.
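The transport-delay result can be checked numerically: delaying a (circularly extended) discrete signal multiplies each of its DFT coefficients by exp(−sT) evaluated at s = j2πf. The signal, sample count, and delay below are arbitrary choices for the sketch.

```python
import cmath, random

def dft(x):
    """Direct discrete Fourier transform (O(N^2); fine for small N)."""
    n = len(x)
    return [sum(v * cmath.exp(-2j * cmath.pi * k * t / n) for t, v in enumerate(x))
            for k in range(n)]

random.seed(0)
N, dt, m = 64, 0.01, 5            # arbitrary: 64 samples, 10 ms step, 5-sample delay
T = m * dt                        # latency of the transport-delay block (s)
x = [random.gauss(0, 1) for _ in range(N)]
y = x[-m:] + x[:-m]               # y(t) = x(t - T), treating the signal as periodic

X, Y = dft(x), dft(y)
for k in range(1, N // 2):
    f = k / (N * dt)              # temporal frequency of this DFT component (Hz)
    H = cmath.exp(-2j * cmath.pi * f * T)   # predicted transfer function exp(-sT), s = j2*pi*f
    assert abs(Y[k] - H * X[k]) < 1e-8      # gain 1, phase shifted by 2*pi*f*T
print("delay block verified: |H| = 1, phase = -2*pi*f*T")
```

Each coefficient of the delayed signal is the original coefficient rotated in phase by 2πfT, with unchanged magnitude, exactly as H(s) = exp(−sT) predicts.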

*a*_{RF}, generally of around 1.4D (Leibowitz & Owens, 1978; Rosenfield et al., 1993), which is the value we shall assume for our model. A similar default focus is also observed in darkness. To account for this, we assume that the accommodative control system adds onto the signal computed from defocus a constant “bias” signal. Because we have normalized neural signals to be expressed in diopters, setting this bias signal equal to the rest focus ensures that accommodation returns to the rest focus if the defocus error is clamped at zero.

*t* ≤ 0. To handle this, we express both accommodation and demand relative to the rest focus. We define *A*(*s*) to be the Laplace transform, not of accommodation itself, but of accommodation relative to rest focus, *a*(*t*) − *a*_{RF}. Similarly, *D*(*s*) is the Laplace transform of demand relative to rest focus, *d*(*t*) − *a*_{RF}. With this trick, we can then analyze the system in the Laplace domain as if there were no bias signal (*a*_{RF} = 0), and at the end simply add *a*_{RF} back on to demand and accommodation when we move back to the time domain. All the analyses in this article use this approach.

*B*(*s*) is the transfer function representing the brain's accommodative control system and *P*(*s*) represents the ocular plant. As described in the previous section, *A*(*s*) and *D*(*s*) are the Laplace transforms of accommodation and demand relative to rest focus. The open-loop transfer function relating output *A*(*s*) (accommodation) to input *D*(*s*) (demand) is thus *H*_{open}(*s*) = *B*(*s*)*P*(*s*). In closed-loop mode, the controller acts on the defocus error *E*(*s*) = *D*(*s*) − *A*(*s*). We therefore now have *A*(*s*) = *H*_{open}(*s*)*E*(*s*), so that the closed-loop transfer function is *H*_{closed}(*s*) = *H*_{open}(*s*)/(1 + *H*_{open}(*s*)).

*s* = 0 (zero frequency). So, if we apply a constant demand *d*_{ss} in closed-loop mode, we can evaluate Equation 5 at *s* = 0, with *D*(0) = *d*_{ss} − *a*_{RF} and *A*(0) = *a*_{ss} − *a*_{RF} (recalling that accommodation and demand are defined relative to rest focus *a*_{RF}). From Equation 4, we can write *H*_{closed}(0) in terms of *H*_{open}(0). It will be convenient to introduce the notation *G*_{open} for *H*_{open}(0) (i.e., the open-loop steady-state gain of the system). Putting this together with Equation 4 and Equation 6, we find that accommodation will eventually be *a*_{ss} = *a*_{RF} + (*d*_{ss} − *a*_{RF}) *G*_{open}/(1 + *G*_{open}). This demonstrates an important property of negative-feedback systems which attempt to minimise error: small error requires high open-loop gain. Because we have set the gain of the plant to 1 (without loss of generality, as noted above), the gain *G*_{open} is set entirely by the brain's accommodative control system. Empirically, accommodation reaches around 80% to 90% of the demand when the demand is far from the rest focus. From Equation 4, this means that *G*_{open}/(1 + *G*_{open}) is around 0.8–0.9, and in turn that *G*_{open} must be in the range of 4 to 9.
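The arithmetic linking closed-loop and open-loop steady-state gain can be made explicit. This is a worked check of the relation just quoted, not new analysis:

```python
# Steady-state gain relations quoted above: closed-loop gain is
# G_open / (1 + G_open), so an 80-90% response pins G_open to roughly 4-9.

def closed_loop_gain(g_open):
    return g_open / (1.0 + g_open)

def open_loop_gain(g_closed):
    # inverse relation: G_open = g / (1 - g)
    return g_closed / (1.0 - g_closed)

print(closed_loop_gain(4))    # 0.8
print(closed_loop_gain(9))    # 0.9
print(open_loop_gain(0.83))   # open-loop gain implied by an 83% response
```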

*H*_{closed}(*s*), then if accommodative demand is a sinusoidal function of time, the accommodative response will also be a sinusoid with the same temporal frequency *f*. The amplitude of the response will be the amplitude of the demand multiplied by the gain at that frequency, *g*(*f*), and the phase will be delayed by *φ*(*f*). We will use lower-case *g*(*f*) to denote the gain of a system at a particular temporal frequency *f*, and upper-case *G* = *g*(0) to denote the steady-state gain, as we did above for *G*_{open}. According to a standard result of LTI theory, the gain and phase-delay of an LTI system at frequency *f* can be obtained from the complex number represented by its transfer function *H*(*s*) evaluated at *s* = 2π*jf*. The gain *g*(*f*) is the magnitude of the complex number *H*(2π*jf*), and the phase-delay *φ*(*f*) is its phase.
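This recipe is easy to apply in code. The sketch below evaluates a first-order low-pass transfer function H(s) = 1/(1 + τs) (the same form the article later uses for the ocular plant; the 1 Hz probe frequency is an arbitrary choice) at s = 2πjf to read off gain and phase delay:

```python
import cmath, math

def gain_and_phase(H, f):
    """Return (gain, phase delay in radians) of transfer function H at frequency f (Hz)."""
    value = H(2j * math.pi * f)       # evaluate H at s = 2*pi*j*f
    return abs(value), -cmath.phase(value)

tau = 0.156                           # plant time-constant used later in the article (s)
H = lambda s: 1.0 / (1.0 + tau * s)   # first-order low-pass (leaky integrator)

g, phi = gain_and_phase(H, 1.0)       # response to a 1 Hz sinusoidal demand
print(round(g, 3), "gain;", round(math.degrees(phi), 1), "deg phase delay")
```

At f = 0 the gain is exactly the steady-state gain (here 1) with zero phase delay; at 1 Hz this filter already attenuates and lags the input appreciably.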

*T*_{sens} ∼ 200 ms and *T*_{mot} ∼ 100 ms, respectively (Gamlin et al., 1994; Schor, Lott, Pope, & Graham, 1999; Wilson, 1973), and we will fix our model's latencies at these values. In Figure 3, these latencies are shown within the Accommodative Control System (i.e., the brain), but the model's functioning is unchanged if, for example, part of the motor latency occurs at a neuromuscular junction in the eye, or indeed if both latencies are merged into a single block.

*e*(*t*), but *e*(*t* − *T*_{sens}): the retinal defocus as it was a time *T*_{sens} ago. This in turn reflects the accommodation due to the neural signal sent up to a time *T*_{sens} + *T*_{mot} ago. Thus the system suffers an overall latency of *T*_{lat} = *T*_{sens} + *T*_{mot}. This can easily lead to overshoots and “ringing”: oscillations in accommodation as the system is driven beyond the correct value by the out-of-date error signal.
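The destabilizing effect of an out-of-date error signal can be illustrated with a minimal simulation. The values below are illustrative assumptions, not the article's fitted parameters: an integral controller with gain 4, an instantaneous unit-gain plant, a 0.3 s loop latency, and a 2D step in demand.

```python
# Sketch of how a stale error signal causes ringing: an integral controller
# driven by defocus sensed one full latency (0.3 s) ago, versus the same
# controller with no latency. Gains and demand are illustrative choices.

def simulate(latency, k=4.0, demand=2.0, dt=0.001, t_end=8.0):
    n_lat = round(latency / dt)
    a_hist = [0.0]                      # accommodation, assumed 0 for t <= 0
    a, control = 0.0, 0.0
    for i in range(round(t_end / dt)):
        a_delayed = a_hist[i - n_lat] if i >= n_lat else 0.0
        e = demand - a_delayed          # error based on out-of-date accommodation
        control += k * e * dt           # integral controller
        a = control                     # instantaneous unit-gain plant
        a_hist.append(a)
    return a_hist

with_delay = simulate(latency=0.3)
no_delay = simulate(latency=0.0)
print("peak with 0.3 s latency:", round(max(with_delay), 2))  # overshoots well past 2D
print("peak with no latency:   ", round(max(no_delay), 2))    # approaches 2D smoothly
```

With the latency, the controller keeps integrating an error that has in fact already been corrected, so the response overshoots the demand and rings before settling; without it, the same controller converges monotonically.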

*future* retinal defocus. That is, whereas in Figure 3 the controller operates on the sensed defocus, which due to the sensory latency actually represents defocus as it was some time in the past, in a predictive model the controller operates on the predicted *future* defocus (Smith, 1957). Figure 4 shows how Figure 3 can be modified so that the input to the controller is predicted future defocus. Defocus is the difference between the stimulus accommodative demand and the ocular accommodation, so predicting future defocus requires a prediction both of demand and of accommodation.

*Virtual Plant* block in Figure 4. Such internal models are referred to as *forward models* in control systems theory. We assume that the motor latency *T*_{mot} largely represents delays in transmitting the control signal from the brain to the eye. We assume that the virtual plant is located in the brain close to where the neural control signal is generated, and thus has access to this signal with negligible delay. Accordingly, the output of the virtual plant is *predicted future accommodation* (i.e., the value that ocular accommodation will have at a time *T*_{mot} in advance of the present). We write this predicted future accommodation as *â*(*t* + *T*_{mot}): the predicted accommodation at a time *T*_{mot} in the future, where the circumflex indicates that this is an *estimate* of the future accommodation. Since the accommodation up to a time *T*_{mot} into the future is controlled by neural signals already sent by the brain, this estimate can in principle be perfect. It should be affected only by noise, and by any inaccuracies in the virtual plant as a model of the ocular plant. In the model we present here, neither of these apply, and so the prediction of future accommodation is indeed perfect.

*Demand Predictor* block (Figure 4). This takes as its input what demand is estimated to have been at time *T*_{sens} in the past, \(\hat{d}( {t - {T}_{sens}} )\), and gives as output what it estimates demand will be at time *T*_{mot} in the future, \(\hat{d}( {t + {T}_{mot}} )\). That is, it extrapolates its input into the future by a time corresponding to the entire sensorimotor latency, *T*_{lat} = *T*_{mot} + *T*_{sens}. In this article, our model Demand Predictor block will simply pass its input on unchanged, effectively assuming that the demand will stay at its current value. This is probably a reasonable assumption, because in many natural viewing situations accommodative demand changes rather little over the timescale of *T*_{lat}. A future model could incorporate a more elaborate form of prediction (e.g. taking account of stimulus periodicity), but that is beyond the scope of this article.

*T*_{mot} in the future: \(\hat{a}( {t + {T}_{mot}} )\). Our model brain uses this predicted future accommodation in two ways. First (B), the model brain delays this predicted-accommodation signal by the total sensorimotor latency to obtain *â*(*t* − *T*_{sens}), an estimate of what the ocular accommodation was at a time *T*_{sens} in the past. Thus the predictive model actually uses an internal estimate of *past* accommodation, as well as of future accommodation. The point of doing this is to match the latency of the defocus signal. The input to the whole system is accommodative demand, *d*(*t*) (label D). In the eye (label E), the ocular accommodation *a*(*t*) is optically subtracted from *d*(*t*) to yield the error signal *e*(*t*), the optical defocus at time *t*. Ideally, this is what accommodation control should be based on, but due to the sensory latency *T*_{sens}, the brain only has access to the delayed signal, *e*(*t* − *T*_{sens}), representing the defocus at a time *T*_{sens} in the past. At the signal combination labeled C, the brain adds its estimate of past accommodation, *â*(*t* − *T*_{sens}), back onto this delayed defocus signal *e*(*t* − *T*_{sens}), in order to obtain an estimate of what the demand was at a time *T*_{sens} in the past: \(\hat{d}( {t - {T}_{sens}} ) = e( {t - {T}_{sens}} ) + \hat{a}( {t - {T}_{sens}} )\). This demand signal is fed into the Demand Predictor block, which uses it to make a guess at what the demand will be at a time *T*_{mot} in the future: \(\hat{d}( {t + {T}_{mot}} )\) (label F).

*m*(*t*) (label H). This is the actual motor signal sent to the ocular plant, with a latency *T*_{mot}, which results in the ocular accommodation *a*(*t*) (label I). An efference copy of the same motor signal is also sent as the input to the virtual plant. The output of the virtual plant is, of course, the predicted future accommodation that we began with (A), so we have now followed the signals around the whole of the inner and outer loops.

*current* defocus (Figure 2), in a predictive model the input to the accommodative controller itself is the predicted *future* defocus. With this modification, PID-type controllers can work well and avoid the instabilities associated with an out-of-date error signal.

*after* the sensory latency (even though some of the sensory delay represents the optic nerve and cortical processing) and *before* the motor latency (even though that represents processes before accommodation). The reader is invited to trace the signals around Figure 4 and Figure 5, and verify that, provided *â*(*t*) = *a*(*t*), the same inputs are fed into the same blocks and so the results must be the same. Figure 5 provides a visual picture of what is achieved by predictive control: it effectively shifts the latencies outside the control loop. This diagram holds whatever the demand predictor does. If the demand predictor were able to predict future demand perfectly, it would cancel out the latencies and the system would behave as if there were no latencies. But even if the demand predictor merely assumes demand stays constant, as in our model, it still makes the control immune to the destabilising effect of latencies. The effect of latencies is then only to delay the response: the response to any stimulus is exactly the same as for a system with perfect prediction of demand, just occurring later in time (see Appendix and Appendix Table). Thus, although predicting the sensory input enables a more rapid response, predicting one's own motor response suffices to ensure stability.
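The stabilizing effect of predicting one's own motor response can be demonstrated in a stripped-down simulation of the Figure 4 arrangement. This is a sketch under assumed dynamics, not the article's full dual-integrator model: a pure integral controller, a first-order plant with τ = 0.156 s, T_sens = 0.2 s, T_mot = 0.1 s, a no-change demand predictor, and a perfect virtual plant.

```python
# Sketch of the Smith-predictor arrangement: estimate past accommodation
# from the virtual plant's history, add it to the stale sensed defocus to
# recover past demand, assume demand is unchanged (no-change prediction),
# and subtract predicted future accommodation. Compare with the same
# controller driven directly by the stale sensed defocus.

DT, T_END = 0.001, 10.0
TAU, K = 0.156, 8.0                 # assumed plant time-constant and controller gain
N_SENS, N_MOT = 200, 100            # latencies of 0.2 s and 0.1 s, in time steps
N_LAT = N_SENS + N_MOT

def simulate(predictive):
    a = av = m = 0.0                # accommodation, virtual plant, motor signal
    a_hist, av_hist, m_hist = [0.0], [0.0], [0.0]
    out = []
    for i in range(round(T_END / DT)):
        d_past = 2.0                # demand: 2D step at t = 0 (zero before)
        a_past = a_hist[i - N_SENS] if i >= N_SENS else 0.0
        e_sensed = d_past - a_past if i >= N_SENS else 0.0
        if predictive:
            av_past = av_hist[i - N_LAT] if i >= N_LAT else 0.0
            e = e_sensed + av_past - av     # predicted future defocus
        else:
            e = e_sensed                    # naive control on stale defocus
        m += K * e * DT                     # integral controller
        m_delayed = m_hist[i - N_MOT] if i >= N_MOT else 0.0
        a += DT * (m_delayed - a) / TAU     # ocular plant (after motor latency)
        av += DT * (m - av) / TAU           # virtual plant (no motor latency)
        a_hist.append(a); av_hist.append(av); m_hist.append(m)
        out.append(a)
    return out

smith = simulate(predictive=True)
naive = simulate(predictive=False)
print("Smith predictor: final =", round(smith[-1], 3), " peak =", round(max(smith), 2))
print("naive control:   peak error =", round(max(abs(x - 2.0) for x in naive), 1))
```

With this gain the naive loop is unstable (its oscillations grow without bound), while the predictive loop responds to the same step with a modest, delayed overshoot and settles on the demand, illustrating the claim that predicting one's own motor response suffices for stability.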

*m* into accommodation *a*. Physiologically, this block corresponds to the following components. The ocular lens is held in an elastic capsule between the anterior and posterior chambers of the eye. It is tethered along its equator by elastic suspensory ligaments, or zonules. The axial zonules pass from the lens equator to the inner margin of the ciliary muscle, whereas the posterior zonules pass from the ciliary muscle back to the choroid at the ora serrata, the junction between the choroid and the ciliary body. The lens is flattened by the elastic tension under which it is held by the zonules and becomes more spherical—and so more optically powerful—when its extension is reduced by the constriction of the ciliary muscle. Figure 6A shows a diagram of this arrangement. Figure 6B shows a simplified biomechanical model (Beers & van der Heijde, 1994; Beers & van der Heijde, 1996; Schor & Bharadwaj, 2005; Wang & Pierscionek, 2019). The zonules, choroid, and ciliary attachment are represented as springs. The lens is represented by a Voigt model, in which a spring is in parallel with a dashpot or damper. The springs are modeled according to Hooke's law (i.e., they exert a force proportional to their extension). The dashpot exerts a force proportional to the rate of change of its extension, modeling the viscosity of the lens and capsule. The whole system is subject to the force *f* exerted by the ciliary muscle, which is set by the neural signal sent by the accommodative control system. We assume that the optical power of the lens is proportional to the extension of the spring/dashpot modeling the lens.

*x*_{L}, *x*_{za} are the extensions of the lens and of the axial zonules, respectively, *k* their spring constants, and *b*_{L} the viscosity of the lens. Using the constraint that the sum of all the extensions must be constant, we can solve the simultaneous equations for the lens extension *x*_{L}. If we do so, the result is the same as for the simplified system shown in Figure 6C, with a dashpot and a single spring, now representing the combined elasticity of all the component elements. The value of the full model is that the elasticity of the different tissues can be measured independently. This is important if one wants to model age-dependence (Schor & Bharadwaj, 2005), because these elasticities vary differently with age, but the collapsed model is much simpler to work with.

*k* (i.e., its compliance). A dashpot is similar, but because the force is proportional to the rate of change of extension, the transfer function mapping extension to force is *bs*, where *b* is the viscosity and *s* represents differentiation (see primer above). In this way, the simple biomechanical model shown in Figure 6C can be represented by the block diagram in Figure 6D, or even more succinctly by the transfer function in Figure 6E. This is the transfer function of a first-order low-pass temporal filter with time-constant τ_{plant} = *b*/*k*, also known as a leaky integrator. This, then, is the function mapping ciliary muscle force to lens extension.

τ_{plant} is around 0.156 s for young eyes (Schor & Bharadwaj, 2006). In this article, we will take this value as a given. As noted above, we can assume without loss of generality that the steady-state gain is 1.
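As a quick sketch of what this first-order plant implies, its response to a unit step of ciliary force is x(t) = 1 − exp(−t/τ_plant), reaching about 63.2% of the final value after one time-constant:

```python
import math

# Step response of the collapsed plant model: a first-order low-pass filter
# ("leaky integrator") with time-constant tau = b/k = 0.156 s and unit
# steady-state gain.

TAU = 0.156  # s (Schor & Bharadwaj, 2006)

def step_response(t, tau=TAU):
    """Lens extension at time t after a unit step of ciliary force."""
    return 1.0 - math.exp(-t / tau)

print(round(step_response(TAU), 3))      # ~63.2% after one time-constant
print(round(step_response(3 * TAU), 3))  # ~95% after three time-constants
```

So even with an instantaneous neural command, the plant alone smears the accommodative response over roughly half a second.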

*C*(*s*). As noted above, in industrial control systems, controllers typically have PID terms, with transfer functions which scale as a constant, 1/*s*, or *s*, respectively.

*P*(*s*) as given in Equation 9, making *C*(*s*) constant means that the system tracks rapid sinusoidal oscillations far better than human accommodation does. For example, *C*(*s*) = 5 results in a realistic steady-state gain of 83% (Equation 7), but the gain remains >50% out to frequencies as high as 8 Hz, far higher than observed (see Figure 7 below). Derivative terms do not affect steady-state error, but improve stability and avoid overshoot. They also enable rapid response to rapid changes. However, they can be problematic in the presence of noise. Previous work by Schor and Bharadwaj (Bharadwaj & Schor, 2006; Schor & Bharadwaj, 2004; Schor & Bharadwaj, 2006) suggests that the accommodative system has a distinct “pulse” mechanism for responding to sudden large changes in accommodation, such as occur when we change from looking at a distant to a near object; this cannot be modeled by an LTI system and is beyond the scope of this article. Furthermore, many of the benefits of derivative control are already achieved by our use of a forward model to predict future demand. We therefore do not include a derivative term. This leaves us with the integral term. A pure integral controller has a transfer function proportional to 1/*s*, and thus infinite gain at *s* = 0. This is desirable because it eliminates steady-state error, but it also means that errors can accumulate; and, as noted, human accommodation does not seem to completely eliminate steady-state error. We can account for this by modeling the controller as a leaky integrator, following Krishnan and Stark (1975):

*G*_{fast} is the steady-state gain and τ_{fast} the time-constant. The subscript “fast” distinguishes this from a slow integrator which we shall introduce below. A leaky integrator acts like a pure integral controller over short timescales (*s*τ >> 1), and like a pure proportional controller over long timescales (*s*τ << 1), thus combining aspects of both. We noted above that accommodative lead/lag suggests the steady-state gain must be in the range 4 to 9. We somewhat arbitrarily chose *G*_{fast} = 8.

*G*_{fast}, τ_{fast}, and τ_{plant} (Equation 20). If the damping coefficient ζ is too low, the maximum gain occurs at a non-zero resonance frequency and can even exceed 1. This does not agree with empirical observations of accommodative response to sinewaves, which is low-pass (Charman & Heron, 2000; Kruger & Pola, 1986; Ohtsuka & Sawa, 1997; Stark et al., 1965) (Figure 7A). This indicates that ζ is at least 1/√2, not far below critical damping (ζ = 1) (Labhishetty & Bobier, 2017). Saccades have a damping coefficient of around 0.7 (Bahill, Clark, & Stark, 1975); systems with this value have minimum settling time (i.e., they reach and remain within 5% of their final value most rapidly). We show in the Appendix that obtaining ζ ∼ 1/√2 for a system with *G*_{fast} >> 1 requires the time-constant of the fast controller to be τ_{fast} ≥ 2*G*_{fast}τ_{plant}. With τ_{plant} = 0.156 s and *G*_{fast} = 8, τ_{fast} must be at least 2.5 s.
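This constraint can be checked numerically, assuming the closed loop of a leaky-integrator controller C(s) = G/(1 + τ_fast s) and first-order plant P(s) = 1/(1 + τ_plant s) reduces (with latencies handled by the Smith predictor) to a second-order system with characteristic equation τ_fast τ_plant s² + (τ_fast + τ_plant)s + (1 + G) = 0. The damping formula below follows from that assumption:

```python
import math

# Damping coefficient of the assumed second-order closed loop:
# zeta = (tau_fast + tau_plant) / (2 * sqrt((1 + G) * tau_fast * tau_plant)).

def damping(g_fast, tau_fast, tau_plant=0.156):
    return (tau_fast + tau_plant) / (2.0 * math.sqrt((1.0 + g_fast) * tau_fast * tau_plant))

print(round(damping(8, 2.5), 3))   # ~0.71, i.e. close to 1/sqrt(2) at tau_fast = 2.5 s
print(round(damping(8, 5.0), 3))   # larger tau_fast gives heavier damping
print(round(damping(15, 2.0), 2))  # ~0.5, the under-damped parameter set discussed later
```

Since ζ grows with τ_fast (for fixed gain), ζ ≥ 1/√2 does indeed require τ_fast of at least about 2.5 s when G_fast = 8.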

*T*_{delay}: *φ* = 2π*fT*_{delay} (Charman & Heron, 2000; Heron, Charman, & Gray, 1999; Kruger & Pola, 1986; Ohtsuka & Sawa, 1997; van der Wildt, Bouman, & van de Kraats, 1974). The slope usually corresponds to a delay of ∼0.5 s (dashed lines in Figure 7BC), although there is considerable variability between studies. Because 0.5 s is close to the sensorimotor latency inferred from the response to step changes, it is often assumed that this phase slope must represent the sensorimotor latency. However, this is not necessarily the case. First, the damped second-order system formed by the ocular plant and the neural control imposes delays in addition to the sensorimotor latencies. Second, if the brain predicts demand perfectly (at least theoretically possible for a regular stimulus like a sinewave), then its phase delay becomes independent of the sensorimotor latency (see Appendix).

τ_{fast}, given that the time-constant of the plant is a biomechanical given, and the gain of the fast integrator is already quite tightly constrained by the observed lead/lag following a change in demand.

τ_{fast} with empirical results from various subjects and studies. As noted, we can rule out τ_{fast} < 2.5 s because the gain is then too high at high frequencies. The gain data are probably best described by τ_{fast} = 5 s (green lines in Figure 7A), but this does not account for the phase data. τ_{fast} = 5 s in the perfect-prediction model gives phases that match empirical data up to around 0.5 Hz; at higher frequencies, however, empirical phase continues to increase roughly linearly, implying a constant delay, whereas phase for the perfect-prediction model asymptotes at 180^{o} (Figure 7B). Thus we probably have to reject the perfect-prediction model (not surprising given its idealized nature). The no-change prediction model is in qualitatively much better agreement with the phase data, but then τ_{fast} = 5 s predicts larger phases than are observed (Figure 7C). The purple line shows the curve with minimum settling time, τ_{fast} = 2.5 s, which yields ζ ∼ 1/√2. This is in reasonable agreement with both gain and phase data, assuming simple no-change demand prediction, and we therefore adopt this value in the rest of the article.

This in turn makes the defocus error non-zero, which begins to charge up the fast integrator. The output of the fast integrator increases the neural control signal above the bias value, altering accommodation so as to reduce the error. It also begins to charge up the slow integrator. Thus, over short timescales, the neural signal controlling accommodation is set mainly by the output of the fast integrator. However, over long timescales, the slow integrator takes over. The ratio of the slow to fast steady-state contributions is equal to the gain of the slow integrator (Schor, 1979b; Schor et al., 1986); for example, with our value *G*_{slow} = 5, steady-state accommodation is 83% due to the slow integrator and 17% due to the fast integrator.

τ_{fast}. When the signal from the fast integrator has dropped far enough, the slow integrator begins to discharge as well, resulting in a second, slower decay of accommodation, with a time constant corresponding to τ_{slow}. Thus, after a long period of exposure, there is an initial rapid drop, as the proportion of accommodation due to the fast integrator, initially 1/(*G*_{slow} + 1), decays rapidly, followed by a much longer decay as the dominant component due to the slow integrator decays slowly.

*G*_{fast} = 8, *G*_{slow} = 5, the gain term is 0.98, compared with 0.89 with only the fast integrator. Thus, after a step-change in demand, the model response rises rapidly to around 90% of the demand, and then over the next tens of seconds rises more slowly to approach the demand exactly. Thus the gain of the slow integrator cannot be made too large (say, much larger than 5) without eliminating the model's ability to account for accommodative lead and lag.

*G*_{slow}. In such systems, the fast integrator is driven not by retinal defocus directly, but by the estimated future defocus (Figure 4). This does *not* immediately drop to zero when pinholes are applied. When the system is made open-loop by setting *d*(*t*) = *a*(*t*), the input to the fast integrator becomes *a*(*t* − *T*_{sens}) − *a*(*t* + *T*_{mot}) for the no-change prediction model. This becomes zero once accommodation has stabilized, but is finite while it decays. When the gain of the slow integrator is sufficiently large, this small error input is enough to keep the slow integrator high. This in turn keeps accommodation high and thus sustains the error signal. Accommodation creeps slowly down to the rest focus with a time-constant which, counterintuitively, can be much longer than any of the three time constants of the system: τ_{plant}, τ_{fast}, τ_{slow}. This effect is independent of exposure duration and so cannot account for the adaptation that the slow integrator was introduced to explain. To avoid this effect and obtain a clear difference between short and long exposure durations, we have found that *G*_{slow} needs to be less than around 10. Here, we have set *G*_{slow} = 5.

*f*^{α}). We model this by injecting white noise onto the defocus signal prior to input to the neural controllers (Figure 10). White noise has a flat power spectrum, but integration by the two integrators within the system (the neural controller and the plant) converts it to an approximately Brownian (1/*f*^{2}) power-law spectrum.
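The claim that integration turns a flat spectrum into an approximately 1/f² one can be illustrated numerically. This is a sketch, not the article's noise model: a single discrete integration of Gaussian white noise, with arbitrary segment length and fit range.

```python
import cmath, math, random

# Integrate Gaussian white noise (cumulative sum), average periodograms
# over segments, and fit the log-log spectral slope; it should be near -2
# (Brownian), versus 0 for the white noise itself.

random.seed(1)
SEG, NSEG = 256, 16
noise = [random.gauss(0.0, 1.0) for _ in range(SEG * NSEG)]

walk, total = [], 0.0
for w in noise:                      # discrete integration of the white noise
    total += w
    walk.append(total)

power = [0.0] * (SEG // 2)
for s in range(NSEG):                # averaged periodogram, one segment at a time
    seg = walk[s * SEG:(s + 1) * SEG]
    for k in range(1, SEG // 2):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / SEG)
                    for t, x in enumerate(seg))
        power[k] += abs(coeff) ** 2 / NSEG

ks = range(2, 40)                    # fit slope over mid-range frequency bins
lx = [math.log(k) for k in ks]
ly = [math.log(power[k]) for k in ks]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
print("spectral slope ~", round(slope, 2))  # close to -2 (Brownian)
```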

*G*_{fast} = 15, τ_{fast} = 2 s, which puts the damping coefficient ζ at 0.5—does show unrealistic high-frequency resonances within the forward-model feedback loop, but our sub-critically-damped parameters *G*_{fast} = 8, τ_{fast} = 2.5 s, ζ = 0.7 already suppress the open-loop resonance.

*not* open-loop mode. The first evidence comes from microfluctuations during steady fixation. Several workers have found that the power spectrum of closed-loop accommodation has a peak at around 2 Hz (Figure 9A). It is not always present, but when found it is always *more* prominent in closed-loop than open-loop accommodation. Although the location of this peak varies with heart rate, suggesting the pulse as a possible source interacting with blood volume of the ciliary body (Collins et al., 1995; Winn, Pugh, Gilmartin, & Owens, 1990), the fact that it is higher in closed-loop conditions suggests that the source must be amplified by a neural resonance within the outer feedback loop.

- (i) It can result in unrealistic jumps, where a small change in demand pushes the defocus above the threshold and thus elicits a disproportionately large response.
- (ii) It produces a hysteresis effect, whereby accommodative lead and lag can depend on how the demand is approached. For example, with a threshold of 0.2D, if the demand steps up from 1D to 2D, the effective defocus becomes zero once accommodation reaches 1.8D, so we get a lag. But if demand steps down from 3D to 2D, effective defocus becomes zero once accommodation reaches 2.2D, so we get a lead. This hysteresis is not typically observed, except with extremely blurred images (Heath, 1956a).
- (iii) It reduces the gain of the response to low-amplitude oscillations. For example, consider a slow oscillation ranging between 1D and 3D. Assume for simplicity that the closed-loop gain of the system is 1, so that in the absence of a deadzone, the response would track demand exactly. With a deadzone clipped at 0.2D, the response would range from 1.2D to 2.8D, reducing the gain to 0.8. With a lower-amplitude oscillation where demand ranged from 1.5D to 2.5D, the response would range from 1.7D to 2.3D, making the gain 0.6. With a still lower-amplitude demand ranging from 1.7D to 2.3D, response would range from 1.9D to 2.1D, making the gain 0.3. Yet this decrease in gain with decreasing amplitude is not observed. In fact, accommodative gain tends to be smallest for high amplitudes, not for low amplitudes (Stark et al., 1965, p. 196).
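Point (iii) can be reproduced with the idealized dead-zone rule used in the example above: accommodation only moves once defocus exceeds the threshold, so it trails rising demand by the threshold and leads falling demand by the same amount.

```python
import math

# Gain of an idealized dead-zone tracker for a slow sinusoidal demand:
# the response moves only when |demand - response| exceeds the threshold,
# so gain = (amplitude - threshold) / amplitude and shrinks with amplitude.

def deadzone_gain(amp, threshold=0.2, mean=2.0, steps=10000):
    response = mean
    lo, hi = mean, mean
    for i in range(steps):
        d = mean + amp * math.sin(2 * math.pi * i / 1000.0)  # slow oscillation
        if d - response > threshold:
            response = d - threshold      # lag behind rising demand
        elif response - d > threshold:
            response = d + threshold      # lead ahead of falling demand
        if i >= steps - 1000:             # measure over the final cycle
            lo, hi = min(lo, response), max(hi, response)
    return (hi - lo) / (2 * amp)

for amp in (1.0, 0.5, 0.3):
    print(amp, "->", round(deadzone_gain(amp), 2))
```

The gains come out at 0.8, 0.6, and about 0.33 for amplitudes of 1.0, 0.5, and 0.3 D, matching the worked example in point (iii); the empirical trend is the opposite, which is the argument against a simple dead-zone.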

φ_{perfect} = φ_{nochange} − 360*fT*_{lat}. The phase function of most human subjects agrees better with that of the no-change model than with that of the perfect model, suggesting that these subjects had little ability to predict the oscillatory demand.

|*d*(*t*) − *a*(*t*)|, where *d*(*t*) = *D*_{mean} + *D*_{amp} sin(2π*ft*).

out of phase with the demand (Figure 12C). The error increases with demand amplitude, even though, for frequencies below the peak, the gain (i.e., the ratio of response to demand) is closer to 1 for larger amplitudes (Figures 12A, B).

*zero-gain tracking error* (i.e., the mean absolute defocus error which would be achieved if accommodation stayed at the steady-state value elicited by the mean demand, *D*_{mean} = 2D in this example). Because the amplitude of the zero-gain tracking error depends only on the input amplitude, the error is independent of the temporal frequency of the sine input. Since the static accommodative lag is small, the zero-gain steady-state response is also close to 2D. So the mean zero-gain error is approximately the average value of |*D*_{amp} sin(2π*ft*)|, or 2*D*_{amp}/π, where *D*_{amp} is the amplitude of the demand oscillations about the 2D baseline.
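The 2*D*_{amp}/π figure follows because the mean of |sin| over a full cycle is 2/π. A quick numerical check (an illustrative sketch; the function name is ours):

```python
import math

def mean_abs_sin(d_amp, f, n=100000):
    """Midpoint-rule average of |d_amp*sin(2*pi*f*t)| over one full period."""
    dt = 1.0 / (f * n)
    return sum(abs(d_amp * math.sin(2 * math.pi * f * (k + 0.5) * dt))
               for k in range(n)) / n

print(mean_abs_sin(1.0, 0.5))  # ~0.6366 = 2/pi, independent of f
print(mean_abs_sin(0.5, 2.0))  # ~0.3183 = 1/pi
```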

*limiting tracking frequency* to be the frequency at which the actual gain and phase delay of the accommodative response produce the same error as would be achieved with zero gain. This is where the zero-gain tracking error first equals the actual error, marked with a cross (×) in Figure 14A. For frequencies below this limit, the oscillation in accommodative response is helpful: it tracks the oscillations in demand with a phase delay low enough to reduce the mean defocus error below the zero-gain tracking error. For frequencies above the limit, however, the oscillatory response is out of phase with the demand and makes the mean defocus error larger than if accommodation simply remained constant at the baseline value.
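Where the limit falls can be seen from a small calculation. As a sketch (our construction, under the simplifying assumption that at a given frequency the response is a sinusoid with gain *g* and phase delay φ), the residual *d* − *a* is itself a sinusoid of amplitude *D*_{amp}√(1 + *g*² − 2*g* cos φ), so tracking beats zero gain exactly when *g* < 2 cos φ:

```python
import math

def tracking_error(d_amp, g, phi):
    """Mean |d - a| when a(t) = g*d_amp*sin(2*pi*f*t - phi) tracks
    d(t) = d_amp*sin(2*pi*f*t). The residual is a sinusoid of amplitude
    d_amp*sqrt(1 + g^2 - 2*g*cos(phi)); mean|sinusoid| = (2/pi)*amplitude."""
    return (2 / math.pi) * d_amp * math.sqrt(1 + g**2 - 2 * g * math.cos(phi))

def zero_gain_error(d_amp):
    """Mean |d - a| if accommodation just sits at the baseline (zero gain)."""
    return 2 * d_amp / math.pi

# A small phase delay makes tracking helpful; a large one makes it harmful.
print(tracking_error(1.0, 1.0, math.radians(30)) < zero_gain_error(1.0))   # True
print(tracking_error(1.0, 1.0, math.radians(120)) > zero_gain_error(1.0))  # True
```

Since phase delay grows with frequency while gain falls, the condition *g* < 2 cos φ eventually fails, defining the limiting tracking frequency.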

*and* dual control by fast and slow integrators, as well as our novel use of a nonpredictive proportional-control signal. Accordingly, it accounts well for a wide range of empirical observations: the gain and phase of the response to sinusoidal oscillations in demand, including the puzzling low-amplitude, high-frequency resonance; the power spectrum of microfluctuations in closed-loop and open-loop modes; and the adaptation of accommodation to a steady stimulus.

*SICE 2003 Annual Conference (IEEE Cat. No. 03TH8734)*, 2, 1383–1387.

*Scientific Reports*, 11(1), 15195, https://doi.org/10.1038/s41598-021-94642-2.

*Computer Programs in Biomedicine*, 4, 230–236, https://doi.org/10.1016/0010-468X(75)90036-7.

*Vision Research*, 34, 2897–2905, https://doi.org/10.1016/0042-6989(94)90058-2.

*Optometry and Vision Science*, 73, 235–242, https://doi.org/10.1097/00006324-199604000-00004.

*Neural Control Strategies of the Human Focusing Mechanism*. Berkeley: University of California, Berkeley.

*Vision Research*, 45, 17–28, https://doi.org/10.1016/j.visres.2004.07.040.

*Vision Research*, 46(6–7), 1019–1037, https://doi.org/10.1016/j.visres.2005.06.005.

*Proceedings of the National Academy of Sciences USA*, 108, 16849–16854, https://doi.org/10.1073/pnas.1108491108.

*The Journal of Physiology*, 145, 579–594, https://doi.org/10.1113/jphysiol.1959.sp006164.

*The Journal of Physiology*, 143, 18.

*The Journal of Physiology*, 151(2), 285–295, https://doi.org/10.1113/jphysiol.1960.sp006438.

*Ophthalmic & Physiological Optics*, 8, 153–164, https://doi.org/10.1111/j.1475-1313.1988.tb01031.x.

*Vision Research*, 40, 2057–2066, https://doi.org/10.1016/S0042-6989(00)00066-3.

*Ophthalmic and Physiological Optics*, 35, 476–499, https://doi.org/10.1111/opo.12234.

*Journal of Neuroscience*, 28, 2804–2813, https://doi.org/10.1523/JNEUROSCI.5300-07.2008.

*Journal of Vision*, 18(9), 1, https://doi.org/10.1167/18.9.1.

*Vision Research*, 35, 2491–2502.

*Vision Research*, 22, 561–569, https://doi.org/10.1016/0042-6989(82)90114-6.

*Vision Research*, 9, 233–244, https://doi.org/10.1016/0042-6989(69)90003-0.

*Journal of Vision*, 11(10), 21, https://doi.org/10.1167/11.10.21.

*The British Journal of Ophthalmology*, 35, 381–393.

*Systemantics: How Systems Work & Especially How They Fail*. New York: The New York Times Book Co.

*Journal of Vision*, 9(6), 4.1–15, https://doi.org/10.1167/9.6.4.

*Journal of Neurophysiology*, 72, 2368–2382, https://doi.org/10.1152/jn.1994.72.5.2368.

*Ophthalmic & Physiological Optics*, 13, 258–265, https://doi.org/10.1111/j.1475-1313.1993.tb00468.x.

*Vision Research*, 33, 2083–2090, https://doi.org/10.1016/0042-6989(93)90007-j.

*American Journal of Optometry and Archives of American Academy of Optometry*, 33, 513–524, https://doi.org/10.1097/00006324-195610000-00001.

*American Journal of Optometry and Archives of American Academy of Optometry*, 33, 569–579, https://doi.org/10.1097/00006324-195611000-00001.

*Investigative Ophthalmology & Visual Science*, 40, 2872–2883.

*Models of Accommodation* (pp. 287–339). Boston: Springer, https://doi.org/10.1007/978-1-4757-5865-8_8.

*IEEE Transactions on Biomedical Engineering*, BME-33, 1021–1028, https://doi.org/10.1109/TBME.1986.325868.

*Bulletin of Mathematical Biology*, 64, 285–299, https://doi.org/10.1006/bulm.2001.0274.

*Perception*, 15, 7–15, https://doi.org/10.1068/p150007.

*Journal of the Optical Society of America A, Optics and Image Science*, 3, 223–227, https://doi.org/10.1364/josaa.3.000223.

*Biological Cybernetics*, 54, 189–194, https://doi.org/10.1007/BF00356857.

*Vision Research*, 13, 1545–1554.

*Computer Programs in Biomedicine*, 4, 237–245, https://doi.org/10.1016/0010-468X(75)90037-9.

*Vision Research*, 33, 1397–1411, https://doi.org/10.1016/0042-6989(93)90046-y.

*Vision Research*, 26, 957–971, https://doi.org/10.1016/0042-6989(86)90153-7.

*Vision Research*, 130, 9–21, https://doi.org/10.1016/j.visres.2016.11.001.

*Journal of Vision*, 21(3), 21, https://doi.org/10.1167/jov.21.3.21.

*Documenta Ophthalmologica. Advances in Ophthalmology*, 46, 133–147, https://doi.org/10.1007/BF00174103.

*The Clinical Use of Prisms; and the Decentering of Lenses* (2nd ed.). Bristol, England: John Wright & Sons.

*Journal of Motor Behaviour*, 25, 203–216.

*British Journal of Ophthalmology*, 81, 476–480, https://doi.org/10.1136/bjo.81.6.476.

*Optometry and Vision Science*, 96, 424–433, https://doi.org/10.1097/OPX.0000000000001384.

*American Journal of Optometry and Archives of American Academy of Optometry*, 49, 389–400.

*Frontiers in Cellular Neuroscience*, 12, 524, https://doi.org/10.3389/fncel.2018.00524.

*The Journal of Physiology*, 159, 339–360.

*Ophthalmic & Physiological Optics*, 13(3), 266–284, https://doi.org/10.1111/j.1475-1313.1993.tb00469.x.

*Vision Research*, 19, 757–765, https://doi.org/10.1016/0042-6989(79)90151-2.

*Vision Research*, 19, 1359–1367, https://doi.org/10.1016/0042-6989(79)90208-6.

*The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society*, 3, 766–769, https://doi.org/10.1109/IEMBS.2004.1403271.

*Vision Research*, 45, 1237–1254, https://doi.org/10.1016/j.visres.2004.11.011.

*Vision Research*, 46, 242–258, https://doi.org/10.1016/j.visres.2005.09.030.

*Vision Research*, 26, 927–942, https://doi.org/10.1016/0042-6989(86)90151-3.

*Investigative Ophthalmology & Visual Science*, 27, 820–827.

*Vision Research*, 39, 3769–3795, https://doi.org/10.1016/s0042-6989(99)00094-2.

*Vision Research*, 42, 2409–2417, https://doi.org/10.1016/S0042-6989(02)00262-6.

*Chemical Engineering Progress*, 53, 217–219.

*Neurological Control Systems: Studies in Bioengineering* (pp. 220–230). Boston: Springer, https://doi.org/10.1007/978-1-4684-0706-8_10.

*IEEE Transactions on Systems Science and Cybernetics*, 1(1), 75–83, https://doi.org/10.1109/TSSC.1965.300064.

*Ophthalmic & Physiological Optics*, 9, 392–397.

*IEEE Transactions on Bio-Medical Engineering*, 37, 73–79, https://doi.org/10.1109/10.43618.

*Investigative Ophthalmology & Visual Science*, 35, 1157–1166.

*Microscopy Research and Technique*, 33, 390–439, https://doi.org/10.1002/(SICI)1097-0029(19960401)33:5<390::AID-JEMT2>3.0.CO;2-S.

*Physiological Reviews*, 52, 828–863.

*American Journal of Optometry and Archives of American Academy of Optometry*, 45, 483–506, https://doi.org/10.1097/00006324-196808000-00001.

*Optica Acta: International Journal of Optics*, 21, 843–860, https://doi.org/10.1080/713818858.

*Progress in Retinal and Eye Research*, 71, 114–131, https://doi.org/10.1016/j.preteyeres.2018.11.004.

*Scientific Reports*, 7(1), 16688, https://doi.org/10.1038/s41598-017-16854-9.

*Journal of Anatomy*, 88(Pt 1), 71–93.

*Journal of the Optical Society of America A, Optics, Image Science and Vision*, 19(5), 833–839.

*Vision Research*, 13, 2491–2503, https://doi.org/10.1016/0042-6989(73)90246-0.

*Current Eye Research*, 9(10), 971–975, https://doi.org/10.3109/02713689009069933.

*Vision Research*, 50, 1266–1273, https://doi.org/10.1016/j.visres.2010.04.011.

*Revue Neurologique*, 145(8–9), 613–620.

*a*_{RF} is included as an inhomogeneous "forcing" term. We handle this by defining *A*(*s*) and *D*(*s*) to be the Laplace transforms of *a*(*t*) − *a*_{RF} and *d*(*t*) − *a*_{RF}, respectively, where *a*(*t*) and *d*(*t*) are accommodation and demand as functions of time. In this way, we can effectively ignore *a*_{RF} when obtaining the transfer functions.
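This change of variables can be made concrete with a simple example (our illustration, assuming a first-order leaky-integrator plant, the simplest form consistent with the model; *u* denotes the plant's neural input and is our notation):

\[
\tau_{\mathrm{plant}}\,\dot{a}(t) = -\left(a(t) - a_{\mathrm{RF}}\right) + u(t)
\;\;\longrightarrow\;\;
\tau_{\mathrm{plant}}\,\dot{\tilde{a}}(t) = -\tilde{a}(t) + u(t),
\qquad \tilde{a} \equiv a - a_{\mathrm{RF}},
\]

so the Laplace transform of \(\tilde{a}\) obeys \(A(s) = U(s)/(1 + \tau_{\mathrm{plant}} s)\), with no forcing term, as required.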

*d*(*t*) − *a*(*t*). The input to the Controller block is *E*(*s*)exp(−*sT*_{sens}) (i.e., the defocus error signal after the sensory latency). The output from the Controller block is *C*(*s*)*E*(*s*)exp(−*sT*_{sens}), where *C*(*s*) is the transfer function of the Controller. After accounting for the motor latency, the input to the ocular plant is *C*(*s*)*E*(*s*)exp(−*sT*_{lat}). So, the output of the ocular plant (i.e., accommodation) is

*f*, *H*_{closed}(2π*jf*). The closed-loop gain as a function of demand frequency is therefore

where *P* = *P*(2π*jf*) and *C* = *C*(2π*jf*). The denominator contains oscillatory terms, which mean that even if *PC* is lowpass (i.e., a monotonically decreasing function of frequency), the denominator can be close to zero at particular frequencies and thus produce large resonances, for which the closed-loop gain exceeds 1. These manifest themselves as ringing or instability in the response to step changes in demand, and as gains >1 for sinusoidal oscillations in demand, which are not observed for large amplitudes.

With *T*_{lat} = 0.3 s and the plant being a leaky integrator with τ_{plant} = 0.156 s, Equation 15 has its first resonance at 1.2 Hz, where the closed-loop gain goes well above 1. This is ultimately responsible for the model's high-frequency peak in microfluctuations (Figure 15) and for the low-amplitude resonance in the response to sine waves (Figure 12), although the precise behavior also depends on the nonlinear clipping. The precise position of the first resonance depends on the gain of the proportional control, but only rather subtly, so we kept unit gain for simplicity.
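The resonance can be reproduced with a minimal numerical sketch (our construction, not the paper's Simulink model: unit-gain proportional control *C* = 1, leaky-integrator plant *P*(*s*) = 1/(1 + τ_{plant}*s*), and a pure delay exp(−*sT*_{lat})):

```python
import cmath
import math

T_LAT = 0.3        # sensorimotor latency (s)
TAU_PLANT = 0.156  # plant time constant (s)

def closed_loop_gain(f):
    """|P*C*exp(-s*T_lat) / (1 + P*C*exp(-s*T_lat))| at frequency f in Hz."""
    s = 2j * math.pi * f
    plant = 1.0 / (1.0 + TAU_PLANT * s)        # leaky-integrator plant P(s)
    open_loop = plant * cmath.exp(-s * T_LAT)  # C = 1 (unit-gain proportional)
    return abs(open_loop / (1.0 + open_loop))

# Scan 0.01-3 Hz: the gain peaks near 1.2 Hz, where the loop delay puts the
# fed-back signal almost in antiphase, so the closed-loop gain exceeds 1.
freqs = [0.01 * k for k in range(1, 301)]
f_peak = max(freqs, key=closed_loop_gain)
print(f_peak, closed_loop_gain(f_peak))
```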

*D*(*s*)exp(−*sT*_{sens}), with the exponential being the Laplace-domain representation of a time delay (cf. the discussion of Equation 2). It then passes through the demand predictor, which attempts to predict the signal *T*_{lat} = *T*_{sens} + *T*_{mot} into the future. If it did this perfectly, the output of the demand predictor would be *D*(*s*)exp(−*sT*_{sens})exp(+*sT*_{lat}) = *D*(*s*)exp(+*sT*_{mot}). To allow for the fact that demand is unlikely to be predicted perfectly, we will write the output as \(\hat{D}( s )\exp ( { + s{T}_{mot}} )\). \(\hat{D}( s )\) is the Laplace transform of the estimated future demand, again relative to the rest focus. That is, whereas *d*(*t*) is the actual demand at time *t*, \(\hat{d}( t )\) is the estimated demand at time *t*, as estimated at time (*t* − *T*_{lat}).

*A*(*s*)exp(+*sT*_{mot}). Equating these, we see that

*T*_{lat} will still be the same as the defocus it is receiving now:

The gain as a function of *f* is therefore the same as for the perfect predictor, whereas the phase is reduced by 2π*fT*_{lat}. In fact, the closed-loop gain would be the same for any demand predictor which accurately predicted demand any time at all into the future, even if, as here, that time is zero. Inaccurate predictions would, of course, change the closed-loop gain.
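The claim that the no-change predictor preserves gain while adding phase lag 2π*fT*_{lat} amounts to saying that its output differs from the perfect predictor's by a pure delay of *T*_{lat}. A one-line check (our sketch; the value *T*_{lat} = 0.3 s is from Table 2):

```python
import cmath
import math

T_LAT = 0.3  # total sensorimotor latency (s)

def delay_factor(f, t_lat=T_LAT):
    """Ratio of no-change to perfect predictor output: pure delay exp(-s*T_lat)."""
    s = 2j * math.pi * f
    return cmath.exp(-s * t_lat)

h = delay_factor(1.0)
print(abs(h))          # unit gain at every frequency
print(cmath.phase(h))  # phase -2*pi*f*T_lat (wrapped to (-pi, pi])
```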

exp(−*sT*_{lat}) term in the denominator can lead to local peaks in the gain at some frequencies. Thus, with inaccurate no-change prediction, the system is prone to open-loop resonances due to the inner feedback loop via the efference copy. However, with our parameter values (Table 2), Equation 18 is a monotonically decreasing function of frequency. This ensures that we do not see local peaks in the power spectrum of open-loop microfluctuations (Figure 15).

*s* = 2π*jf*. This is the transfer function of a second-order damped oscillator. We can rewrite it in the standard form

with ω_{0} the natural angular frequency:

The gain of the fast integrator has to be ≫1, say at least 5, to avoid excessive lag. (Mathematically, there are two solutions, but the other one gives a very short time constant for the controller, which in turn causes other problems, such as open-loop resonances in the noise.)

When this gain is ≫1, the natural frequency is approximately

τ_{plant} = 0.156 s corresponds to 0.72 Hz.

ω_{0}. In this region, for perfect demand prediction, *T*_{delay} = 2τ_{plant}. Presumably coincidentally, this delay is very similar to the sensorimotor latency, although as we can see it arises from a completely different source. However, for frequencies beyond ∼1 Hz, the phase asymptotes to 180° (Figure 7).
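Both limits can be verified numerically for a critically damped second-order lowpass. As a sketch (our construction: we take *H*(*j*ω) = 1/(1 + *j*ωτ_{plant})², i.e., critical damping with ω₀ = 1/τ_{plant}, one form consistent with *T*_{delay} = 2τ_{plant}):

```python
import cmath
import math

TAU_PLANT = 0.156  # plant time constant (s)

def phase(f):
    """Phase (radians) of H(j*w) = 1/(1 + j*w*tau)^2 at frequency f in Hz."""
    w = 2 * math.pi * f
    return cmath.phase(1.0 / (1.0 + 1j * w * TAU_PLANT) ** 2)

# Low-frequency region: phase ~ -w*(2*tau), i.e. an effective delay of
# 2*tau_plant ~ 0.31 s, similar in size to the sensorimotor latency.
f_low = 0.01
delay = -phase(f_low) / (2 * math.pi * f_low)
print(delay)

# High-frequency limit: the phase lag asymptotes to 180 degrees.
print(math.degrees(-phase(100.0)))
```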