Open Access
Article  |   November 2016
An active-efficient-coding model of optokinetic nystagmus
Author Affiliations
  • Chong Zhang
    Department of Electrical and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong
    czhangab@connect.ust.hk
  • Jochen Triesch
    Theoretical Life Sciences, Frankfurt Institute for Advanced Studies, Frankfurt, Germany
    triesch@fias.uni-frankfurt.de
  • Bertram E. Shi
    Department of Electrical and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong
    eebert@ust.hk
Journal of Vision November 2016, Vol.16, 10. doi:10.1167/16.14.10
Abstract

Optokinetic nystagmus (OKN) is an involuntary eye movement responsible for stabilizing retinal images in the presence of relative motion between an observer and the environment. Fully understanding the development of OKN requires a neurally plausible computational model that accounts for the neural development and the behavior. To date, work in this area has been limited. We propose a neurally plausible framework for the joint development of disparity and motion tuning in the visual cortex and of optokinetic and vergence eye-movement behavior. To our knowledge, this framework is the first developmental model to describe the emergence of OKN in a behaving organism. Unlike past models, which were based on scalar models of overall activity in different neural areas, our framework models the development of the detailed connectivity both from the retinal input to the visual cortex and from the visual cortex to the motor neurons. This framework accounts for the importance of the development of normal vergence control and binocular vision in achieving normal monocular OKN behaviors. Because the model includes behavior, we can simulate the same perturbations as past experiments, such as artificially induced strabismus. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature on both binocular vision and the optokinetic reflex. Finally, our model makes quantitative predictions about OKN behavior using the same methods used to characterize OKN in the experimental literature.

Introduction
Perception and behavior are intricately linked. Deficits in one often lead to deficits in the other. Despite this seemingly fragile codependency, in most cases both develop robustly and in tandem. Computational models of the joint development of perception and behavior in biological systems will lead not only to a better understanding of these processes but also, possibly, to corrective interventions when development goes awry, as well as to artificial robotic systems that exhibit robust and adaptive behavior in uncertain and nonstationary environments. 
Here we describe the application of one model of joint development, the active-efficient-coding framework, to model the optokinetic response. The active-efficient-coding framework extends Barlow's efficient-coding hypothesis (1961) to include behavior. It posits not only that neurons in the brain develop to encode the sensory stimulus efficiently but that behavior develops to shape the sensory input so that it can be efficiently encoded. The optokinetic reflex is an involuntary eye movement that stabilizes retinal images in the presence of relative motion between an observer and the environment. The eyes move to minimize retinal slip, the difference between the stimulus velocity projected onto the retina and the rotational velocity of the eye. In the presence of constant relative motion, the optokinetic response leads to a repetitive movement of the eyes known as optokinetic nystagmus (OKN). OKN consists of two phases: a slow one and a fast one. During the slow phase, the eyes move smoothly to stabilize the retinal image. The fast phase is a saccadic eye movement that is triggered based on a complex interaction of the eye position, eye velocity, and stimulus motion (Waddington & Harris, 2012, 2013), but is generally in the opposite direction. This type of nystagmus is often observed when looking sideways out of a moving vehicle, when it is called railway nystagmus. 
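The slow/fast-phase structure described above can be illustrated with a toy simulation (this sketch is purely illustrative and is not part of the model itself): the slow phase tracks a constant-velocity stimulus with a fixed gain, leaving a constant retinal slip, and a surrogate fast phase resets the eye once it drifts past a limit. The gain, stimulus velocity, and reset limit used here are arbitrary placeholder values.

```python
# Illustrative sketch of an idealized OKN trace (not the authors' model).
# Slow phase: the eye tracks the stimulus with a fixed gain, so the
# retinal slip (stimulus velocity minus eye velocity) stays constant.
# Fast phase: a saccade-like reset when the eye drifts past a limit.

def okn_trace(stim_vel=10.0, gain=0.9, limit=15.0, dt=0.05, steps=400):
    """Return (eye_positions, retinal_slips) over an idealized OKN run."""
    pos, positions, slips = 0.0, [], []
    for _ in range(steps):
        eye_vel = gain * stim_vel          # slow phase: track the stimulus
        slip = stim_vel - eye_vel          # residual retinal slip
        pos += eye_vel * dt
        if abs(pos) >= limit:              # fast phase: reset the eye
            pos = 0.0
        positions.append(pos)
        slips.append(slip)
    return positions, slips

positions, slips = okn_trace()
```

The resulting position trace is the characteristic sawtooth of nystagmus: slow ramps interrupted by abrupt resets.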
During binocular viewing, horizontal OKN in humans is normally symmetric from birth: It can be elicited in both the temporal-to-nasal (TN) and the nasal-to-temporal (NT) directions. However, during monocular viewing, horizontal OKN is asymmetric for infants younger than 3 months. Monocular OKN (mOKN) can be elicited in infants for stimuli moving in the TN direction but not the NT direction (Atkinson, 1979; Naegele & Held, 1982). The mOKN eventually becomes symmetric, but this is highly dependent upon the development of binocularity. For individuals with a developmental failure in binocularity—e.g., due to early strabismus (crossed eyes) or amblyopia (lazy eye)—the mOKN asymmetry persists into adult life (Braddick & Atkinson, 1981a, b; Crone, 1977; Tychsen, 1993). 
At birth, mOKN is asymmetric because OKN is mediated by a monocular subcortical pathway. This pathway passes through the nucleus of the optic tract (NOT) and the dorsal terminal nucleus (DTN; Hoffmann, 1981, 1986). The NOTs in the two hemispheres are directionally asymmetric in actuation: The left NOT drives the eyes to rotate leftward and the right NOT drives rightward rotation. Visual neurons in the left NOT are directly excited only by input from the right eye, and vice versa for the right NOT. Thus, right-eye input can only trigger leftward rotation and left-eye input can only trigger rightward rotation. This motion is in the TN direction for both eyes. The transition from asymmetric to symmetric mOKN is thought to reflect the development of an indirect binocular pathway from the visual cortex to the NOT. If binocularity does not develop normally—e.g., due to strabismus or amblyopia—mOKN will remain asymmetric even after the cortical pathway develops (Braddick, Atkinson, & Wattam-Bell, 2003). 
Fully understanding the development of the optokinetic response, and in particular the transition from asymmetric to symmetric mOKN, will require a neurally plausible computational model that simultaneously accounts for many things. First, it should model the development of ascending connections from the left and right eyes to the visual cortex and descending connections from the visual cortex to the NOT-DTN. Second, the model should account for the emergence of eye movements, since abnormalities in these (e.g., strabismus) interfere with the development of symmetric mOKN. Eye movements are critical, because they influence the statistics of the sensory input, which in turn influence the connectivity from the eyes to the visual cortex. Given the importance of the development of binocularity in the emergence of symmetric mOKN, it is important to incorporate the development of vergence eye movements, which ensure that the left and right eyes receive correlated input. Like the optokinetic response, vergence eye movements undergo developmental changes early in life. They first appear in infants at around 1 month (Aslin, 1977), and there is a significant improvement at 4 months (Mitkin & Orestova, 1988). 
To date, work in this area has been limited. The only quantitative models for the effect of monocular versus binocular visual input that we are aware of were presented by Hoffmann (1982) and by Kiorpes, Walton, O'Keefe, Movshon, and Lisberger (1996). To our knowledge, quantifying the effect of behavior has not been addressed at all. Hoffmann described a simple model in which the activation in the left and right NOT was represented by single scalar values. This activation consisted of two components, a direct monocular input and a binocular input from cortex, and was stimulus-direction dependent. Hoffmann showed that the differences between the model NOT activation values were comparable to the measured gain of OKN in the cat under a variety of conditions. Kiorpes et al. modeled the overall strength of connections from the two hemiretinae of the two eyes to the middle temporal (MT) area and from MT to the cortical pursuit systems. Although OKN and pursuit are distinct behaviors, they exhibit similar asymmetries due to strabismus. Based on their estimates of the weight parameters from measurements of pursuit and recordings from MT, Kiorpes et al. inferred that the source of the asymmetry lies in the connections from MT to the cortical pursuit systems rather than in deficits in visual motor processing. 
Although these models give insight into the general sources of asymmetry, they have a number of shortcomings. First, they adopt very coarse representations of neural activity, where the activations of entire brain areas and connections between them are represented by a single scalar value. Second, they are not developmental, as gain parameters are fixed and do not evolve; thus, they cannot explain what drives the developmental changes. Third, they cannot be applied to realistic visual input such as images. Finally, it is not obvious how they could be modified to include behavior. 
This article describes a quantitative model for the joint development of disparity and motion tuning in the visual cortex, the optokinetic response, and vergence eye movements. This model takes as inputs image sequences sensed by the left and right eyes as the organism behaves within its environment. These image sequences are affected by both motion in the environment (e.g., object motion) and motion by the observer (e.g., eye movements). The model accounts for the development of ascending connections from the eyes to the visual cortex, and their dependence upon the statistics of visual signals. It also models visually driven eye movements, including the subcortical control of OKN, and the development of cortical control of OKN and vergence eye movements. This development takes place as the organism behaves within the environment. 
This work makes several contributions. First, to our knowledge it is the first developmental model to describe the emergence of OKN in a behaving organism. The model accounts for the development of both sensory processing and motor action. Second, it is the first model of OKN development that can be applied directly to input stimuli used experimentally to quantify OKN behavior, and in particular the asymmetry of mOKN. This enables us to compare the behavior exhibited by the model directly to the results in the experimental literature. Our model agrees both qualitatively and quantitatively with a number of findings on the degree of mOKN asymmetry after normal and strabismic development. Third, it is the first model of OKN development that explicitly takes into account the effect of vergence commands and that can model normal and abnormal development of vergence commands and their effect on mOKN. Finally, it results in testable predictions about the quantitative changes in mOKN asymmetry under different developmental conditions. 
Model description
This section describes our developmental model for OKN. The first subsection gives an overview of the active-efficient-coding framework, which we use to model the joint development of behavior and eye-movement control. The second subsection reviews current knowledge regarding the neural pathways controlling OKN and hypotheses about their development. The final subsection describes how we model the neural pathway and its control of behavior. The overall structure of our model is consistent with previously proposed models (Hoffmann, 1983; Kiorpes et al., 1996; Masseck & Hoffmann, 2009; Tychsen, 1999). The discussion in this section is primarily qualitative. The Appendix contains a formal mathematical description and a listing of parameter settings used in our simulations. 
The active-efficient-coding framework
Our developmental model is based upon the active-efficient-coding framework (Vikram, Teulière, Zhang, Shi, & Triesch, 2014; Zhang, Zhao, Triesch, & Shi, 2014; Zhao, Rothkopf, Triesch, & Shi, 2012), an extension of Barlow's efficient-coding hypothesis (1961). The efficient-coding hypothesis posits that neural populations develop so that they can best represent the sensory data with as little redundancy or wasteful activity as possible. One consequence of this hypothesis is that the neural code is adapted to the statistics of the sensory input. The active-efficient-coding hypothesis extends this to include behavior: It posits that in addition, the organism behaves so that its sensory input can be efficiently encoded. 
The framework is illustrated in Figure 1. As the organism behaves in the environment, its sensory input changes. The efficient-coding hypothesis predicts that the properties of the sensory neurons depend upon the statistics of this sensory input. The outputs of these sensory neurons in turn drive behavior. Under the active-efficient-coding hypothesis, this behavior also develops so that the input can be efficiently encoded by the neural population. Both perception (determined by the wiring from sensory input to the sensory neurons) and behavior (determined by the wiring from sensory neurons to motor neurons) develop simultaneously as the organism behaves in the environment. 
Figure 1
 
The active-efficient-coding framework.
The neural pathway of OKN
The neural substrate of OKN is illustrated in Figure 2. The motor neurons in the left (right) NOT only drive leftward (rightward) rotation. Visual information reaches the NOT directly via retinofugal projections and indirectly via the visual cortex. The direct subcortical pathway is monocular: Visual information flows from the nasal hemiretina of each eye to the contralateral NOT, as shown by the solid green line and the dashed blue line. At birth, only this subcortical pathway is functional. With monocular viewing, the optokinetic response is triggered only if the target is moving in the TN direction and only if the target is seen by the nasal hemiretina. 
Figure 2
 
The neural pathway mediating OKN. Boxes with arrows indicate the NOT. Boxes labeled VC indicate the visual cortex; CC: corpus callosum; LE: left eye; RE: right eye. The visual processing path is shown in blue for the left eye and green for the right eye. Solid lines indicate information from the right visual field, and dashed lines indicate information from the left visual field.
The cortical pathway to the NOT, which develops after birth, consists of a descending input from the ipsilateral primary visual cortex (Hoffmann, 1981, 1989). Information from the contralateral visual cortex also reaches the NOT through this descending connection via the corpus callosum. Visual input from the left (right) hemifield in both eyes is routed to the right (left) cortex. This indirect cortical pathway provides a pathway via which the NOT can receive information from the ipsilateral eye. 
This simple routing argument is not sufficient to explain the development of symmetric mOKN, which is highly dependent upon the development of binocularity—i.e., the combination of binocular information in the cortex. In the macaque, ocular dominance columns in the visual cortex are well formed before visual experience (Horton & Hocking, 1996), whereas disparity-selective neurons are found only a few days after birth, and the spatial-frequency response properties of these neurons take several weeks to mature (Chino, Smith, Hatta, & Cheng, 1997). The development of disparity selectivity depends critically upon the eyes receiving correlated input (Hubel & Wiesel, 1965); this in turn depends upon the development of vergence control. 
There have been two proposed mechanisms by which symmetry may fail to develop. One possibility is that a disruption of binocular vision may interfere with the development of the cortical connections to the NOT-DTN (Atkinson, 1979). Under this hypothesis, OKN is due to the subcortical pathway, which, as already pointed out, is asymmetric. However, this hypothesis seems unlikely, since it appears that the subcortical visual pathway supporting OKN eventually disappears or becomes significantly weaker during the course of normal development (del Viva, Morrone, & Fiorentini, 2001; Lynch & McLaren, 1983). This suggests that the subcortical visual pathway serves as initial developmental scaffolding that aids in the development of the cortically driven OKN (Braddick et al., 2003). Thus, it appears more likely that the asymmetry is due to abnormalities in the descending cortical inputs to the NOT. Tychsen (1999) has suggested that cortical neurons from ocular dominance columns serving the contralateral eye preferentially drive the NOT, similar to the contralateral bias seen in the subcortical pathway. During normal binocular development, horizontal connections emerge between ocular dominance columns. These provide the pathway through which information from both the ipsilateral and contralateral eyes can reach the NOT, enabling mOKN to become symmetric. Tychsen suggests that strabismus or amblyopia interferes with the development of these horizontal connections, leaving only input from the contralateral eye. In this case, mOKN remains asymmetric. 
Note that the direct subcortical and indirect cortical visual pathways we refer to here are not the same as the direct and indirect pathways governing the temporal characteristics of OKN and optokinetic after-nystagmus (OKAN), which are preserved (Cohen, Matsuo, & Raphan, 1977; Cohen, Reisine, Yokota, & Raphan, 1992; Raphan, Matsuo, & Cohen, 1979). Our model focuses upon the steady-state, rather than transient, characteristics of OKN, and thus is more consistent with the indirect or “velocity storage” mechanism, which appears to hold or store the activity producing slow-phase eye velocity. 
Developmental model of the optokinetic reflex
In this section, we give a general description of the model. The Appendix describes the model in complete mathematical detail. 
The detailed connectivity in our developmental model is shown in Figure 3. The model includes visually guided control of both OKN and vergence (VG). Visual information for OKN control follows both a direct monocular subcortical pathway and an indirect binocular cortical pathway. Visual information for VG control follows only the binocular cortical pathway. The connections in the subcortical pathway are hardwired to implement OKN control; however, because this pathway is monocular, mOKN is asymmetric. The connections in the cortical pathway, both from the retina to the cortical sensory neurons and from the cortical sensory neurons to the motor neurons controlling OKN and VG, are initialized randomly and are learned as the agent behaves in the environment. In our simulations, we considered only horizontal eye movements, since most studies in the literature focus on horizontal, rather than vertical, OKN (Knapp, Proudlock, & Gottlob, 2013). However, the model could be extended to handle vertical eye movements in a straightforward manner (Vikram et al., 2014; Zhang et al., 2014). 
Figure 3
 
The developmental model of the optokinetic reflex. Blue (green) lines represent the flow of information for the left (right) eye. Solid (dashed) lines represent the flow of information from the right (left) visual field. Sensory neurons are shown as blue or green circles. Subcortical sensory and motor neurons in the NOT are represented using solid circles. Cortical neurons are represented using half-green, half-blue circles. Motor neurons are represented using red circles. VA: vector averaging; AS: action selection; OKN: optokinetic nystagmus; VG: vergence. The input vector from image patches is denoted x; y_S and y_C represent the subcortical and cortical sensory neurons' responses; z_OKN and z_VG represent the subcortical motor neurons' responses controlling OKN and VG; u_OKN and u_VG are the commands generated by the two systems; θ_OKN represents the version angle; and θ_VG represents the VG angle. “L” and “R” are abbreviations for the left and right hemispheres. To avoid clutter, we omit the time index.
Our model includes inputs from both foveal and peripheral regions of the retinae (Lonini et al., 2013). The fovea is assumed to be a square region subtending 7° of visual angle and centered on the optical axis. The periphery is assumed to be a square region subtending 25° of visual angle and centered on the optical axis. Although there is a difference between the optical axis and the line of sight in humans, which changes over the first few months of life (Riddell, Hainline, & Abramov, 1994), we do not anticipate that this would have a significant impact on the model. The changes would primarily introduce slight alterations to the geometry of image projection onto the fovea and periphery. Since the connections from retina to cortex are adaptive in our model, we expect that our model could adapt to these differences. From a simulation point of view, the assumption that the optical axis and the line of sight coincide is convenient, since we model image formation using planar perspective projection. The difference between planar perspective projection and spherical perspective projection, which would be a better model of image formation in the eye, is smallest near the optical axis. 
Both fovea and periphery are spatially sampled, but we use a coarser sampling of the periphery so that the images representing both regions are 55 × 55 pixels. Note that the periphery region includes information from the fovea region, although at a coarser scale. These images are divided into a 10 × 10 array of 10 × 10–pixel patches, which overlap by 5 pixels horizontally and vertically. Our model also discretizes time at a sampling rate of 20 frames/s. This sampling rate is high enough to cover the range of peak temporal-frequency sensitivities in infants and adults (less than or equal to 10 Hz; Dobkins, Anderson, & Lia, 1999; Rasengane, Allen, & Manny, 1997) without temporal aliasing. Our model does not account for the effects of temporal-frequency selectivity, but we do not expect this to affect our results, since we do not consider OKN transients. 
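The patch decomposition described above can be sketched as follows: a 55 × 55 image tiled by 10 × 10 patches at a stride of 5 pixels yields exactly the 10 × 10 grid of overlapping patches stated in the text. The function name is ours; the sketch assumes only the geometry given above.

```python
import numpy as np

# Sketch of the patch decomposition described in the text: a 55x55
# image is divided into a 10x10 grid of 10x10-pixel patches that
# overlap by 5 pixels horizontally and vertically (i.e., stride 5).

def extract_patches(image, patch=10, stride=5):
    h, w = image.shape
    patches = [image[r:r + patch, c:c + patch].ravel()
               for r in range(0, h - patch + 1, stride)
               for c in range(0, w - patch + 1, stride)]
    return np.stack(patches)        # shape: (num_patches, patch * patch)

patches = extract_patches(np.zeros((55, 55)))
# 10 patch positions per dimension -> 100 patches of 100 pixels each
```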
Processing by the left and right hemispheres of the brain is modeled separately. Patches from the left hemiretina are passed to the left hemisphere in the model, and from the right hemiretina to the right hemisphere. Thus, each hemisphere processes visual information from the contralateral visual hemifield. 
The subcortical pathway models the sensorimotor transformations in the NOT region. It is active during the training of the cortical connections but disabled when the cortical pathway is tested. This is an approximation to the idea that the subcortical pathway is a scaffolding that supports the development of the cortical connections but becomes less important later in life (Braddick et al., 2003). 
Subcortical sensory neurons in the left hemisphere are assumed to respond to the retinal slip at the optic axis with Gaussian tuning curves tuned only to leftward motion in the right eye. Subcortical motor neurons in the left hemisphere trigger only conjugate eye rotations in the leftward direction; the opposite is true for neurons in the right hemisphere. There are an equal number (11) of sensory and motor neurons in each hemisphere, which are tuned to the same preferred slips and preferred rotations in units of degrees of visual angle per second. One-to-one connections between corresponding sensory and motor neurons ensure that OKN functions correctly from the start, modeling the fact that the subcortical pathway is functional at birth. However, because this pathway is monocular and neurons in each NOT receive input only from the contralateral eye, it exhibits asymmetric mOKN, as discussed earlier. 
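A minimal sketch of one hemisphere's subcortical population, under our reading of the description above: 11 sensory neurons with Gaussian tuning to retinal slip are wired one-to-one to motor neurons with matching preferred rotations, and the motor command is read out as a population-weighted average. The preferred slips and the tuning width σ below are illustrative values, not the parameter settings given in the paper's Appendix.

```python
import numpy as np

# Sketch of the left-hemisphere (left NOT) subcortical population:
# 11 Gaussian-tuned sensory neurons responding to leftward retinal
# slip, connected one-to-one to motor neurons with matching preferred
# rotations. Preferred values and sigma are illustrative placeholders.

PREFERRED = np.linspace(-40.0, 0.0, 11)   # leftward slips only (deg/s)
SIGMA = 5.0                               # tuning width (illustrative)

def sensory_response(slip):
    """Gaussian tuning curves evaluated at a given retinal slip."""
    return np.exp(-(slip - PREFERRED) ** 2 / (2 * SIGMA ** 2))

def motor_command(slip):
    """One-to-one wiring: weighted average of preferred rotations."""
    y = sensory_response(slip)
    return float(np.dot(y, PREFERRED) / y.sum())
```

Because each NOT's preferred rotations cover only one direction, a slip in the non-preferred direction produces no tracking drive, reproducing the asymmetry discussed in the text.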
The connections from the retina to the cortical sensory neurons are learned according to a sparse coding model. The efficient-coding hypothesis has been modeled mathematically by sparse coding algorithms (Olshausen & Field, 1997). Image intensities in corresponding patches from the left and right eyes and from current and previous frames are concatenated into a single input vector. Combining inputs from different eyes and different frames enables the model neurons to exhibit both disparity and motion tuning (Vikram et al., 2014). The sparse coder seeks to approximate the input vector as a sparse weighted sum of basis vectors chosen from an overcomplete dictionary. Within each scale, different patches are encoded independently but use the same dictionary of 600 basis vectors. The left and right hemispheres are modeled separately and the fovea and periphery use separate dictionaries, so that a total of four dictionaries are learned. Each basis vector is roughly analogous to the receptive field of a disparity- and motion-tuned simple cell in the primary visual cortex. The corresponding coefficient in the weighted sum corresponds to the activation of that simple cell. We compute the responses of model complex cells by pooling the squared coefficients from the same basis vector over the different patches. This results in a population of 600 complex cells from the fovea and 600 from the periphery. During development, the basis vectors are updated to best represent the statistics of the input vectors. At each time step, they are updated to minimize the total squared reconstruction error between the input and the sparse weighted sum of basis vectors summed over all patches. 
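The encode-and-pool computation just described can be sketched as follows. We use simple matching pursuit as a stand-in sparse coder (the Appendix specifies the algorithm actually used in the simulations); the dictionary contents, dimensions, and sparsity level here are placeholders.

```python
import numpy as np

# Sketch of the cortical encoding step: each concatenated patch vector
# is approximated as a sparse combination of dictionary basis vectors
# (here via greedy matching pursuit, a stand-in for the paper's sparse
# coder), and model complex cells pool squared coefficients over patches.

def matching_pursuit(x, D, n_active=10):
    """Greedy sparse code; D has unit-norm columns, shape (dim, n_bases)."""
    residual, coeffs = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_active):
        proj = D.T @ residual                 # correlate with each basis
        k = int(np.argmax(np.abs(proj)))      # best-matching basis vector
        coeffs[k] += proj[k]
        residual -= proj[k] * D[:, k]         # explain away that component
    return coeffs

def complex_cell_responses(patches, D, n_active=10):
    """Pool squared simple-cell coefficients across all patches."""
    C = np.stack([matching_pursuit(p, D, n_active) for p in patches])
    return (C ** 2).sum(axis=0)               # one response per basis vector
```

Updating the dictionary to minimize the reconstruction error over such codes is what drives the basis vectors toward disparity- and motion-tuned receptive fields.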
Our model assumes that these cortical neurons connect to motor neurons that drive both the OKN and VG eye-movement behavior. This is consistent with findings that in adults these systems share extensive amounts of neural circuitry (Fukushima et al., 2002). These findings may not be as strong in early development. In our model, these connections are initially set to small random values and grow in magnitude during development. 
The connections between the cortical sensory neurons and the motor neurons controlling OKN are learned according to a Hebbian learning rule. The bias toward connections from cortical neurons with a strong input from the contralateral eye is implemented by adding a regularizer that penalizes the size of the weights from each cortical cell to the left or right NOT according to the ocular dominance index of the cortical neurons. Connections from cortical sensory cells whose basis vectors are dominated by the ipsilateral eye are strongly penalized. Connections from cortical sensory cells whose basis vectors are balanced or dominated by the contralateral eye are favored. Motor commands to update the version angle of the two eyes are generated from the motor neuron responses by vector averaging and are temporally smoothed by an exponential weighting function. 
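The Hebbian update with an ocular-dominance regularizer can be sketched as below, under our reading of the description: the correlation term strengthens co-active sensory-motor pairs, while a penalty proportional to ipsilateral dominance shrinks the corresponding weights. The penalty function, learning rate, and regularization strength are illustrative choices, not the paper's exact ones.

```python
import numpy as np

# Sketch of the Hebbian rule with an ocular-dominance (OD) penalty:
# connections from cortical cells dominated by the ipsilateral eye are
# shrunk, while balanced or contralaterally dominated cells are favored.
# The penalty shape and rates below are illustrative assumptions.

def od_penalty(od_index):
    """od_index in [-1, 1]: -1 contralateral-dominated, +1 ipsilateral."""
    return np.maximum(od_index, 0.0)     # penalize only ipsilateral dominance

def hebbian_update(W, y, z, od_index, lr=0.01, reg=0.1):
    """W: (n_motor, n_sensory); y: sensory responses; z: motor responses."""
    dW = lr * np.outer(z, y)                         # Hebbian correlation term
    decay = reg * od_penalty(od_index)[None, :] * W  # OD-weighted shrinkage
    return W + dW - decay
```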
The connections between the cortical sensory neurons and the motor neurons controlling VG are learned by reinforcement learning using the natural actor–critic algorithm (Bhatnagar, Sutton, Ghavamzadeh, & Lee, 2009). The joint learning of disparity tuning and VG control during behavior has been described in detail elsewhere (Zhao et al., 2012). VG commands are selected probabilistically by sampling one of the motor neurons according to a softmax probability distribution that depends on the activation. 
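The probabilistic command selection can be sketched as follows: one vergence motor neuron is sampled from a softmax distribution over the population's activations. The temperature parameter is our illustrative addition; the actor-critic machinery that shapes the activations is described in the Appendix and in Zhao et al. (2012).

```python
import numpy as np

# Sketch of softmax action selection for vergence: sample one motor
# neuron (and hence one command) with probability proportional to the
# exponentiated activation. Temperature is an illustrative parameter.

def softmax(a, temperature=1.0):
    e = np.exp((a - a.max()) / temperature)   # subtract max for stability
    return e / e.sum()

def sample_vergence_command(activations, commands, rng):
    """Pick one vergence command according to softmax probabilities."""
    p = softmax(np.asarray(activations, dtype=float))
    return commands[rng.choice(len(commands), p=p)]
```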
The VG and version angles determined by the generated motor commands are combined to determine the left- and right-eye positions. If the eye positions reach a limit of motion (here set to approximately ±18°), a recentering reflex is triggered to return the eyes to the center of their motion range. This recentering reflex is a simplified version of the fast phase observed in OKN. The actual timing, amplitude, and direction of the fast phase are variable (Waddington & Harris, 2012) rather than fixed. The fast phase of OKN also appears to take on object-targeting properties like saccades (Harrison, Freeman, & Sumner, 2014, 2015). The angular range covered by the slow phase is also typically smaller than 18°, which was chosen here for convenience to avoid the need to handle a large number of fast phases. Since our model is concerned primarily with the behavior in the slow phase, we do not anticipate much difference if a more accurate model of the fast phase, which would require additional modeling assumptions, is used. 
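The combination of version and VG angles into individual eye positions, with the ±18° recentering reflex, can be sketched as follows. The symmetric split of the vergence angle across the two eyes is our assumed sign convention; the Appendix defines the exact kinematics.

```python
# Sketch of how version and vergence combine into eye positions, plus
# the recentering reflex at the +/-18 deg limit. The symmetric split of
# vergence across the eyes is an assumed convention, not the paper's
# stated kinematics.

LIMIT = 18.0  # motion-range limit in degrees (from the text)

def eye_positions(version, vergence):
    """Each eye rotates by half the vergence angle about the version angle."""
    return version + vergence / 2.0, version - vergence / 2.0

def apply_recentering(version):
    """Fast-phase surrogate: snap the version angle back to center."""
    return 0.0 if abs(version) >= LIMIT else version
```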
Experimental results
Note that the learning of perception and behavior takes place simultaneously in our model. Their development is mutually interdependent: Behavior changes the statistics of the input, which changes the properties of the sensory neurons, and since the sensory neurons drive the motor neurons, these changes in turn alter the behavior. Thus, it is critical not only to simulate the neural activity and plasticity within the agent but also to place this agent within an environment. 
We use the simulation environment developed for the iCub humanoid robot (Tikhanoff et al., 2008) to generate the images seen by the two eyes in response to changes in the eye positions and in the environment. This enables us to simulate behavior and perception in the model within a realistic yet controlled environment. 
During training, a large planar target is placed in front of the simulated agent. The target depth is chosen randomly from a uniform distribution between 0.3 and 2 m. The physical size of the target is fixed, but it subtends varying visual angles depending upon its depth (141° at 0.3 m, 80° at 1 m, and 46° at 2 m). A fixed background target is placed 2 m away from the agent. Its size is chosen so that it covers 90° of visual angle. 
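The quoted angles follow from planar geometry: a target of fixed physical size s subtends 2·atan(s / 2d) at depth d, so the angle shrinks as depth grows. The target size of about 1.68 m below is our back-calculation from the 80° figure at 1 m; the paper specifies the target in the simulator rather than through this formula.

```python
import math

# Worked check of the subtended visual angles: a planar target of fixed
# size s at depth d subtends 2 * atan(s / (2 * d)). The size 1.68 m is
# back-calculated from the quoted 80 degrees at 1 m depth (our inference).

def subtended_angle_deg(size_m, depth_m):
    return math.degrees(2.0 * math.atan(size_m / (2.0 * depth_m)))

s = 1.68  # inferred target size in meters
# Gives roughly 141 deg at 0.3 m, 80 deg at 1 m, and 46 deg at 2 m.
```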
Targets move with constant horizontal speeds for periods of 1 s. During each period, a different image chosen randomly from a database of natural images (Geisler & Perry, 2011) is mapped onto the planar target. The target velocity within each period is chosen independently from a uniform distribution between −40°/s and 40°/s. The eyes' velocities are constrained to lie between −48°/s and 48°/s. Thus, possible retinal slips range from −88°/s to 88°/s. In response to the visual input, the iCub robot makes version and VG eye movements as described in the previous section. 
During development, the subcortical pathway is active. Updates to the neural connections to the cortical sensory neurons and from the cortical sensory neurons to the motor neurons proceed in tandem with the behavior. The version angle is driven by both the subcortical and cortical sensory neurons. The VG angle is driven only by the cortical sensory neurons. 
To model the effect of strabismus, we fix the VG angle to a constant value, independent of the VG motor neuron output. In particular, unless otherwise noted, we introduce an esotropic strabismus, where the VG is fixed to 20°, corresponding to fixation at 0.25 m in front of the agent when the version angle is 0°. 
After development, the subcortical pathway is disabled, consistent with evidence that it does not play a significant role after the cortical pathway develops (Braddick et al., 2003). The neural connections to the cortical sensory neurons and from the cortical sensory neurons to the motor neurons are fixed at their values after training. Thus, both version and VG are driven solely by input from the same set of cortical sensory neurons. 
Properties of the cortical sensory neurons
Figure 4 shows the basis vectors (analogous to receptive fields) from the cortical sensory neurons in the left hemisphere that receive input from the right-eye fovea learned under both the normal and strabismic cases. The results are similar for the right hemisphere. Each basis vector is presented as a 2 × 2 array of image patches. We can study disparity tuning by comparing the patches in the left and right columns, and motion tuning by comparing the patches in the top and bottom rows. With normal development, almost all the neurons are binocular. With strabismus, most of the neurons are monocular. 
Figure 4
 
The basis vectors of the foveal cortical sensory neurons from the left hemisphere learned under (a) normal and (b) strabismic cases. Since the input vector is a concatenation of image intensities from four patches—left/right eye at current/past frames—each basis vector is presented as four images arranged in a 2 × 2 array. Left and right columns correspond to the left and right eyes, and top and bottom rows correspond to current and past frames. The 600 basis vectors are presented in a 15 × 40 array. For clarity, we show larger views of four representative basis vectors highlighted in red in (a, b) for both the (c) normal and (d) strabismic cases.
We quantify the binocularity of the basis vectors by computing their ocular dominance indices (see Appendix) and binning them into bins numbered from 1 for contralateral monocular to 7 for ipsilateral monocular. Bin 4 corresponds to vectors with equal weighting for both eyes. Figure 5 shows the ocular dominance (OD) histogram computed for the basis vectors shown in Figure 4. Our model produces results similar to those observed by Hubel and Wiesel (1965, e.g., figure 5). With normal development, most neurons fall into Bin 4 and there are few monocular neurons. For the strabismic case, most neurons are monocular, and only a few binocular neurons are in Bin 4. 
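The exact index definition is in the paper's Appendix; the sketch below is only a plausible illustration of the binning step, assuming the index is derived from the relative response energy of each eye's subfields:

```python
def ocular_dominance_bin(contra_energy, ipsi_energy, n_bins=7):
    """Map relative contralateral/ipsilateral response energy to an OD
    bin: 1 = purely contralateral, 7 = purely ipsilateral, 4 = balanced.
    Assumes at least one eye has nonzero energy."""
    # Normalized index in [-1, 1]: -1 contralateral, +1 ipsilateral.
    odi = (ipsi_energy - contra_energy) / (ipsi_energy + contra_energy)
    # Linearly map [-1, 1] onto integer bins {1, ..., n_bins}.
    return int(round((odi + 1.0) / 2.0 * (n_bins - 1))) + 1
```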
Figure 5
 
OD histogram of the foveal cortical sensory neurons from the left hemisphere under (a) normal and (b) strabismic cases. Neurons in Bin 1 respond only to contralateral monocular input. Neurons in Bin 7 respond only to ipsilateral monocular input. Neurons in Bin 4 respond equally to both contralateral and ipsilateral monocular input.
For both the normal and strabismic cases, the basis vectors show similar motion tuning. To characterize the motion tuning, we fitted 2-D spatial Gabor functions to the four image patches of each basis function. The Gabor functions had the same spatial frequency and Gaussian covariance matrices but different absolute phase parameters. For the binocular neurons (with OD indices in Bins 3 to 5) we fitted all four image patches and computed the preferred slip from the difference between the absolute phase parameters in the top and bottom rows averaged across the two columns. For monocular neurons, we computed the fits and phase difference only for the dominant eye. Figure 6 shows the preferred slips and directions for the sensory neurons from the left hemisphere (fovea) learned under (a) normal and (b) strabismic conditions. The preferred directions are almost uniformly distributed. Most preferred slips lie below 10°/s. 
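The conversion from the fitted phase difference to a preferred slip can be sketched as follows. Units and names are our assumptions (spatial frequency in cycles per degree, one frame between the two rows): a phase advance of Δφ radians at spatial frequency f corresponds to a displacement of Δφ/(2πf) degrees per frame.

```python
import math

def preferred_slip_deg_per_s(phase_now, phase_past, spatial_freq_cpd,
                             frame_rate_hz):
    """Preferred retinal slip implied by the phase shift between the
    current- and past-frame Gabor fits of one basis vector."""
    # Wrap the phase difference into (-pi, pi] to avoid aliasing jumps.
    dphi = (phase_now - phase_past + math.pi) % (2 * math.pi) - math.pi
    displacement_deg = dphi / (2 * math.pi * spatial_freq_cpd)
    return displacement_deg * frame_rate_hz
```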
Figure 6
 
Polar plot of the distribution of preferred slips and directions for the sensory neurons from the left hemisphere (fovea) learned under (a) normal and (b) strabismic conditions. The angular coordinate indicates the preferred direction, and the radial coordinate indicates the preferred slips in units of degrees per second. We use logarithmic scaling for the radial direction, so there is a singularity at the origin.
OKN behavior
Figure 7 shows the right-eye position trajectories of the agent after development under normal and strabismic conditions in response to monocular input generated by a large planar object moving with a speed of 30°/s in either the NT or TN direction. The object was placed 1 m in front of the agent and was large enough (covering 80° of visual angle horizontally and vertically) that it covered the entire visual field. Behavior for the left eye was similar. 
Figure 7
 
Eye-position trajectory in response to a large planar object moving at 30°/s in either the TN (a, c) or NT (b, d) direction after development under either (a, b) normal or (c, d) strabismic conditions.
The eye trajectories demonstrate nystagmus: There is a repeating pattern of slow and fast phases. During the slow phase, the eye tracks the object as it moves. When the eye reaches the limit of its motion (approximately ±18°), the fast phase returns the eye quickly to the center of its range. After normal development, mOKN is symmetric. The curves in response to NT or TN motion are similar, except for a change in the sign of the velocity. However, with strabismus, mOKN is asymmetric. During the slow phase, the slope for the NT stimulus is lower than the slope for the TN stimulus, indicating that the eye cannot track motion in the NT direction accurately. As a consequence, the number of fast/slow cycles (known as beats) during the same stimulus duration is lower for the NT stimulus than for the TN stimulus. The absolute rate at which fast phases occur is small compared to empirical data due to the simplifications we made in modeling the fast phase. However, since our measurement of asymmetry depends upon ratios as described later, we do not expect a more accurate model of the fast phase to change those measurements by much. 
Following the experimental literature, we compute two quantitative measures for the asymmetry of mOKN behavior: the nasal bias index (NBI; Tychsen, 2007) and the asymmetry index (ASI; Reed et al., 1991). The NBI is based on the difference between V_TN and V_NT, the slow phase velocities during TN and NT motion:

NBI = (V_TN − V_NT) / (V_TN + V_NT)

The NBI is 0 for symmetrical mOKN and approaches 1 for strongly asymmetrical mOKN. The ASI is based upon the difference between the number of beats (fast/slow cycles) for TN and NT stimuli of the same length. Denoting the number of beats for NT and TN motion by m_NT and m_TN, we define

ASI = (m_TN − m_NT) / (m_TN + m_NT)
The ASI is 0 for symmetrical mOKN and approaches 1 for strongly asymmetrical mOKN. Following Reed et al. (1991), we will consider values of ASI greater than 0.25 to indicate asymmetric mOKN. In computing the NBI and ASI, we averaged over 100 trials in which the target moves with a constant angular speed of 30°/s for 10 s. 
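Both indices are straightforward to compute from measured slow-phase velocities and beat counts. A minimal sketch consistent with the verbal definitions above (both indices are 0 for symmetrical mOKN and approach 1 for strongly asymmetrical mOKN):

```python
def nasal_bias_index(v_tn, v_nt):
    """NBI (Tychsen, 2007): ratio of the TN/NT slow-phase velocity
    difference to their sum; 1 when NT tracking fails entirely."""
    return (v_tn - v_nt) / (v_tn + v_nt)

def asymmetry_index(m_tn, m_nt):
    """ASI (Reed et al., 1991): the same ratio applied to the number
    of beats for TN- and NT-directed stimuli of equal duration."""
    return (m_tn - m_nt) / (m_tn + m_nt)
```

For example, 9 beats under TN motion versus 3 under NT motion gives an ASI of 0.5, well above the 0.25 threshold for asymmetric mOKN.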
The mOKN asymmetry after development under the strabismic condition depends upon the bias against ipsilateral connections and toward contralateral connections, which we control by a nonnegative parameter G. Larger values of G create stronger biases against connections to the motor neurons from cortical neurons with significant input from the ipsilateral eye. The detailed description of the parameter is in the Appendix. Figure 8 shows that as the bias increases, the mOKN asymmetry increases. Tychsen (2007) tested mOKN asymmetry in primates with esotropia with visual stimuli moving at 30°/s and found the NBI value to be around 0.35, matching our results with G = 0.012. Thus, we fixed G = 0.012 in the simulations reported here, including the results in Figure 7. The ASI computed at the end of learning is 0.05 in the normal condition and 0.50 in the strabismic condition. 
Figure 8
 
The mOKN asymmetry as measured by NBI increases with increases in the parameter G controlling the bias against connections to the NOT from cortical cells with strong ipsilateral-eye input. The NBI was measured for the right eye for a stimulus speed of 30°/s. The curve continues to increase beyond the plotted range.
Figure 9 shows the mean slow phase velocity as a function of the stimulus speed for both (a) normal and (c) strabismic development. Eye velocity was computed using a central difference algorithm after removing the fast phase. For each velocity, the stimulus was presented for 10 s in each of 100 trials. The eye velocity was computed starting 1 s after stimulus onset in each trial to exclude transient components. The mean slow phase velocity was averaged over all periods and trials. After normal development, the slow phase velocity is nearly identical for stimuli moving in both the TN and NT direction. Thus, the NBI shown in Figure 9b is close to 0 for all stimulus speeds. On the other hand, there is a downward shift in the slow phase velocity for the NT stimulus for the strabismic case, as shown in Figure 9c, corresponding to the larger NBI values shown in Figure 9d. For slower stimuli (10°/s), the NBI was close to 1, since the slow-phase velocity in the NT direction is close to 0. Similar to the results of Waddington and Harris (2012, figure 2), we observe a saturation in the mean slow phase velocity at higher stimulus speeds. 
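The velocity estimate described here (central difference with fast-phase samples removed) can be sketched as follows. The boolean-mask convention is our illustrative assumption; the paper does not specify how fast-phase samples are flagged.

```python
def mean_slow_phase_velocity(positions_deg, dt, fast_phase_mask):
    """Mean eye velocity over the slow phase, estimated with a central
    difference and excluding any sample whose three-point stencil
    touches a fast-phase sample."""
    velocities = []
    for i in range(1, len(positions_deg) - 1):
        if fast_phase_mask[i - 1] or fast_phase_mask[i] or fast_phase_mask[i + 1]:
            continue  # skip estimates contaminated by the fast phase
        velocities.append((positions_deg[i + 1] - positions_deg[i - 1]) / (2 * dt))
    return sum(velocities) / len(velocities)
```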
Figure 9
 
OKN behavior as a function of stimulus speed. The top row shows (a) the mean slow phase velocity as a function of the stimulus speed for both TN- (red) and NT- (blue) directed stimuli, and (b) the NBI computed for stimuli with different speeds, after normal development. The bottom row (c, d) shows the same measures after strabismic development.
Figure 10 shows the population response of the motor neurons after development with strabismus. These population responses were generated by holding the eyes fixed and presenting monocular stimuli with different retinal slips to the right eye. Since the bases are normalized to have unit norm and the connections from sensory neurons to each motor neuron are also normalized (see the Appendix for a detailed description), the motor response has arbitrary units. The scale of the response can be changed by adjusting the normalization factor. The reasons for the mOKN asymmetry are evident by comparing the magnitude of the population responses from the left NOT in response to negative (TN) slips (Figure 10a, top left) with the magnitude of the population responses from the right NOT in response to positive (NT) slips (Figure 10d, bottom right). For the right eye, the left NOT drives TN rotation and the right NOT drives NT rotation. The better stabilization for stimuli moving in the TN direction is reflected by the greater magnitude of the responses in the left NOT (Figure 10a) compared with the right NOT (Figure 10d). The magnitude of the responses in Figure 10b and c show a similar asymmetry. 
Figure 10
 
The population responses of the motor neurons driving version eye movements after development with strabismus in response to monocular visual stimuli presented to the right eye. Each curve shows the population response to a visual stimulus with fixed slip. Left and right columns show responses from neurons in the left and right NOT. Top and bottom rows show responses to negative and positive slips. The legend of (a) is the same as (b). The legend of (c) is the same as (d).
Strabismus affects the statistics of the stereo disparity between the left- and right-eye inputs. For the development with esotropic strabismus studied earlier, the distribution of stereo disparity was biased toward large far disparities. Input in the left- and right-eye patches was largely uncorrelated, resulting in most basis vectors being monocular. In this case, the bias against ipsilateral connections causes the asymmetry in cortically driven mOKN. Since the input-disparity statistics determine the degree of monocularity in the basis vectors, we hypothesize that the degree of eso- or exotropia can affect the degree of mOKN asymmetry by altering the proportion of inputs with near versus far disparities. Our experiments are consistent with this hypothesis. Figure 11a shows the percentage of inputs with far disparities as a function of the fixation depth corresponding to the VG angle used to simulate strabismus. The percentage decreases as the fixation depth increases, with an equal proportion of near and far disparities when the fixation depth is 1 m, close to the middle of the range of target object depths. Figure 11b shows that the degree of mOKN asymmetry, as measured by the NBI, reaches a minimum when the input disparities are equally distributed between near and far disparities. 
Figure 11
 
Behavior measure of mOKN over near–far disparity statistics. (a) Percentage of input with far disparities as a function of fixation depth (VG angles). (b) NBI as a function of fixation depth (VG angles). The bottom x-axis represents the fixation depth, and the top x-axis represents the corresponding VG angle.
Effect of removing the contralateral bias
In the results already discussed, we have assumed, following Tychsen (1999) and others, that there is a bias toward connections to the NOT from cortical neurons dominated by contralateral retinal input. Our model enables us to examine whether this assumption is necessary to account for the mOKN asymmetry observed after development with strabismus. 
An alternative hypothesis for mOKN asymmetry which does not rely upon this bias is that the strabismus gives rise to uncorrelated motion in the two eyes because they image independently moving surfaces. Even if all cortical neurons (whether dominated by ipsilateral, contralateral, or balanced retinal input) are equally able to connect to the NOT, the connections required to stabilize motion in the NT direction will only develop under Hebbian learning if the ipsilateral eye observes NT motion at the same time as the contralateral eye is observing TN motion, since the subcortical OKN control stabilizes TN motion only in the contralateral eye. Note that TN motion in one eye corresponds to NT motion in the other eye. 
We tested this hypothesis by removing the regularizer enforcing the bias against connections from cortical neurons dominated by ipsilateral input. Then mOKN is symmetrical after normal development but asymmetrical after strabismic development. The ASI computed at the end of learning is 0.04 under the normal condition and 0.44 under the strabismic condition. The NBI exhibits a similar trend, becoming 0.02 for the normal condition and 0.29 for the strabismic condition. We simulated strabismus as in previous experiments by fixing the VG angle to 20°. The target object size was chosen so that it covered 40° of visual angle at 1 m. This target object size is small enough that the two eyes see different surfaces (one eye sees the target, the other the background) a large proportion (∼65%) of the time. We examine the effect of changing the target object size later. 
This model predicts that the degree of mOKN asymmetry decreases with the percentage of time the two eyes observe correlated motion. This percentage will be small if the eyes image different objects. It will be large under normal development, since VG operates correctly to ensure that the two eyes fixate onto the same part of the same surface. However, the percentage can also be large with strabismus, if the planar surface is large enough that the two eyes observe the same surface. Even if the eyes image different parts of that surface, as long as the surface is undergoing frontoparallel translation with no rotation the motion in the two eyes will be similar. 
We tested this hypothesis by simulating strabismic development with target objects of various sizes ranging from 20° to 80° of visual angle when placed at a distance of 1 m. Figure 12a plots the percentage of time the retinal slip in the two eyes was similar, as a function of object size. We defined the retinal slip to be similar if the difference was lower than 1°/s. For smaller objects, most of the time only one eye observed the moving stimulus, while the other observed the static background. As the size of the target object increased, it was seen by both eyes for a larger and larger percentage of the time. Due to the strabismus, they did not necessarily observe the same part of the target, but the retinal slip in the two eyes was still similar. Figure 12b shows that as the object size becomes larger, the degree of asymmetry as measured by the ASI decreases. If the two eyes observe the same slip for at least 80% of the time, the mOKN is symmetrical. 
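The similarity measure follows directly from its definition (retinal slips in the two eyes differing by less than 1°/s). A minimal sketch, with illustrative names:

```python
SIMILARITY_THRESHOLD = 1.0  # deg/s, as in the analysis above

def fraction_similar_slip(left_slips, right_slips):
    """Fraction of samples in which the retinal slips in the two eyes
    differ by less than the similarity threshold."""
    similar = sum(1 for l, r in zip(left_slips, right_slips)
                  if abs(l - r) < SIMILARITY_THRESHOLD)
    return similar / len(left_slips)
```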
Figure 12
 
The results of strabismic development without the contralateral bias for different object sizes measured in degrees of visual angle subtended at a distance of 1 m. (a) The percentage of time that the two eyes observe similar motion as the object size (defined as the visual angle subtended at a distance of 1 m) changes. (b) The ASI as a function of object size. The shaded region indicates symmetric mOKN.
Discussion
We have described a neurally plausible framework for modeling the development of the cortical pathway driving the optokinetic reflex. This framework models the joint emergence of perception and action and accounts for the importance of the development of normal VG control and binocular vision in achieving symmetric mOKN. The framework is based on the active-efficient-coding model (Teulière et al., 2015; Zhang et al., 2014; Zhao et al., 2012), which posits that neural coding and behavior both develop to ensure that the neural population can represent the input stimulus with high fidelity while requiring only a few active neurons. At the start of the simulation, OKN control is established via the subcortical pathway, which guides the learning of the cortical pathway. However, VG control is random at the start, since it depends upon the connections from retina to sensory cortical neurons and from the sensory cortical neurons to VG motor neurons, both of which are initialized randomly. Thus, in our model, binocularity, disparity selectivity, and VG and version control develop simultaneously. Their development is mutually interdependent. 
Unlike past models, which were based on scalar models of the overall activity in different neural areas, our framework models the detailed connectivity from the retinal input to the cortex as well as from the cortex to the motor neurons driving version and VG behavior. To our knowledge, this is the first model that can make quantitative predictions about behavior using the same stimuli used to characterize OKN in the experimental literature. In addition, it is the first model to explicitly model development as well as the interaction between VG and version control in the development of the optokinetic reflex. 
This model agrees both qualitatively and quantitatively with a wide range of experimental findings from the literature on both binocular vision and the optokinetic reflex. Because it includes behavior, we can simulate the same perturbations as performed in past experiments, such as artificially induced strabismus. 
For example, the model matches Hubel and Wiesel's (1965) measurements of OD histograms of kittens with normal vision and kittens with artificially introduced squint. Those researchers noted that 79% of the recorded cells were monocular (belonging to Bins 1 and 7) for kittens with squint, compared to 20% in kittens without. For our model, around 84% of cells are monocular after development with strabismus, versus 2% for normal development. There is a much lower percentage of monocular cells observed in our model under normal development, because the statistics of input disparity are much more clustered around zero disparity than we would expect for animals with typical behavior. For convenience, the model simulations have the agent continually looking at and verging onto large planar objects. However, a more realistic environment would have smaller objects with more complex depth contours, leading to a wider diversity of disparities. We expect that this would lead to more units with unbalanced ocular input after normal development. 
Reed et al. (1991) compared four groups of subjects: those with early onset of strabismus (before 24 months of age), those with late onset of strabismus (after 24 months of age), those with monocular enucleation, and those in a control group. Only subjects with early onset of strabismus showed statistically significant horizontal mOKN asymmetry, whereas other groups had symmetrical mOKN response. Reed et al. used an asymmetry index (their figure 1) to measure the asymmetry of the mOKN. For the control group, the ASI was distributed within the range −0.3 to 0.25. For subjects with early onset (<24 months) of strabismus, the ASI falls within the range 0.25 to 1. In our simulations, we observed similar ranges of the ASI after normal and strabismic development for the models both with (ASI of 0.05 under normal and 0.50 under strabismic conditions) and without (ASI of 0.04 under normal and 0.44 under strabismic conditions) the contralateral bias. 
Tychsen (2007) used the nasal bias index to measure mOKN asymmetry. In his experiments, he measured NBI values around 0.35 for subjects with strabismus at stimulus speeds of 30°/s. We have chosen our parameters in the contralateral-bias model to match this value (G = 0.012 in Figure 8). Our model predicts that even with this contralateral bias, mOKN can be symmetric after strabismus if the proportion of near and far disparities observed during development is balanced. For the model without contralateral bias, the NBI after strabismic development is around 0.3 when the two eyes usually observe uncorrelated motion. 
A similar asymmetry after strabismic development has also been observed in the smooth pursuit system; our results are consistent with experimental findings there as well. Kiorpes et al. (1996) found strong asymmetry favoring TN stimuli in monkeys with artificially induced strabismus. Based on single-unit recordings from MT neurons in monkeys with strabismus, they found most MT cells to be monocular. We find strabismus to have a similar effect on the cortical sensory neurons that develop in our model (Figure 5b), although the cortical neurons in our model are more similar to V1, rather than MT, neurons. Kiorpes et al. (1996) found the direction preferences in MT to be uniformly distributed, with no bias favoring TN motion. We also observe a nearly uniform distribution of preferred motion directions in our model for both normal and strabismic development (Figure 6). Thus, our model lends further support to the suggestion from Kiorpes et al. that asymmetries in visual tracking due to strabismus are caused not by asymmetries in the lower level cortical representation of visual motion but rather in the mapping of this sensory representation to eye movements. 
Kiorpes et al. also proposed a model enabling them to estimate the strength of connections from the two hemiretinae of the two eyes to left and right MT, and from MT to the higher parts of the cortical pursuit system (CPS). Based on pursuit measurements from monkeys with strabismus, they estimated an enhancement in weights connecting MT to the contralateral cortical pursuit system and a reduction in weights connecting MT to the ipsilateral cortical pursuit systems. Consistent with this, in the contralateral-bias model with strabismus, we found the l2 norm of the weights from each cortical area to the contralateral NOT to be 1.3 times greater than the l2 norm of the ipsilateral weights. Note that we cannot make a quantitative comparison between the weights estimated by Kiorpes et al. and the weights in our model: Their weights are scalars representing the strength of connections between entire areas, whereas in our model the weights are matrices representing the detailed connectivity between individual units in sensory and motor areas. In our model, the final motor command depends not only upon the size but also upon the pattern of these weights. In the simplified scalar model of Kiorpes et al., the two effects are confounded. 
Waddington and Harris (2012) showed that eye velocity during the slow phase increases as stimulus speed increases, but saturates at high stimulus speeds. For normal human subjects, when the stimulus speed is 10°/s, the slow phase eye velocity is around 8°/s. As the stimulus speed increases to 40°/s, the eye velocity only reaches around 20°/s. Our model exhibits similar quantitative behavior (Figure 9a). At the stimulus speed of 10°/s, the eye velocity is 8°/s. When stimulus speed increases to 40°/s, the eye velocity reaches 23°/s. Waddington and Harris tested only healthy subjects. Our model predicts that subjects with strabismus will exhibit quantitatively similar behavior for stimuli moving in the TN direction, but with lower eye velocities when the stimulus moves in the NT direction. Figure 9c shows that for stimuli moving in the TN direction, the eye velocity behaves similarly to that after normal development, increasing from 11°/s to 24°/s when the stimulus speed increases from 10°/s to 40°/s. For stimuli moving in the NT direction, the eye velocity increases from around 0°/s to only 13°/s. 
Our model also enables us to make other testable predictions about the effect of the input statistics on the degree of mOKN asymmetry. The contralateral-bias model predicts that the percentage of near and far disparities seen will affect the degree of mOKN asymmetry, with less asymmetry when the percentages are balanced. The model without contralateral bias predicts that the degree of mOKN asymmetry will decrease as the percentage of time that the retinal slip in the two eyes is similar increases. 
Moving forward, the modeling framework described here can be extended in a number of directions. In addition to the effect of strabismus studied here, we could study the effect of amblyopia. For example, Westall and Schor (1985) investigated the mOKN response in people with amblyopia to stimuli at different retinal locations: central, peripheral, nasal, and temporal; this could also be modeled in our framework. Another interesting question is whether using reinforcement, rather than Hebbian, learning for the OKN response results in any differences in performance. Our model has focused on behavior during the slow phase of nystagmus. We used a simplified hardwired recentering reflex to account for the fast phase of nystagmus, but it would be interesting to study the development of this phase as well. Instead of using a full-field stimulus, another possible extension is to apply the framework to a small-target tracking task, like smooth pursuit eye movements. 
Conclusion
We have proposed a neurally plausible framework for modeling the development of the cortical pathway driving the optokinetic reflex. The framework models the joint development of both perception and action. Unlike past models, which were based on scalar models of the overall activity in different neural areas, our framework models the detailed connectivity from the retinal input to the cortex and from the cortex to the motor neurons driving version and vergence behavior. The proposed model agrees both qualitatively and quantitatively with a number of findings from the literature on both binocular vision and the optokinetic reflex. Our model also makes quantitative predictions about OKN behavior using the same methods used to characterize OKN in the experimental literature. 
Acknowledgments
This work was supported in part by the Hong Kong Research Grants Council under Grant 618512 and the German Federal Ministry of Education and Research (BMBF) under Grants 01GQ1414 and 01EW1603A. JT was supported by the Quandt Foundation. 
Commercial relationships: none. 
Corresponding author: Chong Zhang. 
References
Aslin R. N. (1977). Development of binocular fixation in human infants. Journal of Experimental Child Psychology, 23 (1), 133–150.
Atkinson J. (1979). Development of optokinetic nystagmus in the human infant and monkey infant: An analogue to development in kittens. In Freeman R. D. (Ed.), Developmental neurobiology of vision (pp. 277–287). New York: Plenum Press.
Barlow H. B. (1961). Possible principles underlying the transformation of sensory messages. In Rosenblith W. A. (Ed.), Sensory communication (pp. 217–234). Cambridge, MA: MIT Press.
Bhatnagar S, Sutton R. S, Ghavamzadeh M, Lee M. (2009). Natural actor–critic algorithms. Automatica, 45 (11), 2471–2482.
Braddick O, Atkinson J. (1981a). Acuity, contrast sensitivity and accommodation in infancy. In Aslin R. N, Alberts J. R, Petersen M. R. (Eds.), The development of perception (pp. 245–278). New York: Academic Press.
Braddick O, Atkinson J. (1981b). Development of optokinetic nystagmus in infants: An indicator of cortical binocularity? In Fisher D. F, Monty R. A, Senders J. W. (Eds.), Eye movements: Cognition and visual perception (pp. 53–64). Hillsdale, NJ: Lawrence Erlbaum Associates.
Braddick O, Atkinson J, Wattam-Bell J. (2003). Normal and anomalous development of visual motion processing: Motion coherence and “dorsal-stream vulnerability.” Neuropsychologia, 41 (13), 1769–1784.
Chino Y. M, Smith E. L, Hatta S, Cheng H. (1997). Postnatal development of binocular disparity sensitivity in neurons of the primate visual cortex. The Journal of Neuroscience, 17 (1), 296–307.
Cohen B, Matsuo V, Raphan T. (1977). Quantitative analysis of the velocity characteristics of optokinetic nystagmus and optokinetic after-nystagmus. The Journal of Physiology, 270 (2), 321–344.
Cohen B, Reisine H, Yokota J. I, Raphan T. (1992). The nucleus of the optic tract: Its function in gaze stabilization and control of visual-vestibular interaction. Annals of the New York Academy of Sciences, 656 (1), 277–296.
Crone R. (1977). Amblyopia: The pathology of motor disorders in amblyopic eyes. Documenta Ophthalmologica Proceedings Series, 45, 9–18.
del Viva M. M, Morrone M. C, Fiorentini A. (2001). VEP selective responses to flow motion in adults and infants. Perception, 30, 36.
Dobkins K. R, Anderson C. M, Lia B. (1999). Infant temporal contrast sensitivity functions (tCSFs) mature earlier for luminance than for chromatic stimuli: Evidence for precocious magnocellular development? Vision Research, 39 (19), 3223–3239.
Fukushima K, Yamanobe T, Shinmei Y, Fukushima J, Kurkin S, Peterson B. W. (2002). Coding of smooth eye movements in three-dimensional space by frontal cortex. Nature, 419 (6903), 157–162.
Gattass R, Gross C, Sandell J. (1981). Visual topography of V2 in the macaque. Journal of Comparative Neurology, 201 (4), 519–539.
Geisler W. S, Perry J. S. (2011). Statistics for optimal point prediction in natural images. Journal of Vision, 11 (12): 14, 1–17, doi:10.1167/11.12.14.
Harrison J. J, Freeman T. C, Sumner P. (2014). Saccade-like behavior in the fast-phases of optokinetic nystagmus: An illustration of the emergence of volitional actions from automatic reflexes. Journal of Experimental Psychology: General, 143 (5), 1923–1938, doi:10.1037/a0037021.
Harrison J. J, Freeman T. C, Sumner P. (2015). Saccadic compensation for reflexive optokinetic nystagmus just as good as compensation for volitional pursuit. Journal of Vision, 15 (1): 24, 1–13, doi:10.1167/15.1.24.
Hoffmann K.-P. (1981). Neuronal responses related to optokinetic nystagmus in the cat's nucleus of the optic tract. In Fuchs A, Becker W. (Eds.), Progress in oculomotor research (pp. 443–454). New York: Elsevier.
Hoffmann K.-P. (1982). Cortical versus subcortical contributions to the optokinetic reflex in the cat. In Lennerstrand G. (Ed.), Functional basis of ocular motility disorders (pp. 303–310). Oxford, UK: Pergamon Press.
Hoffmann K.-P. (1983). Control of the optokinetic reflex by the nucleus of the optic tract in the cat. In Hein A, Jeannerod M. (Eds.), Spatially oriented behavior (pp. 135–153). New York: Springer.
Hoffmann K.-P. (1986). Visual inputs relevant for the optokinetic nystagmus in mammals. Progress in Brain Research, 64, 75–84.
Hoffmann K.-P. (1989). Control of the optokinetic reflex by the nucleus of the optic tract in primates. Progress in Brain Research, 80, 173–182.
Horton J. C, Hocking D. R. (1996). An adult-like pattern of ocular dominance columns in striate cortex of newborn monkeys prior to visual experience. The Journal of Neuroscience, 16 (5), 1791–1807.
Hoyer P. O, Hyvärinen A. (2000). Independent component analysis applied to feature extraction from colour and stereo images. Network: Computation in Neural Systems, 11 (3), 191–210.
Hubel D. H, Wiesel T. N. (1965). Binocular interaction in striate cortex of kittens reared with artificial squint. Journal of Neurophysiology, 28 (6), 1041–1059.
Kiorpes L, Walton P. J, O'Keefe L. P, Movshon J. A, Lisberger S. G. (1996). Effects of early-onset artificial strabismus on pursuit eye movements and on neuronal responses in area MT of macaque monkeys. The Journal of Neuroscience, 16 (20), 6537–6553.
Knapp C. M, Proudlock F. A, Gottlob I. (2013). OKN asymmetry in human subjects: A literature review. Strabismus, 21 (1), 37–49.
Lonini L, Forestier S, Teulière C, Zhao Y, Shi B. E, Triesch J. (2013). Robust active binocular vision through intrinsically motivated learning. Frontiers in Neurorobotics, 7 (20), 1–10.
Lynch J. C, McLaren J. W. (1983). Optokinetic nystagmus deficits following parieto-occipital cortex lesions in monkeys. Experimental Brain Research, 49 (1), 125–130.
Mallat S. G, Zhang Z. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41 (12), 3397–3415.
Masseck O. A, Hoffmann K.-P. (2009). Comparative neurobiology of the optokinetic reflex. Annals of the New York Academy of Sciences, 1164 (1), 430–439.
Mitkin A, Orestova E. (1988). Development of binocular vision in early ontogenesis. Psychologische Beitrage, 30, 65–74.
Naegele J. R, Held R. (1982). The postnatal development of monocular optokinetic nystagmus in infants. Vision Research, 22 (3), 341–346.
Olshausen B. A, Field D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37 (23), 3311–3325.
Raphan T, Matsuo V, Cohen B. (1979). Velocity storage in the vestibulo-ocular reflex arc (VOR). Experimental Brain Research, 35 (2), 229–248.
Rasengane T. A, Allen D, Manny R. E. (1997). Development of temporal contrast sensitivity in human infants. Vision Research, 37 (13), 1747–1754.
Reed M. J, Steinbach M. J, Anstis S. M, Gallie B, Smith D, Kraft S. (1991). The development of optokinetic nystagmus in strabismic and monocularly enucleated subjects. Behavioural Brain Research, 46 (1), 31–42.
Riddell P. M, Hainline L, Abramov I. (1994). Calibration of the Hirschberg test in human infants. Investigative Ophthalmology & Visual Science, 35 (2), 538–543.
Shouval H, Intrator N, Law C. C, Cooper L. N. (1996). Effect of binocular cortical misalignment on ocular dominance and orientation selectivity. Neural Computation, 8 (5), 1021–1040.
Teulière C, Forestier S, Lonini L, Zhang C, Zhao Y, Shi B. E, Triesch J. (2015). Self-calibrating smooth pursuit through active efficient coding. Robotics and Autonomous Systems, 71, 3–12.
Tikhanoff V, Cangelosi A, Fitzpatrick P, Metta G, Natale L, Nori F. (2008, Aug). An open-source simulator for cognitive robotics research: The prototype of the iCub humanoid robot simulator. Paper presented at the IEEE Workshop on Performance Metrics for Intelligent Systems, Washington, DC.
Tychsen L. (1993). Motion sensitivity and the origins of infantile strabismus. In Simons K. (Ed.), Early visual development: Basic and clinical research (pp. 364–390). New York: Oxford University Press.
Tychsen L. (1999). Infantile esotropia: Current neurophysiologic concepts. In Rosenbaum A. L, Santiago A. P. (Eds.), Clinical strabismus management (pp. 117–138). Philadelphia: Saunders.
Tychsen L. (2007). Causing and curing infantile esotropia in primates: The role of decorrelated binocular input. Transactions of the American Ophthalmological Society, 105, 564–593.
Vikram T. N, Teulière C, Zhang C, Shi B. E, Triesch J. (2014, Oct.). Autonomous learning of smooth pursuit and vergence through active efficient coding. Paper presented at the IEEE International Conference on Development and Learning and Epigenetic Robotics, Palazzo Ducale, Genoa, Italy.
Waddington J, Harris C. M. (2012). Human optokinetic nystagmus: A stochastic analysis. Journal of Vision, 12 (12): 5, 1–17, doi:10.1167/12.12.5.
Waddington J, Harris C. M. (2013). The distribution of quick phase interval durations in human optokinetic nystagmus. Experimental Brain Research, 224 (2), 179–187.
Westall C. A, Schor C. M. (1985). Asymmetries of optokinetic nystagmus in amblyopia: The effect of selected retinal stimulation. Vision Research, 25 (10), 1431–1438.
Yuodelis C, Hendrickson A. (1986). A qualitative and quantitative analysis of the human fovea during development. Vision Research, 26 (6), 847–855.
Zhang C, Zhao Y, Triesch J, Shi B. E. (2014, May). Intrinsically motivated learning of visual motion perception and smooth pursuit. Paper presented at the IEEE International Conference on Robotics and Automation, Hong Kong.
Zhao Y, Rothkopf C. A, Triesch J, Shi B. E. (2012, Nov.). A unified model of the joint development of disparity selectivity and vergence control. Paper presented at the IEEE International Conference on Development and Learning and Epigenetic Robotics, San Diego, CA.
Appendix
This appendix describes the mathematical details of the model and the simulation environment we used to perform our experiments. 
Retinal processing
For the purposes of simulation, retinal images acquired by the left and right eye are sampled in both space and time. The temporal sampling rate is 20 frames/s. 
Spatially, we model information flow from both the fovea and the periphery. The foveal region is assumed to be a square window covering 7° of visual angle and centered on the optical axis. The peripheral region is assumed to be a square window covering 25° of visual angle and centered on the optical axis. The fovea is sampled at 7.8 pixels/° and the periphery at 2.2 pixels/°, resulting in 55 × 55–pixel images in both cases. These images are further divided into a 10 × 10 array of patches of 10 × 10 pixels. The overlap between neighboring patches is 5 pixels horizontally and vertically. The 100 patches are divided into two sets of 50, corresponding to the left and right hemiretinae. Information from the right (left) hemiretina is routed to the right (left) subcortical and cortical hemispheres. The sizes of the patches (1.3° in the fovea and 4.5° in the periphery) are comparable to receptive-field sizes measured in V1 (Gattass, Gross, & Sandell, 1981); however, the spatial resolution is much lower than that of the cones in the retina (Yuodelis & Hendrickson, 1986). This suggests that the spatial structure of the learned receptive fields in our model will be coarser than in biology but should still capture their essential characteristics. Simulating the connectivity at the resolution of the biological retina would have been computationally prohibitive. 
Corresponding patches from the left and right eye at sample indices t and t − 1 are concatenated into 400-dimensional input vectors, denoted by x(h, k, n, t), where h ∈ {L, R} indexes the hemisphere, k ∈ {F, P} indexes the region (fovea or periphery), n ∈ {1, …, 50} indexes the patch, and t indexes time. The input vectors are normalized to have zero mean and unit variance. 
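As a concrete illustration, the patch decomposition and normalization described above can be sketched in Python as follows. The array sizes follow the text; the function names and the ordering of the four patch blocks within each input vector are our own implementation choices.

```python
import numpy as np

def extract_patches(img, patch=10, stride=5):
    """Divide a 55x55 image into the 10x10 grid of 10x10-pixel patches
    overlapping by 5 pixels, flattening each to a 100-dim vector."""
    steps = range(0, img.shape[0] - patch + 1, stride)   # 0, 5, ..., 45
    return np.array([img[r:r + patch, c:c + patch].ravel()
                     for r in steps for c in steps])     # shape (100, 100)

def input_vectors(left_t, right_t, left_tm1, right_tm1):
    """Concatenate corresponding patches from both eyes at t and t-1 into
    400-dim vectors, then normalize each to zero mean and unit variance."""
    x = np.concatenate([extract_patches(im) for im in
                        (left_t, right_t, left_tm1, right_tm1)], axis=1)
    x -= x.mean(axis=1, keepdims=True)
    return x / (x.std(axis=1, keepdims=True) + 1e-8)
```

In a full simulation, the 100 rows of the resulting array would then be split into the two sets of 50 routed to the left and right hemispheres.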
Subcortical sensory processing
Since our model is primarily concerned with the development of the cortical pathway driving OKN, and the subcortical pathway eventually loses its influence, we do not model the details of subcortical sensory processing. We assume that the population of sensory neurons in the NOT responds to the horizontal component of the retinal slip at the optical axis, which can be computed in our simulations given the object velocity, eye velocity, and imaging geometry. The neurons have Gaussian tuning curves. Their response is given by

$$y_{S,i}(h,t) = A \exp\!\left(-\frac{\left(\varepsilon(t) - \hat{\varepsilon}_i(h)\right)^2}{2\sigma^2}\right),$$

where h ∈ {L, R}, t indexes time, i ∈ {1, …, 11} indexes the neuron, ε(t) is the retinal slip at time t, ε̂_i(h) is the preferred slip, and the parameters A and σ determine the height and width of the tuning curve. Since sensory neurons in the left (right) NOT are tuned to leftward (rightward) motion, we set the preferred slips of the sensory neurons to be equally spaced from −40°/s to 0°/s for the left NOT and from 0°/s to 40°/s for the right NOT. We set A = 0.1 and σ = 4°/s. We concatenate the responses from the sensory neurons into a single vector denoted by y_S(h, t) ∈ ℝ^11. 
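The Gaussian population code above is simple enough to state directly in code. This sketch uses the parameter values from the text (A = 0.1, σ = 4°/s, preferred slips on an 11-point grid per hemisphere); the example slip of −12°/s is arbitrary.

```python
import numpy as np

A, sigma = 0.1, 4.0                            # tuning-curve height and width
eps_hat = {'L': np.linspace(-40.0, 0.0, 11),   # preferred slips, left NOT
           'R': np.linspace(0.0, 40.0, 11)}    # preferred slips, right NOT

def not_response(eps, h):
    """Gaussian tuning: responses of the 11 NOT sensory neurons in
    hemisphere h to a retinal slip of eps deg/s."""
    return A * np.exp(-(eps - eps_hat[h]) ** 2 / (2.0 * sigma ** 2))

y_S = {h: not_response(-12.0, h) for h in 'LR'}  # example slip of -12 deg/s
```

For the example slip, the left-NOT population peaks at the neuron whose preferred slip is exactly −12°/s, while the right-NOT responses are all near zero.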
Cortical sensory processing
We use a sparse coding algorithm to model the outputs of the cortical neurons. Our model treats the input from each patch, from each scale and each hemisphere, and at each time sample identically and independently. Each input vector x(h, k, n, t) is approximated as a sparse weighted sum of unit-norm basis vectors taken from an overcomplete dictionary {ϕ_i(h, k, t)}, where i ∈ {1, …, 600}. The two hemispheres (h) and foveal or peripheral regions (k) have different dictionaries, which evolve over time. The approximation is given by

$$x(h,k,n,t) \approx \sum_{i=1}^{600} \alpha_i(h,k,n,t)\,\phi_i(h,k,t).$$
We use the matching pursuit algorithm proposed by Mallat and Zhang (1993) to choose the coefficients α_i(h, k, n, t) such that the normalized reconstruction error

$$e(h,k,n,t) = \frac{\left\| x(h,k,n,t) - \sum_{i=1}^{600} \alpha_i(h,k,n,t)\,\phi_i(h,k,t) \right\|^2}{\left\| x(h,k,n,t) \right\|^2}$$

is small and at most 10 of the coefficients are nonzero. 
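The greedy selection loop of matching pursuit can be sketched as follows. Here the dictionary is a matrix `Phi` whose unit-norm columns are the basis vectors, and the 10-coefficient budget follows the text; variable names are our own.

```python
import numpy as np

def matching_pursuit(x, Phi, max_active=10):
    """Greedy matching pursuit (Mallat & Zhang, 1993): repeatedly project
    the residual onto the unit-norm column of Phi with the largest
    absolute inner product, until max_active selections have been made."""
    alpha = np.zeros(Phi.shape[1])
    residual = x.astype(float).copy()
    for _ in range(max_active):
        proj = Phi.T @ residual
        i = int(np.argmax(np.abs(proj)))
        alpha[i] += proj[i]            # the same atom may be selected again
        residual -= proj[i] * Phi[:, i]
    return alpha, residual
```

Because an atom can be reselected, at most `max_active` coefficients end up nonzero, matching the 10-coefficient sparsity constraint.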
The dictionaries of basis vectors {ϕ_i(h, k, t)} evolve over time so that they best represent the statistics of the input patches. At each time step, we update the basis vectors using an online two-step procedure similar to that used by Olshausen and Field (1997). In the first step, we find the coefficients α_i(h, k, n, t) using matching pursuit. In the second step, we assume that the coefficients are constant and update the basis vectors using gradient descent to minimize the total normalized squared reconstruction error over all patches:

$$E(h,k,t) = \sum_{n=1}^{50} e(h,k,n,t).$$
After each update, the bases are renormalized so that they are unit norm. 
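A minimal sketch of this second step is shown below. For clarity it takes one gradient step on the plain summed squared error (the inputs are already normalized to unit variance, so the per-patch normalization is omitted here) and then renormalizes each basis vector; the learning rate is an illustrative value.

```python
import numpy as np

def update_dictionary(Phi, X, Alpha, lr=0.01):
    """One online gradient step on the summed squared reconstruction error,
    holding the coefficients fixed, followed by renormalization of each
    basis vector (column of Phi) to unit norm.
    Phi: (dim, n_basis); X: (dim, n_patches); Alpha: (n_basis, n_patches)."""
    residual = X - Phi @ Alpha                 # reconstruction residuals
    Phi = Phi + lr * residual @ Alpha.T        # step along negative gradient
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
```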
Each of the 600 basis vectors is roughly analogous to the receptive field of a binocular and motion-tuned simple cell in the primary visual cortex. The coefficients α_i(h, k, n, t) are analogous to the activation of the simple cells responding to the visual information at time t from patch n from scale k of hemiretina h. We model the output of complex cells by pooling the squared coefficients for each basis vector over the set of all patches:

$$y_{C,i}(h,k,t) = \frac{1}{N}\sum_{n=1}^{N} \alpha_i(h,k,n,t)^2,$$

where N = 50. We concatenate these model outputs into a feature vector y_C(h, k, t) ∈ ℝ^600. 
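The pooling step reduces to a single line; the sketch below applies it to random stand-ins for the matching-pursuit coefficients of one region.

```python
import numpy as np

def complex_cell_response(Alpha, N=50):
    """Model complex-cell outputs: average the squared simple-cell
    coefficients for each basis vector over the N patches of one region.
    Alpha has shape (n_basis, N)."""
    return (Alpha ** 2).sum(axis=1) / N

# random stand-ins for the matching-pursuit coefficients of one region
Alpha = np.random.default_rng(0).standard_normal((600, 50))
y_C = complex_cell_response(Alpha)   # 600-dim feature vector for this region
```

Squaring makes the responses nonnegative and insensitive to the sign of the simple-cell activation, as in standard energy models of complex cells.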
OKN control
OKN control is mediated by model motor neurons in the left and right NOT areas. Motor neurons in the left (right) NOT control the leftward (rightward) conjugate eye rotations in both eyes. Each NOT contains 11 motor neurons, corresponding to preferred rotations equally spaced from −40°/s to 0°/s in the left NOT and from 0°/s to 40°/s in the right NOT. We choose 11 motor neurons here for simulation convenience, to match the number of subcortical sensory neurons and the parameter settings used in previous work (Zhao et al., 2012). 
We assume a linear model of the motor-neuron responses z_OKN(h, t) ∈ ℝ^11:

$$z_{OKN}(h,t) = W_{OKN,S}(h)\, y_S(h,t) + \sum_{\eta\in\{L,R\}} \sum_{k\in\{F,P\}} W_{OKN,C}(h,\eta,k)\, y_C(\eta,k,t),$$

where W_OKN,S(h) ∈ ℝ^{11×11} and W_OKN,C(h, η, k) ∈ ℝ^{11×600} are weight matrices determining the connections from the subcortical and cortical sensory neurons to the motor neurons. The first index h of W_OKN,C represents the hemisphere where the motor neurons are located, and the second index η represents the hemisphere where the cortical sensory neurons are located. The subcortical pathway has only ipsilateral connections, but the cortical pathway has both ipsilateral and contralateral connections. 
Motor commands are generated from the motor-neuron responses using vector averaging:

$$u_{OKN}(t) = \frac{\sum_{h\in\{L,R\}} \hat{v}(h)^{\top}\, z_{OKN}(h,t)}{\sum_{h\in\{L,R\}} \left\| z_{OKN}(h,t) \right\|_1},$$

where v̂(h) ∈ ℝ^11 is the vector of preferred rotations, which we have for convenience chosen to be the same as the vector of preferred retinal slips in the subcortical sensory neurons, and the norm on the bottom is the l1 norm. 
The motor command is used to update the version angle of the two eyes according to

$$\theta_{OKN}(t) = \theta_{OKN}(t-1) + \Delta\theta_{OKN}(t),$$

where Δθ_OKN(t) = βΔθ_OKN(t − 1) + (1 − β)u_OKN(t), and β controls the time constant of an exponential smoothing filter applied to the motor commands. 
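The vector-averaging readout and the smoothed version-angle increment can be sketched as follows. The preferred-rotation grids follow the text; the smoothing coefficient β = 0.8 is an illustrative value, not one stated in the text.

```python
import numpy as np

v_hat = {'L': np.linspace(-40.0, 0.0, 11),   # preferred rotations, left NOT
         'R': np.linspace(0.0, 40.0, 11)}    # preferred rotations, right NOT

def okn_command(z):
    """Vector averaging over both NOTs: preferred rotations weighted by the
    motor-neuron responses, normalized by the combined l1 norm.
    z maps hemisphere ('L'/'R') to an 11-dim response vector."""
    num = sum(float(v_hat[h] @ z[h]) for h in 'LR')
    den = sum(float(np.abs(z[h]).sum()) for h in 'LR')
    return num / den

def smoothed_step(u, delta_prev, beta=0.8):
    """Exponentially smoothed version-angle increment; beta sets the
    filter's time constant (beta = 0.8 is illustrative)."""
    return beta * delta_prev + (1.0 - beta) * u
```

A lone response at the neuron preferring 20°/s yields a command of exactly 20°/s, which the smoothing filter then approaches gradually.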
The subcortical connections are hardwired and do not change over time during training. Since the preferred set of retinal slips in the sensory neurons and the preferred set of rotations in the motor neurons are identical, we choose

$$W_{OKN,S}(h) = \mu I,$$

where I is an identity matrix and the parameter μ = 10 controls the synaptic strength. Thus, retinal slips excite corresponding eye rotations. This ensures that the subcortical pathway functions from the start to stabilize the retinal input when viewing objects moving in the TN direction. During testing, the subcortical connections are removed by setting μ = 0. 
The weights of the cortical connections evolve over time according to a Hebbian learning rule:

$$W_{OKN,C}(h,\eta,k,t+1) = \left( W_{OKN,C}(h,\eta,k,t) + \kappa\, z_{OKN}(h,t)\, y_C(\eta,k,t)^{\top} \right) \Gamma(h,\eta,k,t),$$

where Γ(h, η, k, t) = diag(γ_1(h, η, k, t), …, γ_600(h, η, k, t)) is a diagonal matrix of weight-decay parameters and κ is a positive learning-rate parameter. After each update, each row of the weight matrix is normalized so that the weights entering each motor neuron sum to 1. Weights are initialized to small random values drawn from independent uniform distributions before normalization. 
The weight-decay parameters bias the motor neurons to make stronger connections with cortical neurons that receive significant input from the contralateral eye. The smaller the weight-decay parameter γ_i(h, η, k, t), the stronger the penalty on connections from model complex cell i in cortical hemisphere η to the motor neurons in the NOT in hemisphere h. The value of the weight-decay parameter depends upon OD_i(η, k, t), the OD index of complex cell i from hemisphere η and region k, according to the sigmoidal relationship

$$\gamma_i(L,\eta,k,t) = 1 - \frac{G}{1 + e^{-a\left( OD_i(\eta,k,t) - b \right)}}, \qquad
\gamma_i(R,\eta,k,t) = 1 - \frac{G}{1 + e^{\,a\left( OD_i(\eta,k,t) + b \right)}},$$

which is plotted in Figure 13. The parameter G controls the extent to which the connections are penalized. The parameter a controls the slope of the transition at the threshold +b (−b) for the left (right) NOT. We chose a = 10 and b = 0.5. 
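A sketch of the decayed Hebbian step is shown below. The sigmoid uses a = 10 and b = 0.5 from the text; G = 0.5 and the learning rate are illustrative values, and the sign convention (penalizing cells dominated by the NOT's ipsilateral eye) follows the description of the contralateral bias.

```python
import numpy as np

def decay(od, side, G=0.5, a=10.0, b=0.5):
    """Sigmoidal weight-decay parameter gamma as a function of OD index.
    Connections from cells dominated by the NOT's ipsilateral eye
    (OD beyond +b for the left NOT, below -b for the right) get a smaller
    gamma, i.e., a stronger penalty. G = 0.5 is an illustrative value."""
    s = 1.0 if side == 'L' else -1.0
    return 1.0 - G / (1.0 + np.exp(-a * (s * od - b)))

def hebbian_update(W, z, y_C, od, side, kappa=0.01):
    """Hebbian step with per-cell weight decay (right-multiplication by the
    diagonal matrix Gamma), then row normalization so the weights entering
    each motor neuron sum to 1. W: (11, 600); z: (11,); y_C, od: (600,)."""
    W = (W + kappa * np.outer(z, y_C)) * decay(od, side)
    return W / W.sum(axis=1, keepdims=True)
```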
Figure 13
 
The weight-decay parameter as a function of OD index for the weights connecting to the (a) right and (b) left NOT.
The OD index is computed according to the method described by Hoyer and Hyvärinen (2000). If we let ϕ_{i,L}(h, k, t) and ϕ_{i,R}(h, k, t) denote the parts of the basis vector ϕ_i(h, k, t) corresponding to the left- and right-eye inputs, then

$$OD_i(h,k,t) = \frac{\left\| \phi_{i,L}(h,k,t) \right\| - \left\| \phi_{i,R}(h,k,t) \right\|}{\left\| \phi_{i,L}(h,k,t) \right\| + \left\| \phi_{i,R}(h,k,t) \right\|}.$$
The OD index is equal to 1 (−1) if the complex cell receives only left- (right-) eye input and equal to 0 if it receives balanced input. The OD index changes over time as the basis vectors change due to the training of the sparse coder. When computing OD histograms, we typically use seven bins with boundaries [−0.85, −0.5, −0.15, 0.15, 0.5, 0.85], which is the same approach as that used by Shouval, Intrator, Law, and Cooper (1996). This facilitates comparison with prior experimental results (e.g., Hubel & Wiesel, 1965). 
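The OD computation and binning can be sketched as follows. The layout of the left- and right-eye halves within the 400-dimensional basis vector is an implementation choice (here: left at t, right at t, left at t − 1, right at t − 1, 100 entries each); the bin boundaries follow the text.

```python
import numpy as np

def od_index(phi):
    """OD index of one 400-dim basis vector (Hoyer & Hyvarinen, 2000):
    compares the norms of its left- and right-eye halves. Block layout
    assumed: [left_t, right_t, left_tm1, right_tm1], 100 entries each."""
    left = np.concatenate([phi[0:100], phi[200:300]])
    right = np.concatenate([phi[100:200], phi[300:400]])
    nl, nr = np.linalg.norm(left), np.linalg.norm(right)
    return (nl - nr) / (nl + nr)

# seven-bin histogram with the boundaries used by Shouval et al. (1996)
edges = [-1.0, -0.85, -0.5, -0.15, 0.15, 0.5, 0.85, 1.0]
basis = np.random.default_rng(0).standard_normal((600, 400))  # stand-ins
counts, _ = np.histogram([od_index(p) for p in basis], bins=edges)
```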
Vergence control
Vergence control is mediated by 11 motor neurons (again chosen for convenience and consistency with prior work) encoding preferred changes in the VG angle, which are equally spaced from −1° to 1°. We assume a linear model for the motor-neuron responses z_VG(t) ∈ ℝ^11:

$$z_{VG}(t) = \sum_{h\in\{L,R\}} \sum_{k\in\{F,P\}} W_{VG}(h,k)\, y_C(h,k,t),$$

where W_VG(h, k) ∈ ℝ^{11×600} are weight matrices determining the connections from the cortical neurons to the VG motor neurons. 
The motor commands for VG, u_VG(t), are obtained by choosing the preferred change in the VG angle corresponding to one of the motor neurons, which is sampled from the probability distribution obtained by applying a softmax function to the vector of motor-neuron responses z_VG(t):

$$P\!\left(u_{VG}(t) = \hat{u}_j\right) = \frac{\exp\!\left( z_{VG,j}(t)/T \right)}{\sum_{l=1}^{11} \exp\!\left( z_{VG,l}(t)/T \right)},$$

where T is a positive temperature parameter controlling the greediness of the softmax function. Given the VG command, the VG angle is updated according to

$$\theta_{VG}(t) = \theta_{VG}(t-1) + u_{VG}(t).$$
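The softmax action selection can be sketched as follows; the function name is our own, and subtracting the maximum before exponentiation is purely for numerical stability (it leaves the softmax probabilities unchanged).

```python
import numpy as np

u_hat = np.linspace(-1.0, 1.0, 11)   # preferred VG-angle changes (deg)

def select_vergence(z_VG, T=1.0, rng=None):
    """Sample a vergence command from the softmax distribution over the
    11 motor-neuron responses; smaller T makes selection greedier."""
    if rng is None:
        rng = np.random.default_rng()
    logits = z_VG / T
    p = np.exp(logits - logits.max())   # max subtracted for stability
    p /= p.sum()
    return u_hat[rng.choice(len(u_hat), p=p)]
```

At a very low temperature the sampling becomes effectively deterministic, always returning the preferred change of the most active motor neuron.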
The weights are updated using the natural actor–critic reinforcement-learning algorithm described by Bhatnagar, Sutton, Ghavamzadeh, and Lee (2009), with the addition of weight decay. The reward is the negative discounted sum of the total reconstruction error, −∑h,k,n e(h, k, n, t), where e(h, k, n, t) is the normalized reconstruction error defined above. 
Figure 1
 
The active-efficient-coding framework.
Figure 2
 
The neural pathway mediating OKN. Boxes with arrows indicate the NOT. Boxes labeled VC indicate the visual cortex; CC: corpus callosum; LE: left eye; RE: right eye. The visual processing path is shown in blue for the left eye and green for the right eye. Solid lines indicate information from the right visual field, and dashed lines indicate information from the left visual field.
Figure 3
 
The developmental model of the optokinetic reflex. Blue (green) lines represent the flow of information for the left (right) eye. Solid (dashed) lines represent the flow of information from the right (left) visual field. Sensory neurons are shown as blue or green circles. Subcortical sensory and motor neurons in the NOT are represented using solid circles. Cortical neurons are represented using half-green, half-blue circles. Motor neurons are represented using red circles. VA: vector averaging; AS: action selection; OKN: optokinetic nystagmus; VG: vergence. The input vector from image patches is denoted x; ys and yC represent the subcortical and cortical sensory neurons' responses; zOKN and zVG represent the subcortical motor neurons' responses controlling OKN and VG; uOKN and uVG are the commands generated by the two systems; θOKN represents version angle; and θVG represents VG angle. “L” and “R” are abbreviations for the left and right hemispheres. To avoid clutter, we omit the time index.
Figure 4
 
The basis vectors of the foveal cortical sensory neurons from the left hemisphere learned under (a) normal and (b) strabismic cases. Since the input vector is a concatenation of image intensities from four patches—left/right eye at current/past frames—each basis vector is presented as four images arranged in a 2 × 2 array. Left and right columns correspond to the left and right eyes, and top and bottom rows correspond to current and past frames. The 600 basis vectors are presented in a 15 × 40 array. For clarity, we show larger views of four representative basis vectors highlighted in red in (a, b) for both the (c) normal and (d) strabismic cases.
Figure 5
 
OD histogram of the foveal cortical sensory neurons from the left hemisphere under (a) normal and (b) strabismic cases. Neurons in Bin 1 respond only to contralateral monocular input. Neurons in Bin 7 respond only to ipsilateral monocular input. Neurons in Bin 4 respond equally to both contralateral and ipsilateral monocular input.
Figure 6
 
Polar plot of the distribution of preferred slips and directions for the sensory neurons from the left hemisphere (fovea) learned under (a) normal and (b) strabismic conditions. The angular coordinate indicates the preferred direction, and the radial coordinate indicates the preferred slips in units of degrees per second. We use logarithmic scaling for the radial direction, so there is a singularity at the origin.
Figure 7
 
Eye-position trajectory in response to a large planar object moving at 30°/s in either the TN (a, c) or NT (b, d) direction after development under either (a, b) normal or (c, d) strabismic conditions.
Figure 8
 
The mOKN asymmetry as measured by NBI increases with increases in the parameter G controlling the bias against connections to the NOT from cortical cells with strong ipsilateral-eye input. The NBI was measured for the right eye for a stimulus speed of 30°/s. The curve continues to increase beyond the plotted range.
Figure 9
 
OKN behavior as a function of stimulus speed. The top row shows (a) the mean slow phase velocity as a function of the stimulus speed for both TN- (red) and NT- (blue) directed stimuli, and (b) the NBI computed for stimuli with different speeds, after normal development. The bottom row (c, d) shows the same measures after strabismic development.
Figure 10
 
The population responses of the motor neurons driving version eye movements after development with strabismus in response to monocular visual stimuli presented to the right eye. Each curve shows the population response to a visual stimulus with fixed slip. Left and right columns show responses from neurons in the left and right NOT. Top and bottom rows show responses to negative and positive slips. The legend of (a) is the same as (b). The legend of (c) is the same as (d).
Figure 11
 
Behavior measure of mOKN over near–far disparity statistics. (a) Percentage of input with far disparities as a function of fixation depth (VG angles). (b) NBI as a function of fixation depth (VG angles). The bottom x-axis represents the fixation depth, and the top x-axis represents the corresponding VG angle.
Figure 12
 
The results of strabismic development without the contralateral bias for different object sizes measured in degrees of visual angle subtended at a distance of 1 m. (a) The percentage of time that the two eyes observe similar motion as the object size (defined as the visual angle subtended at a distance of 1 m) changes. (b) The ASI as a function of object size. The shaded region indicates symmetric mOKN.