Abstract
The midget retinal ganglion cell (mRGC) mosaic forms a critical neural substrate for human pattern and color vision. To help understand mRGCs, we are developing an image-computable model of their spatial receptive fields (RFs) across the central primate retina. The open-source model explicitly incorporates the eye’s optics (including chromatic aberration), spatial and spectral sampling by the interleaved trichromatic cone mosaic, and spatial pooling of cone signals. The mRGC mosaic is synthesized as follows. First, synthetic lattices of cone positions and mRGC RF positions are generated independently, based on anatomical estimates of cone density (Packer et al., 1989) and mRGC RF density (Watson, 2014), using an iterative algorithm (Cottaris et al., 2019). Next, cones are connected to RF centers in a way that optimizes a tradeoff between spatial compactness and spectral homogeneity of the RF center. Finally, cones are pooled by mRGC surrounds with spatial weights based on H1 horizontal cell RF data (Packer & Dacey, 2002), optimized to yield visual-field spatial transfer functions (STFs) that approximate those recorded in the macaque (Croner & Kaplan, 1995). Point spread functions derived from wavefront-aberration measurements (Polans et al., 2015) are used to link visual-field and retinal extents. To validate the model, we fit the STFs (computed from the model's responses to drifting achromatic gratings viewed through physiological optics) with a Difference of Gaussians (DoG) RF model and compare the fitted parameters to the same model fit to measurements obtained in vivo in the macaque (Croner & Kaplan, 1995). The ratios of surround-to-center radius and surround-to-center integrated sensitivity agree closely (mean/std z-score: 0.02 +/- 0.94 and -0.08 +/- 1.16, respectively). The model center sizes are slightly smaller than those from macaque (mean/std z-score: -1.99 +/- 1.70), which may be due to uncertainty about the optics. Our model offers an image-computable approach for assessing how mRGCs impact spatial and chromatic vision.
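
To illustrate the validation step, the sketch below fits the standard DoG spatial transfer function, R(f) = Kc*pi*rc^2*exp(-(pi*rc*f)^2) - Ks*pi*rs^2*exp(-(pi*rs*f)^2) (the Croner & Kaplan, 1995 parameterization, with Kc, Ks the center and surround peak sensitivities and rc, rs the corresponding Gaussian radii), to an STF and forms the surround-to-center ratios and z-scores reported above. This is a minimal Python/SciPy sketch under stated assumptions, not the model's actual implementation; the function names, spatial frequencies, and parameter values are hypothetical.

# Minimal sketch: fit a DoG spatial transfer function to STF amplitudes and
# compute the ratios and z-scores reported in the abstract. All names and
# numerical values are illustrative, not taken from the model.
import numpy as np
from scipy.optimize import curve_fit

def dog_stf(sf, Kc, rc, Ks, rs):
    # DoG spatial transfer function, Croner & Kaplan (1995) parameterization.
    # sf: spatial frequency (cyc/deg); rc, rs: center/surround Gaussian radii (deg);
    # Kc, Ks: center/surround peak sensitivities.
    center = Kc * np.pi * rc**2 * np.exp(-(np.pi * rc * sf)**2)
    surround = Ks * np.pi * rs**2 * np.exp(-(np.pi * rs * sf)**2)
    return center - surround

# Hypothetical STF: here synthesized from illustrative DoG parameters; in practice
# it would be the model's response amplitude to drifting gratings at each frequency.
sf = np.logspace(-0.6, 1.5, 12)                  # ~0.25 to ~32 cyc/deg
stf = dog_stf(sf, 150.0, 0.03, 1.9, 0.2)

p0 = [100.0, 0.05, 1.0, 0.3]                     # initial guess: Kc, rc, Ks, rs
(Kc, rc, Ks, rs), _ = curve_fit(dog_stf, sf, stf, p0=p0, maxfev=20000)

radius_ratio = rs / rc                           # surround-to-center radius ratio
sensitivity_ratio = (Ks * rs**2) / (Kc * rc**2)  # surround-to-center integrated sensitivity

# z-score of a model-derived value against macaque population statistics
# (mu and sigma are placeholders, not the published values).
def z_score(x, mu, sigma):
    return (x - mu) / sigma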