November 2018
Volume 18, Issue 12
Open Access
Model of parafoveal chromatic and luminance temporal contrast sensitivity of humans and monkeys
Author Affiliations
  • Emily C. Gelfand
    Department of Physiology & Biophysics, Washington National Primate Research Center, University of Washington, Seattle, WA, USA
  • Gregory D. Horwitz
    Department of Physiology & Biophysics, Washington National Primate Research Center, University of Washington, Seattle, WA, USA
    ghorwitz@u.washington.edu
Journal of Vision November 2018, Vol.18, 1. doi:10.1167/18.12.1
Abstract

Rhesus monkeys are a valuable model for studies of primate visual contrast sensitivity. Their visual systems are similar to those of humans, and they can be trained to perform detection tasks at threshold during neurophysiological recording. However, the stimulus dependence of rhesus monkey contrast sensitivity has not been well characterized. Temporal frequency, color, and retinal eccentricity affect the contrast sensitivity of humans in reasonably well-understood ways. To ask whether these factors affect monkey sensitivity similarly, we measured detection thresholds of two monkeys using a two-alternative, forced-choice task and compared them to thresholds of two human subjects who performed the same task. Stimuli were drifting Gabor patterns that varied in temporal frequency (1–60 Hz), L- and M-cone modulation ratio, and retinal eccentricity (2°–14° from the fovea). Thresholds were fit by a model that assumed a pair of linear detection mechanisms: a luminance contrast detector and a red-green contrast detector. Analysis of model fits indicated that the sensitivity of these mechanisms varied across the visual field, but their temporal and spectral tuning did not. Human and monkey temporal contrast sensitivity was similar across the conditions tested, but monkeys were twofold less sensitive to low-frequency luminance modulations.

Introduction
A primary goal of neuroscience is to understand how sensory signals are converted into perceptual experiences. This broad phenomenon can be studied fruitfully through flicker sensitivity. Neurons in the early visual system respond to flicker above the critical flicker fusion frequency, implying a loss of high-frequency information between these neurons and those that mediate perception directly (Lee, Pokorny, Smith, Martin, & Valberg, 1990; Kremers, Lee, & Kaiser, 1992; Yeh, Lee, & Kremers, 1995; Engel, Zhang, & Wandell, 1997; Gur & Snodderly, 1997; Krolak-Salmon et al., 2003; Williams, Mechler, Gordon, Shapley, & Hawken, 2004; Vul & MacLeod, 2006; Jiang, Zhou, & He, 2007; Lee, Sun, & Zucchini, 2007; Falconbridge, Ware, & MacLeod, 2010). In addition, some neurons respond to imperceptible low-frequency modulations, demonstrating that information loss is not exclusive to high frequencies (Palmer, Cheng, & Seidemann, 2007; Hass & Horwitz, 2013). The loci and stimulus specificity of information loss in the visual system are largely unknown, and identifying them is an important step toward understanding visual awareness (Crick & Koch, 1998; Carmel, Lavie, & Rees, 2006). 
With regard to temporal vision specifically, a significant obstacle to localizing information-processing bottlenecks is that existent neurophysiological and psychophysical measurements are difficult to compare. Several factors contribute. First, psychophysical measurements of temporal contrast sensitivity are made at low contrast, by definition, whereas most neurophysiological studies use high-contrast stimuli. Nonlinearities in neuronal contrast-response functions prevent accurate extrapolation of responses from high to low contrasts. Second, temporal contrast sensitivity varies across the visual field (Sharpe, 1974; Koenderink, Bouma, Bueno de Mesquita, & Slappendel, 1978a; Koenderink, Bouma, Bueno de Mesquita, & Slappendel, 1978b; Virsu, Rovamo, Laurinen, & Nasanen, 1982; Wright & Johnston, 1983; Rovamo & Raninen, 1984; Tyler, 1985; Tyler, 1987; Pointer & Hess, 1989; Snowden & Hess, 1992) and with retinal illumination (De Lange Dzn, 1961; Kelly, 1972; Rovamo & Raninen, 1984; Snowden, Hess, & Waugh, 1995). Neurophysiological and psychophysical measurements are rarely matched for these conditions. Finally, most neurophysiological measurements of flicker sensitivity have been made in animal models, and relatively little is known about the temporal contrast sensitivity of these animals (but see De Valois, Morgan, Polson, Mead, & Hull, 1974; Merigan, 1980). 
To help bridge the gap between neurophysiological and psychophysical measurements of temporal contrast sensitivity, we made behavioral measurements in rhesus monkeys—the animal most frequently used to model human visual behavior. Specifically, we used a two-alternative, forced-choice (2AFC) task to measure contrast sensitivity of two rhesus monkeys as a function of three factors: temporal frequency, the relative modulation depth of the long wavelength-sensitive (L) cones and the medium wavelength-sensitive (M) cones (i.e., color direction in the LM plane of cone contrast space), and position in the visual field. We varied color because monkeys are highly sensitive to chromatic modulations under some conditions (Stoughton, Lafer-Sousa, Gagin, & Conway, 2012; Gagin et al., 2014; Lindbloom-Brown, Tait, & Horwitz, 2014). We also varied visual field location because chromatic sensitivity drops steeply with retinal eccentricity in humans (Anderson, Mullen, & Hess, 1991; Mullen, 1991; Stromeyer, Lee, & Eskew, 1992; Mullen & Kingdom, 2002), and most neurophysiological studies probe neurons with parafoveal receptive fields. For comparison, we also measured the temporal contrast sensitivity of two human observers under the same conditions as the monkeys. 
To analyze the data, we built a model that described contrast sensitivity across the range of stimulus variations tested. The model was based on three established models, each of which described contrast sensitivity as a function of temporal frequency (Watson, 1986), color direction (Stromeyer, Cole, & Kronauer, 1985), and location in the visual field (Robson & Graham, 1981). These models had not been previously united, but we found that a simple combination predicted thresholds accurately without the need to assume complex interactions among the model parameters. 
Methods
Subjects
Four subjects participated in this study: the authors (H1, a 23-year-old woman; H2, a 46-year-old man) and two nonhuman primates (M1 and M2, both male, Macaca mulatta). All procedures used with nonhuman primates were approved by the University of Washington Institutional Animal Care and Use Committee and adhered to the American Physiological Society's Guiding Principles for the Care and Use of Vertebrate Animals in Research and Training. All procedures used with human subjects conformed to the Declaration of Helsinki and the policies of the University of Washington Human Subjects Division. Human subjects provided written, informed consent. 
Displays
All subjects were tested in a room that was dark except for the light from a digital light-processing projector (ProPixx, VPixx Inc., Saint-Bruno, Canada) illuminating a rear projection screen (Da-lite Inc., Warsaw, IN) at 240 Hz. The screen subtended 46° × 26° of visual angle. The center of the screen was 61 cm in front of the subject and matched vertically and horizontally to the subject's eye level. The chromaticity of the display background was (x = 0.3, y = 0.3), and the luminance was 90 cd/m².
Psychophysical task
Contrast detection thresholds were measured using a spatial 2AFC contrast detection task. Each trial began with the presentation of a 0.2° × 0.2° black fixation point at the center of the screen (Figure 1). Five hundred milliseconds later, a Gabor stimulus appeared in the left or right hemifield. The fixation point disappeared 100 to 600 ms after the end of the stimulus presentation, and simultaneously, two targets appeared on the horizontal meridian. The subject was then required to indicate within 700 ms whether the stimulus had appeared on the left or right by selecting the corresponding target. Correct responses were accompanied by a tone and, for monkeys, a water reward. 
Figure 1
 
Contrast detection task. Panels from top to bottom show the sequence of events in each trial. Top panel: Subject fixates. Middle panel: Gabor stimulus appears. The horizontal meridian (dotted line), φ (arc), and r (curly bracket) illustrate the polar coordinate system used to describe the location of the stimulus; they were not visible to the subject. Bottom panel: Choice targets appear.
Testing procedures
Monkey subjects were seated in a testing chair, with their heads stabilized by a head posting device. Eye position was tracked with a scleral search coil (Riverbend Instruments, Birmingham, AL). In 86% of the testing sessions, fixation was required to remain within a 1° × 1° window. In the remaining 14% of the testing sessions, the fixation window was enlarged to a maximum of 1.5° × 1.5°. Targets appeared 2° from the fixation point on the horizontal meridian. 
Human subjects performed the same psychophysical task as the monkeys. In 42% (133 of 320) of the testing sessions, the subject's reports were expressed via saccades to the same target locations as the monkeys'. In these sessions, head position was stabilized with a chin rest, eye position was tracked (EyeLink 1000 Plus, SR Research Ltd., Ottawa, Canada), and fixation was enforced. In the other 58% of sessions, subjects indicated their responses with a button box, and eye position was not tracked. The chin rest was used in most but not all of these sessions. Sixty percent of the button box sessions were conducted before the eye tracker sessions. 
To examine the effect of response method on detection thresholds, we compared thresholds for 10 different combinations of color direction and temporal frequency, on the horizontal meridian, 5° from the fixation point. Threshold measurements were strongly correlated across response methods (r = 0.93 and 0.67 for H1 and H2, respectively) and did not differ significantly for either subject (paired t-tests: p = 0.86 and p = 0.11), indicating that the two response methods yielded similar threshold measurements. 
Stimuli
The stimulus was an upward-drifting, horizontally oriented Gabor with a spatial frequency of 1 cycle/° and a standard deviation of 0.15°. Stimulus contrast ramped up over 167 ms, remained constant for 334 ms, and then ramped down over 167 ms. This relatively long duration mitigated the effect of the contrast envelope on the temporal frequency power spectrum. 
Contrast detection thresholds were measured as a function of three variables: temporal frequency, color direction in the LM plane, and location in the visual field. Temporal frequency and color direction varied within blocks of trials and, on each trial, were selected from a set of two to four combinations that were chosen at the beginning of the block. Stimulus locations in the visual field were fixed within each block. Practice trials at the beginning of each block familiarized the subjects with the stimulus locations. Nevertheless, increases in spatial uncertainty with retinal eccentricity presumably manifest as increases in contrast detection thresholds (Pelli, 1985; Levi, Klein, & Yap, 1987). 
Colorimetric calculations were based on the Stockman, MacLeod, and Johnson (1993) 10° cone fundamentals. S-cones were not modulated, and all stimuli were presented at ≥2° from the fovea to avoid peak macular pigment density. L- and M-cone contrasts were defined as  
\begin{equation}\tag{1}L {\mbox{-}} cone\;contrast = {{{L_{STIM}} - {L_{BACKGROUND}}} \over {{L_{BACKGROUND}}}},\end{equation}
 
\begin{equation}\tag{2}M {\mbox{-}} cone\;contrast = {{{M_{STIM}} - {M_{BACKGROUND}}} \over {{M_{BACKGROUND}}}},\end{equation}
where \({L_{STIM}}\) represents the L-cone excitation produced by the peak of the Gabor stimulus and \({L_{BACKGROUND}}\) represents the L-cone excitation produced by the background; \({M_{STIM}}\) and \({M_{BACKGROUND}}\) are defined analogously for the M-cones.  
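Equations 1 and 2 are ordinary Weber contrasts. A minimal sketch in Python (the cone excitation values below are hypothetical, chosen only to illustrate the calculation):

```python
def cone_contrast(stim, background):
    """Weber cone contrast (Equations 1 and 2): (excitation at the Gabor
    peak minus background excitation) divided by background excitation."""
    return (stim - background) / background

# Hypothetical cone excitations for illustration only
l_contrast = cone_contrast(stim=1.05, background=1.0)  # L-cone contrast = 0.05
m_contrast = cone_contrast(stim=0.95, background=1.0)  # M-cone contrast = -0.05
```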
Color direction was defined as  
\begin{equation}\tag{3}{\tan ^{ - 1}}\left( {{{L {\mbox{-}} cone\;contrast} \over {M {\mbox{-}} cone\;contrast}}} \right),\end{equation}
and the modulation amplitude of the stimulus was defined as  
\begin{equation}\tag{4}\sqrt {L{\mbox{-}}cone\;contras{t^2} + M{\mbox{-}}cone\;contras{t^2}} .\end{equation}
 
The color direction and modulation amplitude of a Gabor pattern that modulates the L- and M-cones can be represented as the direction and length, respectively, of a vector in the LM plane of cone contrast space. Temporal frequency can be varied independently of L- and M-cone contrasts and is therefore represented as an orthogonal stimulus dimension. Thus, each Gabor stimulus is represented in a three-dimensional space (Figure 2). 
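The mapping from cone contrasts to color direction (Equation 3) and modulation amplitude (Equation 4) can be sketched as follows; we use arctan2 to keep the correct quadrant, an implementation choice not specified in the text:

```python
import numpy as np

def color_direction(l_contrast, m_contrast):
    # Equation 3, taken as written: the angle whose tangent is the ratio of
    # L- to M-cone contrast; arctan2 resolves the quadrant
    return np.arctan2(l_contrast, m_contrast)

def modulation_amplitude(l_contrast, m_contrast):
    # Equation 4: vector length in the LM plane of cone contrast space
    return np.hypot(l_contrast, m_contrast)

# An opponent (L-M) modulation of 5% contrast in each cone class
direction = color_direction(0.05, -0.05)       # 3*pi/4 radians
amplitude = modulation_amplitude(0.05, -0.05)  # 0.05 * sqrt(2)
```

Note that with this convention the L−M direction lands at 3π/4, the angle assigned to the RG mechanism in Equation 7.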
Figure 2
 
Stimulus space. Each Gabor stimulus is represented by a pair of points that are symmetric with respect to the temporal frequency axis. Points far from this axis have high contrast, and points on the axis have zero contrast.
Contrast detection thresholds for each color direction–temporal frequency combination were measured by the QUEST procedure (Watson & Pelli, 1983). The mode of the QUEST function after 40 trials was taken as an estimate of the threshold. The number of threshold measurements from each subject is provided in Table 1.
Table 1
 
Number of threshold measurements per subject. Notes: Color direction and temporal frequency conditions were distributed nearly continuously in the experiment but are binned coarsely in the table. Nonopponent and opponent stimuli are those in which L- and M-cone modulations had the same or opposite sign, respectively.
Within each block of trials, the Gabor stimulus appeared at one of two locations that were mirror symmetric about the vertical meridian. We refer to these location pairs as the location (singular) of the stimulus, because knowing one member of the pair identifies the other. At each location tested, thresholds were first measured with four stimuli: 1 Hz L+M, 1 Hz L−M, 60 Hz L+M, and 60 Hz L−M. 
Subsequent color direction–temporal frequency combinations were selected using an adaptive procedure based on Gaussian process regression (Rasmussen, 2004). Before each session, the subject's thresholds were fitted with a nonparametric function that provided threshold predictions for every color direction–temporal frequency combination, along with error estimates associated with these predictions. Color direction–temporal frequency combinations were sampled where the estimated prediction error was greatest. The covariance of the Gaussian process was the product of a Matérn function of log temporal frequency and a periodic function of color direction (MacKay, 1998). Hyperparameters of the covariance function, which specify the variance and length scale of the fitted function, were refit after each block by maximum likelihood. Color direction–temporal frequency combinations for which the predicted threshold was outside of the gamut of the display were not tested. 
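The product covariance described above can be sketched in Python. The Matérn smoothness order (3/2), the period of π over color direction (a modulation and its 180°-rotated counterpart are the same stimulus in opposite phase), and all parameter names are our assumptions, not taken from the paper:

```python
import numpy as np

def matern32(d, ell):
    # Matern covariance over (log) temporal frequency; smoothness 3/2 is an
    # assumption -- the paper does not state the order
    a = np.sqrt(3.0) * np.abs(d) / ell
    return (1.0 + a) * np.exp(-a)

def periodic(dtheta, ell, period=np.pi):
    # Standard periodic covariance (MacKay, 1998) over color direction
    return np.exp(-2.0 * np.sin(np.pi * dtheta / period) ** 2 / ell ** 2)

def covariance(log_tf1, theta1, log_tf2, theta2, var, ell_tf, ell_theta):
    # Product kernel: Matern in log temporal frequency x periodic in color
    # direction; var and the length scales are the fitted hyperparameters
    return var * matern32(log_tf1 - log_tf2, ell_tf) * periodic(theta1 - theta2, ell_theta)
```

In the experiment, the point of maximal posterior predictive uncertainty under this kernel would be the next color direction–temporal frequency combination tested.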
Modeling contrast sensitivity: Effects of temporal frequency and color direction
Our model of temporal contrast sensitivity is based on one developed by Watson (1986). The Watson model assumes that detection is mediated by a single, linear bandpass filter that can be described as the difference of two low-pass filters, each with transfer function  
\begin{equation}\tag{5}{H_1}\left( \omega \right) = {\left( {i2\pi \tau \omega + 1} \right)^{ - n}},\end{equation}
where \(\tau \) is a time constant, \(\omega \) is temporal frequency in Hz, and \(n\) is the number of low-pass stages. The transfer function of the bandpass filter is the difference between the transfer functions of two low-pass filters:  
\begin{equation}\tag{6}H\left( \omega \right) = \xi \left( {{H_1}\left( \omega \right) - \zeta {H_2}\left( \omega \right)} \right),\end{equation}
where \({H_1}\left( \omega \right)\) and \({H_2}\left( \omega \right)\) are the transfer functions of the low-pass filters defined by Equation 5, \(\xi \) is a gain parameter, and \(\zeta \) controls the transience of the bandpass filter. When \(\zeta = 0\), the filter is low pass, and when \(\zeta \ \gt \ 0\), the filter is bandpass.  
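Equations 5 and 6 transcribe directly into Python (the study itself used MATLAB); contrast sensitivity at frequency ω is the modulus of the complex transfer function. The parameter values below are illustrative placeholders, not the fitted values:

```python
import numpy as np

def lowpass(omega, tau, n):
    # Equation 5: transfer function of n cascaded low-pass stages
    # with time constant tau; omega is temporal frequency in Hz
    return (1j * 2 * np.pi * tau * omega + 1) ** (-n)

def bandpass(omega, xi, zeta, tau1, tau2, n1, n2):
    # Equation 6: gain-scaled difference of two low-pass filters;
    # zeta = 0 gives a low-pass filter, zeta > 0 a bandpass filter
    return xi * (lowpass(omega, tau1, n1) - zeta * lowpass(omega, tau2, n2))

# Illustrative parameters only (roughly Watson-like time constants)
omega = np.array([1.0, 5.0, 20.0, 60.0])
sensitivity = np.abs(bandpass(omega, xi=100.0, zeta=0.9,
                              tau1=0.004, tau2=0.008, n1=9, n2=10))
```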
We extended this model to describe contrast detection thresholds across color directions in the LM plane. We assumed that detection is mediated by two linear mechanisms whose outputs are squared and summed. As a consequence, detection contours at any temporal frequency were constrained to be elliptical. We did not assume that the luminance and chromatic mechanisms were orthogonal. Therefore, the orientation of detection ellipses in the LM plane could, and in general did, change with temporal frequency. 
One of the mechanisms (RG) was assumed to respond to the difference between L- and M-cone contrasts. The second mechanism (LUM) was assumed to respond to a weighted sum of L- and M-cone contrasts. The sensitivity of each mechanism at frequency \(\omega \) was \({H_{RG}}\left( \omega \right)\) and \({H_{LUM}}\left( \omega \right)\), transfer functions that conform to the Watson (1986) model but have different parameter values. The predicted contrast sensitivity across directions in the LM plane was therefore  
\begin{equation}\tag{7}Contrast\;sensitivity = \sqrt {{{\left( {{H_{RG}}\left( \omega \right)\left[ {\cos \left( {{{3\pi } \over 4}} \right)L + \sin \left( {{{3\pi } \over 4}} \right)M} \right]} \right)}^2} \atop + {{\left( {{H_{LUM}}\left( \omega \right)\left[ {\cos \left( \theta \right)L + \sin \left( \theta \right)M} \right]} \right)}^2}} ,\end{equation}
where L and M are cone contrasts normalized so that \(L^2 + M^2 = 1\), and θ is a fitted parameter indicating the relative weighting of L- and M-cones to the LUM mechanism. The RG mechanism was assumed to weight L- and M-cone signals equally (Stromeyer et al., 1985; Gegenfurtner & Hawken, 1995; Stromeyer, Kronauer, Chaparro, & Eskew, 1995; Sankeralli & Mullen, 1996). Contrast threshold was defined as the reciprocal of contrast sensitivity.  
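Equation 7 can be transcribed as follows; the mechanism sensitivities and θ below are illustrative placeholders, not fitted values:

```python
import numpy as np

def contrast_sensitivity(L, M, h_rg, h_lum, theta):
    """Equation 7. L, M: cone contrasts normalized so L**2 + M**2 == 1.
    h_rg, h_lum: mechanism sensitivities |H_RG(omega)| and |H_LUM(omega)|
    at the stimulus temporal frequency. theta: L:M weighting of LUM."""
    rg = h_rg * (np.cos(3 * np.pi / 4) * L + np.sin(3 * np.pi / 4) * M)
    lum = h_lum * (np.cos(theta) * L + np.sin(theta) * M)
    return np.sqrt(rg ** 2 + lum ** 2)

# Threshold is the reciprocal of sensitivity; values here are illustrative.
# For an L+M stimulus with theta = pi/4, the RG term vanishes and only the
# LUM mechanism contributes.
L, M = np.cos(np.pi / 4), np.sin(np.pi / 4)
thr = 1.0 / contrast_sensitivity(L, M, h_rg=50.0, h_lum=10.0, theta=np.pi / 4)
```

Because the two squared mechanism outputs are summed, the predicted detection contour at any fixed temporal frequency is an ellipse in the LM plane, as stated above.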
Modeling contrast sensitivity: Effects of stimulus position in the visual field
The preceding model describes contrast sensitivity at individual locations in the visual field. To capture differences in contrast sensitivity across the visual field, we extended the model. Visual field locations were represented in polar coordinates, where r is the eccentricity of the stimulus in degrees of visual angle, and φ is the position of the stimulus in the plane of the screen, relative to the horizontal meridian (Figure 1). These parameters can be written as  
\begin{equation}\tag{8}r = \sqrt {{h^2} + {v^2}} \end{equation}
 
\begin{equation}\tag{9}\varphi = {\tan ^{ - 1}}\left( {{v \over h}} \right),\end{equation}
where h and v are the horizontal and vertical positions, respectively, of the stimulus in degrees of visual angle relative to the fixation point.  
As described in the Results section, we tested several parametric forms of the relationship between \({\xi _{LUM}}\) and \({\xi _{RG}}\) (Equation 6) and (r, φ). The general form of the dependence was  
\begin{equation}\tag{10}{\log _{10}}\left( \xi \right) = {b_0} + {b_1}r + {b_2}r\cos \left( {2\varphi } \right) + {b_3}r\sin \left( {2\varphi } \right),\!\end{equation}
where \(\xi \) represents \({\xi _{LUM}}\) or \({\xi _{RG}}\), which govern the sensitivity of the LUM and RG mechanisms, respectively. Setting φ = 0 shows that \(\xi \) changes with slope (b1 + b2) along the horizontal meridian, and setting φ = ±π/2 shows that \(\xi \) changes with slope (b1 − b2) along the vertical meridian. The parameter b3 allows \(\xi \) to differ between the upper and lower visual fields: when b3 is positive, \(\xi \) is greater in the upper hemifield, and when b3 is negative, \(\xi \) is greater in the lower hemifield. Note that b3 does not affect \(\xi \) on the vertical meridian (where φ = ±π/2), a region of visual space we were unable to test because of the logic of our left/right 2AFC task. All parameters were fit by minimizing the summed absolute differences between the log-transformed measured and predicted detection thresholds.  
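Equations 8 through 10 combine into a short function mapping stimulus position to mechanism gain; the b coefficients below are hypothetical, chosen only to illustrate the geometry:

```python
import numpy as np

def xi(h, v, b0, b1, b2, b3):
    """Equations 8-10: mechanism sensitivity as a function of stimulus
    position (h, v) in degrees of visual angle relative to fixation."""
    r = np.hypot(h, v)      # eccentricity (Equation 8)
    phi = np.arctan2(v, h)  # polar angle re: horizontal meridian (Equation 9)
    log_xi = b0 + b1 * r + b2 * r * np.cos(2 * phi) + b3 * r * np.sin(2 * phi)
    return 10.0 ** log_xi   # Equation 10 is defined on log10(xi)

# Hypothetical coefficients: sensitivity falls with eccentricity (b1 < 0),
# falls faster on the vertical meridian (via b2), upper-field bias (b3 > 0)
b0, b1, b2, b3 = 1.0, -0.1, 0.02, 0.05
horiz = xi(5.0, 0.0, b0, b1, b2, b3)  # slope b1 + b2 along phi = 0
vert = xi(0.0, 5.0, b0, b1, b2, b3)   # slope b1 - b2; b3 drops out
```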
Results
We measured contrast detection thresholds of two monkey and two human subjects as a function of three variables: temporal frequency, angle in the LM plane, and location in the visual field. Thresholds of subject M2 (Figure 3), measured at screen location r = 5, φ = 0, capture many features of this broader data set. 
Figure 3
 
Data from subject M2 and model fit. (A–D) Contrast detection thresholds (black points) on trials in which the stimulus appeared 5° from the fixation point on the horizontal meridian. Stimulus directions for which a threshold could not be measured because of limitations of the display gamut are plotted at the gamut edge (red points). Surfaces are best fits of Equation 7 (green). (A) Stimulus space oriented so that the L+M axis is in the plane of the page. (B) Stimulus space oriented so that the L−M axis is in the plane of the page. (C, D) Magnified views of the circled portion of A and B, respectively. (E) Cross sections through the surfaces in A–D parallel to the LM plane at 1 Hz (red), 5 Hz (green), 10 Hz (blue), and 20 Hz (black). Detection thresholds (symbols) were collected from bins that spanned the nominal temporal frequency ± a factor of 1.5. (F) Contrast sensitivity measurements (points) and 1-D functions from the model fit (curves) in the L-cone direction (red), M-cone direction (cyan), L−M direction (gray), and L+M direction (black). Data points were collected from bins that spanned the nominal color direction ± 10°.
Thresholds generally increased with temporal frequency, as shown by the flaring of the data points and the fitted surface along the temporal frequency axis (Figure 3A, B). To show the effects of color direction, the data have been plotted twice: once rotated so that the L+M axis is in the plane of the page (Figure 3A) and once rotated so that the L−M axis is in the plane of the page (Figure 3B). 
Detection thresholds for low temporal frequency L+M modulations were greater than for low temporal frequency L−M modulations, as expected (Stromeyer et al., 1985). This feature of the data is manifest in the greater width of the fitted threshold surface in the L+M direction (Figure 3C) than in the L−M direction (Figure 3D). It can also be seen in slices through the detection threshold surface fit: detection ellipses (Figure 3E) and contrast sensitivity functions (Figure 3F). The bump in RG sensitivity at ∼2 Hz (Figure 3F) was a consequence of noisy data fit with a flexible model. It was not present in data from M2 at other locations nor in equivalent data from human subject H1 (Figure 4). 
Figure 4
 
Data and model fits from subject H1 with conventions as in Figure 3.
Modeling contrast sensitivity at individual visual field locations
For each observer, we measured detection thresholds at 11 to 21 locations in the visual field and fit the data independently at each location. Each of these fits contains 13 parameters: six that control the contrast sensitivity of the LUM mechanism, six that control the contrast sensitivity of the RG mechanism, and one that controls the L:M ratio of the LUM mechanism (see Equation 7 in the Methods section). We iteratively refit data from each location using solutions from every other location as initial guesses to the solver (MATLAB, MathWorks, Natick, MA; fmincon) until none of the fits improved. We confirmed that the final model described the data well in the sense that the distribution of the residuals was centered on zero, was narrow, and depended little on predicted threshold (Figure 5). A subtle decrease in the variance of the residuals with predicted threshold may be due to the exclusion from this analysis of thresholds beyond the display gamut, which occur preferentially under high predicted-threshold conditions. 
Figure 5
 
Residuals from the 13-parameter model fits (Equation 7) as a function of predicted threshold. Residuals are defined as log10(measured threshold) − log10 (predicted threshold), where both measured and predicted thresholds are in units of stimulus modulation amplitude (Equation 4). Models were fit independently to data collected at each visual field location. Residuals from each subject are plotted in a different color (see inset).
Describing detection thresholds at each visual field location independently had two significant shortcomings. First, the model overfit the data; many parameters were used to fit few data points. Second, predictions were made only at locations in the visual field at which thresholds had been measured. In the next section, we describe an extension of the model with fewer parameters that generalizes to a continuum of visual field locations. 
Modeling contrast sensitivity across visual field locations
To extend the model, we first looked for patterns in the fitted values of the 13 model parameters across locations in the visual field. For each subject, we plotted the best-fit value of each parameter as a function of location in the visual field and inspected the plots to identify trends. The parameters ξLUM and ξRG, which specify the sensitivity of the LUM and RG mechanisms, respectively, stood out as strongly eccentricity dependent (Equation 6, data not shown). These two parameters were therefore allowed to change with visual field location in all model variants described below. 
We considered the possibility that allowing nLUM (Equation 5), nRG (Equation 5), or θ (Equation 7) to vary across the visual field, in addition to ξLUM and ξRG, would improve the model fit. nLUM and nRG affect the slope of the high-frequency roll-off of the LUM and RG mechanisms, respectively, and θ affects the L:M cone weighting to the LUM mechanism. We fit the data from each subject using models in which ξLUM and ξRG and, optionally, one of the set (nLUM, nRG, and θ), were allowed to vary across location. All other parameters were constrained to have the same value at every location. Individual threshold measurements were held out from each fit and used to calculate prediction errors from each model. 
Prediction errors from the models that allowed nLUM, nRG, or θ to vary were similar to those from a model that did not (one-sided Wilcoxon tests, p > 0.1 in all 12 cases: 3 models × 4 subjects). These results are consistent with the idea that the overall sensitivity of the LUM and RG mechanisms, but not nLUM, nRG, or θ, varies across the region of the visual field that we probed. We therefore focused exclusively on models in which only ξLUM and ξRG changed with visual field location. In the next section, we discuss the parametric form of this dependence. 
Parametric description of ξLUM and ξRG across visual space
Contrast sensitivity for all subjects dropped more quickly along the vertical meridian than along the horizontal meridian for both LUM (Figure 6A) and RG (Figure 6B). We modeled this pattern in the data with Equation 10 (see the Methods section) and considered four variants of the model. Each model variant applied different constraints to b3, which controls the asymmetry of detection thresholds above and below the horizontal meridian. In the “symmetric” variant, sensitivity was forced to be symmetric in the upper and lower visual fields (b3 = 0 for both ξLUM and ξRG). In the “yoked” variant, the upper and lower visual field asymmetry was constrained to be identical for both mechanisms (a single b3 parameter was shared by ξLUM and ξRG). In the “luminance-only” variant, LUM sensitivity, but not RG sensitivity, was allowed to differ between upper and lower visual fields (b3 = 0 for ξRG). In the “unconstrained” variant, LUM and RG sensitivity was allowed to differ independently and asymmetrically in the upper and lower visual fields (b3 was fit separately for ξLUM and ξRG). 
Figure 6
 
Variations in ξLUM and ξRG across the visual field. Data from subject M1 were fitted with a model in which all of the parameters except ξLUM and ξRG were fixed across visual field locations. Left: LUM (A, black dots) and RG (B, black dots) contrast sensitivity as a function of visual field location, parameterized by r and φ. Contrast sensitivity is the reciprocal of detection threshold in units of stimulus modulation amplitude (Equation 4). To facilitate comparison between LUM and RG, contrast sensitivity was evaluated at 6 Hz, which is the frequency at which the components of the fitted model apart from ξLUM and ξRG confer equal sensitivity. Surfaces were fit with Equation 10. Insets show the slope of the modeled contrast sensitivity decline as a function of φ (e.g., for each degree of eccentricity along the horizontal meridian, LUM contrast sensitivity drops by a factor of 0.95). Right: Surface fits from the left rendered as a heat map with visual field location represented in degrees of visual angle. The color bar applies to both top and bottom panels. Contours in A are 20, 15, and 10. Contours in B are 30, 25, 20, 15, and 10.
We compared these model variants using a leave-one-out, cross-validated analysis of prediction error similar to the analysis of nLUM, nRG, and θ previously described. We held out individual threshold measurements, fit the four models (symmetric, yoked, luminance-only, and unconstrained) to the remaining data, recorded prediction errors between the model fits and the held-out data point, and repeated this process for each threshold measurement. The model with the lowest prediction errors, for all subjects, was the yoked variant (Figure 7). 
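The cross-validation logic is simple to state in code. This hedged sketch compares two nested toy models (a constant versus a line) by leave-one-out prediction error; the data and models are stand-ins for the four model variants and the threshold data, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "thresholds": genuinely linear in x, plus noise.
x = np.linspace(-1.0, 1.0, 15)
y = 0.8 * x + 0.2 + 0.05 * rng.standard_normal(x.size)

def loo_errors(fit, predict):
    """Hold out each point in turn, fit to the rest, and record the
    absolute prediction error on the held-out point."""
    errs = []
    for i in range(x.size):
        keep = np.arange(x.size) != i
        params = fit(x[keep], y[keep])
        errs.append(abs(predict(params, x[i]) - y[i]))
    return np.array(errs)

# Restricted model (constant) vs. fuller model (line), analogous to
# comparing variants that do or do not let a parameter vary.
err_const = loo_errors(lambda xs, ys: (ys.mean(),), lambda p, xi: p[0])
err_line = loo_errors(lambda xs, ys: np.polyfit(xs, ys, 1),
                      lambda p, xi: np.polyval(p, xi))

# The extra parameter earns its keep only if held-out error drops.
assert np.median(err_line) < np.median(err_const)
```

Because every prediction is made on data excluded from the fit, a model with more free parameters wins only when the extra flexibility captures real structure rather than noise.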
Figure 7
 
Cross-validated model comparisons. Individual threshold measurements were withheld from fitting and used to calculate prediction errors for four models: symmetric (b3 = 0 for both ξLUM and ξRG), yoked (a single b3 parameter was shared by ξLUM and ξRG), luminance-only (b3 = 0 for ξRG), and unconstrained (b3 was fit separately for ξLUM and ξRG). The prediction error is quantified as log10(measured threshold) − log10(predicted threshold), where threshold is measured in units of stimulus modulation amplitude (Equation 4). The ratio of prediction errors between models was calculated for each threshold measurement. Negative log prediction error ratios indicate that the yoked model produced lower prediction errors than the alternative model. Points and error bars indicate medians and bootstrap estimates of standard error. More data were collected from monkeys than humans, resulting in smaller error bars for monkeys.
The yoked model contained 18 parameters: 13 that governed sensitivity as a function of temporal frequency and color direction, and five that governed changes in two of the 13 parameters (ξLUM and ξRG) across the visual field. Residuals from these model fits, plotted as a function of predicted threshold, were similar to those obtained when a separate 13-parameter model was fitted to the data at each screen location individually despite the 8- to 15-fold reduction in the number of parameters (Figure 8, compare to Figure 5). 
Figure 8
 
Residuals, defined as log10(measured threshold) − log10(predicted threshold), from the 18-parameter model fits (Equation 10) as a function of predicted threshold. Conventions are as in Figure 4.
The median ratio between the measured and predicted thresholds was 1.00, indicating that the predictions were not systematically biased upward or downward. The 10th and 90th percentiles of the ratios were 0.77 and 1.40, respectively, indicating that 80% of the measured thresholds were within a factor of ∼1.4 of the predictions. We conclude that the model fit most of the data accurately. 
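As a worked illustration of these summary statistics (with simulated numbers, not the study's data): when the log residuals are centered on zero, the median measured/predicted ratio is ∼1.00 and the 10th and 90th percentile ratios bracket it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated thresholds: predictions plus log-normal measurement scatter.
# A log10 standard deviation of 0.12 is an illustrative choice that
# yields a spread comparable to the one reported in the text.
predicted = 10.0 ** rng.uniform(-2.0, -1.0, 5000)
measured = predicted * 10.0 ** (0.12 * rng.standard_normal(5000))

ratio = measured / predicted
p10, p50, p90 = np.percentile(ratio, [10, 50, 90])

# An unbiased model gives a median ratio near 1.00; the width of the
# 10th-90th percentile interval measures how tightly fits track data.
assert 0.9 < p50 < 1.1
assert p10 < 1.0 < p90
```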
Analysis of residuals
If the model were specified perfectly, we would expect the residuals to be independent and identically distributed across all combinations of temporal frequency, color direction, and visual field location. Testing this hypothesis is difficult given the number of independent variables, but to confirm the absence of strong patterns in the residuals, we performed two additional analyses. In each analysis, we pooled residuals across two of the stimulus variables (e.g., r and φ location in the visual field) and examined them as a function of the remaining two (e.g., color direction and temporal frequency). 
First, we collapsed residuals across visual field locations and calculated the autocorrelation of median residuals as a function of color direction and temporal frequency (Figure 9, left side of each panel). This autocorrelation function was fairly flat for all subjects, consistent with independent residuals across temporal frequency and color direction. 
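The autocorrelation check can be sketched as follows. The grid of median residuals here is random by construction (a hypothetical 8 color directions × 6 temporal frequencies, not the study's data), so its autocorrelation should be ∼1 at zero lag and near zero elsewhere, the flat pattern the text describes.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(3)

# Hypothetical grid of median residuals: 8 color directions x 6 temporal
# frequencies. Standardize, then autocorrelate at every 2-D lag.
resid = rng.standard_normal((8, 6))
z = (resid - resid.mean()) / resid.std()
ac = correlate2d(z, z, mode='full') / z.size

mid = (ac.shape[0] // 2, ac.shape[1] // 2)   # the zero-lag entry
off = np.delete(ac.ravel(), mid[0] * ac.shape[1] + mid[1])

assert abs(ac[mid] - 1.0) < 1e-9   # zero lag: perfect correlation
assert np.abs(off).max() < 0.8     # independent residuals: small elsewhere
```

Structured residuals (e.g., a systematic misfit at high temporal frequencies) would instead show up as large correlations at small nonzero lags.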
Figure 9
 
Analysis of residuals from the 18-parameter model fits. Panels A, B, C, and D show results from subjects M1, M2, H1, and H2, respectively, and each panel shows results from two analyses. Left: Autocorrelation of median residuals as a function of color direction (abscissa) and temporal frequency (ordinate). Color represents Pearson's correlation coefficient (for color bar, see inset in A). Right: Magnitude and sign of median residual (for dot size and color, see inset in A) as a function of stimulus location in the visual field. The median residual is the median of the distribution of ratios between the measured and predicted thresholds.
Second, we plotted median residuals as a function of location (Figure 9, right side of each panel). Residuals for subjects M1 and H1 had little discernible structure. The model, however, systematically overestimated subject M2's sensitivity near the horizontal meridian along the line h = 5° and underestimated it farther from the horizontal meridian (Figure 9B). This pattern is probably due to task training: visual field locations at which sensitivity was overestimated were tested earlier than locations at which sensitivity was underestimated. For subject H2, the assumption that contrast sensitivity decays exponentially along the horizontal meridian is imperfect: this subject's contrast sensitivity drops more gradually over the central 5° of the horizontal meridian than exponential decay predicts (Figure 9D; Equation 10). 
Human-monkey comparison
As expected from previous studies, human and monkey temporal contrast sensitivity was similar (De Valois et al., 1974; Merigan, 1980). Here, we extended these results to all directions in the LM plane and a variety of locations in the visual field from 2° to 14°. To test quantitatively for differences in temporal contrast sensitivity between humans and monkeys, we took the raw contrast sensitivity measurements for each subject, normalized them within each visual field location, and then pooled them across locations. Normalized luminance contrast sensitivity was greater for humans than monkeys from 1 to 1.5 Hz (two-way analysis of variance with subject as a random effect, p = 0.056). Model fits for each subject, evaluated at location r = 5°, φ = 0, illustrate this difference (Figure 10). 
Figure 10
 
Temporal contrast sensitivity functions from the 18-parameter (yoked) model fits for each subject evaluated at screen location r = 5°, φ = 0 in the L+M (solid) and L−M (dashed) directions.
Discussion
We measured the contrast detection thresholds of two humans and two monkeys as a function of three variables: temporal frequency, location in the visual field, and color direction in the LM plane. We built a model that successfully described thresholds for all observers over the range of stimulus variables tested (1–60 Hz, 2°–14° of eccentricity, and all color directions within the LM plane). 
We obtained three main results. First, the model fitted contrast detection threshold data from both humans and monkeys with small adjustments to the parameters (Table 2). This confirms the similarity between monkey and human luminance contrast sensitivity and extends these results across color directions in the LM plane and visual field locations. Second, the model did not require complex interactions among parameters to fit the data adequately. This is not trivial: As the number of stimulus variables increases linearly, the number of variable combinations increases exponentially. Theoretically, for example, sensitivity to L-cone modulations in the upper visual field could have been poorly predicted by a model that assumes independent contributions of color direction and screen location to contrast sensitivity, but this was not the case. Third, we found, using the model as a guide, that monkeys were only half as sensitive to low-frequency luminance modulations as humans. A retrospective look at data from a previous study confirms this result, although this difference was not previously emphasized (Merigan, 1980, their figure 3). 
Table 2
 
Parameter values from final fitted models.
Subjects M1, H1, and H2 were heavily trained on the task before data collection began (M1 is monkey A and H2 is human G from Lindbloom-Brown et al., 2014). Subject M2 was the least heavily trained subject but exhibited similar contrast sensitivity to the others, suggesting that all four subjects had attained near-asymptotic performance. Longer training periods would likely have been necessary had we used stimuli containing S-cone increments (Gagin et al., 2014). 
Effects of eye size
Retinal illuminance depends on eye size and affects temporal contrast sensitivity (De Lange Dzn, 1958; Kelly, 1961; Snowden et al., 1995). Monkey eyes are smaller than human eyes, so their retinal illuminance is relatively high. We considered the possibility that this size difference could account for the difference between humans and monkeys in low-frequency luminance contrast sensitivity but found it unlikely. When retinal illuminance is greater than 10 Td, human detection thresholds to low-frequency luminance modulations are largely independent of illuminance when they are measured in Weber contrast (Kelly, 1961). The background of our display (producing ∼650 Td) was sufficiently intense that we would not expect low-frequency luminance contrast sensitivity to vary much, if at all, with the modest difference in retinal illuminance afforded by differences in eye size (Virsu & Lee, 1983; Smith, Lee, Pokorny, Martin, & Valberg, 1992). 
Effects of stimulus size
Adjusting stimulus size to compensate for the cortical magnification factor, a procedure called M-scaling, approximately equates detection thresholds across retinal eccentricities (Rovamo, Virsu, & Nasanen, 1978; Strasburger, Rentschler, & Jüttner, 2011). M-scaling is sufficient to equate temporal contrast sensitivity across eccentricity under some conditions (Virsu et al., 1982) but not others (Rovamo & Raninen, 1984; Raninen & Rovamo, 1986). We did not M-scale our stimuli primarily because M-scaling that equates luminance contrast detection thresholds does not equate chromatic contrast detection thresholds (Noorlander, Koenderink, den Ouden, & Edens, 1983; Rovamo & Iivanainen, 1991; Vakrou, Whitaker, McGraw, & McKeefry, 2005; Masuda & Uchikawa, 2009). An important future direction is to extend the model to multiple stimulus sizes. 
Assumptions of the model
In constructing the model, we relied heavily on results from previous studies. In this section, we present the assumptions of the model and direct the reader to the studies that supported these assumptions. 
We chose a particular parametric form for the shape of the temporal contrast sensitivity function that is sufficiently flexible to fit a variety of data sets (Watson, 1986; Barten, 1993). We further assumed that detection contours in the LM plane are elliptical. This description, while demonstrably imperfect, is adequate under the stimulus conditions we used (Poirson, Wandell, Varner, & Brainard, 1990; Cole, Hine, & McIlhagga, 1994; Metha, Vingrys, & Badcock, 1994; Giulianini & Eskew, 1998). Detection thresholds of humans in the LM plane are roughly elliptical across temporal frequencies (Noorlander, Heuts, & Koenderink, 1981) and retinal locations (Stromeyer et al., 1992), and we found that this is also true for monkeys. 
We assumed that the orientations and sizes of detection ellipses were given by an energy calculation on the outputs of two linear detection mechanisms (Stockman & Brainard, 2010). Noise masking reveals more than two detection mechanisms in the LM plane (Hansen & Gegenfurtner, 2013; Shepard, Swanson, McCarthy, & Eskew, 2016), but two mechanisms dominate under the conditions of our experiment (Giulianini & Eskew, 1998; Stromeyer, Thabet, Chaparro, & Kronauer, 1999). We assumed that cone weights to the two postulated detection mechanisms do not change with temporal frequency. This approximation is imperfect but is reasonable when the L- and M-cones are in similar adaptation states (Stromeyer, Cole, & Kronauer, 1987; Gegenfurtner & Hawken, 1995; Stromeyer, Chaparro, Tolias, & Kronauer, 1997; Stockman & Plummer, 2005a; Stockman & Plummer, 2005b; Stockman, Jägle, Pirzer, & Sharpe, 2008). Under the conditions of our experiment, L- and M-cones absorbed ∼8,900 and 7,400 photons/cone/s, respectively, and were therefore in an adaptation state similar to that produced by a moderate-intensity, 565-nm background. Under these conditions, flicker perception is dominated by a fast, cone-nonopponent pathway with little influence of the slow, cone-opponent pathway that might manifest as frequency-dependent cone weights to the LUM mechanism in our experiment (Stockman, Henning, Anwar, Starba, & Rider, 2018). 
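A minimal sketch of this energy rule, with illustrative cone weights and sensitivities (the fitted values live in Table 2; the numbers below are assumptions for demonstration): detection threshold along any direction in the LM plane follows from the quadratic sum of the two mechanisms' outputs, which traces an ellipse.

```python
import numpy as np

# Two linear detection mechanisms with illustrative parameters:
theta = np.deg2rad(45.0)                     # L:M angle of the LUM mechanism
w_lum = np.array([np.cos(theta), np.sin(theta)])
w_rg = np.array([1.0, -1.0]) / np.sqrt(2.0)  # red-green: L minus M
xi_lum, xi_rg = 20.0, 35.0                   # mechanism sensitivities

def threshold(direction):
    """Modulation amplitude at detection threshold along an (L, M) color
    direction, from the energy rule: threshold is reached when
    (xi_lum * w_lum.v)^2 + (xi_rg * w_rg.v)^2 = 1."""
    v = np.asarray(direction, float)
    v = v / np.linalg.norm(v)
    resp_per_unit = np.hypot(xi_lum * (w_lum @ v), xi_rg * (w_rg @ v))
    return 1.0 / resp_per_unit

# Along each mechanism's preferred direction, threshold is simply the
# reciprocal of that mechanism's sensitivity (the ellipse's semi-axes).
t_lum_dir = threshold([1.0, 1.0])    # L+M direction
t_rg_dir = threshold([1.0, -1.0])    # L-M direction
assert abs(t_lum_dir - 1.0 / xi_lum) < 1e-9
assert abs(t_rg_dir - 1.0 / xi_rg) < 1e-9
```

Sweeping `direction` around the LM plane traces the full elliptical detection contour; intermediate directions engage both mechanisms.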
We also assumed that cone weights to each mechanism do not vary with retinal eccentricity. This assumption is supported by the near-constant L:M cone ratio to the RG mechanism across the visual field (Newton & Eskew, 2003; Sakurai & Mullen, 2006; Hansen, Pracejus, & Gegenfurtner, 2009) and to the LUM mechanism over the region of visual space we probed (Anderson et al., 1991; Knau, 2000). Further support for this assumption comes from our observation that allowing θ, the L:M ratio of the LUM mechanism in the model, to vary across the visual field did not improve prediction accuracy. 
We assumed that log-transformed contrast sensitivity declines linearly with eccentricity with a slope that depends on the angle in the plane of the display screen (Robson & Graham, 1981). Our results confirmed the observation that the slope of this relationship is steeper near the vertical meridian than near the horizontal meridian (Pointer & Hess, 1989; Pointer & Hess, 1990; Abrams, Nizam, & Carrasco, 2012). Our results also confirm that low-frequency chromatic sensitivity is greater than low-frequency luminance sensitivity at the fovea (Chaparro, Stromeyer, Huang, Kronauer, & Eskew, 1993), and this relationship can reverse in the periphery due to the steeper decline in chromatic sensitivity with retinal eccentricity (Mullen, 1991; Mullen & Kingdom, 1996; Mullen & Kingdom, 2002; Mullen, Sakurai, & Chu, 2005). We found that chromatic and luminance contrast sensitivity was similarly asymmetric between upper and lower visual fields. 
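The assumed eccentricity dependence can be written as a short function. The log-linear decline with an angle-dependent slope follows the description in the text, but the parameter values and the interpolation between meridians are illustrative (the paper's Equation 10 may parameterize this differently); the horizontal-meridian slope is chosen so that sensitivity falls by a factor of ∼0.95 per degree, as in the Figure 6 example.

```python
import numpy as np

def xi(r_deg, phi_rad, xi0=30.0, slope_h=0.022, slope_v=0.045):
    """Mechanism sensitivity at eccentricity r (deg) and polar angle phi
    (0 = horizontal meridian, pi/2 = vertical). log10(sensitivity) falls
    linearly with r; the slope interpolates between meridians and is
    steeper near the vertical meridian. All values are illustrative."""
    slope = slope_h + (slope_v - slope_h) * np.sin(phi_rad) ** 2
    return xi0 * 10.0 ** (-slope * r_deg)

# Horizontal meridian: sensitivity falls by ~0.95x per degree.
assert abs(xi(1.0, 0.0) / xi(0.0, 0.0) - 10.0 ** -0.022) < 1e-12

# Sensitivity falls faster along the vertical meridian.
horiz, vert = xi(10.0, 0.0), xi(10.0, np.pi / 2.0)
assert vert < horiz < 30.0
```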
We assumed that the shape of the temporal contrast sensitivity function of the luminance and chromatic detection mechanisms does not change with eccentricity over the region of visual space that we probed. This assumption, which is supported by previous results (Wright & Johnston, 1983; Snowden & Hess, 1992), was built into the model by allowing only ξLUM and ξRG to change across the visual field. We tested it by asking whether allowing nLUM or nRG to vary across the visual field improved threshold predictions, and we found that it did not. 
Future directions
The contrast detection literature is vast, and extracting core principles from it and synthesizing them into a concise, accessible format is useful. For example, using the model, we can communicate large data sets with few numbers and interpolate contrast sensitivity for conditions that we did not test. The model can be used to identify stimuli for which detection is maximally or minimally constrained by signals in the early visual system (Geisler, 1989; Angueyra & Rieke, 2013; Brainard et al., 2015; Hass, Angueyra, Lindbloom-Brown, Rieke, & Horwitz, 2015) and to identify stimuli that are differentially visible between subjects. Our model spans only a few stimulus dimensions but could in principle be merged with models that predict contrast sensitivity on the basis of stimulus parameters that we did not vary (e.g., background illumination, spatial frequency, stimulus size, and S-cone modulation). Our code and data are available on GitHub (https://github.com/horwitzlab). 
Our model helps to bridge the gap between neurophysiological and psychophysical studies of temporal contrast sensitivity. Measurements of neuronal responses at psychophysical detection threshold are difficult to obtain in part because detection thresholds depend on stimulus parameters in complex ways. A classic approach to this problem is to identify a suprathreshold stimulus that excites an isolated neuron strongly and then titrate a stimulus parameter (e.g., contrast) to measure psychophysical and neuronal detection thresholds simultaneously. This approach can be inefficient; psychophysical trials are longer than fixation trials, and estimating a distribution of noisy neuronal responses requires many repeated trials. Moreover, the assumption that suprathreshold stimulus preferences are predictive of neuronal sensitivity at the behavioral detection threshold may be inaccurate. 
The model we present helps meet these challenges. Using the model, a battery of stimuli can be synthesized that are matched for detectability but differ in other respects (e.g., temporal frequency and color). These stimuli can be presented at the receptive fields of recorded neurons during detection task performance or passive fixation. This approach may be useful for revealing the neuronal basis of contrast sensitivity. For example, magnocellular, parvocellular, and koniocellular neurons in the lateral geniculate nucleus all respond to L+M modulations, but the contribution each makes to contrast sensitivity is poorly understood. Stimulating neurons of each type with threshold-contrast L+M modulations and comparing their relative sensitivity will provide an upper bound on each population's contribution. 
Acknowledgments
The authors thank Zack Lindbloom-Brown for computer programming and Beth Buffalo for generous assistance with human eye movement measurements. They also thank Abhishek De, Yasmine El-Shamayleh, and Patrick Weller. 
Commercial relationships: none. 
Corresponding author: Gregory D. Horwitz. 
Address: Department of Physiology & Biophysics, Washington National Primate Research Center, University of Washington, Seattle, WA, USA. 
References
Abrams, J., Nizam, A., & Carrasco, M. (2012). Isoeccentric locations are not equivalent: The extent of the vertical meridian asymmetry. Vision Research, 52, 70–78.
Anderson, S. J., Mullen, K. T., & Hess, R. F. (1991). Human peripheral spatial resolution for achromatic and chromatic stimuli: Limits imposed by optical and retinal factors. Journal of Physiology, 442, 47–64.
Angueyra, J. M., & Rieke, F. (2013). Origin and effect of phototransduction noise in primate cone photoreceptors. Nature Neuroscience, 16, 1692–1700.
Barten, P. G. J. (1993). Spatiotemporal model for the contrast sensitivity of the human eye and its temporal aspects. Proceedings SPIE 1913 (Human Vision, Visual Processing, and Digital Display IV), 13.
Brainard, D. H., Jiang, H., Cottaris, N. P., Rieke, F., Chichilnisky, E. J., Farrell, J. E., & Wandell, B. A. (2015). Isetbio: Computational tools for modeling early human vision. In Imaging Systems and Applications. OSA Technical Digest [online]. Optical Society of America, paper IT4A.4. Arlington, VA: OSA Publishing.
Carmel, D., Lavie, N., & Rees, G. (2006). Conscious awareness of flicker in humans involves frontal and parietal cortex. Current Biology, 16, 907–911.
Chaparro, A., Stromeyer, C. F., Huang, E. P., Kronauer, R. E., & Eskew, R. T. (1993, January 28). Colour is what the eye sees best. Nature, 361, 348–350.
Cole, G. R., Hine, T. J., & McIlhagga, W. (1994). Estimation of linear detection mechanisms for stimuli of medium spatial frequency. Vision Research, 34, 1267–1278.
Crick, F., & Koch, C. (1998, January 15). Constraints on cortical and thalamic projections: The no-strong-loops hypothesis. Nature, 391, 245–250.
De Lange Dzn, H. (1958). Research into the dynamic nature of the human fovea → cortex systems with intermittent and modulated light. I. Attenuation characteristics with white and colored light. Journal of the Optical Society of America, 48, 777–784.
De Lange Dzn, H. (1961). Eye's response at flicker fusion to square-wave modulation of a test field surrounded by a large steady field of equal mean luminance. Journal of the Optical Society of America, 51, 415–421.
De Valois, R. L., Morgan, H. C., Polson, M. C., Mead, W. R., & Hull, E. M. (1974). Psychophysical studies of monkey vision. I. Macaque luminosity and color vision tests. Vision Research, 14, 53–67.
Engel, S., Zhang, X., & Wandell, B. (1997, July 3). Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature, 388, 68–71.
Falconbridge, M., Ware, A., & MacLeod, D. I. (2010). Imperceptibly rapid contrast modulations processed in cortex: Evidence from psychophysics. Journal of Vision, 10 (8): 21, 1–10, https://doi.org/10.1167/10.8.21.
Gagin, G., Bohon, K. S., Butensky, A., Gates, M. A., Hu, J. Y., Lafer-Sousa, R.,… Conway, B. R. (2014). Color-detection thresholds in rhesus macaque monkeys and humans. Journal of Vision, 14 (8): 12, 1–15, https://doi.org/10.1167/14.8.12.
Gegenfurtner, K. R., & Hawken, M. J. (1995). Temporal and chromatic properties of motion mechanisms. Vision Research, 35, 1547–1563.
Geisler, W. S. (1989). Sequential ideal-observer analysis of visual discriminations. Psychological Review, 96, 267–314.
Giulianini, F., & Eskew, R. T., Jr. (1998). Chromatic masking in the (ΔL/L, ΔM/M) plane of cone-contrast space reveals only two detection mechanisms. Vision Research, 38, 3913–3926.
Gur, M., & Snodderly, D. M. (1997). A dissociation between brain activity and perception: Chromatically opponent cortical neurons signal chromatic flicker that is not perceived. Vision Research, 37, 377–382.
Hansen, T., & Gegenfurtner, K. R. (2013). Higher order color mechanisms: Evidence from noise-masking experiments in cone contrast space. Journal of Vision, 13 (1): 26, 1–21, https://doi.org/10.1167/13.1.26.
Hansen, T., Pracejus, L., & Gegenfurtner, K. R. (2009). Color perception in the intermediate periphery of the visual field. Journal of Vision, 9 (4): 26, 1–12, https://doi.org/10.1167/9.4.26.
Hass, C. A., Angueyra, J. M., Lindbloom-Brown, Z., Rieke, F., & Horwitz, G. D. (2015). Chromatic detection from cone photoreceptors to V1 neurons to behavior in rhesus monkeys. Journal of Vision, 15 (15): 1, 1–19, https://doi.org/10.1167/15.15.1.
Hass, C. A., & Horwitz, G. D. (2013). V1 mechanisms underlying chromatic contrast detection. Journal of Neurophysiology, 109, 2483–2494.
Jiang, Y., Zhou, K., & He, S. (2007). Human visual cortex responds to invisible chromatic flicker. Nature Neuroscience, 10, 657–662.
Kelly, D. H. (1961). Visual response to time-dependent stimuli. I. Amplitude sensitivity measurements. Journal of the Optical Society of America, 51, 422–429.
Kelly, D. H. (1972). Adaptation effects on spatio-temporal sine-wave thresholds. Vision Research, 12, 89–101.
Knau, H. (2000). Thresholds for detecting slowly changing Ganzfeld luminances. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 17, 1382–1387.
Koenderink, J. J., Bouman, M. A., Bueno de Mesquita, A. E., & Slappendel, S. (1978a). Perimetry of contrast detection thresholds of moving spatial sine wave patterns. I. The near peripheral visual field (eccentricity 0 degrees-8 degrees). Journal of the Optical Society of America, 68, 845–849.
Koenderink, J. J., Bouman, M. A., Bueno de Mesquita, A. E., & Slappendel, S. (1978b). Perimetry of contrast detection thresholds of moving spatial sine patterns. II. The far peripheral visual field (eccentricity 0 degrees-50 degrees). Journal of the Optical Society of America, 68, 850–854.
Kremers, J., Lee, B. B., & Kaiser, P. K. (1992). Sensitivity of macaque retinal ganglion cells and human observers to combined luminance and chromatic temporal modulation. Journal of the Optical Society of America. A, Optics and Image Science, 9, 1477–1485.
Krolak-Salmon, P., Henaff, M. A., Tallon-Baudry, C., Yvert, B., Guenot, M., Vighetto, A.,… Bertrand, O. (2003). Human lateral geniculate nucleus and visual cortex respond to screen flicker. Annals of Neurology, 53, 73–80.
Lee, B. B., Pokorny, J., Smith, V. C., Martin, P. R., & Valberg, A. (1990). Luminance and chromatic modulation sensitivity of macaque ganglion cells and human observers. Journal of the Optical Society of America. A, Optics and Image Science, 7, 2223–2236.
Lee, B. B., Sun, H., & Zucchini, W. (2007). The temporal properties of the response of macaque ganglion cells and central mechanisms of flicker detection. Journal of Vision, 7 (14): 1, 1–16, https://doi.org/10.1167/7.14.1.
Levi, D. M., Klein, S. A., & Yap, Y. L. (1987). Positional uncertainty in peripheral and amblyopic vision. Vision Research, 27, 581–597.
Lindbloom-Brown, Z., Tait, L. J., & Horwitz, G. D. (2014). Spectral sensitivity differences between rhesus monkeys and humans: Implications for neurophysiology. Journal of Neurophysiology, 112, 3164–3172.
MacKay, D. J. C. (1998). Introduction to Gaussian processes. In C. M. Bishop (Ed.), NATO ASI Series F Computer and Systems Sciences (pp. 133–166). Berlin: Springer.
Masuda, O., & Uchikawa, K. (2009). Temporal integration of the chromatic channels in peripheral vision. Vision Research, 49, 622–636.
Merigan, W. H. (1980). Temporal modulation sensitivity of macaque monkeys. Vision Research, 20, 953–959.
Metha, A. B., Vingrys, A. J., & Badcock, D. R. (1994). Detection and discrimination of moving stimuli: The effects of color, luminance, and eccentricity. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 11, 1697–1709.
Mullen, K. T. (1991). Colour vision as a post-receptoral specialization of the central visual field. Vision Research, 31, 119–130.
Mullen, K. T., & Kingdom, F. A. (1996). Losses in peripheral colour sensitivity predicted from hit and miss post-receptoral cone connections. Vision Research, 36, 1995–2000.
Mullen, K. T., & Kingdom, F. A. (2002). Differential distributions of red-green and blue-yellow cone opponency across the visual field. Visual Neuroscience, 19, 109–118.
Mullen, K. T., Sakurai, M., & Chu, W. (2005). Does L/M cone opponency disappear in human periphery? Perception, 34, 951–959.
Newton, J. R., & Eskew, R. T.,Jr. (2003). Chromatic detection and discrimination in the periphery: A postreceptoral loss of color sensitivity. Visual Neuroscience, 20, 511–521.
Noorlander, C., Heuts, M. J., & Koenderink, J. J. (1981). Sensitivity to spatiotemporal combined luminance and chromaticity contrast. Journal of the Optical Society of America, 71, 453–459.
Noorlander, C., Koenderink, J. J., den Ouden, R. J., & Edens, B. W. (1983). Sensitivity to spatiotemporal colour contrast in the peripheral visual field. Vision Research, 23, 1–11.
Palmer, C., Cheng, S. Y., & Seidemann, E. (2007). Linking neuronal and behavioral performance in a reaction-time visual detection task. Journal of Neuroscience, 27, 8122–8137.
Pelli, D. G. (1985). Uncertainty explains many aspects of visual contrast detection and discrimination. Journal of the Optical Society of America. A, Optics and Image Science, 2, 1508–1532.
Pointer, J. S., & Hess, R. F. (1989). The contrast sensitivity gradient across the human visual field: With emphasis on the low spatial frequency range. Vision Research, 29, 1133–1151.
Pointer, J. S., & Hess, R. F. (1990). The contrast sensitivity gradient across the major oblique meridians of the human visual field. Vision Research, 30, 497–501.
Poirson, A. B., Wandell, B. A., Varner, D. C., & Brainard, D. H. (1990). Surface characterizations of color thresholds. Journal of the Optical Society of America. A, Optics and Image Science, 7, 783–789.
Raninen, A., & Rovamo, J. (1986). Perimetry of critical flicker frequency in human rod and cone vision. Vision Research, 26, 1249–1255.
Rasmussen, C. E. (2004). Gaussian processes in machine learning. Advanced Lectures on Machine Learning, 3176, 63–71.
Robson, J. G., & Graham, N. (1981). Probability summation and regional variation in contrast sensitivity across the visual field. Vision Research, 21, 409–418.
Rovamo, J., & Iivanainen, A. (1991). Detection of chromatic deviations from white across the human visual field. Vision Research, 31, 2227–2234.
Rovamo, J., & Raninen, A. (1984). Critical flicker frequency and M-scaling of stimulus size and retinal illuminance. Vision Research, 24, 1127–1131.
Rovamo, J., Virsu, V., & Nasanen, R. (1978, January 5). Cortical magnification factor predicts the photopic contrast sensitivity of peripheral vision. Nature, 271, 54–56.
Sakurai, M., & Mullen, K. T. (2006). Cone weights for the two cone-opponent systems in peripheral vision and asymmetries of cone contrast sensitivity. Vision Research, 46, 4346–4354.
Sankeralli, M. J., & Mullen, K. T. (1996). Estimation of the L-, M-, and S-cone weights of the postreceptoral detection mechanisms. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 13, 906–915.
Sharpe, C. R. (1974). Letter: The contrast sensitivity of the peripheral visual field to drifting sinusoidal gratings. Vision Research, 14, 905–906.
Shepard, T. G., Swanson, E. A., McCarthy, C. L., & Eskew, R. T.,Jr. (2016). A model of selective masking in chromatic detection. Journal of Vision, 16 (9): 3, 1–17, https://doi.org/10.1167/16.9.3. [PubMed] [Article]
Smith, V. C., Lee, B. B., Pokorny, J., Martin, P. R., & Valberg, A. (1992). Responses of macaque ganglion cells to the relative phase of heterochromatically modulated lights. Journal of Physiology–London, 458, 191–221.
Snowden, R. J., & Hess, R. F. (1992). Temporal frequency filters in the human peripheral visual field. Vision Research, 32, 61–72.
Snowden, R. J., Hess, R. F., & Waugh, S. J. (1995). The processing of temporal modulation at different levels of retinal illuminance. Vision Research, 35, 775–789.
Stockman, A., & Brainard, D. (2010). Color vision mechanisms. In Bass M. (Ed.), OSA handbook of optics (pp. 11.11–11.104). New York: McGraw-Hill.
Stockman, A., Henning, G. B., Anwar, S., Starba, R., & Rider, A. T. (2018). Delayed cone-opponent signals in the luminance pathway. Journal of Vision, 18 (2): 6, 1–35, https://doi.org/10.1167/18.2.6. [PubMed] [Article]
Stockman, A., Jägle, H., Pirzer, M., & Sharpe, L. T. (2008). The dependence of luminous efficiency on chromatic adaptation. Journal of Vision, 8 (16): 1, 1–26, https://doi.org/10.1167/8.16.1. [PubMed] [Article]
Stockman, A., MacLeod, D. I., & Johnson, N. E. (1993). Spectral sensitivities of the human cones. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 10, 2491–2521.
Stockman, A., & Plummer, D. J. (2005a). Long-wavelength adaptation reveals slow, spectrally opponent inputs to the human luminance pathway. Journal of Vision, 5 (9): 5, 702–716, https://doi.org/10.1167/5.9.5. [PubMed] [Article]
Stockman, A., & Plummer, D. J. (2005b). Spectrally opponent inputs to the human luminance pathway: Slow +L and −M cone inputs revealed by low to moderate long-wavelength adaptation. Journal of Physiology–London, 566, 77–91.
Stoughton, C. M., Lafer-Sousa, R., Gagin, G., & Conway, B. R. (2012). Psychophysical chromatic mechanisms in macaque monkey. Journal of Neuroscience, 32, 15216–15226.
Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11 (5): 13, 1–82, https://doi.org/10.1167/11.5.13. [PubMed] [Article]
Stromeyer, C. F., Chaparro, A., Tolias, A. S., & Kronauer, R. E. (1997). Colour adaptation modifies the long-wave versus middle-wave cone weights and temporal phases in human luminance (but not red-green) mechanism. Journal of Physiology–London, 499, 227–254.
Stromeyer, C. F.,III, Cole, G. R., & Kronauer, R. E. (1985). Second-site adaptation in the red-green chromatic pathways. Vision Research, 25, 219–237.
Stromeyer, C. F.,III, Cole, G. R., & Kronauer, R. E. (1987). Chromatic suppression of cone inputs to the luminance flicker mechanism. Vision Research, 27, 1113–1137.
Stromeyer, C. F.,III, Kronauer, R. E., Ryu, A., Chaparro, A., & Eskew, R. T.,Jr. (1995). Contributions of human long-wave and middle-wave cones to motion detection. Journal of Physiology, 485 (Pt. 1), 221–243.
Stromeyer, C. F.,III, Lee, J., & Eskew, R. T.,Jr. (1992). Peripheral chromatic sensitivity for flashes: A post-receptoral red-green asymmetry. Vision Research, 32, 1865–1873.
Stromeyer, C. F.,III, Thabet, R., Chaparro, A., & Kronauer, R. E. (1999). Spatial masking does not reveal mechanisms selective to combined luminance and red-green color. Vision Research, 39, 2099–2112.
Tyler, C. W. (1985). Analysis of visual modulation sensitivity. II. Peripheral retina and the role of photoreceptor dimensions. Journal of the Optical Society of America. A, Optics and Image Science, 2, 393–398.
Tyler, C. W. (1987). Analysis of visual modulation sensitivity. III. Meridional variations in peripheral flicker sensitivity. Journal of the Optical Society of America. A, Optics and Image Science, 4, 1612–1619.
Vakrou, C., Whitaker, D., McGraw, P. V., & McKeefry, D. (2005). Functional evidence for cone-specific connectivity in the human retina. Journal of Physiology, 566 (Pt. 1), 93–102.
Virsu, V., & Lee, B. B. (1983). Light adaptation in cells of macaque lateral geniculate nucleus and its relation to human light adaptation. Journal of Neurophysiology, 50, 864–878.
Virsu, V., Rovamo, J., Laurinen, P., & Nasanen, R. (1982). Temporal contrast sensitivity and cortical magnification. Vision Research, 22, 1211–1217.
Vul, E., & MacLeod, D. I. (2006). Contingent aftereffects distinguish conscious and preconscious color processing. Nature Neuroscience, 9, 873–874.
Watson, A. B. (1986). Temporal sensitivity. In Handbook of perception and human performance ( Vol. 1, pp. 1–43.). New York: Wiley-Interscience.
Watson, A. B., & Pelli, D. G. (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33, 113–120.
Williams, P. E., Mechler, F., Gordon, J., Shapley, R., & Hawken, M. J. (2004). Entrainment to video displays in primary visual cortex of macaque and humans. Journal of Neuroscience, 24, 8278–8288.
Wright, M. J., & Johnston, A. (1983). Spatiotemporal contrast sensitivity and visual field locus. Vision Research, 23, 983–989.
Yeh, T., Lee, B. B., & Kremers, J. (1995). Temporal response of ganglion cells of the macaque retina to cone-specific modulation. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 12, 456–464.
Figure 1
 
Contrast detection task. Panels from top to bottom show the sequence of events in each trial. Top panel: Subject fixates. Middle panel: Gabor stimulus appears. The horizontal meridian (dotted line), φ (arc), and r (curly bracket) illustrate the polar coordinate system used to describe the location of the stimulus; they were not visible to the subject. Bottom panel: Choice targets appear.
Figure 2
 
Stimulus space. Each Gabor stimulus is represented by a pair of points that are symmetric with respect to the temporal frequency axis. Points far from this axis have high contrast, and points on the axis have zero contrast.
Figure 3
 
Data from subject M2 and model fit. (A–D) Contrast detection thresholds (black points) on trials in which the stimulus appeared 5° from the fixation point on the horizontal meridian. Stimulus directions for which a threshold could not be measured because of limitations of the display gamut are plotted at the gamut edge (red points). Surfaces are best fits of Equation 7 (green). (A) Stimulus space oriented so that the L+M axis is in the plane of the page. (B) Stimulus space oriented so that the L−M axis is in the plane of the page. (C, D) Magnified views of the circled portion of A and B, respectively. (E) Cross sections through the surfaces in A–D parallel to the LM plane at 1 Hz (red), 5 Hz (green), 10 Hz (blue), and 20 Hz (black). Detection thresholds (symbols) were collected from bins that spanned the nominal temporal frequency ± a factor of 1.5. (F) Contrast sensitivity measurements (points) and 1-D functions from the model fit (curves) in the L-cone direction (red), M-cone direction (cyan), L−M direction (gray), and L+M direction (black). Data points were collected from bins that spanned the nominal color direction ± 10°.
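The binning rule described in the caption (nominal temporal frequency ± a factor of 1.5; nominal color direction ± 10°) can be sketched as simple predicates. This is an illustrative reconstruction, not code from the study; the function names and example values are hypothetical, and angular wraparound of color directions is ignored for simplicity.

```python
def in_frequency_bin(freq_hz, nominal_hz, factor=1.5):
    """True if freq_hz lies between nominal_hz/factor and nominal_hz*factor."""
    return nominal_hz / factor <= freq_hz <= nominal_hz * factor

def in_color_bin(direction_deg, nominal_deg, halfwidth_deg=10.0):
    """True if a color direction lies within +/- halfwidth_deg of the nominal one.

    Simplified: does not handle wraparound at 0/360 degrees.
    """
    return abs(direction_deg - nominal_deg) <= halfwidth_deg

# A 7 Hz measurement falls in the 5 Hz bin (5/1.5 to 5*1.5 = 3.3-7.5 Hz),
# but a 16 Hz measurement falls outside the 10 Hz bin (6.7-15 Hz).
```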
Figure 4
 
Data and model fits from subject H1 with conventions as in Figure 3.
Figure 5
 
Residuals from the 13-parameter model fits (Equation 7) as a function of predicted threshold. Residuals are defined as log10(measured threshold) − log10(predicted threshold), where both measured and predicted thresholds are in units of stimulus modulation amplitude (Equation 4). Models were fit independently to data collected at each visual field location. Residuals from each subject are plotted in a different color (see inset).
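The residual defined in the caption is a plain log-unit difference. A minimal numerical sketch follows; the threshold values are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical measured and predicted detection thresholds, in units of
# stimulus modulation amplitude (Equation 4).
measured = np.array([0.020, 0.050, 0.110])
predicted = np.array([0.025, 0.040, 0.100])

# Residual = log10(measured threshold) - log10(predicted threshold).
residuals = np.log10(measured) - np.log10(predicted)

# A residual of 0 is a perfect prediction; a residual of +0.1 means the
# measured threshold exceeded the prediction by a factor of 10**0.1 (about 1.26).
```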
Figure 6
 
Variations in ξLUM and ξRG across the visual field. Data from subject M1 were fitted with a model in which all of the parameters except ξLUM and ξRG were fixed across visual field locations. Left: LUM (A, black dots) and RG (B, black dots) contrast sensitivity as a function of visual field location, parameterized by r and φ. Contrast sensitivity is the reciprocal of detection threshold in units of stimulus modulation amplitude (Equation 4). To facilitate comparison between LUM and RG, contrast sensitivity was evaluated at 6 Hz, which is the frequency at which the components of the fitted model apart from ξLUM and ξRG confer equal sensitivity. Surfaces were fit with Equation 10. Insets show the slope of the modeled contrast sensitivity decline as a function of φ (e.g., for each degree of eccentricity along the horizontal meridian, LUM contrast sensitivity drops by a factor of 0.95). Right: Surface fits from the left rendered as a heat map with visual field location represented in degrees of visual angle. The color bar applies to both top and bottom panels. Contours in A are 20, 15, and 10. Contours in B are 30, 25, 20, 15, and 10.
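The per-degree decline quoted in the inset example (a factor of 0.95 per degree along the horizontal meridian) corresponds to an exponential falloff of sensitivity with eccentricity. The sketch below is illustrative only; the baseline sensitivity of 20 is a hypothetical value, not a fitted parameter from the study.

```python
def sensitivity_at_eccentricity(s0, decline_per_degree, r_deg):
    """Contrast sensitivity that drops by a fixed factor per degree.

    s0 -- sensitivity extrapolated to 0 degrees (hypothetical baseline)
    decline_per_degree -- multiplicative drop per degree (e.g., 0.95)
    r_deg -- eccentricity in degrees of visual angle
    """
    return s0 * decline_per_degree ** r_deg

# With a 0.95-per-degree decline, sensitivity at 14 degrees eccentricity
# is 0.95**14, roughly half of its 0-degree value.
```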
Figure 7
 
Cross-validated model comparisons. Individual threshold measurements were withheld from fitting and used to calculate prediction errors for four models: symmetric (b3 = 0 for both ξLUM and ξRG), yoked (a single b3 parameter was shared by ξLUM and ξRG), luminance-only (b3 = 0 for ξRG), and unconstrained (b3 was fit separately for ξLUM and ξRG). The prediction error is quantified as log10(measured threshold) − log10(predicted threshold), where threshold is measured in units of stimulus modulation amplitude (Equation 4). The ratio of prediction errors between models was calculated for each threshold measurement. Negative log prediction error ratios indicate that the yoked model produced lower prediction errors than the alternative model. Points and error bars indicate medians and bootstrap estimates of standard error. More data were collected from monkeys than humans, resulting in smaller error bars for monkeys.
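The model comparison in the caption can be illustrated with a small numerical sketch. All error values below are hypothetical, and absolute log-threshold prediction errors are assumed as the quantity being compared, which is one plausible reading of "prediction error" here.

```python
import numpy as np

# Hypothetical cross-validated prediction errors,
# |log10(measured) - log10(predicted)|, one per held-out measurement,
# for the yoked model and one alternative model.
err_yoked = np.array([0.05, 0.08, 0.03, 0.10])
err_alt = np.array([0.06, 0.07, 0.05, 0.12])

# Log prediction error ratio; negative values favor the yoked model.
log_ratio = np.log10(err_yoked / err_alt)
median_log_ratio = np.median(log_ratio)

# Bootstrap estimate of the standard error of the median
# (analogous to the figure's error bars).
rng = np.random.default_rng(0)
boot_medians = [
    np.median(rng.choice(log_ratio, size=log_ratio.size, replace=True))
    for _ in range(1000)
]
standard_error = np.std(boot_medians)
```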
Figure 8
 
Residuals, defined as log10(measured threshold) − log10(predicted threshold), from the 18-parameter model fits (Equation 10) as a function of predicted threshold. Conventions are as in Figure 4.
Figure 9
 
Analysis of residuals from the 18-parameter model fits. Panels A, B, C, and D show results from subjects M1, M2, H1, and H2, respectively, and each panel shows results from two analyses. Left: Autocorrelation of median residuals as a function of color direction (abscissa) and temporal frequency (ordinate). Color represents Pearson's correlation coefficient (for color bar, see inset in A). Right: Magnitude and sign of median residual (for dot size and color, see inset in A) as a function of stimulus location in the visual field. The median residual is the median of the distribution of ratios between the measured and predicted thresholds.
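The median residual used in the right-hand panels is the median of the measured-to-predicted threshold ratios. A minimal sketch, with hypothetical threshold values:

```python
import numpy as np

measured = np.array([0.020, 0.055, 0.090])   # hypothetical thresholds
predicted = np.array([0.022, 0.050, 0.100])  # hypothetical model predictions

# Median of the ratio distribution; values above 1 mean the model tends to
# underestimate thresholds at that visual field location, values below 1
# that it overestimates them.
median_residual = np.median(measured / predicted)
```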
Figure 10
 
Temporal contrast sensitivity functions from the 18-parameter (yoked) model fits for each subject evaluated at screen location r = 5, φ = 0 in the L+M (solid) and L−M (dashed) directions.
Table 1
 
Number of threshold measurements per subject. Notes: Color direction and temporal frequency conditions were distributed nearly continuously in the experiment but are binned coarsely in the table. Nonopponent and opponent stimuli are those in which L- and M-cone modulations had the same or opposite sign, respectively.
Table 2
 
Parameter values from final fitted models.