Temporal dynamics of human color processing measured using a continuous tracking task

Michael A. Barnett, Benjamin M. Chin, Geoffrey K. Aguirre, Johannes Burge, David H. Brainard

Journal of Vision, February 2025, Vol. 25(2), 12. https://doi.org/10.1167/jov.25.2.12
Abstract

We characterized the temporal dynamics of color processing using a continuous tracking paradigm by estimating subjects' temporal lag in tracking chromatic Gabor targets. To estimate the lag, we computed the cross-correlation between the velocities of the Gabor target's random walk and the velocities of the subject's tracking. Lag was taken as the time of the peak of the resulting cross-correlogram. We measured how the lag changes as a function of chromatic direction and contrast for stimuli in the LS cone contrast plane. In the same set of subjects, we also measured detection thresholds for stimuli with matched spatial, temporal, and chromatic properties. We created a model of tracking and detection performance to test whether a common representation of chromatic contrast accounts for both measures. The model summarizes the effect of chromatic contrast over different chromatic directions through elliptical isoperformance contours, the shapes of which are contrast independent. The fitted elliptical isoperformance contours have essentially the same orientation in the detection and tracking tasks. For the tracking task, however, there is a striking reduction in relative sensitivity to signals originating in the S cones.

Introduction
In human vision the retinal image is encoded by the L, M, and S cones (Brainard & Stockman, 2010). Subsequent stages of processing combine the signals from the three classes of cones to create, broadly speaking, three post-receptoral mechanisms: two cone-opponent mechanisms and a luminance mechanism. The cone-opponent mechanisms represent the differences between cone signals [S − (L + M) and L − M] whereas the luminance mechanism represents an additive combination (L + M) (Stockman & Brainard, 2010). The physiological basis of these mechanisms begins in the retina, with cone-opponent responses observed in retinal ganglion cells, as well as in their targets in the lateral geniculate nucleus (e.g., DeValois, Abramov, & Jacobs, 1966; Derrington, Krauskopf, & Lennie, 1984; Lennie & Movshon, 2005; Shevell & Martin, 2017). The sensitivity of each of the three cone-opponent post-receptoral mechanisms varies in a distinct manner with spatial and temporal frequency (e.g., de Lange, 1958; Kelly, 1975; Mullen, 1985; Sekiguchi, Williams, & Brainard, 1993; Poirson & Wandell, 1996; Metha & Mullen, 1996). 
Models based on three post-receptoral mechanisms are able to account for aspects of the psychophysically-measured detection and discrimination of colored patterns (e.g., Guth, Massof, & Benzschawel, 1980; Krauskopf, Williams, & Heeley, 1982; Poirson, Wandell, Varner, & Brainard, 1990; Poirson & Wandell, 1990b; Poirson & Wandell, 1990a; Guth, 1991; Poirson & Wandell, 1993; Poirson & Wandell, 1996; Knoblauch & Maloney, 1996), although deviations from model predictions suggest the presence of additional mechanisms, perhaps at cortical sites (e.g., Krauskopf, Williams, Mandler, & Brown, 1986; Gegenfurtner & Kiper, 1992; Eskew, Wang, & Richters, 2004; Hansen & Gegenfurtner, 2013; see Shapley & Hawken, 2002; Gegenfurtner, 2003; Lennie & Movshon, 2005; Eskew, 2009; Stockman & Brainard, 2010). Even in the retina, the presence of multiple parallel pathways (e.g. multiple classes of bipolar and retinal ganglion cells with distinct ON and OFF cell types) suggests that more than three mechanisms will be required for a full account of the neural machinery (Merigan & Maunsell, 1990; Dacey, 2000; Masland, 2012; Casile, Victor, & Rucci, 2019; Kim et al., 2022, see cell type taxonomy in supplement). This point is particularly relevant to understanding the processing of signals originating in the S cones (see below for more discussion). Nonetheless, our view is that the conceptualization of a luminance and two cone-opponent mechanisms provides a useful starting point for modeling the psychophysical processing of visual stimuli. 
Although S cones have temporal dynamics similar to those of L and M cones (Schnapf, Nunn, Meister, & Baylor, 1990; Stockman, MacLeod, & Lebrun, 1993), human detection sensitivity declines more rapidly with temporal frequency for stimuli that isolate S cones than for those that excite the L and M cones (Wisowaty & Boynton, 1980; Stockman et al., 1991). S-cone inputs to chromatic mechanisms are delayed relative to L- and M-cone inputs under typical adaptation conditions, but adaptation has a substantial effect on such delays (Stromeyer, Eskew, Kronauer, & Spillman, 1991; Stockman & Plummer, 1998; Blake, Land, & Mollon, 2008; Lee, Mollon, Zaidi, & Smithson, 2009). Similarly, reaction times to detect S-cone mediated stimuli are longer than for stimuli detected by L and M cones (McKeefry, Parry, & Murray, 2003; Smithson & Mollon, 2004). Mollon and Krauskopf (1973) measured reaction time as a function of background illuminance for 430, 500, and 650 nm stimuli and found reaction times of ∼300-400 ms with the highest latencies associated with the 430 nm stimuli. Using a two-pulse detection method, Shinomori and Werner (2008) measured temporal sensitivity for increments and decrements of S-cone isolating modulations and derived lags of 50–70 and 100–120 ms, respectively. 
Consistent with the behavioral observations, studies using functional magnetic resonance imaging reveal that the response in early human visual cortex to S-cone stimuli is attenuated more rapidly with temporal frequency than are responses for L- and M-cone stimuli (Engel, Zhang, & Wandell, 1997; Liu & Wandell, 2005; Spitschan, Datta, Stern, Brainard, & Aguirre, 2016; Gentile, Spitschan, Taskin, Bock, & Aguirre, 2024). And electrophysiological recordings indicate that at least some single-unit responses to S-cone signals are delayed relative to their responses to L- and M-cones signals (Cottaris & De Valois, 1998). Furthermore, the strength of S-cone input to motion-selective cortex is small, per unit contrast, relative to L- and M-cone inputs (Seidemann, Poirson, Wandell, & Newsome, 1999; Wandell et al., 1999). It should be no surprise therefore that motion perception is degraded for stimuli detected only by the S cones (Cavanagh, MacLeod, & Anstis, 1987; Dougherty, Press, & Wandell, 1999). 
Although the S cone contribution to motion perception is small, it is not zero. S cones can contribute via their input to luminance mechanisms, which is dependent on adaptation, delayed under typical adaptation conditions, and of inverted sign relative to the contributions of L and M cones (Lee & Stromeyer, 1989; see Stockman, MacLeod, & DePriest, 1991; Stromeyer et al., 1991; Ripamonti, Woo, Crowther, & Stockman, 2009). The delay of S-cone inputs to luminance mechanisms can differ from their delay into chromatic mechanisms (Stromeyer et al., 1991; see Stockman & Plummer, 1998; Lee et al., 2009). 
Visual perception supports tasks beyond stimulus detection and discrimination. Ultimately, it supports visually-guided behavior. For this reason, there has been recent interest in investigating the relation between mechanisms of early visual processing and perceptual-motor performance (Bonnen, Burge, Yates, Pillow, & Cormack, 2015; Bonnen, Huk, & Cormack, 2017; Chin & Burge, 2022; Straub & Rothkopf, 2022; Burge & Cormack, 2024). A compelling approach—continuous target-tracking psychophysics (Bonnen et al., 2015)—measures the ability of a subject to use an on-screen cursor to track a target undergoing a random walk (Brownian motion). The resulting data provide an efficient way to estimate the temporal lag of the visual-motor system that is associated with the tracking behavior (Figure 1). 
Figure 1.
 
Experimental overview. (a) The LS cone contrast plane. The x-axis shows L-cone contrast and the y-axis S-cone contrast. Stimulus modulations around a background are represented as vectors in this plane, with the direction providing the relative strength of the L- and S-cone contrasts and the magnitude providing the overall contrast of the modulation. Indeed, we specify stimuli by their angular direction in the LS contrast plane, and their overall contrast by their vector length in this plane. A set of example directions are shown, with the color of each limb providing the approximate appearance of that portion of the modulation. (b) Example stimuli. In the tracking task, subjects tracked the position of a horizontally moving color Gabor modulation. In the detection task, subjects had to indicate which of two temporal intervals contained a horizontally moving color Gabor modulation. (c) The top panel shows position traces for an example tracking run in the experiment. The gray line shows the position of the target center as a function of time. The black line shows the subject's cursor position as a function of time. The bottom panel shows the velocities for the example data in the panel above. The gray line shows the velocity of the target as a function of time. The black line shows the cursor velocity as a function of time. (d) Example tracking cross-correlograms. The gray line shows the cross-correlation between the stimulus and cursor velocities. This cross-correlogram is fit with a log-Gaussian function, and the time of the peak of the fit provides the estimate of tracking lag. Examples are given for stimuli that evoke longer (stimulus “A”) and shorter (stimulus “B”) lags.
Here, we examined how cone signals are combined to support stimulus tracking. In two separate experiments we measured the ability of participants (i) to track and (ii) to detect targets that were matched in spatial, temporal, and chromatic properties. Figure 1 provides an overview of the stimuli and experimental paradigm. In both experiments, we characterized how performance varied for stimuli modulated in different directions in the L- and S-cone contrast plane. More specifically, we measured tracking lag, taken as the time to peak estimated from the measured cross-correlogram for each stimulus modulation (Figure 1d). We then used a model-based approach to understand how the visual system integrates chromatic information to perform tracking and detection. The results characterize how any combination of L- and S-cone contrast contributes to performance on these tasks in the form of contrast-invariant isoperformance contours in the L and S cone contrast plane. In the Discussion, we speculate about the possible mechanistic basis of our results, but we note at the outset that our experiments were not designed to sharply characterize the underlying mechanisms. The data indicate a deficit in the ability of S cones to support target-tracking, relative to that of L cones, when tracking isoperformance contours are compared with those for detection. 
Results
Experiment 1: Chromatic contrast and tracking lag
We first examined the basic relationship between the contrast of a stimulus and tracking lag, as measured in our color tracking task (see Methods). Figure 2 presents tracking lag as a function of contrast for Subject 2, grouped by the chromatic direction of the stimuli. Overall, we observed that tracking lag decreased as stimulus contrast increased. This was observed in all subjects and for all chromatic directions (see Supplementary Figure S2 for data from Subjects 1 and 3). 
Figure 2.
 
Lag versus contrast for subject 2. Tracking lag as a function of modulation contrast separately for each chromatic direction. The closed circles indicate the lags and are grouped into their corresponding chromatic direction by plot colors. The chromatic direction angles are displayed in the legend of each panel. The error bars on each lag estimate are the SEMs found via a bootstrap procedure (that is, the standard deviations of the bootstrapped parameter estimates). The dashed curves in each panel are the lag predictions of the color tracking model (CTM; see the following sections). The line colors are matched to the colors of the corresponding symbols.
While tracking lag decreases with increases in contrast, the rate of decrease and minimum lag differed across chromatic directions. For example, the lag for tracking a 20% contrast L-cone isolating (0°) stimulus was roughly 325 ms, whereas 70% contrast was needed to achieve this low tracking latency for ±75° directions. Notably, tracking an S-cone isolating (90°) stimulus was associated with the longest lag values in all three subjects. This observation is broadly consistent with the prior literature on S-cone mediated temporal and motion processing (see Introduction). Our results are also broadly consistent with the stimulus dependence of eye-tracking lag described in a brief summary of an earlier study (Mulligan, 2002). 
With sufficiently high contrast, lag tended to asymptote at the same value for chromatic directions that incorporated some amount of L-cone contrast. For chromatic directions that had little to no L cone contrast, the lag versus contrast functions were decreasing at the highest contrasts available within our display gamut. We quantify the relative contribution of the L and S cones to tracking performance in the section below. 
Experiment 1: Color tracking model
We developed a color tracking model (CTM) that relates the L- and S-cone contrast of a stimulus modulation to tracking lag. The first stage of the model combines contrast from the two cone mechanisms through a quadratic computation. The responses of the mechanisms providing input to this stage are assumed to be linear with contrast. This stage transforms stimulus contrast and chromatic direction into what we call equivalent contrast. This is the effective contrast of the stimulus after it has been weighted by the sensitivity of the assumed underlying mechanisms for the corresponding chromatic direction. The first stage can be summarized by an elliptical contour whose shape indicates the relative L- and S-cone contrasts in any chromatic direction that lead to the same equivalent contrast. In the CTM, the shape of the elliptical isoperformance contour is constrained to be independent of overall contrast. 
This equivalent contrast computed by the first stage of the model provides a common axis for the lag measurements, collapsed across chromatic direction. The second stage of the model uses a single exponential decay function, described in more detail below, to transform equivalent contrast to tracking lag. This function is independent of the chromatic direction of the stimulus, as it depends only on the equivalent contrast. Hence, the elliptical contour that characterizes how equivalent contrast depends on chromatic direction specifies an isoperformance contour. It describes the relative L- and S-cone contrasts in any chromatic direction that lead to the same tracking lag. 
The shape of an elliptical isoperformance contour is specified by two parameters: (1) the ellipse angle θ (representing the direction of least sensitivity; counterclockwise to the positive abscissa), and (2) the minor axis ratio m (the ratio of vector lengths between the most and least sensitive directions). Within this framework, these parameters provide a full account of the dependence of performance on chromatic direction for the tracking task. Figure 3 shows the isoperformance contours for the CTM for all three subjects, with the parameter values for each subject inset in each panel. These are shown with their major axes normalized to have a length of 2 (±1 in each direction around the origin). 
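As a concrete illustration of this chromatic stage, the Python sketch below computes equivalent contrast from a pair of L- and S-cone contrasts given the two ellipse parameters. It is an illustration rather than the code used in the study: the function name and the particular normalization (weighting the most-sensitive component by 1/m and taking the vector length) are assumptions, with the exact formulation following the appendix of Barnett et al. (2021).

```python
import numpy as np

def equivalent_contrast(cL, cS, theta_deg, minor_axis_ratio):
    """Chromatic (first) stage of the CTM/CDM, sketched.

    theta_deg:        ellipse angle, the direction of least sensitivity
                      (degrees, counterclockwise to the positive L-cone axis).
    minor_axis_ratio: m, ratio of the most- to least-sensitive vector lengths
                      (0 < m <= 1).
    """
    th = np.deg2rad(theta_deg)
    # Rotate the contrast vector into the ellipse's principal-axis frame:
    # a = component along the least-sensitive (major) axis,
    # b = component along the most-sensitive (minor) axis.
    a = np.cos(th) * cL + np.sin(th) * cS
    b = -np.sin(th) * cL + np.cos(th) * cS
    # Weight the most-sensitive component by 1/m, then take the vector length.
    return np.hypot(a, b / minor_axis_ratio)

# With theta = 90 deg and m = 0.03 (values reported for tracking), a 10%
# L-cone isolating stimulus yields ~1/m (about 30x) the equivalent contrast
# of a 10% S-cone isolating stimulus.
print(equivalent_contrast(0.10, 0.0, 90.0, 0.03))  # ~3.33
print(equivalent_contrast(0.0, 0.10, 90.0, 0.03))  # 0.10
```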
Figure 3.
 
Isoperformance Contours for the Color Tracking Model. The gray ellipse in each panel shows the isoresponse contour associated with the color tracking task for each subject. The isoperformance contour is the set of stimuli that result in the same tracking lags. This contour is constrained in the CTM to take the form of an ellipse in the LS cone contrast plane, and its shape is constrained to be the same independent of overall contrast. The ellipse is specified by two parameters: (1) the ellipse angle and (2) the minor axis ratio. The ellipse angle represents the direction of least sensitivity defined counterclockwise to the positive abscissa. The minor axis ratio is the ratio of the vector lengths between the most and least sensitive directions. The scale of the ellipses across subjects is normalized to have a length of 2 along their major axis (1 in each direction); for this reason the contrast values on the axes should be interpreted as normalized rather than absolute. The ellipse parameters (means and their standard errors estimated by bootstrap resampling) are provided in each panel.
For all subjects, θ was close to 90°, implying that the chromatic mechanisms underlying tracking are least sensitive to stimuli modulated near to the nominal S-cone isolating direction (and most sensitive to stimuli modulated near to the nominal L-cone isolating direction), when overall stimulus contrast is expressed using the vector length convention that we follow (see Methods). The parameter m captures the magnitude of this difference in sensitivity across chromatic directions. All three subjects had an m value of 0.03, implying that tracking performance as summarized by tracking lag is ∼30× more sensitive to nominal L-cone isolating stimuli than to nominal S-cone isolating stimuli. 
Because the empirically determined ellipses are quite distended, we took special care to choose stimuli that aligned with the least sensitive direction (see Horiguchi, Winawer, Dougherty, & Wandell, 2013 for a similar approach). After an initial determination of the ellipse shape from measurements made with a common set of stimuli for each subject, we tailored additional stimuli to preliminary model fits for each subject, so that estimates of ellipse size and orientation would be well constrained by the data (stimulus angles for each subject provided in Figure 4). 
Figure 4.
 
Nonlinearity of the CTM. The gray curve in each panel shows the nonlinearity of the color tracking model. The x-axis is the equivalent contrast, which is the output of the isoperformance contour. The y-axis represents the response, which in this figure is the lag from the tracking task in seconds. The closed circles in each plot are the tracking lags. The stimulus contrast for each lag has been adjusted by the isoperformance contour, allowing the lags for all stimuli to be plotted on the equivalent contrast axis. The color map denotes which color corresponds to which direction. The inset color map in each panel marks the directions tested, which were unique to each subject. Parameters of the exponential fit to the data (not bootstrapped) are provided in each panel.
For each subject, the isoperformance contour can be used to convert stimuli of varying contrast and chromatic direction into equivalent contrast. This value for each stimulus was then related to tracking lag via the (three-parameter) exponential decay function (see Methods for equation), with that function common across all chromatic directions. Figure 4 presents the form of the response nonlinearity that was found for each subject. In all subjects, an increase in equivalent contrast was associated with a decrease in lag, reaching an asymptote of approximately 350 ms in all three subjects. The best-fit parameter values (A, s, and d; see Methods for functional form) differed between subjects and are provided in each panel of Figure 4. 
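As an illustration of this second stage, the sketch below assumes one common three-parameter exponential-decay parameterization (amplitude A, decay constant s, asymptotic lag d). The exact functional form used in the study is given in Methods, so this should be read as a placeholder, not the authors' equation.

```python
import numpy as np

def tracking_lag(eq_contrast, A, s, d):
    """Second (response) stage of the CTM, sketched under an assumed form:
    lag decays exponentially with equivalent contrast toward an asymptote d.
    A = amplitude, s = decay constant, d = asymptotic (minimum) lag in seconds."""
    return d + A * np.exp(-eq_contrast / s)

# At high equivalent contrast the predicted lag approaches d
# (about 0.35 s in the data reported above).
print(tracking_lag(np.array([0.1, 1.0, 10.0]), A=0.3, s=1.0, d=0.35))
```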
The ability of the CTM to account for the lag data is summarized by the agreement between transformed data and the response nonlinearity (Figure 4) as well as by the dashed fit lines to the untransformed tracking lags (Figure 2 and Supplementary Figure S2). Indeed, the five-parameter model (three for the non-linearity; two for the isoperformance contour) provides a good account of tracking performance across contrast levels and chromatic directions. One exception was found in Subject 3, for whom there appeared to be some bifurcation of transformed lag values relative to prediction at high equivalent contrast levels (Figure 4, right panel). We do not have an explanation for this. 
Experiment 2: Chromatic contrast and detection
Subjects participated in a two-interval forced choice (2IFC) color detection task in which they were asked to report which of two intervals contained a moving Gabor target. While it was presented, this target underwent random walk motion with the same parameters used in the tracking experiment. The Gabor targets varied in chromatic direction and contrast as in the tracking task, although we used fewer chromatic directions in this experiment. We measured fraction correct detection for each direction and contrast. As with the tracking lag data, we examined the relationship between fraction correct and contrast by grouping the measurements by their chromatic direction. For all subjects and chromatic directions (Figure 5 and Supplementary Figure S3), fraction correct increased with stimulus contrast. 
Figure 5.
 
Detection versus contrast for subject 2. The figure shows the fraction correct in the detection task as a function of the stimulus contrast for each chromatic direction used in the experiment. The closed circles in each panel are the fraction correct for an individual stimulus direction, with angle in the LS contrast plane provided above each panel. The solid curves are the fraction correct predictions of the color detection model fit to all the data simultaneously (see below). Note the difference in x-axis range between panels.
Differences in sensitivity across the chromatic directions can be appreciated by considering the stimulus contrast required to achieve a criterion detection threshold (e.g., 76% correct). For L-cone isolating stimuli, this contrast is less than one percent, while it is several percent for S-cone isolating stimuli. The very high sensitivity to L-cone isolating modulation suggests that this stimulus may be detected by an L − M cone-opponent mechanism (Chaparro, Stromeyer, Huang, Kronauer, & Eskew, 1993), but we have no independent confirmation of this possibility. 
Experiment 2: Color detection model
We developed a model of detection performance as a function of stimulus direction and contrast in the LS plane. As with the CTM, the color detection model (CDM) is also composed of two stages, with the first based on a quadratic computation of equivalent contrast (elliptical isoperformance contours), which in this case identifies the stimulus contrast in each chromatic direction that produces the same level of detection performance. The second stage is a cumulative Weibull (specified for a guessing fraction correct of 0.5). This allowed us to convert the equivalent contrast to a prediction of fraction correct, bounded between 0.5 and 1.0. 
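The sketch below illustrates the CDM's second stage as a cumulative Weibull with a 0.5 guessing rate for the 2IFC task. The parameter names (alpha, beta) and this particular parameterization are assumptions; the exact form used in the study is given in its Methods.

```python
import numpy as np

def fraction_correct(eq_contrast, alpha, beta):
    """Second (response) stage of the CDM, sketched: a cumulative Weibull
    mapping equivalent contrast to 2IFC fraction correct, bounded between
    the 0.5 guess rate and 1.0. alpha = scale, beta = slope."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(eq_contrast / alpha) ** beta))

# Performance rises from chance toward 1.0 as equivalent contrast increases.
print(fraction_correct(np.array([0.25, 1.0, 4.0]), alpha=1.0, beta=2.0))
```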
The elliptical isoperformance contours of the CDM for each subject are plotted in Figure 6. The plotting conventions are the same as used in Figure 3. For all subjects, θ was again close to 90°. Although the differences in angle across the two experiments exceed the measurement uncertainty as assessed by bootstrapping, the magnitudes of the numerical differences (less than 2 degrees) are small. 
Figure 6.
 
Isoperformance contours for the color detection task. The gray ellipse in each panel shows the isoresponse contour associated with the color detection task for each subject. These contours define a set of stimuli which produce the same fraction correct in the detection task. The isoperformance contour of the color detection model has the same parameterization as the color tracking model. The scale of the ellipses across subjects is normalized to have a length of 2 along their major axis (one in each direction); for this reason the contrast values on the axes should be interpreted as normalized rather than absolute. The ellipse parameters (means and standard errors estimated by bootstrap resampling) are provided in each panel.
The minor axis ratios (m) for the threshold experiment ranged between 0.09 and 0.10 across the subjects, implying that the underlying chromatic mechanisms for detection were ∼10× more sensitive to L-cone isolating stimuli than S-cone isolating stimuli. Recall that, for tracking performance, relative sensitivity to L-cone isolating stimuli was considerably stronger (∼30× rather than ∼10×). This difference in relative sensitivity is substantial, and well beyond the uncertainty in parameter estimation as established by bootstrapping. 
Figure 7 shows the nonlinearity of the CDM that relates equivalent contrast to detection performance. An increase in equivalent contrast was associated with an increase in detection performance that was well described by the function. There were slight variations in the values of the two parameters that define the function for each subject (values provided in each panel). 
Figure 7.
 
Nonlinearity of the color detection model. The gray curve in each panel shows the nonlinearity of the color detection model for all subjects. The x-axis is equivalent contrast. The y-axis is the response, which in this figure is the fraction correct in the detection task. The closed circles in each plot are the fraction correct for each condition tested. The stimulus contrast for each closed circle has been adjusted by the isoperformance contour, allowing the fraction correct to be plotted on the equivalent contrast axis. The color map denotes which color corresponds to which direction. Psychometric function parameters from the fit to the data (not bootstrapped) are provided in each panel (see Methods).
The ability of the CDM to account for the detection data is demonstrated by the overall fit shown in Figure 7, as well as solid fit lines shown in Figure 5 and Supplementary Figure S3. The four-parameter model (two for the isoperformance contour; two for the non-linearity that transforms equivalent contrast to fraction correct) provides a good account of detection performance across contrast levels and chromatic directions. 
Discussion
Quadratic models
We measured tracking and detection performance for a set of matched stimuli that varied in chromatic direction and contrast. We find that each task can be accounted for by a two-stage model. The two models, the CTM and the CDM, each have first stages of the same form: a quadratic combination of cone contrast that computes an overall equivalent contrast. The second stage is a task-specific, non-linear readout that relates equivalent contrast to performance. This model structure, which separates effects of chromatic direction from those of contrast, characterizes the role of chromatic direction on performance in a contrast-independent manner. Such separability between chromatic direction and contrast is a prerequisite for making well-defined statements about the sensitivity of performance as a function of the chromatic direction of the modulation. If performance were not direction-contrast separable, then consideration of the effect of chromatic direction would need to be accompanied by specification of what contrasts were being considered for each direction. The elliptical contours provided in Figures 3 and 6 summarize our findings about how performance depends on chromatic direction for each task. 
Similarly, the direction-contrast separability also allows us to make statements about the dependence of performance on contrast that are independent of chromatic direction. Figures 4 and 7 summarize our findings about this dependence for each task. 
To illustrate how the CTM may be used to re-express the results in interesting ways, we computed the predicted difference in tracking lag between S- and L-cone isolating stimuli, when the cone contrast across the two directions was equated. Figure 8 shows the result of this calculation for each of the three subjects. Each subject shows the same qualitative dependence: in each case the lag difference decreases systematically with contrast. The fact that lag difference depends strongly on cone contrast emphasizes the importance of jointly characterizing the dependence of performance on both chromatic direction and contrast. More generally, the contrast dependence of the relative lags reminds us that it is important to consider performance at multiple contrast levels: the effects of an independent variable (here color direction) can depend considerably on contrast. 
Figure 8.
 
Comparison of predicted S- and L-cone tracking lags. The figure shows predictions obtained from the CTM model for the difference between S- and L-cone tracking lag, when contrast is equated across the two chromatic directions. The x-axis shows the common cone contrast, and the y-axis the difference in tracking lag in seconds. The model would allow similar predictions to be made for any pair of chromatic directions and relative contrasts.
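As a hedged illustration of the Figure 8 calculation, the snippet below reuses the equivalent_contrast() and tracking_lag() sketches given earlier, with illustrative rather than fitted parameter values, to compute the predicted lag difference between S- and L-cone isolating stimuli at matched cone contrast.

```python
import numpy as np
# Reuses equivalent_contrast() and tracking_lag() from the earlier sketches.
# The CTM parameter values below are illustrative, not the fitted values.
contrasts = np.linspace(0.05, 0.85, 9)                       # common cone contrast
eq_L = equivalent_contrast(contrasts, 0.0, theta_deg=90.0, minor_axis_ratio=0.03)
eq_S = equivalent_contrast(0.0, contrasts, theta_deg=90.0, minor_axis_ratio=0.03)
lag_diff = (tracking_lag(eq_S, A=0.3, s=1.0, d=0.35)
            - tracking_lag(eq_L, A=0.3, s=1.0, d=0.35))      # S minus L lag, seconds
print(np.round(lag_diff, 3))
```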
Quadratic models have been used previously to understand color sensitivity (e.g., Guth et al., 1980; Krauskopf et al., 1982; Poirson et al., 1990; Poirson & Wandell, 1990b; Poirson & Wandell, 1990a; Guth, 1991; Poirson & Wandell, 1993; Poirson & Wandell, 1996; Knoblauch & Maloney, 1996), color motion processing (Chichilnisky, Heeger, & Wandell, 1993), and cortical responses to colored stimulus modulations (Horwitz & Hass, 2012; Barnett et al., 2021). 
S cone isolation
A number of factors limit how well silent substitution isolates intended mechanisms. These include the precision of stimulus control, individual differences in cone fundamentals, and variation of those fundamentals across the retina. We now consider how some of these factors might have influenced our data. Because subjects were more sensitive to L cone contrast than to S cone contrast in our tasks, we focused primarily on the degree to which our nominally S-cone isolating stimulus might have stimulated the L and M cones. 
First, we use the Asano et al. model of individual differences in cone fundamentals (Asano, Fairchild, & Blonde, 2016) to evaluate how our stimuli would have driven the cones of different subjects with cone fundamentals within the normal range of variation. We drew 10,000 sets of L-, M-, and S-cone fundamentals according to independent Gaussian distributions over the parameters of the model, with zero mean and the standard deviations provided by Asano et al. (2016). For each set of fundamentals, we evaluated how the angle corresponding to maximum S-cone isolation varied across these observers. To do so, we computed stimuli at 401 equally spaced angles between 80° and 100° in the LS cone contrast plane using the Stockman-Sharpe 2° fundamentals, as in our experiment. Each of these stimuli had 70% cone contrast. For each stimulus, we then computed its luminance contrast, and we found the angle with minimum luminance contrast. This procedure finds the angle that most purely drives the S cones on the assumption that any non S-cone mediated performance is driven by the L and M cones through the luminance mechanism. Panel a of Supplementary Figure S4 shows a histogram of the angles we obtained. The angles range from ∼88° to ∼94°, a range that encompasses the angle of poorest sensitivity found in our experiments. Supplementary Figure S4b shows the L and M cone mediated luminance contrast at the angle of minimum luminance contrast, where luminance was taken as a 2 to 1 weighted sum of L and M cones. The luminance contrasts do not exceed 0.05%, indicating that L and M cone mediated luminance is unlikely to have contributed to performance at our angle of least sensitivity. We repeated the analysis on the assumption that L − M contrast (i.e. the difference between L-cone and M-cone contrast), rather than luminance contrast, might have intruded on the intended S cone isolation. The results are shown in Supplementary Figures S4c and S4d and lead to a similar conclusion. 
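The sketch below outlines this check for a single simulated observer. The function, its argument names, and the treatment of luminance contrast as a 2:1 weighted sum of L- and M-cone contrasts are assumptions; real cone fundamentals (nominal and Asano-model perturbed) and calibrated primary spectra would be needed to reproduce the reported histograms.

```python
import numpy as np

def s_isolation_angle(T_nominal, T_perturbed, P_primaries, bg_rgb,
                      contrast=0.7, lum_weights=(2.0, 1.0)):
    """For one simulated observer (a sketch): among near-S-cone directions
    (80-100 deg, defined with the nominal fundamentals), find the angle whose
    stimulus produces the smallest luminance contrast when seen through the
    observer's perturbed fundamentals.

    T_nominal, T_perturbed : 3 x N cone fundamentals (nominal = Stockman-Sharpe
                             2-deg; perturbed = one draw from the Asano et al. model).
    P_primaries            : N x 3 primary spectral power distributions.
    bg_rgb                 : length-3 background primary settings.
    """
    M_nom = T_nominal @ P_primaries      # primary settings -> nominal LMS
    M_obs = T_perturbed @ P_primaries    # primary settings -> observer LMS
    lms_bg_nom = M_nom @ bg_rgb
    lms_bg_obs = M_obs @ bg_rgb
    wL, wM = lum_weights
    angles = np.linspace(80.0, 100.0, 401)
    lum = np.empty_like(angles)
    for i, ang in enumerate(angles):
        th = np.deg2rad(ang)
        nominal_contrast = np.array([contrast * np.cos(th), 0.0, contrast * np.sin(th)])
        delta_rgb = np.linalg.solve(M_nom, nominal_contrast * lms_bg_nom)
        obs_contrast = (M_obs @ delta_rgb) / lms_bg_obs     # contrast actually produced
        # Luminance contrast as an assumed 2:1 weighted sum of L and M cone contrasts.
        lum[i] = (wL * obs_contrast[0] + wM * obs_contrast[1]) / (wL + wM)
    return angles[np.argmin(np.abs(lum))]
```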
Given that our stimuli were moving, it is possible that subjects did not remain fixated on the target during the tasks. We do not have independent measurements of how accurately subjects' eyes tracked the targets, but we can use the magnitude of the deviation of the cursor from the target as a conservative proxy for the magnitude of fixation deviation. Supplementary Figure S5 shows histograms of the signed deviations between the center of the stimulus and the cursor position, for all three subjects and for all conditions consisting of nominally S-cone isolating modulations. The cursor rarely deviated by more than a degree, so that with our ∼2° stimulus, contrast was largely confined to the central 3° of the visual field. We repeated the analysis shown in Supplementary Figure S4, but starting with the CIE 3° and 4° cone fundamentals. The results were essentially unchanged (not shown), presumably because the individual difference variation incorporated into the calculation swamps the systematic effect of field size. 
A difference between the tracking and detection experiment is that in the tracking experiment, the video hardware quantized the signals controlling the display with eight-bit resolution, whereas this resolution was higher (nominally 14-bit) for the detection experiment. If we consider the maximum contrast in each color direction, the effect of 8-bit quantization can produce unintended luminance contrasts of ∼0.5% and similarly unintended L − M contrasts of up to 0.25%. These are small, but nonetheless large enough that they could conceivably contribute to tracking performance in the nominally S-cone isolating condition. Although we cannot completely rule out this possibility, we think it is unlikely for two reasons. First, quantization applies at individual pixels of the Gabor stimulus, and across the stimulus quantization effects are likely to balance out in sign via an implicit spatial dithering induced by the graded stimulus variation. Second, if unintended luminance or L − M contrast contributed substantially to tracking performance relative to their contribution in the detection experiment, we would expect the tracking ellipses to be less distended than the detection ellipses, opposite the pattern that we found. 
Underlying mechanisms
If the iso-performance ellipses for the two tasks had had the same shape, we could have concluded that the same combination of cone signals (i.e. the same equivalent contrast) accounted for performance in both tasks. This result was not observed. Instead, there was a substantial difference in relative sensitivity for S as compared to L chromatic directions, with a deficit for S-cone isolating stimuli of ∼3× observed for tracking relative to detection. 
Although our experiments do not uniquely determine the underlying mechanisms, we think detection of L-cone isolating stimuli is mediated either by a luminance or L − M cone contrast mechanism, while detection of S-cone isolating stimuli is mediated by the S cones. Although the stimuli in the detection experiment were moving, the subject did not have to judge their direction of motion, so we might suppose that in the detection experiment the most-sensitive S-cone mechanism need not support direction-selective motion judgments. If this is the case, and if tracking is supported by an S-cone input to a direction-selective mechanism (possibly a luminance mechanism; see review in Introduction), then a difference in S cone sensitivity relative to L and M cone sensitivity for this mechanism could explain the difference in ellipse axis ratio across the experiments. Such a difference could also be explained if there is additional delay of S-cone input into such a direction-selective luminance mechanism (e.g., Lee & Stromeyer, 1989), with the added delay leading to larger tracking lags. 
Our data do not, however, conclusively rule out other types of mechanistic explanations. For example, both S-cone detection and tracking could be mediated by a single S-cone mechanism, while the mechanisms that support tracking via L- and M-cone signals could differ from those that support detection, leading to a difference in relative L and S isoperformance contrasts across the two tasks. In this case, we might suppose that tracking L-cone isolating stimuli is mediated by a luminance mechanism while an L − M cone contrast mechanism mediates detection. 
There was also a difference in the major axis angle for Subjects 1 (∼1°) and 3 (∼2°). These angular deviations may be the result of measurement variability, but could also represent a small difference between tracking and detection. In the latter case, we do not have a concise explanation. 
Future directions
Future work would benefit from extending measurements beyond a single plane in color space to characterize the full, three-dimensional iso-response contour. Such contours would be expected to be well-described by ellipsoidal quadratic forms (Poirson et al., 1990; Knoblauch & Maloney, 1996). Indeed, if measured in a full set of directions in three-dimensional LMS contrast space, the logic introduced by Poirson and Wandell (Poirson & Wandell, 1993; Poirson & Wandell, 1996) could be used to leverage the change in sensitivity across tasks to estimate the inputs to a set of task-mediating cone-opponent mechanisms. Similarly, changes in the spatial frequency and motion parameters of the tracked stimuli, or variation in performance with adaptation, could potentially be exploited to further generalize the findings and constrain inferences about underlying mechanisms. 
Summary
The current work constitutes a step towards a broad goal of predicting how the temporal dynamics of visual processing and visuomotor behavior (e.g., the tracking lags) depend on the spatiochromatic properties of stimuli. Future experiments might aim to understand how the particular contrasts, colors, and spatial frequencies defining an arbitrary stimulus determine these dynamics, and elucidate properties of the underlying mechanisms. 
Methods
Subjects
Three subjects (ages 28, 29, 33; two male) took part in all psychophysical experiments. All subjects had normal or corrected to normal acuity and normal color vision. All subjects gave informed written consent. The research was approved by the University of Pennsylvania Institutional Review Board and conformed to the tenets of the Declaration of Helsinki. Two of the subjects are authors on this article; one was naïve to the purpose of the study. 
Experimental session overview
All subjects participated in both the color tracking task and color detection task experiments. In total, all experiments spanned eight sessions. The first six sessions were for the tracking task, and the remaining sessions were used for the detection task. Each of the tracking sessions lasted approximately 1.5 hours, and each detection session lasted approximately one hour. Subjects completed the full set of tracking sessions before starting the detection sessions. All experiments were preregistered: the color tracking task (Exp. 1: https://osf.io/xvsm3/; Exp. 2: https://osf.io/5y2dh/; Exp. 3: https://osf.io/e6dfs/) and the color detection task (Exp. 4: https://osf.io/ekv24/; Exp. 5: https://osf.io/ekv24/). 
Stimulus display and generation
The stimuli were designed to create specific responses of the cone photoreceptors using silent substitution (Estévez & Spekreijse, 1982). Silent substitution is based on the principle that sets of light spectra exist that, when exchanged, selectively modulate the activity of cone photoreceptors. Modulating the stimuli relative to a background can therefore selectively modulate the activity of the L, M, or S cones, or of specified combinations of cones, at a specified contrast. These calculations require a model of the spectral sensitivities of the cone photoreceptors and of the spectral power distributions of the monitor primaries. We used the Stockman-Sharpe 2° cone fundamentals. 
All stimuli were generated using a ViewSonic G220fb CRT monitor with three primaries and a refresh rate of 60 Hz. The monitor resolution was 1024 × 768 pixels (horizontal × vertical), corresponding to a screen size of 40.5 × 30.3 cm. Subjects viewed the monitor at a distance of 92.5 cm for tracking and 105 cm for detection. This corresponds to a screen size of 24.7° by 18.6° for tracking and 21.8° by 16.4° for detection. The difference in viewing distance was accounted for when generating the stimuli for the two experiments, so that the angular size of the stimuli was matched between tracking and detection. 
To obtain the spectral radiance of the monitor RGB primaries, we performed a monitor calibration using a spectral radiometer. For the tracking experiment, this was a Photoresearch PR-650 SpectraScan, which sampled the spectra at 4 nm intervals between 380 nm and 780 nm. For the detection experiment, this was a Photoresearch PR-670 SpectraScan, which sampled the spectra at 2 nm intervals between 380 and 780 nm. The same instruments were used to obtain gamma functions and the ambient background light for each experiment, respectively. 
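The core silent-substitution calculation can be summarized by the linear algebra sketched below. This is a generic illustration with invented function and variable names, and it omits the gamma correction and gamut checking that an actual implementation would require.

```python
import numpy as np

def rgb_modulation_for_cone_contrast(cone_contrast, T_cones, P_primaries, bg_rgb):
    """Silent-substitution sketch: find the (linear) RGB modulation around a
    background that produces a desired LMS cone-contrast vector.

    cone_contrast : length-3 desired (L, M, S) contrast, e.g. [0, 0, 0.7] for a
                    70% nominally S-cone isolating modulation.
    T_cones       : 3 x N cone fundamentals sampled at N wavelengths
                    (here, the Stockman-Sharpe 2-deg fundamentals).
    P_primaries   : N x 3 spectral power distributions of the monitor primaries,
                    from the calibration measurements.
    bg_rgb        : length-3 background primary settings.
    """
    M = T_cones @ P_primaries                        # 3 x 3: RGB -> LMS excitations
    lms_bg = M @ bg_rgb                              # background cone excitations
    delta_lms = np.asarray(cone_contrast) * lms_bg   # desired excitation change
    delta_rgb = np.linalg.solve(M, delta_lms)        # RGB modulation around background
    return delta_rgb
```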
Stimuli
The stimuli for all experiments were nominally restricted to the LS plane of cone contrast space. Cone contrast space is a three-dimensional space with each axis showing the change in the quantal catch of the L, M, and S cones relative to a specified reference spectrum. This reference spectrum is referred to as the background light. The background in the tracking experiment had luminance Y = 31 cd/m2 and chromaticity x = 0.326, y = 0.372. The background in the detection experiment had luminance Y = 32 cd/m2 and chromaticity x = 0.299, y = 0.332. These were calculated using the CIE 1931 XYZ color matching functions (https://cvrl.org). We set the origin of the LS cone contrast plane to be this background and confined all modulations to this plane. Modulations made around this background modify only the L- and S-cone excitations, leaving M-cone excitations unchanged. We refer to the chromatic component of the stimuli used in this experiment as vectors in this plane. Each stimulus has an L-cone and an S-cone vector component, and we refer to the stimuli by the angle computed from the ratio of these components. In this space, S-cone isolating stimulus vectors are oriented at 90° and L-cone isolating stimulus vectors are oriented at 0°. We refer to these angles as the chromatic directions. The overall color contrast of a stimulus is specified as the L2-norm of the stimulus vector in the LS cone contrast plane (the square root of the sum of the squared L- and S-cone contrasts). Color contrast specified in this way reduces to the conventional definition of contrast for stimuli modulated in the L- or S-cone isolating directions, and cannot exceed 100% for these directions. For intermediate directions, contrast specified in this way is not limited to 100% (but cannot exceed 141%). See Brainard (1996) for a discussion of color contrast specification and the plusses and minuses of various conventions. 
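The conversions between this angle-and-vector-length specification and the L- and S-cone contrast components are simple trigonometry, sketched below for concreteness (the function names are illustrative).

```python
import numpy as np

def direction_to_cone_contrast(angle_deg, contrast):
    """Convert a chromatic direction (deg in the LS plane) and vector-length
    contrast into L- and S-cone contrast components."""
    th = np.deg2rad(angle_deg)
    return contrast * np.cos(th), contrast * np.sin(th)

def cone_contrast_to_direction(cL, cS):
    """Inverse: recover chromatic direction (deg) and vector-length contrast."""
    return np.rad2deg(np.arctan2(cS, cL)), np.hypot(cL, cS)

# A 45-deg modulation at 100% vector-length contrast has ~70.7% L- and ~70.7%
# S-cone contrast, consistent with the sqrt(2) (141%) ceiling noted above.
print(direction_to_cone_contrast(45.0, 1.0))
```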
Our stimuli were Gabor patterns whose spatial properties are described in more detail below. There is thus a second aspect to contrast specification that arises because we summarize the contrast of the spatially modulated stimulus with a single number. For our stimuli, the spatial contrast modulation and the color contrast direction were separable variables, in the sense that the color contrast direction was constant across the stimulus and only its magnitude varied with spatial position. We construct the Gabor pattern as a single image plane with the underlying sinusoidal component having a magnitude of 1, and specify the contrast of a chromatic Gabor as the color contrast that is modulated by this pattern. This seems to us a sensible and typical convention. But it is important to note that the sinusoidal component of the Gabor patterns was presented in sine phase, a choice we made so that there is no change in the DC value of the stimulus as contrast varies. This in turn means that the maximum contrast actually achieved at any pixel of the underlying Gabor pattern is less than 1, since the sinusoidal modulation takes on its extreme values of ±1 away from the center of the Gabor, where it is attenuated by the Gaussian envelope. Thus, when we specify a contrast of, for example, 1 for the pattern, the highest contrast that we actually need to fit into the monitor gamut is less than 1. In our case, the highest value in the contrast pattern was 0.9221. This aspect of the contrast specification convention thus expands the contrast gamut (given our specification conventions) relative to what would be possible for, say, a Gabor pattern with the sinusoid in cosine phase. Supplementary Figure S1 shows the gamut of achievable contrasts for our stimuli, within our monitor's gamut in the LS contrast plane. 
The spatiotemporal parametrization of the stimuli was identical across both experiments. The stimuli were sine-phase Gabor patches. The frequency of the sine wave was set to 1 cycle per degree and the standard deviation of the Gaussian window was set to 0.6 degrees of visual angle (DVA). This standard deviation corresponds to a FWHM of 1.41°, with 90% of the Gaussian envelope contained in a 2° diameter window. The Gabor patch performed a random walk confined to move horizontally across the monitor. The Gabor target updated its position on each frame according to a Gaussian velocity distribution. This distribution was centered on 0°/s with a standard deviation of 2.6°/s and a FWHM of 6.1°/s. Values drawn from this distribution with a negative sign corresponded to leftward motion and values with a positive sign corresponded to rightward motion. This resulted in an average speed of 2.07°/s and an average step size of approximately 0.63 mm. What varied across stimuli was the chromatic content of the Gabor, with the spatiotemporal parameters fixed for all directions and contrasts. The exact chromatic directions and contrasts used in each experiment are reported in their respective sections below. 
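A minimal sketch of the target's random walk follows, using the stated frame rate and velocity distribution. The function and its default duration are illustrative (the study's trials also included a static first second).

```python
import numpy as np

def simulate_target_walk(duration_s=10.0, frame_rate=60.0, vel_sd_deg_s=2.6, seed=0):
    """Simulate the horizontal random walk of the Gabor target (a sketch).
    On each frame the velocity is drawn from a zero-mean Gaussian with the
    stated SD; position is the running sum of per-frame displacements."""
    rng = np.random.default_rng(seed)
    n_frames = int(duration_s * frame_rate)
    velocities = rng.normal(0.0, vel_sd_deg_s, n_frames)  # deg/s; sign = direction
    positions = np.cumsum(velocities / frame_rate)        # deg of visual angle
    return positions, velocities

pos, vel = simulate_target_walk()
print(np.mean(np.abs(vel)))   # ~2.07 deg/s average speed, as reported above
```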
The color tracking task
Subjects participated in the continuous tracking task in which they were asked to track the position of a target Gabor patch. On each trial, the Gabor patch spatially jittered its position along a horizontal linear path across the middle of the monitor in accordance with the temporal parameters noted in the prior section. The subjects were instructed to indicate the position of the Gabor patch by continuously trying to keep the cursor in the middle of the patch. Subjects controlled the position of the cursor through the use of a computer mouse. At the end of each trial, we obtained a time course of the target positions on the screen and the subject's cursor responses. Each trial lasted 11 seconds with an initial static period of one second. Example traces of the target (gray line) and tracking (black line) position as a function of time can be seen in the upper portion of panel c of Figure 1. 
The Gabor patches were modulated in 18 different chromatic directions each with 6 contrast levels. In each direction, the six contrasts were exponentially spaced between a maximum and minimum contrast (i.e. the logarithm of the contrasts was linearly spaced). Supplementary Table S1 summarizes the directions and maximum and minimum contrasts used in the tracking experiment for all subjects. These directions were split into three sets of experiments each with 6 directions. The conventions used to specify contrast are described above. 
Within each set, subjects completed two sessions, each made up of 10 experimental runs. An experimental run consisted of 36 trials corresponding to a single presentation of each of the conditions (six directions and six contrasts). Across runs, we flipped the handedness of the sine-phase Gabors, such that alternating runs were offset by 180° in the spatial phase of the Gabor. The order of trials within a run was pseudorandomized such that each session contained the desired number of repeats. Subjects controlled the pace of the trials, and between trials only the background was present. A single session contained 360 trials, amounting to approximately one hour of tracking. In total, across all three experimental sets, 2160 trials were collected per subject, equal to six hours of tracking data. 
Estimating tracking lag
From the time courses of the target and the tracking positions, we can estimate the tracking lag. To do so, we perform a cross-correlation between the velocities of the target's random walk and the velocities of the subject's tracking. The velocities of the target have a white-noise power spectrum, containing no temporal autocorrelations. Example velocity traces for a given run can be seen in the lower portion of panel c of Figure 1. In this panel, the target velocity is shown as the gray line, and the tracking velocity is shown as the black line. 
If the subject were to perfectly track the Brownian motion of the target, we would end up with two identical white-noise velocity traces. The cross-correlation of two such signals produces a delta function centered on 0 seconds, meaning that they are perfectly correlated only when the signals are temporally aligned and have no other correlation as a function of delaying one signal relative to the other. If instead the subject perfectly tracked the target motion but had a consistent 2-second delay in their tracking, then the resulting cross-correlation function would again be a delta function, but centered at 2 seconds rather than 0 seconds. Deviation from the idealized delta function shape can arise due to factors such as noisy tracking. In the current work, we use the time of the peak of the cross-correlation function as our estimate of the visual-motor system's tracking lag (Mulligan, Stevenson, & Cormack, 2013; Bonnen et al., 2015). 
To compute the lag for each condition, we concatenated the stimulus positions from all runs for that condition and separately concatenated the tracked positions. We then computed the cross-correlation of the pair of concatenated positions, which provides a single lag estimate for each chromatic direction and contrast pair. Example cross-correlograms are plotted as the gray lines in panel d of Figure 1; each shows the correlation between the two signals as a function of delaying one relative to the other for a particular Gabor target chromatic direction and contrast. To estimate lag, we fit a log-Gaussian function to the measured cross-correlogram and take its mode as the lag. The equation for the log-Gaussian fit to the cross-correlogram was  
\begin{eqnarray*} cc\left( t \right) = A\,e^{ -0.5\left( \frac{\ln\left( t \right) - \ln\left( \mu \right)}{\sigma } \right)^{2}}, \end{eqnarray*}
where the free parameter A describes the amplitude of the cross-correlogram and takes the place of the normalizing constant in the standard equation for a log-Gaussian probability density function. The parameter μ gives the time at which the log-Gaussian fit reaches its maximum and provides our estimate of lag. An example log-Gaussian fit is shown as the purple curve in panel d of Figure 1. Overall, we obtain one lag estimate per stimulus condition. There is additional information in the cross-correlogram beyond the lag, which we did not examine here. 
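A log-Gaussian of this form could be fit to a measured cross-correlogram as in the sketch below (a SciPy illustration under our assumptions; the variable names `lag_axis` and `xcorr_positive` and the starting values are placeholders, not part of the published analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(t, A, mu, sigma):
    """Log-Gaussian cross-correlogram model: cc(t) = A * exp(-0.5 * ((ln t - ln mu) / sigma)**2)."""
    return A * np.exp(-0.5 * ((np.log(t) - np.log(mu)) / sigma) ** 2)

def fit_lag(lag_axis, xcorr_positive):
    """Fit the log-Gaussian to a cross-correlogram and return mu, its peak time (the lag)."""
    keep = lag_axis > 0                            # the log is defined only for positive delays
    p0 = (xcorr_positive[keep].max(), 0.4, 0.5)    # rough, hypothetical starting guesses
    (A, mu, sigma), _ = curve_fit(log_gaussian, lag_axis[keep], xcorr_positive[keep], p0=p0)
    return mu
```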
The error bars on the lag estimates plotted in Figure 2 were computed as the standard deviation of bootstrapped lag estimates, which provides an estimate of the SEM. The same bootstrapping approach was used to estimate SEMs in the other cases where measurement precision is reported. 
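The bootstrap could be implemented along the following lines (a sketch only; we assume resampling is over runs with replacement, which the text does not specify, and `estimate_lag` stands in for the cross-correlation and log-Gaussian fitting procedure above):

```python
import numpy as np

def bootstrap_sem(target_runs, cursor_runs, estimate_lag, n_boot=1000, seed=0):
    """SEM of a lag estimate, taken as the standard deviation of bootstrapped lags.

    `target_runs` and `cursor_runs` are lists with one position trace per run for a
    single chromatic direction and contrast; `estimate_lag` is the cross-correlation
    plus log-Gaussian procedure sketched above.
    """
    rng = np.random.default_rng(seed)
    n_runs = len(target_runs)
    boot_lags = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_runs, size=n_runs)              # resample runs with replacement
        target = np.concatenate([target_runs[i] for i in idx])
        cursor = np.concatenate([cursor_runs[i] for i in idx])
        boot_lags.append(estimate_lag(target, cursor))
    return np.std(boot_lags)
```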
The color tracking model
The color tracking model (CTM) is a five-parameter model that predicts the tracking lag for any input stimulus specified in terms of its L- and S-cone contrasts. A detailed formulation of a related model may be found in the model appendix of Barnett et al. (2021). Converting chromatic direction and contrast into lag proceeds in two stages. The first stage applies a quadratic isoperformance contour. The isoperformance contour effectively weights the input stimulus contrast by the sensitivity of the underlying chromatic mechanism for the corresponding chromatic direction. This weighting produces a single output variable from the L- and S-cone contrast inputs, collapsing across chromatic direction. We refer to this output variable as ‘equivalent contrast’; it represents the strength of the chromatic mechanism output, independent of the original chromatic direction. The isoperformance contour represents sets of stimuli that, when shown to a subject, result in equal tracking lags. In the CTM, the shape of the isoperformance contour is constrained to be an ellipse in the LS cone contrast plane. 
Of the five parameters in the CTM, two specify the elliptical isoperformance contour. The first is the ellipse angle, which gives the direction of least sensitivity in the LS plane, read counterclockwise from the positive abscissa. Because the parameterization of the model enforces two orthogonal mechanisms, the second mechanism represents the direction of maximal sensitivity. The second parameter is the minor axis ratio, the ratio of the minor to the major axis vector length. In the model, the major axis is fixed at unit length and the minor axis is constrained to be shorter, so the minor axis vector length can be read directly as this ratio. Together, these two parameters provide a complete account of the chromatic stage of the model. 
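Under our reading of this parameterization, the chromatic stage can be written down as in the sketch below (an illustration, not the authors' implementation; the function and argument names are our own):

```python
import numpy as np

def equivalent_contrast(L_contrast, S_contrast, ellipse_angle_deg, minor_axis_ratio):
    """First (chromatic) stage of the model: map L- and S-cone contrast to a scalar
    equivalent contrast via the elliptical isoperformance contour.

    The ellipse angle is the least-sensitive direction, counterclockwise from the
    positive L-contrast axis; the major axis has unit length and the minor axis has
    length `minor_axis_ratio`, so stimuli on the ellipse map to equivalent contrast 1.
    """
    theta = np.deg2rad(ellipse_angle_deg)
    # Project the stimulus onto the two orthogonal mechanism axes.
    c_major = np.cos(theta) * L_contrast + np.sin(theta) * S_contrast
    c_minor = -np.sin(theta) * L_contrast + np.cos(theta) * S_contrast
    # Weight each projection by the inverse axis length and take the vector length.
    return np.sqrt((c_major / 1.0) ** 2 + (c_minor / minor_axis_ratio) ** 2)
```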
The second stage of the CTM employs a single nonlinearity that transforms the equivalent contrast into a prediction of the tracking lag. Because the isoperformance contour collapses across chromatic directions to a scalar equivalent contrast, a single nonlinear function suffices to map equivalent contrast to tracking lag. The functional form we use is the three-parameter exponential decay function  
\begin{eqnarray*} lag = A\,e^{ -s\,m} + d \end{eqnarray*}
 
In this expression, the parameters A, s, and d represent the amplitude, scale, and offset, respectively. The scale parameter operates on the equivalent contrast (m) and acts as a gain on the output of the chromatic stage of the model. The offset (d) is interpreted as the minimum lag; it is the value at which the decay function asymptotes. 
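The second stage then maps the output of the chromatic stage to a lag prediction; a minimal sketch, reusing the `equivalent_contrast` function above and using purely illustrative parameter values (not fitted values from this study), follows:

```python
import numpy as np

def ctm_predicted_lag(equiv_contrast, amplitude, scale, offset):
    """Second stage of the CTM: exponential decay from equivalent contrast to lag (s)."""
    return amplitude * np.exp(-scale * equiv_contrast) + offset

# Purely illustrative values (not parameters reported in the paper):
m = equivalent_contrast(0.05, 0.0, ellipse_angle_deg=45.0, minor_axis_ratio=0.2)
predicted_lag = ctm_predicted_lag(m, amplitude=0.3, scale=10.0, offset=0.4)
```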
The color detection task
Subjects participated in a color detection task, which allowed detection thresholds to be estimated for each chromatic direction. The detection task used a two-interval forced-choice (2IFC) paradigm in which a Gabor stimulus was presented in one of two sequential intervals. The start of each interval was marked by a brief tone as well as the disappearance of the fixation dot. Each interval had a duration of 400 ms, and the intervals were separated by a 200-ms gap during which the fixation dot was present. At the end of the trial, the subject indicated via a gamepad button press the interval in which they thought the stimulus had been presented. Based on this response, the subject received auditory feedback (high-pitch tone for correct, low-pitch tone for incorrect). 
The Gabor stimuli in the detection task had the same spatio-temporal properties as those used in the tracking task; during their 400-ms presentation, the stimuli therefore performed the same type of random walk. The primary way in which the non-chromatic properties of the stimuli differed across tasks was in the stimulus ramping. At the beginning and end of the interval containing the target, the contrast of the Gabor was temporally windowed with a half-cosine ramp of 100-ms duration. The structure of the target interval was thus a 100-ms ascending ramp, 200 ms of full stimulus contrast, and a 100-ms descending ramp. 
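The temporal window can be sketched as follows (our own illustration; the 60-Hz frame rate is an assumption, not a reported parameter of the experiment):

```python
import numpy as np

def half_cosine_window(n_frames, frame_rate_hz=60.0, ramp_ms=100.0):
    """Temporal contrast window for the detection stimulus: half-cosine ramps at onset
    and offset, full contrast in between. The frame rate here is an assumed placeholder."""
    t = np.arange(n_frames) / frame_rate_hz
    ramp = ramp_ms / 1000.0
    duration = n_frames / frame_rate_hz
    window = np.ones(n_frames)
    rising = t < ramp
    falling = t > duration - ramp
    window[rising] = 0.5 * (1 - np.cos(np.pi * t[rising] / ramp))
    window[falling] = 0.5 * (1 - np.cos(np.pi * (duration - t[falling]) / ramp))
    return window

# 400-ms interval at an assumed 60 Hz: 24 frames, with 100-ms ramps at each end.
contrast_window = half_cosine_window(n_frames=24)
```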
For this experiment, we modulated the Gabor stimuli in 12 chromatic contrast directions. The background differed somewhat from that used in the tracking experiment (see the background specifications above). Within each chromatic direction, we tested six evenly spaced contrast levels between the maximum contrast and zero (excluding zero). The maximum contrasts were determined in pilot experiments for each subject and were intended to sample the rising portion of the psychometric function effectively. The directions tested and the corresponding maximum contrasts are reported in Supplementary Table S2. The conventions used to specify contrast are described above. 
The stimuli were displayed on the same CRT monitor as in the previous experiments. One difference was the need for finer control of contrast than was required for the tracking experiments. To achieve the required bit depth, we used a Bits++ device (Cambridge Research Systems) to enable 14-bit (nominal) control of the R, G, and B channel inputs to the CRT. In addition, a different host computer was used to control the experiment (Asus RoG laptop running Ubuntu 20.04) for compatibility with the Bits++ device. In generating the stimuli for this experiment, a software error led us to use the CIE cone fundamentals for a 30-year-old observer, rather than the 32-year-old value that aligns the CIE fundamentals with the Stockman-Sharpe 2° fundamentals. In addition, the ambient light from the monitor (that measured with the RGB inputs set to zero) was not accounted for in stimulus generation. In analyzing and reporting the data for this experiment, however, we used the spectra of the stimuli actually presented (including the ambient) and computed contrast with respect to the intended Stockman-Sharpe 2° fundamentals. The deviations between intended and obtained stimulus contrasts caused by this software error are very small: the maximum deviation between intended and obtained L-cone contrast was less than 0.07%, M-cone contrast less than 0.05%, S-cone contrast less than 0.6%, luminance contrast less than 0.06%, and L − M cone contrast less than 0.05%. The stimulus angles deviate slightly from their nominal values; Supplementary Table S2 provides the actual angles and contrasts after correction to the stimuli actually presented. Trials were blocked so that 60 trials of a given direction were shown consecutively. Each block contained 10 presentations of each contrast level. The contrasts were pseudorandomized such that a random permutation of all six levels was shown before any contrast was repeated. To orient the subject to the chromatic direction of the block, three practice trials at the highest contrast level were shown at the start of each block. Subjects completed a total of 40 trials per contrast/direction pair, that is, four blocks per direction. Half of the blocks for each direction used left-handed Gabors and the other half used right-handed Gabors. With 12 directions, each with six contrast levels, this yielded a total of 240 trials per direction and 2880 trials overall. 
Threshold detection
From the detection data, we estimated a threshold for each chromatic direction tested. Threshold is the stimulus contrast needed, in each direction, to reliably detect the Gabor target. We estimated this value by fitting a psychometric function to the fraction correct as a function of stimulus contrast for each direction. Specifically, we fit a cumulative Weibull function and used it to determine the contrast needed to reach 76% correct. 
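A sketch of this fitting and inversion step, using SciPy and assuming the cumulative Weibull form given in the next section with the guess rate fixed at 0.5 and no lapse parameter, is shown below:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_2ifc(contrast, scale, shape):
    """Cumulative Weibull with the guess rate fixed at 0.5 (chance in 2IFC), no lapse term."""
    return 1.0 - 0.5 * np.exp(-(contrast / scale) ** shape)

def estimate_threshold(contrasts, fraction_correct, criterion=0.76):
    """Fit the Weibull to fraction correct versus contrast and invert it at the criterion."""
    p0 = (np.median(contrasts), 2.0)                      # rough, hypothetical starting guesses
    (scale, shape), _ = curve_fit(weibull_2ifc, contrasts, fraction_correct, p0=p0)
    # criterion = 1 - 0.5*exp(-(c/scale)**shape)  =>  c = scale * (-ln(2*(1 - criterion)))**(1/shape)
    return scale * (-np.log(2 * (1 - criterion))) ** (1 / shape)
```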
The color detection model
The color detection model (CDM) is a four-parameter model that predicts the fraction correct in the detection task for any input stimulus specified in terms of its L- and S-cone contrasts. The CDM parallels the CTM in its construction: it also uses two stages to convert stimulus contrast to fraction correct. The first stage is an elliptical isoperformance contour identical in form to the one used in the CTM (see The color tracking model section). Because the relationship between equivalent contrast and the variable of interest has a different form in the two tasks, the second stage uses a task-dependent nonlinearity. In the CTM, this relationship was captured with an exponential decay function; for the detection task, we use a cumulative Weibull function to convert the equivalent contrast into fraction correct. The functional form is  
\begin{eqnarray*} PC = 1 - \left( 1 - 0.5 \right)e^{ -\left( \frac{m}{\lambda } \right)^{k}} \end{eqnarray*}
 
In this expression, the parameters λ and k represent the scale and shape, respectively. The scale parameter operates on the equivalent contrast (m) and acts as a gain on the output of the chromatic stage of the model. The shape parameter (k) controls the slope. The guess rate is fixed at 0.5 because this is chance performance in a 2IFC task; the (1 − 0.5) term therefore bounds the output fraction correct between 0.5 and 1. 
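Putting the two stages together, a CDM prediction can be sketched as follows (reusing the `equivalent_contrast` function from the earlier sketch; again an illustration rather than the authors' code):

```python
import numpy as np

def cdm_fraction_correct(L_contrast, S_contrast,
                         ellipse_angle_deg, minor_axis_ratio, scale, shape):
    """Two-stage CDM prediction: the elliptical chromatic stage (shared in form with the
    CTM) followed by a cumulative Weibull with the guess rate fixed at 0.5."""
    m = equivalent_contrast(L_contrast, S_contrast, ellipse_angle_deg, minor_axis_ratio)
    return 1.0 - (1.0 - 0.5) * np.exp(-(m / scale) ** shape)
```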
Parameter fitting
We fit both the CTM and the CDM to their respective measurements as a function of stimulus contrast. The data were fit using the MATLAB function fmincon to find the set of model parameters that minimizes the root mean squared error between the measured tracking lags or fractions correct and the corresponding predictions of the CTM or CDM. 
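A Python analogue of this fitting procedure for the CTM, using scipy.optimize.minimize in place of fmincon and reusing the `equivalent_contrast` and `ctm_predicted_lag` sketches above, might look like the following; the starting values and bounds are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.optimize import minimize

def fit_ctm(L_contrasts, S_contrasts, measured_lags):
    """Fit the five CTM parameters by minimizing the RMSE between measured and
    predicted lags (a SciPy analogue of the fmincon-based fit described above)."""
    def rmse(params):
        angle, ratio, amplitude, scale, offset = params
        m = equivalent_contrast(L_contrasts, S_contrasts, angle, ratio)
        predicted = ctm_predicted_lag(m, amplitude, scale, offset)
        return np.sqrt(np.mean((measured_lags - predicted) ** 2))

    x0 = np.array([45.0, 0.2, 0.3, 10.0, 0.4])                        # illustrative starting values
    bounds = [(0, 180), (1e-3, 1), (0, None), (0, None), (0, None)]   # illustrative bounds
    return minimize(rmse, x0, bounds=bounds, method="L-BFGS-B")
```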
Acknowledgments
Supported by NIH R01-EY028571 (JB). 
Commercial relationships: none. 
Corresponding author: David H. Brainard. 
Address: University of Pennsylvania, Goddard Labs, Suite 302C, Philadelphia, PA 19104, USA. 
Footnotes
1  The orientation of the isoperformance contours constrains the possible underlying mechanisms, but it does not uniquely determine them. In particular, any given quadratic isoperformance ellipse is consistent with an equivalent contrast computed as the vector length of the joint response of many pairs of post-receptoral chromatic mechanisms (Poirson et al., 1990; see also Barnett et al., 2021). Nonetheless, the implication does hold that if two tasks are mediated by the same underlying mechanisms, possibly with differential sensitivity to the output of each mechanism, then the contours should have the same orientation.
References
Asano, Y., Fairchild, M. D., & Blonde, L. (2016). Individual colorimetric observer model. PLoS One, 11(2), e0145671. [CrossRef] [PubMed]
Barnett, M. A., Aguirre, G. K., & Brainard, D. (2021). A quadratic model captures the human V1 response to variations in chromatic direction and contrast. eLife, 10, e65590. [CrossRef] [PubMed]
Blake, Z., Land, T., & Mollon, J. (2008). Relative latencies of cone signals measured by a moving vernier task. Journal of Vision, 8(16):16, 11. [CrossRef]
Bonnen, K., Burge, J., Yates, J., Pillow, J., & Cormack, L. K. (2015). Continuous psychophysics: Target-tracking to measure visual sensitivity. Journal of Vision, 15(3), 14. [CrossRef] [PubMed]
Bonnen, K., Huk, A. C., & Cormack, L. K. (2017). Dynamic mechanisms of visually guided 3D motion tracking. Journal of Neurophysiology, 118(3), 1515–1531. [CrossRef] [PubMed]
Brainard, D. H. (1996). Cone contrast and opponent modulation color spaces. In Kaiser, P. K. & Boynton, R. M. (Eds.), Human Color Vision (2 ed., pp. 563–579). Washington, D.C.: Optical Society of America.
Brainard, D. H., & Stockman, A. (2010). Colorimetry. In Bass, M., DeCusatis, C., Enoch, J., Lakshminarayanan, V., Li, G., Macdonald, C., ... van Stryland, E. (Eds.), The Optical Society of America Handbook of Optics, 3rd edition, Volume III: Vision and Vision Optics (pp. 10.11–10.56). New York: McGraw Hill.
Burge, J., & Cormack, L. K. (2024). Continuous psychophysics shows millisecond-scale visual processing delays are faithfully preserved in movement dynamics. Journal of Vision, 24(5), 4. [CrossRef] [PubMed]
Casile, A., Victor, J. D., & Rucci, M. (2019). Contrast sensitivity reveals an oculomotor strategy for temporally encoding space. eLife, 8, e40924. [CrossRef] [PubMed]
Cavanagh, P., MacLeod, D. I. A., & Anstis, S. M. (1987). Equiluminance: Spatial and temporal factors and the contribution of blue-sensitive cones. Journal of the Optical Society of America A, 4(8), 1428–1438. [CrossRef]
Chaparro, A., Stromeyer, C. F., III, Huang, E. P., Kronauer, R. E., & Eskew, R. T., Jr. (1993). Colour is what the eye sees best. Nature, 361(6410), 348–350. [CrossRef] [PubMed]
Chichilnisky, E. J., Heeger, D., & Wandell, B. A. (1993). Functional segregation of color and motion perception examined in motion nulling. Vision Research, 33(15), 2113–2125. [CrossRef] [PubMed]
Chin, B. M., & Burge, J. (2022). Perceptual consequences of interocular differences in the duration of temporal integration. Journal of Vision, 22(12), 12. [CrossRef] [PubMed]
Cottaris, N. P., & De Valois, R. L. (1998). Temporal dynamics of chromatic tuning in macaque primary visual cortex. Nature, 395(6705), 896–900. [CrossRef] [PubMed]
Dacey, D. M. (2000). Parallel pathways for spectral coding in primate retina. Annual Review of Neuroscience, 23, 743–775. [CrossRef] [PubMed]
de Lange, H. (1958). Research into the dynamic nature of the human fovea-cortex systems with intermittent and modulated light. I. Attenuation characteristics with white and colored light. Journal of the Optical Society of America, 48, 777–784. [CrossRef] [PubMed]
Derrington, A. M., Krauskopf, J., & Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 241–265. [CrossRef] [PubMed]
DeValois, R. L., Abramov, I., & Jacobs, G. H. (1966). Analysis of response patterns of LGN cells. Journal of the Optical Society of America, 56, 966–977. [CrossRef] [PubMed]
Dougherty, R. F., Press, W. A., & Wandell, B. A. (1999). Perceived speed of colored stimuli. Neuron, 24(4), 893–899. [CrossRef] [PubMed]
Engel, S., Zhang, X. M., & Wandell, B. (1997). Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature, 388(6637), 68–71. [CrossRef] [PubMed]
Eskew, R. T., Jr. (2009). Higher order color mechanisms: A critical review. Vision Research, 49(22), 2686–2704. [CrossRef] [PubMed]
Eskew, R. T., Jr., Wang, Q., & Richters, D. P. (2004). A five-mechanism model of hue sensations. Journal of Vision, 4(8), 315. [CrossRef]
Estévez, O., & Spekreijse, H. (1982). The “silent substitution” method in visual research. Vision Research, 22, 681–691. [CrossRef] [PubMed]
Gegenfurtner, K. (2003). Cortical mechanisms of colour vision. Nature Neuroscience, 4(7), 563–572. [CrossRef]
Gegenfurtner, K., & Kiper, D. C. (1992). Contrast detection in luminance and chromatic noise. Journal of the Optical Society of America A, 9(11), 1880–1888. [CrossRef]
Gentile, C. P., Spitschan, M., Taskin, H. O., Bock, A. S., & Aguirre, G. K. (2024). Temporal sensitivity for achromatic and chromatic flicker across the visual cortex. Journal of Neuroscience, 44(21), e1395232024, https://doi.org/10.1523/JNEUROSCI.1395-23.2024. [CrossRef]
Guth, S. L. (1991). Model for color and light adaptation. Journal of the Optical Society of America A, 8(6), 976–993. [CrossRef]
Guth, S. L., Massof, R. W., & Benzschawel, T. (1980). Vector model for normal and dichromatic color vision. Journal of the Optical Society of America, 70(2), 197–212. [CrossRef] [PubMed]
Hansen, T., & Gegenfurtner, K. R. (2013). Higher order color mechanisms: Evidence from noise-masking experiments in cone contrast space. Journal of Vision, 13(1), 26.21. [CrossRef]
Horiguchi, H., Winawer, J., Dougherty, R. F., & Wandell, B. A. (2013). Human trichromacy revisited. Proceedings of the National Academy of Science U.S.A., 110(3), E260–E269. [CrossRef]
Horwitz, G. D., & Hass, C. A. (2012). Nonlinear analysis of macaque V1 color tuning reveals cardinal directions for cortical color processing. Nature Neuroscience, 15(6), 913–919. [CrossRef] [PubMed]
Kelly, D. H. (1975). Luminous and chromatic flickering patterns have opposite effects. Science, 188(4186), 371–372. [CrossRef] [PubMed]
Kim, Y. J., Peterson, B. B., Crook, J. D., Joo, H. R., Wu, J., Puller, C., ... Dacey, D. M. (2022). Origins of direction selectivity in the primate retina. Nature Communications, 13(1), 2862. [CrossRef] [PubMed]
Knoblauch, K., & Maloney, L. T. (1996). Testing the indeterminacy of linear color mechanisms from color discrimination data. Vision Research, 36(2), 295–306. [CrossRef] [PubMed]
Krauskopf, J., Williams, D. R., & Heeley, D. W. (1982). Cardinal directions of color space. Vision Research, 22(9), 1123–1131. [CrossRef] [PubMed]
Krauskopf, J., Williams, D. R., Mandler, M. B., & Brown, A. M. (1986). Higher order color mechanisms. Vision Research, 26(1), 23–32. [CrossRef] [PubMed]
Lee, J., & Stromeyer, C. F., III. (1989). Contribution of human short-wave cones to luminance and motion detection. Journal of Physiology, 413, 563–593. [CrossRef] [PubMed]
Lee, R. J., Mollon, J. D., Zaidi, Q., & Smithson, H. E. (2009). Latency characteristics of the short-wavelength-sensitive cones and their associated pathways. Journal of Vision, 9(12):5, 1–17. [PubMed]
Lennie, P., & Movshon, J. A. (2005). Coding of color and form in the geniculostriate visual pathway. Journal of the Optical Society of America A, 10(10), 2013–2033.
Liu, J., & Wandell, B. A. (2005). Specializations for chromatic and temporal signals in human visual cortex. Journal of Neuroscience, 25(13), 3459–3468. [PubMed]
Masland, R. H. (2012). The neuronal organization of the retina. Neuron, 76(2), 266–280. [PubMed]
McKeefry, D. J., Parry, N. R. A., & Murray, I. J. (2003). Simple reaction times in color space: The influence of chromaticity, contrast, and cone opponency. Investigative Ophthalmology & Visual Science, 44(5), 2267–2276. [PubMed]
Merigan, W. H., & Maunsell, J. H. R. (1990). Macaque vision after magnocellular lateral geniculate lesions. Visual Neuroscience, 5, 347–352. [PubMed]
Metha, A. B., & Mullen, K. T. (1996). Temporal mechanisms underlying flicker detection and identification for red-green and achromatic stimuli. Journal of the Optical Society of America A, 13(10), 1969–1980.
Mollon, J. D., & Krauskopf, J. (1973). Reaction time as a measure of the temporal response properties of individual colour mechanisms. Vision Research, 13, 27–40. [PubMed]
Mullen, K. T. (1985). The contrast sensitivity of human colour vision to red-green and blue-yellow gratings. Journal of Physiology, 359, 381–400. [PubMed]
Mulligan, J. B. (2002). Sensory processing delays measured with the eye-movement correlogram. Annals–New York Academy of Sciences, 956, 476–478.
Mulligan, J. B., Stevenson, S. B., & Cormack, L. K. (2013). Reflexive and voluntary control of smooth eye movements. Presented at Human Vision and Electronic Imaging XVIII.
Poirson, A. B., & Wandell, B. A. (1990a). The ellipsoidal representation of spectral sensitivity. Vision Research, 30(4), 647–652. [PubMed]
Poirson, A. B., & Wandell, B. A. (1990b). Task-dependent color discrimination. Journal of the Optical Society Of America A, 7(4), 776–782.
Poirson, A. B., & Wandell, B. A. (1993). Appearance of colored patterns: Pattern-color separability. Journal of the Optical Society of America A, 10(12), 2458–2470.
Poirson, A. B., & Wandell, B. A. (1996). Pattern-color separable pathways predict sensitivity to simple colored patterns. Vision Research, 36(4), 515–526. [PubMed]
Poirson, A. B., Wandell, B. A., Varner, D. C., & Brainard, D. H. (1990). Surface characterizations of color thresholds. Journal of the Optical Society of America A, 7(4), 783–789.
Ripamonti, C., Woo, W. L., Crowther, E., & Stockman, A. (2009). The S-cone contribution to luminance depends on the M- and L-cone adaptation levels: Silent surrounds? Journal of Vision, 9(3), 10.11–10.
Schnapf, J. L., Nunn, B. J., Meister, M., & Baylor, D. A. (1990). Visual transduction in cones of the monkey Macaca fascicularis. Journal of Physiology, 427, 681–713. [PubMed]
Seidemann, E., Poirson, A. B., Wandell, B. A., & Newsome, W. T. (1999). Color signals in area MT of the macaque monkey. Neuron, 24(4), 911–917. [PubMed]
Sekiguchi, N., Williams, D. R., & Brainard, D. H. (1993). Aberration-free measurements of the visibility of isoluminant gratings. Journal of the Optical Society of America A, 10(10), 2105–2117.
Shapley, R., & Hawken, M. (2002). Neural mechanisms for color perception in the primary visual cortex. Current Opinion in Neurobiology, 12(4), 426–432. [PubMed]
Shevell, S. K., & Martin, P. R. (2017). Color opponency: Tutorial. Journal of the Optical Society of America A, 34(7), 1099–1108.
Shinomori, K., & Werner, J. S. (2008). The impulse response of S-cone pathways in detection of increments and decrements. Visual Neuroscience, 25(3), 341–347. [PubMed]
Smithson, H. E., & Mollon, J. D. (2004). Is the S-opponent chromatic sub-system sluggish? Vision Research, 44(25), 2919–2929. [PubMed]
Spitschan, M., Datta, R., Stern, A. M., Brainard, D. H., & Aguirre, G. K. (2016). Human visual cortex responses to rapid cone and melanopsin-directed flicker. Journal of Neuroscience, 36(5), 1471–1482. [PubMed]
Stockman, A., & Brainard, D. H. (2010). Color vision mechanisms. In Bass, M., DeCusatis, C., Enoch, J., Lakshminarayanan, V., Li, G., Macdonald, C., ... van Stryland, E. (Eds.), The Optical Society of America Handbook of Optics, 3rd edition, Volume III: Vision and Vision Optics (pp. 11.11–11.104). New York: McGraw Hill.
Stockman, A., MacLeod, D. I. A., & DePriest, D. D. (1991). The temporal properties of the human short-wave photoreceptors and their associated pathways. Vision Research, 31(2), 189–208. [PubMed]
Stockman, A., MacLeod, D. I. A., & Lebrun, S. (1993). Faster than the eye can see: Blue cones respond to rapid flicker. Journal of the Optical Society of America A, 10(6), 1396–1402.
Stockman, A., & Plummer, D. J. (1998). Color from invisible flicker: A failure of the Talbot-Plateau law caused by an early “hard” saturating nonlinearity used to partition the human short-wave cone pathway. Vision Research, 38(23), 3703–3728. [PubMed]
Straub, D., & Rothkopf, C. A. (2022). Putting perception into action with inverse optimal control for continuous psychophysics. eLife, 11, e76635. [PubMed]
Stromeyer, C. F., III, Eskew, R. T., Jr., Kronauer, R. E., & Spillman, L. (1991). Temporal phase response of the short-wave cone signal for color and luminance. Vision Research, 31(5), 787–803. [PubMed]
Wandell, B. A., Poirson, A. B., Newsome, W. T., Baseler, H. A., Boynton, G. M., Huk, A., ... Sharpe, L. T. (1999). Color signals in human motion-selective cortex. Neuron, 24(4), 901–909. [PubMed]
Wisowaty, J. J., & Boynton, R. M. (1980). Temporal modulation sensitivity of the blue mechanism: Measurements made without chromatic adaptation. Vision Research, 20(11), 895–909. [PubMed]
Figure 1.
 
Experimental overview. (a) The LS cone contrast plane. The x-axis shows L-cone contrast and the y-axis S-cone contrast. Stimulus modulations around a background are represented as vectors in this plane, with the direction providing the relative strength of the L- and S-cone contrasts and the magnitude providing the overall contrast of the modulation. Accordingly, we specify stimuli by their angular direction in the LS contrast plane and their overall contrast by their vector length in this plane. A set of example directions is shown, with the color of each limb providing the approximate appearance of that portion of the modulation. (b) Example stimuli. In the tracking task, subjects tracked the position of a horizontally moving color Gabor modulation. In the detection task, subjects had to indicate which of two temporal intervals contained a horizontally moving color Gabor modulation. (c) The top panel shows position traces for an example tracking run in the experiment. The gray line shows the position of the target center as a function of time; the black line shows the subject's cursor position as a function of time. The bottom panel shows the velocities for the example data in the panel above. The gray line shows the velocity of the target as a function of time; the black line shows the cursor velocity as a function of time. (d) Example tracking cross-correlograms. The gray line shows the cross-correlation between the stimulus and cursor velocities. This cross-correlogram is fit with a log-Gaussian function, and the time of the peak of the fit provides the estimate of tracking lag. Examples are given for stimuli that evoke longer (stimulus “A”) and shorter (stimulus “B”) lags.
Figure 2.
 
Lag versus contrast for subject 2. Tracking lag as a function of modulation contrast separately for each chromatic direction. The closed circles indicate the lags and are grouped into their corresponding chromatic direction by plot colors. The chromatic direction angles are displayed in the legend of each panel. The error bars on each lag estimate are the SEMs found via a bootstrap procedure (that is, the standard deviations of the bootstrapped parameter estimates). The dashed curves in each panel are the lag predictions of the color tracking model (CTM; see the following sections). The line colors are matched to the colors of the corresponding symbols.
Figure 3.
 
Isoperformance contours for the color tracking model. The gray ellipse in each panel shows the isoresponse contour associated with the color tracking task for each subject. The isoperformance contour is the set of stimuli that result in the same tracking lags. This contour is constrained in the CTM to take the form of an ellipse in the LS cone contrast plane, and its shape is constrained to be the same independent of overall contrast. The ellipse is specified by two parameters: (1) the ellipse angle and (2) the minor axis ratio. The ellipse angle represents the direction of least sensitivity defined counterclockwise from the positive abscissa. The minor axis ratio is the ratio of the vector lengths between the most and least sensitive directions. The scale of the ellipses across subjects is normalized to have a length of 2 along their major axis (1 in each direction); for this reason the contrast values on the axes should be interpreted as normalized rather than absolute. The ellipse parameters (means and their standard errors estimated by bootstrap resampling) are provided in each panel.
Figure 4.
 
Nonlinearity of the CTM. The gray curve in each panel shows the nonlinearity of the color tracking model. The x-axis is the equivalent contrast, which is the output of the isoperformance contour stage. The y-axis represents the response, which in this figure is the lag from the tracking task in seconds. The closed circles in each plot are the tracking lags. The stimulus contrast for each lag has been adjusted by the isoperformance contour, allowing the lags for all stimuli to be plotted on the equivalent contrast axis. The color map denotes which color corresponds to which direction. The inset color map in each panel marks the directions tested, which were unique to each subject. Parameters of the exponential fit to the data (not bootstrapped) are provided in each panel.
Figure 5.
 
Detection versus contrast for subject 2. The figure shows the fraction correct in the detection task as a function of the stimulus contrast for each chromatic direction used in the experiment. The closed circles in each panel are the fraction correct for an individual stimulus direction, with angle in the LS contrast plane provided above each panel. The solid curves are the fraction correct predictions of the color detection model fit to all the data simultaneously (see below). Note the difference in x-axis range between panels.
Figure 6.
 
Isoperformance contours for the color detection task. The gray ellipse in each panel shows the isoresponse contour associated with the color detection task for each subject. These contours define a set of stimuli which produce the same fraction correct in the detection task. The isoperformance contour of the color detection model has the same parameterization as the color tracking model. The scale of the ellipses across subjects is normalized to have a length of 2 along their major axis (one in each direction); for this reason the contrast values on the axes should be interpreted as normalized rather than absolute. The ellipse parameters (means and standard errors estimated by bootstrap resampling) are provided in each panel.
Figure 7.
 
Nonlinearity of the color detection model. The gray curve in each panel shows the nonlinearity of the color detection model for all subjects. The x-axis is equivalent contrast. The y-axis is the response, which in the figure, is the fraction correct of the detection task. The closed circles in each plot are the fraction correct for each condition tested. The stimulus contrast for each closed circle has been adjusted by the isoperformance contour, allowing the fraction correct to be plotted on the equivalent contrast axis. The color map denotes which color corresponds to which direction. Psychometric function parameters from fit to data (not bootstrapped) are provided in each panel (see Methods).
Figure 8.
 
Comparison of predicted S- and L-cone tracking lags. The figure shows predictions obtained from the CTM for the difference between S- and L-cone tracking lag when contrast is equated across the two chromatic directions. The x-axis shows the common cone contrast, and the y-axis the difference in tracking lag in seconds. The model would allow similar predictions to be made for any pair of chromatic directions and relative contrasts.