V1-based modeling of discrimination between natural scenes within the luminance and isoluminant color planes
Michelle P. S. To; David J. Tolhurst
Journal of Vision January 2019, Vol.19, 9. doi:https://doi.org/10.1167/19.1.9
© ARVO (1962-2015); The Authors (2016-present)
Abstract

We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to more realistically implement sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). The model is based originally on a reliable luminance model (Watson & Solomon, 1997) which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters. This paper examines whether this may be a false assumption, by decomposing our original full-color stimulus images into monochromatic and isoluminant variants, which observers rate separately and which we model separately. The ratings for the original full-color scenes correlate better with the new ratings for the monochromatic variants than for the isoluminant ones, suggesting that luminance cues carry more weight in observers' ratings to full-color images. The ratings for the original full-color stimuli can be predicted from the new monochromatic and isoluminant rating data by combining them by Minkowski summation with power m = 2.71, consistent with other studies involving feature summation. The model performed well at predicting ratings for monochromatic stimuli, but was weaker for isoluminant stimuli, indicating that mirroring the monochromatic models is not sufficient to model the color planes. We discuss several alternative strategies to improve the color modeling.

Introduction
One strand of vision research has been to ask whether psychophysical studies of human thresholds or discrimination can be interpreted quantitatively with the response properties of single neurons in model experimental animals; such comparisons have a long history (e.g., De Valois, 1965; Ratliff, 1965). We have been investigating the perception of spatiochromatic differences in naturalistic images and movies, and we have asked whether a neurophysiologically based computational model (after Watson, 1987) can explain the perceived magnitudes of such changes (To, Gilchrist, & Tolhurst, 2015; To, Gilchrist, Troscianko, & Tolhurst, 2011; To, Lovell, Troscianko, & Tolhurst, 2010). It is our aim to ask whether such a model will better explain human performance if it simulates neuronal response behavior with greater fidelity. 
In our experiments, human observers provide magnitude estimation ratings of the suprathreshold differences they perceive between pairs of natural images (To et al., 2010). Some of the image pairs show truly natural differences: they comprise two photographs of the same scene taken at different times. Other image differences are imposed by computational postprocessing. Thus, images could change in whole or in part in terms of color (hue and/or saturation), spatial frequency distribution (blur or sharpening), content (objects moving, changing aspect, appearing or disappearing), texture, and shadows. To et al. (2010) used full-color (normal) scenes but also inverted pixel-reversed variants of these scenes, whose purpose was to remove any higher-level features and content that may influence observers' ratings. 
Our visual difference predictor model is originally based on other physiological models of visual discrimination (e.g., Watson, 1987; Watson & Ahumada, 2005; Watson & Solomon, 1997), and incorporates neurophysiological findings from quantitative studies of V1 neuron receptive fields (DeAngelis, Ohzawa & Freeman, 1993; De Valois, Albrecht, & Thorell, 1982; Field & Tolhurst, 1986; Jones & Palmer, 1987; Movshon, Thompson, & Tolhurst, 1978a, 1978b, 1978c; Ringach, 2002). In addition to linear summation mechanisms, the model includes two key nonlinearities: nonspecific contrast normalization (Bonds, 1989; Carandini, Heeger, & Movshon, 1997; DeAngelis, Robson, Ohzawa, & Freeman, 1992; Foley, 1994; Heeger, 1992; Tolhurst & Heeger, 1997), and orientation-specific surround suppression (Blakemore & Tobin, 1972; Cavanaugh, Bair, & Movshon, 2002; DeAngelis et al., 1994; Henry, Joshi, Xing, Shapley, & Hawken, 2013; Meese, 2004; Sceniak, Ringach, Hawken, & Shapley, 1999). Models of this kind have been good at explaining patterns of detection thresholds for monochromatic gratings and Gabor patches (To et al., 2017; Watson & Ahumada, 2005; Watson & Solomon, 1997) and detection experiments with monochromatic natural images (Párraga, Troscianko, & Tolhurst, 2005; Rohaly, Ahumada, & Watson, 1997; Tolhurst et al., 2010). In particular, the two nonlinearities are necessary to model contrast discriminations even with quite simple sinewave grating stimuli (Foley, 1994; Meese, 2004; To et al., 2017) and it would seem logical to begin any model of natural image discriminations by including them (Rohaly et al., 1997). One aim of the present paper is to investigate whether model performance is affected by the order in which the two nonlinearities (contrast normalization and surround suppression) are imposed. In their study of sinusoidal grating dipper functions, To et al. (2017), found that the best model predictions were obtained when the two inhibitory mechanisms were implemented sequentially rather than in parallel. This change to our 2010 model is more consistent with neurophysiological findings (e.g., Henry et al., 2013). 
While such models are a credible description of the early foveal coding of monochromatic information, our interest is in studying the perception of differences in natural images shown in color. We have always assumed that we should transform the RGB images, and then model three planes—a luminance plane, a red/green opponent plane and a blue/yellow opponent plane (De Valois, 1965; Hurvich & Jameson, 1957). We chose to recode the stimulus images according to MacLeod and Boynton (1979) weightings to give a luminance plane and two isoluminant cone-opponent planes: L/M and S/(L + M). This follows Párraga, Brelstaff, Troscianko, and Moorehead (1998) and a large body of psychophysical evidence for parallel, near-independent processing of luminance and isoluminant cone-opponent gratings (e.g., Losada & Mullen, 1994; Mullen, 1985). In our modeling, the isoluminant cone-opponent planes are processed by receptive fields with the same orientation and frequency tuning as the luminance channels (Beaudot & Mullen, 2005). Our luminance plane model (an extension of Watson & Solomon, 1997) is based on numerous detailed quantitative studies of receptive field shape and bandwidth, and clear models of normalization and surround suppression (see above). By contrast, very little is agreed about the neurophysiology of color coding in V1 (Shapley & Hawken, 2011). Thus, the modeling of the isoluminant opponent planes in our model is subject to many assumptions, which we shall consider in the Discussion
To et al. (2010) found that their best model, with all its assumptions, was moderately successful at predicting the suprathreshold ratings for full-color naturalistic stimuli: r = 0.59 for the normal images and r = 0.72 for the inverted pixel-reversed ones. We argued that the better correlation for pixel-reversed images results from observers making decisions solely on the basis of low-level visual differences (which our model attempts to explain) rather than any semantic content. On one hand, even a correlation of 0.59 is impressive for a simplistic model where the behavior of several million neurons is defined by just seven free parameters and a few fixed features that could have been free parameters, such as the specific orientation tuning bandwidth, receptive-field aspect ratio (Tolhurst & Thompson, 1981), or the spacing in octaves between successive frequency bands. However, we do wish to understand why the correlations are not better! 
One possibility is that we have made too many false assumptions in trying to extend the good monochromatic models (To et al., 2017; Tolhurst et al., 2010; Watson & Solomon, 1997) to the full-color case. There are questions about how the three cones contribute to red/green and blue/yellow opponency (De Valois & De Valois, 1993; Mollon & Cavonius, 1987; Schmidt, Neitz, & Neitz, 2014; Schmidt, Touch, Neitz, & Neitz, 2016; Stockman & Brainard, 2010) and there is little consensus on the receptive-field organization of V1 neurons responsible for color coding (Conway, 2001; Shapley & Hawken, 2011). Therefore, in this study, we have decomposed our original full-color images (both the normal and the pixel-reversed) into monochromatic and isoluminant variants and have asked observers to rate the perceived differences between pairs of monochromatic scenes and isoluminant scenes separately. First, we ask whether the different planes contribute equally to observers' ratings. Second, we examine whether Minkowski summation can model the integration of the luminance and cone-opponent planes into a single rating for full-color stimuli. We have previously reported that Minkowski summation with power m = 2.5–3.0 can be used to model how differences along different feature dimension are combined (To, Baddeley, Troscianko, & Tolhurst, 2011; To, Lovell, Troscianko, & Tolhurst, 2008). Finally, we will model the new monochromatic and isoluminant ratings separately to determine whether the luminance-only model with all its neurophysiological and psychophysical backing is good enough for monochromatic natural images, and whether the poor overall performance of our 2010 model is, indeed, due to weakness in the modeling of the two isoluminant color opponent planes. 
Methods
Observers
Seven observers participated in all four separate experiments, and they remained naïve to the purpose of each. The observers were students or researchers at Lancaster University, UK. To ensure that they had normal or corrected-to-normal vision, we assessed their spatial acuity with the Snellen acuity chart and color vision with the Ishihara color test (13th Ed.) prior to all testing. Informed consent was obtained from all observers. 
Display equipment and stimulus construction
The stimuli were presented on a NEC MultiSync FP2141SB CRT 22 in. display driven at 800 × 600 pixels and a frame rate of 100 Hz by a ViSaGe system (Cambridge Research Systems: Rochester, UK). 
The stimuli in the present four experiments were monochrome and isoluminant variants of the full-color 900 normal and 900 pixel-reversed image pairs previously used in To et al. (2010). The 900 original normal images contained animals, landscapes, objects, people, plants, and/or garden or still-life scenes (e.g., Figures 1 and 2). The differences between the images in a pair could include changes in content (with objects appearing or moving location), the spatial frequency distribution (images sharpened or blurred), color (saturation and/or hue), shape, texture, and shadows; see To et al. (2010) for examples. Of these, 325 of the pairs consisted of two photographs of the same scene taken at different times, and we call these truly natural or ecologically valid (e.g., Figure 1). The interval between taking the photographs could be a few seconds to 10s of minutes. The image differences could arise, for example, from changing shadows, melting snow, or the effects of wind; they could involve the movement of animals or vehicles; or they could involve the photographer rearranging objects within the scene. Thus, many of the pairs would have involved some affine transform of an object or objects in the scene; unfortunately, we did not construct pairs in which the whole scene changed, as if the observer had changed their viewpoint. 
Figure 1
 
Here are two examples of ecologically valid image pairs that consist of two photographs of the same scene taken at different times, and their derived variants. Panel A presents a pair where a subject has appeared/disappeared (short time interval) and Panel B presents a scene where the lighting and content have changed (long time interval). The full-color (top row in each panel) normal images (left pair) and their pixel-reversed variants (right pair) were studied in To et al. (2010). Here we study the monochromatic and isoluminant variants of the normal full-color pairs on the left, and of the pixel-reversed pairs (right). In constructing the isoluminant images, we converted CIE XYZ representations with a matrix that made the final images isoluminant (according to L*a*b) on the experimental display. For the present figures, they have been transformed into RGB color space hopefully to make them look roughly isoluminant for the reader.
Figure 2
 
Here are two examples of image pairs that only differ along a color dimension in part of the image. Panels A and B present pairs where color changes are noticeable in the full-color and isoluminant pairs but less so in the monochromatic pairs. As in the previous figure, the original full-color normal pairs with their monochromatic and isoluminant variants are shown on the left, the pixel-reversed pairs, also presented with their variants, are shown on the right.
The remaining image pairs involved some kind of post processing, usually involving MATLAB (MathWorks, Natick, MA) programming, and some of these pairs contained combinations of two types of imposed change (To et al., 2008). There were 273 processed pairs that involved a change only in the hue and/or saturation of part or all of one image (color-only change; e.g., Figure 2); these color changes were not guaranteed to be isoluminant. 
The pixel-reversed images were modified versions of the originals, in which the content was inverted and pixel-level values were reversed so the brightest pixels in the original were swapped in location with the dimmest (e.g., Figures 1 and 2, right). The purpose of modifying the normal images was to reduce the higher-level semantic content of scenes, while maintaining the lower-level visually discriminable elements intact. They were similar in appearance to inverted negatives of the originals, except that the pixel-reversal algorithm (To et al., 2010) retained the same overall luminance. 
For Experiments 1 and 2 in the present study, monochromatic stimuli were generated by averaging the R, G, and B planes in the original full-color To et al. (2010) stimuli (e.g., Figures 1 and 2, middle rows in panels). For Experiments 3 and 4, isoluminant stimuli were produced by first measuring the CIE XYZ coordinates of the three phosphors on the NEC display, then transforming the To et al. (2010) stimuli to XYZ and subsequently to L*a*b space. “L” in all the pixels was set to the same average value, before the L*a*b images were transformed back to XYZ and then to RGB space (see Figures 1 and 2, lower row in panels). The “L” value was that given by a mid-gray on the display ([128,128,128]). Note that, since the monochromatic and isoluminant stimuli are derived from the original To et al. (2010) images, they also contain the same content and many of the same feature differences as the original normal images. The magnitudes of changes along the color dimensions were typically affected differently by transforming the full-color photographs into monochromatic and isoluminant versions. Examples A and B from Figure 2 show that, in most cases, the isoluminant pairs preserve some of the changes from the full-color pairs, but the two images in a monochromatic pair appear very similar. 
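As a concrete illustration, the decomposition can be sketched in MATLAB along the following lines. This is not the original stimulus-generation code: the built-in sRGB conversions (rgb2lab, lab2rgb) stand in for the display-calibrated XYZ matrix described above, and the file name is hypothetical.

```matlab
% Illustrative sketch of the stimulus decomposition (assumed workflow).
rgb = im2double(imread('scene.png'));        % original full-color stimulus, values 0-1

% Monochromatic variant: average the R, G, and B planes.
mono = mean(rgb, 3);

% Isoluminant variant: convert to L*a*b*, set every pixel's L* to the value of
% display mid-gray [128, 128, 128], then convert back to RGB.
lab        = rgb2lab(rgb);
grayLab    = rgb2lab(repmat(128/255, [1 1 3]));
lab(:,:,1) = grayLab(1);                     % clamp the luminance plane to mid-gray L*
iso        = lab2rgb(lab);
iso        = min(max(iso, 0), 1);            % clip any out-of-gamut values
```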
The images were 256 × 256 pixels square (covering an area of 3.2 degrees of visual angle), but the 30 pixels at the edges of the stimuli were blended with the gray surround by compressing the pixel values towards 128 with a Gaussian falloff with a standard deviation of 12 pixels. The surrounding gray of the display had a luminance of 88 cd/m².
Standard pairs
In the four experiments, observers were presented with pairs of images, termed test pairs (TPs), and were asked to rate how different the images in a pair appeared to them relative to a standard pair (SP), whose difference was set to 20: 
  • Image differences that were similar to the difference in the SP were rated 20.
  • Image differences that were less than the difference in the SP were rated between 1 and 19.
  • Image differences that were greater than the difference in the SP were rated over 20, with no imposed upper limit.
  • Seemingly identical images were given a 0 (zero) rating.
Observers were told that all difference ratings should be proportional to the SP scale so that if, for example, the TP was half or twice as different as the standard, they should enter 10 (=20/2) or 40 (=20 × 2), respectively. 
In the original To et al. (2010) study, the SP was a pair of lily photographs that differed in color saturation (see Figure 3A). In the current study, the SP for Experiments 1 and 2 was a monochromatic variant of the original (see Figure 3B). However, when an isoluminant version of the original SP was produced, its two images appeared too similar, so the difference within the isoluminant SP was magnified (see Figure 3C). 
Figure 3
 
Standard pairs used in the original To et al. (2010) study with full-color pairs (A), in Experiments 1 and 2 with monochromatic pairs (B), and in Experiments 3 and 4 with isoluminant pairs (C). The same standard pair was used for the normal and pixel-reversed version of an experiment.
Stimulus presentation protocol
The experimental protocol has been described in detail in To et al. (2010). Observers were expected to try to fixate the center of each image; a fixation spot was present between stimuli, but it was extinguished during the 833 ms when a stimulus image was actually present. After a number of practice trials, each experiment began with the sequential presentation of the two images in the SP with an interval between them: fixation point on otherwise mid-gray display (83 ms), Standard Image 1 (833 ms), fixation point (83 ms), and Standard Image 2 (833 ms). The SP was then shown after every subsequent 10 trials to remind observers of their reference point. Following this was the presentation of the TPs. 
The presentation order of the TPs was randomized differently for the seven observers. In addition, the two images within each TP were also presented in random order in three 833 ms intervals: Fixation point, first image from TP, fixation point, second image from TP, fixation point, and first image from TP again. The rationale behind this three-interval presentation was to allow observers to see change directions from first image to second image, and from second image to first image. Following presentation of a TP, a response screen displayed a random number between 10 and 30, which the observers were asked to modify into their judged rating of the perceived difference between the images in that pair. 
Each experiment was divided into four sessions in which the observer had to rate the difference between 225 of the TPs. These experimental sessions could be completed on the same or different days. 
Data collation and statistical analysis
In each of the four experiments, seven observers rated each TP once. The ratings of each observer in an experiment were normalized against that observer's median rating within the experiment. The normalized ratings for each TP were then averaged across the seven observers, and these averaged ratings were then multiplied by the grand average of all ratings (for all TPs from all observers' ratings in that experiment) so that the data roughly centered on the standard value of 20. The data in the graphs in the Results section therefore only show the mean ratings given to each TP, averaged across observers. The standard error of the mean rating averaged about 3.0, but tended to be higher for the higher averaged ratings and lower for the very low average ratings. We previously suggested that observers sometimes differed quite markedly when giving ratings for big perceptual changes, even when they agreed more consistently for small and moderate differences (To et al., 2010); however, this applies to no more than about 90 of the 5,400 ratings, those whose average was above 40 (twice the standard). The few over-exuberant ratings were outliers in each observer's responses, so that standardizing their data to z scores, say, might not “correct” the problem (which affects few of the data points). 
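A minimal sketch of this collation, assuming the raw ratings for one experiment are held in a hypothetical nObservers × nPairs matrix called ratings:

```matlab
% Sketch of the rating collation described above (variable names are illustrative).
normed      = ratings ./ median(ratings, 2);   % scale each observer by their own median rating
meanPerPair = mean(normed, 1);                 % average the normalized ratings across observers
grandMean   = mean(ratings(:));                % grand average of all ratings in the experiment
collated    = meanPerPair * grandMean;         % rescaled ratings, roughly centered on the standard of 20
```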
V1-based visual difference predictor modeling (VDP)
We have been developing a computational model of the perceived magnitude ratings in experiments with full-color naturalistic images, trying to model the responses of millions of V1 simple or complex cells in response to the two images in a pair (To, Gilchrist et al., 2011; To et al., 2010; To et al., 2015). As mentioned in the Introduction, this model derives from the seminal work of Rohaly et al. (1997), Watson (1987), and Watson and Solomon (1997). We have also elaborated the model in studies of contrast discrimination in monochromatic naturalistic images and sinusoidal gratings (Tolhurst et al., 2010; To et al., 2017). The details of the modeling and the physiological and psychophysical justification of the various steps are given in our previous papers. 
The first step of the model is of particular interest to the present study. It is widely accepted that colored lights are encoded in three planes: luminance, red/green opponent, and blue/yellow opponent (De Valois, 1965; Hurvich & Jameson, 1957; Losada & Mullen, 1994). Thus, the full-color images (normal and pixel-reversed) are recoded with a MacLeod and Boynton (1979) transform into a luminance plane, and two cone-opponent isoluminant planes: L/M opponent and S/(L + M) opponent. The complex cell model is then run in parallel on these three planes with identical receptive field code and identical parameters (see below). A plane is first convolved with odd- and even-symmetric Gabor functions of five optimal spatial frequencies (one octave interval) and six optimal orientations (60 receptive field shapes in all, 256 × 256 locations in each set). Division by local mean luminance gives contrast rather than luminance responses. Complex cell responses are calculated as the RMS of the responses of the odd- and even-symmetric fields to give 30 sets of complex-cell responses. In this study, all the Gabor functions are self-similar with bandwidth of about one octave, but the field length or aspect ratio (and, therefore, the orientation specificity) is a free parameter in the fitting procedure. 
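The front end of one plane's model can be sketched as follows. The sketch is illustrative only: the frequency values, the envelope constant, the use of a circular (rather than elongated) envelope, and the global (rather than local) mean division are simplifying assumptions, and gaborField is a hypothetical helper.

```matlab
% Sketch of the complex-cell front end for one image plane.
plane   = mono / mean(mono(:)) - 1;            % crude contrast plane (global rather than local mean)
freqs   = [4 8 16 32 64] / 256;                % five frequencies, one octave apart (cycles/pixel)
orients = (0:5) * pi/6;                        % six preferred orientations
resp = cell(numel(freqs), numel(orients));
for fi = 1:numel(freqs)
    for oi = 1:numel(orients)
        sigma = 0.4 / freqs(fi);               % envelope scales with the period (assumed constant)
        even  = conv2(plane, gaborField(freqs(fi), orients(oi), 0,    sigma), 'same');
        odd   = conv2(plane, gaborField(freqs(fi), orients(oi), pi/2, sigma), 'same');
        resp{fi, oi} = sqrt(even.^2 + odd.^2); % complex-cell response: RMS of the quadrature pair
    end
end

function g = gaborField(freq, theta, phase, sigma)
% Gabor receptive field with a circular Gaussian envelope (simplified).
half   = ceil(3 * sigma);
[x, y] = meshgrid(-half:half, -half:half);
xr     = x*cos(theta) + y*sin(theta);
g      = exp(-(x.^2 + y.^2) / (2*sigma^2)) .* cos(2*pi*freq*xr + phase);
g      = g - mean(g(:));                       % remove DC so the field responds only to contrast
end
```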
The quasi-linear responses of the many complex cells are then subject to two nonlinearities deduced from physiological and psychophysical studies with gratings: within-field, nonspecific contrast normalization or gain control (Carandini et al., 1997; Foley, 1994; Heeger, 1992; Watson & Solomon, 1997) and orientation-specific surround suppression (Blakemore & Tobin, 1972; Cavanaugh et al., 2002; Meese, 2004; Sceniak et al. 1999; To et al., 2017). At each location (x,y) in the stimulus, we calculate a nonspecific contrast normalization signal Nx,y by summing the quasi-linear contrast responses (C) of the 30 complex cell fields exactly centered at that point (across frequency f and orientation o), each raised to a power q:  
\begin{equation}\tag{1}N_{x,y} = \sum_{f = 1}^{5} \sum_{o = 1}^{6} \left| C_{x,y,f,o} \right|^{q}\end{equation}
 
This one nonspecific signal will suppress the responses of all 30 fields at the location equally, and q is a free parameter in the fitting procedure. 
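A sketch of Equation 1, using the hypothetical resp array from the front-end sketch above:

```matlab
% Pool the quasi-linear responses of all 30 fields centered on each pixel,
% each raised to the power q, into one nonspecific normalization signal N.
q = 2;                                         % free parameter (value purely illustrative)
N = zeros(size(resp{1,1}));
for fi = 1:size(resp, 1)
    for oi = 1:size(resp, 2)
        N = N + abs(resp{fi, oi}).^q;          % sum over frequency and orientation (Equation 1)
    end
end
```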
We model surround suppression as coming from an elongated area (aspect ratio 1.6) centered on the receptive field, elongated along the complex cell's optimal orientation. The spread of this elongated Gaussian blob is proportional to the period of a cell's optimal spatial frequency, and is a free parameter in the fitting procedure (“surround spread”, expressed as a proportion of the period of the neuron's best spatial frequency). A different surround signal Sx,y,f,o is calculated (see To et al., 2010) at each point and for each of the five spatial frequencies and six orientations. The calculation involves raising responses to a free parameter r.
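The surround signal for one channel might be sketched as below; the exact published calculation (To et al., 2010) is more detailed, so the kernel normalization and the parameter values here are assumptions.

```matlab
% Sketch of the surround-suppression signal for one channel (indices fi, oi),
% pooled over an elongated Gaussian aligned with the preferred orientation.
r        = 2;                                  % free exponent (illustrative)
spread   = 1.0 / freqs(fi);                    % surround spread, proportional to the channel's period
sigMajor = 1.6 * spread;                       % elongated along the preferred orientation (aspect ratio 1.6)
sigMinor = spread;
half   = ceil(3 * sigMajor);
[x, y] = meshgrid(-half:half, -half:half);
xr =  x*cos(orients(oi)) + y*sin(orients(oi));
yr = -x*sin(orients(oi)) + y*cos(orients(oi));
blob = exp(-(xr.^2 / (2*sigMajor^2) + yr.^2 / (2*sigMinor^2)));
blob = blob / sum(blob(:));                    % unit-volume pooling kernel
S = conv2(abs(resp{fi, oi}).^r, blob, 'same'); % surround signal for this channel
```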
In To et al. (2010), the two nonlinearities were applied in parallel at the same point in the model. However, following evidence that surround suppression probably follows contrast normalization (Baker, Meese, & Summers, 2007; Durand, Freeman, & Carandini, 2007; Henry et al., 2013; Li, Thompson, Duong, Peterson, & Freeman, 2006; Petrov, Carandini, & McKee, 2005), we found that our model was more effective at explaining grating contrast discrimination if the application of the two nonlinearities was sequential rather than parallel (To et al., 2017). 
Parallel models
In our original model (To et al., 2010), the responses of each of the millions of “neurons” in the model were raised to power p1, and were finally subjected to the two nonlinear suppressive effects by division at the same time, using a modified version of the Naka-Rushton equation, an elaboration of Heeger's (1992) formulation for contrast normalization. In the case of the parallel model, the final response of the field at location (x,y), frequency f, orientation o, and symmetry s is:  
\begin{equation}\tag{2}\mathit{response}_{x,y,f,o} = \frac{\operatorname{sign}\left( c_{x,y,f,o} \right) \cdot \left| c_{x,y,f,o} \right|^{p_1}}{1 + W_N \cdot N_{x,y} + W_S \cdot S_{x,y,f,o}}\end{equation}
where sign extracts the sign (+ or −) of \(c_{x,y,f,o}\); WN and WS are weights, and the calculations of Nx,y and Sx,y,f,o involve raising response values to powers q and r, respectively, as described above. The surround suppressive signal S is calculated from the same quasi-linear contrast responses as the normalizing signal N.  
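In code, Equation 2 for one channel reduces to a single element-wise expression (parameter values below are placeholders, not the fitted values in Table 2):

```matlab
% Sketch of the parallel model (Equation 2): both suppressive signals divide
% the powered quasi-linear response at the same stage.
p1 = 2.4;  WN = 0.05;  WS = 0.05;              % placeholder parameter values
c  = resp{fi, oi};                             % quasi-linear contrast response of one channel
response = sign(c) .* abs(c).^p1 ./ (1 + WN*N + WS*S);
```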
Sequential models
Here, an intermediate normalized response (i_response) is calculated based on the normalizing signal only, and then the surround suppressive signal is calculated from these normalized responses (i_response). There are two successive Naka-Rushton equations:  
\begin{equation}\tag{3}\mathit{i\_response}_{x,y,f,o} = \frac{\operatorname{sign}\left( c_{x,y,f,o} \right) \cdot \left| c_{x,y,f,o} \right|^{p_1}}{1 + W_N \cdot N_{x,y}}\end{equation}
 
\begin{equation}\tag{4}\mathit{response}_{x,y,f,o} = \frac{\operatorname{sign}\left( \mathit{i\_response}_{x,y,f,o} \right) \cdot \left| \mathit{i\_response}_{x,y,f,o} \right|^{p_2}}{1 + W_S \cdot S_{x,y,f,o}}\end{equation}
 
It will be noted that there is an extra parameter here (p2). 
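A sketch of the sequential variant, continuing the placeholder names from the parallel sketch above; note that the surround signal is now recomputed from the normalized responses rather than from c:

```matlab
% Sketch of the sequential model (Equations 3 and 4).
p1 = 2.4;  p2 = 1.0;  WN = 0.05;  WS = 0.05;   % placeholder parameter values
i_response = sign(c) .* abs(c).^p1 ./ (1 + WN*N);                         % Equation 3
S_norm     = conv2(abs(i_response).^r, blob, 'same');                     % surround signal from i_response
response   = sign(i_response) .* abs(i_response).^p2 ./ (1 + WS*S_norm);  % Equation 4
```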
Final pooling of all the difference cues
We finally have a model of the responses or outputs of all the neurons to one plane of one image in a pair. The process is repeated for the comparison image, and we subtract the model outputs for the two stimuli neuron-by-neuron. The many visibility cues across x, y, frequency, and orientation are combined into a single value by Minkowski summation with power m (Watson & Solomon, 1997). The n (1.97 million) individual visibility cues are raised to the power m, are summed and the mth root taken.  
\begin{equation}\tag{5}\mathit{overall\ difference} = \left( \sum_{i}^{n} \left( \mathit{difference\ cue}_{i} \right)^{m} \right)^{1/m}\end{equation}
 
This generates a single number, which is predicted to be directly proportional to the magnitude rating of the perceived difference for that plane. For the ratings for monochromatic stimuli, we model only the luminance plane and this number should be proportional to the observers' final rating. For the isoluminant stimuli, we model the L/M and S/(L + M) planes, so that the final rating prediction is obtained by a Minkowski summation of the two plane cues with the same exponent m, with the cue in the S/(L + M) plane weighted against the L/M plane with a parameter WB. Finally, for the full-color stimuli, all three planes are modeled and the final rating obtained by a Minkowski summation of three cues, with the isoluminant planes weighted against the luminance plane with weights WR and WB.
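The pooling stage can be sketched as follows; responseA and responseB are hypothetical arrays holding the model responses of every unit to the two images of a pair, and the plane-combination step is shown only as a comment because WR and WB apply only to the full-color case.

```matlab
% Sketch of the final pooling (Equation 5) for one plane.
m    = 3;                                      % Minkowski exponent (free parameter; value illustrative)
cues = abs(responseA(:) - responseB(:));       % all difference cues across x, y, frequency, and orientation
planeCue = sum(cues.^m)^(1/m);                 % single difference cue for this plane

% For full-color stimuli the three plane cues are combined in the same way:
% prediction = (planeLum^m + (WR*planeRG)^m + (WB*planeBY)^m)^(1/m);
```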
Finding model parameters
Depending upon which experiments are fitted and whether the model is parallel or sequential, there are 8–11 free parameters. These are found by iteratively searching for the combination of parameters that maximizes the correlation coefficient between the model output and the observers' ratings (using fminsearch() in MATLAB). For each image pair, the ratings of the participating observers were standardized and averaged (see above). We report single fits for the 900 normal and 900 pixel-reversed ratings together (n = 1,800). However, we previously suggested that the ratings for some kinds of image change will never be satisfactorily fit by the kind of V1 model that we implement. Our model neurons very literally compare the images point-by-point and can detect small changes in object location or texture; the observers barely notice these. As well as fitting all 1,800 data for each model, we have separately fit a subset of 1,324 data, after discarding the (“unfittable”) images pairs with small spatial changes (see To et al., 2010). 
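A sketch of the search, where runModel is a hypothetical function that evaluates the whole VDP model for every image pair given a parameter vector, corr is the Pearson correlation from the Statistics Toolbox, and the starting values are arbitrary:

```matlab
% Maximize the correlation between model predictions and averaged ratings by
% minimizing its negative with fminsearch.
objective  = @(params) -corr(runModel(params, imagePairs), meanRatings);
params0    = [2.4 1.0 0.05 0.05 2 2 3 1.6];    % illustrative starting values for the free parameters
bestParams = fminsearch(objective, params0, optimset('Display', 'iter'));
```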
To compare the performances of different versions of models to a given data set, we calculate the corrected Akaike coefficient for small sample-size from the residual sum of squares of the regression of actual rating plotted against model prediction, while taking into account the number of parameters (Motulsky & Christopoulos, 2004). Although the actual Akaike information criterion (AIC) number for any one fit is not very informative, the delta AIC (ΔAIC) between two models weighs up a difference in residual sums of squares against any difference in the number of parameters, and can give some indication of the relative success of different models.  
\begin{equation}\tag{6a}AIC = n \cdot \ln\left( \frac{ssq}{n} \right) + 2k\end{equation}
 
\begin{equation}\tag{6b}\mathit{Corrected\ AIC} = AIC + \frac{2k\left( k + 1 \right)}{n - k - 1} = n \cdot \ln\left( \frac{ssq}{n} \right) + 2k + \frac{2k\left( k + 1 \right)}{n - k - 1}\end{equation}
where n is the number of data points to fit, k is one more than the number of model parameters, and ssq is the residual sum of squares deviation between model and data.  
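In code, the comparison amounts to a pair of one-line functions; the residual sums of squares and parameter counts below are placeholders.

```matlab
% Sketch of the corrected AIC comparison (Equations 6a and 6b).
aic  = @(ssq, n, k) n*log(ssq/n) + 2*k;                         % Equation 6a
aicc = @(ssq, n, k) aic(ssq, n, k) + 2*k*(k+1)/(n - k - 1);     % Equation 6b
% Delta AIC between two fitted models (negative values favor the first model):
deltaAIC = aicc(ssqSequential, 1800, kSequential) - aicc(ssqParallel, 1800, kParallel);
```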
Results
Experimental observations
To et al. (2010) measured the perceived differences between 900 pairs of naturalistic images, and between 900 pairs of inverted pixel-reversed versions of those pairs. In this study, we have re-evaluated those 1,800 ratings measured in To et al. (2010) by recruiting and testing seven new observers who each provided a total of 3,600 magnitude estimation ratings for image pairs presented in four suprathreshold discrimination experiments: 900 monochrome variants of the original full-color normal pairs, 900 monochrome variants of the original pixel-reversed pairs, 900 isoluminant variants of the normal pairs, and 900 isoluminant variants of the pixel-reversed pairs. 
Interobserver correlations and standard errors
Comparing each observer's 900 ratings in an experiment with each of the other observers, the interobservers' correlations ranged between 0.31 and 0.81. The correlations in Experiments 1 and 2 with monochromatic variants were higher compared to those in Experiments 3 and 4 with isoluminant variants, and the latter were slightly lower than in the original experiments in To et al. (2010). Table 1 presents interobserver correlations for each experiment. The correlations for monochromatic stimuli are noticeably higher compared to those for To et al.'s (2010) original full-color and the isoluminant variants. 
Table 1
 
Averages, maximal and minimal Pearson's r comparing each observer against others viewing the same stimuli.
Comparing ratings for full-color, monochromatic, and isoluminant stimuli
We examined the correspondence between ratings for the full-color image pairs (normal and pixel reversed) from To et al. (2010) with the new ratings given for their monochromatic and isoluminant variants. For each of the experiments, the normalized ratings of the observers were averaged together to generate a single numerical rating for each image pair. In general, we would expect the ratings for the monochromatic and isoluminant pairs to be the same as or (more likely) lower than the original ratings for the full-color images, since they now contain only partial cues to differences. However, this was not always the case, and we will consider this in the Discussion
When comparing the ratings for monochromatic normal pairs with the original full-color normal pairs, we found a good correspondence (r = 0.69, n = 900) between the two (see Figure 4A). The ratings in Experiment 1 seemed to be confined to scores under 50, but this was not the case in the original 2010 experiment, where ratings went up to 60. We identified the 273 original full-color image pairs containing color-only changes (red symbols). The color changes were not guaranteed to be isoluminant and so some of these pairs may have included luminance changes, but largely these changes would have been difficult to discern in the monochromatic versions (see Methods, Figure 2). Unsurprisingly, therefore, for these color-only change stimuli, the monochromatic ratings were much lower than for the original full-color pairs. Furthermore, these low ratings for the monochromatic versions were poorly correlated with the full-color ratings (r = 0.42, n = 273). The remaining 627 stimuli (gray symbols) gave ratings lying closer to the identity line. 
Figure 4
 
The graphs present the correspondence between magnitude estimation ratings from the current experiments with those previously collected in To et al. (2010). Monochromatic ratings from Experiments 1 (normal) and 2 (pixel-reversed) are plotted against full-color ratings of the equivalent originals in Panels A and B, respectively. Likewise, isoluminant ratings from Experiments 3 (normal) and 4 (pixel-reversed) are plotted against full-color ratings for the originals in Panels C and D, respectively. The red data points represent ratings for those image pairs that only contain image-processed color differences in the original full-color versions; they give only small or zero change in the monochrome versions. The gray data points correspond to all other stimulus types (see Methods).
There was a stronger correlation between the ratings for monochromatic and full-color inverted pixel-reversed image pairs (r = 0.79; Figure 4B): The gray data points are more closely clustered around the identity line. However, for the pairs with color-only changes in the originals, the ratings for the monochromatic versions again tend to be very low, as should be expected since the monochromatic versions show little of our applied color changes. 
The correspondence between the ratings for isoluminant and full-color ratings was noticeably weaker compared to the previous two comparisons. In the case of the normal pairs, the correlation between the two sets was r = 0.63 (n = 900) and the data points are more widely spread (see Figure 4C). For color-only changes (red symbols), the ratings for isoluminant variants were higher than those for the full-color images, and now they are reasonably correlated with the full-color ratings (r = 0.70, n = 273). The isoluminant ratings for the remaining stimuli (gray symbols) are also correlated to some extent with the full-color ratings (r = 0.66, n = 627); this follows since many of these stimuli will have involved changes in the geometry, location, or presence of objects that had some color difference from the rest of the image. These are different trends from those shown for the monochromatic stimuli (Figure 4A and 4B). 
In the case of the inverted pixel-reversed stimuli (Figure 4D), the correspondence between two sets of ratings was weaker (r = 0.60) and the isoluminant ratings were generally lower than the full-color ratings. However, the isoluminant color-only change data (red symbols) are still well correlated with the full-color versions (r = 0.76, n = 273). 
Integration of monochromatic and isoluminant cues
Given that the original images can be decomposed into the monochromatic and isoluminant images, we questioned whether the full-color ratings from To et al. (2010) could be predicted by combining the present monochromatic and isoluminant ratings. We have previously shown that a Minkowski summation with m = 2.5–3.0 was able to model how different features such as object movement, blur, and color change are integrated (To et al., 2008, To, Baddeley et al., 2011). We attempted to fit the 1,800 full-color ratings (normal together with pixel-reversed) by a Minkowski summation of their corresponding monochromatic and isoluminant variants. We minimized the summed squared error between actual and predicted ratings with three parameters: in addition to the Minkowski exponent m as a free parameter, we required non-unity weights for the monochromatic and isoluminant ratings (see Discussion). Similar to other studies of feature combination (To et al., 2008; To, Baddeley et al., 2011), we found that the best-fit Minkowski exponent was 2.71:  
\begin{equation}\tag{7}\mathit{fullcolor} = \sqrt[2.71]{\left( 0.78 \cdot \mathit{monochromatic} \right)^{2.71} + \left( 0.76 \cdot \mathit{isoluminant} \right)^{2.71}}\end{equation}
 
The correlation between the actual and modeled ratings was 0.85 (n = 1,800; see Figure 5), though the correlation was slightly higher for the pixel-reversed images. There is noticeable curvature for the higher ratings, as if the Minkowski sum of the components is not great enough to explain the full-color ratings. 
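The three-parameter fit behind Equation 7 can be sketched as follows, assuming mono, iso, and fullColor are 1,800-element vectors of averaged ratings (hypothetical variable names):

```matlab
% Fit a Minkowski exponent and two weights by least squares.
mink  = @(p) ((p(2)*mono).^p(1) + (p(3)*iso).^p(1)).^(1/p(1));  % predicted full-color rating
sse   = @(p) sum((mink(p) - fullColor).^2);                     % summed squared error
pBest = fminsearch(sse, [2.5 0.8 0.8]);                         % starting guesses for [m, wMono, wIso]
```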
Figure 5
 
Minkowski summation of monochromatic and isoluminant ratings compared with the actual full-color ratings from To et al. (2010). In Panel A, the best Minkowski predictions (with m = 2.71) for all full-color normal (blue, r = 0.83) and pixel-reversed (purple, r = 0.87) ratings are plotted against the actual ratings from To et al. (2010).
Ratings for truly natural image changes
It is of interest to ask what the relative contributions of the monochromatic and isoluminant cues are to the overall perception of image differences. Unfortunately, the weights 0.78 and 0.76 in Equation 7 are arbitrary (see Discussion). Furthermore, 675 of the 900 parent image pairs involved some kind of image post-processing such as painting out of features, imposing blur, or color changes that are potentially unnatural. Therefore, we have examined the Minkowski model performance for just those normal image pairs made from two unprocessed photographs of the same scene, taken at different times. 
In the original experiment with normal images, there were 325 pairs whose differences were real and did not include any artificial changes (color, bandwidth, objects appearing/disappearing). We do not include the pixel-reversed variants in the following analysis, since those images are clearly unnatural. For this subset of 325 pairs, there is a strong correspondence between the monochromatic ratings and full-color ratings (see Figure 6A; r = 0.81, n = 325). This is much higher than the correspondence for the remaining post-processed pairs (r = 0.68, n = 675; not shown). For the isoluminant ratings, the data were less well correlated with the original full-color ratings (see Figure 6B; r = 0.68), but this correspondence is still superior compared to ratings for the post-processed pairs (r = 0.61, not shown). The results suggest that ratings for full-color ecologically valid pairs are better correlated with monochromatic ratings than with isoluminant ratings. This also demonstrates that, in general, the monochromatic and isoluminant ratings are better correlated with ratings for pairs with real differences rather than processed ones. 
Figure 6
 
The panels A and B plot the magnitude estimation ratings for monochromatic and isoluminant variants against the ratings for the full-color versions from To et al. (2010) for ecologically valid pairs only. Panel A shows that the correspondence between monochromatic and full-color ratings is high (r = 0.81). Panel B shows that the correspondence between the isoluminant ratings and full-color ratings was weaker (r = 0.68). Panel C shows the best Minkowski predictions with m = 1.93 (r = 0.85) for the full-color ecologically valid ratings plotted against the actual ratings from To et al. (2010).
We fitted the 325 full-color “real” pair ratings by Minkowski summation of the appropriate monochromatic and isoluminant ratings (Figure 6C). The best fit was given with a Minkowski exponent of 1.93 and a correlation coefficient of 0.849 (n = 325):  
\begin{equation}\tag{8}\mathit{fullcolor} = \sqrt[1.93]{\left( 0.71 \cdot \mathit{monochromatic} \right)^{1.93} + \left( 0.68 \cdot \mathit{isoluminant} \right)^{1.93}}\end{equation}
 
The weights 0.71 and 0.68 are arbitrary. That the correlation between the monochromatic and full color ratings (r = 0.81) is almost as high as that between the Minkowski sum and the full-color ecologically valid ratings (r = 0.85) suggests that luminance-based cues contribute more to the perception of differences in natural images, in general, than do pure color ones. 
V1-based modeling of perceptual ratings
Full-color stimuli
In To et al. (2010), we fitted our first attempts at a V1-based discriminator model to the normal and the pixel-reversed images separately, with best correlations between model predictions and actual ratings of r = 0.59 (n = 900) and r = 0.73 (n = 900), respectively. Here, we have recoded some details of the model such as reverting to the more usual self-similar receptive-field shapes and allowing receptive-field aspect ratio to be a new free parameter. Table 2 (columns 1 and 2) shows the parameter values resulting from iteratively fitting our present coding to all 1,800 full-color image pairs at once. The table shows the fits for two variant models: (a) where two key nonlinearities are applied in parallel (Equation 2) and (b) where they are applied sequentially (Equations 3 and 4). 
Table 2
 
The best fitting values of the various parameters (defined in Methods) of the main VDP models discussed here. Parallel and sequential versions of the model were fit to the full-color and monochromatic rating data, but the isoluminant rating data were fit only with a sequential model. The number of parameters depends on model type and on the experimental data set (see Methods). These fits are for n = 1,800, with all the normal and all the pixel-reversed data together.
Table 3A (columns 1 and 2) shows the statistics of those best fits, and Figure 7A plots the 1,800 actual full-color ratings against the model predictions (in arbitrary units) for the sequential model variant. The data for the pixel-reversed pairs (purple) seem to be closer to the regression line than the normal image data. Pearson's r is higher for the sequential model than the parallel model; the difference (0.66 vs. 0.71) is significant at p = 0.002. Furthermore, the difference in Akaike criterion (−251) is very large, implying that the sequential model is very much “better” than the parallel model, even given that the sequential model has an extra free parameter. It is also the case that the full-color ratings are correlated with the Euclidean distance (Kingdom, Field, & Olmos, 2007) or root-mean-square difference between pixel values; however, Pearson's r was only 0.345. 
Table 3
 
(A) Summary statistics of the five model fits shown in Table 2 (i.e., for all 1,800 normal and pixel-reversed data). The table shows the correlation between ratings and model predictions, and the Akaike criterion (Equation 6) calculated from the residual sum of squares after fitting a regression to the experiment/model plot. Delta AIC is shown for the full-color and monochromatic models; it summarizes the difference in the fits of the parallel and sequential models. The correlation between rating and Euclidean distance is also shown. (B) The same, but for fits to a subset of the ratings data (n = 1,324 out of 1,800), after discarding the ratings given to image pairs that differed by a small object movement or a texture change (To et al., 2010).
Figure 7
 
Experimental data plotted against the sequential model predictions. (A) Ratings from the original experiment with full-color images (To et al. 2010); (B) for Experiments 1 and 2 with monochromatic images; and (C) for Experiments 3 and 4 with isoluminant images, respectively. The regression lines of best fit are shown. Data corresponding to the normal images (original or variant) are shown in blue, and the data corresponding to the pixel-reversed images (original or variant) are shown in purple.
Monochromatic stimuli
Tables 2 and 3A (columns 3 and 4) show the best fit parameters and fitting statistics of parallel and sequential models to fit the 1,800 monochromatic ratings collected for this paper. Since the monochromatic stimuli occupy only one of the three luminance/color-opponent planes of the full-color stimuli, these models have two fewer parameters than the fits for the full-color stimuli (see Methods). Figure 7B plots the experimental ratings for monochromatic stimuli against the predictions of the sequential model. It is very clear from Figure 7B (confirmed by Table 3A) that the monochromatic ratings are fitted much better than the full-color ratings (Figure 7A). Again, the sequential model is much “better” than the parallel one (ΔAIC = −95) even though the correlation coefficients (0.83 and 0.85) are not significantly different. These correlation coefficients are highly significantly better than those describing the fits to the full-color stimuli. The monochromatic ratings had a correlation of 0.62 against Euclidean distance; while this is higher than the equivalent correlation for full-color ratings, it is substantially less than the correlation with a biologically driven model. 
Isoluminant stimuli
Tables 2 and 3A (column 5) show the best fit parameters and fitting statistics for a sequential model only to fit the 1,800 isoluminant ratings collected for this paper. Since the isoluminant stimuli occupy only two of the three luminance/color-opponent planes of the full-color stimuli, this model has one fewer parameter than the fit for the full-color stimuli (see Methods). Figure 7C plots the experimental ratings for isoluminant stimuli against the predictions of the sequential model. The correlation between ratings and sequential model (r = 0.702) is the lowest of the three experiments shown in Figure 7. The isoluminant ratings had a correlation of 0.59 against Euclidean distance. 
Discarding stimuli with only small spatial changes
Of the 900 basal full-color image pairs, we suggested that some 238 would never be fit well by models based on point-by-point comparison of neuronal responses (To et al., 2010). These stimuli have small spatial changes that are well detected by the models, but not by the observers. We have fitted parallel and sequential models to the remaining 662 normal and 662 pixel-reversed pairs. The parameters and graphs are not shown, but the fitting statistics are shown in Table 3B. For the full-color and monochromatic stimuli, discarding the "unfittable" stimuli does indeed lead to highly significant increases in the correlation coefficients (compare Tables 3B and 3A). Interestingly, the fit to the isoluminant stimuli is not improved. The sequential model fitted to the 1,324 monochromatic stimuli (r = 0.89) is particularly good. As for the full set of 1,800 stimuli, the Akaike criterion shows that the sequential models are much "better" than the parallel models for the full-color and monochromatic stimuli. 
Discussion
The purpose of this study was to investigate how human observers perceive and rate changes in the monochromatic and isoluminant components in naturalistic scenes, and to determine the extent to which a V1-based model can be used to predict these ratings. In particular, we were interested in whether the isoluminant data would be as well-modeled as the monochromatic data. We took the full-color (900 normal and 900 pixel-reversed) natural scenes from our original study (To et al., 2010), decomposed them into monochromatic and isoluminant scenes, and repeated the experiments with the monochromatic and isoluminant versions separately. 
Magnitude estimation ratings
We compared ratings for each monochromatic or isoluminant image pair across observers, and found generally good agreement. Interobserver correlations ranged between 0.31 and 0.81, not dissimilar from the interobserver correlations reported in To et al. (2010). Interestingly, correlations were higher for monochromatic stimuli than for isoluminant stimuli or the original full-color scenes (see Table 1). This could be a consequence of individual differences in color vision in humans and other primates (e.g., Alpern & Pugh, 1977; Emery, Volbrecht, Peterzell, & Webster, 2017; Mollon, Bowmaker, & Jacobs, 1984; Pickford, 1951; Suero, Pardo, & Perez, 2010). The isoluminant stimuli were based on a standard CIE observer and were not tailored to the individual observers. Variations in luminance perception are not so widely reported. 
The stimulus pairs differed from each other in a variety of ways (see To et al., 2010), but an interesting subset consisted of pairs where a color-only change was applied by computer processing of original photographs (e.g., Figure 2). Although these changes were not guaranteed to be isoluminant, the difference between the pairs was primarily chromatic; some pairs might also contain small luminance changes, but these were generally difficult to detect. As would be expected, the ratings for the monochromatic variants of these color-only stimuli were considerably lower than those for the original full-color and isoluminant pairs (see Figure 4). Furthermore, these low ratings for the monochromatic versions were also more poorly correlated with the original color-only ratings (r = 0.43 and 0.55 for normal and pixel-reversed, respectively) than were ratings for pairs containing other changes, such as content and spatial frequency distribution (r = 0.78 and 0.82 for normal and pixel-reversed, respectively). The opposite trend is seen for isoluminant color-only pairs: these ratings were better correlated with the original color-only ratings (r = 0.70 and 0.76 for normal and pixel-reversed, respectively) than were ratings for pairs containing other changes (r = 0.66 for both normal and pixel-reversed). 
In addition to normally colored naturalistic scenes, To et al. (2010) also studied inverted, pixel-reversed versions (akin to inverted negatives) to disguise the semantic content of the scenes. Here, we also studied monochromatic and isoluminant versions of those. Without the distraction of semantic content, observers' ratings are presumably dependent just on simple shape and color cues. The correlation between full-color and monochromatic ratings is closer for the pixel-reversed images than for the normal images retaining semantic content. The Minkowski prediction of full-color ratings from the combination of monochromatic and isoluminant ratings is also closer for the pixel-reversed versions. 
In general, we would expect the ratings for the monochromatic and isoluminant pairs to be the same as or (more likely) lower than the original ratings for the full-color images, since they now contain only partial cues to differences. However, this was not always the case for two main reasons. First, we used different standard pairs to anchor observers' ratings in the different experiments (see Figure 3) so that the rating scales for monochromatic, isoluminant, and full-color scenes are not directly comparable. Second, even though we attempted to fix the scales by reference to the standards, there is a tendency of observers to self-normalize (Gescheider, 1997). Bearing this in mind, the ratings can still be compared if they are given weights to compensate for the different standard pairs and different self-normalization. 
In the current study, we decomposed the original full-color images from To et al. (2010) into their monochromatic and isoluminant components. These components might be equivalent to the achromatic and chromatic planes which underlie the independent coding of simple colors (e.g., Hurvich & Jameson, 1957). We considered whether ratings for differences along the achromatic and chromatic dimensions could be combined to predict ratings for full-color stimuli in the same way that independent channels have often been modeled (e.g., To et al., 2009; To, Gilchrist et al., 2011; Watson, 1987; Watson & Solomon, 1997). Here we attempted to fit the 1,800 original full-color ratings (normal together with pixel-reversed) by Minkowski summation of their corresponding monochromatic (achromatic) and isoluminant (chromatic) ratings. The best predictions were obtained with a Minkowski summation model with exponent m = 2.71 (Equation 7). The correlation between the actual and modeled ratings was 0.85, with the correlation slightly higher for the pixel-reversed images. The optimal model weighted the ratings for the monochromatic and isoluminant components similarly (0.78 and 0.76, respectively). These weight parameters were included because the ratings were based on different standards and were normalized within each experiment (see above). While not a definitive proof, this is consistent with a model where "luminance-based shape" and "color" are processed separately. 
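As a concrete illustration of this combination rule (a minimal sketch only: Equation 7 is defined in the Methods and not reproduced here, so the weighted form, the function name, and the example numbers below are assumptions rather than the paper's exact implementation):

```python
import numpy as np

def minkowski_sum(r_mono, r_iso, w_mono=0.78, w_iso=0.76, m=2.71):
    """Predict full-color ratings from weighted monochromatic and isoluminant
    ratings by Minkowski summation; a sketch of the rule described in the text
    (the exact form of the paper's Equation 7 may differ)."""
    r_mono = np.asarray(r_mono, float)
    r_iso = np.asarray(r_iso, float)
    return ((w_mono * r_mono) ** m + (w_iso * r_iso) ** m) ** (1.0 / m)

# Hypothetical usage with illustrative numbers (not data from the paper):
pred = minkowski_sum(r_mono=[4.0, 2.5], r_iso=[1.0, 3.0])
# With m = 2.71, the larger of the two weighted components dominates the prediction,
# intermediate between linear summation (m = 1) and a pure max rule (m -> infinity).
```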
The original normal full-color scenes included 325 pairs that contained only real differences; that is, no artificial manipulation of color, bandwidth, or content. For these real-difference pairs, there was a strong correlation between their full-color ratings and the ratings for the monochromatic variants (r = 0.81), and a weaker one for the isoluminant variants (r = 0.68). The monochromatic correspondence in particular is much higher than for the remaining, artificially post-processed pairs (r = 0.68 and 0.61 for monochromatic and isoluminant, respectively). The ratings for this subset of normal full-color images can be predicted by Minkowski summation of the monochromatic and isoluminant ratings; the best exponent was lower than for the whole data set (1.93, Equation 8). The correlation between actual ratings and Minkowski sum (r = 0.85) was only slightly higher than that between the monochromatic ratings alone and the full-color ratings (r = 0.81). In general, then, the color cues have added little to the perception of differences in everyday natural scenes. The monochromatic component preserves most of the spatial information (Eskew & Boynton, 1987; Tansley & Boynton, 1976; see also Stockman & Brainard, 2010). In the isoluminant component, spatial details are indistinct and difficult to identify (see the loss of shadow information in the isoluminant examples of Figure 1). If luminance plays a more central role in the identification of, and therefore changes in, the content of a scene, perhaps the visual system has evolved to be better and more accurate at processing achromatic information. This increased reliance on the luminance channels could explain why the correlation between ratings for full-color scenes and monochromatic scenes is higher than the correlation between full-color scenes and isoluminant scenes. This complements the findings of Yoonessi and Kingdom (2008), who demonstrated that luminance contributes more than the red-green channel to the perception of changes in complex images. We do, however, recognize that there are instances where color cues are vitally important, for those few but specific scenes involving fruits, edible leaves, and sexual display (Dominy & Lucas, 2001; Párraga, Troscianko, & Tolhurst, 2002; Sumner & Mollon, 2000). 
V1-based modeling of ratings
We have been developing a multineuronal visual difference predictor (VDP) model to explain the perceived magnitudes of spatial, chromatic, and temporal differences in full-color natural images in terms of the response properties of single V1 neurons (To, Gilchrist et al., 2011; To et al., 2010; To et al., 2015). The model is based on Watson (1987) and Daly (1993), and this approach has proven very successful at explaining detection and contrast-discrimination thresholds for sinusoidal gratings and Gabor patches of various configurations (To et al., 2017; Watson & Ahumada, 2005; Watson & Solomon, 1997). It was extended early on to the detection of objects in monochromatic natural scenes (Rohaly et al., 1997), and we also made early attempts at modeling the detection of changes in monochromatic natural images (Párraga, Troscianko, & Tolhurst, 2000, 2005; Tadmor & Tolhurst, 1994), but these were quite crude compared to the work of Watson and colleagues. Our present interest is to extend the modeling of thresholds in monochromatic gratings to explain the perception of suprathreshold differences in full-color natural images. 
Here, we have applied two versions of our model to the full-color, monochromatic, and isoluminant rating data separately (summarized in Tables 2 and 3). The models have just 8–11 explicit numerical parameters covering the behavior of millions of model neurons, although there are at least as many programming decisions that we have fixed rather than allowing them to float. Given that there are so few parameters and so many neurons, a correlation coefficient between model and full-color ratings of 0.71 is a cause for optimism. There are two key nonlinear inhibitory processes involved, and our better model for the full-color (and monochromatic) ratings applies surround suppression (Equation 4) after contrast normalization (Equation 3), rather than applying both at the same stage (a single Equation 2). This is more consistent with recent neurophysiological studies (e.g., Henry et al., 2013) and confirms our findings when modeling contrast discrimination in gratings and Gabors (To et al., 2017). That the order of application matters shows that we need to include such nonlinear behaviors for full fidelity. 
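To make the distinction concrete, the sketch below contrasts the two architectures using generic divisive-inhibition expressions; the actual Equations 2–4 are defined in the Methods, so the exponents, semisaturation constant, surround weight, and pooled quantities here are illustrative stand-ins, not the fitted model.

```python
import numpy as np

def parallel_inhibition(excitation, norm_pool, surround_pool,
                        sigma=1.0, w_surround=0.5, p=2.0):
    """Both inhibitory pools act together in a single divisive stage
    (a generic stand-in for a single combined equation)."""
    return excitation ** p / (sigma ** p + norm_pool ** p + w_surround * surround_pool ** p)

def sequential_inhibition(excitation, norm_pool, surround_pool,
                          sigma=1.0, w_surround=0.5, p=2.0):
    """Contrast normalization first (cf. Equation 3), then surround
    suppression applied to the already-normalized response (cf. Equation 4)."""
    normalized = excitation ** p / (sigma ** p + norm_pool ** p)
    return normalized / (1.0 + w_surround * surround_pool ** p)

# Illustrative numbers only: with a strong surround the two orderings give
# different responses, which is why the order of application can matter in the fits.
e, n, s = 2.0, 1.5, 1.2
print(parallel_inhibition(e, n, s), sequential_inhibition(e, n, s))
```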
These V1-inspired models were substantially better at explaining the observers' ratings for a set of 1,800 of our stimuli than was the simple physical Euclidean distance between the images in each pair. This is likely to be because we have so many image pairs with differences of so many different, unrelated kinds and magnitudes. A physical metric may well rank the order of change in a set of highly related stimuli that differ stepwise in just one way (as, for instance, in the progression of stimuli in a psychometric function), but it does not explain the relative visually perceived differences between different kinds of stimulus. This is consistent with the experience of Kingdom et al. (2007), who showed that Euclidean distance was poor at predicting the difference in thresholds between natural affine transforms of natural images and added noise. We have performed unpublished rating experiments where the image pairs were based on only 15 photographs, but each was subject to 15 different levels of jpeg compression. Not surprisingly, the average rating of the perceived difference between an image and its jpeg variant was highly correlated with the amount of compression; the ratings also had a high Pearson's r of 0.88 (n = 225) against Euclidean distance, leaving little space for a V1-based model to show its superiority (r = 0.93). As Kingdom et al. (2007) found, a good challenge to quantitative modeling of perception or detection in natural images must involve comparison among a variety of image types and transforms. 
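For reference, the Euclidean-distance baseline referred to throughout is simply the root sum of squared pixel differences between the two members of a pair, correlated with the mean ratings across pairs; a minimal sketch follows (function and variable names are hypothetical).

```python
import numpy as np

def euclidean_distance(img_a, img_b):
    """Simple physical difference metric: root-sum-of-squares of the
    pixel-by-pixel differences between the two images of a pair."""
    diff = np.asarray(img_a, float) - np.asarray(img_b, float)
    return np.sqrt(np.sum(diff ** 2))

def baseline_correlation(image_pairs, ratings):
    """Pearson correlation between Euclidean distances and mean ratings,
    the baseline against which the V1-based model is compared."""
    distances = np.array([euclidean_distance(a, b) for a, b in image_pairs])
    return np.corrcoef(distances, np.asarray(ratings, float))[0, 1]
```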
Table 2 lists the values of the several parameters in the best-fitting models. We have previously noted that it is difficult to interpret the specific values of some of these, such as the powers and the weights (To et al., 2017). However, the combination of values can lead to a model of a single neuron that displays many of the properties of real single neurons in V1. The surround spread, at first sight, seems rather small, as if the surround suppression arises very close to the receptive-field center. To et al. (2017) discuss similar fits (to grating detection data) and show that the small surround radius in these fits is still closely compatible with real neuronal data (Cavanaugh et al., 2002; Sceniak et al., 1999). The receptive-field aspect ratio of the best models (1.6–1.8) is greater than the one we reported for grating fits (To et al., 2017) but is, perhaps, more compatible with real neuronal data (Tolhurst & Thompson, 1981). The Minkowski parameter has no neurophysiological equivalent (but see To, Baddeley et al., 2011). The values of 3.72–4.29 are close to those long used in studies that model the combination of independent detectability cues (e.g., Robson & Graham, 1981). 
Our VDP originates from antecedents that were successful at modeling monochromatic stimuli, and it is based largely on single-neuron studies with monochromatic stimuli (see citations in the Introduction). It was a major aim of this study to investigate the success of our extension of the model to deal with color stimuli. Thus, we constructed two new sets of stimuli from our original full-color ones: monochromatic variants and isoluminant color variants. As we might hope from its origins, the VDP was good at predicting the magnitude ratings for monochromatic stimuli (r = 0.845). The VDP was much less effective at modeling the isoluminant ratings (r = 0.702), and the overall moderate performance on the original full-color images (r = 0.712) can be blamed on a weaker model of color processing. To et al. (2010) noted that some of our image pairs would likely never be well fit by a V1-based model: the single-neuron receptive fields are sensitive to very small changes in object location or texture, whereas within an 800-ms presentation time, human observers generally fail to perceive such differences (for some examples, see the Supplementary Material for To et al., 2010). If we discard these "unfittable" stimuli from our data sets, the model fits to the full-color and monochromatic stimuli improve; the monochromatic fit has a gratifying correlation of 0.894. Interestingly, discarding these stimuli from the isoluminant set did not provide a better model fit, perhaps because colors tend to be more uniform over larger areas than brightness, so that small changes in the locations of similar objects (texture) would not much change the overall color organization. We argued previously (To et al., 2010) that the ratings given to some kinds of stimuli (e.g., faces, shadows) might be influenced by "higher" cognitive processes and not just the low-level visual differences that we are capable of modeling. For that reason, we also studied inverted pixel-reversed image pairs to obscure such cognitive cues. Perhaps if we had studied other kinds of image difference, we might have had lower correlations than 0.845 or 0.894. None of our pairs, for example, consisted of affine changes in the whole scene, as if the observer had changed their viewpoint. However, given that we did include such a variety of types and magnitudes of change, we would not expect our modeling of these to be less successful than our modeling of affine changes of objects within an otherwise-constant scene. 
Modeling the color planes
Given the many detailed and systematic quantitative studies of linear and nonlinear summation in V1 neurons (see citations in the Introduction), it is possible to write computer programs to simulate the key neuronal processes. The VDP is successful for monochromatic stimuli, but less so for colored ones. Retinal processing of colored lights and the connectivity of cones have been well studied, but neurophysiology has not provided a quantitative (and computable) consensus on V1 processing (Shapley & Hawken, 2011). In the absence of the required detail, we chose, uncontroversially, to decompose our color stimuli into three planes and to run a VDP on each plane separately. The way in which we chose to do this has not been totally successful, and there are a number of steps that may need revisiting. 
We decomposed the stimuli onto a luminance plane and two isoluminant cone-opponent planes (the MacLeod-Boynton transform; MacLeod & Boynton, 1979). Others have used CIE L*u*v* or CIELAB spaces (Jin, Feng, & Newell, 1998; Lubin, 1995). Mollon and Cavonius (1987) and Schmidt et al. (2014) point out that the well-studied L/M cone opponency in retina and LGN is not equivalent to the perceptual red-green opponency of Hurvich and Jameson (1957), despite the two often being conflated. Neither is retinal S/(L+M) opponency equivalent to perceptual blue/yellow opponency. De Valois and De Valois (1993) proposed that the cone-opponent streams from the LGN must be repackaged in the cerebral cortex to give new opponent axes: (S + L) versus M for red/green, and (S + M) versus L for blue/yellow. There is as yet no evidence for such reorganization of cone inputs in V1. In fact, Schmidt et al. (2014, 2016) propose that these axes already appear in subsets of retinal ganglion cells. The (S + M)/L and (S + L)/M combinations were reported by De Monasterio, Gouras, and Tolhurst (1975), and have been given much more credence by Tailby, Solomon, and Lennie (2008) and Field et al. (2010), particularly for the connectivity of OFF neurons. We await full descriptions and cortical fates of these ignored neurons, which Schmidt et al. (2014, 2016) propose are the basis of color perception, going so far as to suggest that the majority L/M P-cell pathway is not so involved. Psychophysical studies also imply an S-cone involvement in the perceptual red/green axis (Danilova & Mollon, 2012; Wuerger, Atkinson, & Cropper, 2005). A future modeling project would look at different ways of transforming stimuli into luminance and color planes; we would certainly expect better performance in modeling a perceptual task if we used opponent axes that matched perceptual color opponency rather than cone opponency. 
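A minimal sketch of such a decomposition is given below. The RGB-to-cone matrix is a commonly quoted approximation, not the calibrated transform used for the experimental display, and the scaling conventions of the MacLeod-Boynton axes are simplified here.

```python
import numpy as np

# Approximate linear-RGB -> LMS cone matrix (values quoted in the image-processing
# literature); the paper's own decomposition used display calibration, so this
# matrix is only a stand-in for illustration.
RGB_TO_LMS = np.array([[0.3811, 0.5783, 0.0402],
                       [0.1967, 0.7244, 0.0782],
                       [0.0241, 0.1288, 0.8444]])

def macleod_boynton_planes(rgb_image):
    """Split a linear-RGB image into a luminance plane (L+M) and two
    MacLeod-Boynton chromaticity planes, l = L/(L+M) and s = S/(L+M).
    A sketch only; scaling conventions differ between implementations."""
    lms = rgb_image @ RGB_TO_LMS.T              # shape (..., 3) in cone space
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    lum = L + M                                  # luminance plane
    rg = L / np.clip(lum, 1e-9, None)            # L/M (red/green) opponent plane
    by = S / np.clip(lum, 1e-9, None)            # S/(L+M) (blue/yellow) plane
    return lum, rg, by
```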
In our model, the two opponent planes (the choice of which may be questionable) are processed by narrowly tuned, orientation-specific neurons just like those processing luminance information. This was partially motivated by the large body of psychophysical work suggesting that luminance, L/M, and S/(L + M) might be processed in parallel, more or less independently (e.g., Beaudot & Mullen, 2005; Losada & Mullen, 1994). However, there is very little evidence to support such independence in V1; most neurons respond to both luminance-modulated and L/M-modulated gratings, with a great range of bias between the two (Hass & Horwitz, 2013; Johnson, Hawken, & Shapley, 2008). An alternative view, with little quantitative backing, is that a small subset of V1 neurons process "color" (Hubel & Livingstone, 1987), and these might be broadly tuned for orientation and responsive primarily to low spatial frequencies (e.g., Conway, 2001). Johnson et al. (2008) did report that the V1 neurons most biased towards L/M gratings and away from luminance gratings preferred lower spatial frequencies, and some were poorly tuned for orientation. This has also been reported psychophysically (Gheiratmand & Mullen, 2014). While most V1 neurons might respond differentially to color stimuli to some extent, it is still not clear whether all these neurons contribute to color perception, or whether this is the role of some (yet to be characterized) subset of neurons (Schmidt et al., 2014, 2016). The receptive-field organization, tuning, and nonlinear interactions of these putative neurons await documentation, but a future VDP model might investigate whether broadly tuned, non-oriented, double-opponent neurons would better describe the isoluminant rating data than the multiple banks of narrow-band, oriented simple and complex cells that we presently use. Our present study, by showing the success of the monochromatic part of the VDP, allows us now to focus on searching for better ways of modeling just the "color" planes in natural stimuli. 
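To illustrate the two candidate receptive-field types, the sketch below builds a narrow-band, oriented Gabor kernel (of the kind the VDP currently applies to every plane) and a broadly tuned, non-oriented difference-of-Gaussians kernel (one possible "color" receptive field); all parameter values are illustrative, not fitted.

```python
import numpy as np

def gabor_kernel(size=64, wavelength=8.0, orientation=0.0, sigma=6.0, aspect=1.7):
    """Narrow-band, orientation-specific Gabor filter (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-(xr ** 2 + (yr * aspect) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def blob_kernel(size=64, sigma_center=6.0, sigma_surround=12.0):
    """Non-oriented, broadly tuned, low-spatial-frequency difference-of-Gaussians:
    one candidate receptive field for a dedicated 'color' pathway."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    r2 = x ** 2 + y ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return center - surround
```

Running the isoluminant planes through a small bank of such non-oriented kernels, in place of the full set of oriented Gabors, would be one concrete way to test the alternative view described above.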
Acknowledgments
This research was funded by an ESRC: Transforming Social Science Programme Grant and an Early Career Researcher Grant from the Faculty of Science and Technology at Lancaster University. 
Commercial relationships: none. Corresponding author: Michelle P.S. To. 
Address: Department of Psychology, Lancaster University, Lancaster, UK. 
References
Alpern, M. & Pugh, E. N. (1977). Variation in the action spectrum of erythrolabe among deuteranopes. Journal of Physiology, 266, 613–646.
Baker, D. H., Meese, T. S., & Summers, R. J. (2007). Psychophysical evidence for two routes to suppression before binocular summation of signals in human vision. Neuroscience, 146, 435–448.
Beaudot, W. H., & Mullen, K. T. (2005). Orientation selectivity in luminance and color vision assessed using 2-d band-pass filtered spatial noise. Vision Research, 45 (6), 687–696.
Blakemore, C., & Tobin, E. A. (1972). Lateral inhibition between orientation detectors in the cat's visual cortex. Experimental Brain Research, 15 (4), 439–440.
Bonds, A. B. (1989). Role of inhibition in the specification of orientation selectivity of cells in the cat striate cortex. Visual Neuroscience, 2, 41–55.
Carandini, M., Heeger, D. J., & Movshon, J. A. (1997). Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17 (21), 8621–8644.
Cavanaugh, J. R., Bair, W., & Movshon, J. A. (2002). Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88 (5), 2530–2546. doi: 10.1152/jn.00692.2001.
Conway, B. R. (2001). Spatial structure of cone inputs to color cells in alert macaque primary visual cortex (V-1). Journal of Neuroscience, 21, 2768–2783.
Daly, S. (1993). The visible differences predictor: an algorithm for assessment of image fidelity. In Watson A. B. (Ed.), Digital images and human vision (pp. 179–206). Cambridge, MA: MIT Press.
Danilova, M. V., & Mollon, J. D. (2012). Foveal color perception: Minimal thresholds at a boundary between perceptual categories. Vision Research, 62, 162–172.
DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1993). Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. II. Linearity of temporal and spatial summation. Journal of Neurophysiology, 69 (4), 1118–1135.
DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1994). Length and width tuning of neurons in the cat's primary visual cortex. Journal of Neurophysiology, 71 (1), 347–374.
DeAngelis, G. C., Robson, J. G., Ohzawa, I., & Freeman, R. D. (1992). Organization of suppression in receptive fields of neurons in cat visual cortex. Journal of Neurophysiology, 68 (1), 144–163.
De Monasterio, F. M., Gouras, P., & Tolhurst, D. J. (1975). Trichromatic colour opponency in ganglion cells of the rhesus monkey retina. Journal of Physiology, 251, 197–216.
De Valois, R. L. (1965). Analysis and coding of color vision in the primate visual system. In Cold Spring Harbor Symposia on Quantitative Biology, Vol. 30 (pp. 567–579). Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
De Valois, R. L., Albrecht, D. G., & Thorell, L. G. (1982). Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22 (5), 545–559.
De Valois, R. L., & De Valois, K. K. (1993). A multi-stage color model. Vision Research, 33 (8), 1053–1065.
Dominy, N. J., & Lucas, P. W. (2001, March 15). Ecological importance of trichromatic vision to primates. Nature, 410, 363–366.
Durand, S., Freeman, T. C., & Carandini, M. (2007). Temporal properties of surround suppression in cat primary visual cortex. Vision Neuroscience, 24 (5), 679–690. doi: 10.1017/S0952523807070563.
Emery, K. J., Volbrecht, V. J., Peterzell, D. H., & Webster, M. A. (2017). Variations in normal color vision. VII. Relationships between color naming and hue scaling. Vision Research, 141, 66–75.
Eskew, R. T.,Jr., & Boynton, R. M. (1987). Effects of field area and configuration on chromatic and border discriminations. Vision Research, 27 (10), 1835–1844.
Field, D. J., & Tolhurst, D. J. (1986). The structure and symmetry of simple-cell receptive-field profiles in the cat's visual cortex. Proceedings of the Royal Society Series B, 228 (1253), 379–400.
Field, G. D., Gauthier, J. L., Sher, A., Greschner, M., Machado, T. A., Jepson, L. H., Shlens, J., Gunning, D. E., Mathieson, K., Dabrowski, W., Paninski, L., Litke, A. M., & Chichilnisky, E. J. (2010, October 7). Functional connectivity in the retina at the resolution of photoreceptors. Nature, 467, 673–678.
Foley, J. M. (1994). Human luminance pattern-vision mechanisms: masking experiments require a new model. Journal of the Optical Society of America A: Optics, Image Science & Vision, 11 (6), 1710–1719.
Gescheider, G. A. (1997). Psychophysics—The fundamentals. Mahwah, NJ: Erlbaum.
Gheiratmand, M., & Mullen, K.T. (2014). Orientation tuning in human colour vision at detection threshold. Scientific Reports, 4: 4285. doi:10.1038/srep04285.
Hass, C. A., & Horwitz, G. D. (2013). V1 mechanisms underlying chromatic contrast detection. Journal of Neurophysiology, 109 (10), 2483–2494.
Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9 (2), 181–197.
Henry, C. A., Joshi, S., Xing, D., Shapley, R. M., & Hawken, M. J. (2013). Functional characterization of the extraclassical receptive field in macaque V1: Contrast, orientation, and temporal dynamics. Journal of Neuroscience, 33 (14), 6230–6242.
Hubel, D. H., & Livingstone, M. S. (1987). Segregation of form, color, and stereopsis in primate area 18. Journal of Neuroscience, 7 (11), 3378–3415.
Hurvich, L. M., & Jameson, D. (1957). An opponent-process theory of color vision. Psychological Review, 64, 384–404.
Jin, E. W., Feng, X. F., & Newell, J. (1998). The development of a color visual difference model (CVDM). In Proceedings of IS&T PICS Conference, Portland OR (pp. 154–158). Springfield, VA: IS&T.
Johnson, E. N., Hawken, M. J., & Shapley, R. (2008). The orientation selectivity of color-responsive neurons in macaque V1. Journal of Neuroscience, 28 (32), 8096–8106.
Jones, J. P., & Palmer, L. A. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58 (6), 1233–1258.
Kingdom, F. A., Field, D. J., & Olmos, A. (2007). Does spatial invariance result from insensitivity to change? Journal of Vision, 7 (14): 11, 1–13, https://doi.org/10.1167/7.14.11. [PubMed] [Article]
Li, B., Thompson, J. K., Duong, T., Peterson, M. R., & Freeman, R. D. (2006). Origins of cross-orientation suppression in the visual cortex. Journal of Neurophysiology, 96 (4), 1755–1764. doi: 10.1152/jn.00425.2006.
Losada, M. A., & Mullen, K. T. (1994). The spatial tuning of chromatic mechanisms identified by simultaneous masking. Vision Research, 34 (3), 331–341.
Lubin, J. (1995). A visual discrimination model for imaging system design and evaluation. In Peli E. (Ed.), Vision models for target detection and recognition (pp. 245–283). Singapore: World Scientific.
MacLeod, D. I., & Boynton, R. M. (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of Optical Society of America, 69 (8), 1183–1186.
Meese, T. S. (2004). Area summation and masking. Journal of Vision, 4 (10): 8, 930–943, https://doi.org/10.1167/4.10.8. [PubMed] [Article]
Mollon, J. D., Bowmaker, J. K., & Jacobs, G. H. (1984). Variations of colour vision in a New World primate can be explained by polymorphism of retinal photopigments. Proceedings of the Royal Society of London B: Biological Sciences, 222 (1228), 373–399.
Mollon, J. D., & Cavonius, C. R. (1987). The chromatic antagonisms of opponent process theory are not the same as those revealed in studies of detection and discrimination. In Verriest G. (Ed.), Colour vision deficiencies VIII (pp. 473–483). Dordrecht, the Netherlands: Springer.
Motulsky, H., & Christopoulos, A. (2004). Fitting models to biological data using linear and nonlinear regression: A practical guide to curve fitting. Oxford, UK: Oxford University Press.
Movshon, J. A., Thompson, I. D., & Tolhurst, D. J. (1978a). Spatial summation in the receptive fields of simple cells in the cat's striate cortex. Journal of Physiology, 283, 53–77.
Movshon, J. A., Thompson, I. D., & Tolhurst, D. J. (1978b). The receptive field organization of complex cells in the cat's striate cortex. Journal of Physiology, 283, 79–99.
Movshon, J. A., Thompson, I. D., & Tolhurst, D. J. (1978c). Spatial and temporal contrast sensitivity of neurones in areas 17 and 18 of the cat's visual cortex. Journal of Physiology, 283, 101–120.
Mullen, K. T. (1985). The contrast sensitivity of human colour vision to red-green and blue-yellow chromatic gratings. The Journal of Physiology, 359 (1), 381–400.
Párraga, C. A., Brelstaff, G., Troscianko, T., & Moorehead, I. R. (1998). Color and luminance information in natural scenes. Journal of the Optical Society of America A: Optics, Image Science & Vision, 15 (3), 563–569.
Párraga, C. A., Troscianko, T., & Tolhurst, D. J. (2000). The human visual system is optimised for processing the spatial information in natural visual images. Current Biology, 10 (1), 35–38.
Párraga, C.A., Troscianko, T. & Tolhurst, D.J. (2002). Spatio-chromatic properties of natural images and human vision. Current Biology, 12, 483–487.
Párraga, C. A., Troscianko, T., & Tolhurst, D. J. (2005). The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model. Vision Research, 45, 3145–3168.
Petrov, Y., Carandini, M., & McKee, S. (2005). Two distinct mechanisms of suppression in human vision. Journal of Neuroscience, 25 (38), 8704–8707. doi: 10.1523/JNEUROSCI.2871-05.2005.
Pickford, R. W. (1951). Individual differences in colour vision. London: Routledge.
Ratliff, F. (1965). Mach bands: Quantitative studies on neural networks in the retina. Oxford, UK: Holden-Day.
Ringach, D. L. (2002). Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88 (1), 455–463.
Robson, J. G., & Graham, N. (1981). Probability summation and regional variation in contrast sensitivity across the visual field. Vision Research, 21 (3), 409–418.
Rohaly, A. M., Ahumada, A. J.,Jr., & Watson, A. B. (1997). Object detection in natural backgrounds predicted by discrimination performance and models. Vision Research, 37 (23), 3225–3235.
Sceniak, M. P., Ringach, D. L., Hawken, M. J., & Shapley, R. (1999). Contrast's effect on spatial summation by macaque V1 neurons. Nature Neuroscience, 2, 733–739.
Schmidt, B. P., Neitz, M., & Neitz, J. (2014). Neurobiological hypothesis of color appearance and hue perception. Journal of the Optical Society of America A: Optics, Image Science & Vision, 31 (4), A195–A207.
Schmidt, B. P., Touch, P., Neitz, M., & Neitz, J. (2016). Circuitry to explain how the relative number of L and M cones shapes color experience. Journal of Vision, 16 (8): 18, 1–17, https://doi.org/10.1167/16.8.18. [PubMed] [Article]
Shapley, R., & Hawken, M. J. (2011). Color in the cortex: Single-and double-opponent cells. Vision Research, 51 (7), 701–717.
Stockman, A., & Brainard, D. H. (2010). Color vision mechanisms. In Bass M. (Ed.), OSA handbook of optics (3rd ed., pp. 11.1–11.104). New York: McGraw-Hill.
Suero, M. I., Pardo, P. J., & Pérez, A. L. (2010). Colour characterization of handheld game console displays. Displays, 31 (4–5), 205–209.
Sumner, P., & Mollon, J. D. (2000). Catarrhine photopigments are optimized for detecting targets against a foliage background. Journal of Experimental Biology, 203, 1963–1986.
Tadmor, Y., & Tolhurst, D. J. (1994). Discrimination of changes in the second-order statistics of natural and synthetic images. Vision Research, 34 (4), 541–554.
Tailby, C., Solomon, S. G., & Lennie, P. (2008). Functional asymmetries in visual pathways carrying S-cone signals in macaque. Journal of Neuroscience, 28, 4078–4087.
Tansley, B. W., & Boynton, R. M. (1976, March 5). A line, not a space, represents visual distinctness of borders formed by different colors. Science, 191 (4230), 954–957.
To, M. P. S., Baddeley, R. J., Troscianko, T., & Tolhurst, D. J. (2011). A general rule for sensory cue summation: Evidence from photographic, musical, phonetic, and cross-modal stimuli. Proceedings of the Royal Society of London B: Biological Sciences, 278 (1710), 1365–1372.
To, M. P. S., Chirimuuta, M., & Tolhurst, D. J. (2017). Modeling grating contrast discrimination dippers: The role of surround suppression. Journal of Vision, 17 (12): 23, 1–17, https://doi.org/10.1167/17.12.23. [PubMed] [Article]
To, M. P. S., Gilchrist, I. D., & Tolhurst, D. J. (2015). Perception of differences in naturalistic dynamic scenes, and a V1-based model. Journal of Vision, 15 (1): 19, 1–13, https://doi.org/10.1167/15.1.19. [PubMed] [Article]
To, M. P. S., Gilchrist, I. D., Troscianko, T., & Tolhurst, D. J. (2011). Discrimination of natural scenes in central and peripheral vision. Vision Research, 51, 1686–1698.
To, M. P. S., Lovell, P. G., Troscianko, T., & Tolhurst, D. J. (2008). Summation of perceptual cues in natural visual scenes. Proceedings of the Royal Society of London B: Biological Sciences, 275 (1649), 2299–2308. doi: 10.1098/rspb.2008.0692.
To, M. P. S., Lovell, P. G., Troscianko, T., & Tolhurst, D. J. (2010). Perception of suprathreshold naturalistic changes in colored natural images. Journal of Vision, 10 (4): 12, 11–22, https://doi.org/10.1167/10.4.12. [PubMed] [Article]
Tolhurst, D. J., & Heeger, D. J. (1997). Comparison of contrast-normalization and threshold models of the responses of simple cells in cat striate cortex. Visual Neuroscience, 14 (2), 293–309.
Tolhurst, D. J., & Thompson, I. D. (1981). On the variety of spatial frequency selectivities shown by neurons in area 17 of the cat. Proceedings of the Royal Society of London B: Biological Sciences, 213 (1191), 183–199.
Tolhurst, D. J., To, M. P., Chirimuuta, M., Troscianko, T., Chua, P. Y., & Lovell, P. G. (2010). Magnitude of perceived change in natural images may be linearly proportional to differences in neuronal firing rates. Seeing Perceiving, 23 (4), 349–372.
Watson, A.B. (1987). Efficiency of a model human image code. Journal of the Optical Society of America A, 4, 2401–2417.
Watson, A. B., & Ahumada, A. J.,Jr. (2005). A standard model for foveal detection of spatial contrast. Journal of Vision, 5 (9): 6, 717–740, https://doi.org/10.1167/5.9.6. [PubMed] [Article]
Watson, A. B., & Solomon, J. A. (1997). Model of visual contrast gain control and pattern masking. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 14 (9), 2379–2391.
Wuerger, S. M., Atkinson, P., & Cropper, S. (2005). The cone inputs to the unique-hue mechanisms. Vision Research, 55, 3210–3223.
Yoonessi, A., & Kingdom, F. A. A. (2008). Comparison of sensitivity to color changes in natural and phase-scrambled scenes. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 25 (3), 676–684.
Figure 1
 
Here are two examples of ecologically valid image pairs that consist of two photographs of the same scene taken at different times, and their derived variants. Panel A presents a pair where a subject has appeared/disappeared (short time interval) and Panel B presents a scene where the lighting and content have changed (long time interval). The full-color (top row in each panel) normal images (left pair) and their pixel-reversed variants (right pair) were studied in To et al. (2010). Here we study the monochromatic and isoluminant variants of the normal full-color pairs on the left, and of the pixel-reversed pairs (right). In constructing the isoluminant images, we converted CIE XYZ representations with a matrix that made the final images isoluminant (according to L*a*b) on the experimental display. For the present figures, they have been transformed into RGB color space hopefully to make them look roughly isoluminant for the reader.
Figure 2
 
Here are two examples of image pairs that only differ along a color dimension in part of the image. Panels A and B present pairs where color changes are noticeable in the full-color and isoluminant pairs but less so in the monochromatic pairs. As in the previous figure, the original full-color normal pairs with their monochromatic and isoluminant variants are shown on the left; the pixel-reversed pairs, also presented with their variants, are shown on the right.
Figure 3
 
Standard pairs used in the original To et al. (2010) study with full-color pairs (A), Experiments 1 and 2 with monochromatic pairs (B), and Experiments 3 and 4 with isoluminant pairs (C). The same standard pair was used for the normal and pixel-reversed version of an experiment.
Figure 4
 
The graphs present the correspondence between magnitude estimation ratings from the current experiments and those previously collected in To et al. (2010). Monochromatic ratings from Experiments 1 (normal) and 2 (pixel-reversed) are plotted against full-color ratings of the equivalent originals in Panels A and B, respectively. Likewise, isoluminant ratings from Experiments 3 (normal) and 4 (pixel-reversed) are plotted against full-color ratings for the originals in Panels C and D, respectively. The red data points represent ratings for those image pairs that only contain image-processed color differences in the original full-color versions; they give only small or zero change in the monochrome versions. The gray data points correspond to all other stimulus types (see Methods).
Figure 5
 
Minkowski summation of monochromatic and isoluminant ratings compared with the actual full-color ratings from To et al. (2010). In Panel A, the best Minkowski predictions (with m = 2.71) for all full-color normal (blue, r = 0.83) and pixel-reversed (purple, r = 0.87) ratings are plotted against the actual ratings from To et al. (2010).
Figure 6
 
The panels A and B plot the magnitude estimation ratings for monochromatic and isoluminant variants against the ratings for the full-color versions from To et al. (2010) for ecologically valid pairs only. Panel A shows that the correspondence between monochromatic and full-color ratings is high (r = 0.81). Panel B shows that the correspondence between the isoluminant ratings and full-color ratings was weaker (r = 0.68). Panel C shows the best Minkowski predictions with m = 1.93 (r = 0.85) for the full-color ecologically valid ratings plotted against the actual ratings from To et al. (2010).
Figure 7
 
Experimental data plotted against the sequential model predictions: (A) ratings from the original experiment with full-color images (To et al., 2010); (B) ratings from Experiments 1 and 2 with monochromatic images; and (C) ratings from Experiments 3 and 4 with isoluminant images. The regression lines of best fit are shown. Data for the normal images (original or variant) are shown in blue, and data for the pixel-reversed images (original or variant) are shown in purple.
Table 1
 
Average, maximal, and minimal Pearson's r values comparing each observer against the others viewing the same stimuli.
Table 2
 
The best fitting values of the various parameters (defined in Methods) of the main VDP models discussed here. Parallel and sequential versions of the model were fit to the full-color and monochromatic rating data, but the isoluminant rating data were fit only with a sequential model. The number of parameters depends on model type and on the experimental data set (see Methods). These fits are for n = 1,800, with all the normal and all the pixel-reversed data together.
Table 3
 
(A) Summary statistics of the five model fits shown in Table 2 (i.e., for all 1,800 normal and pixel-reversed data). The table shows the correlation between ratings and model predictions, and the Akaike criterion (Equation 6) calculated from the residual sum of squares after fitting a regression to the experiment/model plot. Delta AIC is shown for the full-color and monochromatic models; it summarizes the difference in the fits of the parallel and sequential models. The correlation between rating and Euclidean distance is also shown. (B) The same, but for fits to a subset of the ratings data (n = 1,324 out of 1,800), after discarding the ratings given to image pairs that differed by a small object movement or a texture change (To et al., 2010).