Article  |   March 2012
Modeling center–surround configurations in population receptive fields using fMRI
Author Affiliations
  • Wietske Zuiderbaan
    Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
    w.zuiderbaan@uu.nl
  • Ben M. Harvey
    Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
    b.m.harvey@uu.nl
  • Serge O. Dumoulin
    Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
    http://www.sergedumoulin.net
    S.O.Dumoulin@uu.nl
Journal of Vision March 2012, Vol.12, 10. doi:https://doi.org/10.1167/12.3.10
Abstract

Antagonistic center–surround configurations are a central organizational principle of our visual system. In visual cortex, stimulation outside the classical receptive field can decrease neural activity and also decrease functional magnetic resonance imaging (fMRI) signal amplitudes. fMRI amplitudes below baseline (0% contrast) are often referred to as “negative” responses. Using neural model-based fMRI data analyses, we can estimate the region of visual space to which each cortical location responds, i.e., the population receptive field (pRF). Current models of the pRF do not account for a center–surround organization or negative fMRI responses. Here, we extend the pRF model by adding surround suppression. Where the conventional model uses a circular symmetric Gaussian function to describe the pRF, the new model uses a circular symmetric difference-of-Gaussians (DoG) function. The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the two models shows an increased variance explained for the DoG model. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The improvement of the fits was particularly striking in the parts of the fMRI signal below baseline. Estimates of the pRF surround size show an increase with eccentricity and across visual areas V1/2/3. For the suppression index, which is based on the ratio between the volumes of the two Gaussians, we show a decrease over visual areas V1 and V2. Using non-invasive fMRI, this method makes it possible to test assumptions about center–surround receptive fields in human subjects. 

Introduction
Antagonistic receptive fields are found throughout the visual system. In the visual cortex, the responses to stimulation in the classical receptive fields can be modulated by stimulation in the extra-classical receptive field. These modulations can be excitatory or inhibitory and have been characterized in detail by electrophysiological and psychophysical studies (for review, see Allman, Miezin, & McGuinness, 1985; Carandini, 2004; Cavanaugh, Bair, & Movshon, 2002; Fitzpatrick, 2000; Hubel & Wiesel, 1968). Recent human brain imaging techniques have also found evidence of these center–surround organizations. Using functional magnetic resonance imaging (fMRI), center–surround configurations have been implicated in the amplitude changes of the fMRI signal due to stimulus size (Kastner et al., 2001; Nurminen, Kilpelainen, Laurinen, & Vanni, 2009; Press, Brewer, Dougherty, Wade, & Wandell, 2001), surround (Tajima et al., 2010; Williams, Singh, & Smith, 2003; Zenger-Landolt & Heeger, 2003), or contextual modulations (Dumoulin & Hess, 2006; Harrison, Penny, Ashburner, Trujillo-Barreto, & Friston, 2007; Kastner et al., 2001; Murray, Kersten, Olshausen, Schrater, & Woods, 2002). 
In addition, when measuring recording sites in the vicinity of regions that are positively correlated with a stimulus manipulation, negative Blood Oxygenation Level-Dependent (BOLD) responses (NBRs) have been reported in the visual cortex (Shmuel, Augath, Oeltermann, & Logothetis, 2006; Shmuel et al., 2002; Smith, Williams, & Singh, 2004). These NBRs are defined as BOLD responses below those elicited by viewing mean luminance gray (0% contrast). In addition, when identifying or reconstructing visual stimuli from the fMRI signals, local cortical sites may contribute either positively or negatively (Kay, Naselaris, Prenger, & Gallant, 2008; Miyawaki et al., 2008). The coupling between measured fMRI responses and the underlying neural activity is complex (for review, see Logothetis & Wandell, 2004), and neural inhibition per se may not necessarily result in a decrease of the BOLD response. Studies combining electrophysiology and fMRI report that these NBRs are caused by decreases in overall neural activation (Shmuel et al., 2006, 2002). They state that the NBR can be caused by suppression of local neuronal activity, by a decrease in afferent input, or both, in each case driven by the activation of the positively responding regions. These results suggest that the neural center–surround organization influences the fMRI signal and also suggest that the spatial center–surround profile of the recorded neural population may be resolvable at the resolution of fMRI. 
Here, we extended a computational method for estimating receptive fields of a population of neurons (Dumoulin & Wandell, 2008) to measure center–surround configurations throughout the visual cortex. Since a receptive field measured with fMRI is estimated from a neuronal population instead of a single neuron, we refer to it as the population receptive field (pRF; Dumoulin & Wandell, 2008; Victor, Purpura, Katz, & Mao, 1994). This method fits a model of the pRF to the fMRI data. We compare two models of the pRF. The original pRF method describes the pRF with one single circular symmetric Gaussian (OG) in the visual field. This model can only represent positive responses to stimulation in any region of the visual field and cannot, therefore, explain negative fMRI responses. We extended the current pRF model by incorporating a suppressive surround using a Difference-of-Gaussians (DoG) function to represent the pRF. This model can account for suppression effects. 
The DoG model yielded improved performance in predicting the fMRI time series. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. We attribute the absence of a measurable center–surround configuration in later visual areas to technical limitations, which would hide this configuration at the resolution of fMRI though it may be present at neural resolutions. The properties of the center–surround configurations varied systematically across V1/2/3. The pRF center sizes increased with eccentricity and visual field maps, similar to previous reports (Amano, Wandell, & Dumoulin, 2009; Dumoulin & Wandell, 2008; Harvey & Dumoulin, 2011; Kay et al., 2008; Winawer, Horiguchi, Sayres, Amano, & Wandell, 2010). In addition, we show that the surround size of the pRF increases with eccentricity and up the visual field map hierarchy. To compute the total amount of suppression, we need to take into account the surround volume not just its size. Therefore, we defined the suppression index by the volume ratio of the two individual Gaussian components of the DoG model. The suppression index decreases with eccentricity and also decreases up the visual field map hierarchy. These results extend the notion of center–surround configurations to the spatial scale of fMRI measurements. This gives the ability to measure and test assumptions of center–surround population receptive fields in human subjects. 
Methods
Subjects
Four subjects (one female, ages 21–37 years) participated in this study. All subjects had normal or corrected-to-normal visual acuity. All studies were performed with the informed written consent of the subjects and were approved by the Human Ethics Committee of the University Medical Center Utrecht. 
Stimulus presentation
The visual stimuli were generated in Matlab using the PsychToolbox (Brainard, 1997; Pelli, 1997) on a Macintosh MacBook Pro. Stimuli were back-projected onto a screen located outside the MRI bore, and subjects viewed the screen via mirrors placed on top of the scanner head coil. The stimulus radius was 6.25° of visual angle. 
Stimulus description
For the experiment, we used moving bar apertures that revealed a moving checkerboard pattern (100% contrast) moving parallel to the bar orientation (Figure 1). The width of the bar subtended 1/4th of the stimulus radius (1.56°). Four bar orientations (0°, 45°, 90°, and 135°) and two motion directions for each bar were used, giving a total of 8 different bar configurations within a given scan. The bar swept across the stimulus aperture in 20 steps (each 0.625° and 1.5 s), each pass taking 30 s. After each horizontally or vertically oriented sweep, a 30-s period of mean luminance (0% contrast) was presented. This gives a total of 4 blocks of mean luminance during each scan, presented at evenly spaced intervals. 
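The timing described above can be checked with a short sketch; all values are taken directly from the text:

```python
# Timing of the stimulus sequence described above (values from the text).
n_steps = 20              # bar positions per sweep
step_s = 1.5              # duration of each step, in seconds
sweep_s = n_steps * step_s            # one bar pass: 30 s
n_sweeps = 8                          # 4 orientations x 2 motion directions
n_blanks = 4                          # mean-luminance (0% contrast) blocks
blank_s = 30.0
total_s = n_sweeps * sweep_s + n_blanks * blank_s
print(sweep_s, total_s)   # one pass takes 30 s; the full sequence lasts 360 s
```

This reproduces the 360-s sequence duration stated in the Figure 1 caption.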
Figure 1
 
A schematic illustration of the stimulus sequence. The bar apertures revealed a high-contrast checkerboard (100%). The checkerboard rows moved in opposite directions along the orientation of the bar. The bars moved through the visual field with 8 different orientation–motion configurations as indicated by the arrows (arrows not present in actual stimulus). The total stimulus sequence lasted 360 s.
To ensure that subjects fixated at the center of the screen, a small fixation dot (0.125° radius) was presented in the middle of the stimulus. This fixation dot changed color (red–green) at random time intervals, and subjects were instructed to respond to each color change. An air pressure button was used to record these responses. When performance on the fixation task was below 75% correct, the scan was discarded (one scan in total). 
Eye movement data examining fixation during the presentation of this stimulus were measured outside the scanner using a highly accurate Eyelink II system (SR Research, Mississauga, Ontario, Canada). Values obtained during moving bar presentations and the mean luminance blank periods were not significantly different, and so presentation of the moving bar did not appear to affect eye movements. Subjects had a fixation position distribution with a standard deviation of 0.22°, indicating highly accurate fixation. Because there is inevitably some noise in the measurement of eye positions, this value represents an upper bound on the actual variation of fixation positions (Harvey & Dumoulin, 2011). 
Functional imaging and processing
MRI data were acquired using a Philips Achieva 3T scanner with an 8-channel SENSE head coil. The participants were scanned with a 2D echo-planar imaging sequence with 24 slices oriented perpendicular to the calcarine sulcus with no gap. The following parameters were used: repetition time (TR) = 1500 ms, echo time (TE) = 30 ms, and flip angle = 70°. The functional resolution was 2.5 × 2.5 × 2.5 mm, with a field of view (FOV) of 224 × 224 mm. 
The duration of each scan was 372 s (248 time frames), of which the first 12 s (8 time frames) were discarded due to start-up magnetization transients. For every subject, 9 or 10 scans were acquired during the same session. These repeated scans were averaged to obtain a high signal-to-noise ratio. 
Foam padding was used to minimize head movement. The functional images were corrected for head movement between and within the scans (Nestares & Heeger, 2000). For computation of the head movement between scans, the first functional volumes for each scan were aligned. Within scan, motion correction was computed by aligning the frames of a scan to the first frame. 
Anatomical imaging and processing
The T1-weighted MRI images were acquired in a separate session using an 8-channel SENSE head coil. The following parameters were used: TR/TE/flip angle = 9.88 ms/4.59 ms/8°. The scans were acquired at a resolution of 0.79 × 0.80 × 0.80 mm and were resampled to a resolution of 1 mm³ isotropic. The functional MRI scans were aligned with the anatomical MRI using an automatic alignment technique (Nestares & Heeger, 2000). From the anatomical MRI, white matter was automatically segmented using the FMRIB Software Library (FSL; Smith, Jenkinson et al., 2004). After the automatic segmentation, the white matter was hand-edited to minimize segmentation errors (Yushkevich et al., 2006). The gray matter was grown from the white matter to form a 4-mm layer surrounding it. A smoothed 3D cortical surface was rendered by reconstructing the cortical surface at the border of the white and gray matter (Wandell, Chial, & Backus, 2000). 
pRF model-based analysis
The model estimates a pRF for every cortical location using a method previously described (Dumoulin & Wandell, 2008). In short, the method estimates the pRF by combining the measured fMRI time series with the position time course of the visual stimulus. A prediction of the time series is made by calculating the overlap of the pRF and the stimulus energy and a convolution with the hemodynamic response function (HRF). The optimal model parameters are chosen by minimizing the residual sum of squares between the predicted and measured time series. 
We compared two different models of the pRF (Figure 2). The conventional pRF model consists of one circular symmetric Gaussian (OG). The OG model has four parameters: position (x0, y0), size (σ1), and amplitude (β1). Here, we extend the pRF model with an inhibitory surround. This model represents the pRF using a difference-of-Gaussians (DoG) function: the Gaussian with the larger standard deviation is subtracted from the one with the smaller standard deviation. The centers of the two Gaussians are at the same position; the DoG therefore adds two extra parameters to the model, the size of the negative surround (σ2) and the amplitude of the surround (β2). All parameters are in degrees of visual angle (°), except for the amplitudes (% BOLD/deg²/s). The restrictions of the DoG model are that σ2 is larger than or equal to σ1, and that β2 is negative with an absolute value smaller than β1. These restrictions ensure a center–surround configuration. 
Figure 2
 
Flowchart of the fitting procedure of the two pRF models. The left panels illustrate the conventional one Gaussian pRF model, while the right panels illustrate the difference-of-Gaussians pRF model. The middle panels indicate the analysis input, shared between both pRF models, consisting of the stimulus aperture and fMRI data. Convolution of a pRF model with the stimulus sequence predicts the fMRI time series. The pRF model parameters are estimated by minimizing the sum-of-squared errors between the predicted and the measured time series.
Thus, the DoG pRF model, g(x, y), is defined as the difference of two Gaussians, g+(x, y) and g−(x, y): 
$g_{+}(x, y) = \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma_1^2}\right),$
(1)
 
$g_{-}(x, y) = \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma_2^2}\right),$
(2)
 
$g(x, y) = g_{+}(x, y) - g_{-}(x, y).$
(3)
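Equations 1–3 can be sketched numerically. A minimal sketch with NumPy; the grid spacing, pRF position, and sizes below are illustrative values, not the paper's:

```python
import numpy as np

def dog_prf(X, Y, x0, y0, sigma1, sigma2):
    """Center (Eq. 1), surround (Eq. 2), and DoG (Eq. 3) pRF profiles.
    The amplitudes beta1 and beta2 are applied later, in the GLM step."""
    d2 = (X - x0) ** 2 + (Y - y0) ** 2
    g_pos = np.exp(-d2 / (2 * sigma1 ** 2))   # Eq. 1: center Gaussian
    g_neg = np.exp(-d2 / (2 * sigma2 ** 2))   # Eq. 2: surround Gaussian
    return g_pos, g_neg, g_pos - g_neg        # Eq. 3: difference of Gaussians

# Visual field sampled at 0.25 deg over the 6.25 deg stimulus radius
xs = np.arange(-6.25, 6.5, 0.25)
X, Y = np.meshgrid(xs, xs)
g_pos, g_neg, g = dog_prf(X, Y, x0=1.0, y0=0.5, sigma1=0.8, sigma2=2.0)

# The familiar center-surround shape appears once the (hypothetical)
# amplitudes beta1 = 1.0 > |beta2| = 0.3 from the GLM step are applied:
prf = 1.0 * g_pos + (-0.3) * g_neg
```

Note that the unit-amplitude difference g alone is non-positive everywhere (since σ2 ≥ σ1); the positive center only emerges through the amplitude restriction β1 > |β2|.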
The next step is to define the stimulus. Assuming that all parts of the stimulus contribute equally to the fMRI response (Engel, Glover, & Wandell, 1997), the stimulus is defined as a binary indicator that marks each position of the stimulus in the visual field over time, s(x, y, t). Combining the position of the pRF in the visual field with the stimulus positions over time, the response of the pRF, r(t), is obtained by calculating the overlap for each Gaussian. We obtain two values of r(t): r+(t) for the positive Gaussian and r−(t) for the negative Gaussian: 
$r_{+}(t) = \sum_{x,y} s(x, y, t)\, g_{+}(x, y),$
(4)
 
$r_{-}(t) = \sum_{x,y} s(x, y, t)\, g_{-}(x, y).$
(5)
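Under the binary-aperture assumption, the overlap of Equations 4 and 5 reduces to a masked sum. A sketch with made-up bar positions (the three "frames" below are illustrative, not the actual stimulus sequence):

```python
import numpy as np

npix = 51
xs = np.linspace(-6.25, 6.25, npix)
X, Y = np.meshgrid(xs, xs)

# Unit-amplitude center and surround Gaussians (Eqs. 1-2), illustrative sizes
g_pos = np.exp(-((X - 1.0) ** 2 + Y ** 2) / (2 * 0.8 ** 2))
g_neg = np.exp(-((X - 1.0) ** 2 + Y ** 2) / (2 * 2.0 ** 2))

# Binary stimulus apertures s(x, y, t): three toy frames of a vertical bar
s = np.zeros((3, npix, npix))
s[0][:, np.abs(xs + 4.0) < 0.78] = 1.0   # bar far from the pRF center
s[1][:, np.abs(xs - 1.0) < 0.78] = 1.0   # bar over the pRF center
# frame 2: mean luminance (blank), aperture stays zero

r_pos = (s * g_pos).sum(axis=(1, 2))     # Eq. 4: overlap with the center
r_neg = (s * g_neg).sum(axis=(1, 2))     # Eq. 5: overlap with the surround
```

A distant bar drives the wide surround more than the narrow center, which is exactly how the model can produce net-negative responses.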
The prediction of the time series, p(t), is then calculated by convolving the response of the pRF, r(t), with the HRF, h(t). Two predictions of the time series are made: one for the positive Gaussian and one for the negative Gaussian of the pRF. Since negative BOLD responses exhibit a similar HRF to positive BOLD responses (Shmuel et al., 2006), the same HRF, h(t), is used for both parts of the pRF: 
$p_{+}(t) = r_{+}(t) * h(t),$
(6)
 
$p_{-}(t) = r_{-}(t) * h(t),$
(7)
where * denotes convolution. Assuming a linear relationship between the blood oxygenation levels and the MR signal (Birn, Saad, & Bandettini, 2001; Boynton, Engel, Glover, & Heeger, 1996; Hansen, David, & Gallant, 2004), a scaling factor β is applied to p(t). This scaling factor accounts for the unknown units of the fMRI signal and represents the amplitude of the pRF. The two amplitudes of the Gaussians are calculated using a general linear model (GLM) in which measurement noise, e, is taken into account as well: β1 for the positive amplitude and β2 for the negative amplitude: 
$y(t) = p_{+}(t)\,\beta_1 + p_{-}(t)\,\beta_2 + e.$
(8)
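Equation 8 is an ordinary GLM, so the two amplitudes can be recovered by least squares. In this sketch, the predictors are synthetic stand-ins for p+(t) and p−(t) with known ground-truth amplitudes, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200.0)

# Synthetic stand-ins for the HRF-convolved predictors of Eqs. 6-7:
# narrow center responses and wider surround responses around shared events
events = np.array([40.0, 110.0, 170.0])
p_pos = np.exp(-(t[:, None] - events) ** 2 / (2 * 4.0 ** 2)).sum(axis=1)
p_neg = np.exp(-(t[:, None] - events) ** 2 / (2 * 12.0 ** 2)).sum(axis=1)

# Simulated measurement with known amplitudes and a little noise
beta1_true, beta2_true = 1.5, -0.4
y = beta1_true * p_pos + beta2_true * p_neg + 0.01 * rng.standard_normal(t.size)

# Eq. 8 as a two-column design matrix; least squares recovers beta1, beta2
A = np.column_stack([p_pos, p_neg])
(beta1, beta2), *_ = np.linalg.lstsq(A, y, rcond=None)
```

The DoG restrictions (β1 > 0, β2 < 0, |β2| < β1) would be enforced on top of this fit in the actual model search.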
The optimal parameters of the pRF are estimated by minimizing the residual sum of squares (RSS): 
$\mathrm{RSS} = \sum_t \left(y(t) - \left(p_{+}(t)\,\beta_1 + p_{-}(t)\,\beta_2\right)\right)^2.$
(9)
To compare the performance of the two different models in predicting the fMRI time series, we computed the percentage of variance in the measured fMRI time series that was explained by the prediction (variance explained). 
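One common way to compute variance explained is the coefficient-of-determination form below; this is an assumption for illustration, as the paper does not spell out its exact normalization:

```python
import numpy as np

def variance_explained(y, pred):
    """Percentage of variance in the measured series y captured by pred.
    Standard R^2 form; the paper's exact normalization may differ."""
    rss = np.sum((y - pred) ** 2)                 # Eq. 9 residual sum of squares
    tss = np.sum((y - np.mean(y)) ** 2)           # total variance around the mean
    return 100.0 * (1.0 - rss / tss)

y = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
perfect = variance_explained(y, y)                       # a perfect fit
flat = variance_explained(y, np.full_like(y, y.mean()))  # a constant fit
```

A perfect prediction gives 100%, while predicting only the mean gives 0%.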
Since not all voxels necessarily show a center–surround configuration, it should still be possible to obtain a pRF without a suppressive surround. Therefore, for every voxel where the OG model fit better, the DoG estimate was replaced with the pRF estimated by the OG model. 
For further technical and implementation details, see Dumoulin and Wandell (2008). 
Baseline estimation
The fMRI signal has no inherent units, and the baseline activation must be deduced indirectly from the parts of the fMRI time series where a baseline stimulus was shown (mean luminance, 0% contrast). The conventional pRF model-based analysis estimates this baseline level as a component within the GLM fit over the whole time series. Initial observations indicated different baseline parameters for the two pRF models (Supplementary Figure 1). Different baseline values suggest that the conventional pRF model may compensate for the negative fMRI signal by lowering its estimate of baseline activity. To ensure that differences in variance explained between the models are not affected by differences in baseline estimation, the time series of the mean luminance blocks are used to estimate the baseline. The standard HRF model (Glover, 1999) takes about 22.5 s to diminish to 0.55% of its maximal amplitude. Therefore, to remove all fMRI signals caused by the stimuli, the first 22.5 s of each mean luminance block are discarded, and the remaining 7.5 s are used to estimate the baseline. Since every scan contains 4 sequences of mean luminance (Figure 1), a total of 30 s of the fMRI time series is used to estimate the baseline activation level. This baseline correction is required when comparing the two models using the same baseline but is not necessary when estimating each model independently. 
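The baseline bookkeeping above (TR = 1.5 s, 30-s blank blocks, 22.5 s discarded per block) can be sketched as follows; the blank-onset times are hypothetical, not the actual scan timing:

```python
TR = 1.5                  # seconds per fMRI frame
block_s = 30.0            # duration of each mean-luminance block
discard_s = 22.5          # HRF tail discarded at the start of each block
keep_s = block_s - discard_s          # 7.5 s kept per block

# Hypothetical onsets of the 4 blank blocks within a scan
onsets_s = [60.0, 150.0, 240.0, 330.0]
baseline_frames = [int((t0 + discard_s) / TR) + k
                   for t0 in onsets_s
                   for k in range(int(keep_s / TR))]
# 4 blocks x 5 frames = 20 frames = 30 s of baseline data, as in the text
```

Averaging the fMRI signal over `baseline_frames` would give the baseline level used to anchor both models.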
Region of interest
Using the estimated position parameters x0 and y0 of the pRF model, values for the polar angle (atan(y0/x0)) and eccentricity (√(x0² + y0²)) can be calculated. By rendering these polar angle and eccentricity maps onto the inflated cortical surface (Wandell et al., 2000), the borders of the visual areas can be drawn on the basis of their location in the visual field (DeYoe et al., 1996; Engel et al., 1997; Sereno et al., 1995; Wandell, Dumoulin, & Brewer, 2007). Visual areas V1, V2, V3, V3a, hV4, and LO-1 are defined as regions of interest (ROIs). 
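The coordinate conversion is straightforward; in this sketch, `np.arctan2` is used instead of a bare atan(y0/x0) so the angle covers the full circle and x0 = 0 is handled:

```python
import numpy as np

def prf_polar(x0, y0):
    """Polar angle and eccentricity of a pRF center (degrees of visual angle).
    arctan2 resolves the quadrant and x0 = 0, unlike atan(y0/x0)."""
    return np.arctan2(y0, x0), np.hypot(x0, y0)

angle, ecc = prf_polar(3.0, 4.0)   # a pRF center 5 deg from fixation
```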
Surround suppression index
To quantify the effect of the suppressive surround on the pRF, we use a suppression index based on the volumes under the two Gaussians that make up the pRF (adapted from Sceniak, Hawken, & Shapley, 2001). We only take into account the volume of the two Gaussians that falls inside the stimulus range (6.25° radius): 
$\mathrm{SI} = -\frac{\beta_2 \cdot \sigma_2^2}{\beta_1 \cdot \sigma_1^2}.$
(10)
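A sketch of this index in code. Two caveats: the minus sign is written out here so the index comes out positive, since β2 < 0 under the model restrictions, and the truncation of the volumes to the 6.25° stimulus radius is omitted from this untruncated version:

```python
def suppression_index(beta1, sigma1, beta2, sigma2):
    """Untruncated volume ratio of the two Gaussians (cf. Eq. 10).
    The volume under a 2D Gaussian is proportional to beta * sigma^2,
    and beta2 < 0, so the leading minus yields a positive index."""
    return -(beta2 * sigma2 ** 2) / (beta1 * sigma1 ** 2)

# Illustrative parameters: a surround twice as wide, a quarter as strong
si = suppression_index(beta1=1.0, sigma1=1.0, beta2=-0.25, sigma2=2.0)
```

With these illustrative values the surround volume exactly matches the center volume, giving an index of 1.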
 
Results
The DoG pRF model captures systematic events of the fMRI time series
The pRF models give a prediction of the measured fMRI time series for every voxel. Figure 3A shows the prediction of the time series for the OG model for a sample voxel in V1. Comparison of the time series with the prediction of the model shows that the OG model fails to explain many parts of the time series that fall below the baseline activity (<0% BOLD signal); these are indicated by the gray arrows. Aside from the parts of the prediction that show the negative undershoot of the HRF, the OG model is unable to predict responses below the baseline. The prediction for the identical fMRI time series made by the pRF of the DoG model is shown in Figure 3B. Where the OG model fails to capture the negative fMRI signal, the DoG model shows a clear improvement of the fit. This improvement is particularly evident at the negative parts of the time series. 
Figure 3
 
Example of the model fits for a cortical location (voxel) in V1 for both models of the pRF. The gray dotted lines indicate the fMRI time series; the black solid lines show the fits of the models. Gray regions indicate mean luminance presentations, whereas the dark gray regions indicate parts of the time series used to estimate the baseline (0). The circular patches below indicate the visual field (gray), the estimated pRF for the voxel (the red part for the positive part of the pRF and the blue part for the negative part of the pRF), and the stimulus at a given time. (A) The model fits of the OG model of the pRF. The OG model shows an accurate fitting for the positive parts of the time series. The parts where the OG model misses the ability to follow the time series are particularly evident for the negative parts of the time series. The gray arrows indicate these parts. (B) The model fits of the DoG model of the pRF. A clear improvement of the fit of the DoG model can be seen at the negative parts of the fMRI signal. In parts where the OG model misses the ability to give a correct prediction, the DoG model that incorporated a suppressive surround is able to give an accurate prediction of the time series. The models explain 49% and 72% of the variance, respectively. (C) From the same subject, this shows the distribution of the residuals from all the voxels in V1. To calculate the residuals, the predicted time series is subtracted from the measured time series (for all time points). The gray line shows the residuals for the predicted time series of the OG model; the black line shows the residuals of the DoG model. The arrows indicate the means of both curves. The mean of the residuals of the OG model is −0.23%; for the DoG model, this value is −0.07%. Where the OG model shows a negative shift in its residuals, the DoG model shows a more balanced distribution of the residuals around zero.
To quantify these observations in the fits of the OG model and the DoG model, we computed the difference between the measured fMRI time series and the predicted time series (residuals). Figure 3C shows the distribution of these residuals for both models of the pRF from one subject's V1 (in percent BOLD signal change). The distribution height is normalized to the peak of the DoG model. A residual value of zero indicates that the prediction matched the measured fMRI signal perfectly. The arrows indicate the means of both curves: −0.23%/−0.20% and −0.07%/−0.05% for the mean/median of the OG and DoG models, respectively. The DoG model's residuals are distributed more tightly around zero and with a higher peak than those of the OG model, indicating a better fit to the time series, whereas the residuals of the OG model are centered on a negative value and slightly skewed. 
The DoG model explained more of the fMRI signal variance in V1, V2 and V3
To quantify the goodness of fit of the pRF models, we compared the variance explained of the two models. The difference in variance explained between the DoG model and the OG model is depicted on the cortical surface in Figure 4. For all subjects, only values with a minimal variance explained of 40% in the OG model were plotted. This figure illustrates that for all subjects the largest gain in variance explained for the DoG model is found in V1. The difference in variance explained between the two models decreases in later visual areas. 
Figure 4
 
The difference in variance explained of the DoG model and the OG model depicted on inflated cortical surfaces. We show the left occipital lobe of all four subjects from a lateral (left panels) and a medial (right panels) perspective. The largest difference in variance explained is found in visual area V1. This difference seems to decrease in later visual areas. Similar results were obtained in the right hemisphere (not shown). Only locations where the OG model explained more than 40% of the variance in the time series are shown.
To quantify these visual field map differences, the mean variance explained in the different visual areas is calculated for both models. Figure 5 shows the normalized variance explained of the different identified visual areas for both models. The averaged data are from all subjects for recording sites where both pRF models explained more than 30% of the variance in the fMRI time series between 1.5- and 6-deg eccentricity. We corrected for overall individual differences and then normalized the variance explained relative to the OG model. Similar to Figure 4, this figure shows an increase of variance explained for the DoG model. For V1, a relative improvement of ∼14% is found for the DoG model compared to the OG model (the absolute improvement is ∼7%). This improvement decreases to ∼12% in V2 and V3, and to ∼1% in the later visual areas V3a, hV4, and LO-1. 
Figure 5
 
The variance explained for both the models of the pRF normalized to the OG model. In V1, the increase of variance explained for the DoG model is found to be ∼14% with respect to the OG model. This improvement decreases in later visual areas. In V2 and V3, the improvement dropped to ∼12%. The difference becomes even less in later visual areas V3a, hV4, and LO-1 (∼1%). The averaged data are from all subjects, and the error bars reflect 1 standard error of the mean.
pRF size increases with eccentricity and in V1–V3
For both the OG and the DoG models, estimates of the positive (center) pRF size are obtained. Both models estimate the standard deviation (σ1) of a Gaussian to represent the positive part of the pRF. The DoG model subtracts a second Gaussian from the positive one to obtain the total pRF. This subtraction changes the effective positive pRF size. For this reason, the full width at half-maximum (FWHM) was used to compare the effective positive pRF sizes. Figure 6 shows the relationship between eccentricity and FWHM in V1–V3. The averaged data are from the voxels of all subjects with a variance explained >30% and eccentricity values between 1.5 and 6 deg. The lines are fit by a linear regression analysis. Both the linear regression and the averaging procedure are weighted by the variance explained of the individual voxels. The error bars represent one standard error of the mean. For both models of the pRF, FWHM increases with eccentricity and from V1 to V3. 
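A minimal numerical way to get the FWHM of the effective center is to scan a radial cut through the DoG profile (the parameters below are illustrative, and β2 < 0 as the model requires):

```python
import numpy as np

def radial_profile(r, beta1, sigma1, beta2, sigma2):
    # 1D radial cut through the DoG pRF (beta2 < 0 for a suppressive surround)
    return (beta1 * np.exp(-r ** 2 / (2 * sigma1 ** 2))
            + beta2 * np.exp(-r ** 2 / (2 * sigma2 ** 2)))

def fwhm(beta1, sigma1, beta2, sigma2, r_max=20.0, n=200001):
    """Full width at half-maximum of the center, found on a fine radial grid.
    The peak of a center-surround DoG sits at r = 0."""
    r = np.linspace(0.0, r_max, n)
    p = radial_profile(r, beta1, sigma1, beta2, sigma2)
    r_half = r[np.argmax(p < p[0] / 2.0)]   # first radius below half the peak
    return 2.0 * r_half

# Sanity check: a pure Gaussian (beta2 = 0) has FWHM = 2*sqrt(2*ln 2)*sigma
```

Subtracting the surround narrows the effective center, which is why FWHM rather than σ1 is the fair size measure to compare across the two models.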
Figure 6
 
The relationship between eccentricity and pRF size in visual field maps V1–V3 for both models of the pRF. The averaged data are from all subjects, and the error bars reflect the standard error of the mean. The lines are fit from the data by a linear regression analysis. The pRF size increases with eccentricity and from V1–V3. These pRF sizes are similar between the two pRF models.
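For a single Gaussian the FWHM follows directly from σ (FWHM = 2√(2 ln 2)·σ), but for a DoG the effective center width must be found numerically. A minimal Python sketch (parameter values are illustrative, not taken from the data) shows why the FWHM, rather than σ1, is the appropriate basis for comparison: subtracting a broad surround narrows the effective center even when σ1 is held fixed.

```python
import math

def dog(r, b1, s1, b2, s2):
    """Radial difference-of-Gaussians profile: center minus surround."""
    return b1 * math.exp(-r**2 / (2 * s1**2)) - b2 * math.exp(-r**2 / (2 * s2**2))

def fwhm(profile, r_max=50.0, tol=1e-6):
    """Full width at half-maximum of a radial profile peaking at r = 0.

    Bisection on the half-maximum crossing; assumes the profile exceeds
    half-maximum only on an interval [0, r*)."""
    half = profile(0.0) / 2.0
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if profile(mid) > half:
            lo = mid
        else:
            hi = mid
    return 2.0 * lo  # width (diameter), not radius

sigma1 = 1.0
og_fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma1          # analytic, ~2.355
dog_fwhm = fwhm(lambda r: dog(r, 1.0, sigma1, 0.3, 3.0))   # ~1.9, narrower
```

With identical σ1, the DoG center is narrower than the OG center, which is why the text compares the models' effective center sizes via FWHM.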
Measured sizes of the surround and suppression index
Besides producing measurements for the positive pRF size, the DoG model gives estimates of the suppressive surround size as well. As a measure for the size of the surround, we took the distance between the points where the pRF reaches its minimum amplitude. The relationship between eccentricity and the size of the negative surround is shown in Figure 7A. We see an increase of the size of the surround with eccentricity and an increase of the surround from visual areas V1 to V3. 
Figure 7
 
The relationship between eccentricity and pRF surround in visual field maps V1–V3 for both models of the pRF. (A) The relationship between eccentricity and the size of the surround in visual field maps V1–V3. As a measure for the size of the surround, we take the distance between the points where the pRF reaches its minimum amplitude. The averaged data are from all subjects, and the error bars represent 1 standard error of the mean. The size of the surround increases with eccentricity and over visual areas V1–V3. (B) The relationship between eccentricity and the suppression index. The suppression index is calculated by dividing the volumes within the stimulus range of the two estimated Gaussians that make up the pRF (Equation 10). The averaged data are from all subjects, and the error bars represent 1 standard error of the mean. The suppression index decreases over visual areas V1–V3. In visual areas V1 and V2, the suppression index decreases with eccentricity, whereas in V3 it increases with eccentricity.
Figure 7B shows the relationship between eccentricity and the suppression index for visual field maps V1–V3. The suppression index is calculated by dividing the volumes of the two estimated Gaussians that make up the pRF (Equation 10). Since the pRFs can extend beyond the stimulus range, we only take into account the volume of each Gaussian that falls inside the stimulus range. The suppression index decreases over visual areas V1–V3. In visual areas V1 and V2, the suppression index decreases with eccentricity, whereas in V3 it increases with eccentricity. For both the surround size and the suppression index, the averaged data are from the voxels with a variance explained >30% and eccentricities between 1.5 and 6 deg. The lines are fit by a linear regression analysis; both the linear regression and the averaging procedure are weighted by the variance explained of the individual voxels. The error bars represent one standard error of the mean. For calculating the average surround size, we only take the values of the voxels that show surround suppression; voxels that do not show surround suppression have a suppression index of 0. 
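Both summary measures have closed forms for a circular DoG. The Python sketch below (parameter values are illustrative) computes the surround size as twice the radius at which the radial derivative of the DoG vanishes, and the suppression index as the ratio of the two Gaussian volumes restricted to the stimulus radius, following the logic of Equation 10:

```python
import math

def surround_size(b1, s1, b2, s2):
    """Distance between the two minima of a radial DoG profile.

    Setting the radial derivative of
    f(r) = b1*exp(-r^2/2s1^2) - b2*exp(-r^2/2s2^2)
    to zero gives the radius of the minimum in closed form
    (requires b1*s2^2 > b2*s1^2)."""
    r2 = 2 * math.log(b1 * s2**2 / (b2 * s1**2)) / (1 / s1**2 - 1 / s2**2)
    return 2 * math.sqrt(r2)

def suppression_index(b1, s1, b2, s2, R):
    """Volume ratio of surround to center Gaussian inside stimulus radius R.

    The volume of a 2D Gaussian b*exp(-r^2/2s^2) within radius R is
    b * 2*pi*s^2 * (1 - exp(-R^2/2s^2))."""
    def vol(b, s):
        return b * 2 * math.pi * s**2 * (1 - math.exp(-R**2 / (2 * s**2)))
    return vol(b2, s2) / vol(b1, s1)
```

Truncating the volumes to the stimulus range keeps the index stable when the surround extends far beyond the stimulus; the full-volume ratio would include the long Gaussian tails and inflate the index, as discussed below.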
Cross-validation
Where the OG model represents the pRF by a single Gaussian, the pRF of the DoG model is the difference of two Gaussians. The second Gaussian adds two extra parameters to the model, σ2 and β2, giving it more degrees of freedom, which by itself may increase the variance explained. Because the increase in variance explained is mostly seen in V1–V3 but not in higher visual areas, we do not expect that the gain is caused solely by the extra parameters: if it were, one would expect to see the gain throughout visual cortex. 
Nevertheless, we show by cross-validation that the gain in variance explained is a robust finding and not merely a consequence of the extra parameters. In the training stage, we fitted the model on a subset of the data; in the validation stage, we evaluated these model parameters on the complementary part of the data. For this, we split the data into two independent subsets: the averaged time series of the even scans and the averaged time series of the odd scans. Models were fitted on both subsets, and the resulting parameters were evaluated against the complementary data set; the models fitted on the averaged time series of the even scans were evaluated on the averaged time series of the odd scans, and vice versa. Figure 8 shows the variance explained for the evaluated predictions of both the OG model and the DoG model on their complementary data sets. The variance explained in the validation stage is comparable to that in the training stage. The averaged data are from the voxels that have a variance explained >30% in both models and eccentricity values between 1.5 and 6 deg. The data are averaged over the subjects after normalization for individual differences. The error bars represent the 95% confidence interval. This result indicates that the DoG model captures time series characteristics that are systematically missed by the OG model. 
Figure 8
 
The variance explained of the model fits in the two stages of the cross-validation procedure: training and validation stages. The data were split into two different subsets. The data of the training stage give the variance explained of the models on the data that it was fitted to, where for the validation stage we evaluated these model parameters on a different subset of the data. The error bars represent the 95% confidence interval. This result indicates that the improvement of the DoG model fit captures systematic time series characteristics missed by the OG model and that this improvement is not due to the increased number of model parameters.
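The even/odd split can be sketched as follows; `fit_model` stands in for the actual pRF fitting stage (a hypothetical interface, not the authors' implementation):

```python
def variance_explained(prediction, data):
    """Proportion of variance in the measured time series captured
    by the model prediction: 1 - RSS/TSS."""
    mean = sum(data) / len(data)
    rss = sum((d - p) ** 2 for d, p in zip(data, prediction))
    tss = sum((d - mean) ** 2 for d in data)
    return 1 - rss / tss

def average_scans(scans):
    """Point-by-point average of repeated scan time series."""
    return [sum(t) / len(t) for t in zip(*scans)]

def cross_validate(scans, fit_model):
    """Fit on even-numbered scans, evaluate on odd, and vice versa.

    fit_model(timeseries) -> prediction is a placeholder for the
    pRF fitting procedure."""
    even = average_scans(scans[0::2])
    odd = average_scans(scans[1::2])
    pred_even = fit_model(even)   # trained on even scans ...
    pred_odd = fit_model(odd)     # ... and on odd scans
    return (variance_explained(pred_even, odd),   # validated on odd
            variance_explained(pred_odd, even))   # validated on even
```

A model that merely over-fits noise in its training subset would show a marked drop in variance explained at the validation stage; comparable values in both stages are the signature of a systematic signal.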
Discussion
We introduce a computational model using fMRI that captures center–surround configurations in the aggregate receptive fields of the underlying neuronal populations. This model extends the conventional pRF model (Dumoulin & Wandell, 2008), in which only the positively responding part of the pRF is estimated using one circular symmetric Gaussian (OG). Extending the pRF model with a center–surround configuration provides a more biologically plausible description of the pRF. Whereas the conventional OG model fails to explain parts of the time series where the signal drops below baseline, the DoG model captures these parts and yields a higher variance explained. This improvement was predominantly present in V1/2/3 and decreased in later visual areas. The method provides new opportunities to measure center–surround configurations in the human visual cortex using fMRI and to test assumptions about surround suppression. 
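In outline, the DoG pRF and its response prediction work as follows. This Python sketch omits the hemodynamic convolution of the full model and uses hypothetical parameter values; it is an illustration of the model structure, not the authors' implementation:

```python
import math

def dog_prf(x, y, x0, y0, b1, s1, b2, s2):
    """Difference-of-Gaussians pRF value at visual field position (x, y):
    excitatory center Gaussian minus broader suppressive surround Gaussian."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (b1 * math.exp(-r2 / (2 * s1**2))
            - b2 * math.exp(-r2 / (2 * s2**2)))

def predicted_response(stim_aperture, grid, prf_params):
    """Overlap of a binary stimulus aperture with the pRF.

    Summing the pRF over the stimulated visual field positions gives the
    neural response prediction for one time point; the full model would
    then convolve the resulting time course with a hemodynamic response
    function."""
    return sum(dog_prf(x, y, *prf_params)
               for (x, y) in grid if stim_aperture(x, y))
```

A stimulus falling only on the surround drives the prediction below baseline, which is how the DoG model accounts for negative fMRI responses that the single-Gaussian model cannot produce.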
Negative BOLD
In the cortex, many investigators have reported negative BOLD responses (NBRs; Harel, Lee, Nagaoka, Kim, & Kim, 2002; Shmuel et al., 2006, 2002; Smith, Williams et al., 2004; Tootell, Mendola, Hadjikhani, Liu, & Dale, 1998; Wade & Rowland, 2010). Visual stimuli can elicit NBRs, typically adjacent to positively responding regions. Studies combining electrophysiology with fMRI in early visual cortex couple this NBR to a decrease in neural activation (Shmuel et al., 2006, 2002). Furthermore, the NBR shows a similar onset time and time course as compared to the positive BOLD response. Consequently, we believe that the negative signals are neuronal in origin. We extend these observations by showing that the negative parts of the measured time series can be explained by a center–surround configuration of the pRF. 
Center–surround configurations
Suppression—a reduction in response arising from the introduction of a surround to a visual target—has been found to decrease fMRI signals when using narrowband stimuli (Beck & Kastner, 2007; McDonald, Seymour, Schira, Spehar, & Clifford, 2009; Wade & Rowland, 2010; Williams et al., 2003; Zenger-Landolt & Heeger, 2003), broadband stimuli patches (Kastner et al., 2001), and contextual modulations (Dumoulin & Hess, 2006; Harrison et al., 2007; Murray et al., 2002). Alternatively, fMRI responses elicited by viewing differently sized stimuli may also reveal surround suppression (Kastner et al., 2001; Nurminen et al., 2009; Press et al., 2001). As the stimulus size increases, the fMRI responses fail to increase linearly, which is interpreted as evidence for increasing suppressive contributions. These suppressive interactions may be local or long range, which are also referred to as overlay and surround suppression (Petrov, Carandini, & McKee, 2005; Petrov & McKee, 2006). Although our stimulus was not specifically designed to reveal suppressive interactions, we do reveal similar long-range suppressive contributions to the pRF consistent with a center–surround configuration. Therefore, we propose that our DoG model captures the suppressive surround of the underlying neural population. 
We found evidence for center–surround pRF configurations in V1 to V3 but not in later visual areas. Why did we not find a similar configuration in later visual areas? There are several possible explanations. First, the neuronal populations in later visual areas may show less surround suppression, or none at all. Yet, many other studies suggest surround suppression beyond V3 (for review, see Allman et al., 1985). To a certain extent, our inability to reveal center–surround configurations beyond V3 may be a consequence of the exact stimulus layout; however, we do not believe that this is the dominant reason. Second, the stimulus size may be too small: given the increasing pRF size in later visual areas, the maximum stimulus extent (6.25° radius) may not be enough to reconstruct the surround. Last, we may not be able to reconstruct the population center–surround configuration at the resolution of fMRI in later visual areas. Simulations (see Appendix A) indicate that increasing the position scatter of individual neuronal receptive fields decreases surround effects at the population level. Position scatter is thought to increase with receptive field size, typically in proportion to it (Albright & Desimone, 1987; Dow, Snyder, Vautin, & Bauer, 1981; Fiorani, Gattass, Rosa, & Sousa, 1989; Gattass & Gross, 1981; Hetherington & Swindale, 1999; Hubel & Wiesel, 1974). If position scatter were strictly proportional to RF size, this effect would not be seen; if, however, position scatter grows relative to RF size in later visual areas, the neuronal center–surround configuration may be lost at the resolution of fMRI. This is a limitation of our population receptive field approach, and other techniques may still reveal effects of the surround in later visual areas. 
For the suppression index, we see a decrease over visual areas V1–V3. Whereas the suppression index decreases with eccentricity in visual areas V1 and V2, in V3 it increases with eccentricity. When calculating the suppression index, we only take into account the volume of the two Gaussians that falls inside the stimulus range, which might account for the increase of the suppression index with eccentricity in V3. When taking the total volumes of the two Gaussians into account, without considering the stimulus range, we still see a decrease of the suppression index over visual areas V1–V3, and the suppression index then decreases with eccentricity in all of V1–V3. On the other hand, taking the entire volume of the Gaussians includes the long tails far beyond the stimulus range, leading to implausibly high suppression indices. Thus, as the surround size increases relative to the stimulus size, the suppression index becomes unstable. 
Center–surround size estimates
The pRF is defined as the region of the visual field that elicits an fMRI response (Dumoulin & Wandell, 2008). The conventional pRF model only considered positive fMRI responses. The DoG model estimates both the region of visual space that elicits positive responses (the center) and the region that elicits negative responses (the surround). The conventional OG model and the DoG model yield similar estimates for the center pRF size. Since the pRF size estimates of the OG model are comparable to electrophysiological measurements (Dumoulin & Wandell, 2008), this observation remains valid for the DoG model. 
In addition, the DoG model estimates the extent of the visual field that elicits negative fMRI responses. Our DoG model estimates give a mean surround size of ∼13° in V1. The mean surround size measured using electrophysiological approaches is ∼5° (Angelucci et al., 2002; Levitt & Lund, 2002), whereas psychophysics combined with fMRI yielded a mean surround size of ∼15° (Nurminen et al., 2009). Our measured surround sizes are most similar to those of the other fMRI study, whereas the electrophysiological studies report smaller surrounds. This may be explained by a difference in sampling size: fMRI measures from a larger neuronal population than electrophysiology, which adds variance in visual field position and thereby increases the pRF size and surround estimates (see Appendix A). 
Technical considerations
We found an average improvement of ∼7% in variance explained (∼14% relative) for the DoG model compared to the conventional OG model in V1. We believe that this improvement captures biologically relevant signal modulations and is not simply due to the increased degrees of freedom. First, the DoG model shows the most improvement in V1, with a decreasing difference in higher visual areas. If the increase in variance explained were due to the extra degrees of freedom of the DoG model, one would expect to see this gain over the whole cortex. Second, testing the DoG model on a different subset of the data than it was trained on gives similar values for the variance explained. This suggests that the increase in variance explained is not caused by the extra parameters over-fitting noise in the fMRI signal but instead has a biological origin. Third, the size estimates for both the positively and the negatively responding regions of the pRF are comparable to previous studies. Last, when comparing the goodness of fit of the two models, the difference in variance explained is not the only relevant measure. Inspection of the predicted time series of both models gives a more detailed illustration of their performance: the DoG model explains specific parts of the time series that the OG model systematically leaves unexplained, namely the parts where the stimulus overlaps the surround of the pRF. This might explain a gain of “only” ∼7% in variance explained for V1, since during a large part of the stimulus presentation either predominantly the positive center is stimulated or neither center nor surround is stimulated. In an experimental design that stimulates the surround of the pRF to a greater extent, a larger difference in variance explained between the two pRF models is expected. 
These arguments suggest that the improvement in variance explained is not solely caused by more degrees of freedom of the DoG model compared to the OG model. The DoG model gives a representation of the pRF that is biologically more plausible than the pRF of the OG model. 
Conclusion
The DoG model makes it possible to measure center–surround configurations in the visual cortex using fMRI. This extended model of the pRF uses a biologically more plausible way to represent the pRF than the OG model by taking surround suppression into account. During a scanning session of ∼1 h and using standard mapping stimuli, we are able to estimate the parameters of the DoG model. Several clinical conditions may alter pRF properties and, in particular, center–surround properties (e.g., Butler, Silverstein, & Dakin, 2008; Dakin & Frith, 2005; Marin, 2012; Yoon et al., 2009). This method provides us with a direct measure on the properties of the pRFs throughout the visual system, which could be useful for studying both healthy and clinical populations. 
Supplementary Materials
Supplementary PDF 
Appendix A
Neuronal position scatter
With fMRI, we cannot measure the activation of single neurons; instead, activation of a population of neurons is measured. The size of the neuronal sampling population is dependent on the voxel size and thus dependent on the resolution of fMRI. 
The pRF is estimated from the total neuronal population, so the properties of all the individual neurons influence the estimated pRF. In such a neuronal population, the position scatter of the individual neurons can vary (Hubel & Wiesel, 1974). The size of the estimated pRF has been reported to vary with both this neuronal position scatter, σ²position_variance, and the average receptive field size of the neuronal population, σ²neuronal_RF (Dumoulin & Wandell, 2008): 
σ²population_RF = σ²neuronal_RF + σ²position_variance + k, (A1)
where k is a constant factor for capturing non-neural contributions to the pRF. 
A higher variance in the neuronal positions will increase the pRF size, despite unchanged neuronal RF sizes. The assumption we examine here is that neuronal position scatter influences not only the estimated pRF size but also the ability to measure center–surround configurations of the pRFs. Figure A1 illustrates this idea. When the position variance of a neuronal population is low (Figure A1A), the resulting pRF is a clear circular receptive field that still shows a center–surround organization. With high position variance (Figure A1B), this center–surround organization disappears. Using Equation A1, we calculated the pRF from a total of 100,000 neuronal receptive fields, keeping the neuronal RF constant and varying the position variance (with k = 0). Figure A1C shows the pRFs calculated for neuronal populations with different position variances. Increasing position variance weakens the center–surround configuration of the pRF; specifically, the amplitude of the negative surround becomes less pronounced (Figure A1D). 
Figure A1
 
Simulations estimating the pRF from a population of individual neuronal receptive fields (RFs). The pRFs are calculated from 100,000 individual neuronal receptive fields using Equation A1. Individual RFs are identical, each with their own center–surround configuration. Panels (A) and (B) illustrate a few individual RFs with low and high position variances, respectively. The actual simulations used 100,000 RFs. (C) These individual RFs were summed to give rise to the pRF. For the neuronal population with a low variance in its positions (A), the neuronal RF center–surround configuration is reflected in its pRF. However, the pRF of the neuronal population with high position scatter (B) has lost the center–surround configuration at the pRF level. The relationship between the position variance of a neuronal population to the response strength of the suppressive surround of the estimated pRF (arrow in (C)) is shown in (D). For a neuronal population with a high position variance, the response strength of the suppressive surround approaches zero. These simulations suggest that the ability to measure center–surround configurations is lost in neuronal populations with a high neuronal position scatter.
This simulation illustrates that the neuronal position scatter will not only affect the pRF size but also the strength of the negative component of the pRF. For visual areas with a high neuronal position variance, the ability to measure center–surround configurations is therefore lost at the resolution of fMRI. 
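The simulation logic can be reproduced in a few lines of Python. In this sketch the RF parameters, the neuron count (far smaller than the 100,000 RFs used above), and the scatter values are all illustrative assumptions, chosen only to show the qualitative effect:

```python
import math
import random

def neuronal_rf(r2, b1=1.0, s1=0.5, b2=0.05, s2=1.5):
    """Identical center-surround neuronal RF (difference of Gaussians);
    r2 is the squared distance from the RF center."""
    return b1 * math.exp(-r2 / (2 * s1**2)) - b2 * math.exp(-r2 / (2 * s2**2))

def surround_amplitude(scatter_sd, n_neurons=2000, seed=0):
    """Most negative value of the pRF profile obtained by averaging
    identical neuronal RFs whose centers scatter with the given SD."""
    rng = random.Random(seed)
    centers = [(rng.gauss(0, scatter_sd), rng.gauss(0, scatter_sd))
               for _ in range(n_neurons)]
    xs = [i * 0.1 for i in range(80)]  # sample the profile out to 8 deg
    profile = [sum(neuronal_rf((x - cx)**2 + cy**2) for cx, cy in centers)
               / n_neurons
               for x in xs]
    return min(profile)

low_scatter = surround_amplitude(0.2)   # surround clearly negative
high_scatter = surround_amplitude(3.0)  # surround washed out toward zero
```

Averaging over widely scattered centers blurs the fixed neuronal center–surround profile, so the negative lobe of the population profile shrinks toward zero, which is the effect plotted in Figure A1D.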
Acknowledgments
This work was supported by Netherlands Organization for Scientific Research (NWO) Vidi Grant 452-08-008 to SD. 
Commercial relationships: none. 
Corresponding author: Wietske Zuiderbaan. 
Email: w.zuiderbaan@uu.nl. 
Address: Heidelberglaan 2, 3584 CS Utrecht, The Netherlands. 
References
Albright T. D. Desimone R. (1987). Local precision of visuotopic organization in the middle temporal area (MT) of the macaque. Experimental Brain Research, 65, 582–592.
Allman J. Miezin F. McGuinness E. (1985). Stimulus specific responses from beyond the classical receptive field: Neurophysiological mechanisms for local–global comparisons in visual neurons. Annual Review of Neuroscience, 8, 407–430.
Amano K. Wandell B. A. Dumoulin S. O. (2009). Visual field maps, population receptive field sizes, and visual field coverage in the human MT+ complex. Journal of Neurophysiology, 102, 2704–2718.
Angelucci A. Levitt J. B. Walton E. J. Hupe J. M. Bullier J. Lund J. S. (2002). Circuits for local and global signal integration in primary visual cortex. Journal of Neuroscience, 22, 8633–8646.
Beck D. M. Kastner S. (2007). Stimulus similarity modulates competitive interactions in human visual cortex. Journal of Vision, 7(2):19, 1–12, http://www.journalofvision.org/content/7/2/19, doi:10.1167/7.2.19.
Birn R. M. Saad Z. S. Bandettini P. A. (2001). Spatial heterogeneity of the nonlinear dynamics in the FMRI BOLD response. Neuroimage, 14, 817–826.
Boynton G. M. Engel S. A. Glover G. H. Heeger D. J. (1996). Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience, 16, 4207–4221.
Brainard D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436.
Butler P. D. Silverstein S. M. Dakin S. C. (2008). Visual perception and its impairment in schizophrenia. Biological Psychiatry, 64, 40–47.
Carandini M. (2004). Receptive fields and suppressive fields in the early visual system. In Gazzaniga M. S. (Ed.), The cognitive neurosciences (3rd ed., pp. 312–326). Cambridge, MA: MIT Press.
Cavanaugh J. R. Bair W. Movshon J. A. (2002). Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88, 2530–2546.
Dakin S. Frith U. (2005). Vagaries of visual perception in autism. Neuron, 48, 497–507.
DeYoe E. A. Carman G. J. Bandettini P. Glickman S. Wieser J. Cox R. et al. (1996). Mapping striate and extrastriate visual areas in human cerebral cortex. Proceedings of the National Academy of Sciences of the United States of America, 93, 2382–2386.
Dow B. M. Snyder A. Z. Vautin R. G. Bauer R. (1981). Magnification factor and receptive field size in foveal striate cortex of the monkey. Experimental Brain Research, 44, 213–228.
Dumoulin S. O. Hess R. F. (2006). Modulation of V1 activity by shape: Image-statistics or shape-based perception? Journal of Neurophysiology, 95, 3654–3664.
Dumoulin S. O. Wandell B. A. (2008). Population receptive field estimates in human visual cortex. Neuroimage, 39, 647–660.
Engel S. A. Glover G. H. Wandell B. A. (1997). Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cerebral Cortex, 7, 181–192.
Fiorani M., Jr. Gattass R. Rosa M. G. Sousa A. P. (1989). Visual area MT in the Cebus monkey: Location, visuotopic organization, and variability. Journal of Comparative Neurology, 287, 98–118.
Fitzpatrick D. (2000). Seeing beyond the receptive field in primary visual cortex. Current Opinion in Neurobiology, 10, 438–443.
Gattass R. Gross C. G. (1981). Visual topography of striate projection zone (MT) in posterior superior temporal sulcus of the macaque. Journal of Neurophysiology, 46, 621–638.
Glover G. H. (1999). Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage, 9, 416–429.
Hansen K. A. David S. V. Gallant J. L. (2004). Parametric reverse correlation reveals spatial linearity of retinotopic human V1 BOLD response. Neuroimage, 23, 233–241.
Harel N. Lee S. P. Nagaoka T. Kim D. S. Kim S. G. (2002). Origin of negative blood oxygenation level-dependent fMRI signals. Journal of Cerebral Blood Flow & Metabolism, 22, 908–917.
Harrison L. M. Penny W. Ashburner J. Trujillo-Barreto N. Friston K. J. (2007). Diffusion-based spatial priors for imaging. Neuroimage, 38, 677–695.
Harvey B. M. Dumoulin S. O. (2011). The relationship between cortical magnification factor and population receptive field size in human visual cortex: Constancies in cortical architecture. Journal of Neuroscience, 31, 13604–13612.
Hetherington P. A. Swindale N. V. (1999). Receptive field and orientation scatter studied by tetrode recordings in cat area 17. Visual Neuroscience, 16, 637–652.
Hubel D. H. Wiesel T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195, 215–243.
Hubel D. H. Wiesel T. N. (1974). Uniformity of monkey striate cortex: A parallel relationship between field size, scatter, and magnification factor. Journal of Comparative Neurology, 158, 295–305.
Kastner S. De Weerd P. Pinsk M. A. Elizondo M. I. Desimone R. Ungerleider L. G. (2001). Modulation of sensory suppression: Implications for receptive field sizes in the human visual cortex. Journal of Neurophysiology, 86, 1398–1411.
Kay K. N. Naselaris T. Prenger R. J. Gallant J. L. (2008). Identifying natural images from human brain activity. Nature, 452, 352–355.
Levitt J. B. Lund J. S. (2002). The spatial extent over which neurons in macaque striate cortex pool visual signals. Visual Neuroscience, 19, 439–452.
Logothetis N. K. Wandell B. A. (2004). Interpreting the BOLD signal. Annual Review of Physiology, 66, 735–769.
Marin O. (2012). Interneuron dysfunction in psychiatric disorders. Nature Reviews Neuroscience, 13, 107–120.
McDonald J. S. Seymour K. J. Schira M. M. Spehar B. Clifford C. W. (2009). Orientation-specific contextual modulation of the fMRI BOLD response to luminance and chromatic gratings in human visual cortex. Vision Research, 49, 1397–1405.
Miyawaki Y. Uchida H. Yamashita O. Sato M. A. Morito Y. Tanabe H. C. et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 60, 915–929.
Murray S. O. Kersten D. Olshausen B. A. Schrater P. Woods D. L. (2002). Shape perception reduces activity in human primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 99, 15164–15169.
Nestares O. Heeger D. J. (2000). Robust multiresolution alignment of MRI brain volumes. Magnetic Resonance in Medicine, 43, 705–715.
Nurminen L. Kilpelainen M. Laurinen P. Vanni S. (2009). Area summation in human visual system: Psychophysics, fMRI, and modeling. Journal of Neurophysiology, 102, 2900–2909.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Petrov Y. Carandini M. McKee S. (2005). Two distinct mechanisms of suppression in human vision. Journal of Neuroscience, 25, 8704–8707.
Petrov Y. McKee S. P. (2006). The effect of spatial configuration on surround suppression of contrast sensitivity. Journal of Vision, 6(3):4, 224–238, http://www.journalofvision.org/content/6/3/4, doi:10.1167/6.3.4.
Press W. A. Brewer A. A. Dougherty R. F. Wade A. R. Wandell B. A. (2001). Visual areas and spatial summation in human visual cortex. Vision Research, 41, 1321–1332.
Sceniak M. P. Hawken M. J. Shapley R. (2001). Visual spatial characterization of macaque V1 neurons. Journal of Neurophysiology, 85, 1873–1887.
Sereno M. I. Dale A. M. Reppas J. B. Kwong K. K. Belliveau J. W. Brady T. J. et al. (1995). Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science, 268, 889–893.
Shmuel A. Augath M. Oeltermann A. Logothetis N. K. (2006). Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nature Neuroscience, 9, 569–577.
Shmuel A. Yacoub E. Pfeuffer J. Van de Moortele P. F. Adriany G. Hu X. et al. (2002). Sustained negative BOLD, blood flow and oxygen consumption response and its coupling to the positive response in the human brain. Neuron, 36, 1195–1210.
Smith A. T. Williams A. L. Singh K. D. (2004). Negative BOLD in the visual cortex: Evidence against blood stealing. Human Brain Mapping, 21, 213–220.
Smith S. M. Jenkinson M. Woolrich M. W. Beckmann C. F. Behrens T. E. Johansen-Berg H. et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage, 23, S208–S219.
Tajima S. Watanabe M. Imai C. Ueno K. Asamizuya T. Sun P. et al. (2010). Opposing effects of contextual surround in human early visual cortex revealed by functional magnetic resonance imaging with continuously modulated visual stimuli. Journal of Neuroscience, 30, 3264–3270.
Tootell R. B. Mendola J. D. Hadjikhani N. K. Liu A. K. Dale A. M. (1998). The representation of the ipsilateral visual field in human cerebral cortex. Proceedings of the National Academy of Sciences of the United States of America, 95, 818–824.
Victor J. D. Purpura K. Katz E. Mao B. (1994). Population encoding of spatial frequency, orientation, and color in macaque V1. Journal of Neurophysiology, 72, 2151–2166. [PubMed]
Wade A. R. Rowland J. (2010). Early suppressive mechanisms and the negative blood oxygenation level-dependent response in human visual cortex. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 30, 5008–5019. [CrossRef] [PubMed]
Wandell B. A. Chial S. Backus B. T. (2000). Visualization and measurement of the cortical surface. Journal of Cognitive Neuroscience, 12, 739–752. [CrossRef] [PubMed]
Wandell B. A. Dumoulin S. O. Brewer A. A. (2007). Visual field maps in human cortex. Neuron, 56, 366–383. [CrossRef] [PubMed]
Williams A. L. Singh K. D. Smith A. T. (2003). Surround modulation measured with functional MRI in the human visual cortex. Journal of Neurophysiology, 89, 525–533. [CrossRef] [PubMed]
Winawer J. Horiguchi H. Sayres R. A. Amano K. Wandell B. A. (2010). Mapping hV4 and ventral occipital cortex: The venous eclipse. Journal of Vision, 10(5):1, 1–22, http://www.journalofvision.org/content/10/5/1, doi:10.1167/10.5.1. [PubMed] [Article] [CrossRef] [PubMed]
Yoon J. H. Rokem A. S. Silver M. A. Minzenberg M. J. Ursu S. Ragland J. D. et al. (2009). Diminished orientation-specific surround suppression of visual processing in schizophrenia. Schizophrenia Bulletin, 35, 1078–1084. [CrossRef] [PubMed]
Yushkevich P. A. Piven J. Hazlett H. C. Smith R. G. Ho S. Gee J. C. et al. (2006). User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage, 31, 1116–1128. [CrossRef] [PubMed]
Zenger-Landolt B. Heeger D. J. (2003). Response suppression in v1 agrees with psychophysics of surround masking. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23, 6884–6893. [PubMed]
Figure 1
 
A schematic illustration of the stimulus sequence. The bar apertures revealed a high-contrast checkerboard (100%). The checkerboard rows moved in opposite directions along the orientation of the bar. The bars moved through the visual field with 8 different orientation–motion configurations as indicated by the arrows (arrows not present in actual stimulus). The total stimulus sequence lasted 360 s.
Figure 2
 
Flowchart of the fitting procedure for the two pRF models. The left panels illustrate the conventional one-Gaussian (OG) pRF model; the right panels illustrate the difference-of-Gaussians (DoG) pRF model. The middle panels show the analysis input shared by both models: the stimulus aperture and the fMRI data. Convolving a pRF model with the stimulus sequence predicts the fMRI time series, and the pRF model parameters are estimated by minimizing the sum of squared errors between the predicted and the measured time series.
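The prediction step of this procedure can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the pRF is modeled as a difference of two circular Gaussians, its overlap with each binary stimulus-aperture frame gives a predicted neural response, and convolution with a hemodynamic response function (a placeholder `hrf` array here) yields the predicted fMRI time series. All parameter names (`sigma1`, `beta1`, etc.) are hypothetical.

```python
import numpy as np

def gaussian_2d(x, y, x0, y0, sigma):
    """Circular symmetric 2D Gaussian centered at (x0, y0)."""
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def dog_prf(x, y, x0, y0, sigma1, sigma2, beta1, beta2):
    """DoG pRF: excitatory center minus suppressive surround.
    A center-surround profile requires sigma2 > sigma1."""
    return (beta1 * gaussian_2d(x, y, x0, y0, sigma1)
            - beta2 * gaussian_2d(x, y, x0, y0, sigma2))

def predict_timeseries(prf, apertures, hrf):
    """Overlap of the pRF with each aperture frame, convolved with an HRF.
    apertures: (n_timepoints, ny, nx) binary masks of the bar stimulus."""
    neural = np.tensordot(apertures, prf, axes=([1, 2], [0, 1]))
    return np.convolve(neural, hrf)[: len(neural)]
```

A fitting routine would then search over `(x0, y0, sigma1, sigma2, beta1, beta2)` to minimize the sum of squared errors between this prediction and the measured time series.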
Figure 3
 
Example of the model fits for a cortical location (voxel) in V1 for both pRF models. The gray dotted lines indicate the fMRI time series; the black solid lines show the model fits. Gray regions indicate mean-luminance presentations, whereas the dark gray regions indicate the parts of the time series used to estimate the baseline (0). The circular patches below indicate the visual field (gray), the estimated pRF for the voxel (red for the positive part and blue for the negative part of the pRF), and the stimulus at a given time. (A) The fit of the OG pRF model. The OG model fits the positive parts of the time series accurately but fails to follow the negative parts, as indicated by the gray arrows. (B) The fit of the DoG pRF model. The improvement is clearest at the negative parts of the fMRI signal: where the OG model cannot produce a correct prediction, the DoG model, which incorporates a suppressive surround, predicts the time series accurately. The models explain 49% and 72% of the variance, respectively. (C) The distribution of the residuals from all voxels in V1 of the same subject. Residuals are computed by subtracting the predicted time series from the measured time series at every time point. The gray line shows the residuals of the OG model; the black line shows those of the DoG model. The arrows indicate the means of the two distributions: −0.23% for the OG model and −0.07% for the DoG model. Where the OG model's residuals are shifted toward negative values, the DoG model's residuals are distributed more evenly around zero.
Figure 4
 
The difference in variance explained between the DoG model and the OG model, shown on inflated cortical surfaces. We show the left occipital lobe of all four subjects from a lateral (left panels) and a medial (right panels) perspective. The largest difference in variance explained is found in visual area V1 and appears to decrease in later visual areas. Similar results were obtained in the right hemisphere (not shown). Only locations where the OG model explained more than 40% of the variance in the time series are shown.
Figure 5
 
The variance explained for both pRF models, normalized to the OG model. In V1, the DoG model explains ∼14% more variance than the OG model. This improvement decreases in later visual areas: in V2 and V3, it drops to ∼12%, and in V3a, hV4, and LO-1, it is smaller still (∼1%). Data are averaged across all subjects; error bars reflect 1 standard error of the mean.
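The variance-explained measure used to compare the models (Figures 4, 5, and 8) can be computed as one minus the ratio of residual to total variance. This sketch assumes that standard definition; it is not a line from the authors' code.

```python
import numpy as np

def variance_explained(measured, predicted):
    """Fraction of time-series variance captured by the model prediction:
    1 - RSS/TSS (the coefficient of determination)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rss = np.sum((measured - predicted) ** 2)   # residual sum of squares
    tss = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
    return 1.0 - rss / tss
```

A perfect prediction yields 1; a prediction no better than the mean of the data yields 0; predictions worse than the mean yield negative values.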
Figure 6
 
The relationship between eccentricity and pRF size in visual field maps V1–V3 for both pRF models. Data are averaged across all subjects; error bars reflect the standard error of the mean. The lines are linear regression fits to the data. The pRF size increases with eccentricity and from V1 to V3, and is similar between the two pRF models.
Figure 7
 
The relationship between eccentricity and the pRF surround in visual field maps V1–V3. (A) The relationship between eccentricity and the size of the surround. As a measure of surround size, we take the distance between the points where the pRF reaches its minimum amplitude. Data are averaged across all subjects; error bars represent 1 standard error of the mean. The size of the surround increases with eccentricity and over visual areas V1–V3. (B) The relationship between eccentricity and the suppression index. The suppression index is calculated by dividing the volumes, within the stimulus range, of the two estimated Gaussians that make up the pRF (Equation 10). Data are averaged across all subjects; error bars represent 1 standard error of the mean. The suppression index decreases over visual areas V1–V3. In visual areas V1 and V2, the suppression index decreases with eccentricity, whereas in V3 it increases with eccentricity.
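Both surround measures can be computed numerically from fitted DoG parameters. The sketch below is an assumption about how these quantities might be implemented (Equation 10 itself is not reproduced here): surround size as twice the radius at which the radial DoG profile reaches its minimum, and the suppression index as the ratio of surround to center Gaussian volume within the circular stimulus range. Parameter names (`sigma1`, `beta1`, etc.) are hypothetical.

```python
import numpy as np

def dog_profile(r, sigma1, sigma2, beta1, beta2):
    """Radial profile of a DoG pRF: excitatory center minus surround."""
    return (beta1 * np.exp(-r**2 / (2 * sigma1**2))
            - beta2 * np.exp(-r**2 / (2 * sigma2**2)))

def surround_size(sigma1, sigma2, beta1, beta2, r_max=20.0):
    """Distance between the two minima of the pRF along a line through
    its center, located numerically on a fine radial grid."""
    r = np.linspace(0.0, r_max, 20001)
    r_min = r[np.argmin(dog_profile(r, sigma1, sigma2, beta1, beta2))]
    return 2.0 * r_min

def suppression_index(sigma1, sigma2, beta1, beta2, stim_radius):
    """Ratio of surround to center Gaussian volume, each integrated
    over the circular stimulus range."""
    r = np.linspace(0.0, stim_radius, 20001)
    dr = r[1] - r[0]
    ring = 2.0 * np.pi * r * dr   # area of each thin annulus
    vol_center = np.sum(beta1 * np.exp(-r**2 / (2 * sigma1**2)) * ring)
    vol_surround = np.sum(beta2 * np.exp(-r**2 / (2 * sigma2**2)) * ring)
    return vol_surround / vol_center
```

For a stimulus radius much larger than both sigmas, the index reduces to the analytic ratio of full Gaussian volumes, beta2·sigma2² / (beta1·sigma1²).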
Figure 8
 
The variance explained of the model fits in the two stages of the cross-validation procedure: the training and validation stages. The data were split into two subsets. In the training stage, variance explained is computed on the subset the model was fitted to; in the validation stage, the same model parameters are evaluated on the other subset. Error bars represent the 95% confidence interval. This result indicates that the DoG model's improved fit captures systematic time-series characteristics missed by the OG model, rather than merely reflecting its larger number of free parameters.
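The two-stage procedure can be sketched as a generic split-half routine. Here `fit` and `predict` are placeholders standing in for the actual pRF fitting and prediction code, not the authors' functions.

```python
import numpy as np

def split_half_cv(time_series_runs, fit, predict):
    """Two-fold cross-validation of a model. `fit` estimates parameters
    from a time series; `predict` generates a prediction from them.
    Returns variance explained on the training and validation halves."""
    runs = np.asarray(time_series_runs)
    half = len(runs) // 2
    train = runs[:half].mean(axis=0)   # average of the training scans
    valid = runs[half:].mean(axis=0)   # average of the held-out scans
    pred = predict(fit(train))

    def ve(data):
        rss = np.sum((data - pred) ** 2)
        return 1.0 - rss / np.sum((data - data.mean()) ** 2)

    return ve(train), ve(valid)
```

An overfitting model scores perfectly on the training half but drops on the validation half; a model that captures real signal structure keeps most of its variance explained on held-out data, which is the pattern Figure 8 shows for the DoG model.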
Figure A1
 
Simulations estimating the pRF from a population of individual neuronal receptive fields (RFs). The pRFs are calculated from 100,000 individual neuronal RFs using Equation A1. All individual RFs have an identical center–surround configuration but differ in position. Panels (A) and (B) illustrate a few individual RFs with low and high position variance, respectively; the actual simulations used 100,000 RFs. (C) The individual RFs are summed to give the pRF. For the neuronal population with low position variance (A), the center–surround configuration of the neuronal RFs is preserved in the pRF. For the population with high position scatter (B), the center–surround configuration is lost at the pRF level. (D) The relationship between the position variance of a neuronal population and the response strength of the suppressive surround of the estimated pRF (arrow in (C)). For a population with high position variance, the strength of the suppressive surround approaches zero. These simulations suggest that center–surround configurations cannot be measured in neuronal populations with high position scatter.
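The simulation can be reproduced in outline. This is a sketch, not the authors' code: Equation A1 is paraphrased as a plain sum, the RF shape and scatter parameters are illustrative, and far fewer than 100,000 RFs are used.

```python
import numpy as np

def simulated_prf(n_rfs, position_sd, sigma_c=1.0, sigma_s=3.0,
                  beta_s=0.1, grid_half=15.0, n_grid=201, seed=0):
    """Sum identically shaped center-surround RFs whose centers are
    scattered with standard deviation `position_sd`, then normalize."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-grid_half, grid_half, n_grid)
    xx, yy = np.meshgrid(x, x)
    prf = np.zeros_like(xx)
    for cx, cy in rng.normal(0.0, position_sd, size=(n_rfs, 2)):
        r2 = (xx - cx) ** 2 + (yy - cy) ** 2
        prf += (np.exp(-r2 / (2 * sigma_c**2))
                - beta_s * np.exp(-r2 / (2 * sigma_s**2)))
    return prf / n_rfs

def surround_strength(prf):
    """Depth of the suppressive surround: magnitude of the pRF minimum."""
    return -prf.min()
```

With low position scatter, the summed pRF retains the deep negative lobe of the individual RFs; with scatter large relative to the RF size, the positive and negative lobes of different neurons overlap and the surround washes out, as in panel (D).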
Supplementary PDF