December 2014, Volume 14, Issue 14
Visual field asymmetries in visual evoked responses
Donald J. Hagler
Journal of Vision 2014;14(14):13. https://doi.org/10.1167/14.14.13
© ARVO (1962–2015); The Authors (2016–present)
Abstract

Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP.

Introduction
The perception of visual stimuli varies as a function of visual field location (Karim & Kojima, 2010). Some asymmetries are obvious, such as increased visual acuity in the center relative to the periphery (De Valois & De Valois, 1988; Duncan & Boynton, 2003), due to the greater concentration of photoreceptors in the fovea (Curcio, Sloan, Kalina, & Hendrickson, 1990). More subtly, visual evoked responses (VERs) to stimuli outside but near the fovea, in the perifoveal region, are delayed relative to more peripheral stimuli (Baseler & Sutter, 1997; Kremlacek, Kuba, Chlubnova, & Kubova, 2004). Asymmetry between the upper and lower visual fields has been called the “lower field advantage,” manifested as faster behavioral responses, greater sensitivity, and shorter latency and larger amplitude VERs for lower visual field stimuli (Fioretto et al., 1995; Kremlacek et al., 2004; Lehmann & Skrandies, 1979; Levine & McAnany, 2005; McAnany & Levine, 2007; Portin, Vanni, Virsu, & Hari, 1999; Skrandies, 1987). 
The sources of these subtle variations are poorly understood, particularly in terms of which visual areas exhibit differences in visual processing. The earlier response to peripheral stimuli has been suggested to be related to the greater proportion of magnocellular (magno) input to primary visual area V1 in the periphery (Baseler & Sutter, 1997; Malpeli, Lee, & Baker, 1996), as the magno pathway response is thought to precede that of the parvocellular (parvo) pathway by ∼20 ms (Bullier, Schall, & Morel, 1996; Nowak, Munk, Girard, & Bullier, 1995; Schmolesky et al., 1998). It is unknown, however, whether magno/parvo distinctions persist in the population responses of V1, V2, and beyond (Lachica, Beck, & Casagrande, 1992; Martin, 1992; Merigan & Maunsell, 1993; Sincich & Horton, 2005; Skottun & Skoyles, 2008). Thus, it is unclear which visual areas contribute to VER latency differences. Identifying these areas may provide insight into the separation or integration of the magno/parvo pathways. 
Differences between responses to upper and lower field stimuli could be related to slightly larger lower field representations in the retina and multiple visual areas (Connolly & Van Essen, 1984; Curcio et al., 1990; Maunsell & Van Essen, 1987; Perry & Cowey, 1985; Tootell, Switkes, Silverman, & Hamilton, 1988; Van Essen, Newsome, & Maunsell, 1984). Learned differences, perhaps in higher level cortical areas related to attentional control, are also possible (He, Cavanagh, & Intriligator, 1996; Intriligator & Cavanagh, 2001; Karim & Kojima, 2010). A more trivial explanation, however, for the differences in measured response amplitudes is that the upper and lower field representations of V1 are located on the ventral and dorsal banks of the calcarine sulcus, respectively. Furthermore, V2 and V3 are each split into two, noncontiguous subareas, also located ventrally and dorsally. Because of this arrangement, the ventral, upper field subareas are deeper in the brain, raising the possibility that lower field responses measured with magnetoencephalography (MEG) or electroencephalography (EEG) have larger amplitudes because of their closer proximity to the sensors. 
Previous investigations of visual response asymmetries have been limited by an inability to adequately resolve the spatiotemporal patterns of activity in cortical visual areas. Functional magnetic resonance imaging (fMRI) allows topographic mapping of the several retinotopically arranged cortical visual areas, but because its signal is based on the slow hemodynamic response, it cannot offer meaningful information about the temporal dynamics of visual processing. In contrast, MEG and EEG have excellent temporal resolution, but the ill-posed nature of the inverse problem makes it extremely difficult to reliably estimate the time course of activity for individual visual areas. The close proximity of the several visual areas in occipital cortex typically results in a high degree of crosstalk between source estimates for these areas (Auranen et al., 2009; Bonmassar et al., 2001; Cottereau, McKee, Ales, & Norcia, 2012; Dale et al., 2000; Di Russo et al., 2005; Hagler et al., 2009; Kajihara et al., 2004; Liu, Belliveau, & Dale, 1998; Liu, Dale, & Belliveau, 2002; Moradi et al., 2003; Vanni et al., 2004; Yoshioka et al., 2008). 
Retinotopy-constrained source estimation (RCSE) is a recently developed method for estimating time courses of activation in individual visual areas that greatly reduces crosstalk (Hagler et al., 2009). RCSE uses fMRI retinotopic mapping to construct multidipole models simultaneously constrained by multiple stimulus locations (Ales, Carney, & Klein, 2010; Hagler, 2014; Hagler & Dale, 2013; Hagler et al., 2009). Cortical folding patterns in individual subjects determine for each visual area a distinct pattern of dipole orientation as a function of stimulus location (Ales et al., 2010; Hagler et al., 2009; Slotnick, Klein, Carney, Sutter, & Dastmalchi, 1999). Because of this, RCSE provides better separation between individual visual areas compared to conventional equivalent current or distributed dipole approaches. 
To better understand how the timing and amplitude of VERs in individual visual areas vary as a function of stimulus location, MEG, fMRI, and RCSE were used to probe the responses of V1, V2, V3, and V3A for sets of stimuli from different parts of the visual field. Because of the known asymmetries described previously, comparisons were made between perifoveal and peripheral stimuli as well as upper and lower field stimuli. Comparisons between left and right hemifield stimuli were included as a negative control. Besides investigating the basic properties of visual processing in early visual areas, these tests are also important for further validation of the RCSE approach, which assumes that VERs within a given visual area are identical, or at least highly similar, for all stimulus locations. This study is the first to use RCSE to estimate responses for V3A; one reason for its inclusion was that unlike V1, V2, and V3, its upper and lower field representations are not substantially different in terms of proximity to sensors. 
Methods
Participants
Nine right-handed, healthy adults with normal vision were included in this study (seven women, mean age: 25.3 ± 2.8 SD, age range: 22–30). One additional subject (female) was excluded because fMRI retinotopy data were extremely noisy and therefore unusable. Data for eight of these subjects were included in a previous study (Hagler, 2014). The experimental protocol was approved by the University of California, San Diego institutional review board, and informed consent was obtained from all participants. 
Data collection
MEG data were collected with an Elekta/Neuromag Vectorview 306-channel whole-head neuromagnetometer (Elekta, Stockholm, Sweden), with electrooculogram electrodes to monitor blinks and eye movements. The sampling frequency was at least 601 Hz, with an antialiasing low-pass filter of 200 Hz. For two subjects, the sampling frequency was 1000 Hz, with a low-pass filter of 330 Hz; otherwise, conditions were identical. The relative position of the head was determined with head position indicator (HPI) coils. Locations of HPI coils on the scalp, along with nasion, preauricular points, and at least 100 additional scalp locations were measured using a FastTrack 3-D digitizer (Polhemus, Colchester, VT). Visual stimuli were presented with a three-mirror DLP projector. The maximum visual angle (top to bottom of displayable area) was fixed at 25°. A laser and light sensor finger-lift device (Elekta) detected behavioral responses. 
MRI data, including T1-weighted structural images (1-mm isotropic voxels, repetition time [TR] = 10.5 ms, flip angle = 15°, bandwidth = 20.83 kHz, 256 × 256 matrix, 180 sagittal slices) and T2*-weighted, echo-planar imaging (EPI) functional images (2.5-mm isotropic voxels, TR = 2500 ms, echo time [TE] = 30 ms, flip angle = 90°, bandwidth = 62.5 kHz, 32 axial slices, 96 × 96 matrix, field of view [FOV] = 240 mm, fractional k-space acquisition, with fat saturation pulse) were collected using a GE 3T scanner with a GE eight-channel head coil (GE Healthcare, Little Chalfont, Buckinghamshire, UK). For each gradient-echo EPI scan, a pair of spin-echo EPI images with opposing phase-encode polarities was collected for estimating the B0 distortion field (TR = 10,000 ms, TE = 90 ms, identical slice prescription as gradient-echo images). In some subjects, dental impression bite bars were used to reduce head motion. A standard video projector with a custom zoom lens projected images onto a plastic screen inside the bore of the magnet, which subjects viewed via a mirror reflection. The maximum visual angle, which ranged from 26° to 29°, was measured for each session and used as input parameters in fMRI retinotopic map fitting and MEG dipole modeling; this allowed for consistent mapping between MEG stimuli and the cortical surface for each subject. Behavioral responses were recorded using an MRI-compatible fiber-optic button box (Current Designs, Philadelphia, PA). 
Data processing
Details of MEG and MRI/fMRI processing were as described previously (Hagler, 2014). MEG data were band-pass filtered between 0.2 and 120 Hz with a 60-Hz notch filter, and trials were linearly detrended and baseline adjusted before averaging, using a 100-ms prestimulus baseline and 350-ms poststimulus response period. Trials containing artifacts such as eye movements or blinks, and trials within 700 ms of a finger-lift response, were excluded. Very noisy or flat channels, and all magnetometers, were excluded from analysis. 
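The preprocessing steps above (band-pass and notch filtering, linear detrending, and prestimulus baseline adjustment) can be sketched as follows. The filter parameters (0.2–120 Hz band-pass, 60-Hz notch, 100-ms baseline) come from the text; the filter order, epoch grid, and synthetic trials are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, detrend

def preprocess_trial(trial, fs=601.0):
    """Band-pass 0.2-120 Hz, 60-Hz notch, linear detrend, and baseline
    adjustment using a 100-ms prestimulus window. `trial` is one channel's
    epoch covering roughly -100 to 350 ms around stimulus onset."""
    # Band-pass filter, applied forward-backward for zero phase shift
    b, a = butter(4, [0.2, 120.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, trial)
    # 60-Hz notch filter for power-line noise
    bn, an = iirnotch(60.0, Q=30.0, fs=fs)
    x = filtfilt(bn, an, x)
    # Linear detrend, then subtract the mean of the prestimulus baseline
    x = detrend(x, type="linear")
    n_baseline = int(round(0.100 * fs))   # first 100 ms are prestimulus
    return x - x[:n_baseline].mean()

# Average over (synthetic) artifact-free trials
fs = 601.0
t = np.arange(-0.100, 0.350, 1.0 / fs)
rng = np.random.default_rng(0)
trials = [np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
          for _ in range(50)]
evoked = np.mean([preprocess_trial(tr, fs) for tr in trials], axis=0)
```

By construction, the baseline window of each preprocessed trial averages to zero, so the averaged evoked response is baseline-centered as well.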
Structural MRI data were corrected for gradient nonlinearity distortions by applying a predefined, scanner-specific, nonlinear transformation (Jovicich et al., 2006). Two or more structural MRI volumes for each subject were registered, averaged, and rigidly resampled into alignment with an atlas brain. The FreeSurfer software package version 4.5.0 (http://surfer.nmr.mgh.harvard.edu) was used to create cortical surface models from structural MRI images (Dale, Fischl, & Sereno, 1999; Dale & Sereno, 1993; Fischl, Liu, & Dale, 2001; Fischl et al., 2002; Fischl, Sereno, & Dale, 1999; Segonne et al., 2004; Segonne, Pacheco, & Fischl, 2007). Manual editing of the white-matter segmentation was performed to correct surface defects. 
Functional MRI data were corrected for B0-inhomogeneity distortions using the reversing gradient method (Chang & Fitzpatrick, 1992; D. Holland, Kuperman, & Dale, 2010; Morgan, Bowtell, McIntyre, & Worthington, 2004). Displacement fields estimated from paired spin-echo test images with opposite phase-encode polarity were applied to each frame of the motion-corrected, gradient-echo EPI fMRI images. Slice timing differences and head motion were corrected using AFNI's 3dTshift and 3dvolreg (Cox, 1996), and gradient nonlinearity distortions were corrected for each frame (Jovicich et al., 2006). Spin-echo T2-weighted images were registered to T1-weighted images using mutual information (Wells, Viola, Atsumi, Nakajima, & Kikinis, 1996), with coarse prealignment based on within-modality registration to atlas brains. 
Retinotopic mapping
Retinotopic maps on the cortical surface were obtained from fMRI data using methods described previously (Hagler et al., 2009; Sereno et al., 1995). Stimuli were portions of a black-and-white dartboard pattern reversing contrast at 8 Hz. For polar angle mapping, a 12° polar angle wide wedge revolved about a central fixation cross; and for eccentricity mapping, a thin ring expanded or contracted periodically. For five subjects, a 32-s cycle was used with 10 cycles per scan, and for four subjects, a 64-s cycle was used with five cycles per scan. For each subject, four polar angle scans (two clockwise and two counterclockwise) and two eccentricity scans (one outward, one inward) were collected in a single MRI session. To maintain stable alertness and maximize attention, subjects performed a peripheral detection task, in which they pressed a button upon rare presentation (∼5–10-s interstimulus interval) of a gray circle at pseudorandom locations occluding the flickering dartboard pattern (Bressler & Silver, 2010). 
Time series for each voxel were normalized by mean intensity, and drift and head motion artifacts were removed via linear regression using a quadratic polynomial and the motion estimates from 3dvolreg. Fourier transforms were computed to estimate the amplitude and phase at the stimulus frequency, with phase corresponding to the preferred stimulus location for a given voxel. Real and imaginary Fourier components were averaged across scans, with phases for clockwise polar angle and contracting eccentricity scans reversed before averaging. Phase delays of ∼1 s were subtracted from the Fourier components before averaging, to account for some of the hemodynamic delay, and the combination of opposite-direction scans removed residual bias due to spatially varying hemodynamic delays (Hagler, Riecke, & Sereno, 2007; Hagler & Sereno, 2006; Warnking et al., 2002). Averaged Fourier components were sampled onto the cortical surface at a distance of 1 mm from the gray/white boundary. 
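The phase-encoded analysis above can be sketched for a single voxel as follows. Function names and the synthetic time series are illustrative; the phase reversal for clockwise/contracting scans is implemented as complex conjugation before averaging, and the hemodynamic-delay correction is omitted.

```python
import numpy as np

def stimulus_frequency_component(ts, n_cycles):
    """Complex Fourier component at the stimulus frequency for one voxel.
    With `n_cycles` stimulus cycles per scan, the signal of interest
    sits in FFT bin `n_cycles`."""
    ts = ts / ts.mean() - 1.0             # normalize by mean intensity
    return np.fft.fft(ts)[n_cycles] / ts.size

def average_phase(scans_ccw, scans_cw, n_cycles):
    """Average real and imaginary components across scans, reversing the
    phase (complex conjugate) of clockwise/contracting scans first."""
    comps = [stimulus_frequency_component(s, n_cycles) for s in scans_ccw]
    comps += [np.conj(stimulus_frequency_component(s, n_cycles))
              for s in scans_cw]
    avg = np.mean(comps)
    return np.abs(avg), np.angle(avg)     # amplitude, preferred phase

# Synthetic voxel responding at the stimulus frequency:
# 10 cycles per 128-frame scan, opposite phase sign per scan direction
n, n_cycles = 128, 10
t = np.arange(n)
ccw = 100 + np.cos(2 * np.pi * n_cycles * t / n - 1.0)
cw = 100 + np.cos(2 * np.pi * n_cycles * t / n + 1.0)
amp, ph = average_phase([ccw], [cw], n_cycles)
```

The recovered phase corresponds to the voxel's preferred stimulus position within the cycle; averaging the conjugated opposite-direction scan keeps the response phase while canceling direction-dependent delays.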
Retinotopic map fitting
Nonlinear optimization methods were used to fit conjoined template maps including V1, V2, and V3 to polar angle and eccentricity mapping data (Figure 1A; Dougherty et al., 2003; Hagler & Dale, 2013). Independent map fits were performed for V3A using templates with single, unified hemifield representations (Figure 1B). Manually defined regions and a coarse fitting step determined the overall shape and location of each map, and a fine fitting step smoothly deformed the templates to best match the fMRI retinotopy data. Regions of interest (ROIs) were manually drawn for each cortical hemisphere of each subject to encompass all of V1, V2, and V3, up to the maximum eccentricity measured with fMRI. Separate ROIs were drawn for V3A, which is located in dorsal occipital cortex, adjacent to dorsal V3, and is oriented so that the lower-to-upper field axis runs approximately posterior to anterior (DeYoe et al., 1996; Sereno et al., 1995; Tootell et al., 1997; Wandell, Dumoulin, & Brewer, 2007). V3A was assumed to be located superiorly, as distinguished from the contiguous, but more laterally located, V3B (Larsson & Heeger, 2006; Press, Brewer, Dougherty, Wade, & Wandell, 2001; Swisher, Halko, Merabet, McMains, & Somers, 2007; Wandell et al., 2007). 
Figure 1
 
Retinotopic map fitting. (A) To map the retinotopy of V1, V2, and V3, a six-segment conjoined template was smoothly deformed to match the data within a manually defined region of interest (ROI). (B) Separate ROIs were used to fit a single hemifield template to V3A.
Template maps were initialized as rectangular grids, and each grid node was assigned a preferred polar angle and eccentricity and a unique area code, corresponding to the lower or upper field portions of V1, V2, V3, and V3A. The coarse fitting step for V1-V2-V3 used 27 parameters to determine the shape of the template map to best fit the data; a polynomial curve was used to allow flexibility in the overall shape, and the width and length of upper and lower field subareas were allowed to vary independently. Without the need for this extra flexibility, the coarse fitting step for V3A used seven parameters. Coarse fitting was performed using constrained nonlinear search (Matlab's fmincon), restarted 200 times with randomly chosen starting parameters. In addition to the fit to the data and a cost for exceeding the manually defined ROI (Hagler & Dale, 2013), a vacancy cost was used to penalize template maps that did not fill the ROI, preventing map fits that artifactually avoid regions of noisy data. Because eccentricity maps of V3A are difficult to resolve in many subjects, eccentricity data were not used to guide map fitting for V3A (Larsson & Heeger, 2006). After the coarse fitting step, a fine-scale fitting step using gradient descent smoothly deformed the template to better match the data. 
Stimuli for MEG sessions
Monochromatic pattern stimuli (Figure 2A) were presented one at a time for 100 ms at 36 visual field locations, with three eccentricities (3.6°, 5.3°, and 8.2° visual angle) and 12 polar angles (22° polar angle wide, contiguous, nonoverlapping portions of the visual field, excluding 24° polar angle centered on each horizontal or vertical meridian). Luminance contrast was varied, with 15%, 71%, and 95% Michelson contrast (Supplementary Figure S1). The interval between successive stimulus onsets was fixed at 117 ms. Ten percent of trials were “null” events in which no stimulus was presented. The average of the null events, which reflects the average, ongoing activity that overlaps with the responses, was subtracted from the other stimulus condition averages (Hagler & Dale, 2013; Hagler et al., 2009). Because of the large number of stimulus locations, the average presentation frequency for a given stimulus location was <0.22 Hz, much slower than that required to cause attenuation (Chen et al., 2005). Subjects made a finger-lift response upon rare dimming of the central fixation cross (approximately once every 5–10 s). In a single MEG session, which was divided into as many as twenty 150-s blocks separated by rest periods of 30 s or more, there were up to ∼16,000 total trials, divided approximately equally across all stimulus locations and contrast levels (∼150–200 trials per condition). 
Figure 2
 
Retinotopy-constrained source estimation. Stimuli at 36 visual field locations (A) were used to measure the varying amplitudes and polarities of MEG sensor waveforms (B) and calculate consensus estimates across stimulus locations for a single subject (C) and across both stimulus locations and subjects (D). Peak amplitudes (E) and peak latencies (F) derived from bootstrap resampled group average waveforms. Error bars and shaded regions correspond to standard error estimated from bootstrap resampling.
Retinotopy-constrained source estimation (RCSE)
RCSE was used to estimate the time courses of VERs in V1, V2, V3, and V3A using procedures explained in detail previously (Hagler, 2014; Hagler & Dale, 2013). These are briefly described below in three main parts: forward solution and inverse calculations, group-constrained RCSE, and optimization of cortical patch locations constrained by prior (for a flowchart, see Supplementary Figure S2). The retinotopy-constrained forward solution specifies the expected sensor amplitudes due to activity in each visual area in response to the various stimulus locations, and the subsequently derived inverse solution is used to calculate source time courses from averaged sensor waveforms. Group-constrained RCSE is an extension of this method that produces consensus source estimates using a robust estimation approach to reduce the influence of outliers. The group-constrained solutions were then used as prior estimates in an optimization procedure for each individual subject, in which cortical dipole patches were displaced across the cortical surface in order to compensate for slight inaccuracies in the initial placements based on retinotopic mapping. All results reported in this article were derived from these optimized forward solutions. 
Forward solution and inverse calculations
Retinotopy-constrained forward and inverse matrices were calculated using weighted cortical patches for each stimulus location derived from retinotopic map fits (Hagler, 2014; Hagler & Dale, 2013). For more detailed descriptions of the procedures involved, as well as equations, the reader is directed to those previous publications. Gain matrices were calculated for dipoles oriented perpendicularly to the cortical surface using the boundary element method (Oostendorp & van Oosterom, 1989), with a single shell representing the inner skull boundary approximated by filling and dilating FreeSurfer's automated brain segmentation (Fischl et al., 2002). Brain conductivity was assumed to be 0.3 S/m. MRI and MEG reference frames were manually registered using a graphical interface (Matlab), 100 or more digitized locations on the scalp, and a representation of the outer scalp surface from FreeSurfer's watershed program. For each MEG stimulus location, weighting factors for every cortical surface vertex (∼0.8 mm intervertex distance) in V1, V2, V3, and V3A were calculated based on the preferred stimulus location derived from the fMRI retinotopy template fit. The extent of cortical activation for each stimulus was defined by stimulus sizes and realistic receptive field size estimates and limited by the visual area boundaries. Values of 1.01, 1.12, 1.86, and 3.12 (degrees visual angle) were used for V1, V2, V3, and V3A respectively, with slopes as a function of eccentricity of 0.15, 0.17, 0.27, and 0.35 (degrees visual angle divided by eccentricity degrees visual angle), derived from published group averages of receptive field sizes estimated from fMRI data (Kay, Winawer, Mezer, & Wandell, 2013). Vertex weights were normalized so that the sum across visual field locations equaled 1, and values less than 0.01 times the maximum for each cortical location were set to 0. 
Vertices in ipsilateral cortex were allowed (e.g., near vertical meridians), as was crossover between the upper and lower field subareas (e.g., near horizontal meridians; Hagler & Dale, 2013). 
Gain matrices and cortical patch weighting factors for a given visual area were used to calculate gain vectors for each stimulus location. Retinotopy-constrained forward matrices were constructed by arranging the gain vectors for multiple stimulus locations into a single column for each visual area (with length equal to number of sensors times number of stimulus locations), consistent with the assumption that a given visual area has the same evoked response regardless of stimulus location (Ales et al., 2010; Hagler et al., 2009; Slotnick et al., 1999). A regularized pseudoinverse with an identity matrix as the sensor noise covariance was used to calculate a time-invariant inverse matrix, which was used to calculate separate source estimates for each contrast level. To calculate normalized residual error, the across-sensor variance of the residual error was divided by the maximum variance of the data over time. 
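A minimal sketch of the stacking and inversion described above, with random gain vectors standing in for the boundary-element forward model and cortical patch weighting. The dimensions follow the text (∼204 gradiometers, 36 stimulus locations, four visual areas); the regularization constant and synthetic source time courses are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_locs, n_areas, n_times = 204, 36, 4, 100

# Gain vector for each (visual area, stimulus location): the sensor
# pattern produced by unit activity in that area's cortical patch
gain = rng.standard_normal((n_areas, n_locs, n_sensors))

# Retinotopy-constrained forward matrix: one column per visual area,
# with the gain vectors for all stimulus locations stacked into that
# column (length = n_sensors * n_locs), reflecting the assumption that
# a visual area has the same response for every stimulus location
F = np.stack([gain[a].reshape(-1) for a in range(n_areas)], axis=1)

# Regularized pseudoinverse with identity sensor noise covariance:
# W = (F'F + lambda*I)^-1 F'
lam = 0.1 * np.trace(F.T @ F) / n_areas
W = np.linalg.solve(F.T @ F + lam * np.eye(n_areas), F.T)

# Simulate stacked sensor data from known source time courses, recover them
S_true = rng.standard_normal((n_areas, n_times))
Y = F @ S_true
S_hat = W @ Y

# Normalized residual error: residual variance relative to the
# maximum variance of the data over time
resid = Y - F @ S_hat
nre = resid.var(axis=0).mean() / Y.var(axis=0).max()
```

Because each area contributes a distinct orientation pattern across stimulus locations, the four columns of F are nearly orthogonal and the recovered time courses closely track the true ones despite the shared inverse.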
Group-constrained RCSE
Group-constrained RCSE solutions were calculated using the retinotopy-constrained forward matrices and MEG data from all subjects (Hagler, 2014). The group retinotopy-constrained forward matrix was constructed by concatenating the forward matrices from each subject into a single matrix, with a column for each of the four visual areas and ∼66,000 rows for ∼204 gradiometers (excluding bad channels), 36 stimulus locations, and nine subjects. The resulting inverse matrix was then applied to event-related MEG data concatenated across subjects. Iteratively reweighted least squares reduced the contributions of individual subject responses to particular stimulus locations that had large residual error relative to other locations and subjects (Hagler, 2014; Hagler & Dale, 2013; P. W. Holland & Welsch, 1977; Huber, 1981). 
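The iteratively reweighted least squares step can be sketched as follows with Huber weights on a simple linear model. This is a generic IRLS illustration, not the authors' code: the grouping of residuals by stimulus location and subject is simplified to per-observation weights, and the tuning constant is the conventional Huber value.

```python
import numpy as np

def irls_huber(F, y, c=1.345, n_iter=20):
    """Iteratively reweighted least squares with Huber weights.
    Observations (rows of F) with large residuals relative to a robust
    scale estimate are down-weighted on each iteration."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        Fw = F * w[:, None]
        s = np.linalg.solve(Fw.T @ F, Fw.T @ y)  # weighted LS solution
        r = y - F @ s
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / scale
        w = np.where(u <= c, 1.0, c / u)         # Huber weight function
    return s, w

rng = np.random.default_rng(2)
F = rng.standard_normal((200, 4))
s_true = np.array([1.0, -2.0, 0.5, 3.0])
y = F @ s_true + 0.05 * rng.standard_normal(200)
y[:10] += 20.0                  # a few grossly corrupted observations

s_ols = np.linalg.lstsq(F, y, rcond=None)[0]
s_irls, w = irls_huber(F, y)
```

The corrupted observations receive near-zero weights, so the robust estimate stays close to the true coefficients where ordinary least squares is pulled off target.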
Optimization of cortical patch locations constrained by prior
Cortical dipole patch locations were nonlinearly optimized using the group-constrained RCSE solution as an atlas-based prior (Hagler, 2014). At each of 1,000 iterations, cortical patches were slightly displaced across the cortical surface using two-dimensional grids defined to encompass each manual ROI used for retinotopic map fitting. The mean and maximum optimal displacements were approximately 2.5 and 5 mm across the cortical surface, respectively (Hagler, 2014). The optimization procedure attempts to minimize the difference between the individual subject RCSE waveforms and the group-constrained RCSE solution as well as the normalized residual error of the fit to the data. To compare responses for different parts of the visual field, dipole optimization was performed independently for each set of stimulus locations (i.e., right, left, perifoveal, peripheral, upper, lower). So that only the shape of the prior constrained the solution, the amplitude of the prior was linearly scaled to optimally match the source estimate amplitude at each iteration of the optimization procedure. The prior was scaled independently for each visual area to avoid imposing assumptions about the relative amplitudes of V1, V2, V3, or V3A. Similarly, to avoid predetermining the contrast response functions, only the responses to high-contrast stimuli and the group-constrained RCSE solutions computed from them were used to determine the optimal dipole locations; these locations were then used to estimate waveforms for each contrast level. 
Waveform analysis
Because the RCSE waveforms for some individual subjects exhibited responses with double peaks, peak latency and amplitude were derived from RCSE waveforms averaged across subjects (Hagler, 2014). Peaks were detected by finding minima and maxima that were at least 0.25 nA · m from surrounding extrema (Eli Billauer's peakdet: http://www.billauer.co.il/peakdet.html) and choosing the negative peak with the largest amplitude between 50 and 170 ms poststimulus. Onset latency was determined by finding the time of initial threshold crossing, with the threshold determined to be median baseline noise (−100 to 40 ms) plus 4.5 times the difference between the 25th and 75th percentile noise values (Letham & Raij, 2011; Miller, Patterson, & Ulrich, 1998). To calculate 95% confidence intervals for average waveforms, amplitudes, and latencies, bootstrap resampling was used with 2,000 iterations (Efron, 1987). To characterize differences in amplitude and latency for paired subsets of stimulus locations (e.g., upper and lower field), differences were calculated for each bootstrap sample. To reduce the number of statistical tests, peak latency and amplitude differences were averaged across the three levels of luminance contrast for each bootstrap sample. Confidence intervals and p-value upper bounds were then derived from the distribution of observed difference values, using bias correction and acceleration (Efron, 1987). To control for multiple comparisons, a p-value threshold of 0.0175 or less was determined to result in a 0.05 false discovery rate (FDR; Benjamini & Hochberg, 1995). With an FDR of 0.01, the p-value threshold was 0.0015. 
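The bootstrap comparison and FDR control described above can be sketched as follows. The subject values are made up for illustration, and a plain percentile confidence interval is shown where the paper uses bias correction and acceleration (BCa); the Benjamini–Hochberg step-up rule is standard.

```python
import numpy as np

def bootstrap_diff_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean paired difference between
    two per-subject measures (the paper additionally applies BCa
    correction; plain percentiles are used here for simplicity)."""
    rng = np.random.default_rng(seed)
    n = len(a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)   # resample subjects with replacement
        diffs[i] = np.mean(a[idx] - b[idx])
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return diffs.mean(), (lo, hi)

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg: largest p-value threshold controlling the
    false discovery rate at level q."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m
    return p[below][-1] if below.any() else 0.0

# Hypothetical upper-field minus lower-field peak latencies (ms), 9 subjects
upper = np.array([92., 88., 95., 90., 99., 87., 93., 91., 96.])
lower = upper - np.array([8., 10., 9., 12., 7., 11., 10., 9., 8.])
mean_diff, (lo, hi) = bootstrap_diff_ci(upper, lower)

thr = bh_threshold([0.001, 0.004, 0.012, 0.03, 0.2, 0.6], q=0.05)
```

A difference is declared significant when its confidence interval excludes zero and its p-value falls at or below the BH threshold for the chosen FDR.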
Calculation of normalized, average sensor waveforms
Absolute values of the visual evoked fields (VEFs) were averaged across a set of 16 gradiometers near the occipital lobe, and the average baseline value (−100 to 0 ms) was subtracted from the resulting waveforms. The waveforms were then averaged within sets of stimulus locations (e.g., upper field, lower field, etc.). To compare the relative amplitudes of paired sets of stimulus locations and calculate 95% confidence intervals of the group means, we used bootstrap resampling, as described previously. To remove the between-subject variability in overall amplitude, waveforms for each subject were normalized by the maximum value across time points, averaged across the two sets of stimulus locations to avoid bias. For visualizing the resulting waveforms, group averages and confidence intervals were normalized by the maximum value across time of either waveform. 
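The rectify–baseline–normalize sequence above can be sketched for one subject as follows; the function name, sensor counts, and synthetic waveforms are illustrative. The key detail is that both conditions are scaled by a shared factor (the mean of their maxima), so normalization does not bias the comparison toward either condition.

```python
import numpy as np

def normalized_average_vef(vef_a, vef_b, baseline_samples):
    """Rectify, baseline-correct, and jointly normalize paired condition
    averages (e.g. upper vs. lower field) for one subject. Each input is
    a (sensors, times) array from the occipital gradiometer selection."""
    out = []
    for vef in (vef_a, vef_b):
        w = np.abs(vef).mean(axis=0)          # average rectified sensors
        w = w - w[:baseline_samples].mean()   # subtract baseline mean
        out.append(w)
    # Shared scale across the two conditions to avoid bias
    scale = np.mean([w.max() for w in out])
    return [w / scale for w in out]

# Synthetic example: condition A twice as large as condition B
t = np.linspace(-0.1, 0.35, 271)
bump = np.exp(-0.5 * ((t - 0.1) / 0.02) ** 2)
vef_a = 2.0 * np.tile(bump, (16, 1))
vef_b = 1.0 * np.tile(bump, (16, 1))
wa, wb = normalized_average_vef(vef_a, vef_b, baseline_samples=60)
```

After shared normalization the two-to-one amplitude ratio between conditions is preserved, which is exactly what the group comparison needs.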
Results
Mappings between visual space and the cortical surface for V1, V2, V3, and V3A were created by fitting retinotopic templates to phase-encoded fMRI data (Figure 1). They were used to construct retinotopy-constrained, multidipole models of the sources of VERs recorded with MEG for 36 stimulus locations (Figure 2A, B). RCSE was used to estimate the time courses of the evoked responses in each of these visual areas (Figure 2C, D). The effects of varying luminance contrast—smaller amplitude and longer latency for lower contrast stimuli—were essentially the same for each visual area (Supplementary Figure S1). To simplify presentation of results, waveforms shown in this article are for the intermediate contrast level (71%), and peak amplitudes and latencies were averaged across contrast levels. Average RCSE peak amplitude for V1 was about twice as large as for V2, V3, and V3A (Figure 2E). Average RCSE peak latency was shortest for V1 (∼85 ms), with successively longer delays for V2, V3, and V3A (Figure 2F; Table 1). 
Table 1
 
Average RCSE peak amplitudes (nA · m) and latencies (ms) for V1, V2, V3, and V3A.
Visual area   Peak amplitude, mean [95% CI]   Peak latency, mean [95% CI]
V1            15 [13, 18]                     86 [83, 89]
V2            8 [6, 11]                       100 [97, 105]
V3            7 [5, 9]                        109 [106, 119]
V3A           6 [4, 9]                        126 [125, 136]
To test for consistent differences across the visual field, RCSE waveforms were calculated for paired subsets of stimulus locations, including left and right hemifields, perifoveal and peripheral visual fields, and upper and lower visual fields. Dipole optimization constrained by an atlas-based prior was performed independently for each set of stimulus locations. RCSE waveforms for responses to left and right hemifield stimuli were very similar, and there were no significant differences in either the peak amplitudes or latencies (Figure 3; Table 2; Supplementary Figures S3 and S4). In contrast, comparing the responses to perifoveal stimuli (at 3.6° visual angle) and peripheral stimuli (at 8.2° visual angle) revealed significant differences in both amplitudes and latencies (Figure 4; Table 2; Supplementary Figures S5 and S6). Peak amplitudes of responses to peripheral stimuli were significantly larger for V1 and V3A but significantly smaller for V3. Peak latency was significantly shorter for more peripheral stimuli in V1, V2, and V3A. 
Figure 3
 
Differences between left and right hemifields. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error.
Figure 4
 
Differences between perifoveal and peripheral stimuli. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error. * = FDR < 0.05; ** = FDR < 0.01.
Table 2
 
RCSE peak amplitude (nA · m) and latency (ms) differences between portions of the visual field. Notes: * = FDR < 0.05; ** = FDR < 0.01.
| Comparison | Visual area | Amplitude difference, mean [95% CI] | Latency difference, mean [95% CI] |
| --- | --- | --- | --- |
| Right vs. left | V1 | 2 [−1, 6] | 0 [−5, 2] |
| Right vs. left | V2 | 1 [−2, 3] | 1 [−1, 4] |
| Right vs. left | V3 | −1 [−5, 2] | 0 [−6, 6] |
| Right vs. left | V3A | −1 [−2, 0] | 5 [1, 14] |
| Perifoveal vs. peripheral | V1 | −5** [−8, −2] | 8** [2, 9] |
| Perifoveal vs. peripheral | V2 | −3 [−6, 0] | 7** [5, 11] |
| Perifoveal vs. peripheral | V3 | 3* [1, 5] | 6 [−5, 8] |
| Perifoveal vs. peripheral | V3A | −1* [−3, 0] | 9* [5, 16] |
| Upper vs. lower | V1 | −4** [−6, −1] | 1 [−2, 7] |
| Upper vs. lower | V2 | 1 [−2, 2] | −1 [−5, 2] |
| Upper vs. lower | V3 | −1 [−4, 1] | −4 [−17, 0] |
| Upper vs. lower | V3A | −3* [−8, −1] | 10** [3, 20] |
Compared to upper field responses, lower field RCSE peak amplitudes were significantly larger for V1 and V3A (Figure 5; Table 2; Supplementary Figures S7 and S8). The only peak latency difference between upper and lower field responses was for V3A, in which upper field responses were slightly delayed. Because a previous report documented larger differences in the amplitude of upper and lower field VERs (Portin et al., 1999), normalized average sensor waveforms were compared for upper and lower field stimuli. Consistent with the previous report, sensor magnitudes were roughly twice as large for lower field stimuli (Figure 6A). For comparison, sensor waveforms were synthesized from the fitted RCSE waveforms derived separately for upper and lower field stimuli (Figure 6B). The waveforms synthesized for upper field locations were also smaller, but to a lesser degree (Table 3). Peak latencies were not significantly different for upper and lower field responses, but onset latencies were consistently shorter for lower field responses by ∼8–12 ms (Table 4). 
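The error bars reported throughout are bootstrap standard errors. A minimal sketch of how a bootstrap standard error for peak amplitude and latency can be computed by resampling subjects, with entirely hypothetical subject counts, sampling rate, and waveform shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_t = 8, 300
t = np.arange(n_t, dtype=float)                  # time in ms (hypothetical)

# Simulated per-subject V1-like responses: a Gaussian deflection whose
# amplitude (~15 nA*m) and latency (~85 ms) vary across subjects.
amps = 15 + 2 * rng.standard_normal((n_sub, 1))
lats = 85 + 3 * rng.standard_normal((n_sub, 1))
waves = amps * np.exp(-0.5 * ((t[None, :] - lats) / 10.0) ** 2)

def peak(mean_wave):
    """Amplitude and latency of the largest deflection of a mean waveform."""
    i = int(np.argmax(np.abs(mean_wave)))
    return abs(mean_wave[i]), t[i]

# Resample subjects with replacement; the spread of the resampled peak
# statistics estimates their standard error.
boot = np.array([peak(waves[rng.integers(0, n_sub, n_sub)].mean(axis=0))
                 for _ in range(2000)])
amp_se, lat_se = boot.std(axis=0, ddof=1)
print(f"bootstrap SE: amplitude {amp_se:.2f} nA*m, latency {lat_se:.2f} ms")
```

The same resampling scheme yields percentile confidence intervals by taking quantiles of the bootstrap distribution instead of its standard deviation.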
Figure 5
 
Differences between upper and lower stimuli. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error. * = FDR < 0.05; ** = FDR < 0.01.
Figure 6
 
Normalized sensor data compared to simulations for upper and lower fields. (A) Normalized, averaged magnitudes of pooled occipital MEG sensor data. (B) Simulated data synthesized from the RCSE source waveforms. (C) Comparison of data predicted by separate estimates of V1+ and V1−. (D) Sensor magnitudes predicted by V1 with identical source waveforms for V1+ and V1−. (E–J) Sensor magnitudes predicted by V2, V3, and V3A. Shaded regions correspond to bootstrap standard error.
Table 3
 
Sensor magnitude peak amplitude (normalized, arbitrary units) and latency differences (ms) between upper and lower fields. Notes: ** = FDR < 0.01.
| Source | Amplitude difference, mean [95% CI] | Latency difference, mean [95% CI] |
| --- | --- | --- |
| Data | −0.41** [−0.5, −0.3] | −5 [−17, 0] |
| V1-V2-V3-V3A | −0.25** [−0.4, −0.1] | 0 [−8, 16] |
| V1, different | −0.31** [−0.4, −0.2] | 1 [−2, 9] |
| V1, equal | −0.20** [−0.3, −0.1] | 0 [−1, 1] |
| V2, different | −0.38** [−0.5, −0.3] | −2 [−10, −2] |
| V2, equal | −0.58** [−0.7, −0.4] | 0 [−1, 1] |
| V3, different | −0.60** [−0.7, −0.5] | 1 [−20, 19] |
| V3, equal | −0.67** [−0.8, −0.5] | 1 [0, 11] |
| V3A, different | −0.34 [−0.7, −0.1] | 10 [−1, 16] |
| V3A, equal | −0.09 [−0.3, 0.1] | −1 [−1, 2] |
Table 4
 
Sensor magnitude onset latency differences between upper and lower fields. Notes: * = FDR < 0.05; ** = FDR < 0.01.
| Source | Upper − lower field (ms), mean [95% CI] | Upper field (ms), mean [95% CI] | Lower field (ms), mean [95% CI] |
| --- | --- | --- | --- |
| Data | 9** [8, 12] | 68 [68, 71] | 59 [58, 62] |
| V1-V2-V3-V3A | 8** [2, 9] | 62 [56, 62] | 55 [53, 58] |
| V1 | 7** [6, 13] | 62 [56, 62] | 55 [47, 55] |
| V2 | 5* [1, 10] | 74 [67, 74] | 69 [64, 73] |
| V3 | 11* [2, 21] | 85 [61, 86] | 74 [62, 81] |
| V3A | 17 [−3, 23] | 87 [58, 87] | 70 [51, 69] |
To separate the effects of actual differences in evoked currents from differences in measured fields due to varying distances between sources and sensors, sensor waveforms were also synthesized separately for each visual area, with either different source waveforms for upper and lower field subareas (Figure 6C, E, G, I) or equal source waveforms for both (Figure 6D, F, H, J). For V1, the difference between synthesized upper and lower field sensor magnitudes was quite large when using source waveforms estimated separately for upper and lower field stimuli (Figure 6C), but greatly reduced when using equal source waveforms (Figure 6D). For V2 and V3, there were large differences in synthesized sensor magnitude regardless of whether or not equal source waveforms were used (Figure 6E through H), reflecting both the minimal difference in RCSE amplitudes observed for V2 and V3 (Figure 5B, C) and the substantial physical separation between their dorsal and ventral subareas. In contrast, for V3A there was a substantial difference between synthesized upper and lower field sensor magnitudes when using the separately estimated source waveforms (Figure 6I) that almost completely disappeared when source waveforms for upper and lower field were set equal (Figure 6J), reflecting the fact that the upper and lower field subareas of V3A are adjacent to each other. 
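The logic of these equal-versus-different simulations can be sketched as follows. The gain vectors and scale factors below are hypothetical stand-ins for the actual forward solutions; the point is that with equal source waveforms any remaining magnitude difference is purely geometric, whereas a genuinely larger lower-field source compounds it:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_t = 204, 300
t = np.arange(n_t, dtype=float)

g_upper = rng.standard_normal(n_sensors)         # upper-field patch gains
g_lower = 1.5 * rng.standard_normal(n_sensors)   # closer to sensors, so
                                                 # scaled up to mimic MEG's
                                                 # steep falloff with distance
src = np.exp(-0.5 * ((t - 85) / 10.0) ** 2)      # common waveform shape

def magnitude(gain, source):
    """RMS over sensors of the synthesized field at each time point."""
    return np.sqrt((np.outer(gain, source) ** 2).mean(axis=0))

# Equal source waveforms: any magnitude difference is purely geometric.
geometric = magnitude(g_lower, src).max() / magnitude(g_upper, src).max()

# Different source waveforms: a larger lower-field source compounds the
# geometric effect, as observed for V1 and V3A.
combined = magnitude(g_lower, 1.3 * src).max() / magnitude(g_upper, src).max()
print(f"lower/upper magnitude ratio: {geometric:.2f} (equal), "
      f"{combined:.2f} (different)")
```

Because the forward projection is linear, scaling the source waveform scales the synthesized sensor magnitude by exactly the same factor, which is what lets the geometric and source-amplitude contributions be separated.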
Discussion
It is well documented that behavioral responses to stimuli vary with their location in the visual field (Boulinguez, Ferrois, & Graumer, 2003; Cameron, 2005; Christman, Kitterle, & Hellige, 1991; Corballis, 2003; Corballis, Funnell, & Gazzaniga, 2002; Karim & Kojima, 2010; Kitterle, Hellige, & Christman, 1992; Kitterle & Selig, 1991; Levine & McAnany, 2005; McAnany & Levine, 2004; Okubo & Nicholls, 2008; Peyrin, Chauvin, Chokron, & Marendaz, 2003; Peyrin, Mermillod, Chokron, & Marendaz, 2006; Previc, 1990). Previous work using EEG or MEG has demonstrated variation in the VERs elicited by stimuli in different parts of the visual field (Baseler & Sutter, 1997; Harter, 1970; Portin et al., 1999; Skrandies, 1987). It is unknown, however, at what level of the visual system differences in processing emerge and how any such differences might propagate to higher areas. In the current study, comparing perifoveal and peripheral stimuli, and stimuli in the upper and lower hemifields, multiple visual areas exhibited differences in either peak amplitude, peak latency, or both. In contrast, no consistent differences were observed between the left and right hemifield responses. 
Left versus right hemifield
There is ample behavioral evidence of differences in visual processing between the left and right hemifields (Boulinguez et al., 2003; Christman et al., 1991; Corballis, 2003; Corballis et al., 2002; Kitterle et al., 1992; Kitterle & Selig, 1991; Okubo & Nicholls, 2008; Peyrin et al., 2003; Peyrin et al., 2006); however, in the current study there was no evidence of a consistent difference between hemifields in the peak amplitude or latency of the responses of the visual areas studied. It is possible that any variation in processing occurs in higher level, attention-related areas, where the right hemisphere is dominant for spatial attention (Corbetta, Miezin, Shulman, & Petersen, 1993; Heilman & Van Den Abell, 1979; Kinsbourne, 1970; Peyrin, Baciu, Segebarth, & Marendaz, 2004; Peyrin et al., 2005; Sturm, Reul, & Willmes, 1989). The current results are consistent with previous VEP studies (Ales et al., 2010; Dandekar, Ales, Carney, & Klein, 2007; Slotnick et al., 1999), but it should be noted that those studies relied on primarily qualitative comparisons in small numbers of subjects. It is generally difficult to directly compare sensor waveforms for different stimulus locations because of the convoluted cortical surface. RCSE accounts for the cortical folding pattern and retinotopy, providing a consensus solution across stimulus locations. RCSE waveforms can, however, vary because of slight inaccuracies in the cortical patches selected on the basis of fMRI retinotopy. In one or two subjects, a large asymmetry could occur by chance, due, for example, to variations in cortical surface reconstruction or retinotopic mapping. Given a large enough sample of subjects, such artifactual variation can presumably be distinguished from truly biological variation. 
Perifoveal versus peripheral
A previous VEP study demonstrated earlier responses to stimuli in the periphery, perhaps due to a greater proportion of fast, magnocellular input to V1 (Baseler & Sutter, 1997). Without adequate source estimation, sensor waveforms of VERs cannot distinguish between the activity of the several closely packed early visual areas. There could be mixing of magnocellular and parvocellular inputs at the level of V1 (Sincich & Horton, 2005) that eliminates or reduces the latency difference between the two volleys of input as the information is passed along to V2, V3, V3A, and other visual areas. In the current study, responses to perifoveal stimuli exhibited significant delays in peak activation for V1, V2, and V3A, suggesting a maintenance of the temporal separation between the magnocellular and parvocellular pathways. The absence of a significant difference in V3 latency could be interpreted as a diminution of the delay due to mixing, but the V3 estimates could also have been noisier than the others. The fact that significant peak latency differences between peripheral and perifoveal stimuli as short as 3 ms could be detected demonstrates the excellent temporal resolution of RCSE. 
Upper versus lower field
Differences between responses to stimuli in the upper and lower visual fields were predicted based on previous studies that have generally demonstrated a behavioral advantage for lower field stimuli (Levine & McAnany, 2005; McAnany & Levine, 2007; Previc, 1990; Skrandies, 1987), as well as much larger VEFs (Portin et al., 1999). It is important to note that the subareas of V1, V2, and V3 that respond to upper and lower field stimuli, respectively, are arrayed on opposite sides of the calcarine sulcus, with lower field sources considerably closer to extracranial EEG or MEG sensors, and thus having greater magnitude, particularly given the steep falloff in sensitivity for MEG (Cuffin & Cohen, 1979). Modest differences in RCSE peak amplitudes were observed for upper and lower field stimuli; however, about half of the twofold difference in sensor magnitudes can be accounted for on the basis of the difference in proximity of sensors and sources. For V2 and V3, the results suggest that there are no differences in amplitude that cannot be explained by proximity. Based on the time courses of sensor magnitudes for upper and lower visual field, it appears that a large portion of the difference occurs at time points later than what could be attributed to V1, V2, or V3, suggesting that downstream visual areas could be responsible. To examine this possibility, RCSE was extended to include estimates for V3A. Despite the fact that the upper and lower field halves of V3A are very close to each other and are both quite close to the sensors, significant differences in V3A upper and lower field amplitudes were observed. V3A alone exhibited a significant peak latency difference, with a lower field response that was ∼10 ms faster than for upper field, consistent with VEP reports of an 11–12-ms faster lower field response (Lehmann, Meles, & Mir, 1977; Lehmann & Skrandies, 1979). 
V3 versus VP
The lack of a difference in the timing, amplitude, or contrast response functions of the upper and lower field V3 responses may have implications for visual area terminology. The subareas of V3 have been referred to using three different naming conventions: V3− and V3+, V3d and V3v, or V3 and VP. The V3−/V3+ and V3d/V3v naming conventions are essentially equivalent, referring to either the portion of the visual field represented (lower or upper field) or the relative cortical anatomy (dorsal or ventral). In contrast, the V3/VP naming convention, used in several of the early reports of retinotopic mapping using fMRI (DeYoe et al., 1996; Sereno et al., 1995; Tootell et al., 1997), carries with it the implication that these subareas are functionally distinct visual areas. The distinctness of V3 and VP was suggested by differences in response properties and connectivity observed in macaques (Burkhalter, Felleman, Newsome, & Van Essen, 1986; Burkhalter & Van Essen, 1986; Felleman, Burkhalter, & Van Essen, 1997; Felleman & Van Essen, 1987; Newsome, Maunsell, & Van Essen, 1986; Van Essen, Newsome, Maunsell, & Bixby, 1986). Despite connections between V1 and V3−, similar connections between V1 and V3+ (or VP) were not observed in several early macaque studies (Felleman et al., 1997; Newsome et al., 1986; Van Essen et al., 1986). More recently, however, such connections have been demonstrated in macaque and other primate species (Lyon & Kaas, 2001, 2002a, 2002b, 2002c), perhaps due to better, more sensitive methods for tracing connections (Lyon & Kaas, 2002b). Strong connections between V1 and V3, but not VP, would predict an earlier response in lower field V3, but this was not observed, consistent with recent suggestions that dorsal and ventral V3 are two halves of a single visual area (Lyon & Kaas, 2002b; Wandell et al., 2007). 
It is acknowledged that this analysis of peak amplitude and latency is not an exhaustive characterization of potential differences between V3− and V3+. Furthermore, because of the limited sample size, it is possible that small differences did not reach significance. The current results do, however, indicate that under the current conditions, any peak amplitude or latency differences between V3− and V3+ were very small. 
Differences between visual areas
Based on peak latencies, there appears to be an incremental delay between V1, V2, V3, and V3A, suggesting serial stages of processing (Figure 2F; Table 1). The extent to which early visual areas have distinct time courses of activation is, however, somewhat contentious (Ales, Yates, & Norcia, 2010; Kelly, Schroeder, & Lalor, 2013). According to direct recordings in monkeys, early visual areas first become active nearly simultaneously (Schmolesky et al., 1998; Schroeder, Mehta, & Givre, 1998). Additionally, V2, V3, and V3A receive some degree of direct, subcortical input that bypasses V1 (Benevento & Yoshida, 1981; Bullier & Kennedy, 1983; Ptito, Johannsen, Faubert, & Gjedde, 1999; Schmid, Panagiotaropoulos, Augath, Logothetis, & Smirnakis, 2009; Sincich, Park, Wohlgemuth, & Horton, 2004; Yoshida & Benevento, 1981). In the current study, onset latency varied across visual areas similarly to peak latency, but the differences were somewhat smaller, and in the case of V3 and V3A, apparently eliminated (Tables 1 and 4). In addition, in each of the V2, V3, and V3A responses, there were small, positive deflections roughly coincident with the onset of the V1 response (Figure 2D), which are likely related to the depolarization of layer 2/3 pyramidal neurons (Barth & Di, 1991; Einevoll et al., 2007; Hagler, 2014; Hagler et al., 2009). The much larger, negative peak that follows is likely explained by activation of layer 5 pyramidal neurons (Barth & Di, 1991; Einevoll et al., 2007; Hagler et al., 2009). Focusing on this large negative peak, common to all four areas, simplifies the analysis of timing variations across visual areas. Analyses of onset latency, though also providing important information, were not relied upon as heavily in the current study because they tended to have greater sampling variability than peak latency. Differences in peak latency presumably reflect a real shift in the temporal pattern of activation. 
Nonetheless, there is also substantial overlap in the overall activation time courses, consistent with a high degree of parallel processing. 
As mentioned, the effects of varying luminance contrast on peak amplitude and latency were very similar for V1, V2, V3, and V3A (Supplementary Figure S1), and so the primary analyses conducted for this study were collapsed across contrast levels. This lack of a difference in contrast response functions is consistent with the previous study involving these data, which was limited to V1, V2, and V3 (Hagler, 2014), and it is similar to the results of previous fMRI studies (Avidan et al., 2002; Buracas, Fine, & Boynton, 2005; Kastner et al., 2004). There is, however, evidence from single-unit recordings in macaques that the median contrast threshold and saturation level are lower for V3 than for V2 (Gegenfurtner, Kiper, & Fenstemaker, 1996; Gegenfurtner, Kiper, & Levitt, 1997), similar to differences observed between V1 and MT/V5 and between parvo and magno neurons in the lateral geniculate nucleus (Sclar, Maunsell, & Lennie, 1990). The apparent interspecies contradiction is possibly related to fundamental differences in the measurements. Whereas Gegenfurtner and colleagues recorded single-unit spiking activity in multiple—though primarily superficial—cortical layers, peak amplitudes measured with RCSE most likely reflected intracellular currents in layer 5 neurons. Another possible explanation is related to stimulus properties. In the current study, responses were evoked by briefly presented pattern stimuli with broadband spatial frequencies, whereas drifting sine gratings were used in the macaque studies, with spatial and temporal frequencies optimized for each unit recording. Consistent with this, in human fMRI studies that used low-spatial-frequency drifting sine gratings, V3, MT/V5, and V3A were found to have low contrast saturation levels, unlike the more linear contrast response function of V1 (Tootell et al., 1997; Tootell et al., 1995). 
Future study is warranted in which stimulus properties are comprehensively varied, differentially driving magno and parvo activity, to more fully assess differences in the basic functional properties of visual areas in humans. 
Limitations
It is difficult to understand the interactions between brain areas when we cannot be confident about relative timing or amplitude of activation. Crosstalk between neighboring sources is a general limitation of source estimation methods, particularly for the tightly packed occipital visual areas (Auranen et al., 2009; Bonmassar et al., 2001; Cottereau et al., 2012; Dale et al., 2000; Di Russo et al., 2005; Hagler et al., 2009; Kajihara et al., 2004; Liu et al., 1998; Moradi et al., 2003; Sereno, 1998; Vanni et al., 2004; Yoshioka et al., 2008). RCSE greatly reduces crosstalk by carefully modeling the retinotopic distribution of cortical sources and using multiple stimulus locations to simultaneously constrain the solution (Ales et al., 2010; Hagler, 2014; Hagler & Dale, 2013; Hagler et al., 2009). Because of the specificity of the retinotopic pattern of dipole orientations, it is unlikely that the estimated source waveforms were affected by the omission from the model of other nearby visual areas (Hagler & Dale, 2013; Hagler et al., 2009). Nonetheless, the method is susceptible to inaccuracies in how cortical sources are specified. For example, B0 distortions in fMRI images, if uncorrected, could introduce a several-millimeter shift in the retinotopic map. Proper correction of these and other distortions is required to achieve precise registration between fMRI data and the cortical surfaces used to determine dipole orientations. The overall quality of fMRI retinotopy data is also important in determining the accuracy of the template fit used to select cortical dipole patches. Optimized fMRI retinotopy acquisition and analysis strategies may improve the accuracy of mapping the visual field to the cortical surface and thus improve the reliability of RCSE waveforms. 
Despite efforts to reduce the contribution of such inaccuracies—including the use of multiple stimulus locations (Ales et al., 2010; Hagler & Dale, 2013; Hagler et al., 2009), robust group analysis (Hagler, 2014), and atlas-based dipole optimization (Hagler, 2014)—it remains possible that some of the differences between responses to different visual field locations observed in the current study could reflect uncorrected biases, for example, map distortions that affect one portion of the visual field more than another, consistently across subjects. Amplitude estimation may be particularly vulnerable to small shifts across the cortical surface because of potential differences in cancellation patterns (Ahlfors et al., 2010; Irimia, Van Horn, & Halgren, 2012). Conversely, because of the relatively small sample in the current study, smaller differences may have failed to reach significance. Despite this inherent uncertainty, there do appear to be systematic differences between responses to upper and lower field stimuli and as a function of stimulus eccentricity. Thus, it is fair to question the validity of the key assumption of RCSE, that evoked responses are consistent across stimulus locations (Ales et al., 2010; Hagler et al., 2009; Slotnick et al., 1999). Given that the differences observed in waveform shape are generally subtle, it may be appropriate to view this as a simplifying assumption and the resulting source waveforms as consensus estimates. An alternative approach, however, is to allow the source estimates to smoothly vary as a function of stimulus location (Hagler et al., 2009). Future work could benefit from this approach as long as care is taken to distinguish between artifactual variation due to inaccurately specified dipole locations and biological variation from visual field asymmetries. 
More generally, a large number of approximations and assumptions have been made in the creation of retinotopy-constrained forward solutions and the estimation of source waveforms. A few important examples are the discretely sampled cortical surface, smoothness and other constraints on retinotopic map fitting, receptive field shape and sizes, and the assumed brain conductivity value. Because of this, and because the retinotopy-constrained forward solution is essentially an imperfect model of reality, the results of RCSE, like any source estimation method, should be interpreted with caution. 
Conclusions
Activation time courses in early human visual areas V1, V2, V3, and V3A were estimated using RCSE, integrating MEG and fMRI retinotopy. Using a robust estimation approach to reduce the contribution of outlier stimulus locations across subjects, consensus estimates of the VERs in V1, V2, V3, and V3A were obtained and then used to constrain dipole patch optimization in individual subjects (Hagler, 2014). This method improves the reliability of RCSE, particularly when using fewer stimulus locations to constrain the solution. This is the first reported use of RCSE to estimate V3A time courses. Although previous studies have estimated V3A time courses from noninvasive electrophysiological data (Aspell, Tanskanen, & Hurlbert, 2005; Wattam-Bell et al., 2010; Yang, Hsieh, & Chang, 2006), the use of RCSE accounts for the retinotopic arrangement of sources and excludes the contribution of activity in V1, V2, and V3. 
The amplitudes and latencies of the first major peak of estimated responses were compared for different subsets of stimuli in order to probe asymmetries across the visual field. As expected, responses to left and right hemifield stimuli did not differ. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli for V1, V2, and V3A, suggesting that volleys of activation due to magno and parvo pathways remain somewhat segregated in V1 and beyond. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field. About half of this difference was explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3; the other half was accounted for by source amplitude differences in V1 and V3A. There were no upper-versus-lower field differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. 
Acknowledgments
The author thanks Kendrick Kay for generously providing slope and intercept values for population receptive field size estimates as functions of eccentricity for visual areas including V1, V2, V3, and V3A. Thanks also to Chris Pung for assistance in data collection and analysis. This work was supported by National Institute of Mental Health K01MH079146. 
Commercial relationships: none. 
Corresponding author: Donald J. Hagler. 
Email: dhagler@ucsd.edu. 
Address: Department of Radiology, University of California–San Diego, La Jolla, CA, USA. 
References
Ahlfors S. P. Han J. Lin F. H. Witzel T. Belliveau J. W. Hamalainen M. S. Halgren E. (2010). Cancellation of EEG and MEG signals generated by extended and distributed sources. Human Brain Mapping, 31 (1), 140–149. [PubMed]
Ales J. Carney T. Klein S. A. (2010). The folding fingerprint of visual cortex reveals the timing of human V1 and V2. Neuroimage, 49 (3), 2494–2502. [CrossRef] [PubMed]
Ales J. M. Yates J. L. Norcia A. M. (2010). V1 is not uniquely identified by polarity reversals of responses to upper and lower visual field stimuli. Neuroimage, 52 (4), 1401–1409. [CrossRef] [PubMed]
Aspell J. E. Tanskanen T. Hurlbert A. C. (2005). Neuromagnetic correlates of visual motion coherence. European Journal of Neuroscience, 22 (11), 2937–2945. [CrossRef] [PubMed]
Auranen T. Nummenmaa A. Vanni S. Vehtari A. Hamalainen M. S. Lampinen J. Jaaskelainen I. P. (2009). Automatic fMRI-guided MEG multidipole localization for visual responses. Human Brain Mapping, 30 (4), 1087–1099. [CrossRef] [PubMed]
Avidan G. Harel M. Hendler T. Ben-Bashat D. Zohary E. Malach R. (2002). Contrast sensitivity in human visual areas and its relationship to object recognition. Journal of Neurophysiology, 87 (6), 3102–3116. [PubMed]
Barth D. S. Di S. (1991). Laminar excitability cycles in neocortex. Journal of Neurophysiology, 65 (4), 891–898. [PubMed]
Baseler H. A. Sutter E. E. (1997). M and P components of the VEP and their visual field distribution. Vision Research, 37 (6), 675–690. [CrossRef] [PubMed]
Benevento L. A. Yoshida K. (1981). The afferent and efferent organization of the lateral geniculo-prestriate pathways in the macaque monkey. Journal of Comparative Neurology, 203 (3), 455–474. [CrossRef]
Benjamini Y. Hochberg Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57 (1), 289–300.
Bonmassar G. Schwartz D. P. Liu A. K. Kwong K. K. Dale A. M. Belliveau J. W. (2001). Spatiotemporal brain imaging of visual-evoked activity using interleaved EEG and fMRI recordings. Neuroimage, 13 (6 Pt. 1), 1035–1043. [CrossRef] [PubMed]
Boulinguez P. Ferrois M. Graumer G. (2003). Hemispheric asymmetry for trajectory perception. Brain Research. Cognitive Brain Research, 16 (2), 219–225. [CrossRef] [PubMed]
Bressler D. W. Silver M. A. (2010). Spatial attention improves reliability of fMRI retinotopic mapping signals in occipital and parietal cortex. Neuroimage, 53 (2), 526–533. [CrossRef] [PubMed]
Bullier J. Kennedy H. (1983). Projection of the lateral geniculate nucleus onto cortical area V2 in the macaque monkey. Experimental Brain Research, 53 (1), 168–172. [PubMed]
Bullier J. Schall J. D. Morel A. (1996). Functional streams in occipito-frontal connections in the monkey. Behavioural Brain Research, 76 (1-2), 89–97. [CrossRef] [PubMed]
Buracas G. T. Fine I. Boynton G. M. (2005). The relationship between task performance and functional magnetic resonance imaging response. Journal of Neuroscience, 25 (12), 3023–3031. [CrossRef] [PubMed]
Burkhalter A. Felleman D. J. Newsome W. T. Van Essen D. C. (1986). Anatomical and physiological asymmetries related to visual areas V3 and VP in macaque extrastriate cortex. Vision Research, 26 (1), 63–80. [CrossRef] [PubMed]
Burkhalter A. Van Essen D. C. (1986). Processing of color, form and disparity information in visual areas VP and V2 of ventral extrastriate cortex in the macaque monkey. Journal of Neuroscience, 6 (8), 2327–2351. [PubMed]
Cameron E. L. (2005). Perceptual inhomogeneities in the upper visual field. Journal of Vision, 5 (8): 176, http://www.journalofvision.org/content/5/8/176, doi:10.1167/5.8.176. [Abstract]
Chang H. Fitzpatrick J. M. (1992). A technique for accurate magnetic resonance imaging in the presence of field inhomogeneities. IEEE Transactions on Medical Imaging, 11 (3), 319–329. [CrossRef] [PubMed]
Chen W. T. Ko Y. C. Liao K. K. Hsieh J. C. Yeh T. C. Wu Z. A. Lin Y. Y. (2005). Optimal check size and reversal rate to elicit pattern-reversal MEG responses. Canadian Journal of Neurological Sciences, 32 (2), 218–224. [CrossRef] [PubMed]
Christman S. Kitterle F. L. Hellige J. (1991). Hemispheric asymmetry in the processing of absolute versus relative spatial frequency. Brain and Cognition, 16 (1), 62–73. [CrossRef] [PubMed]
Connolly M. Van Essen D. (1984). The representation of the visual field in parvicellular and magnocellular layers of the lateral geniculate nucleus in the macaque monkey. Journal of Comparative Neurology, 226 (4), 544–564. [CrossRef]
Corballis P. M. (2003). Visuospatial processing and the right-hemisphere interpreter. Brain and Cognition, 53 (2), 171–176. [CrossRef] [PubMed]
Corballis P. M. Funnell M. G. Gazzaniga M. S. (2002). Hemispheric asymmetries for simple visual judgments in the split brain. Neuropsychologia, 40 (4), 401–410. [CrossRef] [PubMed]
Corbetta M. Miezin F. M. Shulman G. L. Petersen S. E. (1993). A PET study of visuospatial attention. Journal of Neuroscience, 13 (3), 1202–1226. [PubMed]
Cottereau B. R. McKee S. P. Ales J. M. Norcia A. M. (2012). Disparity-specific spatial interactions: Evidence from EEG source imaging. Journal of Neuroscience, 32 (3), 826–840. [CrossRef] [PubMed]
Cox R. W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29 (3), 162–173. [CrossRef] [PubMed]
Cuffin B. N. Cohen D. (1979). Comparison of the magnetoencephalogram and electroencephalogram. Electroencephalography and Clinical Neurophysiology, 47 (2), 132–146. [CrossRef] [PubMed]
Curcio C. A. Sloan K. R. Kalina R. E. Hendrickson A. E. (1990). Human photoreceptor topography. Journal of Comparative Neurology, 292 (4), 497–523. [CrossRef]
Dale A. M. Fischl B. Sereno M. I. (1999). Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage, 9 (2), 179–194. [CrossRef] [PubMed]
Dale A. M. Liu A. K. Fischl B. R. Buckner R. L. Belliveau J. W. Lewine J. D. Halgren E. (2000). Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron, 26 (1), 55–67. [CrossRef] [PubMed]
Dale A. M. Sereno M. I. (1993). Improved localization of cortical activity by combining EEG and MEG with MRI cortical surface reconstruction: A linear approach. Journal of Cognitive Neuroscience, 5, 162–176. [CrossRef] [PubMed]
Dandekar S. Ales J. Carney T. Klein S. A. (2007). Methods for quantifying intra- and inter-subject variability of evoked potential data applied to the multifocal visual evoked potential. Journal of Neuroscience Methods, 165 (2), 270–286. [CrossRef] [PubMed]
De Valois R. L. De Valois K. K. (1988). Spatial vision. New York: Oxford University Press.
DeYoe E. A. Carman G. J. Bandettini P. Glickman S. Wieser J. Cox R. (1996). Mapping striate and extrastriate visual areas in human cerebral cortex. Proceedings of the National Academy of Sciences, USA, 93 (6), 2382–2386. [CrossRef]
Di Russo F. Pitzalis S. Spitoni G. Aprile T. Patria F. Spinelli D. Hillyard S. A. (2005). Identification of the neural sources of the pattern-reversal VEP. Neuroimage, 24 (3), 874–886. [CrossRef] [PubMed]
Dougherty R. F. Koch V. M. Brewer A. A. Fischer B. Modersitzki J. Wandell B. A. (2003). Visual field representations and locations of visual areas V1/2/3 in human visual cortex. Journal of Vision, 3 (10): 1, 586–598, http://www.journalofvision.org/content/3/10/1, doi:10.1167/3.10.1. [PubMed] [Article]
Duncan R. O. Boynton G. M. (2003). Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron, 38 (4), 659–671. [CrossRef] [PubMed]
Efron B. (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association, 82 (397), 171–185. [CrossRef]
Einevoll G. T. Pettersen K. H. Devor A. Ulbert I. Halgren E. Dale A. M. (2007). Laminar population analysis: Estimating firing rates and evoked synaptic activity from multielectrode recordings in rat barrel cortex. Journal of Neurophysiology, 97 (3), 2174–2190. [CrossRef] [PubMed]
Felleman D. J. Burkhalter A. Van Essen D. C. (1997). Cortical connections of areas V3 and VP of macaque monkey extrastriate visual cortex. Journal of Comparative Neurology, 379 (1), 21–47. [CrossRef]
Felleman D. J. Van Essen D. C. (1987). Receptive field properties of neurons in area V3 of macaque monkey extrastriate cortex. Journal of Neurophysiology, 57 (4), 889–920. [PubMed]
Fennema-Notestine C. Hagler D. J. Jr. McEvoy L. K. Fleisher A. S. Wu E. H. Karow D. S. Dale A. M. (2009). Structural MRI biomarkers for preclinical and mild Alzheimer's disease. Human Brain Mapping, 30 (10), 3238–3253. [CrossRef] [PubMed]
Fioretto M. Gandolfo E. Orione C. Fatone M. Rela S. Sannita W. G. (1995). Automatic perimetry and visual P300: Differences between upper and lower visual fields stimulation in healthy subjects. Journal of Medical Engineering & Technology, 19 (2-3), 80–83. [CrossRef] [PubMed]
Fischl B. Liu A. Dale A. M. (2001). Automated manifold surgery: Constructing geometrically accurate and topologically correct models of the human cerebral cortex. IEEE Transactions on Medical Imaging, 20 (1), 70–80. [CrossRef] [PubMed]
Fischl B. Salat D. H. Busa E. Albert M. Dieterich M. Haselgrove C. Dale A. M. (2002). Whole brain segmentation: Automated labeling of neuroanatomical structures in the human brain. Neuron, 33 (3), 341–355. [CrossRef] [PubMed]
Fischl B. Sereno M. I. Dale A. M. (1999). Cortical surface-based analysis. II: Inflation, flattening, and a surface-based coordinate system. Neuroimage, 9 (2), 195–207. [CrossRef] [PubMed]
Gegenfurtner K. R. Kiper D. C. Fenstemaker S. B. (1996). Processing of color, form, and motion in macaque area V2. Visual Neuroscience, 13 (1), 161–172. [CrossRef] [PubMed]
Gegenfurtner K. R. Kiper D. C. Levitt J. B. (1997). Functional properties of neurons in macaque area V3. Journal of Neurophysiology, 77 (4), 1906–1923. [PubMed]
Hagler D. J. Jr. (2014). Optimization of retinotopy constrained source estimation constrained by prior. Human Brain Mapping, 35 (5), 1815–1833. [CrossRef] [PubMed]
Hagler D. J. Jr. Dale A. M. (2013). Improved method for retinotopy constrained source estimation of visual-evoked responses. Human Brain Mapping, 34 (3), 665–683. [PubMed]
Hagler D. J. Jr. Halgren E. Martinez A. Huang M. Hillyard S. A. Dale A. M. (2009). Source estimates for MEG/EEG visual evoked responses constrained by multiple, retinotopically-mapped stimulus locations. Human Brain Mapping, 30 (4), 1290–1309. [CrossRef] [PubMed]
Hagler D. J. Jr. Riecke L. Sereno M. I. (2007). Parietal and superior frontal visuospatial maps activated by pointing and saccades. Neuroimage, 35 (4), 1562–1577. [CrossRef] [PubMed]
Hagler D. J. Jr. Sereno M. I. (2006). Spatial maps in frontal and prefrontal cortex. Neuroimage, 29 (2), 567–577. [CrossRef] [PubMed]
Harter M. R. (1970). Evoked cortical responses to checkerboard patterns: Effect of check-size as a function of retinal eccentricity. Vision Research, 10 (12), 1365–1376. [CrossRef] [PubMed]
He S. Cavanagh P. Intriligator J. (1996). Attentional resolution and the locus of visual awareness. Nature, 383 (6598), 334–337. [CrossRef] [PubMed]
Heilman K. M. Van Den Abell T. (1979). Right hemispheric dominance for mediating cerebral activation. Neuropsychologia, 17 (3-4), 315–321. [CrossRef] [PubMed]
Holland D. Kuperman J. M. Dale A. M. (2010). Efficient correction of inhomogeneous static magnetic field-induced distortion in echo planar imaging. Neuroimage, 50 (1), 175–183. [CrossRef] [PubMed]
Holland P. W. Welsch R. E. (1977). Robust regression using iteratively reweighted least-squares. Communications in Statistics—Theory and Methods, A6, 813–827. [CrossRef]
Huber P. J. (1981). Robust statistics. New York: Wiley.
Intriligator J. Cavanagh P. (2001). The spatial resolution of visual attention. Cognitive Psychology, 43 (3), 171–216. [CrossRef] [PubMed]
Irimia A. Van Horn J. D. Halgren E. (2012). Source cancellation profiles of electroencephalography and magnetoencephalography. Neuroimage, 59 (3), 2464–2474. [CrossRef] [PubMed]
Jovicich J. Czanner S. Greve D. Haley E. van der Kouwe A. Gollub R. Dale A. M. (2006). Reliability in multi-site structural MRI studies: Effects of gradient non-linearity correction on phantom and human data. Neuroimage, 30 (2), 436–443. [CrossRef] [PubMed]
Kajihara S. Ohtani Y. Goda N. Tanigawa M. Ejima Y. Toyama K. (2004). Wiener filter-magnetoencephalography of visual cortical activity. Brain Topography, 17 (1), 13–25. [CrossRef] [PubMed]
Karim A. K. Kojima H. (2010). The what and why of perceptual asymmetries in the visual domain. Advances in Cognitive Psychology, 6, 103–115. [CrossRef] [PubMed]
Kastner S. O'Connor D. H. Fukui M. M. Fehd H. M. Herwig U. Pinsk M. A. (2004). Functional imaging of the human lateral geniculate nucleus and pulvinar. Journal of Neurophysiology, 91 (1), 438–448.
Kay K. N. Winawer J. Mezer A. Wandell B. A. (2013). Compressive spatial summation in human visual cortex. Journal of Neurophysiology, 110 (2), 481–494. [CrossRef] [PubMed]
Kelly S. P. Schroeder C. E. Lalor E. C. (2013). What does polarity inversion of extrastriate activity tell us about striate contributions to the early VEP? A comment on Ales et al. Neuroimage, 76, 442–445. [CrossRef] [PubMed]
Kinsbourne M. (1970). The cerebral basis of lateral asymmetries in attention. Acta Psychologica, 33, 193–201. [CrossRef]
Kitterle F. L. Hellige J. B. Christman S. (1992). Visual hemispheric asymmetries depend on which spatial frequencies are task relevant. Brain and Cognition, 20 (2), 308–314. [CrossRef] [PubMed]
Kitterle F. L. Selig L. M. (1991). Visual field effects in the discrimination of sine-wave gratings. Perception & Psychophysics, 50 (1), 15–18. [CrossRef] [PubMed]
Kremlacek J. Kuba M. Chlubnova J. Kubova Z. (2004). Effect of stimulus localisation on motion-onset VEP. Vision Research, 44 (26), 2989–3000. [CrossRef] [PubMed]
Lachica E. A. Beck P. D. Casagrande V. A. (1992). Parallel pathways in macaque monkey striate cortex: Anatomically defined columns in layer III. Proceedings of the National Academy of Sciences, USA, 89 (8), 3566–3570. [CrossRef]
Larsson J. Heeger D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. Journal of Neuroscience, 26 (51), 13128–13142. [CrossRef] [PubMed]
Lehmann D. Meles H. P. Mir Z. (1977). Average multichannel EEG potential fields evoked from upper and lower hemi-retina: Latency differences. Electroencephalography and Clinical Neurophysiology, 43 (5), 725–731. [CrossRef] [PubMed]
Lehmann D. Skrandies W. (1979). Multichannel evoked potential fields show different properties of human upper and lower hemiretina systems. Experimental Brain Research, 35 (1), 151–159. [PubMed]
Letham B. Raij T. (2011). Statistically robust measurement of evoked response onset latencies. Journal of Neuroscience Methods, 194 (2), 374–379. [CrossRef] [PubMed]
Levine M. W. McAnany J. J. (2005). The relative capabilities of the upper and lower visual hemifields. Vision Research, 45 (21), 2820–2830. [CrossRef] [PubMed]
Liu A. K. Belliveau J. W. Dale A. M. (1998). Spatiotemporal imaging of human brain activity using functional MRI constrained magnetoencephalography data: Monte Carlo simulations. Proceedings of the National Academy of Sciences, USA, 95 (15), 8945–8950. [CrossRef]
Liu A. K. Dale A. M. Belliveau J. W. (2002). Monte Carlo simulation studies of EEG and MEG localization accuracy. Human Brain Mapping, 16 (1), 47–62. [CrossRef] [PubMed]
Lyon D. C. Kaas J. H. (2001). Connectional and architectonic evidence for dorsal and ventral V3, and dorsomedial area in marmoset monkeys. Journal of Neuroscience, 21 (1), 249–261. [PubMed]
Lyon D. C. Kaas J. H. (2002a). Connectional evidence for dorsal and ventral V3, and other extrastriate areas in the prosimian primate, Galago garnetti. Brain, Behavior and Evolution, 59 (3), 114–129. [CrossRef]
Lyon D. C. Kaas J. H. (2002b). Evidence for a modified V3 with dorsal and ventral halves in macaque monkeys. Neuron, 33 (3), 453–461. [CrossRef] [PubMed]
Lyon D. C. Kaas J. H. (2002c). Evidence from V1 connections for both dorsal and ventral subdivisions of V3 in three species of New World monkeys. Journal of Comparative Neurology, 449 (3), 281–297. [CrossRef]
Malpeli J. G. Lee D. Baker F. H. (1996). Laminar and retinotopic organization of the macaque lateral geniculate nucleus: Magnocellular and parvocellular magnification functions. Journal of Comparative Neurology, 375 (3), 363–377. [CrossRef]
Martin K. A. (1992). Parallel pathways converge. Current Biology, 2 (10), 555–557. [CrossRef] [PubMed]
Maunsell J. H. Van Essen D. C. (1987). Topographic organization of the middle temporal visual area in the macaque monkey: Representational biases and the relationship to callosal connections and myeloarchitectonic boundaries. Journal of Comparative Neurology, 266 (4), 535–555. [CrossRef]
McAnany J. J. Levine M. W. (2004). The highs and lows of magnocellular and parvocellular processing. Journal of Vision, 4 (8): 515, http://www.journalofvision.org/content/4/8/515, doi:10.1167/4.8.515. [Abstract]
McAnany J. J. Levine M. W. (2007). Magnocellular and parvocellular visual pathway contributions to visual field anisotropies. Vision Research, 47 (17), 2327–2336. [CrossRef] [PubMed]
Merigan W. H. Maunsell J. H. (1993). How parallel are the primate visual pathways? Annual Review of Neuroscience, 16, 369–402. [CrossRef] [PubMed]
Miller J. Patterson T. Ulrich R. (1998). Jackknife-based method for measuring LRP onset latency differences. Psychophysiology, 35 (1), 99–115. [CrossRef] [PubMed]
Moradi F. Liu L. C. Cheng K. Waggoner R. A. Tanaka K. Ioannides A. A. (2003). Consistent and precise localization of brain activity in human primary visual cortex by MEG and fMRI. Neuroimage, 18 (3), 595–609. [CrossRef] [PubMed]
Morgan P. S. Bowtell R. W. McIntyre D. J. Worthington B. S. (2004). Correction of spatial distortion in EPI due to inhomogeneous static magnetic fields using the reversed gradient method. Journal of Magnetic Resonance Imaging, 19 (4), 499–507. [CrossRef] [PubMed]
Newsome W. T. Maunsell J. H. Van Essen D. C. (1986). Ventral posterior visual area of the macaque: Visual topography and areal boundaries. Journal of Comparative Neurology, 252 (2), 139–153. [CrossRef]
Nowak L. G. Munk M. H. Girard P. Bullier J. (1995). Visual latencies in areas V1 and V2 of the macaque monkey. Visual Neuroscience, 12 (2), 371–384. [CrossRef] [PubMed]
Okubo M. Nicholls M. E. (2008). Hemispheric asymmetries for temporal information processing: Transient detection versus sustained monitoring. Brain and Cognition, 66 (2), 168–175. [CrossRef] [PubMed]
Oostendorp T. F. van Oosterom A. (1989). Source parameter estimation in inhomogeneous volume conductors of arbitrary shape. IEEE Transactions on Biomedical Engineering, 36 (3), 382–391. [CrossRef] [PubMed]
Perry V. H. Cowey A. (1985). The ganglion cell and cone distributions in the monkey's retina: Implications for central magnification factors. Vision Research, 25 (12), 1795–1810. [CrossRef] [PubMed]
Peyrin C. Baciu M. Segebarth C. Marendaz C. (2004). Cerebral regions and hemispheric specialization for processing spatial frequencies during natural scene recognition: An event-related fMRI study. Neuroimage, 23 (2), 698–707. [CrossRef] [PubMed]
Peyrin C. Chauvin A. Chokron S. Marendaz C. (2003). Hemispheric specialization for spatial frequency processing in the analysis of natural scenes. Brain and Cognition, 53 (2), 278–282. [CrossRef] [PubMed]
Peyrin C. Mermillod M. Chokron S. Marendaz C. (2006). Effect of temporal constraints on hemispheric asymmetries during spatial frequency processing. Brain and Cognition, 62 (3), 214–220. [CrossRef] [PubMed]
Peyrin C. Schwartz S. Seghier M. Michel C. Landis T. Vuilleumier P. (2005). Hemispheric specialization of human inferior temporal cortex during coarse-to-fine and fine-to-coarse analysis of natural visual scenes. Neuroimage, 28 (2), 464–473. [CrossRef] [PubMed]
Portin K. Vanni S. Virsu V. Hari R. (1999). Stronger occipital cortical activation to lower than upper visual field stimuli: Neuromagnetic recordings. Experimental Brain Research, 124 (3), 287–294. [CrossRef] [PubMed]
Press W. A. Brewer A. A. Dougherty R. F. Wade A. R. Wandell B. A. (2001). Visual areas and spatial summation in human visual cortex. Vision Research, 41 (10-11), 1321–1332. [CrossRef] [PubMed]
Previc F. H. (1990). Functional specialization in the upper and lower visual fields in humans: Its ecological origins and neurophysiological implications. Behavioral and Brain Sciences, 13, 519–575. [CrossRef]
Ptito M. Johannsen P. Faubert J. Gjedde A. (1999). Activation of human extrageniculostriate pathways after damage to area V1. Neuroimage, 9 (1), 97–107. [CrossRef] [PubMed]
Schmid M. C. Panagiotaropoulos T. Augath M. A. Logothetis N. K. Smirnakis S. M. (2009). Visually driven activation in macaque areas V2 and V3 without input from the primary visual cortex. PLoS One, 4 (5), e5527. [CrossRef] [PubMed]
Schmolesky M. T. Wang Y. Hanes D. P. Thompson K. G. Leutgeb S. Schall J. D. Leventhal A. G. (1998). Signal timing across the macaque visual system. Journal of Neurophysiology, 79 (6), 3272–3278. [PubMed]
Schroeder C. E. Mehta A. D. Givre S. J. (1998). A spatiotemporal profile of visual system activation revealed by current source density analysis in the awake macaque. Cerebral Cortex, 8 (7), 575–592. [CrossRef] [PubMed]
Sclar G. Maunsell J. H. Lennie P. (1990). Coding of image contrast in central visual pathways of the macaque monkey. Vision Research, 30 (1), 1–10. [CrossRef] [PubMed]
Segonne F. Dale A. M. Busa E. Glessner M. Salat D. Hahn H. K. Fischl B. (2004). A hybrid approach to the skull stripping problem in MRI. Neuroimage, 22 (3), 1060–1075. [CrossRef] [PubMed]
Segonne F. Pacheco J. Fischl B. (2007). Geometrically accurate topology-correction of cortical surfaces using nonseparating loops. IEEE Transactions on Medical Imaging, 26 (4), 518–529. [CrossRef] [PubMed]
Sereno M. I. (1998). Brain mapping in animals and humans. Current Opinion in Neurobiology, 8 (2), 188–194. [CrossRef] [PubMed]
Sereno M. I. Dale A. M. Reppas J. B. Kwong K. K. Belliveau J. W. Brady T. J. Tootell R. B. (1995). Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science, 268 (5212), 889–893. [CrossRef] [PubMed]
Sincich L. C. Horton J. C. (2005). The circuitry of V1 and V2: Integration of color, form, and motion. Annual Review of Neuroscience, 28, 303–326. [CrossRef] [PubMed]
Sincich L. C. Park K. F. Wohlgemuth M. J. Horton J. C. (2004). Bypassing V1: A direct geniculate input to area MT. Nature Neuroscience, 7 (10), 1123–1128. [CrossRef] [PubMed]
Skottun B. C. Skoyles J. R. (2008). A few remarks on attention and magnocellular deficits in schizophrenia. Neuroscience & Biobehavioral Reviews, 32 (1), 118–122. [CrossRef]
Skrandies W. (1987). The upper and lower visual field of man: Electrophysiological and functional differences. In Ottoson D. (Ed.), Progress in sensory physiology (pp. 1–93). Berlin: Springer.
Slotnick S. D. Klein S. A. Carney T. Sutter E. Dastmalchi S. (1999). Using multi-stimulus VEP source localization to obtain a retinotopic map of human primary visual cortex. Clinical Neurophysiology, 110 (10), 1793–1800. [CrossRef] [PubMed]
Sturm W. Reul J. Willmes K. (1989). Is there a generalized right hemisphere dominance for mediating cerebral activation? Evidence from a choice reaction experiment with lateralized simple warning stimuli. Neuropsychologia, 27 (5), 747–751. [CrossRef] [PubMed]
Swisher J. D. Halko M. A. Merabet L. B. McMains S. A. Somers D. C. (2007). Visual topography of human intraparietal sulcus. Journal of Neuroscience, 27 (20), 5326–5337. [CrossRef] [PubMed]
Tootell R. B. Mendola J. D. Hadjikhani N. K. Ledden P. J. Liu A. K. Reppas J. B. Dale A. M. (1997). Functional analysis of V3A and related areas in human visual cortex. Journal of Neuroscience, 17 (18), 7060–7078. [PubMed]
Tootell R. B. Reppas J. B. Kwong K. K. Malach R. Born R. T. Brady T. J. Belliveau J. W. (1995). Functional analysis of human MT and related visual cortical areas using magnetic resonance imaging. Journal of Neuroscience, 15 (4), 3215–3230. [PubMed]
Tootell R. B. Switkes E. Silverman M. S. Hamilton S. L. (1988). Functional anatomy of macaque striate cortex. II. Retinotopic organization. Journal of Neuroscience, 8 (5), 1531–1568. [PubMed]
Van Essen D. C. Newsome W. T. Maunsell J. H. (1984). The visual field representation in striate cortex of the macaque monkey: Asymmetries, anisotropies, and individual variability. Vision Research, 24 (5), 429–448. [CrossRef] [PubMed]
Van Essen D. C. Newsome W. T. Maunsell J. H. Bixby J. L. (1986). The projections from striate cortex (V1) to areas V2 and V3 in the macaque monkey: Asymmetries, areal boundaries, and patchy connections. Journal of Comparative Neurology, 244 (4), 451–480. [CrossRef]
Vanni S. Warnking J. Dojat M. Delon-Martin C. Bullier J. Segebarth C. (2004). Sequence of pattern onset responses in the human visual areas: An fMRI constrained VEP source analysis. Neuroimage, 21 (3), 801–817. [CrossRef] [PubMed]
Wandell B. A. Dumoulin S. O. Brewer A. A. (2007). Visual field maps in human cortex. Neuron, 56 (2), 366–383. [CrossRef] [PubMed]
Warnking J. Dojat M. Guerin-Dugue A. Delon-Martin C. Olympieff S. Richard N. Segebarth C. (2002). fMRI retinotopic mapping—Step by step. Neuroimage, 17 (4), 1665–1683. [CrossRef] [PubMed]
Wattam-Bell J. Birtles D. Nystrom P. von Hofsten C. Rosander K. Anker S. Braddick O. (2010). Reorganization of global form and motion processing during human visual development. Current Biology, 20 (5), 411–415. [CrossRef] [PubMed]
Wells W. M. 3rd Viola P. Atsumi H. Nakajima S. Kikinis R. (1996). Multi-modal volume registration by maximization of mutual information. Medical Image Analysis, 1 (1), 35–51. [CrossRef] [PubMed]
Yang C. Y. Hsieh J. C. Chang Y. (2006). An MEG study into the visual perception of apparent motion in depth. Neuroscience Letters, 403 (1-2), 40–45. [CrossRef] [PubMed]
Yoshida K. Benevento L. A. (1981). The projection from the dorsal lateral geniculate nucleus of the thalamus to extrastriate visual association cortex in the macaque monkey. Neuroscience Letters, 22 (2), 103–108. [CrossRef] [PubMed]
Yoshioka T. Toyama K. Kawato M. Yamashita O. Nishina S. Yamagishi N. Sato M. A. (2008). Evaluation of hierarchical Bayesian method through retinotopic brain activities reconstruction from fMRI and MEG signals. Neuroimage, 42 (4), 1397–1413. [CrossRef] [PubMed]
Figure 1
 
Retinotopic map fitting. (A) To map the retinotopy of V1, V2, and V3, a six-segment conjoined template was smoothly deformed to match the data within a manually defined region of interest (ROI). (B) Separate ROIs were used to fit a single hemifield template to V3A.
Figure 2
 
Retinotopy-constrained source estimation. Stimuli at 36 visual field locations (A) were used to measure the varying amplitudes and polarities of MEG sensor waveforms (B) and calculate consensus estimates across stimulus locations for a single subject (C) and across both stimulus locations and subjects (D). Peak amplitudes (E) and peak latencies (F) derived from bootstrap resampled group average waveforms. Error bars and shaded regions correspond to standard error estimated from bootstrap resampling.
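The legends above report error bars derived from bootstrap resampling. As an illustration of that general technique (this is a generic sketch, not the authors' analysis code; the function name `bootstrap_se` is hypothetical), the standard error of a group-average quantity can be estimated by resampling subjects with replacement and taking the spread of the resampled means:

```python
import numpy as np

def bootstrap_se(samples, n_boot=2000, seed=0):
    """Standard error of the mean, estimated by bootstrap resampling.

    samples: 1-D array of per-subject (or per-stimulus-location) values.
    """
    rng = np.random.default_rng(seed)
    n = len(samples)
    # Resample with replacement and recompute the mean for each resample
    boot_means = np.array([
        rng.choice(samples, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    # The SD of the bootstrap distribution estimates the SE of the mean
    return boot_means.std(ddof=1)
```

Applied to waveforms rather than scalars, the same resampling yields the shaded standard-error regions shown in the figures.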
Figure 3
 
Differences between left and right hemifields. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error.
Figure 4
 
Differences between perifoveal and peripheral stimuli. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error. * = FDR < 0.05; ** = FDR < 0.01.
Figure 5
 
Differences between upper and lower stimuli. (A–D) RCSE waveforms. (E) Peak amplitude differences. (F) Peak latency differences. Error bars and shaded regions correspond to bootstrap standard error. * = FDR < 0.05; ** = FDR < 0.01.
Figure 6
 
Normalized sensor data compared to simulations for upper and lower fields. (A) Normalized, averaged magnitudes of pooled occipital MEG sensor data. (B) Simulated data synthesized from the RCSE source waveforms. (C) Comparison of data predicted by separate estimates of V1+ and V1−. (D) Sensor magnitudes predicted by V1 with identical source waveforms for V1+ and V1−. (E–J) Sensor magnitudes predicted by V2, V3, and V3A. Shaded regions correspond to bootstrap standard error.
Table 1
 
Average RCSE peak amplitudes (nA · m) and latencies (ms) for V1, V2, V3, and V3A.
Visual area   Peak amplitude, mean [95% CI]   Peak latency, mean [95% CI]
V1            15 [13, 18]                     86 [83, 89]
V2            8 [6, 11]                       100 [97, 105]
V3            7 [5, 9]                        109 [106, 119]
V3A           6 [4, 9]                        126 [125, 136]
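The peak amplitudes and latencies tabulated above summarize each area's source waveform by its largest deflection. A minimal sketch of that kind of measurement (illustrative only; `peak_amplitude_latency` is a hypothetical helper, not the authors' code) is:

```python
import numpy as np

def peak_amplitude_latency(waveform, times):
    """Return the largest-magnitude amplitude of a source waveform
    and the time at which it occurs.

    waveform: 1-D array of source amplitudes (e.g., nA*m).
    times: 1-D array of matching time points (e.g., ms post-stimulus).
    """
    i = np.argmax(np.abs(waveform))  # index of the peak deflection
    return waveform[i], times[i]
```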
Table 2
 
RCSE peak amplitude (nA · m) and latency (ms) differences between portions of the visual field. Notes: * = FDR < 0.05; ** = FDR < 0.01.
Comparison                  Visual area   Amplitude difference, mean [95% CI]   Latency difference, mean [95% CI]
Right vs. left              V1            2 [−1, 6]                             0 [−5, 2]
                            V2            1 [−2, 3]                             1 [−1, 4]
                            V3            −1 [−5, 2]                            0 [−6, 6]
                            V3A           −1 [−2, 0]                            5 [1, 14]
Perifoveal vs. peripheral   V1            −5** [−8, −2]                         8** [2, 9]
                            V2            −3 [−6, 0]                            7** [5, 11]
                            V3            3* [1, 5]                             6 [−5, 8]
                            V3A           −1* [−3, 0]                           9* [5, 16]
Upper vs. lower             V1            −4** [−6, −1]                         1 [−2, 7]
                            V2            1 [−2, 2]                             −1 [−5, 2]
                            V3            −1 [−4, 1]                            −4 [−17, 0]
                            V3A           −3* [−8, −1]                          10** [3, 20]
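The significance markers in these tables are thresholded at FDR < 0.05 and FDR < 0.01, i.e., corrected for multiple comparisons by controlling the false discovery rate. A generic sketch of the standard Benjamini-Hochberg procedure (illustrative only, not the authors' analysis code; `fdr_bh` is a hypothetical name) is:

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR procedure.

    Returns a boolean array marking which null hypotheses are rejected
    at false discovery rate q.
    """
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Compare each sorted p-value p_(k) against its threshold (k/m) * q
    thresh = q * np.arange(1, m + 1) / m
    below = ranked <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # Reject all hypotheses up to the largest k passing its threshold
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject
```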
Table 3
 
Sensor magnitude peak amplitude (normalized, arbitrary units) and latency differences (ms) between upper and lower fields. Notes: ** = FDR < 0.01.
Source               Amplitude difference, mean [95% CI]   Latency difference, mean [95% CI]
Data                 −0.41** [−0.5, −0.3]                  −5 [−17, 0]
V1-V2-V3-V3A         −0.25** [−0.4, −0.1]                  0 [−8, 16]
V1, different        −0.31** [−0.4, −0.2]                  1 [−2, 9]
V1, equal            −0.20** [−0.3, −0.1]                  0 [−1, 1]
V2, different        −0.38** [−0.5, −0.3]                  −2 [−10, −2]
V2, equal            −0.58** [−0.7, −0.4]                  0 [−1, 1]
V3, different        −0.60** [−0.7, −0.5]                  1 [−20, 19]
V3, equal            −0.67** [−0.8, −0.5]                  1 [0, 11]
V3A, different       −0.34 [−0.7, −0.1]                    10 [−1, 16]
V3A, equal           −0.09 [−0.3, 0.1]                     −1 [−1, 2]
Table 4
 
Sensor magnitude onset latency differences between upper and lower fields. Notes: * = FDR < 0.05; ** = FDR < 0.01.
Source         Upper − lower field, mean [95% CI]   Upper field, mean [95% CI]   Lower field, mean [95% CI]
Data           9** [8, 12]                          68 [68, 71]                  59 [58, 62]
V1-V2-V3-V3A   8** [2, 9]                           62 [56, 62]                  55 [53, 58]
V1             7** [6, 13]                          62 [56, 62]                  55 [47, 55]
V2             5* [1, 10]                           74 [67, 74]                  69 [64, 73]
V3             11* [2, 21]                          85 [61, 86]                  74 [62, 81]
V3A            17 [−3, 23]                          87 [58, 87]                  70 [51, 69]
Supplementary Figure Legends
Supplementary Figure S1
Supplementary Figure S2
Supplementary Figure S3
Supplementary Figure S4
Supplementary Figure S5
Supplementary Figure S6
Supplementary Figure S7
Supplementary Figure S8