**Abstract**

Properties of human visual population receptive fields (pRFs) are currently estimated by measuring responses to visual stimulation with functional magnetic resonance imaging (fMRI) and then fitting the results with a predefined model shape for the pRF. Various models exist, and different models may be appropriate under different circumstances, but their validity has never been verified, suggesting the need for a model-free approach. Here, we demonstrate that pRFs can be reconstructed directly using a back-projection-tomography approach that requires no a priori model. The back-projection method sweeps thin contrast-defined bars across the visual field, with the bars' orientation and direction of motion rotated through 0°–180° in discrete increments. The measured fMRI time series at a cortical location can be approximated as a projection of the pRF along the long axis of the bar, and the signals produced by a set of bar sweeps encircling the visual field form a sinogram. pRFs were reconstructed from these sinograms with a novel scheme that corrects for the blur introduced by the hemodynamic response and the stimulus-bar width. pRF positions agree well with those obtained from the conventional model-based approach. Notably, a subset of the reconstructed pRFs shows significant asymmetry in both their excitatory and suppressive regions. Tomographic reconstruction of pRFs is thus a fast, reliable, and accurate way to noninvasively estimate human pRF parameters and visual-field maps without any a priori shape assumption.

A set of *T*_{1}-weighted structural images was obtained on the same prescription at the end of the session using a three-dimensional RF-spoiled GRASS (SPGR) sequence. These images were used to align the functional data to a structural 3-D reference volume acquired for each subject in a separate session. The structural reference volume was *T*_{1}-weighted with good gray–white contrast and was acquired using a 3-D, inversion-prepared SPGR sequence (minimum TE and TR, inversion time = 450 ms, 15° flip angle, isometric voxel size of 0.7 mm, two excitations, duration ∼28 min). These volumes were segmented using ITK-SNAP and FreeSurfer to delineate the white and gray matter (Fischl, 2012; Yushkevich et al., 2006), and surface models were constructed at the gray–white interface. The gray matter of the posterior occipital lobe was also flattened to facilitate visualization (B. A. Wandell, Chial, & Backus, 2000).

The model-based analysis characterizes each pRF by a center position (*x*_{0}, *y*_{0}) and a size scale factor *σ*, which can also be expressed as an FWHM diameter *d*_{FWHM} = 2√(2 ln 2) *σ* ≈ 2.355 *σ*.
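For concreteness, the σ-to-FWHM conversion can be written as a one-line helper (a minimal sketch; `gaussian_fwhm` is an illustrative name, not from the paper):

```python
import math

def gaussian_fwhm(sigma: float) -> float:
    """FWHM diameter of a Gaussian with scale factor sigma."""
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, i.e. about 2.355 * sigma
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

print(round(gaussian_fwhm(1.0), 3))  # 2.355
```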

Assume that the response of a neuronal population *n*(*t*) to a time-varying two-dimensional visual stimulus *s*(*x*, *y*, *t*) is given by

n(t) = ∬ p(x, y) s(x, y, t) dx dy,

where *p*(*x*, *y*) describes the population receptive field (pRF), the spatial-response pattern of the neuronal population. Assume further that the stimulus is constant along the *y*-axis, so that *s*(*x*, *y*, *t*) = *s*(*x*, *t*), and define the projection of the pRF as

q(x) = ∫ p(x, y) dy;

then

n(t) = ∫ q(x) s(x, t) dx.

Now let us define the stimulus as a bar of contrast with its long axis along *y* and width *w*, moving with speed *v*:

s(x, t) = Rect[(x − (vt − R))/w],

where *R* is the radius of the stimulus aperture. At *t* = 0 and a sweep angle of 0°, the bar starts from the left edge of the aperture and moves to the right. Define a spatial variable *x*_{1} = *vt* − *R* and, noting that Rect is an even function,

n(t) = ∫ q(x) Rect[(x_{1} − x)/w] dx = q(x_{1}) ∗ Rect(x_{1}/w),

where the ∗ operator denotes convolution and the time dependence is now implicit in *x*_{1}. If we further assume that the fMRI response *f*(*t*) is given by the convolution of the neuronal response with a hemodynamic impulse response function (HRF) *h*(*t*), then

f(t) = h(t) ∗ n(t) = q(x_{1}) ∗ b(t),

where *b*(*t*) = *h*(*t*) ∗ Rect(*x*_{1}/*w*) is a blur kernel that includes the effects of both the finite bar width and the HRF. The same relationship holds for any sweep angle *θ*, with the pRF projection more generally defined as

q_{θ}(x′) = ∫ p(x′, y′) dy′,

where the (*x*′, *y*′) coordinates correspond to the original coordinates rotated by angle *θ*.
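To make the projection picture concrete, the following sketch (illustrative parameters and grid, not the study's code) builds a sinogram by summing a Gaussian pRF along the bar's long axis at each sweep angle; each projection peaks at x₀ cos θ + y₀ sin θ:

```python
import numpy as np

# Illustrative 2-D Gaussian pRF (degrees of visual field)
x0, y0, sigma = 1.0, -0.5, 0.8
R = 5.0                                      # aperture radius, assumed
xs = np.linspace(-R, R, 201)                 # positions along the sweep axis
angles = np.deg2rad(np.arange(0, 180, 30))   # discrete sweep angles [0°, 180°)

grid = np.linspace(-R, R, 201)
X, Y = np.meshgrid(grid, grid)
prf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

# Each row of the sinogram is the pRF summed along the bar's long axis
sinogram = np.empty((len(angles), len(xs)))
for i, th in enumerate(angles):
    Xp = X * np.cos(th) + Y * np.sin(th)     # rotated projection axis
    for j, x in enumerate(xs):
        # thin "bar": one sample-width band perpendicular to the sweep
        mask = np.abs(Xp - x) < (xs[1] - xs[0]) / 2
        sinogram[i, j] = prf[mask].sum()

# Peak of each projection lies at x0*cos(th) + y0*sin(th)
peaks = xs[sinogram.argmax(axis=1)]
```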

We remove the blur kernel *b*(*t*) using a Wiener filter in the Fourier-transform domain:

W(ω) = B*(ω) / (|B(ω)|² + k_{w}),

where *k*_{w} is the noise variance, *B*(*ω*) is the Fourier transform of the time-domain variable *b*(*t*), and similarly for other quantities; *ω* is temporal frequency. Applying the Wiener filter to the blurred projection data gives an estimate of the projected pRF in the frequency domain:

Q̂(ω) = W(ω) F(ω) = B*(ω) F(ω) / (|B(ω)|² + k_{w}).

Thus, for each raw projection obtained from the fMRI data, we can use the Wiener filter in the frequency domain to mitigate the HRF and bar-width effects and then take the inverse transform to estimate the true projection of the pRF. By forming a discrete series of projections around the pRF on the semiopen interval [0°, 180°), we then obtain an estimate of the Radon transform of the pRF, a sinogram that can be inverted by various means, most commonly filtered back projection with a Ram-Lak filter (Kak & Slaney, 1988).
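A minimal numerical sketch of this deconvolution step (illustrative projection, bar width, and HRF stand-in; only the Wiener formula itself is from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.arange(n)

# "True" pRF projection: a 1-D Gaussian (illustrative, not real data)
q = np.exp(-0.5 * ((x - 100) / 6.0) ** 2)

# Blur kernel b: boxcar (finite bar width) convolved with an
# exponential tail standing in for the HRF (both illustrative)
bar = np.zeros(n)
bar[:9] = 1.0
hrf = np.exp(-x / 10.0)
b = np.real(np.fft.ifft(np.fft.fft(bar) * np.fft.fft(hrf)))
b /= b.sum()

# Blurred, noisy "measured" projection: f = q * b + noise
f = np.real(np.fft.ifft(np.fft.fft(q) * np.fft.fft(b)))
f += 0.01 * rng.standard_normal(n)

# Wiener deconvolution: Q(w) = conj(B) F / (|B|^2 + k_w)
B, F = np.fft.fft(b), np.fft.fft(f)
k_w = 0.03
q_hat = np.real(np.fft.ifft(np.conj(B) * F / (np.abs(B) ** 2 + k_w)))
```

The deconvolution compensates the delay introduced by the kernel as well as its spread, so the recovered projection peaks near the true position.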

The quality of the reconstruction depends in part on the value of *k*_{w} chosen for the Wiener filter. Reconstruction noise can be quantified by examining the fraction of variance in the raw sinogram, the set of fMRI time series, that is explained by the reconstruction. To perform this comparison, each reconstruction is multiplied by the stimulus aperture and then convolved with the HRF to create a "reconstruction" time series, which is then correlated with the original raw sinogram. We report the mean variance explained for all voxels above our data-quality threshold (see below).
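The comparison reduces to a squared correlation between the reconstruction-derived time series and the raw sinogram. A minimal sketch (toy data; the full pipeline would first apply the aperture mask and HRF convolution, and `variance_explained` is an illustrative name):

```python
import numpy as np

def variance_explained(model_ts, raw_ts):
    """Fraction of variance in the raw time series captured by the
    model time series, computed as the squared Pearson correlation."""
    r = np.corrcoef(np.ravel(model_ts), np.ravel(raw_ts))[0, 1]
    return r ** 2

# Toy check with synthetic data (illustrative only)
rng = np.random.default_rng(1)
raw = rng.standard_normal(300)
model = raw + 0.3 * rng.standard_normal(300)
ve = variance_explained(model, raw)
```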

To characterize pRF shape, we constructed contours around each pRF maximum and fit each contour with an ellipse of effective diameter *d* = 2√(*ab*), where *a* and *b* are the major and minor radii of the ellipse, respectively. We used the elliptical fit to quantify the asymmetry of each contour, defining an aspect ratio AR = *a*/*b*.
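As a rough stand-in for an elliptical fit, the aspect ratio of a contour can be estimated from the second moments of its points (a sketch, not the authors' fitting procedure; `aspect_ratio` is an illustrative name):

```python
import numpy as np

def aspect_ratio(points: np.ndarray) -> float:
    """Aspect ratio (major/minor radius) of a 2-D contour, estimated
    from the eigenvalues of its second-moment (covariance) matrix."""
    cov = np.cov(points.T)            # points has shape (N, 2)
    evals = np.linalg.eigvalsh(cov)   # ascending order
    return float(np.sqrt(evals[1] / evals[0]))

# Toy check: an ellipse with radii 2 and 1 should give AR close to 2
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.column_stack([2.0 * np.cos(t), 1.0 * np.sin(t)])
ar = aspect_ratio(pts)
```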

For *n* = 6, the correlation coefficients obtained in this fashion have a minimum value of 1/(*n* − 1). We bootstrapped the *x* and *y* field-map positions, yielding an estimated sampling distribution for the position data and their 68% confidence intervals, *p*_{x} and *p*_{y}. We chose 68% confidence intervals because they are analogous to the standard error of the mean. We then formed an overall positional variability *p* = √(*p*_{x}² + *p*_{y}²).
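A bootstrap of positional variability along these lines might be sketched as follows (synthetic data; combining the per-axis intervals in quadrature is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative repeated position estimates for one voxel (degrees)
xs = rng.normal(1.2, 0.3, size=20)
ys = rng.normal(-0.4, 0.2, size=20)

def ci68_halfwidth(samples, n_boot=2000):
    """Half-width of the bootstrap 68% confidence interval of the mean."""
    means = np.array([rng.choice(samples, samples.size).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [16, 84])
    return (hi - lo) / 2

p_x = ci68_halfwidth(xs)
p_y = ci68_halfwidth(ys)
p = np.hypot(p_x, p_y)   # overall positional variability (quadrature)
```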

We explored values of *k*_{w} in the range of 0.01–0.1. The variance explained varies from subject to subject in the range of 0.72–0.88. If *k*_{w} is chosen too small, the deconvolution becomes ill conditioned and adds noise to the reconstruction; the corrected sinograms of many voxels then become too noisy to adequately reconstruct the pRF, and when contours are constructed around the pRF maxima for such voxels, they tend to show sizes near the resolution limit for that value of *k*_{w}. Larger values of *k*_{w} increase the diameter of the PSF, reducing the visual-field resolution and decreasing noise, but preventing the resolution of the smaller pRFs in early visual cortex; the blur also reduces the variance explained by the reconstruction. Based upon examination of the variance explained and the distribution of pRF sizes, we chose *k*_{w} = 0.03 because it provided a satisfactory trade-off between visual-field resolution and noise. The resolution varies somewhat from subject to subject because of variations in their HRFs; the mean resolution across all four subjects was 1.6°.

The *x* and *y* positions estimated by the two methods agree well (*R*² = 0.97; median RMS error = 0.34°) for Subject 1, with similar results for the other subjects (Table 1). Comparison data for polar angle are subject to artificial phase-wrapping errors and are therefore not shown. However, the eccentricity data show that the tomographic field maps systematically give smaller values toward the edge of the FOV, yielding somewhat poorer agreement overall (*R*² = 0.85; median RMS error = 0.41°).

**Table 1.** Agreement between tomographic and model-based pRF position estimates: *R*² and RMS error (°) in areas V1–V3.

| Subject | Measure | V1 R² | V1 RMS | V2 R² | V2 RMS | V3 R² | V3 RMS |
|---|---|---|---|---|---|---|---|
| 1 | x | 0.97 | 0.09 | 0.97 | 0.11 | 0.96 | 0.13 |
| 1 | y | 0.97 | 0.64 | 0.98 | 0.56 | 0.96 | 0.55 |
| 1 | ϵ | 0.85 | 0.41 | 0.64 | 0.49 | 0.82 | 0.47 |
| 2 | x | 0.99 | 0.15 | 0.98 | 0.23 | 0.91 | 0.37 |
| 2 | y | 0.95 | 0.42 | 0.97 | 0.40 | 0.86 | 0.49 |
| 2 | ϵ | 0.87 | 0.30 | 0.82 | 0.26 | 0.63 | 0.48 |
| 3 | x | 0.98 | 0.12 | 0.97 | 0.12 | 0.93 | 0.17 |
| 3 | y | 0.96 | 0.22 | 0.98 | 0.29 | 0.89 | 0.38 |
| 3 | ϵ | 0.91 | 0.22 | 0.88 | 0.25 | 0.70 | 0.37 |
| 4 | x | 0.96 | 0.08 | 0.95 | 0.08 | 0.89 | 0.14 |
| 4 | y | 0.92 | 0.19 | 0.94 | 0.21 | 0.65 | 0.38 |
| 4 | ϵ | 0.88 | 0.30 | 0.82 | 0.26 | 0.63 | 0.48 |

The tomographic size estimates in V1 and V2 show significant (*p* < 0.005) trends for pRF size increasing with eccentricity over the eccentricity range 0°–6°. The trends are substantially larger in area V3, but again the sizes are limited by the resolution at small eccentricities. At large eccentricities, the rising size trend reverses; this is likely a clipping effect associated with the finite FOV analyzed by the tomographic reconstruction (see Discussion). In contrast, the model-based approach shows much larger trends for increasing size with eccentricity, particularly in V1. For Subject 1 (Figure 9D), the size estimates from the two methods do not agree except at the larger eccentricities in V3. However, for Subject 2 there is fairly good agreement throughout the eccentricity range in area V3 (Figure 9F). In general, the size estimates agree when both tomography measurements and model predictions are above the resolution limit for the tomography data. The size trends for Subject 1 were similar to those for Subject 3, but in extrastriate areas, the tomographic approach gave smaller sizes, close to the resolution limit (Figure A1).

For the shape analysis, we selected pRFs with *d* > 1.5 *d*_{psf} to ensure our ability to resolve their shape, as well as visual-field eccentricities below FOV/2 − *d* to minimize clipping effects. Typically, about 30%–50% of the pRFs in areas V1–V3 met these constraints; such pRFs were concentrated in a band of eccentricities of 1°–3.5°.

The observed asymmetries are significant (*p* < 0.0001). These patterns of excitation and suppression generally repeat well across sessions in the same subject (Figure 7C, D). Asymmetry is fairly common within this subset of our data: for areas V1–V3 combined, a substantial fraction of the measured pRFs exhibit asymmetry (Figure 7E, F), with 45% having an aspect ratio above 1.5 and 11% having an aspect ratio above 2. Asymmetry shows little variation across visual areas or subjects.

**References**

*Journal of Neurophysiology*, 102(5), 2704–2718.

*Journal of Neuroscience*, 12(8), 3139–3161.

*Nature*, 321(6070), 579–585.

*Spatial Vision*, 10(4), 433–436.

*Journal of Physiology Paris*, 104(1–2), 40–50.

*NeuroImage*, 39(2), 647–660.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 21(5), 476–480.

*Journal of Neuroscience*, 31(13), 4792–4804.

*Physics in Medicine and Biology*, 50(4), R1–R43.

*Vision Research*, 25(3), 365–374.

*NeuroImage*, 9(4), 416–429.

*Magnetic Resonance in Medicine*, 42(2), 412–415.

*Magnetic Resonance in Medicine*, 39(3), 361–368.

*Magnetic Resonance in Medicine*, 44(1), 162–167.

*Journal of Neuroscience*, 31(38), 13604–13612.

*NeuroImage*, 65C, 424–432.

*Neuron*, 75(3), 393–401.

*Journal of Comparative Neurology*, 158(3), 295–305.

*Proceedings of the Royal Society of London. Series B, Biological Sciences*, 198(1130), 1–59.

*Principles of computerized tomographic imaging*. New York: Institute of Electrical and Electronics Engineers.

*Nature*, 452(7185), 352–355.

*Journal of Neurophysiology*, 110(2), 481–494.

*Graphical Models*, 73, 313–322.

*Magnetic Resonance in Medicine*, 46(4), 631–637.

*NeuroImage*, 81, 144–157.

*Current Opinion in Neurobiology*, 22(1), 34–44.

*Journal of Neuroscience*, 29(18), 5749–5757.

*Magnetic Resonance in Medicine*, 43(5), 705–715.

*Neural Computation*, 24(10), 2543–2578.

*NeuroImage*, 34(1), 74–84.

*Neuron*, 51(5), 661–670.

*Neuron*, 24(4), 791–802.

*Introduction to medical imaging: Physics, engineering and clinical applications*. Cambridge: Cambridge University Press.

*Journal of Neuroscience*, 30(1), 325–330.

*Journal of Neuroscience*, 25(39), 9046–9058.

*Science*, 249(4967), 417–420.

*Vision Research*, 24(5), 429–448.

*Trends in Neurosciences*, 6(9), 370–375.

*Journal of Neurophysiology*, 72(5), 2151–2166.

*Foundations of vision* (1st ed.). Sunderland, MA: Sinauer Associates.

*Journal of Cognitive Neuroscience*, 12(5), 739–752.

*Cerebral Cortex*, 17(10), 2293–2302.

*NeuroImage*, 31(3), 1116–1128.

*Journal of Vision*, 12(3):10, 1–15, http://www.journalofvision.org/content/12/3/10, doi:10.1167/12.3.10.

The measured tomographic pRF size can be approximated as a quadrature sum,

d_{tomo} ≈ √(d_{pRF}² + d_{FWHM}²),

where *d*_{pRF} is the actual size of the pRF and *d*_{FWHM} is the FWHM resolution. We can then roughly deblur the data by solving this approximation for

d_{pRF} ≈ √(d_{tomo}² − d_{FWHM}²).

When we apply this correction (for *d*_{FWHM} ≤ *d*_{tomo}), we obtain good agreement between the tomographic and model-based size estimates for the data from Subject 2, confirming the resolution limits of the tomographic method (Figure A2). Similar results were obtained for Subjects 1 and 3, but not for Subject 4.
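This correction can be sketched directly, assuming sizes combine in quadrature and clipping at zero when the measured size falls below the resolution limit (`deblur_size` is an illustrative name):

```python
import numpy as np

def deblur_size(d_tomo, d_fwhm):
    """Approximate deblurred pRF size, assuming the quadrature relation
    d_tomo^2 = d_pRF^2 + d_FWHM^2; sizes at or below the resolution
    limit are clipped to zero rather than producing imaginary values."""
    d_tomo = np.asarray(d_tomo, dtype=float)
    return np.sqrt(np.clip(d_tomo ** 2 - d_fwhm ** 2, 0.0, None))

# Example with a 1.6° resolution limit, as reported for these data
sizes = deblur_size([1.6, 2.0, 3.0], d_fwhm=1.6)
```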