Functional magnetic resonance imaging (fMRI) of the human brain has provided much information about visual cortex. These insights hinge on researchers' ability to identify cortical areas based on stimulus selectivity and retinotopic mapping. However, border identification around regions of interest or between retinotopic maps is often performed without characterizing the degree of certainty associated with the location of these key features; ideally, assertions about the location of boundaries would have an associated spatial confidence interval. We describe an approach that allows researchers to transform estimates of error in the intensive dimension (i.e., activation of voxels) to the spatial dimension (i.e., the location of features evident in patterns across voxels). We implement the approach by bootstrapping, with applications to: (1) the location of human MT+ and (2) the location of the V1/V2 boundary. The transformation of intensive to spatial error furnishes graphical, intuitive characterizations of spatial uncertainty akin to error bars on the borders of visual areas, instead of the conventional practice of computing and thresholding *p*-values for voxels. This approach provides a general, unbiased arena for evaluating: (1) competing conceptions of visual area organization; (2) analysis technique efficacy; and (3) data quality.

*location* of some feature of the data, rather than its intensity per se (Nickerson, Martin, Lancaster, Gao, & Fox, 2001). It is often important to be able to confidently assert not just whether or not a particular voxel's response passes some statistical threshold, but whether a spatial pattern of responses occurs in a particular location or exhibits some systematic structure. An appreciation of spatial uncertainty would strengthen such claims about the location of functional specialization and/or visual field organization. It thus seems desirable if not imperative to adopt a method of quantifying and displaying the spatial uncertainty associated with spatial features in fMRI data.

*spatial* uncertainty: one must transform error in the dimension of voxel intensity to error in the spatial dimension.

*y*-axis in Figure 1) to the spatial dimension (*x*-axis in Figure 1). A simple way to implement this transformation is to apply a resampling technique such as the bootstrap. We describe the details of one particular bootstrapping implementation in later sections, where we consider real fMRI data. For the purposes of this simulated example, it is sufficient to: (a) resample from the original ten runs with replacement and calculate the average response; (b) apply the same intensity threshold as applied in Figure 1B; and (c) repeat the resample and threshold steps (*a* and *b*) multiple times, recording the resulting locations of suprathreshold voxels identified on each iteration.

*x*-axis. One can thus characterize this variability in the spatial dimension, which amounts to quantifying spatial uncertainty. Figure 1D shows the result: the mean activated region, along with horizontally oriented error bars that indicate the spatial uncertainty (95% confidence intervals) associated with the location of suprathreshold activity.
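Steps (a)–(c) and the resulting spatial confidence intervals can be sketched in a few lines of NumPy. The simulated Gaussian activation profile, run count, noise level, and threshold below are illustrative assumptions, not values taken from Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for ten runs: a 1-D Gaussian "activation" profile
# plus independent noise on each run (all parameters are illustrative).
n_runs, n_voxels = 10, 64
x = np.arange(n_voxels)
true_profile = np.exp(-0.5 * ((x - 32) / 4.0) ** 2)
runs = true_profile + rng.normal(0.0, 0.4, size=(n_runs, n_voxels))

threshold = 0.5   # fixed intensity threshold, reused on every iteration
n_boot = 1000

left_edges, right_edges = [], []
for _ in range(n_boot):
    # (a) resample runs with replacement and average
    mean_resp = runs[rng.integers(0, n_runs, size=n_runs)].mean(axis=0)
    # (b) apply the same intensity threshold each time
    supra = np.flatnonzero(mean_resp > threshold)
    # (c) record where suprathreshold activity landed on this iteration
    if supra.size:
        left_edges.append(supra[0])
        right_edges.append(supra[-1])

# Spatial 95% confidence intervals for the two borders of the active region
left_ci = np.percentile(left_edges, [2.5, 97.5])
right_ci = np.percentile(right_edges, [2.5, 97.5])
print("left border 95% CI:", left_ci)
print("right border 95% CI:", right_ci)
```

Plotted as horizontally oriented error bars at the edges of the mean activated region, these intervals correspond to the spatial uncertainty display described for Figure 1D.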

*p*-values associated with particular voxels) does not allow one to directly infer the degree of error in the spatial domain (e.g., the precision of the localized activation), simply because the functions relating the two are nonlinear. Moreover, the form of this nonlinear dependence varies greatly with different thresholds, further emphasizing that transforming intensive error to spatial error provides novel information not easily recovered from visual inspection of thresholded statistical maps. Given that there is no universal standard for threshold levels across experiments and analyses, this dependence adds motivation for calculating spatial error directly. The potential complexity of the relationship revealed by this simple exercise underscores the importance of expressing spatial errors when addressing spatial questions.
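To build intuition for why the mapping from intensity threshold to spatial extent is nonlinear, consider a toy noiseless Gaussian activation profile (an illustration of the general point, not an analysis from this paper): solving A·exp(−x²/(2σ²)) = θ gives a suprathreshold width of 2σ√(2 ln(A/θ)), which is clearly nonlinear in θ:

```python
import numpy as np

# Width of the suprathreshold region for a noiseless Gaussian profile
# A * exp(-x**2 / (2 * sigma**2)): setting the profile equal to a
# threshold theta and solving for x gives 2 * sigma * sqrt(2 * ln(A / theta)).
A, sigma = 1.0, 4.0

def supra_width(theta):
    return 2.0 * sigma * np.sqrt(2.0 * np.log(A / theta))

# Equal steps in threshold produce unequal steps in spatial extent.
for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"threshold {theta:.1f} -> spatial extent {supra_width(theta):.2f}")
```

The unequal jumps in extent per equal step in threshold are the nonlinearity at issue; with noise and arbitrary response shapes the relationship is even less predictable, which is why it is better estimated empirically than assumed.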

*amplitude* and *phase* of the best-fitting sinusoid; and *coherence*, the correlation between the data and this best-fitting sinusoid.

*Coherence* is a standard measure of signal-to-noise ratio. For the purposes of this exercise, it can be thought of as loosely equivalent to any of the standard signal-to-noise-based statistics considered in most voxel-based analyses (e.g., *t*-statistics, *F*-values, *z*-scores, etc.).
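One common way to obtain these three quantities treats the Fourier coefficient at the stimulus frequency as the least-squares sinusoidal fit. The sketch below assumes a single voxel time series and an integer number of stimulus cycles; the function name is ours, and the coherence is computed literally as the correlation between the data and the fit, per the definition above:

```python
import numpy as np

def sinusoid_fit_stats(ts, stim_cycles):
    """Amplitude and phase of the best-fitting sinusoid at the stimulus
    frequency (stim_cycles full cycles over the series), plus a coherence
    value taken here as the correlation between data and fit."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    n = ts.size
    # The Fourier coefficient at the stimulus frequency yields the
    # least-squares sinusoidal fit at that frequency.
    coef = np.fft.rfft(ts)[stim_cycles]
    amplitude = 2.0 * np.abs(coef) / n
    phase = np.angle(coef)
    t = np.arange(n)
    fit = amplitude * np.cos(2.0 * np.pi * stim_cycles * t / n + phase)
    coherence = np.corrcoef(ts, fit)[0, 1]
    return amplitude, phase, coherence

# Noiseless check: a pure sinusoid should be recovered exactly.
t = np.arange(120)
amp, ph, coh = sinusoid_fit_stats(2.0 * np.cos(2 * np.pi * 8 * t / 120 + 0.5), 8)
```

For this pure sinusoid the function recovers the generating amplitude (2.0) and phase (0.5 rad) with a coherence of 1; adding noise pulls the coherence toward 0, which is what makes it usable as a thresholding statistic.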

*r*, as a four-dimensional *x–y–z–t* block of data (three spatial dimensions of the images, fluctuating over time). Multiple (*n*) runs are acquired, and so the *i*th repetition is referred to as *r*_i. Critically, the bootstrapping procedure will resample from this collection of *r*s, meaning that each complete *x–y–z–t* block of data (*r*_i) is treated as the core unit. This preserves all of the complex spatiotemporal correlation structure present in fMRI data, whatever its exact form may be. Thus, the bootstrap makes no assumptions about the particular form of the relation between intensive error and spatial error (e.g., those shown in Figure 2).

*r* are resampled with replacement to create a bootstrapped data set of length *n*. Data analysis is then performed (in this example, simple calculation of coherence followed by thresholding), and the results (e.g., locations of suprathreshold voxels composing the ROI) are recorded. This process is then repeated a large number of times (200 or more), with each iteration operating upon another *n* samples from *r* selected with replacement. The resulting *distribution* of results acquired across all bootstrapped iterations can then be analyzed and visualized in any number of ways. Figure 3 shows a graphical schematic of the spatial uncertainty framework and its bootstrapping implementation.
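The run-level resampling loop might be sketched as follows, with whole *x–y–z–t* runs as the resampling unit as described above. The array shapes, the threshold-style `analyze` function, and the recorded quantity (a voxelwise mask) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_roi(runs, analyze, n_boot=200):
    """Bootstrap over whole runs: each x-y-z-t block is the resampling
    unit, so within-run spatiotemporal correlations are left intact.

    runs    : array, shape (n_runs, nx, ny, nz, nt)
    analyze : maps an averaged (nx, ny, nz, nt) block to a boolean
              (nx, ny, nz) mask of suprathreshold voxels
    Returns the fraction of iterations on which each voxel was
    suprathreshold (a per-voxel inclusion-probability map).
    """
    n_runs = runs.shape[0]
    counts = np.zeros(runs.shape[1:4])
    for _ in range(n_boot):
        # Draw n_runs whole runs with replacement, then average them.
        sample = runs[rng.integers(0, n_runs, size=n_runs)].mean(axis=0)
        counts += analyze(sample)
    return counts / n_boot

# Toy usage: noise everywhere, signal added to a central sub-block, and an
# "analysis" that simply thresholds the temporal mean of each voxel.
runs = rng.normal(size=(8, 4, 4, 4, 20))
runs[:, 1:3, 1:3, 1:3, :] += 1.0
prob_map = bootstrap_roi(runs, lambda block: block.mean(axis=-1) > 0.5)
```

In practice `analyze` would be the full pipeline (e.g., coherence calculation followed by thresholding), and the recorded quantity could equally be border locations or cluster extents rather than a voxelwise mask.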

*SEM*).

*x–y–z–t* blocks from the original data), any statistical analysis can be applied to the data, and any particular way of deciding and recording activated voxels or clusters can be implemented. In fact, the results of the bootstrapping process can be used to gauge the relative efficacy of particular analysis schemes or steps.

*increase*, approaching the median as threshold increases. In fact, the inner region of high spatial confidence is relatively large at a low coherence threshold and gradually *shrinks* as the coherence threshold is increased.

^{3}–10^{4}). We confirmed that the results shown in this paper do not change appreciably when more than 200 iterations are used (i.e., the differences fall within measurement error). However, future applications that involve larger numbers of free parameters and/or larger numbers of runs that can be resampled may indeed benefit from more iterations.

*region*, a set of voxels that cannot be confidently assigned to either abutting map. If one wanted to be particularly conservative in distinguishing the two areas (perhaps for the purposes of testing for additional functional differences between them), selecting V1 and V2 as the respective regions that end at the outer contour surrounding the V1/V2 boundary would be appropriate. Other confidence levels (e.g., 5% or 95%) could be used to draw more conservative or more liberal distinctions between the maps and their border.
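As a toy illustration of this kind of conservative assignment, one can threshold per-voxel inclusion probabilities (such as those produced by bootstrap iterations) at a high confidence level and label the remaining strip as the border region. The probability profile below is synthetic, invented purely for illustration:

```python
import numpy as np

# Synthetic per-voxel probabilities of belonging to V1 along a strip of
# cortex crossing the V1/V2 border (values fabricated for illustration).
p_v1 = np.clip(np.linspace(1.2, -0.2, 32), 0.0, 1.0)
p_v2 = 1.0 - p_v1

# Conservative assignment: keep only voxels that fall inside a map with
# high confidence; everything else is the border region, i.e., voxels
# that cannot be confidently assigned to either map.
level = 0.975
conservative_v1 = p_v1 > level
conservative_v2 = p_v2 > level
border_region = ~(conservative_v1 | conservative_v2)

# Lowering `level` (e.g., to 0.5) shrinks the border region, yielding a
# more liberal division between the two maps.
```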

*any* border- or region-finding method. In fact, we would welcome a demonstration of a more precise method for finding the V1/V2 border using the spatial uncertainty framework; our proposed framework could be used to compare the spatial precision of various techniques. A final point: even the relatively coarse border-identification technique we employed yielded pleasingly narrow estimates of spatial uncertainty.

*post hoc* to bolster a particular interpretation.

*Neuroimage*, 17, 583–591.

*Journal of Magnetic Resonance Imaging*, 11, 228–231.

*IEEE Transactions on Biomedical Engineering*, 52, 401–413.

*Journal of Computer Assisted Tomography*, 25, 113–120.

*Nature Neuroscience*, 8, 1102–1109.

*Statistical Methods in Medical Research*, 12, 375–399.

*Human Brain Mapping*, 12, 61–78.

*Journal of Neuroscience Methods*, 54, 171–187.

*Journal of Vision*, 3(10):1, 586–598, http://journalofvision.org/3/10/1/, doi:10.1167/3.10.1.

*Neuroimage*, 39, 647–660.

*Neuron*, 38, 659–671.

*An introduction to the bootstrap*. New York: Chapman and Hall.

*Cerebral Cortex*, 7, 181–192.

*Nature*, 369, 525.

*Neuroimage*, 25, 859–867.

*Nature Neuroscience*, 1, 235–241.

*Neuroimage*, 35, 1562–1577.

*Neuroimage*, 33, 1093–1103.

*Neuroimage*, 29, 567–577.

*Journal of Neuroscience*, 27, 11896–11911.

*Magnetic Resonance Imaging*, 22, 631–638.

*Magnetic Resonance in Medicine*, 49, 7–12.

*Journal of Neuroscience*, 26, 13128–13142.

*Neuroimage*, 23, 764–775.

*Journal of Neuroscience Methods*, 135, 137–147.

*Neuroimage*, 34, 1562–1576.

*Neuroimage*, 14, 194–201.

*Journal of Magnetic Resonance*, 161, 1–14.

*Journal of Neurophysiology*, 97, 4284–4295.

*Journal of Neurophysiology*, 94, 1372–1384.

*Neuroimage*, 37, 1186–1194.

*Science*, 268, 889–893.

*Nature Neuroscience*, 9, 1337–1343.

*Science*, 294, 1350–1354.

*Neuroimage*, 15, 747–771.

*Journal of Neuroscience*, 27, 5326–5337.

*Cerebral Cortex*, 11, 298–311.

*Philosophical Transactions of the Royal Society of London B: Biological Sciences*, 357, 963–973.

*Cerebral Cortex*, 3, 79–94.