Abstract
A neuroscientific experiment typically generates a large amount of data, of which only a small fraction is subjected to detailed analysis and presentation in a publication. This inevitable selection is a major determinant of the final conclusion, and selecting among a set of noisy measurements can render an otherwise appropriate analysis circular, invalidating statistical tests. The issue of circularity is particularly important in both electrophysiological and neuroimaging experiments. Here we focus on neuroimaging and argue that the field needs to adjust some widespread practices to avoid the circularity that can arise from selection. Faced with even more parallel measurement sites than electrophysiology (typically on the order of 100,000 voxels), neuroimaging has developed rigorous methods for statistical mapping. This powerful approach avoids selection altogether by analyzing all locations equally while accounting for the fact that multiple tests are performed. However, selective analysis is still commonly used to focus on a particular brain region, and statistical mapping can in fact form the basis for defining a region of interest (ROI). Further analysis of such functionally defined regions must take the selection bias into account. This problem is well understood in theory, and one solution is to use independent experimental data to analyze the ROI. In practice, however, the selection bias is often ignored, and important claims rest on questionable circular analyses. To demonstrate the problem, we apply analyses widespread in the neuroimaging literature to data known not to contain any experimental effects: functional magnetic resonance imaging (fMRI) data acquired from a resting human subject performing no explicit task. This exercise shows that widespread analysis procedures can produce spurious effects in the context of both univariate activation analysis and multivariate pattern-information analysis.
We conclude by suggesting some simple guidelines for better practice in research and reviewing.
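The selection bias described above can be sketched in miniature with simulated Gaussian noise. This is a hypothetical toy setup, not the paper's actual fMRI analysis: voxels containing no true effect are selected by a contrast computed on one dataset, and re-estimating the effect from the same data yields a spurious positive result, whereas an independent dataset gives a near-zero estimate.

```python
# Circular analysis on pure noise: defining an ROI by a contrast and then
# estimating that same contrast from the same data inflates the effect.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 10_000, 20

# Pure noise: no condition difference exists in any voxel.
cond_a = rng.standard_normal((n_voxels, n_trials))
cond_b = rng.standard_normal((n_voxels, n_trials))

# "ROI definition": select the voxels whose A-minus-B difference looks largest.
diff = cond_a.mean(axis=1) - cond_b.mean(axis=1)
roi = np.argsort(diff)[-100:]  # top 100 voxels by the noisy contrast

# Circular estimate: re-using the selection data gives a spurious effect.
circular = diff[roi].mean()  # substantially positive despite no true effect

# Independent estimate: fresh data for the same voxels is unbiased.
cond_a2 = rng.standard_normal((n_voxels, n_trials))
cond_b2 = rng.standard_normal((n_voxels, n_trials))
independent = (cond_a2.mean(axis=1) - cond_b2.mean(axis=1))[roi].mean()

print(f"circular estimate:    {circular:.3f}")
print(f"independent estimate: {independent:.3f}")  # near zero
```

The independent estimate illustrates the remedy the abstract mentions: analyzing the functionally defined ROI with experimental data not used for the selection.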