Abstract
Many visual neurons exhibit tuning functions for stimulus features such as orientation. Methods for analyzing fMRI data reveal analogous feature tuning in the BOLD signal (e.g., inverted encoding models; Brouwer and Heeger, 2009). Because these voxel-level tuning functions (VTFs) are superficially analogous to the neural tuning functions (NTFs) observed with electrophysiology, it is tempting to interpret VTFs as mirroring the underlying NTFs. However, each voxel contains many subpopulations of neurons with different preferred orientations, and the distribution of neurons across these subpopulations is unknown. Consequently, there are multiple alternative ways in which changes in the subpopulation NTFs could produce a given change in the VTF. We developed a hierarchical Bayesian model to determine, for a given change in the VTF, which account of the change in the underlying NTFs best explains the data. The model fits many voxels simultaneously, inferring both the shape of the NTF in different conditions and the distribution of neurons across subpopulations in each voxel. We tested this model in visual cortex by applying it to changes induced by increasing visual contrast, a manipulation known from electrophysiology to produce multiplicative gain in NTFs. Although increasing contrast caused an additive shift in the VTFs, the Bayesian model correctly identified multiplicative gain as the change in the underlying NTFs. This technique is potentially applicable to any fMRI study of modulations in cortical responses that are tuned to a well-established dimension of variation (e.g., orientation, speed of motion, isoluminant hue).
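The generative assumption at the heart of the abstract can be sketched in a few lines: a voxel's tuning function is a weighted sum of subpopulation NTFs, with the weights unknown. The sketch below is purely illustrative; the von Mises tuning shape, the parameter names (`kappa`, `baseline`, `amplitude`), and the Dirichlet draw for the weights are assumptions for demonstration, not the paper's actual parameterization.

```python
import numpy as np

def ntf(theta, pref, kappa=2.0, baseline=1.0, amplitude=1.0):
    # Hypothetical von Mises-shaped neural tuning function over orientation.
    # The factor of 2 makes the function pi-periodic, as orientation is.
    return baseline + amplitude * np.exp(kappa * (np.cos(2 * (theta - pref)) - 1))

def vtf(theta, prefs, weights, **ntf_kwargs):
    # Voxel tuning function: weighted sum of subpopulation NTFs.
    return sum(w * ntf(theta, p, **ntf_kwargs) for w, p in zip(weights, prefs))

# Preferred orientations of the subpopulations within one voxel.
prefs = np.linspace(0, np.pi, 8, endpoint=False)

# The distribution of neurons across subpopulations is unknown;
# here we draw illustrative weights from a Dirichlet distribution.
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(len(prefs)))

# Evaluate the VTF under two conditions that differ only by
# multiplicative gain in the underlying NTFs.
thetas = np.linspace(0, np.pi, 180)
low = vtf(thetas, prefs, weights, amplitude=1.0)
high = vtf(thetas, prefs, weights, amplitude=2.0)
```

Because the weights are unobserved, different combinations of weight distributions and NTF changes can produce similar-looking VTF changes; the hierarchical Bayesian model described above infers both jointly across many voxels to disambiguate them.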