Abstract
Recent methods for analyzing fMRI data produce voxel tuning functions (VTFs) that relate the value of a stimulus feature (e.g., orientation) to the intensity of the BOLD signal. Modulation of VTFs has been interpreted as reflecting changes in the shape of the response profile across populations of feature-selective neurons. However, knowing how the shape of a voxel's response profile is modified by a change in brain state (e.g., viewing stimuli at low versus high contrast) does not tell us how the response profiles of the neurons contributing to that voxel are modified. Mapping a VTF back to neural tuning functions (NTFs) is an ill-posed inverse problem: there are two unknown distributions (the shapes and the response magnitudes of the underlying NTFs) but only one observed distribution (the BOLD signal across values of the stimulus feature). We tackled this inverse problem by using two BOLD response profiles from two brain states (across which VTF shape is modulated) and solving for the modulations in the underlying distributions. We collected BOLD data from V1 in subjects viewing oriented sinusoidal gratings at low and high stimulus contrast. Taking orientation-selective voxel responses at low versus high contrast, we fitted multiple alternative models of NTF modulation (additive shift, multiplicative gain, bandwidth narrowing) assumed to drive the modulation of the VTF. We used parametric bootstrapping to penalize overly flexible models. Although the VTF underwent an additive shift from low to high contrast, the best-fitting models of NTF modulation accounting for this shift involved primarily multiplicative gain, in line with electrophysiological evidence. This demonstrates that the method can recover ‘ground truth’ by exploiting the constraints imposed by many voxels across two conditions. The method links monkey neurophysiological data on NTFs to human fMRI data on VTFs and should ultimately be applicable in other (non-visual) sensory cortices.
Meeting abstract presented at VSS 2013
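
As an illustrative sketch only (not the authors' implementation), the toy Python example below shows the general logic of the forward model described in the abstract: a voxel tuning function is modeled as a weighted sum of von Mises neural tuning functions, and alternative NTF modulations (additive shift, multiplicative gain, bandwidth change) are each fit to an "observed" high-contrast VTF simulated under multiplicative gain. All names, parameter values, and the least-squares fitting procedure are assumptions chosen for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

theta = np.deg2rad(np.arange(0, 180, 10))         # probed grating orientations
prefs = np.deg2rad(np.arange(0, 180, 15))         # assumed NTF preferred orientations
kappa = 2.0                                       # assumed NTF concentration (bandwidth)
rng = np.random.default_rng(0)
weights = rng.uniform(0.5, 1.5, size=prefs.size)  # assumed neuron-to-voxel weights

def ntf(kappa):
    # Von Mises tuning curves on the 180-degree orientation circle,
    # one column per preferred orientation.
    return np.exp(kappa * (np.cos(2 * (theta[:, None] - prefs)) - 1))

def vtf(kappa, gain=1.0, shift=0.0):
    # Voxel tuning function: weighted sum of NTFs after a candidate modulation.
    return (gain * ntf(kappa) + shift) @ weights

# Simulated "ground truth": high contrast applies multiplicative gain to the NTFs.
vtf_high_obs = vtf(kappa, gain=1.8) + rng.normal(0, 0.05, theta.size)

# Fit each candidate model of NTF modulation to the observed high-contrast VTF.
models = {
    "additive shift":      lambda p: vtf(kappa, shift=p),
    "multiplicative gain": lambda p: vtf(kappa, gain=p),
    "bandwidth change":    lambda p: vtf(p),
}
for name, predict in models.items():
    fit = minimize_scalar(lambda p: np.sum((predict(p) - vtf_high_obs) ** 2),
                          bounds=(0.01, 10.0), method="bounded")
    print(f"{name:20s}  SSE = {fit.fun:8.3f}  parameter = {fit.x:.2f}")

Unlike this single-voxel toy, the method in the abstract fits many voxels across both contrast conditions simultaneously and uses parametric bootstrapping to penalize overly flexible models; neither step is shown here.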