Abstract
The early visual areas (V1, V2, V3, hV4, V3a/b, LO1, VO1) are among the best-characterized parts of visual cortex, in terms of both the image features to which they respond and their topographic organization with respect to anatomical structure. Previous work has used models of image-feature sensitivity to explain much of the variance in BOLD fMRI measurements of these areas when subjects view static natural image stimuli (Kay et al., 2008, Nature 452:352). Such models are typically fit independently for each voxel or surface vertex, without taking advantage of the known retinotopic map organization or of regularities in model parameters across the cortical surface. In parallel, complementary work on models of the cortical surface has made it possible to predict the boundaries and retinotopic organization of the early visual areas from the anatomical structure of a subject's brain alone (Benson et al., 2014, PLOS Comput Biol 10:e1003538), although such models cannot predict responses to arbitrary images. In this presentation, we expand on and wed these two model types to produce a single model that predicts the BOLD signal across a subject's cortex from only a T1-weighted anatomical image and a grayscale stimulus image.
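To make the two-stage structure concrete, the sketch below illustrates one way such a pipeline could be organized: an anatomy-based step that assigns each cortical surface vertex a population receptive field (pRF) from brain structure alone (in the spirit of Benson et al., 2014), followed by an image-computable step that pools a simple contrast-energy representation of the stimulus through each pRF (in the spirit of Kay et al., 2008). This is a minimal illustration, not the actual model implementation; the function names, the random placeholder template, and the crude contrast-energy front end are all assumptions for exposition.

```python
import numpy as np

def anatomical_template(n_vertices):
    """Placeholder for the anatomy-based retinotopy step: assign each surface
    vertex an illustrative pRF center (deg of visual angle) and size."""
    prf_x = np.random.uniform(-10, 10, n_vertices)
    prf_y = np.random.uniform(-10, 10, n_vertices)
    prf_sigma = 0.5 + 0.1 * np.hypot(prf_x, prf_y)  # size grows with eccentricity
    return prf_x, prf_y, prf_sigma

def predict_bold(stimulus, prf_x, prf_y, prf_sigma, extent_deg=10.0):
    """Image-computable step: pool a crude local-contrast image of the
    grayscale stimulus through each vertex's Gaussian pRF."""
    h, w = stimulus.shape
    ys = np.linspace(-extent_deg, extent_deg, h)
    xs = np.linspace(-extent_deg, extent_deg, w)
    X, Y = np.meshgrid(xs, ys)
    energy = (stimulus - stimulus.mean()) ** 2  # stand-in for a contrast-energy model
    preds = np.empty(len(prf_x))
    for i, (x0, y0, s) in enumerate(zip(prf_x, prf_y, prf_sigma)):
        g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * s ** 2))
        preds[i] = (energy * g).sum() / g.sum()  # pRF-weighted contrast energy
    return preds
```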
This model, which we call the Standard Cortical Observer 1.0, takes advantage of regularities in the distribution of model parameters across the cortical surface with respect to retinotopic organization, eliminating the need for training. Without any training on an individual subject's data, the model correctly decodes more than half of a set of 120 natural stimuli in each of two subjects (chance performance = 1/120). The model also accurately predicts responses to sets of simple, controlled texture patterns that vary in basic properties such as orientation, contrast, spatial frequency, and sparsity.
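The abstract does not spell out the decoding procedure, but the reported chance level (1/120) is consistent with a standard correlation-based identification analysis, sketched below under that assumption: each measured response pattern is matched to the stimulus whose predicted pattern it correlates with best. The function name and array layout are illustrative.

```python
import numpy as np

def identify_stimuli(predicted, measured):
    """Correlation-based identification.
    predicted, measured: (n_stimuli, n_vertices) arrays of model predictions
    and measured BOLD response patterns, one row per stimulus/trial."""
    # z-score each row so the dot product below is a Pearson correlation
    P = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
    M = (measured - measured.mean(1, keepdims=True)) / measured.std(1, keepdims=True)
    corr = (M @ P.T) / P.shape[1]      # (n_stimuli, n_stimuli) correlation matrix
    decoded = corr.argmax(axis=1)      # best-matching predicted pattern per trial
    accuracy = np.mean(decoded == np.arange(len(measured)))
    return decoded, accuracy           # chance accuracy = 1 / n_stimuli (e.g., 1/120)
```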
Our model predicts responses well across a wide range of stimuli, including artificial stimuli and images of natural scenes. Nonetheless, it is a first-generation model and will improve as it incorporates additional computations and stimulus properties such as motion and color. Accordingly, the model has been designed with flexibility in mind, and both the source code and universal executable forms are freely available. Additionally, we have developed a public database and website where researchers may deposit anatomical data, stimulus sets, and functional data, and may run our model or their own versions of it. We hope that this resource will facilitate the sharing of data, the comparison and further development of models, and collaboration between laboratories.