Abstract
The posterior visual field maps, V1-V3, have been well characterized using neuroimaging techniques and computational models. One type of model, a retinotopic template, accurately predicts retinotopic organization from anatomical structure [Benson et al. 2012 & 2014; 10.1016/j.cub.2012.09.014 & 10.1371/journal.pcbi.1003538]. A second type of model, an image-computable model, predicts fMRI response amplitude within these maps to a wide range of visual stimuli, including textures and natural images [e.g., Kay et al. 2008, 2013; 10.1038/nature06713 & 10.1371/journal.pcbi.1003079]. The parameters of these image-computable models are typically fit to fMRI data in each voxel independently. Here, we took advantage of the fact that these parameters are distributed regularly across the cortical surface, extending Benson et al.'s retinotopic templates to infer the parameters of an image-computable model based on Kay et al. (2013). By merging these two types of models, and extending the combined model to incorporate multiple spatial scales, we can predict the percent BOLD change across all voxels in V1-V3 in response to an arbitrary gray-scale image in any individual subject, given only the stimulus image and a T1-weighted anatomical image. Without any fitting to functional data, this model predicts responses with high accuracy (e.g., R = 0.80, 0.72, and 0.63 in V1, V2, and V3, respectively, in a sample experiment). Our model has been designed with flexibility in mind, and both the source code and universal executables are freely available. Additionally, we have developed a database and website where researchers will be able to deposit anatomical data, stimulus sets, and functional data, and to run our model or their own version of it. We hope that this space will facilitate the sharing of data, the comparison and further development of models, and collaboration between laboratories.
Meeting abstract presented at VSS 2017
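
To make the modeling pipeline concrete, the following is a minimal sketch (Python/NumPy) of the kind of per-voxel computation involved once a pRF position, size, gain, and compressive exponent have been assigned from the anatomy-based template: Gaussian spatial summation over a contrast image followed by a static compressive nonlinearity, in the spirit of Kay et al. (2013). It omits the earlier stages of that cascade (e.g., contrast-energy computation and divisive normalization), and the function name, parameter values, and field-of-view argument (extent_deg) are illustrative assumptions rather than the released implementation.

import numpy as np

def predict_voxel_response(contrast_image, x0, y0, sigma, n, gain, extent_deg=10.0):
    # Illustrative sketch only: Gaussian pRF summation plus a compressive
    # power law; not the released implementation of the full model.
    h, w = contrast_image.shape
    # Map pixel coordinates to degrees of visual angle (assumed square field of view).
    ys, xs = np.meshgrid(np.linspace(extent_deg, -extent_deg, h),
                         np.linspace(-extent_deg, extent_deg, w),
                         indexing='ij')
    # Isotropic Gaussian population receptive field centered at (x0, y0) with size sigma.
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    prf /= prf.sum()                              # unit-volume pRF
    summed = float(np.sum(prf * contrast_image))  # linear spatial summation
    return gain * summed ** n                     # compressive nonlinearity (n < 1)

# Example: a voxel with a pRF centered at (2, 1) deg, sigma = 1 deg, exponent 0.5.
rng = np.random.default_rng(0)
stim = rng.random((128, 128))                     # stand-in (nonnegative) contrast image
print(predict_voxel_response(stim, x0=2.0, y0=1.0, sigma=1.0, n=0.5, gain=2.0))

In the full model described above, these per-voxel parameters come from the anatomy-based retinotopic template rather than from fits to functional data, which is what allows predictions from only a T1-weighted anatomical image and the stimulus.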