September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
An anatomically-defined template of BOLD response in V1-V3
Author Affiliations
  • Noah Benson
    Department of Psychology, New York University
  • William Broderick
    Center for Neural Science, New York University
  • Heiko Müller
    Center for Data Science, New York University
  • Jonathan Winawer
    Department of Psychology, New York University
    Center for Neural Science, New York University
Journal of Vision August 2017, Vol.17, 585. doi:https://doi.org/10.1167/17.10.585

Noah Benson, William Broderick, Heiko Müller, Jonathan Winawer; An anatomically-defined template of BOLD response in V1-V3. Journal of Vision 2017;17(10):585. https://doi.org/10.1167/17.10.585.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The posterior visual field maps, V1-V3, have been well-characterized using neuroimaging techniques and computational models. One type of model, a retinotopic template, accurately predicts the retinotopic organization from the anatomical structure [Benson et al. 2012 & 2014; 10.1016/j.cub.2012.09.014 & 10.1371/journal.pcbi.1003538]. A second type of model, an image-computable model, predicts fMRI response amplitude within these maps to a wide range of visual stimuli, including textures and natural images [e.g., Kay et al. 2008, 2013; 10.1038/nature06713 & 10.1371/journal.pcbi.1003079]. The parameters of these image-computable models are typically fit to fMRI data in each voxel independently. Here, we took advantage of the fact that these parameters are distributed regularly across the cortical surface, extending Benson et al.'s retinotopic templates to infer the parameters of an image-computable model, based on Kay et al. (2013). By merging these two types of models, and extending the model to incorporate multiple spatial scales, we can predict the percent BOLD change across all voxels in V1-V3 in response to an arbitrary gray-scale image in any individual subject given only the stimulus image and a T1-weighted anatomical image. Without any fitting to functional data, this model predicts responses with high accuracy (e.g., R = 0.80, 0.72, and 0.63 in V1, V2, and V3, respectively, from a sample experiment). Our model has been designed with flexibility in mind, and both source code and universal executables are freely available. Additionally, we have developed a database and website where researchers will be able to deposit anatomical data, stimulus sets, and functional data, and will be able to run our model or their own version of it. We hope that this space will facilitate the sharing of data, the comparison and further development of models, and collaboration between laboratories.

Meeting abstract presented at VSS 2017
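To make the pipeline concrete, the following Python sketch illustrates the core idea under strong simplifications: each voxel's pRF center and size (in degrees of visual angle) are taken from the anatomical template rather than fit to functional data, the stimulus is reduced to a crude local-contrast-energy image, and the energy pooled under a Gaussian pRF is passed through a compressive output nonlinearity in the spirit of Kay et al. (2013). All function names (e.g., predict_voxel_bold) and parameter values are hypothetical illustrations, not the released implementation, which operates over multiple spatial scales and includes additional stages.

import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_energy(image, window=9):
    # Crude stand-in for an oriented, multi-scale filter bank: squared
    # deviation of each pixel from its local mean.
    image = image.astype(float)
    return (image - uniform_filter(image, size=window)) ** 2

def predict_voxel_bold(image, prf_x, prf_y, prf_sigma,
                       gain=1.0, exponent=0.5, pix_per_deg=20.0):
    # Predict a (unitless) response for one voxel whose pRF center and size,
    # in degrees of visual angle, come from the anatomical template.
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs = (xs - w / 2.0) / pix_per_deg        # horizontal position (deg)
    ys = (h / 2.0 - ys) / pix_per_deg        # vertical position (deg, y up)
    prf = np.exp(-((xs - prf_x) ** 2 + (ys - prf_y) ** 2)
                 / (2.0 * prf_sigma ** 2))
    prf /= prf.sum()                         # normalized Gaussian pRF weights
    pooled = np.sum(prf * local_contrast_energy(image))
    return gain * pooled ** exponent         # compressive summation

# Example: the template supplies (x, y, sigma) for every V1-V3 voxel from the
# T1-weighted anatomy alone; the values below are placeholders.
stimulus = np.random.rand(256, 256)          # arbitrary gray-scale image
template_prfs = [(1.5, 0.5, 0.6), (4.0, -2.0, 1.2)]
predictions = [predict_voxel_bold(stimulus, x, y, s) for (x, y, s) in template_prfs]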
