Vision Sciences Society Annual Meeting Abstract | September 2018
Journal of Vision, Volume 18, Issue 10 (Open Access)
A hierarchical Bayesian model for inferring neural tuning functions from voxel tuning functions
Author Affiliations
  • Patrick Sadil
    Psychological and Brain Sciences, College of Natural Sciences, University of Massachusetts, Amherst
  • David Huber
    Psychological and Brain Sciences, College of Natural Sciences, University of Massachusetts, Amherst
  • John Serences
    Department of Psychology, University of California, San Diego
  • Rosemary Cowell
    Psychological and Brain Sciences, College of Natural Sciences, University of Massachusetts, Amherst
Journal of Vision September 2018, Vol.18, 536. doi:https://doi.org/10.1167/18.10.536
Abstract

It is tempting to infer the behavior of individual neurons from the behavior of individual voxels in an fMRI experiment. For instance, voxel tuning functions (VTFs) measure the magnitude of the BOLD response across a range of stimulus features (e.g., orientation), producing results that resemble the neural tuning functions (NTFs) obtained from single-cell recordings: like a simple cell in V1, a voxel prefers a particular orientation. However, a voxel likely reflects a mixture of different kinds of neurons with different preferred orientations. Forward encoding models (e.g., Brouwer and Heeger, 2009, 2011) take a GLM approach to this problem, specifying the strength of different neural sub-populations (e.g., neurons preferring different orientations) for each voxel. However, because these models assume a fixed NTF shape, they cannot identify changes in the shape of the neural tuning function; for instance, they could not identify whether the NTF sharpens with perceptual learning. To address this limitation, we developed a hierarchical Bayesian model that infers not only the relative proportions of the neural sub-populations contributing to a voxel, but also the shape of the NTF and changes in that shape. To test the validity of this approach, we collected fMRI data while subjects viewed oriented gratings at low and high contrast. We considered three alternative forms of NTF modulation by stimulus contrast (additive shift, multiplicative gain, bandwidth sharpening). To the naked eye, the VTFs revealed an additive shift from low to high contrast. However, the hierarchical Bayesian model indicated that this shift was caused by multiplicative gain in the underlying NTFs, in line with single-cell recordings. Beyond orientation, this approach could determine the form of neuromodulation in many fMRI experiments that test multiple points along a well-established dimension of variation (e.g., speed of motion, angle of motion, isoluminant hue).
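To make the forward model described above concrete, the sketch below (not the authors' code; all function names, tuning-curve choices, and parameter values are illustrative assumptions) composes a voxel tuning function as a weighted mixture of von-Mises-like neural tuning functions and shows how the three candidate forms of contrast modulation (additive shift, multiplicative gain, bandwidth sharpening) would each change the underlying NTF before mixing.

```python
import numpy as np

def ntf(theta, preferred, baseline=1.0, amplitude=2.0, bandwidth=20.0):
    # Von-Mises-like neural tuning function over orientation; orientation is
    # circular over 180 degrees, so angular differences are doubled.
    delta = np.deg2rad(2.0 * (theta - preferred))
    kappa = 1.0 / np.deg2rad(bandwidth) ** 2  # narrower bandwidth -> larger kappa
    return baseline + amplitude * np.exp(kappa * (np.cos(delta) - 1.0))

def vtf(theta, weights, preferred_orients, **ntf_params):
    # Voxel tuning function: a weighted mixture of neural sub-populations with
    # different preferred orientations (the per-voxel weights are one of the
    # quantities a hierarchical model would estimate).
    responses = np.stack([ntf(theta, p, **ntf_params) for p in preferred_orients])
    return weights @ responses

# Probe orientations (degrees) at which the BOLD response is measured.
theta = np.linspace(0.0, 180.0, 19)

# Hypothetical sub-population preferences and mixing weights for one voxel.
preferred = np.arange(0.0, 180.0, 30.0)
rng = np.random.default_rng(0)
weights = rng.dirichlet(np.ones(len(preferred)))

# Low-contrast baseline VTF, plus the three candidate forms of contrast
# modulation applied to the underlying NTF.
vtf_low            = vtf(theta, weights, preferred, baseline=1.0, amplitude=2.0, bandwidth=20.0)
vtf_additive       = vtf(theta, weights, preferred, baseline=1.5, amplitude=2.0, bandwidth=20.0)
vtf_multiplicative = vtf(theta, weights, preferred, baseline=1.0, amplitude=3.0, bandwidth=20.0)
vtf_sharpened      = vtf(theta, weights, preferred, baseline=1.0, amplitude=2.0, bandwidth=12.0)
```

Only the generative (forward) direction is sketched here; the inference step that the abstract describes, i.e., estimating the voxel-level weights, the NTF shape parameters, and which modulation form best accounts for the low- versus high-contrast VTFs within a hierarchical Bayesian model, is not shown.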

Meeting abstract presented at VSS 2018
