October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Computational validity of neuroimaging software: the case of population receptive fields
Author Affiliations & Notes
  • Garikoitz Lerma-Usabiaga
    Department of Psychology, Stanford University, 450 Serra Mall, Jordan Hall Building, 94305 Stanford, California, USA
BCBL, Basque Center on Cognition, Brain and Language, Mikeletegi Pasealekua 69, Donostia - San Sebastian, 20009 Gipuzkoa, Spain
  • Noah Benson
    Department of Psychology and Center for Neural Science, New York University, 6 Washington Pl, New York, NY, 10003, USA
  • Jonathan Winawer
    Department of Psychology and Center for Neural Science, New York University, 6 Washington Pl, New York, NY, 10003, USA
  • Brian Wandell
    Department of Psychology, Stanford University, 450 Serra Mall, Jordan Hall Building, 94305 Stanford, California, USA
  • Footnotes
Acknowledgements  This work was supported by a Marie Sklodowska-Curie grant (H2020-MSCA-IF-2017-795807-ReCiModel) to G.L.-U. We thank the Simons Foundation Autism Research Initiative and the Weston Havens Foundation for support.
Journal of Vision October 2020, Vol.20, 341. doi:https://doi.org/10.1167/jov.20.11.341
      Garikoitz Lerma-Usabiaga, Noah Benson, Jonathan Winawer, Brian Wandell; Computational validity of neuroimaging software: the case of population receptive fields. Journal of Vision 2020;20(11):341. https://doi.org/10.1167/jov.20.11.341.
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Neuroimaging software methods are complex, making it a near certainty that some implementations will contain errors. Modern computational techniques (e.g., public code and data repositories, continuous integration, containerization) enable reproducible analyses and reduce coding errors, but cannot guarantee the scientific validity of the results. It is difficult, if not impossible, for researchers to check the accuracy of software by reading the source code; ground-truth test datasets are needed. Computational reproducibility means providing software so that, for the same input, anyone obtains the same result, right or wrong. Computational validity means obtaining the correct result for a given input. We describe a framework for validating and sharing software implementations, and we apply it to one application: population receptive field (pRF) methods for functional MRI data. The framework comprises three main components, implemented with containerization methods to guarantee computational reproducibility: (1) synthesis of fMRI time series from ground-truth pRF parameters, (2) implementation of four public pRF analysis tools with standardized inputs and outputs, and (3) report creation to compare the results with the ground-truth parameters. The framework and methods can be extended to other critical neuroimaging algorithms. In assessing validity across the four implementations, we found and reported five coding errors. Most importantly, our results showed imperfect parameter recovery, with variation in the ground-truth value of one parameter influencing recovery of other parameters. This effect was present in all implementations. The computational validity framework supports scientific rigor and creativity, as opposed to the oft-repeated suggestion that investigators rely upon a few agreed-upon packages.
Validation frameworks help (a) developers build new software, (b) research scientists verify the software's accuracy, and (c) reviewers evaluate the methods used in publications and grants.
