Garikoitz Lerma-Usabiaga, Noah Benson, Jonathan Winawer, Brian Wandell; Computational validity of neuroimaging software: the case of population receptive fields. Journal of Vision 2020;20(11):341. doi: https://doi.org/10.1167/jov.20.11.341.
Neuroimaging software methods are complex, making it a near certainty that some implementations will contain errors. Modern computational techniques (e.g., public code and data repositories, continuous integration, containerization) enable reproducible analyses and reduce coding errors, but they cannot guarantee the scientific validity of the results. It is difficult, nay impossible, for researchers to check the accuracy of software by reading the source code; ground-truth test datasets are needed. Computational reproducibility means providing software so that, for the same input, anyone obtains the same result, right or wrong. Computational validity means obtaining the correct result for a given input.
We describe a framework for validating and sharing software implementations. We apply the framework to an application: population receptive field (pRF) methods for functional MRI data. The framework is composed of three main components implemented with containerization methods to guarantee computational reproducibility: (1) synthesis of fMRI time series from ground-truth pRF parameters, (2) implementation of four public pRF analysis tools and standardization of inputs and outputs, and (3) report creation to compare the results with the ground truth parameters. The framework and methods can be extended to other critical neuroimaging algorithms.
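Component (1) of the framework, synthesizing fMRI time series from known pRF parameters, can be illustrated with a minimal sketch. It assumes a linear Gaussian pRF model with center (x0, y0) and size sigma, a binary stimulus aperture, and a single-gamma HRF; the grid dimensions, HRF parameters, and the `synthesize` function name are illustrative choices for this sketch, not the framework's actual interface or defaults.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, extent=10.0, n=101):
    """2D Gaussian pRF on a visual-field grid (units: degrees)."""
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return rf / rf.sum()

def gamma_hrf(tr=1.0, duration=20.0, shape=6.0, scale=0.9):
    """Simple single-gamma hemodynamic response function (illustrative)."""
    t = np.arange(0, duration, tr)
    h = (t / scale) ** (shape - 1) * np.exp(-t / scale)
    return h / h.sum()

def synthesize(stimulus, x0, y0, sigma, tr=1.0, noise_sd=0.0, seed=0):
    """Ground-truth pRF + stimulus apertures -> noisy BOLD time series.

    stimulus: (T, n, n) array of binary apertures over T time points.
    """
    rf = gaussian_prf(x0, y0, sigma, n=stimulus.shape[1])
    # Neural drive: overlap of the stimulus aperture with the pRF.
    drive = stimulus.reshape(stimulus.shape[0], -1) @ rf.ravel()
    # Hemodynamic response: convolve the drive with the HRF.
    bold = np.convolve(drive, gamma_hrf(tr))[: stimulus.shape[0]]
    rng = np.random.default_rng(seed)
    return bold + rng.normal(0.0, noise_sd, bold.shape)

# Example: a vertical bar sweeping left to right across the visual field.
T, n = 60, 101
stim = np.zeros((T, n, n))
for t in range(T):
    col = int(t / T * n)
    stim[t, :, max(0, col - 5):col + 5] = 1.0
ts = synthesize(stim, x0=2.0, y0=0.0, sigma=1.5, noise_sd=0.1)
```

Feeding such synthesized series, with their known generating parameters, into each analysis tool is what allows component (3) to score recovered estimates against ground truth.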
In assessing validity across four implementations, we found and reported five coding errors. Most importantly, our results showed imperfect parameter recovery, with variation in ground truth values of one parameter influencing recovery of other parameters. This effect was present in all implementations.
The computational validity framework supports scientific rigor and creativity, as opposed to the oft-repeated suggestion that investigators rely upon a few agreed-upon packages. Validation frameworks help (a) developers build new software, (b) research scientists verify the software's accuracy, and (c) reviewers evaluate the methods used in publications and grants.