Abstract
Virtual V1sion is a collaborative coding and data sharing project intended to move our understanding of primary visual cortex (V1) forward faster and more efficiently. There are dozens, if not hundreds, of established and effective computational models that succeed in relating perception to presumed underlying neuronal responses, or electrophysiology data to neuroimaging data. The best examples succeed in fitting behavioral, electrophysiological, and neuroimaging data simultaneously. Virtual V1sion grows from the premise that these models should be accessible to all and built on a framework that allows ready comparison between models and between computational results and data from a range of modalities. On the one hand, V1 is the most studied region of the brain: why should we invest more effort in it? On the other hand, the fact that we know so much about V1 means it is possible to generate falsifiable hypotheses and to collaborate on building a model that performs at a level that cannot be accomplished with the data and computational expertise housed in a single laboratory. The successes we have had with Virtual V1sion so far center on population-level models using divisive normalization to simulate interactions within and between classical and extra-classical receptive fields. Success is defined along several dimensions: improved fits to data of multiple modalities; improved visualizations and simulations of population-level responses that help the experimenter design and interpret fMRI experiments; and a reduction in the total number of models under consideration, after demonstrating that certain model elements are equivalent, redundant, or irrelevant. We present these successes and solicit contributions and critiques of the framework, so that ongoing design choices can maximize accessibility and utility for all.
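The divisive-normalization interaction described above, in which each unit's response is divided by a weighted pool spanning its classical receptive field and extra-classical surround, can be sketched as follows. This is a minimal illustration, not code from Virtual V1sion: the Gaussian pool, semi-saturation constant, and exponent are illustrative assumptions.

```python
import numpy as np

def divisive_normalization(drive, pool_weights, sigma=0.1, n=2.0):
    """Divide each unit's drive by a weighted normalization pool.

    drive: (units,) linear stimulus drives, one per unit.
    pool_weights: (units, units); row i weights unit j's contribution
        to unit i's pool (classical RF plus extra-classical surround).
    sigma, n: semi-saturation constant and exponent (assumed values).
    """
    energy = np.asarray(drive, dtype=float) ** n
    return energy / (sigma ** n + pool_weights @ energy)

# Hypothetical 1-D population of 21 units tiling space; each unit's
# pool is a Gaussian over neighboring units (width is an assumption).
positions = np.arange(21)
dist = np.abs(positions[:, None] - positions[None, :])
pool = np.exp(-dist ** 2 / (2 * 4.0 ** 2))
pool /= pool.sum(axis=1, keepdims=True)  # rows sum to 1

center_only = np.zeros(21)
center_only[10] = 0.8            # stimulus confined to the center unit
with_surround = np.full(21, 0.8)  # same center drive plus full surround

r_center = divisive_normalization(center_only, pool)
r_full = divisive_normalization(with_surround, pool)
# Adding surround drive enlarges the pool, suppressing the center
# unit's response relative to the center-only condition.
```

In this toy population the center unit's response drops when the surround is stimulated, the surround-suppression signature that such population-level normalization models are used to capture.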
Meeting abstract presented at VSS 2017