Abstract
Understanding how the nervous system exploits task-relevant properties of sensory stimuli to perform natural tasks is central to the study of perceptual systems. Recently, a Bayesian ideal-observer method for task-specific dimensionality reduction, called Accuracy Maximization Analysis (AMA), was developed. AMA returns the encoding filters (receptive fields) that extract the most useful stimulus features for specific estimation and categorization tasks. Unfortunately, in its original form, AMA's compute time is quadratic in the number of stimuli in the training set, rendering it impractical for large-scale problems without specialized computing resources. Here, we develop AMA-Gauss, a new, more practical form of AMA that reduces compute time from quadratic to linear in the number of stimuli by incorporating the assumption that the conditional filter responses are Gaussian distributed. First, we verify the expected reduction in compute time with two fundamental tasks in early vision: binocular disparity estimation and retinal speed estimation. Second, we demonstrate that the task-specific receptive fields returned by AMA-Gauss closely approximate the properties of receptive fields in cortex. Third, we show that the Gaussian assumption is justified for both tasks with natural stimuli and biologically realistic contrast normalization. Fourth, we show that quadratic computations are required to compute the likelihood function and posterior probability distribution over the latent variable. Fifth, we make explicit the formal similarities between AMA-Gauss and the Generalized Quadratic Model (GQM), a recently developed method for neural systems identification. Together, these results provide a normative explanation for why energy-model-like (i.e., quadratic) computations account well for the response properties of neurons involved in these tasks. These developments should help accelerate research with natural stimuli, deepen our understanding of why classic descriptive models have proved successful, and improve our ability to evaluate results from subunit model fits to neural data.
Meeting abstract presented at VSS 2017
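The compute-time claim can be made concrete with a minimal sketch, not the authors' implementation: under the Gaussian assumption, the class-conditional filter-response distributions are summarized by per-category response means and covariances, so the posterior over the latent variable is obtained in a single pass through the training set (linear in the number of stimuli) rather than via computations over all pairs of training stimuli (quadratic). The Gaussian exponent is quadratic in the filter responses, which is the sense in which energy-model-like computations arise. Everything below (the function gauss_posterior and the random stand-in stimuli, filters, and labels) is an illustrative assumption, written in Python with NumPy and SciPy.

import numpy as np
from scipy.stats import multivariate_normal

def gauss_posterior(filters, stimuli, labels):
    """Posterior over latent-variable categories under the Gaussian assumption.

    filters : (d, q) array of q linear receptive fields
    stimuli : (N, d) array of contrast-normalized stimuli
    labels  : (N,) integer category label of each stimulus
    """
    responses = stimuli @ filters                        # (N, q) filter responses
    cats = np.unique(labels)
    # One pass per category to get Gaussian response statistics: O(N) in stimuli.
    stats = [(responses[labels == c].mean(axis=0),
              np.cov(responses[labels == c], rowvar=False)) for c in cats]
    # Class-conditional likelihoods; each Gaussian exponent is quadratic
    # in the filter responses (the energy-model-like computation).
    lik = np.column_stack([multivariate_normal.pdf(responses, mean=m, cov=C)
                           for m, C in stats])           # (N, n_categories)
    return lik / lik.sum(axis=1, keepdims=True)          # normalized posterior

# Toy usage with random stand-ins for natural stimuli and learned filters.
rng = np.random.default_rng(0)
stimuli = rng.standard_normal((1000, 64))
labels = rng.integers(0, 5, size=1000)
filters = rng.standard_normal((64, 3))
posterior = gauss_posterior(filters, stimuli, labels)    # (1000, 5) posterior over categories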