Abstract
Recent developments in the encoding and decoding of visual stimuli have relied on different feature representations, such as pixel-level, Gabor wavelet, or semantic representations. In previous work, we showed that high-quality reconstructions of images can be obtained via the analytical inversion of regularized linear models operating on individual pixels. However, such simple models do not account for the complex nonlinear transformations of sensory input that take place in the visual hierarchy. I will argue that these nonlinear transformations can be estimated independently of brain data using statistical approaches. Decoding based on the resulting feature space is shown to yield better results than those obtained using a hand-designed feature space based on Gabor wavelets. I will discuss how alternative feature spaces, whether learned or hand-designed, can be compared with one another, thereby providing insight into what visual information is represented where in the brain. Finally, I will present some recent encoding and decoding results obtained using ultra-high field MRI.
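The "analytical inversion of regularized linear models" mentioned above can be illustrated with the closed-form ridge-regression solution. The sketch below is a minimal illustration with simulated data, not the authors' actual pipeline: the dimensions, noise level, and regularization strength are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials, voxels, and pixels (illustrative only).
n_samples, n_voxels, n_pixels = 200, 50, 16

# Simulated voxel responses and a linear voxel-to-pixel mapping with noise.
X = rng.standard_normal((n_samples, n_voxels))
W_true = rng.standard_normal((n_voxels, n_pixels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_pixels))

# Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Reconstruct pixel intensities from voxel responses.
Y_hat = X @ W
corr = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(corr)
```

With ample training samples and mild noise, the reconstructed pixel intensities correlate strongly with the true ones; in practice the regularization parameter would be chosen by cross-validation per pixel or per voxel.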
Meeting abstract presented at VSS 2014