Scott Gorlin, Ming Meng, Jitendra Sharma, Hiroki Sugihara, Mriganka Sur, Pawan Sinha; Decoding top-down information: Imaging prior knowledge in the visual system. Journal of Vision 2009;9(8):783. doi: https://doi.org/10.1167/9.8.783.
© ARVO (1962-2015); The Authors (2016-present)
The visual system is exquisitely good at recognizing images, even when presented with obstructed, noisy, or degraded stimuli. To solve such a complex problem, people use prior information, that is, knowledge of the fully coherent image, to aid recognition of noisy stimuli. Is this purely a cognitive effect, or can we find where and how this facilitation interacts with the feed-forward visual system? Using machine learning algorithms and a multivariate approach, we can quantify the amount of information a given brain region contains about the stimuli as a function of prior knowledge. Here we show that distinct regions from prefrontal to retinotopic cortex contain more information about degraded stimuli when prior knowledge is available, and that the gain in information is correlated with the strength of the behavioral effect, indicating that prior knowledge increases stimulus-specific information throughout the visual system. Interestingly, this form of priming depends critically on the complexity of the stimuli: prior information appears to be encoded over complex, real-world features, but not over simple stimuli such as oriented gratings. Furthermore, this effect is not seen in regions modulated by the recognition of any degraded image, indicating that standard univariate analyses, such as the GLM, may reveal a set of regions distinct from those involved in distinguishing between images.