Zoey J Isherwood, Katherine EM Tregillus, Michael A Webster; Contributed Session III: Assessing the neural coding of image blur using multivariate pattern analysis. Journal of Vision 2022;22(3):34. doi: https://doi.org/10.1167/jov.22.3.34.
Blur is a fundamental perceptual attribute of images, but how the visual system encodes this attribute remains poorly understood. Previously, we examined the neural correlates of blur by measuring BOLD responses to in-focus images and their blurred or sharpened counterparts, formed by varying the slope of the amplitude spectrum while holding RMS contrast constant (Tregillus et al. 2014). In visual cortex (V1-V4), activation was highest for in-focus images relative to their blurred or sharpened counterparts – a finding that counters expectations from norm-based or predictive coding but is consistent with other studies examining the effects of manipulating the 1/f amplitude spectrum (Olman et al. 2004; Isherwood et al. 2017). To further examine the representation of blur, here we reanalysed this dataset using multivariate pattern analysis and expanded the analysis to include additional visual areas (VO1, VO2, V3AB, LO, TO). A linear classifier trained to distinguish blurred vs. sharpened images yielded significant decoding in every visual area tested, suggesting that information about blur may be preserved across much of the visual hierarchy. The decoding may reflect larger-scale differences in the representation of spatial frequency information within regions (e.g. as a function of eccentricity) or a finer-scale columnar organization.
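The two methodological steps described above – tilting an image's amplitude-spectrum slope at constant RMS contrast, and cross-validated linear decoding of blurred vs. sharpened patterns – can be sketched as follows. This is a minimal NumPy illustration, not the authors' pipeline; all function and variable names (`tilt_spectral_slope`, `decode_blur`, the ridge penalty, the leave-one-run-out scheme) are assumptions chosen for the sketch.

```python
import numpy as np

def tilt_spectral_slope(image, delta_slope):
    """Blur (delta_slope < 0) or sharpen (delta_slope > 0) an image by
    multiplying its amplitude spectrum by f**delta_slope, then restore
    the original mean and RMS contrast. Illustrative sketch only."""
    mean = image.mean()
    img = image - mean
    rms = img.std()
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                      # leave the DC term untouched
    out = np.real(np.fft.ifft2(F * f ** delta_slope))
    out -= out.mean()
    out *= rms / out.std()             # hold RMS contrast constant
    return out + mean

def decode_blur(patterns, labels, runs):
    """Leave-one-run-out cross-validated decoding of blurred (-1) vs.
    sharpened (+1) voxel patterns with a ridge-regularised least-squares
    linear classifier; returns mean test accuracy. Stand-in for the
    abstract's linear classifier, not the actual analysis."""
    accuracies = []
    for run in np.unique(runs):
        train, test = runs != run, runs == run
        X, y = patterns[train], labels[train]
        # w = (X'X + aI)^-1 X'y  (small ridge penalty for stability)
        w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
        accuracies.append(np.mean(np.sign(patterns[test] @ w) == labels[test]))
    return float(np.mean(accuracies))
```

A classifier decoding clearly above the 50% chance level on held-out runs would correspond to the "significant decoding" reported for each visual area.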