February 2022
Volume 22, Issue 3
Open Access
Optica Fall Vision Meeting Abstract  |   February 2022
Contributed Session III: Assessing the neural coding of image blur using multivariate pattern analysis
Author Affiliations
  • Zoey J Isherwood
    Department of Psychology, University of Nevada, Reno
  • Katherine EM Tregillus
    Department of Psychology, University of Minnesota
  • Michael A Webster
    Department of Psychology, University of Nevada, Reno
Journal of Vision February 2022, Vol. 22, 34. doi: https://doi.org/10.1167/jov.22.3.34
Zoey J Isherwood, Katherine EM Tregillus, Michael A Webster; Contributed Session III: Assessing the neural coding of image blur using multivariate pattern analysis. Journal of Vision 2022;22(3):34. doi: https://doi.org/10.1167/jov.22.3.34.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Blur is a fundamental perceptual attribute of images, but the way in which the visual system encodes this attribute remains poorly understood. Previously, we examined the neural correlates of blur by measuring BOLD responses to in-focus images and their blurred or sharpened counterparts, formed by varying the slope of the amplitude spectra but maintaining constant RMS contrast (Tregillus et al. 2014). In visual cortex (V1-V4), highest activation occurred for in-focus images compared to blurred or sharpened images – a finding which counters expectations from norm-based or predictive coding but is consistent with other studies examining the effects of manipulating the 1/f amplitude spectrum (Olman et al. 2004; Isherwood et al. 2017). To further examine the representation of blur, here we reanalysed this dataset using multivariate pattern analysis and also expanded the analysis to include additional visual areas (VO1, VO2, V3AB, LO, TO). A linear classifier trained to distinguish blurred vs. sharpened images provided significant decoding irrespective of visual area, suggesting that information about blur may be preserved across much of the visual hierarchy. The decoding may reflect larger scale differences in the representation of spatial frequency information within regions (e.g. as a function of eccentricity) or a finer scale columnar organization.
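As an illustration of the two techniques the abstract describes, the sketch below (not the authors' actual stimuli or analysis code) blurs or sharpens an image by tilting the slope of its Fourier amplitude spectrum while renormalizing to a fixed RMS contrast, and then decodes two stimulus classes from multivoxel response patterns. The slope offsets, the contrast target, and the choice of a minimal nearest-centroid linear decoder (the abstract does not name the classifier used) are all assumptions for demonstration.

```python
# Illustrative sketch only: slope offsets, RMS target, pattern sizes, and
# the nearest-centroid linear decoder are assumptions, not the study's code.
import numpy as np

def adjust_spectral_slope(img, delta, target_rms=0.2):
    """Blur (delta < 0) or sharpen (delta > 0) an image by tilting the
    slope of its amplitude spectrum, then fix RMS contrast."""
    f = np.fft.fft2(img - img.mean())
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.hypot(fy, fx)               # spatial frequency of each component
    radius[0, 0] = 1.0                      # leave the DC term unscaled
    out = np.real(np.fft.ifft2(f * radius ** delta))
    return out * (target_rms / out.std())   # constant RMS contrast across versions

def decode_blur_vs_sharp(patterns, labels):
    """Leave-one-sample-out nearest-centroid decoding of multivoxel
    patterns: a minimal linear classifier standing in for MVPA."""
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), bool)
        train[i] = False
        c0 = patterns[train & (labels == 0)].mean(axis=0)
        c1 = patterns[train & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(patterns[i] - c1)
                   < np.linalg.norm(patterns[i] - c0))
        correct += pred == labels[i]
    return correct / len(labels)

# Usage: make blurred and sharpened versions of a noise "image".
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
blurred = adjust_spectral_slope(img, -0.5)    # steeper spectrum
sharpened = adjust_spectral_slope(img, +0.5)  # shallower spectrum
```

Because both output images are rescaled to the same RMS contrast, any differential response to them isolates the change in spectral slope rather than overall contrast, which is the logic of the original manipulation.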

Footnotes
 Funding: Supported by P20-GM-103650, EY-10834, and F32 EY031178-01A1.