September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Neural Correlates of Efficient Coding of Visual Scenes
Author Affiliations & Notes
  • Michelle Greene
    Bates College, Program in Neuroscience, Lewiston ME
  • Kathryn Leeke
    Bates College, Program in Neuroscience, Lewiston ME
  • Bruce Hansen
    Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton NY
  • David Field
    Cornell University, Department of Psychology, Ithaca, NY
  • Footnotes
    Acknowledgements  National Science Foundation grant (1736394) to MRG and BCH.
Journal of Vision September 2021, Vol. 21, 2856.

Despite the complexity of scenes, human visual processing is rapid and accurate. A longstanding framework for explaining this feat posits that the brain creates efficient representations of visual inputs by capitalizing on statistical redundancies (Attneave, 1954). This framework makes the testable prediction that images that are more redundant (i.e., those containing less information) will have a processing advantage over those that are less redundant. Because it is difficult to measure the information content of images, this hypothesis has remained largely untested. Here, we reason that one only needs to know the relative amount of information that a scene contains, and that this can be estimated by examining the relative compression efficiency of off-the-shelf algorithms such as JPEG and PNG: more compressible images typically have more redundancy and thus less information. To test for processing differences between images, we computed the mutual information between images and their resulting visual evoked potentials using a state-space framework (Hansen et al., 2019). If early visual processing is information-limited, then we predict that highly compressible images will elicit neural signals with higher mutual information compared to less compressible images. We amassed a database of ~1000 photographs of common, daily content in RAW image format. We compressed each image in PNG (lossless) and JPEG-2000 (lossy) formats and examined the file-size differences between original and compressed images. We found that the correlation between PNG and JPEG-2000 compressibility was high (r = 0.97). Observers (N = 11) viewed 25 of these photographs, each presented 40 times in random order. We found a positive correlation between the neural mutual information and image compressibility (mean r = 0.34, 95% CI = 0.16-0.52), suggesting that more redundant images may have an early processing advantage, and that early visual processing may employ redundancy reduction.
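The compression-based proxy for image information described above can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: it uses zlib's DEFLATE (the same lossless family underlying PNG) on raw pixel bytes, whereas the study compared PNG and JPEG-2000 file sizes of RAW photographs. The `compressibility` function and the toy images are assumptions for illustration.

```python
import zlib
import numpy as np

def compressibility(pixels: np.ndarray) -> float:
    """Ratio of losslessly compressed size to raw size.

    Lower values indicate more statistical redundancy and hence
    less information, per the efficient-coding logic in the abstract.
    (Illustrative proxy; the study used PNG and JPEG-2000 encoders.)
    """
    raw = pixels.astype(np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(64, 64))   # high-entropy image: little redundancy
flat = np.full((64, 64), 128)                 # maximally redundant image

# The redundant image compresses far better than the noise image,
# so under this proxy it carries less information.
assert compressibility(flat) < compressibility(noise)
```

Under the study's hypothesis, images toward the `flat` end of this compressibility axis would be predicted to evoke neural responses with higher mutual information than images toward the `noise` end.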

