Abstract
Despite the complexity of scenes, human visual processing is rapid and accurate. A longstanding framework for explaining this feat posits that the brain creates efficient representations of visual inputs by capitalizing on statistical redundancies (Attneave, 1954). This framework makes the testable prediction that images that are more redundant (i.e., those carrying less information) will have a processing advantage over those that are less redundant. Because the information content of images is difficult to measure directly, this hypothesis has remained largely untested. Here, we reason that one need only know the relative amount of information that a scene contains, and that this can be estimated from the relative compressibility achieved by off-the-shelf algorithms such as PNG and JPEG-2000. Specifically, more compressible images typically contain more redundancy and thus less information. To test for processing differences between images, we computed the mutual information between images and their resulting visual evoked potentials using a state-space framework (Hansen et al., 2019). If early visual processing is information-limited, then highly compressible images should elicit neural signals with higher mutual information than less compressible images. We amassed a database of ~1000 photographs of common, everyday content in RAW image format. We compressed each image in PNG (lossless) and JPEG-2000 (lossy) formats and examined the file size differences between the original and compressed images. The correlation between PNG and JPEG-2000 compressibility was high (r=0.97). Observers (N=11) viewed 25 of these photographs, each presented 40 times in random order. We found a positive correlation between neural mutual information and image compressibility (mean r=0.34, 95% CI = 0.16-0.52), suggesting that more redundant images may have an early processing advantage, and that early visual processing may employ redundancy reduction.
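The compressibility measure described above can be illustrated with a minimal Python sketch. This is not the authors' pipeline: it uses zlib's DEFLATE (the same lossless codec underlying PNG) as a stand-in for the PNG/JPEG-2000 file-size comparison, and synthetic images in place of the RAW photographs. A highly redundant image (a repeated gradient) compresses far better than a high-entropy noise image, consistent with the assumption that compressibility tracks redundancy.

```python
import zlib
import numpy as np

def compressibility(img: np.ndarray) -> float:
    """Fraction of bytes removed by lossless (DEFLATE) compression.

    Higher values indicate more redundancy, i.e. less information.
    This is a proxy for the PNG file-size comparison in the abstract.
    """
    raw = img.tobytes()
    return 1.0 - len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(0)

# Redundant image: every row is the same 256-value gradient.
smooth = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))

# High-entropy image: independent uniform noise at each pixel.
noise = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# The redundant image compresses far better than the noise image.
print(round(compressibility(smooth), 2), round(compressibility(noise), 2))
```

Running the sketch shows the gradient image losing nearly all of its bytes under compression while the noise image is essentially incompressible, which is the contrast the abstract's compressibility index is meant to capture.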