Abstract
When human observers search among shaded visual stimuli, they find targets defined by vertical shading gradients much faster than those defined by horizontal gradients. Here, we demonstrate that this asymmetry persists in an inattentional blindness (IAB) paradigm. In our study, subjects viewed naturalistic simulations of moving balls that were vertically or horizontally shaded. A portion of the trials contained an unexpected target with a reversed shading gradient, introduced into the simulation at a random time. During each trial, subjects tracked a ball and counted the number of midline crossings it made. They were also instructed to indicate when they noticed an unexpected target. Almost twice as many vertically shaded targets were detected as horizontally shaded ones, and this difference could not be attributed to differences in target visibility, false target detection rate, or average ball counting accuracy. To gain insight into the mechanisms underlying these results, we propose a biologically inspired, computational IAB model based on predictive coding. The model is trained in an unsupervised manner to predict upcoming video frames, minimizing the prediction errors it accumulates while analyzing the structure of naturalistic video sequences. The model is then tested on the same videos used in the human psychophysics experiments. Remarkably, it exhibits greater variance in its prediction errors when the unexpected target is horizontally shaded. Together, our findings point to the emergence of IAB asymmetry through top-down expectation biases derived from the visual stimuli presented to both humans and the model.
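
To make the modeling pipeline concrete, the sketch below shows one way the described procedure could be set up: a small convolutional network trained without supervision to predict the next video frame (its target is simply the following frame of the same sequence), followed by the per-frame prediction-error variance that the abstract compares across shading conditions. This is a minimal illustration, not the authors' implementation; the network architecture and all names (NextFramePredictor, train, error_variance, the synthetic video tensor) are hypothetical stand-ins.

    # Minimal sketch (PyTorch), assuming grayscale videos of shape (N, T, 1, H, W).
    import torch
    import torch.nn as nn

    class NextFramePredictor(nn.Module):
        """Predicts frame t+1 from frame t; a stand-in for a predictive-coding model."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, frames):
            return self.net(frames)

    def train(model, videos, epochs=10, lr=1e-3):
        # Unsupervised in the sense used in the abstract: the training
        # signal is the prediction error against the next frame itself.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for vid in videos:                        # vid: (T, 1, H, W)
                pred = model(vid[:-1])                # predict frames 1..T-1
                loss = ((pred - vid[1:]) ** 2).mean() # mean squared prediction error
                opt.zero_grad()
                loss.backward()
                opt.step()

    @torch.no_grad()
    def error_variance(model, vid):
        # Per-frame mean squared prediction error; its variance over time
        # is the statistic compared across shading conditions.
        err = ((model(vid[:-1]) - vid[1:]) ** 2).mean(dim=(1, 2, 3))
        return err.var().item()

    # Usage with synthetic stand-in data:
    videos = torch.rand(4, 20, 1, 64, 64)
    model = NextFramePredictor()
    train(model, videos, epochs=2)
    print(error_variance(model, videos[0]))

Under this reading, a larger error variance for one shading condition would mean a noisier, less reliable mismatch signal for the unexpected target, consistent with the asymmetry reported for horizontally shaded targets.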