Abstract
The early visual system is believed to use a sparse distributed code to represent the scene information that enables intelligent behavior. Interestingly, the early visual code is not a static solution, but evolves over time to prioritize different scene regions (Hansen et al., 2021). Further, the code is not deterministic: the same stimulus can yield different representations depending on the goals of the observer (Schyns & Gosselin, 2003). However, we do not know how behavioral goals shape the spatiotemporal evolution of sparse distributed coding. We developed a brain-supervised sparse coding network to assess the sparsification of the neural code at every scene location over time. We recorded 128-channel EEG while participants viewed repeated presentations of 80 scenes and made cued judgments about either (1) their confidence that a given object was present in a scene, or (2) the likelihood that they would perform a given action afforded by a scene. We then used dynamic electrode-to-image (DETI) mapping (Hansen et al., 2021) to guide the selection of scene regions used to train a sparse coding network that was augmented with visual evoked potentials (VEPs) to build a large set of visual encoders. The stimuli were then reconstructed by those encoders at different time points and sparsified according to the participants' VEP variance. The results revealed that identical scenes undergo different amounts of sparsification depending on the task as early as 70 ms, with affordance judgments yielding greater sparsification. Interestingly, while both tasks produced sparse codes for a third of the scenes by 170 ms, the affordance task required de-sparsification of some of the initially sparsely coded scenes. These results suggest that sparse distributed codes are not only shaped early by behavioral goals, but can actually be undone over the spatiotemporal evolution of the visual signal according to the goals of the observer.