Abstract
The ability to stably maintain visual information over brief delays is central to many cognitive tasks. A potential neural mechanism for achieving visual working memory stability is to maintain multiple concurrent representations across different levels of abstraction and cortical loci. Recent work has shown “sensory-like” mnemonic representations in early visual cortex, while the same mnemonic information is represented in a transformed format in the intraparietal sulcus. As an explicit test of mnemonic code transformations along the visual hierarchy, we quantitatively modeled the progression from veridical to categorical orientation representations via a reanalysis of an existing fMRI dataset. Six participants performed both a visual perception task (rare target detection) and a visual working memory task (delayed estimation). fMRI activation patterns in different retinotopic regions of interest were sorted into bins based on the orientation shown or remembered. For each task and retinotopic area, the representational similarity between activation patterns across orientation bins was quantified using Euclidean distances. We compared the resulting confusion matrices with two explicit models: the veridical model assumes that each orientation is most similar to adjacent orientations and increasingly dissimilar to more distant orientations; the categorical model assumes that orientations are coded in quadrants relative to the cardinal axes, i.e., as falling between either “twelve-to-three” or “three-to-six” o’clock. For the perceptual task, the veridical model explained the data well in all retinotopic areas, while the categorical model did not. While the veridical model also explained the data well in the working memory task, the categorical model gradually gained explanatory strength in increasingly anterior retinotopically defined areas. These findings suggest that once visual representations are no longer tethered to sensory inputs, there is a gradual progression from veridical to more categorical mnemonic formats along the visual hierarchy.
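
To make the model comparison concrete, the following is a minimal Python sketch (not the authors’ analysis code) of how the two model dissimilarity matrices could be constructed and compared against an empirical distance matrix. The number of orientation bins, the Spearman rank correlation used for model comparison, and all function and variable names are illustrative assumptions rather than details taken from the study.

```python
import numpy as np
from scipy.stats import spearmanr

N_BINS = 12                                      # hypothetical number of orientation bins
centers = np.arange(N_BINS) * (180.0 / N_BINS)   # bin centers in degrees (orientation space: 0-180)

# Veridical model: dissimilarity grows with circular distance in orientation space,
# so each orientation is most similar to adjacent orientations.
diff = np.abs(centers[:, None] - centers[None, :])
veridical_rdm = np.minimum(diff, 180.0 - diff)   # circular distance with period 180 deg

# Categorical model: orientations are grouped into quadrants relative to the cardinal
# axes ("twelve-to-three" = 0-90 deg, "three-to-six" = 90-180 deg); bins in the same
# quadrant are maximally similar, bins in different quadrants maximally dissimilar.
# (Assigning a bin centered exactly on a cardinal axis to one side is an arbitrary choice.)
quadrant = (centers // 90).astype(int)
categorical_rdm = (quadrant[:, None] != quadrant[None, :]).astype(float)

def model_fit(empirical_rdm, model_rdm):
    """Rank-correlate the off-diagonal entries of an empirical RDM with a model RDM."""
    iu = np.triu_indices(N_BINS, k=1)
    rho, _ = spearmanr(empirical_rdm[iu], model_rdm[iu])
    return rho

# Example with a synthetic empirical matrix (stand-in for the Euclidean distances
# between mean fMRI activation patterns per orientation bin):
rng = np.random.default_rng(0)
empirical_rdm = veridical_rdm + rng.normal(0, 5, (N_BINS, N_BINS))
empirical_rdm = (empirical_rdm + empirical_rdm.T) / 2   # symmetrize
print("veridical fit:   %.2f" % model_fit(empirical_rdm, veridical_rdm))
print("categorical fit: %.2f" % model_fit(empirical_rdm, categorical_rdm))
```

Under this sketch, a region whose empirical distances track circular orientation distance yields a high veridical fit, whereas a region whose patterns cluster by quadrant yields a high categorical fit; evaluating both fits per task and retinotopic area would reproduce the kind of comparison the abstract describes.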