Abstract
In everyday visual perception, we often need to retain multiple visual objects together in visual working memory (VWM). Yet recent fMRI decoding studies of VWM have predominantly focused on the retention of a single object in human occipitotemporal cortex (OTC) and posterior parietal cortex (PPC). How are multiple objects represented together in VWM in these brain regions? Are they represented in an orthogonal and thus independent manner, or are they coded interactively? To address this, we asked 12 human participants to retain two target objects in VWM. We trained a linear classifier to discriminate the fMRI response patterns evoked by a pair of target objects A and B when each was retained with object C (i.e., decoding AC vs. BC) and tested the classifier's performance for the same object pair either in the same condition (within-decoding: AC vs. BC) or when each was retained with object D (cross-decoding: AD vs. BD). Across OTC and PPC, we found no drop in cross-decoding relative to within-decoding during the VWM delay, indicating that the two objects in VWM are represented in an orthogonal manner. Such a representational scheme makes VWM representations independent, effectively preventing interference between the different target objects during retention. Interestingly, during VWM encoding, a cross-decoding drop was observed in OTC (but not in PPC), indicating that in this region an object's representation is modulated by the identity of the other object during encoding. This modulation, however, appears to dissipate over the course of VWM retention, likely through feedback from brain regions such as PPC. Together, these results reveal independence in target object representations in VWM in human OTC and PPC, and the emergence of such representations from perception to VWM.
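The within- vs. cross-decoding logic can be illustrated with a minimal simulation. The sketch below (an assumption for illustration, not the study's analysis pipeline) generates hypothetical voxel patterns in which the two retained objects combine additively, i.e., orthogonally. A linear (nearest-centroid) classifier trained on AC vs. BC then generalizes to AD vs. BD, which is the signature of orthogonal coding; an interactive code would instead produce a cross-decoding drop.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials, noise = 50, 40, 0.5

# Hypothetical voxel patterns for objects A, B, C, D (assumption:
# each object has its own pattern, and a two-object memory display
# evokes the additive, orthogonal combination of the two patterns).
pA, pB, pC, pD = rng.standard_normal((4, n_vox))

def trials(p1, p2, n):
    # Simulated trial patterns for retaining two objects: sum of the
    # two object patterns plus independent Gaussian measurement noise.
    return p1 + p2 + noise * rng.standard_normal((n, n_vox))

# Train a linear nearest-centroid classifier on AC vs. BC.
train_AC = trials(pA, pC, n_trials)
train_BC = trials(pB, pC, n_trials)
w = train_AC.mean(0) - train_BC.mean(0)               # discriminant direction
b = -w @ (train_AC.mean(0) + train_BC.mean(0)) / 2    # midpoint bias

def accuracy(pos, neg):
    # Fraction of trials falling on the correct side of the boundary.
    scores = np.r_[pos @ w + b, neg @ w + b]
    labels = np.r_[np.ones(len(pos)), -np.ones(len(neg))]
    return float(np.mean(np.sign(scores) == labels))

# Within-decoding: test on new AC vs. BC trials.
within = accuracy(trials(pA, pC, n_trials), trials(pB, pC, n_trials))
# Cross-decoding: test on AD vs. BD trials (paired object changed).
cross = accuracy(trials(pA, pD, n_trials), trials(pB, pD, n_trials))
print(f"within-decoding: {within:.2f}, cross-decoding: {cross:.2f}")
```

Under this additive generative assumption, the C (or D) component contributes equally to both classes and cancels out of the discriminant, so cross-decoding matches within-decoding; modeling an encoding-stage interaction (e.g., making A's pattern depend on its partner) would reproduce the cross-decoding drop reported in OTC.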