Abstract
For humans, it is crucial to store facial information efficiently in visual working memory (VWM). Here we use psychophysics and a new variant of classification images (CIs) to study 1) which facial features are critical in VWM storage and 2) how these representations change in the course of forgetting. We used a same-different task, in which subjects first memorized a face (the memory stimulus), followed by a blank screen lasting either 500 or 4000 milliseconds (the retention time). A morphed face (the retention stimulus) was then presented, and subjects responded whether the two faces were the same or different. To create the face stimuli, we independently morphed 12 facial features (e.g., eyes, nose, mouth) towards the mean face, an average morph over all face stimuli. Retention-stimulus morphs were generated by adding random jitter to each facial feature towards the mean face, making the features locally less characteristic. In “same” trials, low-variance jitter was added to the retention stimulus; in “different” trials, the jitter was sampled from a high-variance distribution, making the retention stimulus appear, on average, more different from the memory stimulus. We then estimated decision weights for each facial feature with CIs, using a regression model in which the difference in the randomized feature jitter between the two stimuli predicted subjects’ responses. These CI weights indicate how strongly information in each facial feature contributes to memory-based decisions, that is, how much information subjects extracted from each feature in the memory task. The CIs revealed large weights for the eyes and mouth at both retention times. However, the CI weights were uniformly smaller at the longer retention time: forgetting does not seem to change the set of facial features stored in memory representations. Instead, forgetting may cause uniform decay across all features.
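
The trial-generation logic can be illustrated with a minimal sketch. The feature count (12) and the same/different variance manipulation come from the abstract; the specific variance values, the half-normal jitter distribution, the [0, 1] morph coordinate convention, and all function names are illustrative assumptions, not the authors' actual parameters.

```python
# Sketch of retention-stimulus generation: jitter each facial feature
# towards the mean face, with low variance on "same" trials and high
# variance on "different" trials. Parameter values are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 12      # independently morphable facial features
SIGMA_SAME = 0.05    # low-variance jitter for "same" trials (assumed)
SIGMA_DIFF = 0.25    # high-variance jitter for "different" trials (assumed)

def make_trial(memory_morph, trial_type):
    """Return (memory, retention) morph coordinates for one trial.

    Each coordinate is one feature's morph level, where 0 = the
    original face and 1 = the mean face (assumed convention).
    """
    sigma = SIGMA_SAME if trial_type == "same" else SIGMA_DIFF
    # Half-normal jitter: always moves features towards the mean face.
    jitter = np.abs(rng.normal(0.0, sigma, size=N_FEATURES))
    retention_morph = np.clip(memory_morph + jitter, 0.0, 1.0)
    return memory_morph, retention_morph

memory = np.zeros(N_FEATURES)                 # unmorphed memory stimulus
_, retention = make_trial(memory, "different")
```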
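
The CI estimation step admits a similar sketch: per-feature jitter differences between the two stimuli serve as predictors of the binary response, and the fitted coefficients are the feature weights. The choice of logistic regression via scikit-learn and the simulated observer (heavy weights on eyes and mouth, mirroring the reported result) are assumptions for demonstration only.

```python
# Sketch of CI weight estimation: regress "same"/"different" responses
# on the per-feature jitter differences between memory and retention
# stimuli. The simulated observer and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_TRIALS, N_FEATURES = 2000, 12

# Per-trial, per-feature jitter differences between the two stimuli.
delta = rng.normal(0.0, 0.15, size=(N_TRIALS, N_FEATURES))

# Simulated observer relying mostly on features 0 ("eyes") and
# 6 ("mouth"); indices and weights are hypothetical.
true_weights = np.zeros(N_FEATURES)
true_weights[[0, 6]] = [3.0, 2.0]
p_different = 1 / (1 + np.exp(-(np.abs(delta) @ true_weights - 1.0)))
responses = rng.random(N_TRIALS) < p_different  # True = "different"

# CI weights: one regression coefficient per facial feature.
model = LogisticRegression().fit(np.abs(delta), responses)
ci_weights = model.coef_.ravel()
print(ci_weights.round(2))  # large weights recovered for features 0 and 6
```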