Abstract
Humans show remarkable fidelity in visual long-term memory. Much of the work on long-term memory has focused on processes associated with successful encoding and retrieval. More recent work on visual object recognition, however, has turned to the memorability of specific visual stimuli, yet such studies often fail to account for how high- and low-level features interact to promote distinct forms of memory. Here, we present a novel object database with an extensive array of visual and semantic features assessed for each image, and we investigate memory for these object images in two different memory paradigms. We first collected normative feature information on 1,000 object images, comprising living and nonliving items spanning 29 different categories. Semantic feature norms were collected and collated to describe complex feature statistics consistent with the conceptual structure account (CSA). Next, we conducted a memory study in which we presented these same images during encoding (picture target) on Day 1, followed by either a Lexical (lexical cue) or Visual (picture cue) memory test on Day 2. Our findings indicate that higher-level visual factors (assessed via deep neural networks, DNNs) and semantic factors (assessed via feature-based statistics) make independent contributions to object memory, and that the factors that predict object memory depend on the type of memory being tested. These findings help to provide a more complete picture of the factors that influence object memorability. Furthermore, the public repository created in this project provides visual and semantic feature information for each object, memorability of the object images in the two memory paradigms, display and creation of multidimensional scaling plots, and downloads of the feature and memory data. We hope that this object database will encourage users to interact with these various kinds of information and to select appropriate cutoff points for novel analyses at different levels.