Abstract
The properties of any given image of a real-world object (e.g., a phone) are determined both by the particular exemplar of the category it represents, and by which state or pose the exemplar is in. For example, there are many different kinds of phones (exemplar-level information) and any given phone can be on or off the hook (state-level information).
Here, we use an object memory paradigm to examine the separability of these object properties in memory. If exemplar- and state-level properties can be encoded, or can decay, independently of one another, then we can infer that distinct high-level features underlie their representations.
Twelve observers were shown 120 briefly presented objects and judged the physical size of each. Following this task, we gave observers a surprise memory test. To probe which properties of each object were incidentally encoded, a four-alternative forced-choice test display was presented for each object, consisting of two exemplars (one familiar, one novel), each shown in two states (one familiar, one novel). By modeling these data, we examined how often people remembered only exemplar-level information or only state-level information.
Using both goodness-of-fit measures and Bayesian model selection, we examined whether the data were better fit by a model in which these kinds of information were represented independently or together.
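To illustrate the general logic of this kind of model comparison (the models, parameterization, and counts below are a hypothetical sketch, not the specific models or data from this study), the independence hypothesis can be cast as a two-parameter multinomial model over the four response categories (exemplar and state both correct, exemplar only, state only, neither), and compared via BIC against a saturated model in which the cell probabilities are free:

```python
import math

def log_lik(counts, probs):
    # Multinomial log-likelihood up to an additive constant
    # (the multinomial coefficient cancels when comparing models).
    return sum(n * math.log(p) for n, p in zip(counts, probs) if n > 0)

def fit_models(counts):
    # counts: [both correct, exemplar only, state only, neither]
    n11, n10, n01, n00 = counts
    N = sum(counts)

    # Independence model (2 free parameters): exemplar and state
    # memory succeed independently with probabilities pe and ps.
    pe = (n11 + n10) / N
    ps = (n11 + n01) / N
    probs_ind = [pe * ps, pe * (1 - ps), (1 - pe) * ps, (1 - pe) * (1 - ps)]
    ll_ind = log_lik(counts, probs_ind)

    # Saturated "bound" model (3 free parameters): cell probabilities
    # are unconstrained, so exemplar and state memory may covary.
    probs_sat = [n / N for n in counts]
    ll_sat = log_lik(counts, probs_sat)

    # BIC = k * ln(N) - 2 * log-likelihood; lower is better.
    bic_ind = 2 * math.log(N) - 2 * ll_ind
    bic_sat = 3 * math.log(N) - 2 * ll_sat
    return bic_ind, bic_sat
```

With counts in which the two kinds of success co-occur more often than independence predicts (e.g., `[60, 10, 10, 20]`), the bound model attains the lower BIC despite its extra parameter, mirroring the direction of the result reported above.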
Both methods support a model in which these two types of object information are bound together, suggesting that memory for both exemplar- and state-level object properties is supported by the same underlying high-level visual features.
Thus, as both visual long-term memory and object recognition depend on the same high-level object representation, memory errors can usefully inform models of object recognition by elucidating this underlying representation.
Funded by an NSF Graduate Fellowship to T.F.B. and an NDSEG Fellowship to T.K.