Abstract
Whether complex real-world objects are represented in visual long-term memory as bound units or as sets of independent features is debated. We apply a signal-detection approach to this question. Four groups of observers (n = 100 per group) memorized the same set of 120 objects; three groups then performed 2-AFC recognition and the fourth performed 4-AFC recognition. For any given target, in one group the foil was a different exemplar (backpack A versus backpack B); in another group, the foil was the same exemplar in a changed state (open backpack A versus closed backpack A); in a third group, the foil was a different exemplar in a changed state (open backpack A versus closed backpack B). The fourth group performed 4-AFC with all three foil types. We calculated SDT discriminability (d’) for each target-foil combination and recovered a 2D signal-detection space for each target and its three foils. The d’ values for discriminations based on exemplar (d’_exemplar) or state (d’_state) alone were set as the centers of the target familiarity distributions on the corresponding feature dimensions. Discriminations based on both features were determined by the separability of the bivariate signal and noise distributions: d’_exemplar+state = f(d’_exemplar, d’_state, rho), where rho is a measure of the noise correlation between the dimensions. We found that the majority of discrimination spaces showed relative feature independence (rho values close to 0, median rho = 0.26), yet a fraction of spaces tended toward strong dependence (rho values close to 1). We also found that d’ estimates from the 2-AFC tasks yielded precise and specific predictions for 4-AFC performance (hits and false alarms to each individual foil), and that incorporating the dependence measure rho further increased the precision of these predictions. We conclude that the feature unity/separability of memory representations is not all-or-none but a continuous property that depends on the noise correlation in the underlying discrimination spaces.
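As a hedged illustration (the abstract leaves f unspecified), one standard instantiation of f assumes bivariate Gaussian familiarity distributions with unit variances and noise correlation rho, with the combined discriminability given by the Mahalanobis separation of the two distributions:

\[
d'_{\mathrm{exemplar+state}} \;=\; \sqrt{\frac{d'^{\,2}_{\mathrm{exemplar}} \;-\; 2\rho\, d'_{\mathrm{exemplar}}\, d'_{\mathrm{state}} \;+\; d'^{\,2}_{\mathrm{state}}}{1 - \rho^{2}}}
\]

Under this assumed form, rho = 0 reduces to the Euclidean combination sqrt(d'_exemplar^2 + d'_state^2), whereas with equal single-feature d' values the combined d' approaches the single-feature d' as rho approaches 1, i.e., a fully correlated second dimension adds no further discriminability.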