Abstract
One challenge in recognizing 3D objects is the variability in the visual information they present to our visual system. To identify objects in the future, do we store information that generalizes over changes in viewing perspective (view-invariant), or do we instead encode visual information specific to a particular viewing experience (view-specific)? Experimenters typically test this question with a single memory task (e.g., old-new identification), but different memory tasks have been shown to produce distinct patterns of performance with the same perceptual input. Therefore, the process-dissociation procedure was used to obtain separate estimates of recollection (specific memory for an event) and familiarity (heightened activation of an encountered item in the absence of recollection) for rotated objects in a recognition task. Participants studied sets of familiar objects, with each object in each set presented individually. In a later test, participants were shown old and new objects; for old objects, they also had to try to remember the set in which the object had been presented. This method yields independent estimates of recollection (remembering the set in which the object was presented) and familiarity (knowing the object was presented but not the set it came from). These measures showed that recollection was better when the test view matched the studied view (or one with a very similar visual appearance), but that familiarity was viewpoint-invariant. These results suggest that the visual system encodes information about objects that can be used in different ways by different memory systems, depending on the specific requirements of a given task. Further, they suggest that the "viewpoint debate" between view-specific and view-invariant models is intractable, because both patterns of data are found with the same set of objects across different memory measures.
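As an illustration of how separate estimates might be derived from such a design, the following is a minimal sketch in Python. It assumes the standard independence-based scheme often used with source-memory variants of the process-dissociation procedure (the abstract does not give the exact formulas): recollection is estimated from correct set attributions corrected for guessing, and familiarity from recognition without set knowledge, scaled by 1 - R. The function name and the example proportions are hypothetical.

```python
def estimate_r_f(p_source_correct, p_old_no_source, n_sources=2):
    """Hedged sketch of independence-based process-dissociation estimates.

    p_source_correct : proportion of old objects whose study set was
                       correctly reported (hypothetical value).
    p_old_no_source  : proportion of old objects recognized as old but
                       without correct set knowledge (hypothetical value).
    n_sources        : number of study sets a guesser could pick from.
    """
    guess = 1.0 / n_sources
    # Recollection: correct set reports corrected for chance guessing.
    r = (p_source_correct - guess) / (1.0 - guess)
    # Familiarity: recognition without recollection, under the
    # independence assumption F = P(old, no source) / (1 - R).
    f = p_old_no_source / (1.0 - r)
    return r, f

# Hypothetical response proportions, for illustration only.
r, f = estimate_r_f(p_source_correct=0.70, p_old_no_source=0.24)
```

In this framing, a viewpoint effect on recollection but not familiarity would appear as R varying with study-test rotation while F stays roughly constant.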
Meeting abstract presented at VSS 2012