Abstract
Much research in vision relies on the assumption that memory for absolute size is better than memory for relative size. View-based models (e.g., Poggio & Edelman, 1990) recognize objects using absolute size; these models suggest that we hold an exact size in memory for later object recognition and do not consider relative sizes explicitly. Structural description models, on the other hand, emphasize categorical rather than metric size relations (e.g., Hummel & Biederman, 1992). Our study investigates memory for absolute size versus relative size. If memory for relative size proves more precise than memory for absolute size, this result would be inconsistent with both classes of models. In our study, subjects judged the size difference between two shapes and the relative size difference between two pairs of shapes. We varied the ratio of size differences, the exposure duration of the study object or pair, and the delay (ISI) between study and test. We found that subjects' memory was more precise on relative size trials than on absolute size trials at all exposure durations, including the shortest. We also found that memory for relative size persists over a longer delay than memory for absolute size. These findings suggest that memory for relative size is more accurate and more robust than memory for absolute size. Because the absolute retinal size of an object changes with viewing distance whereas relative size does not, relative size carries more information.
Meeting abstract presented at VSS 2013