Abstract
Purpose: Stankiewicz et al. (1998) suggested that attention is necessary for encoding view-invariant object representations. We examined this hypothesis using realistic depth-rotated faces and non-face images.

Methods: A sequential matching paradigm was employed in which a face or a chair was presented at learning along with characters in its surround. The learning images were presented in frontal or 3/4 view. In the full-attention condition, observers followed the characters with their eyes while attending to the image. In the divided-attention condition, observers had to count the number of characters that were digits. At testing, the target was presented along with two distractors of the same category. The target test image and distractors were either in the same view as at learning or in a different view. Forty subjects were tested on two blocks, one for each attention condition. Five trials for each testing view (same and different) and object category (face and chair) were presented in random order in each block.

Results: Recognition accuracy showed a significant three-way interaction among Attention, Stimulus Category, and Testing View. In the full-attention condition, recognition in a different view did not differ significantly from recognition in the same view for either faces or chairs. In the divided-attention condition, recognition of faces in a different view was significantly lower than recognition in the same view, whereas recognition of chairs was equivalent across same and different views.

Conclusions: We postulate that a face representation formed without attention may not contain the information necessary to produce a match with its rotated equivalent. In contrast, the information used to recognize chairs after a small rotation may be encoded and stored in memory irrespective of attention.

Acknowledgements: Supported by operating grants and graduate fellowships from CIHR (Canada) and NSERC (Canada).