Abstract
The current study assessed the contributions and interactions of contextual, featural, and familiarity-based factors in the recognition of real-world, photographed objects. Participants sequentially resolved degraded photographs of household objects while attempting to identify them; the degradation threshold at which each object could be correctly identified was taken as the measure of recognition performance. Participants included ‘experts’, who were highly familiar with the environments in which the pictures were taken, and ‘non-experts’, who were unfamiliar with them. We also included three contextual conditions: 1) participants briefly viewed the contextual scene, as well as the target object’s location in the scene, before performing the recognition task; 2) participants viewed the contextual scene only, without position information; and 3) participants saw no visual contextual information. Across all conditions, we considered the impact of several contextual factors, including the consistency of the object within the context and whether the object was moveable or non-moveable within the scene. In addition, we considered factors pertaining to the objects themselves, including whether each object was a typical example of its category, the complexity of its shape, and the resolution of the original image of the object. Main findings include that experts performed better than non-experts, but only in the contextual conditions. Experts’ performance benefited from both contextual and positional information for all objects and was not affected by the consistency of the object within the scene. Non-experts’ performance benefited from consistency; furthermore, the facilitation of context for novices was modulated by both the consistency and moveability of the objects. Typicality affected non-expert performance only, while shape complexity affected both experts’ and non-experts’ performance. These results demonstrate that both experts and non-experts utilize context for visual recognition, with experts relying on detailed representations of familiar scenes and non-experts relying on schema-level representations.
Meeting abstract presented at VSS 2012