Alain Chavaillaz, Gregory Zelinsky; Redundancy gains using real-world objects. Journal of Vision 2011;11(11):1332. https://doi.org/10.1167/11.11.1332.
It is well established that manual reaction times (MRTs) to the onset of a display depicting two simple objects are faster than those to a corresponding one-object display (e.g., Savazzi & Marzi, 2008), and that a similar benefit exists in visual search for feature targets defined by two dimensions rather than one (e.g., Krummenacher et al., 2001). We investigated this phenomenon, referred to as redundancy gain (Miller, 1982), in the context of visually complex real-world objects, using both MRT and eye movement measures. Experiment 1 asked whether redundancy gains can be found for object parts. Search displays consisted of eight partial or whole teddy bears, one of which differed from the others in its number of parts. Participants were instructed to press a button when fixating the odd target in the search display, where oddness was defined by either one or two parts differing from the distractors. Redundancy gains were observed both in MRTs and in search guidance (time to target fixation), but not in verification time (the difference between MRT and time to target fixation), suggesting that parts can be processed early by the visual system much like basic visual dimensions in a search task. Experiment 2 asked whether redundancy gains can be found across object categories. Participants made a speeded response to the onset of a display containing either one or two objects. Two-object displays consisted of either two instances of the same object, two different objects of the same category, or two objects from different categories. MRTs were generally faster for all three two-object display conditions than for one-object displays, but redundancy gains were unaffected by object category. This suggests that categorical information is not accumulated in parallel for the purpose of display onset detection. Redundancy gains can thus be found for real-world objects, bringing a new methodological tool to the study of object perception.
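The dependent measures described above have simple arithmetic definitions: the redundancy gain is the mean MRT (or fixation-time) difference between the one-attribute and two-attribute conditions, and verification time is the mean MRT minus the mean time to target fixation within a condition. The sketch below illustrates these computations on made-up per-trial values; all data, condition labels, and function names are hypothetical and are not taken from the study.

```python
# Illustrative sketch (hypothetical data): computing redundancy gain and
# verification time as defined in the abstract.
from statistics import mean

# Hypothetical trials: (condition, MRT in ms, time to target fixation in ms)
trials = [
    ("one_part", 820, 510),
    ("one_part", 790, 495),
    ("two_part", 740, 430),
    ("two_part", 725, 445),
]

def mean_of(condition, index):
    """Mean of one measure (index 1 = MRT, index 2 = fixation time)."""
    return mean(t[index] for t in trials if t[0] == condition)

# Redundancy gain: how much faster responses are when two parts
# (redundant signals) distinguish the target rather than one.
mrt_gain = mean_of("one_part", 1) - mean_of("two_part", 1)
guidance_gain = mean_of("one_part", 2) - mean_of("two_part", 2)

# Verification time per condition: MRT minus time to target fixation.
verification_one = mean_of("one_part", 1) - mean_of("one_part", 2)
verification_two = mean_of("two_part", 1) - mean_of("two_part", 2)

print(mrt_gain)           # gain in MRT (ms)
print(guidance_gain)      # gain in search guidance (ms)
print(verification_one - verification_two)  # gain in verification (ms)
```

On this toy data the gain appears in MRT and in guidance but is nearly absent in verification time, mirroring the pattern of results reported for Experiment 1.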