Abstract
How are perceptual and cognitive resources allocated when we are faced with realistic tasks involving complicated objects? Dual-task experiments have attempted to address such questions by asking subjects to perform a demanding task while holding one or more objects in memory. However, since the memorized objects are unrelated to the primary task, such approaches do not reveal the resource requirements of that task itself. In the current study, a more natural search task was devised in which perceptual and cognitive complexity was varied and search performance was used to assess memory load.
Subjects had to find, among 9 hidden objects, the 3 that belonged to a common category. Category definitions were based on features (e.g. color, shape, and texture). The 5 levels of category complexity ranged from simple (e.g. objects must share one feature) to complex (e.g. objects must share two features and differ on one feature). During search, objects were revealed for 1-second intervals by a mouse click, and revisits were permitted. Stimuli on each trial were chosen randomly, with the constraint that the performance of an ideal searcher (one with no memory loss) would be the same across levels of category complexity.
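To make the design concrete, the sketch below illustrates the two ends of the complexity manipulation and the ideal-searcher baseline used to equate trials. It is a minimal illustration, not the authors' stimulus code: the feature names, feature values, rule encodings, and function names are all hypothetical assumptions.

```python
# Illustrative sketch (assumed feature set and rule encodings, not the
# authors' actual implementation) of the category rules and the
# perfect-memory "ideal searcher" baseline.
import random

FEATURES = ("color", "shape", "texture")
VALUES = {"color": ["red", "green", "blue"],
          "shape": ["circle", "square", "star"],
          "texture": ["dots", "stripes", "solid"]}

def share_one(a, b, feature="color"):
    """Simplest category rule: objects must share one feature."""
    return a[feature] == b[feature]

def share_two_differ_one(a, b, shared=("color", "shape"), differ="texture"):
    """Most complex rule: share two features and differ on one."""
    return all(a[f] == b[f] for f in shared) and a[differ] != b[differ]

def ideal_search(n_objects, targets):
    """Perfect-memory searcher: reveals objects in random order, never
    revisits, and stops once all 3 targets have been seen. Its click
    count depends only on where the targets sit among the locations,
    not on the category rule -- the property the random stimulus
    selection was constrained to equate across complexity levels."""
    order = random.sample(range(n_objects), n_objects)
    found, clicks = 0, 0
    for i in order:
        clicks += 1
        if i in targets:
            found += 1
            if found == 3:
                break
    return clicks

if __name__ == "__main__":
    targets = set(random.sample(range(9), 3))  # 3 targets among 9
    trials = 10_000
    mean_clicks = sum(ideal_search(9, targets) for _ in range(trials)) / trials
    print(f"ideal searcher: ~{mean_clicks:.2f} clicks to find all 3 targets")
```

Under this idealization, the expected number of clicks for 3 targets among 9 locations is 7.5 regardless of the rule, so any revisits by human subjects can be attributed to memory load rather than to differences in the search problem itself.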
The number of revisits to previously viewed objects increased linearly with category complexity. Category complexity, not the number of features defining the category, determined performance. These results show that category complexity interacts with memory capacity: the more complex the category definition, the less memory is available to store the contents of the visual array. The findings argue against strictly modular accounts of resource allocation during natural task performance, and in favor of a unitary resource pool that must be managed by shifts in overt or covert attention.
Supported by NSF DGE 0549115 (Rutgers IGERT in Perceptual Science).