Abstract
Performance in visuo-motor tasks suggests that use of visual working memory is minimal and specific to the immediate needs of the task (Triesch et al., 2003). We sought to determine whether visual memory also includes more global information about scene context. Subjects performed a sorting task in virtual reality with haptic feedback. From an array of five bricks on a tabletop, subjects picked up and sorted red and blue bricks on the basis of width, height, and/or texture. Color was always irrelevant to the immediate micro-task of sorting. On about 10% of trials, a change was made to one of the features of the brick being held. In one condition (consistent), the brick in hand changed from red to blue or vice versa. In another condition (novel), the brick changed from red or blue to yellow or green, colors that were not present in the scene. Subjects noticed changes between consistent colors (red and blue) less than half the time (43%), despite fixating the brick 1144 ms before and 727 ms after the change. However, when a novel color (yellow or green) replaced a color consistent with the experimental context (red or blue), changes were detected nearly 100% of the time. This was true even though the remaining bricks were typically outside the field of view at the time of the change. Thus, the changes between colors consistent with scene context were not necessarily missed because of a failure to encode brick color or a lack of memory for it. Instead, long-term memory of scene properties may disguise changes between features consistent with the familiar context. Observers therefore represent global scene properties in addition to the immediately task-relevant information. These findings are consistent with predictive coding models of object recognition (Rao & Ballard, 1999).
Supported by NIH/PHS research grants EY07125, P41 RR09283, and EY05729.