Anne Gilman, Colin Ware; Location and meaningful visual detail influence crossmodal working memory capacity. Journal of Vision 2009;9(8):601. doi: https://doi.org/10.1167/9.8.601.
Complex objects have been found to occupy more space in visual working memory—as measured by lowered change-detection accuracy with such stimuli—than simple colored shapes (Treisman, 2006; Xu, 2002). While this result is consistent with verbal working memory findings showing reduced apparent capacity for longer words (Baddeley, 2007), other research has demonstrated that features contributing to object recognizability can enhance visual working memory capacity (Olsson & Poom, 2005; Alvarez & Cavanagh, 2004). The memory load of complex objects was further examined in a sequence of experiments adapting classic visual change-detection procedures (Vogel et al., 2001) to measure crossmodal (auditory and visual) working memory capacity. The adapted method involves rapid sequential presentation of image–sound pairs, with a test pair appearing after a short delay (800–1000 ms). Images are placed equidistant from one another in the initial array, approximately 3.5° from central fixation. The images chosen depicted inanimate objects (e.g., apple, ball, book, glass); the associated sounds consisted of 400 ms recordings of animal sounds or pure tones. While in one experiment (N = 12) meaningful images were better cues for crossmodal associations than colored balls, this result failed to replicate in an identical later study or in a related study: image type was not a significant source of variation in either case. All three studies made location as well as visual image features available as cues for the audiovisual associations, presenting each test pair in its original location. In a follow-up experiment (N = 13), test items were displayed at the center of the screen, removing location as a possible cue. Full-color photographs yielded greater crossmodal change-detection accuracy than grayscale drawings (from Rossion & Pourtois, 2004; M = 57.7% for photographs vs. M = 49.2% for grayscale drawings, p = .01).
Overall, meaningful visual detail improved short-term recall for crossmodal associations when image location was not available as a parallel cue.