Abstract
Growing evidence suggests that visual working memory can store items as perceptual groups rather than as independent units (Brady, Konkle, & Alvarez, 2011). Can perceptual grouping explain the particularly high capacity estimates observed for certain types of stimuli? Building on our previous finding that working memory capacity for orientation is greatly enhanced for line stimuli compared with Gabor gratings (Park et al., VSS 2015), here we investigated whether working memory for line orientation can exploit Gestalt rules of organization to make more efficient use of its limited capacity. We hypothesized that multiple line orientations are stored more efficiently when they can be organized into perceptual groups according to the rules of "similarity" and "good continuation". If so, working memory performance should vary systematically with the spatial relations among the items in any given display. We randomly generated 96 displays, each containing six oriented lines at various locations, and presented the same set of displays to 700+ observers in an online experiment. On each trial, observers reported from memory the orientation of a randomly selected item. Pooling the responses from all observers (110+ trials/item), we observed marked differences in average error magnitude across displays (25.4°–39.2°) and items (19.3°–84.4°), and these differences were highly consistent across observers (random split-half correlation r = .77, p < .0001). We characterized the frequency of orientation clustering in each display, as well as the degree of collinearity among pairs of orientations based on the smoothness of their implied path. Entering these factors into a multiple regression model allowed us to predict working memory performance for specific displays (R = 0.62) and items (R = 0.40). Our findings demonstrate that rich spatial structure in arrays of oriented lines allows for highly efficient storage of information in visual working memory.
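To make the analysis pipeline concrete, the following Python sketch shows how pooled per-item errors, the random split-half reliability, and the two-predictor regression could be computed. This is not the study's code: the 180°-periodic error measure, the simulated responses, and the `clustering`/`collinearity` predictors are placeholder assumptions standing in for the measures described above.

```python
# Minimal sketch (not the authors' code) of the two analyses described in the
# abstract: (1) random split-half reliability of per-item error, and
# (2) a multiple regression predicting mean error from display-structure
# factors. The circular-error convention, the predictors, and all variable
# names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def circular_error_deg(reported, true, period=180.0):
    """Smallest angular difference for line orientation (180 deg periodic)."""
    d = np.abs(reported - true) % period
    return np.minimum(d, period - d)

def per_item_mean(ids, values, n):
    """Mean of `values` grouped by integer item ids 0..n-1."""
    return (np.bincount(ids, weights=values, minlength=n)
            / np.bincount(ids, minlength=n))

# Toy stand-in for the pooled dataset: 96 displays x 6 items, ~110 trials/item.
n_items, trials_per_item = 96 * 6, 110
item_id = np.repeat(np.arange(n_items), trials_per_item)
true_ori = rng.uniform(0, 180, n_items)[item_id]
reported = (true_ori + rng.normal(0, 30, item_id.size)) % 180
err = circular_error_deg(reported, true_ori)

# (1) Split trials randomly into two halves and correlate the per-item mean
# errors across halves (the abstract reports r = .77 for the real data).
half = rng.random(err.size) < 0.5
r = np.corrcoef(per_item_mean(item_id[half], err[half], n_items),
                per_item_mean(item_id[~half], err[~half], n_items))[0, 1]

# (2) Regress per-item mean error on structure scores. Here `clustering` and
# `collinearity` are random placeholders; in the study they would be computed
# from each display's geometry.
clustering = rng.random(n_items)
collinearity = rng.random(n_items)
X = np.column_stack([np.ones(n_items), clustering, collinearity])
y = per_item_mean(item_id, err, n_items)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
multiple_R = np.corrcoef(X @ beta, y)[0, 1]  # correlation of fit vs. observed

print(f"split-half r = {r:.2f}, multiple R = {multiple_R:.2f}")
```

With the random placeholder predictors the multiple R will hover near zero; the point of the sketch is only the shape of the computation, in which structure scores derived from each display's geometry serve as regressors for memory error.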
Meeting abstract presented at VSS 2017