Abstract
In numerous studies, it is well documented that the visual system exploits regularities across multiple objects to perceive and store large chunks of information more efficiently, thereby mitigating the severe limits of its processing bottleneck. In information-theoretic terms, this can be called compression. It has been shown that visual short-term memory (VSTM) uses regularities in object features to compress the data (Brady, Konkle, & Alvarez, 2009). It is also likely that VSTM can store the features of an object bound together, although at some binding cost (Fougnie, Asplund, & Marois, 2010; Luck & Vogel, 1997). We tested how compression is carried out for separable features bound in objects. In our experiments, observers memorized the color and orientation of triangles and then, after a 1-second delay following the offset of the triangles, recalled either the color or the orientation of a probed triangle. There were five conditions: (1) three triangles with all features different, (2) three triangles with different colors and identical orientations, (3) three triangles with different orientations and identical colors, (4) three triangles with all features identical, and (5) one triangle. Using a mixture-model algorithm (Suchow, Brady, Fougnie, & Alvarez, 2013; Zhang & Luck, 2008), we estimated the capacity and fidelity of VSTM. We found perfect capacity and equal fidelity for both features in Conditions 4 and 5, which shows that observers compressed the information very well. Both capacity and fidelity were also reduced for a feature when it became variable, but not for the other feature that remained constant (Conditions 2 and 3). Finally, when both features were variable (Condition 1), we observed some impairment in capacity and fidelity for both. Overall, our results show that feature compression in VSTM can be performed independently for each dimension, even when those features are bound in objects.
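For readers unfamiliar with the mixture-model analysis cited above, the sketch below illustrates the basic two-component model of Zhang & Luck (2008): recall errors on a circular feature dimension are modeled as a mixture of a von Mises distribution centered on the target (its concentration reflects fidelity) and a uniform guessing distribution (its weight reflects the probability that the item was in memory, i.e., capacity). This is only an illustrative Python sketch of that general approach, not the analysis code used in the study (which relied on the MemToolbox of Suchow et al., 2013); the function names, starting values, and simulated parameters are assumptions for demonstration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import i0, i1
    from scipy.stats import vonmises


    def neg_log_likelihood(params, errors):
        # errors: response-minus-target differences in radians, wrapped to (-pi, pi].
        # params: g (guess rate) and kappa (von Mises concentration).
        g, kappa = params
        # With probability (1 - g) the item is in memory and the error follows a
        # von Mises centered on zero; with probability g the response is a
        # uniform guess on the circle.
        density = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
        return -np.sum(np.log(density))


    def fit_mixture(errors):
        # Fit g and kappa by maximum likelihood (L-BFGS-B with box constraints).
        result = minimize(neg_log_likelihood, x0=np.array([0.2, 5.0]),
                          args=(errors,),
                          bounds=[(1e-6, 1 - 1e-6), (1e-3, 200.0)])
        g, kappa = result.x
        # Circular standard deviation of the von Mises component (fidelity).
        sd = np.sqrt(-2.0 * np.log(i1(kappa) / i0(kappa)))
        # Return probability in memory ("capacity") and response SD ("fidelity").
        return 1 - g, sd


    if __name__ == "__main__":
        # Simulated example (hypothetical values): 70% of trials remembered with
        # kappa = 8, the remaining 30% are random guesses on the circle.
        rng = np.random.default_rng(0)
        n = 300
        remembered = rng.random(n) < 0.7
        errors = np.where(remembered,
                          rng.vonmises(0.0, 8.0, n),
                          rng.uniform(-np.pi, np.pi, n))
        p_mem, sd = fit_mixture(errors)
        print(f"P(in memory) = {p_mem:.2f}, circular SD = {np.degrees(sd):.1f} deg")

In the study's terms, the fitted probability-in-memory corresponds to the capacity estimate and the circular SD to the fidelity estimate reported for each condition.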
Meeting abstract presented at VSS 2016