August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
The compression of bound features in visual short-term memory
Author Affiliations
  • Yuri Markov
    National Research University Higher School of Economics, Moscow, Russia
  • Igor Utochkin
    National Research University Higher School of Economics, Moscow, Russia
Journal of Vision, September 2016, Vol. 16, 1071.
      Yuri Markov, Igor Utochkin; The compression of bound features in visual short-term memory. Journal of Vision 2016;16(12):1071.

      © ARVO (1962-2015); The Authors (2016-present)

Numerous studies have documented that the visual system exploits regularities across multiple objects to perceive and store large chunks of information more efficiently, thereby mitigating the severe limits of its processing bottleneck. In information-theoretic terms, this can be called compression. Visual short-term memory (VSTM) has been shown to use regularities in object features to compress data (Brady, Konkle, & Alvarez, 2009). It is also likely that VSTM can store the features of an object bound together, although at some cost of binding (Fougnie, Asplund, & Marois, 2010; Luck & Vogel, 1997). We tested how compression is carried out for separable features bound in objects. In our experiments, observers memorized the color and orientation of triangles and then, after a 1-second delay following the offset of the triangles, recalled either the color or the orientation of a probed triangle. There were five conditions: (1) three triangles with all features different, (2) three triangles with different colors and identical orientations, (3) three triangles with different orientations and identical colors, (4) three triangles with all features identical, and (5) one triangle. Using a mixture-model algorithm (Suchow, Brady, Fougnie, & Alvarez, 2013; Zhang & Luck, 2008), we estimated the capacity and fidelity of VSTM. We found full capacity and equal fidelity for both features in conditions 4 and 5, showing that observers compressed the information very efficiently. Both capacity and fidelity were reduced for one of the two features when that feature became variable, but not for the other feature, which remained constant (conditions 2 and 3). Finally, when all features were variable (condition 1), we observed some impairment in both capacity and fidelity. Overall, our results show that feature compression in VSTM can be performed independently for each dimension, even when those features are bound in objects.
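The capacity and fidelity estimates above come from a mixture-model analysis in the style of Zhang and Luck (2008), in which each recall error is treated as either an in-memory response (a von Mises distribution centered on the target) or a random guess (uniform over the circle); the guess rate indexes capacity and the von Mises concentration indexes fidelity. The following is a minimal illustrative sketch of that two-parameter fit on simulated data, not the authors' actual pipeline (which used the Suchow et al., 2013, algorithm); all function names and the simulated parameter values are our own assumptions.

```python
import numpy as np
from scipy.special import i0
from scipy.optimize import minimize

def mixture_nll(params, errors):
    """Negative log-likelihood of a two-parameter mixture model:
    with probability (1 - g), errors follow a von Mises centered on the
    target (concentration kappa, indexing fidelity); with probability g,
    they are uniform guesses (g indexes the capacity limit)."""
    g, kappa = params
    von_mises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return -np.sum(np.log((1 - g) * von_mises + g * uniform))

def fit_mixture(errors):
    """Fit guess rate g and concentration kappa to recall errors (radians)
    by maximum likelihood."""
    result = minimize(mixture_nll, x0=[0.3, 5.0], args=(errors,),
                      bounds=[(1e-4, 1 - 1e-4), (0.1, 100.0)])
    return result.x  # (g_hat, kappa_hat)

# Simulate one hypothetical condition: 25% guessing, moderate precision.
rng = np.random.default_rng(0)
n = 2000
is_guess = rng.random(n) < 0.25
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.0, 8.0, n))
g_hat, kappa_hat = fit_mixture(errors)
```

Under this model, a condition with redundant (identical) features would show a low fitted guess rate and high concentration, whereas making a feature variable across items would raise the guess rate and lower the concentration for that feature.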

Meeting abstract presented at VSS 2016

