Vision Sciences Society Annual Meeting Abstract | September 2016
The sum is no more than its parts: No evidence for bound features during multi-feature visual change detection
Author Affiliations
  • Alex Burmester
    Department of Psychology, New York University Abu Dhabi (NYUAD)
  • Daryl Fougnie
    Department of Psychology, New York University Abu Dhabi (NYUAD)
Journal of Vision September 2016, Vol.16, 1068. doi:https://doi.org/10.1167/16.12.1068

Abstract

Studies have shown that we can hold very little information in working memory, even for simple visual features. But what type of information is stored in working memory? Some have suggested that we store coherent bound objects (e.g., colored triangles; Luck & Vogel, 1997). Others have suggested that the units of memory are individual visual features (e.g., color or orientation; Bays, Wu, & Husain, 2011; Fougnie & Alvarez, 2011). Reconciling these findings is challenging because the studies used different tasks. Studies supporting the 'features-bound' hypothesis have typically used change detection accuracy judgments, whereas studies supporting the 'features-unbound' hypothesis have used production tasks in which participants adjust feature values to match memory items. A concern is that the task itself may affect working memory representations. To explore this, we contrasted feature-bound and feature-unbound accounts using a change detection task in which the number of changing features (one or two) was manipulated between and within objects. Both accounts predict improved performance with two changes. The features-unbound hypothesis predicts equivalent performance for two changes within and between objects. Critically, the features-bound hypothesis predicts that performance will be better when one feature changes in each of two objects than when two features change in one object, since the latter provides a benefit only when participants stored the object but missed one feature change, not when they failed to store the object at all. We found evidence consistent with unbound features (N=12). Change detection performance was equivalent when two objects each changed a single feature (color in one object, orientation in the other) and when one object changed two features (color and orientation). Furthermore, the data were well explained by a model in which features were remembered independently across objects. We suggest that features, not coherent objects, are the units retained in memory.

Meeting abstract presented at VSS 2016
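To make the contrast between the two predictions concrete, here is a minimal illustrative calculation (not part of the abstract; the detection probabilities p_c, p_o, d and the per-object storage probability s are symbols introduced here only for exposition). Under the features-unbound account, if a color change and an orientation change are detected independently with probabilities p_c and p_o, the chance of detecting at least one change is

\[ P_{\mathrm{unbound}} = 1 - (1 - p_c)(1 - p_o), \]

which is the same whether the two changing features belong to one object or to two different objects. Under the features-bound account, suppose each object is stored as a whole with probability s, and a change to a feature of a stored object is noticed with probability d. Two changes within one object still hinge on that single object having been stored, whereas one change in each of two objects gives two independent chances:

\[ P_{\mathrm{within}} = s\bigl[1 - (1 - d)^2\bigr] = 2sd - sd^2, \qquad P_{\mathrm{between}} = 1 - (1 - sd)^2 = 2sd - s^2d^2 . \]

Because s \le 1, P_between \ge P_within, with a strict advantage for between-object changes whenever storage is incomplete (s < 1): the within-object benefit can only rescue trials on which the object was stored but one feature change was missed, not trials on which the object was never stored.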
