Ryan O'Donnell, Hui Chen, Baruch Eitam, Brad Wyble; From location to configuration: Does the structure of a display stick in memory as strongly as target location? Journal of Vision 2018;18(10):693. doi: https://doi.org/10.1167/18.10.693.
A recently discovered phenomenon, termed Attribute Amnesia (AA), demonstrates an inability to report an attribute of an attended item, even when that attribute was used to successfully perform a task in the immediately preceding trial. For example, when asked to locate a letter among digits, participants could not remember the specific letter they had located when asked to identify it in a surprise question. However, Chen & Wyble (2015b) demonstrated that location is strongly spared from this effect and may be automatically consolidated into memory regardless of its relevance. Yet it is unknown whether the automatic encoding of location information extends to other aspects of the display, such as the spatial structure of the display itself or the items surrounding a target. In this study, participants underwent a standard AA paradigm in which they located a letter among digit distractors, followed by a surprise question that asked for the letter's identity. Importantly, participants were also asked in the surprise trial to identify the structure of the target-distractor display itself, which randomly varied between two configurations: top-left/bottom-right or top-right/bottom-left in a notional rectangle. The structure of the display should not be task-relevant, as the location of the distractor would not help participants find the target. Memory performance for the display's structure (73.33% correct) was significantly above the 50% chance level (N = 30, p < .01), indicating that participants remember not just the location of the target but also the configuration of the display. This work provides insight into the nature of representations that are constructed automatically as we perform a task.
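The chance comparison can be reproduced with an exact binomial test. This is a hedged sketch, not the authors' analysis: the abstract does not state which test was used, and 22/30 is inferred here because 22/30 ≈ 73.33% with N = 30. A one-sided exact test against a 0.5 chance level, using only the Python standard library:

```python
from math import comb

# Assumed from the abstract: 30 participants, 73.33% correct => 22/30.
n, k, chance = 30, 22, 0.5

# One-sided exact binomial p-value: probability of observing k or more
# correct responses out of n under the 50% chance hypothesis.
p_value = sum(comb(n, i) * chance**i * (1 - chance) ** (n - i)
              for i in range(k, n + 1))

print(f"accuracy = {k / n:.2%}")   # accuracy = 73.33%
print(f"one-sided p = {p_value:.4f}")
```

Under these assumptions the one-sided p-value comes out just below .01, consistent with the reported p < .01; a two-sided test would roughly double it.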
Meeting abstract presented at VSS 2018