September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Generating reliable visual long-term memory representations for free: Incidental learning during natural behavior
Author Affiliations & Notes
  • Dejan Draschkow
    Department of Psychology, Goethe University Frankfurt
  • Melissa L.-H. Võ
    Department of Psychology, Goethe University Frankfurt
Journal of Vision September 2019, Vol.19, 291a. doi:https://doi.org/10.1167/19.10.291a
Abstract

In comparison to traditional laboratory studies, which benchmark the capacity of our cognitive apparatus, natural interactions with our environment can reveal how that capacity is actually used. In natural behavior we rarely use cognitive subsystems (e.g., working memory, WM) to capacity or follow a strict top-down encoding protocol (e.g., explicit memorization). Instead, we often act in a short-term, goal- and task-oriented manner, which results in a behaviorally optimal representation of our environment. In a series of computer-based, virtual-reality, and real-world experiments, we investigated the role of information sampling during natural tasks (via eye movements and incidental exposure durations) in the generation of visual long-term memories (VLTMs). Even after incidental encounters with thousands of isolated objects, VLTM capacity for these objects is reliable; however, the detail of these representations is quite sparse and does not increase when the incidental encounters are longer (Study 1). When searching for objects in an actual real-world environment, where locomotion is necessary, fixation durations on objects predict subsequent location memory, as measured with surprise memory tests (Study 2). Incidental representations generated during search are more reliable than memories established after explicit memorization in a realistic virtual environment (Study 3). Further, in a real-world object-sorting task, eye movements used to minimize WM load by instead gathering task-relevant information just before it is required significantly predict long-term location memory for objects (Study 4). Finally, in a virtual-reality paradigm (Study 5), we show that spatial priors can be activated in a one-shot manner within the first fixation into a new environment. Together, this rich set of studies shows that incidental information acquisition during natural behavior establishes reliable VLTM representations, which can be used to guide ongoing behavior in a proactive fashion. In conclusion, our inbuilt mechanisms for efficient, goal-directed short-term task completion strongly contribute to the generation and utilization of VLTM.

Acknowledgement: This work was supported by DFG grant VO 1683/2-1 and by SFB/TRR 135 project C7 to MLV.