December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
The multiple encoding benefit: encoding specificity does not hinder the retrieval generalizability of visual long-term memory
Author Affiliations & Notes
  • Caitlin J. I. Tozios
    University of Toronto
  • Keisuke Fukuda
    University of Toronto
    University of Toronto Mississauga
  • Footnotes
    Acknowledgements  This research is funded by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2017-06866) and the Connaught New Researcher Award from the University of Toronto.
Journal of Vision December 2022, Vol.22, 4047. doi:

A robust method for enhancing visual long-term memory (VLTM) retrieval is to encode visual information over multiple opportunities, a phenomenon known as the multiple encoding benefit (MEB). One overlooked aspect of the MEB is the potential cost of encoding specificity. According to the encoding specificity principle, memory performance is best when the context at encoding matches the context at retrieval. In a typical MEB experiment, visual information is not only encoded repeatedly in the same context but also retrieved in that same context. This raises the possibility that the MEB is contingent on the match between the repeated encoding context and the retrieval context. If so, the MEB may not extend to a new retrieval context, limiting the generalizability of memory retrieval. To examine the impact of encoding specificity on retrieval generalizability, we had participants encode a set of real-world objects, each presented on one of three nature scenes that served as the encoding context. Some objects were presented three times on the same scene (consistent encoding), while others were presented three times, each time on a different scene (variable encoding). For each encoding condition, we tested participants' VLTM recognition by having them retrieve items either in the same encoding context or in a brand-new context (a fourth scene). Comparing the two encoding styles, we found that neither was more effective than the other for retrieval generalization: regardless of consistent or variable encoding, VLTM performance was similar when items were retrieved in a new context. Furthermore, following consistent encoding, performance was similar for items retrieved in the original encoding context and for those retrieved in a new context. Therefore, encoding specificity in the MEB does not hinder retrieval generalizability, further demonstrating the enduring benefit of multiple encoding opportunities.

