Patrick Garrigan; Representations of Meaningful Objects in Visual Long-Term Memory Have Greater Invariance and Resist Proactive Interference. Journal of Vision 2020;20(11):23. doi: https://doi.org/10.1167/jov.20.11.23.
Visual long-term memory (VLTM) is better when the encoded items are meaningful. Three experiments investigated why a representation that includes both visual and semantic content may increase VLTM performance. A behavioral calibration procedure was used to sort images of tools (broadly defined) into a familiar set (known function; both visual and semantic information available for encoding) and an unfamiliar set (unknown function; mostly visual information available for encoding). In all three experiments, participants viewed 50 images of objects (either familiar or unfamiliar) in sequence and then completed a two-alternative forced-choice (2AFC) memory test in which they indicated which of two simultaneously presented objects had been shown previously. The first two experiments tested the hypothesis that meaningful, familiar objects are encoded in a format more invariant to incidental viewing conditions. Experiment 1 showed that memory for familiar objects is less affected by removal of color information between study and test. Experiment 2 showed that familiar objects are less affected by partial occlusion at test. The third experiment tested the hypothesis that visual memory representations of meaningful, familiar objects are more robust to proactive interference. Using a pre-exposure procedure, Experiment 3 showed that familiar objects benefit from lower levels of interference, provided the interfering objects viewed during pre-exposure are not semantically similar to the test set. When the pre-exposure objects shared both visual and semantic information with the test set, performance for familiar objects fell to the level of the unfamiliar objects, suggesting that the advantage of additional semantic content was entirely lost. Together, the three experiments demonstrate how incorporating semantic information into memory representations enhances visually guided recognition.
Specifically, visual-semantic representations are more invariant to viewing conditions and less susceptible to proactive interference.