Abstract
When searching for a relevant item in our visual environment (say, an apple), we create a memory template (e.g., a small circular red object) that causes our visual system to favor template-matching visual input (e.g., apples) at the expense of template-mismatching visual input (e.g., leaves of the apple tree). While this principle seems straightforward in a lab setting, it poses a problem in naturalistic viewing: two objects that subtend the same size on the retina can differ in real-world size if one is nearby and the other is far away. We asked whether visual objects that match the perceived size of a memory template are favored over mismatching visual objects, even when the competing objects subtend the same retinal size. On each trial, participants were retro-cued to memorize the size of one of two objects for subsequent recall. During the retention interval, participants made a speeded report of the tilt of a target grating. Critically, the target was preceded by a scene containing two identical objects (of a size in between those of the cued and non-cued objects) that differed in perceived size due to their placement in the scene (near or far). Finally, participants adjusted the size of a test object to reproduce the size of the memorized object. Gratings appearing at the location of a 'far' object elicited faster response times when a larger object was memorized, whereas gratings appearing at the location of a 'near' object elicited faster response times when a smaller object was memorized. First, these findings show for the first time that memory templates favor concurrent visual processing of size-matching objects, extending previous findings with color or shape templates. Second, our data reveal that memory templates impact the processing of visual input at a perceptual rather than veridical level.
Meeting abstract presented at VSS 2018