Paige Scalf, Samantha Srivathsan Koushik, Autri Hafezi, Erica Wager, Jonathan Folstein; Perceptual Training and Competition for Representation in Visual Cortex. Journal of Vision 2015;15(12):1128. doi: 10.1167/15.12.1128.
© 2017 Association for Research in Vision and Ophthalmology.
When multiple stimuli simultaneously fall within the receptive fields of a common cell population, they compete for representation via a series of mutually inhibitory interactions (e.g. Duncan & Desimone, 1995). Competition is reduced if the items form a single perceptual entity, either due to lower-level perceptual grouping factors such as shape or color (McMains & Kastner, 2010) or higher-level relationships such as action affordance (e.g. Wager, Humphreys & Scalf, 2014). In this study, we investigate whether the relationships that reduce competition among multiple stimuli can be learned in a brief series of perceptual training sessions (~five sessions). Participants learned to name individual groups of five peripherally presented stimulus items. We measured blood oxygen-level dependent (BOLD) activity evoked in visual cortex by the trained stimulus configuration and compared it with that evoked by the same stimuli presented in untrained configurations. Competition for representation was quantified by comparing the signal evoked when stimuli were presented simultaneously (and were thus likely to compete for representation) with that evoked when stimuli were presented sequentially (and were thus unlikely to compete for representation). Preliminary data indicate that stimuli compete less for representation when presented in trained than in untrained configurations. The relationships that allow multiple stimuli to be simultaneously represented can thus be acquired after relatively brief sequences of perceptual training.
Meeting abstract presented at VSS 2015