Erica Wager, Glyn W. Humphreys, Paige E. Scalf; Correct action affordance among unattended objects reduces their competition for representation in V4. Journal of Vision 2014;14(10):304. doi: 10.1167/14.10.304.
Real-world objects occur in interactive contexts rather than in isolation. Previous evidence suggests that the neural processing of multiple visual objects changes as a function of their action affordances; specifically, objects evoke a greater visual signal when positioned correctly for interaction (Roberts et al., 2010). This has functional consequences for parietal lobe patients who experience visual extinction: an ipsilesional visual item is less likely to extinguish a contralesional visual item if the two items are positioned to interact with one another (e.g., a corkscrew going into the top of a wine bottle; Riddoch et al., 2003). Roberts, Riddoch, Humphreys and colleagues posit that objects positioned appropriately for action are likely to form a single functional unit and thus less likely to compete for representation within the visual system (Desimone & Duncan, 1995). Here, we use fMRI to test this hypothesis directly. We presented participants with pairs of semantically related objects (e.g., a frog and a wand) in the upper left visual field (center-to-center separation = 2.2 degrees). Objects were positioned with either correct (wand located to the upper right of the frog) or incorrect (frog located to the upper right of the wand) action affordances. We assessed competitive interactions between the items by comparing the signal they evoked when presented simultaneously (likely to compete) vs. sequentially (unlikely to compete). Items were less likely to compete for representation in V4 when presented with the correct action affordance (sensory suppression index (SSI) = .155) than when presented with the incorrect action affordance (SSI = .358, p < .05). Objects did not compete for representation in V1 or V2, whose cells' receptive fields are small enough to permit independent representation of the items.
These data suggest that knowledge of objects and their interactions creates a larger "action unit" that modulates early visual processing.
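The abstract reports sensory suppression indices but does not define the measure. As an editorial aid, the sketch below assumes the definition commonly used in comparable simultaneous-vs-sequential fMRI paradigms, SSI = (SEQ − SIM) / (SEQ + SIM), where SEQ and SIM are the mean responses to sequential and simultaneous presentation; a larger SSI indicates stronger competition. The function name and the numeric inputs are illustrative assumptions, not values from the study.

```python
def sensory_suppression_index(seq_response: float, sim_response: float) -> float:
    """Normalized difference between sequential and simultaneous responses.

    Assumed formula: (SEQ - SIM) / (SEQ + SIM). Higher values indicate
    stronger competition (more suppression under simultaneous presentation).
    """
    return (seq_response - sim_response) / (seq_response + sim_response)


# Hypothetical response magnitudes chosen only to illustrate how the
# reported indices (.155 for correct affordance, .358 for incorrect)
# could arise under this definition.
ssi_correct = sensory_suppression_index(1.155, 0.845)    # ≈ .155
ssi_incorrect = sensory_suppression_index(1.358, 0.642)  # ≈ .358
```

Under this definition, the smaller V4 index for correctly afforded pairs means the simultaneous response lay closer to the sequential response, i.e., less mutual suppression between the two objects.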
Meeting abstract presented at VSS 2014