December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Intention to grasp reactivates shape processing
Author Affiliations
  • Nina Lee
    University of Toronto
  • Lawrence Lin Guo
    University of Toronto
  • Adrian Nestor
    University of Toronto
  • Matthias Niemeier
    University of Toronto
Journal of Vision December 2022, Vol.22, 4202.

      Nina Lee, Lawrence Lin Guo, Adrian Nestor, Matthias Niemeier; Intention to grasp reactivates shape processing. Journal of Vision 2022;22(14):4202.

      © ARVO (1962-2015); The Authors (2016-present)

Paying attention to object features such as motion, contours, or colours can enhance visual processing throughout the visual field. Relatedly, when we grasp an object, processing the visual properties of the object is necessary to execute the grasp successfully. However, few studies have examined attention-induced changes to the visual computations of object features, specifically their temporal dynamics. Here, we aimed to clarify the time course of attention to object features using multivariate EEG analyses. We recorded electrophysiological signals from 64 scalp electrodes in human participants while they viewed and acted upon real objects. Objects had one of two shapes (“flower” or “pillow”) and one of two materials (steel or wood). To manipulate attention to these features, participants either grasped and lifted the objects or touched them with their knuckle, thus making shape and material more or less relevant to the task. We then performed pattern classification of shape and material based on spatiotemporal EEG data. Classification accuracy peaked transiently around 100–200 ms after stimulus presentation. Shape classification was more robust than material classification, but classification performance did not differ markedly between tasks. However, cross-temporal generalization of shape representations revealed that only for grasping did early and late neural generators reactivate one another during action planning; shape computations during knuckling instead involved a chained activation of generators. Our findings suggest that task-related attention does indeed modulate the visual processing of shape, such that earlier representations are stored and reactivated during action planning, but only if they are relevant to the task at hand.
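The cross-temporal generalization analysis mentioned above can be sketched as follows. This is a minimal illustrative toy example on simulated data, not the authors' actual pipeline: the trial counts, signal model, and choice of logistic-regression classifier are all assumptions made for demonstration. A classifier is trained at each time point and tested at every other; above-chance off-diagonal accuracy is the signature of one representation generalizing (or being reactivated) at another time.

```python
# Toy sketch of cross-temporal generalization decoding on simulated
# "EEG" data (trials x channels x time). All sizes and the classifier
# choice are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 64, 20  # assumed, not from the study

# Two classes (e.g., "flower" vs. "pillow") with a class-dependent
# signal injected only in a middle time window.
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_channels, n_times))
signal = rng.normal(size=n_channels)
for t in range(8, 14):  # signal present only in this window
    X[y == 1, :, t] += 0.8 * signal

# Split trials into train and test sets.
idx = rng.permutation(n_trials)
train, test = idx[:60], idx[60:]

# Generalization matrix: gen[t_train, t_test] is the accuracy of a
# classifier trained at t_train and evaluated at t_test.
gen = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_tr], y[train])
    for t_te in range(n_times):
        gen[t_tr, t_te] = clf.score(X[test, :, t_te], y[test])

print(gen.round(2))
```

In a chained-activation pattern, high accuracy is confined near the diagonal (each generator decodes only at its own time); in a reactivation pattern, classifiers trained at early times also decode late times, producing off-diagonal blocks of above-chance accuracy.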

