Vision Sciences Society Annual Meeting Abstract  |  August 2023
Volume 23, Issue 9
Open Access
Human see, human do: comparing visual and motor representations of hand gestures
Author Affiliations
  • Hunter Schone
    Laboratory of Brain & Cognition, National Institute of Mental Health, National Institutes of Health
    Institute of Cognitive Neuroscience, University College London
  • Tamar Makin
    Institute of Cognitive Neuroscience, University College London
    MRC Cognition and Brain Sciences Unit, University of Cambridge
  • Chris Baker
    Laboratory of Brain & Cognition, National Institute of Mental Health, National Institutes of Health
Journal of Vision August 2023, Vol.23, 5628. doi:https://doi.org/10.1167/jov.23.9.5628
Abstract

Our hands are our primary means of interacting with our surroundings. As such, they are supported by a plethora of relevant representations in the brain, most notably somatosensory and motor representations in sensorimotor cortex and visual representations in occipitotemporal cortex (OTC). Here, we compared the representational structure of observed and executed hand gestures within and across visual and sensorimotor cortices using 3T functional MRI and 8-channel electromyography in human participants (n=60). To characterize both visual and motor features of hand representation, participants performed a visuomotor task that required them either to execute a specific hand gesture (8 gestures: open, close, pinch, tripod, one finger, two fingers, three fingers, four fingers) or to observe a first-person video of a biological or robotic hand performing the same gesture. First, when visualizing the univariate activity for the contrast of actions vs. observations, we found the expected preference for actions in sensorimotor cortex. Within OTC, however, we found separate regions preferring hand actions (anterior portion) and hand observation (posterior portion). Next, using representational similarity analysis, we quantified the multivariate representational structure of observed and executed hand gestures in both regions. We found that OTC has more separable representations for hand actions and observations than sensorimotor cortex. When quantifying observations alone, we found distinct representational structure between observations within OTC only, which was similar for observations of biological and robotic hands, and limited similarity in the structure of observations between regions. Surprisingly, when quantifying hand actions alone, we found similar distances between actions in OTC and sensorimotor cortex, and a strong correlation when comparing the representational structure between the two regions.
Collectively, these results reveal fine-grained representational structure about hands in OTC that is similar for both observation and action and suggest a systematic visuomotor organization within OTC.
