August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Grasping type affects Configural Encoding in Visual Working Memory
Author Affiliations
  • Shinhae Ahn
    Washington University in St. Louis
  • Hyung-Bum Park
    University of California, Riverside
  • Richard A. Abrams
    Washington University in St. Louis
Journal of Vision August 2023, Vol. 23, 5877. doi: https://doi.org/10.1167/jov.23.9.5877
Citation: Shinhae Ahn, Hyung-Bum Park, Richard A. Abrams; Grasping type affects Configural Encoding in Visual Working Memory. Journal of Vision 2023;23(9):5877. https://doi.org/10.1167/jov.23.9.5877.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

The present study examined how different hand-grasping postures affect the encoding of global configural information in visual working memory (VWM). In three experiments, participants performed an orientation change-detection task while grasping handles in one of two postures: a power grasp or a precision grasp. On each trial, participants briefly viewed a memory array of rotated bars and then, after a retention interval, reported whether a cued item in a test array had the same or a different orientation compared to the memory array. The availability of global configural information (i.e., the overall shape connecting the individual bars) was manipulated either by surrounding the bars with circles, which impairs configural processing, or by varying the spatial organization of the stimuli (systematic vs. random). The results showed that overall change-detection performance benefited from configural encoding. Surprisingly, the magnitude of the configural encoding benefit was larger when participants maintained a precision grasp than when they maintained a power grasp. These results are consistent with a bias in favor of parvocellular processing when a precision grasp is prepared. Overall, the findings highlight the functional interaction between manual gestures and feature-specific visual information processing.
