Volume 19, Issue 10  |  Open Access
Vision Sciences Society Annual Meeting Abstract  |  September 2019
Graspable objects grab attention more than images do – even when no motor response is required
Author Affiliations & Notes
  • Pedro Sztybel
    The University of Nevada, Reno
  • Michael A. Gomez
    The University of Nevada, Reno
  • Jacqueline C. Snow
    The University of Nevada, Reno
Journal of Vision September 2019, Vol. 19, 221. doi: https://doi.org/10.1167/19.10.221
© ARVO (1962-2015); The Authors (2016-present)

Abstract

Recent research from our lab has shown that real-world objects can bias attention and influence manual responses more strongly than computerized images. Specifically, using a flanker task, we showed that response times (RTs) were slower overall, and flanker interference effects larger, for real graspable objects than for matched two-dimensional (2-D) or three-dimensional (3-D) images of the same objects; however, when the real objects were placed out of reach or behind a transparent barrier, overall RTs and flanker interference effects were comparable to those for images (Gomez, Skiba, & Snow, 2017). A potential explanation for these results is that graspable objects (but not images) capture attention because they afford manual interaction, and the action required to respond to the central target (i.e., a button-press) conflicts with the motor plan generated by the irrelevant real-object flanker (i.e., a grasp). This account predicts that when a manual response is not required to complete the task, differences in attentional capture should remain, whereas overall RTs should be comparable across display formats. To test this prediction, we used an exogenous spatial cueing paradigm and compared capture effects for real objects versus matched 2-D and 3-D images of the same items in a task that required a verbal (instead of manual) response. We found that real objects elicited a stronger spatial cueing effect than either the 2-D or 3-D images, but overall RTs were comparable across display formats. These findings replicate previous results from our lab showing that real-world graspable objects capture attention more than 2-D or 3-D images, and further demonstrate that this attentional effect persists even when no motor response is required to complete the task.

Acknowledgement: NIH grant R01EY026701 awarded to J.C.S. 