September 2019, Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Processing Speed for Semantic Features and Affordances
Author Affiliations & Notes
  • Tyler A Surber
    University of Southern Mississippi
  • Mark Huff
    University of Southern Mississippi
  • Mary Brown
    University of Southern Mississippi
  • Joseph D Clark
    University of Southern Mississippi
  • Catherine Dowell
    University of Southern Mississippi
  • Alen Hajnal
    University of Southern Mississippi
Journal of Vision September 2019, Vol.19, 220b. doi:https://doi.org/10.1167/19.10.220b
      Tyler A Surber, Mark Huff, Mary Brown, Joseph D Clark, Catherine Dowell, Alen Hajnal; Processing Speed for Semantic Features and Affordances. Journal of Vision 2019;19(10):220b. doi: https://doi.org/10.1167/19.10.220b.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Gibson (1979) conjectured that perception of affordances involves detecting meaningful possibilities for action. Is the meaning obtained when an affordance is perceived qualitatively different from other types of semantic knowledge? Pilot investigations in our lab found that affordance primes are processed more slowly than semantic features and non-associates in a linguistic semantic-categorization task that presented words on a computer screen. The slower processing of affordance primes may reflect the fact that affordances are typically encountered through the senses, not as linguistic information. Chainay and Humphreys (2002) found that action knowledge was processed faster when objects were presented as pictures rather than words. Sensory information (pictures over words) may therefore be more relevant to action. For the present study, we hypothesized that pictorial depictions of objects would be better suited than linguistic information, such as words read on a computer screen, for facilitating affordance-based priming. We investigated the effects of affordance priming using a spatial categorization task. Eighty-one object nouns were compiled from the McRae et al. (2005) norms, and photographs of the corresponding objects were drawn as visual stimuli from the database compiled by Brodeur, Dionne-Dostie, Montreuil, and Lepage (2010). Affordances denoted possibilities for action in relation to objects (e.g., sit – chair), whereas semantic features indicated definitional characteristics (e.g., has legs – chair). Participants were presented with a prime and asked to respond by indicating whether the target object could fit inside a shoebox (Bowers & Turner, 2003). We manipulated image quality at three levels of blur to assess differential effects of the fidelity of visual information. Results showed that blurry images were processed slowest. Consistent with our hypothesis, affordances were processed faster than control stimuli across all levels of image quality, suggesting that the availability of affordance versus feature information may facilitate processing of visual objects.
