August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Perceptual Learning of Feelies
Author Affiliations
  • Catherine Dowell
    University of Southern Mississippi
  • Alen Hajnal
    University of Southern Mississippi
Journal of Vision August 2023, Vol.23, 5479. doi:https://doi.org/10.1167/jov.23.9.5479

      Catherine Dowell, Alen Hajnal; Perceptual Learning of Feelies. Journal of Vision 2023;23(9):5479. https://doi.org/10.1167/jov.23.9.5479.


      © ARVO (1962-2015); The Authors (2016-present)

Abstract

According to the ecological theory, perceptual learning is not a process of enriching input from the environment but of developing the skill to differentiate previously overgeneralized information that was already available (Gibson & Gibson, 1955). This study investigated visual perceptual learning using ecologically valid, novel objects known as “feelies”. One hundred seventy-five participants performed a same-different discrimination task over learning blocks, with the goal of learning to perfectly discriminate a target object (one feelie) from all other objects (nine other feelies). Discrimination sensitivity (d’) was used to assess learning. Motion and viewpoint were manipulated to determine their influence on learning. Objects were viewed from either a side or a top view and were displayed either as static images or as rotating about a vertical axis. Participants were randomly assigned to one of four viewing conditions (Top-Static, Side-Static, Top-Motion, Side-Motion). Perceptual learning was expected to occur in all conditions but was hypothesized to occur sooner in the Side-Motion condition, because the side view is the most natural viewing angle and motion provides more information about 3-D shape than a static image does. Criterion was reached sooner in the static conditions than in the motion conditions, regardless of viewpoint, although the Side-Motion condition showed the most improvement. The discrimination task did not necessitate the use of 3-D shape or motion information; nevertheless, moving stimuli provided rich information that eventually resulted in superior learning. Would the outcome differ if the same shape discrimination task were framed within a context in which 3-D shape information were more useful (such as perceiving affordances)? The effects of viewpoint and motion will be discussed in the context of manipulating the nature of the task: perceiving affordances should necessitate the use of motion information from optimal viewpoints.
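For readers unfamiliar with the sensitivity measure named above: in signal detection theory, d’ is conventionally computed as the difference between the z-transformed hit rate and false-alarm rate. The abstract does not specify the authors’ exact computation (same-different designs sometimes use corrected variants), so this is only a minimal, standard-formula sketch with hypothetical rates:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Standard signal-detection sensitivity: d' = z(H) - z(FA).

    Rates must lie strictly between 0 and 1 (apply a correction such as
    the log-linear rule before calling this if a rate is 0 or 1).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 90% hits, 20% false alarms
print(round(d_prime(0.90, 0.20), 3))  # → 2.123
```

Higher d’ indicates better discrimination of the target feelie from the distractors; a d’ of 0 corresponds to chance performance.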
