September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Modifying perceptual rules for surface representation with perceptual learning
Author Affiliations
  • Jessica Holmin
    College of Optometry, The Ohio State University
  • Chao Han
    College of Optometry, The Ohio State University
  • Teng Leng Ooi
    College of Optometry, The Ohio State University
  • Zijiang J. He
    Department of Psychological and Brain Sciences, University of Louisville
Journal of Vision August 2017, Vol.17, 1368. doi:10.1167/17.10.1368
Abstract

Two-dimensional retinal images are ambiguous about the 3-D configurations of real-world surfaces/objects. For example, a square surface juxtaposed with an L-shaped target (L-coplanar condition) could be perceived either as a square and an L, or as two overlapping square surfaces due to partial occlusion. The latter surface representation is often preferred because the visual system has learned from past experience to use T-junction information for image segmentation, which renders the L-shaped target in back and causes it to be represented as an occluded square (amodal completion). Additionally, if the L-shaped target is rendered in back with uncrossed binocular disparity (L-back condition), a bottom-up depth cue, the tendency to see it as a partially occluded square increases. Only when the L-shaped target is rendered in front with crossed disparity (L-front condition) is it unambiguously seen as an L. Using such stimuli, He & Nakayama (1992) found that observers took longer to find the "L" in the L-back condition than in the L-front condition in a visual search task. This is because the L-back search elements contained both T-junction and binocular disparity cues, leading the visual system to interpret the L-shapes as partially occluded squares according to its internal perceptual rule. Here, we investigated whether the perceptual rule (T-junction) could be modified by compelling the visual system to search for the L-shaped target (task-specific learning). We did this by having observers perform roughly 15,000 visual search trials in the L-coplanar condition. We found that search times improved, with L-coplanar and L-front RTs becoming similar. However, search in the L-back condition remained slower. This suggests that with task-specific learning, the visual system can learn to discount T-junction information (experiential knowledge) for image segmentation, but that it is harder to discount binocular disparity information (bottom-up).

Meeting abstract presented at VSS 2017
