Vision Sciences Society Annual Meeting Abstract  |   September 2011
Learned bias for 3-D shape perception without object motion
Author Affiliations
  • Anshul Jain
    Graduate Center for Vision Research, SUNY College of Optometry, USA
    SUNY Eye Institute, USA
  • Benjamin T. Backus
    Graduate Center for Vision Research, SUNY College of Optometry, USA
    SUNY Eye Institute, USA
Journal of Vision September 2011, Vol.11, 981. doi:https://doi.org/10.1167/11.11.981
Abstract

Visual signals such as retinotopic location and object translation direction can be recruited as cues that influence the perceived rotation direction of 3D objects (e.g., Haijiang et al., PNAS, 2006; Harrison & Backus, J Vis, 2010). However, all of these stimuli contained motion, so the learning could have been idiosyncratic to moving stimuli. For example, training could have biased MT neuron populations jointly tuned for motion and disparity.

We tested whether location and grating orientation can be learned as cues to 3D shape in the absence of motion signals. In Experiment 1, stimuli depicted a perceptually ambiguous dihedral angle (a book cover). On training trials, observers' percept was controlled using disparity, occlusion, and luminance cues; stimuli were presented above or below fixation, with location correlated with stimulus configuration. In Experiment 2, stimuli depicted a 3D zigzag shape whose two end surfaces (of three attached surfaces) were frontoparallel; this stimulus was also ambiguous, and one or the other frontoparallel surface appeared closer on each trial. On training trials, observers' percept was controlled using disparity and luminance cues. The end surfaces were textured with oriented hatching, and their depth order was correlated with hatching orientation during training. Ambiguous test stimuli were pseudo-randomly interleaved with training stimuli to measure learning.

In Experiment 1, observers' (N = 8) perceived configuration on test trials was consistent with the trained location-configuration correlation, and the bias was still evident the next day. No learning was observed in Experiment 2 (N = 6). Thus, location-dependent biases are a general property of the visual system: “priors” (in the Bayesian sense) for interpreting ambiguous stimuli can be learned at specific retinotopic locations, presumably because of the retinotopic organization of the early visual system. More abstract cues, such as texture orientation, presumably require more extensive training.

NSF grant BCS-0617422 and NIH grant EY-013988 to B. T. Backus.