Vision Sciences Society Annual Meeting Abstract  |   July 2013
Shape From Very Little: The Visual and Haptic Kinetic Depth Effect
Author Affiliations
  • Flip Phillips
    Skidmore College, Neuroscience & Psychology
  • Farley Norman
    Western Kentucky University, Psychology
  • Kriti Behari
    Skidmore College, Neuroscience & Psychology
  • Kayla Kleinman
    Skidmore College, Neuroscience & Psychology
  • Julia Mazzarella
    Skidmore College, Neuroscience & Psychology
Journal of Vision July 2013, Vol.13, 266. doi:https://doi.org/10.1167/13.9.266
Abstract
 

Our companion poster at this conference (Norman, Phillips, et al., 2013) examines the oft-noted phenomenon that structure from motion facilitates 3D shape perception. Traditional structure-from-motion algorithms rely on a multitude of trackable features to recover three-dimensional shape, yet the simple deformation of a boundary silhouette is often enough to support reliable judgments about object shape. Similarly, active haptic exploration of objects provides a rich amount of information for shape discrimination, but reliable discriminations can still be made from passive encounters with objects. In this work we use a series of shapes ('glavens') that are scalable in relative complexity to examine subjects' ability to identify shapes from various amounts of impoverished visual and tactile information. The stimuli were presented via boundary contours, specular highlights, solid texture, or haptically. In each condition block, the objects were presented with or without motion, or were actively or passively scanned in the haptic condition. On a given trial, a randomly selected glaven was presented for approximately 12 seconds via computer graphics. The subject then identified the presented glaven from a 'lineup' array of 3D-printed object instances. Results for the various presentation conditions matched those in our companion poster: motion and active haptic exploration facilitated identification by similar amounts. Furthermore, because our objects were scaled in complexity, we were able to establish that the magnitude of object-confusion errors decreased in the motion and active-exploration conditions (i.e., identification errors usually involved objects of similar complexity).

 

Meeting abstract presented at VSS 2013

 