September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Mitigating Perceptual Error in Synthetic Animatronics using Visual Feature Flow
Author Affiliations
  • Ryan Schubert
    Institute for Simulation and Training, University of Central Florida
    Computer Science Department, University of North Carolina at Chapel Hill
  • Gerd Bruder
    Institute for Simulation and Training, University of Central Florida
  • Greg Welch
    Institute for Simulation and Training, University of Central Florida
Journal of Vision August 2017, Vol. 17, 331. https://doi.org/10.1167/17.10.331
Abstract

The perceived visual flow of features on a 3D object provides cues about the underlying shape and motion. Likewise, imagery of a dynamic virtual object projected onto a static, featureless physical object can be used to simulate shape or motion. If the shape of the virtual object differs geometrically from the physical object, the visual flow of features moving across the surface will be distorted, conveying incorrect shape cues. Mitigating these incorrect shape cues supports synthetic animatronics: simulating physical motion or deformation on geometrically static display surfaces. To achieve this, we define two sets of feature flow curves that represent the visual flow of a set of features over the course of an animation for a specific viewpoint: one set representing the flow that corresponds to correct perception of the virtual object, and a second representing the flow as distorted by the display surface. These feature flow curves provide a basis both for a perceptual error measure at single time steps (e.g., visual angular error) and for identifying temporal flow patterns that might give perceptual shape cues for the underlying display surface (e.g., a sharp trajectory change indicative of a fold or edge). We then dynamically alter the virtual imagery on the physical surface to reduce perceptual error by diminishing the visibility of specific features (and thus the resulting visual flow), using contrast reduction or low-pass filtering proportional to the aggregate error across a set of viewpoints. We have observed that this dynamic filtering of the virtual imagery reduces the unwanted perception of the underlying surface while maintaining feature salience in areas of geometric similarity, upholding the overall perception of the desired virtual shape and motion.
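
The abstract describes two computational components: a per-time-step angular error between the desired and distorted feature flow curves (with detection of sharp trajectory changes that could cue a fold or edge), and low-pass filtering of the projected imagery in proportion to aggregate error. The sketch below illustrates these ideas in Python/NumPy; it is not the authors' implementation, and the function names, array layouts, and the 60-degree turn threshold are hypothetical choices for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def angular_error(virtual_flow, surface_flow):
        """Per-time-step visual angular error between two feature flow curves.

        Each argument is a (T, 2) array giving one feature's image-plane
        position (in degrees of visual angle) at each of T animation frames.
        The error at each step is the angle between the two instantaneous
        flow vectors.
        """
        v = np.diff(virtual_flow, axis=0)   # desired flow vectors, (T-1, 2)
        s = np.diff(surface_flow, axis=0)   # distorted flow vectors, (T-1, 2)
        dot = np.sum(v * s, axis=1)
        norms = np.linalg.norm(v, axis=1) * np.linalg.norm(s, axis=1)
        cos = np.clip(dot / np.maximum(norms, 1e-9), -1.0, 1.0)
        return np.arccos(cos)               # radians, one value per flow step

    def sharp_turns(flow, threshold=np.radians(60)):
        """Flag sharp trajectory changes that may cue a fold or edge in the
        underlying display surface (threshold is an illustrative choice)."""
        d = np.diff(flow, axis=0)
        headings = np.arctan2(d[:, 1], d[:, 0])
        turn = np.abs(np.diff(np.unwrap(headings)))
        return turn > threshold             # boolean per interior flow step

    def filter_by_error(image, error_map, max_sigma=5.0):
        """Low-pass filter the imagery in proportion to aggregate flow error.

        `image` is an (H, W, 3) float array; `error_map` is an (H, W) array
        of per-pixel error aggregated across viewpoints, normalized to
        [0, 1]. High-error regions are blended toward a blurred copy,
        diminishing the visibility (and hence the visual flow) of features
        there while leaving low-error regions sharp.
        """
        blurred = gaussian_filter(image, sigma=(max_sigma, max_sigma, 0))
        w = np.clip(error_map, 0.0, 1.0)[..., None]
        return (1.0 - w) * image + w * blurred

Contrast reduction, the alternative mentioned in the abstract, could be sketched the same way by blending high-error regions toward their local mean luminance instead of a blurred copy.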

Meeting abstract presented at VSS 2017
