September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2021
3D cue remapping resulting from experienced variability of scene parameters
Author Affiliations
  • James Wilmott
    Brown University
  • Jovan Kemp
    Brown University
  • Fulvio Domini
    Brown University
Journal of Vision September 2021, Vol.21, 2119.

James Wilmott, Jovan Kemp, Fulvio Domini; 3D cue remapping resulting from experienced variability of scene parameters. Journal of Vision 2021;21(9):2119.


© ARVO (1962-2015); The Authors (2016-present)


Multiple cues, such as texture gradients and binocular disparity, are combined to derive 3D scene structure. Perceptual experience changes the mapping between cue values and 3D estimates, termed here 3D cue remapping. Prominent Bayesian models of cue combination assume that 3D cue remapping occurs via changes in relative reliabilities, resulting in ‘cue reweighting’ (Ernst, Banks & Bülthoff, 2000). An alternative model, termed Intrinsic Constraint (IC; Domini & Vishwanath, 2020), postulates the existence of deterministic image operators for each cue that do not estimate reliability. Instead, these operators are tuned to ideal scene parameters learned through repeated interactions with the environment. For example, the ideal material composition of an object yields a well-defined texture gradient. IC combines cues through a function that maximizes the response to 3D properties while minimizing the influence of scene parameters unrelated to 3D shape. This is achieved by scaling each cue by the variability of the corresponding scene parameter within the natural environment; 3D cue remapping occurs when the visual system changes its estimate of this variability. Here, we reasoned that repeated interactions with texture- and disparity-defined 3D objects varying in material composition should lower the contribution of texture gradients to 3D perception, even when cues are congruent and no mismatch between haptic and visual information is present. Before and after a training session, we determined the relative contribution of monocular and binocular cues. During training, observers repeatedly grasped cue-consistent 3D half-ellipsoids, always receiving the appropriate haptic feedback. However, three material compositions determined in a previous experiment were randomly selected on each trial, artificially expanding the range of variation of texture information. As predicted, the contribution of monocular information was significantly reduced after training.
These results suggest cue contributions to estimated depth can be dynamically adjusted according to experienced variability of scene parameters, rather than reliability.
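The contrast between the two weighting schemes can be sketched numerically. The following is an illustrative toy example, not the authors' implementation: `reliability_weights` follows the standard Bayesian inverse-variance rule, while `variability_weights` is a hypothetical stand-in for the IC idea that a cue is down-weighted as the experienced variability of its associated scene parameter (e.g., material composition for texture) grows. The specific functional form `1 / (1 + variance)` is an assumption chosen only to show the qualitative prediction.

```python
# Toy contrast between two cue-weighting schemes (illustrative only).

def reliability_weights(sigma_texture, sigma_disparity):
    """Bayesian cue reweighting: weights proportional to inverse
    variance (reliability) of each cue's sensory estimate."""
    r_t = 1.0 / sigma_texture ** 2
    r_d = 1.0 / sigma_disparity ** 2
    total = r_t + r_d
    return r_t / total, r_d / total

def variability_weights(var_texture_param, var_disparity_param):
    """Hypothetical IC-style weighting: each cue is scaled down by the
    experienced variability of its associated scene parameter (e.g.,
    material composition for texture), not by sensory noise. The
    1/(1+variance) form is an assumption for illustration."""
    w_t = 1.0 / (1.0 + var_texture_param)
    w_d = 1.0 / (1.0 + var_disparity_param)
    total = w_t + w_d
    return w_t / total, w_d / total

# Before training: moderate experienced variability for both parameters.
before = variability_weights(0.5, 0.5)
# After training with three randomly varying material compositions, the
# experienced variability of the texture-related parameter is larger,
# so the texture weight drops even though cue reliability is unchanged.
after = variability_weights(2.0, 0.5)
print(before, after)
```

Under this sketch, the texture weight after training is smaller than before, mirroring the reduced contribution of monocular information reported in the abstract, while a pure reliability account would predict no change when sensory noise is constant.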

