August 2023, Volume 23, Issue 9 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Visual guessing relies on metacognitive reasoning
Author Affiliations & Notes
  • Caroline Myers
    Johns Hopkins University
  • Chaz Firestone
    Johns Hopkins University
  • Justin Halberda
    Johns Hopkins University
  • Acknowledgements: NSF BCS #2021053 awarded to C.F.
Journal of Vision August 2023, Vol.23, 5346. doi:https://doi.org/10.1167/jov.23.9.5346
      Caroline Myers, Chaz Firestone, Justin Halberda; Visual guessing relies on metacognitive reasoning. Journal of Vision 2023;23(9):5346. https://doi.org/10.1167/jov.23.9.5346.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

If you were shown a color but weren't sure what it was, you'd guess. But what if you weren't shown a color and only believed you were? Traditional models of visual processing assume that observers unsure of what they’ve seen either 1) generate guesses by randomly selecting from a uniform distribution of all possible values, or in extreme cases 2) never guess. Yet, to date, no study has systematically measured whether and how human observers generate guesses. In the present study, adult observers performed a visual working memory task in which they were asked to report the color of a target stimulus presented for a brief (16, 33, 66, or 132 ms) duration before being masked. Critically, we were able to assess observers’ guess responses via the inclusion of 0-ms trials, in which no stimulus appeared. Responses on 0-ms trials were systematically non-uniform and characterized by distinct individual- and group-level preferences for regions of color space, suggesting that rather than responding randomly, guessing observers weight specific feature dimensions strategically, in a way that might reflect prior knowledge about the visual world or one’s own perceptual capacities. To test these possibilities, we measured guess responses in an equivalent orientation task. If guesses reflect a bias toward high-precision values, guesses should favor high-prevalence, high-precision regions of orientation space (horizontals and verticals). However, responses on 0-ms trials were instead characterized by the inverse pattern: observers were more likely to guess inter-cardinal compared to cardinal orientations, reflecting a bias away from high-precision regions. This pattern is consistent with a self-representing strategy in which observers take into account the precision of their own visual processing. Together, our findings suggest that rather than being uniform or nonexistent, guesses are informed by observers’ knowledge of their own perceptual capacity under perceptual uncertainty.
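The contrast drawn above between the traditional uniform-guessing assumption and systematically biased guessing can be illustrated with a toy analysis. The sketch below is not the authors' analysis: it simulates hypothetical 0-ms-trial responses on a 360° color wheel and applies a simple binned chi-square check, where a uniform guesser yields a small statistic and a biased guesser a large one.

```python
import math
import random

def chi_square_uniformity(responses_deg, n_bins=12):
    """Chi-square statistic for deviation from uniformity over
    0-360 deg (e.g., guesses on a circular color wheel)."""
    counts = [0] * n_bins
    for r in responses_deg:
        counts[int(r % 360) * n_bins // 360] += 1
    expected = len(responses_deg) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
# Uniform guesser: the traditional-model assumption.
uniform_guesses = [random.uniform(0, 360) for _ in range(1000)]
# Biased guesser: responses cluster near a preferred hue
# (hypothetical preference at 180 deg).
biased_guesses = [random.gauss(180, 25) % 360 for _ in range(1000)]

print(chi_square_uniformity(uniform_guesses))  # small (near df = 11)
print(chi_square_uniformity(biased_guesses))   # large: non-uniform
```

In practice, circular data like hue or orientation would typically be tested with a circular-statistics method (e.g., a Rayleigh test) rather than naive binning; the binned version is used here only because it is self-contained.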
