September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Predictive processing of upcoming scene views in immersive environments: evidence from continuous flash suppression
Author Affiliations
  • Anna Mynick
    Dartmouth College
  • Michael A. Cohen
    Amherst College
  • Adithi Jayaraman
    Dartmouth College
  • Kala Goyal
    Dartmouth College
  • Caroline E. Robertson
    Dartmouth College
Journal of Vision September 2024, Vol.24, 979. doi:https://doi.org/10.1167/jov.24.10.979
Abstract

Although our visual environment is immersive, we explore it in discrete and fleeting glimpses. How do we overcome our limited field of view to attain a continuous sense of visual space? Previous studies show that memory for a visual stimulus can speed perceptual awareness (Jiang et al., 2007). Here, we used virtual reality (VR) to test whether memory for immersive environments likewise facilitates perceptual awareness of upcoming scene views across head turns. Participants (N=29) first studied immersive, real-world scenes drawn from the local campus in head-mounted VR (Study Phase). In each trial of a subsequent priming task, a studied scene (prime) was presented and then fully occluded. Participants then turned their heads (left/right) toward a target image. The target, presented to the non-dominant eye, was initially masked by a dynamic Mondrian pattern presented to the dominant eye (continuous flash suppression; CFS). Participants' task was to detect the target, which contained either a spatially congruent view of the prime (e.g., the left view following a left head turn) or a spatially incongruent view (e.g., the right view following a left head turn). To ensure true target detection, only half of the target was displayed (a semi-circle), and participants indicated which side of the circle the target was on (left/right). Participants detected incongruent scene views faster than congruent ones (t(28) = 2.27, p = .031), suggesting that memory-based predictions affect the speed of perceptual awareness for scene information across head turns, favoring unexpected over expected input. This interpretation dovetails with predictive processing accounts of vision, wherein top-down predictions suppress responses to expected sensory input, allowing deviations from expectation to be represented (Walsh et al., 2020). More broadly, this work underscores the possibility that memory-based predictions support efficient processing of visual input across fields of view as we move our eyes, heads, and bodies in space.
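The congruent-versus-incongruent comparison reported above is a paired-samples t-test on per-participant detection times. As a minimal sketch of that computation, the snippet below implements the standard paired t statistic, t = mean(d) / (sd(d)/√n), on synthetic reaction times; the data values are invented for illustration and do not come from the study.

```python
import math
from statistics import mean, stdev

def paired_t(cond_a, cond_b):
    """Paired-samples t-test: t = mean(d) / (sd(d) / sqrt(n)), df = n - 1."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # stdev uses n - 1
    return t, n - 1

# Synthetic per-participant detection times (seconds), for illustration only:
# congruent views detected more slowly than incongruent ones.
congruent   = [1.92, 2.10, 1.85, 2.30, 2.05, 1.98]
incongruent = [1.80, 1.95, 1.70, 2.20, 1.90, 1.88]

t, df = paired_t(congruent, incongruent)
print(f"t({df}) = {t:.2f}")
```

With real data one would typically use `scipy.stats.ttest_rel`, which additionally returns the two-tailed p-value.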
