Vision Sciences Society Annual Meeting Abstract  |   August 2023
Representation of event boundaries in the first-person navigation
Author Affiliations & Notes
  • Byunghoon Choi
    Yonsei University
  • Donald Shi Pui Li
    Johns Hopkins University
  • Soojin Park
    Yonsei University
  • Footnotes
    Acknowledgements  This research was supported by NEI grant R01EY026042.
Journal of Vision August 2023, Vol. 23, 5993. https://doi.org/10.1167/jov.23.9.5993
Abstract

As we navigate in our daily lives, we experience a continuous percept of the visual environment, with fragments of seconds seamlessly glued into a continuous stream of visual perception. Seminal work on memory has used films with plots and narratives to show how a continuous stream of perceptual input is chunked into separate events. However, it remains unclear how the visual system organizes this seemingly continuous percept of navigation into blocks of places, spaces, and navigational turning points. In this study, we used naturalistic first-person navigation videos to explore how scene-selective regions represent continuous navigational experience. During fMRI scanning, participants viewed eight six-minute walking travel videos (four indoor, four outdoor) without drastic viewpoint transitions or scripts. We asked whether neural boundaries in scene-selective regions are segmented based on navigation-specific boundaries, such as changes in first-person location, navigational direction and turns, high-level semantic place category, or low-level image statistics between frames. Using a data-driven Hidden Markov Model (HMM) approach, we extracted neural boundaries from each region of interest. Preliminary results (N=4) suggest that boundaries in scene-selective ROIs differ from boundaries obtained from low-level image statistics extracted from the frames. Interestingly, returning to the video time points corresponding to the neural boundaries of scene-selective regions revealed navigationally relevant events of first-person navigation, such as arriving at or leaving a place (e.g., arriving at the next floor). Segmentation of a spatio-temporally continuous visual experience into events may facilitate visually guided navigation.
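As a pointer to how neural boundaries of this kind can be estimated, below is a minimal sketch, assuming the BrainIAK EventSegment implementation of the event-segmentation HMM (the abstract does not name a toolbox); the ROI data, dimensions, and number of events are placeholders, not values from the study.

import numpy as np
from brainiak.eventseg.event import EventSegment

# Hypothetical ROI time series (TRs x voxels) from one scene-selective
# region (e.g., PPA); random data stands in for the real fMRI responses.
n_trs, n_voxels, n_events = 360, 500, 20
roi_data = np.random.randn(n_trs, n_voxels)

# Fit the event-segmentation HMM: the model treats the time series as a
# sequence of stable activity patterns separated by event transitions.
hmm = EventSegment(n_events=n_events)
hmm.fit(roi_data)

# segments_[0] gives, for each TR, the probability of each event state;
# taking the most likely state per TR yields a hard segmentation.
event_per_tr = np.argmax(hmm.segments_[0], axis=1)

# Neural boundaries are the TRs where the most likely event changes; these
# are the time points one would map back onto the videos for inspection.
boundary_trs = np.where(np.diff(event_per_tr) > 0)[0] + 1
print("Estimated neural boundary TRs:", boundary_trs)

In practice the number of events per ROI would presumably be selected by cross-validated model fit rather than fixed in advance, as is common with this approach.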
