Vision Sciences Society Annual Meeting Abstract  |  October 2020
Volume 20, Issue 11  |  Open Access
Representing navigational affordance based on high-level knowledge of scenes
Author Affiliations & Notes
  • Byunghoon Choi
    Yonsei University
  • Michael McCloskey
    Johns Hopkins University
  • Soojin Park
    Yonsei University
  • Footnotes
    Acknowledgements  This work was supported by a National Eye Institute (NEI) grant (R01EY026042) to MM and SP, a National Research Foundation of Korea (NRF) grant (funded by MSIP-2019028919), and a Yonsei University Future-leading Research Initiative grant (2018-22-0184) to SP.
Journal of Vision October 2020, Vol.20, 646. doi:https://doi.org/10.1167/jov.20.11.646

      Byunghoon Choi, Michael McCloskey, Soojin Park; Representing navigational affordance based on high-level knowledge of scenes. Journal of Vision 2020;20(11):646. https://doi.org/10.1167/jov.20.11.646.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

When navigating in everyday life, the visual system constantly needs to estimate which way one can move forward. Navigation processes might use visual scene properties such as paths and walls, or high-level knowledge such as memory of the locked/unlocked status of a door. Recent studies have suggested that the scene-selective Occipital Place Area (OPA) represents the navigational affordances of a scene, such as the direction of paths or the distance to a boundary. What levels of navigational affordance information does the OPA represent? Here we used fMRI to test whether the OPA can use high-level knowledge cues, such as colored signs with learned meanings, to represent the navigational affordance of an environment. We constructed views of artificial rooms with one possible exit, on the left or the right. In the low-level cue condition, the room had an exit on one side and a wall on the other. In the high-level cue condition, the room had a door on both sides, with a small colored sign above each door indicating which was unlocked (e.g., blue = unlocked, yellow = locked); the colors indicating locked vs. unlocked status were counterbalanced across participants. Using two-way SVM classification, we asked whether multi-voxel patterns in scene-selective regions represent path direction based on low- or high-level navigational cues (N = 14). First, we found significantly above-chance classification accuracy for path direction based on low-level cues in the OPA, but not in other scene-selective regions, consistent with previous suggestions of a specialized role for the OPA in computing navigational affordances. Crucially, we also found significantly above-chance classification accuracy in the OPA for path direction based on the high-level color signs. This result provides the first direct evidence that the OPA can actively utilize high-level knowledge to compute the navigational affordances of scenes, representing navigationally relevant information that cannot be computed from the visual properties of a scene alone.
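The abstract does not include analysis code. As a rough illustration of the decoding approach it describes, the sketch below runs a two-way linear SVM on simulated multi-voxel patterns from a single region of interest, with leave-one-run-out cross-validation. Every specific here (the simulated data, voxel and run counts, linear kernel, and scikit-learn as the toolkit) is an assumption for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of a two-way SVM (MVPA) decoding analysis like the one
# described in the abstract. All specifics -- simulated data, voxel/run
# counts, linear kernel, leave-one-run-out cross-validation -- are
# assumptions for illustration, not the authors' actual pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_runs, trials_per_run, n_voxels = 8, 10, 200  # hypothetical dimensions

# Simulated multi-voxel patterns from one ROI (e.g., the OPA): each trial
# is a vector of voxel responses; labels code the path direction.
labels = rng.integers(0, 2, size=n_runs * trials_per_run)   # 0 = left exit, 1 = right exit
signal = np.outer(labels - 0.5, rng.normal(size=n_voxels))  # weak class-dependent signal
patterns = signal + rng.normal(scale=2.0, size=(n_runs * trials_per_run, n_voxels))
runs = np.repeat(np.arange(n_runs), trials_per_run)         # run index for each trial

# Linear SVM decoding path direction, cross-validated across runs so that
# training and test patterns never come from the same run.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, patterns, labels, groups=runs, cv=LeaveOneGroupOut())

print(f"Decoding accuracy: {scores.mean():.3f} (chance = 0.5)")
```

In a study like this, one such decoder per region and condition would yield a per-participant accuracy, and the group of accuracies would then be tested against the 50% chance level; those statistical details are likewise not specified in the abstract.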
