Byunghoon Choi, Michael McCloskey, Soojin Park; Representing navigational affordance based on high-level knowledge of scenes. Journal of Vision 2020;20(11):646. doi: https://doi.org/10.1167/jov.20.11.646.
When navigating in everyday life, the visual system constantly needs to estimate which way to move forward. Navigation processes might use visual scene properties, such as paths and walls, or high-level knowledge, such as memory of the locked/unlocked status of a door. Recent studies suggested that the scene-selective Occipital Place Area (OPA) represents navigational affordances of a scene, such as the direction of paths or the distance to a boundary. What levels of navigational affordance information does OPA represent? Here we used fMRI to test whether OPA can use high-level knowledge cues, such as colored signs with learned meanings, to represent the navigational affordance of an environment. We constructed views of artificial rooms with one possible exit, on the left or right. In the low-level cue condition, the room had an exit on one side and a wall on the other side. In the high-level cue condition, the room had a door on both sides, with a small colored sign above each door indicating which was unlocked (e.g., blue = unlocked, yellow = locked). Colors indicating locked vs. unlocked status were counterbalanced across participants. Using two-way SVM classification, we asked whether multi-voxel patterns in scene-selective regions could represent path direction based on low- or high-level navigational cues (N = 14). First, we found significantly above-chance classification accuracy for path direction based on low-level cues in the OPA, but not in other scene-selective regions, consistent with previous suggestions of a specialized role for OPA in navigational affordance computation. Crucially, we also found significantly above-chance classification accuracy based on high-level color signs in the OPA. This result provides the first direct evidence that OPA can actively utilize high-level knowledge to compute navigational affordances of scenes, representing navigationally relevant information that cannot be computed from the visual properties of the scene alone.
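The two-way SVM analysis described above can be sketched in scikit-learn. This is a minimal illustration, not the authors' pipeline: the voxel patterns, labels, and run structure below are synthetic stand-ins for the ROI data, and the weak "signal" injected into a few voxels is an assumption made so the example classifies above chance.

```python
# Sketch of a two-way (binary) SVM MVPA classification with
# leave-one-run-out cross-validation, a standard fMRI analysis scheme.
# All data here are synthetic; only the analysis structure is illustrative.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 8, 10, 100

# Labels: 0 = left path, 1 = right path; runs group trials for CV.
labels = np.tile(np.repeat([0, 1], trials_per_run // 2), n_runs)
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Synthetic multi-voxel patterns with a small class-dependent signal
# in the first 10 voxels (an assumption, so accuracy exceeds chance).
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))
X[labels == 1, :10] += 0.8

# Train on all runs but one, test on the held-out run, rotate.
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, labels, groups=runs,
                         cv=LeaveOneGroupOut())
print(f"mean accuracy: {scores.mean():.2f} (chance = 0.50)")
```

In the study, above-chance mean accuracy in a region would be taken as evidence that its voxel patterns carry path-direction information; the same scheme applies whether the direction is cued by low-level layout or by the learned color signs.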