August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Combined representation of mid-level visual features in the scene-selective cortex
Author Affiliations & Notes
  • Jisu Kang
    Department of Psychology, Yonsei University
  • Soojin Park
    Department of Psychology, Yonsei University
  • Footnotes
    Acknowledgements  This research was supported by an NEI grant (R01EY026042) and an NRF grant (funded by MSIP-2019028919) to SP.
Journal of Vision, August 2023, Vol. 23, 5965.

      Jisu Kang, Soojin Park; Combined representation of mid-level visual features in the scene-selective cortex. Journal of Vision 2023;23(9):5965.

      © ARVO (1962-2015); The Authors (2016-present)

Visual features of separable dimensions, like color and shape, can conjoin to represent an integrated object. We investigated how mid-level visual features bind to form a complex visual scene, focusing on two features important for visually guided navigation: direction and distance. Previous work has shown that the directions and distances of navigable paths are coded in the occipital place area (OPA). Here we tested how these separate features are concurrently represented in the OPA. Participants (N = 20) viewed eight types of scenes in the fMRI scanner, defined by a 2 × 2 × 2 design (number of paths × direction × distance). In single-path scenes, the path ran either to the left or to the right; in double-path scenes, both directions were present. Each path contained a glass wall located either near or far, changing the navigational distance. We hypothesized that the OPA represents each path as a combination of its direction and distance. We analyzed the OPA's multi-voxel pattern similarities and computed voxel-wise linear combinations of scene pairs for comparison. Results showed that the OPA's representations of single-path scenes were similar to those of other single-path scenes sharing either the same direction or the same distance. Representations of double-path scenes were based on a combination of the two corresponding single-path units: a double-path scene (e.g., Left-Near Right-Far) was best modeled as the average of the two single-path scenes with the specific direction-distance pairings (Left-Near + Right-Far), rather than as a pooled representation of all features (Left + Near + Right + Far). These results suggest that when navigational distance is unambiguously attributed to a single path direction, the OPA represents distance and direction independently. In contrast, in multiple-path scenes the OPA combines the two features to form integrated representations of path units.
Altogether, these results suggest that visually guided navigation may be supported by the OPA's ability to quickly combine multiple features relevant for navigation and represent a navigational file.
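The model comparison described above can be illustrated with a toy simulation. This is not the authors' analysis code, and all patterns below are synthetic assumptions: we fabricate single-path voxel patterns, build a hypothetical double-path pattern from two of them, and then check that it correlates more strongly with the average of its two corresponding single-path units than with a pooled average over all four single-path scenes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100  # assumed ROI size, for illustration only

# Hypothetical single-path response patterns (e.g., OPA voxels),
# synthesized as independent noise for this sketch.
left_near = rng.normal(size=n_voxels)
right_far = rng.normal(size=n_voxels)
left_far = rng.normal(size=n_voxels)
right_near = rng.normal(size=n_voxels)

# Simulated observed pattern for a double-path scene (Left-Near Right-Far):
# the average of its two path units plus measurement noise, mirroring the
# path-unit averaging account.
observed = (left_near + right_far) / 2 + 0.1 * rng.normal(size=n_voxels)

# Model 1: average of the two corresponding single-path units.
path_unit_model = (left_near + right_far) / 2

# Model 2: pooled representation of all features (Left+Near+Right+Far),
# approximated here by averaging all four single-path patterns.
pooled_model = (left_near + right_far + left_far + right_near) / 4

r_unit = np.corrcoef(observed, path_unit_model)[0, 1]
r_pooled = np.corrcoef(observed, pooled_model)[0, 1]
print(f"path-unit model r = {r_unit:.2f}, pooled model r = {r_pooled:.2f}")
```

Under these assumptions, the path-unit model yields the higher pattern correlation, which is the qualitative signature the abstract reports for double-path scenes.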

