December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Can a saliency model using feature sets derived from cityscapes predict cultural differences in visual search asymmetry?
Author Affiliations & Notes
  • Yoshiyuki Ueda
    Kyoto University
  • Shohei Kato
    Kyoto University
  • Footnotes
    Acknowledgements  JSPS KAKENHI #19K14472 & #19K21814
Journal of Vision December 2022, Vol.22, 4405. doi:https://doi.org/10.1167/jov.22.14.4405
Abstract

Visual search asymmetries in line length (long vs. short), line orientation (tilted vs. vertical), and circle-with-line versus circle vary in presence and magnitude depending on where they are measured (e.g., Canada, Japan, and the United States; see Ueda et al., 2018). These results indicate that our visual cognition is shaped by the environments around us. What induces these differences? Several previous studies suggest that they may be due to the cityscapes and orthographies of each location. Visual attention and eye movements have been shown to change after viewing Japanese or American cityscapes for a certain amount of time (Miyamoto et al., 2006; Ueda & Komiya, 2012). A study dealing directly with search asymmetry showed that visual saliency calculated with the Attention based on Information Maximization model (AIM; Bruce & Tsotsos, 2009), using a set of visual features derived from orthographic characters (alphabetic letters, Japanese hiragana, and Kanji characters), produced different saliency levels for line length asymmetry (Saiki, 2020). In this study, we used the AIM model to investigate whether differences in the scenery of different locations can produce different visual saliency in search asymmetry. In the original AIM model, visual feature sets were derived from a variety of scene images; instead, we collected more than 10,000 photographs of cities across Japan and derived visual feature sets from these images alone using independent component analysis (ICA). Comparing the visual saliency computed with features derived from the original image set against that computed with features derived from the Japanese city images, we obtained consistent results for search asymmetry across image sets. These results suggest that the AIM model with feature sets derived from scenery can robustly predict visual search asymmetry, and that cultural differences in search asymmetry may emerge not from the cityscapes of each culture but rather from long-term experience with orthographic letters.
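For readers unfamiliar with the pipeline, the following Python sketch illustrates how an AIM-style feature set can be learned from a collection of cityscape photographs and used to score saliency as self-information. It is a minimal illustration under stated assumptions, not the authors' implementation: the patch size, the number of ICA components, the histogram-based likelihood estimate, and all function names are illustrative choices.

import numpy as np
from sklearn.decomposition import FastICA

PATCH = 11          # patch side length in pixels (illustrative assumption)
N_COMPONENTS = 25   # number of ICA basis functions (illustrative assumption)

def sample_patches(images, n_patches=50000, seed=0):
    """Draw random grayscale patches from a list of 2-D image arrays."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - PATCH)
        x = rng.integers(img.shape[1] - PATCH)
        patches.append(img[y:y + PATCH, x:x + PATCH].ravel())
    return np.asarray(patches, dtype=np.float64)

def learn_feature_set(images):
    """Learn an ICA feature set from scene patches (an AIM-style basis)."""
    X = sample_patches(images)
    ica = FastICA(n_components=N_COMPONENTS, whiten="unit-variance",
                  max_iter=1000, random_state=0)
    ica.fit(X)
    return ica

def saliency_map(image, ica):
    """Score each location by the self-information (-log p) of its ICA response."""
    H, W = image.shape
    coords, patches = [], []
    for y in range(H - PATCH):
        for x in range(W - PATCH):
            coords.append((y, x))
            patches.append(image[y:y + PATCH, x:x + PATCH].ravel())
    S = ica.transform(np.asarray(patches, dtype=np.float64))
    # Estimate each component's coefficient likelihood with a histogram and
    # combine components assuming independence (a simplification of AIM).
    logp = np.zeros(len(patches))
    for k in range(S.shape[1]):
        hist, edges = np.histogram(S[:, k], bins=100, density=True)
        idx = np.clip(np.digitize(S[:, k], edges) - 1, 0, hist.size - 1)
        logp += np.log(hist[idx] + 1e-12)
    sal = np.zeros((H, W))
    for (y, x), info in zip(coords, -logp):
        sal[y + PATCH // 2, x + PATCH // 2] = info
    return sal

Under these assumptions, two feature sets, one learned from a generic set of scene photographs and one from Japanese cityscape photographs, could be applied to the same search displays and the resulting target-versus-distractor saliency compared, mirroring the comparison described in the abstract.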
