Journal of Vision, September 2024, Volume 24, Issue 10 (Open Access)
Vision Sciences Society Annual Meeting Abstract
VowelWorld 2.0: Using artificial scenes to study semantic and syntactic scene guidance
Author Affiliations & Notes
  • Yuri Markov
    Goethe University Frankfurt, Scene Grammar Lab, Germany
  • Melissa Le-Hoa Vo
    Goethe University Frankfurt, Scene Grammar Lab, Germany
  • Jeremy M Wolfe
    Brigham and Women’s Hospital, Harvard Medical School
  • Footnotes
    Acknowledgements  This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 222641018 SFB/TRR 135 TP C7 granted to MLHV and the Hessisches Ministerium für Wissenschaft und Kunst (HMWK; project ‘The Adaptive Mind’).
Journal of Vision September 2024, Vol.24, 920. doi:https://doi.org/10.1167/jov.24.10.920

Citation: Yuri Markov, Melissa Le-Hoa Vo, Jeremy M Wolfe; VowelWorld 2.0: Using artificial scenes to study semantic and syntactic scene guidance. Journal of Vision 2024;24(10):920. https://doi.org/10.1167/jov.24.10.920.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Scene guidance is difficult to investigate in realistic scenes because complex, realistic images are hard to control systematically. Parameters like set size are often ambiguous in real or even VR scenes. We therefore created VowelWorld 2.0, a new version of the VowelWorld paradigm (Vo & Wolfe, 2013), in which we control various parameters of a highly artificial "scene". Scenes were 20x20 grids of colored cells, 120 of which contained letters. Participants searched for a vowel, present on 67% of trials. Each scene contained three big disks (2x2 cells) bearing consonants. These served as "anchor objects", which are known to predict target locations in real-world search (Vo, 2021). An additional 96 cells featured rings that were grouped into larger analogs of surfaces. A vowel's placement could follow three rules. Color rule (semantic): certain targets were associated with one background color "gist" (e.g., A's appear in red scenes). Structure rule (syntactic): vowels were placed near or inside the rings. Anchor rule (syntactic): vowels were placed close to a big disk containing the alphabetically neighboring consonant (e.g., "B" implies "A" nearby). Two vowels followed all three rules, two followed only the color and structure rules, and one was placed randomly. On half of the trials, participants were precued with a specific vowel; otherwise, they searched for any vowel. For the first three blocks, participants attempted to learn the rules from experience; we then explained the rules explicitly. Participants failed to fully learn the rules from experience alone but did benefit from the learned anchor rule (shorter RTs). Once the rules were explained, performance markedly sped up for vowels that followed only the color and structure rules, whereas anchor-rule vowels showed less improvement over initial learning. Knowing the rules also had a major impact on when participants terminated target-absent trials. Future work will systematically vary the predictability of the different rules to test the circumstances under which rule learning becomes more or less optimal.
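Since the abstract specifies the design parameters concretely (20x20 grid, 120 letter cells, 96 ring cells, three 2x2 anchor disks, 67% target presence), a brief sketch of how such a scene might be generated may help make the paradigm concrete. The following Python snippet is a hypothetical illustration, not the authors' stimulus code: the color-vowel mapping (COLOR_RULE), the consonant-vowel anchor mapping (ANCHOR_RULE), and the placement heuristics are assumptions; only the numeric parameters come from the abstract.

import random
import string

# Hypothetical sketch of VowelWorld 2.0-style scene generation.
# Numeric parameters follow the abstract; rule mappings are assumed.
GRID = 20              # scene is a 20x20 grid of colored cells
N_LETTER_CELLS = 120   # cells containing letters
N_RING_CELLS = 96      # cells containing rings ("surface" analogs)
GIST_COLORS = ["red", "green", "blue", "yellow"]
COLOR_RULE = {"red": "A", "green": "E", "blue": "I", "yellow": "O"}  # color rule (assumed)
ANCHOR_RULE = {"B": "A", "F": "E", "J": "I", "P": "O"}               # anchor rule (assumed)
CONSONANTS = [c for c in string.ascii_uppercase if c not in "AEIOU"]

def neighbors(cell):
    """Cells 8-connected to a given (row, col) cell, clipped to the grid."""
    r, c = cell
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < GRID and 0 <= c + dc < GRID]

def make_scene(target_present=True):
    gist = random.choice(GIST_COLORS)
    free = [(r, c) for r in range(GRID) for c in range(GRID)]
    random.shuffle(free)

    # Three 2x2 "anchor" disks (stored by top-left corner), each
    # labeled with one of the rule consonants.
    anchors = [((random.randint(0, GRID - 2), random.randint(0, GRID - 2)), cons)
               for cons in random.sample(list(ANCHOR_RULE), 3)]

    # Ring cells standing in for the grouped "surface" regions.
    rings = free[:N_RING_CELLS]

    letters = {}
    if target_present:
        # Place the gist-predicted vowel next to an anchor that also
        # predicts it (color + anchor rules) when one exists; otherwise
        # next to a ring (color + structure rules).
        vowel = COLOR_RULE[gist]
        anchor_sites = [pos for pos, cons in anchors if ANCHOR_RULE[cons] == vowel]
        site = anchor_sites[0] if anchor_sites else random.choice(rings)
        letters[random.choice(neighbors(site))] = vowel

    # Fill the remaining letter cells with consonant distractors.
    for cell in free:
        if len(letters) >= N_LETTER_CELLS:
            break
        letters.setdefault(cell, random.choice(CONSONANTS))

    return {"gist": gist, "anchors": anchors, "rings": rings, "letters": letters}

# 67% target-present trials, as in the abstract.
scene = make_scene(target_present=random.random() < 0.67)
print(scene["gist"], len(scene["letters"]), "letter cells")

The sketch collapses the abstract's full placement scheme (two vowels following all three rules, two following only the color and structure rules, one placed randomly) to a single target per scene; varying which rules constrain the target's location would follow the same pattern.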
