Abstract
Scene guidance is difficult to investigate in realistic scenes because complex, realistic images are hard to control systematically. Parameters like set size are often ambiguous in real or even VR scenes. We therefore created VowelWorld 2.0, a new version of VowelWorld (Vo & Wolfe, 2013), in which we control various parameters of a highly artificial “scene”. Scenes are 20x20 grids of colored cells, 120 of which contain letters. Participants searched for a vowel, present on 67% of trials. Each scene contained three big disks (2x2 cells) bearing consonants. These served as “anchor objects”, which are known to predict target locations in real-world search (Vo, 2021). An additional 96 cells featured rings, which were grouped into larger analogs of surfaces. A vowel’s placement could follow three rules. Color rule (semantic): certain targets were associated with one background color “gist” (e.g., A’s appear in red scenes). Structure rule (syntactic): vowels were placed near or inside the small rings. Anchor rule (syntactic): vowels were placed close to a big disk containing an associated consonant (e.g., “B” implies “A”). Two vowels followed all three rules, two followed only the color and structure rules, and one was placed randomly. On half of the trials, participants were precued with a specific vowel; otherwise, they searched for any vowel. For the first three blocks, participants attempted to learn the rules from experience. Then, we explained the rules. Participants failed to fully learn the rules on their own but did benefit from the learned anchor rule (shorter RTs). Knowing the rules markedly speeded performance for vowels that followed only the color and structure rules. Anchor-rule vowels showed less improvement beyond initial learning. Knowing the rules also had a major impact on the termination of target-absent trials. Future work will systematically vary the predictability of the different rules to test under which circumstances rule learning becomes more or less optimal.
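The scene parameters above can be made concrete with a minimal sketch. This is not the authors' stimulus code; the color-rule and anchor-rule pairings, the anchor-adjacency neighborhood, and all function names are illustrative assumptions, with only the grid size, cell counts, and target-present rate taken from the abstract.

```python
import random

# Illustrative sketch of a VowelWorld-style scene (assumed details,
# not the authors' actual generator). Parameters from the abstract:
GRID = 20                  # 20x20 grid of colored cells
N_RING_CELLS = 96          # cells with rings, grouped into "surface" analogs
N_ANCHORS = 3              # big 2x2 disks bearing consonants
P_TARGET_PRESENT = 0.67    # a vowel is present on 67% of trials

# Hypothetical rule pairings (the abstract gives only "B" implies "A"):
COLOR_RULE = {"A": "red", "E": "green"}    # color rule: vowel <-> scene gist
ANCHOR_RULE = {"B": "A", "F": "E"}         # anchor rule: consonant -> vowel

def make_scene(rng):
    """Return a toy scene dict with anchors, ring cells, and a vowel location."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)]
    rng.shuffle(cells)
    anchors = [cells.pop() for _ in range(N_ANCHORS)]   # top-left cell of each 2x2 disk
    rings = [cells.pop() for _ in range(N_RING_CELLS)]
    scene = {"color": rng.choice(["red", "green", "blue", "yellow"]),
             "anchors": anchors, "rings": rings, "vowel": None}
    if rng.random() < P_TARGET_PRESENT:
        # Anchor rule: place the vowel in a cell adjacent to the first disk.
        ax, ay = anchors[0]
        nearby = [(ax + dx, ay + dy)
                  for dx in (-1, 0, 1, 2) for dy in (-1, 0, 1, 2)
                  if 0 <= ax + dx < GRID and 0 <= ay + dy < GRID]
        scene["vowel"] = rng.choice(nearby)
    return scene
```

A scene is generated per trial, e.g. `make_scene(random.Random(0))`; seeding the generator makes a given scene reproducible across participants.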