September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Exploring the limits of relational guidance using categorical and non-categorical text cues
Author Affiliations
  • Steven Ford
    University of Central Florida
  • Younha Collins
    University of Central Florida
  • Daniel Go
    University of Central Florida
  • Joseph Schmidt
    University of Central Florida
Journal of Vision September 2024, Vol. 24, 1105. https://doi.org/10.1167/jov.24.10.1105
Abstract

Objects in the environment do not exist in isolation; they exist relative to other objects (your wallet may be to the right of your keys). Recent work suggests that following a pictorial target preview, spatial relationships between objects do guide search, as measured by the proportion of trials in which the target pair is fixated first (Ford et al., 2021; Ford et al., in revision). To parameterize this finding, we conducted three experiments assessing the oculomotor guidance of attention generated by spatial relationships in response to text cues. In all three experiments, participants searched for arbitrary object pairs in particular spatial arrangements (e.g., "fish above car") amongst other pairs of random objects, and we compared performance between matched search displays (the target pair matched the cued spatial relationship) and swapped search displays (the target pair's spatial relationship was reversed). Experiment one investigated relational guidance using categorical text cues, with one or both objects cued. Experiment two also used categorical text cues, but two objects were always cued, and the search array contained both, one, or neither of the cued objects in matched or swapped arrangements. Relational guidance did not emerge in either experiment, suggesting that relational guidance may rely on highly specific visual features. To test this, we conducted a final experiment in which participants memorized a limited set of targets so that they could verbally describe each object's specific visual features; they were then given text cues pertaining to the specific targets they had memorized. In this case, relational information impacted oculomotor search guidance. The findings suggest that relational guidance can extend beyond pictorial previews but depends on well-learned visual features that can be precisely coded. The variance in visual features that results from cueing a category of objects may eliminate relational guidance.
