September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
At what stage of the visual processing hierarchy is visual search relational and context-dependent vs. feature-specific?
Author Affiliations & Notes
  • Stefanie I. Becker
    School of Psychology, The University of Queensland, Brisbane, Australia
  • Aimee Martin
    School of Psychology, The University of Queensland, Brisbane, Australia
  • Nonie J. Finlayson
    School of Psychology, The University of Queensland, Brisbane, Australia
Journal of Vision September 2019, Vol. 19, 132b. https://doi.org/10.1167/19.10.132b
Abstract

Previous studies have shown that attention, eye movements, and visual short-term memory operate on (partly) context-dependent representations of stimuli. Specifically, when observers have to search for a target with particular features (e.g., medium orange), attention is usually tuned to the target's relative size and colour (e.g., largest, reddest; ‘relational search’) rather than its physical features (e.g., medium, orange). Attention can also be tuned to the specific features of the target, but feature-specific search is more effortful and slower. Importantly, it is currently unknown whether information about relative features is derived from lower-level neurons that respond to specific features, or whether visual inputs are first encoded relationally, with feature-specific codes extracted later. The present study addressed this question using functional magnetic resonance imaging (fMRI) in a colour search task in which we enforced relational vs. feature-specific search. Our findings support the first possibility, with inputs being processed first in a feature-specific manner and only later relationally: in V1, repetition suppression was most pronounced in the feature-specific condition, indicating that these neurons respond to specific feature values. In V2, repetition suppression was equally strong for both conditions, but in later areas (V3, parietal, and frontal areas) the pattern reversed, with stronger repetition suppression for relational search. Surprisingly, these results were obtained even when both the target and nontarget colours changed on a trial-by-trial basis in relational search, whereas only the nontarget colour changed in feature-specific search. These findings show that repetition suppression is not always tightly linked to repetitions of the stimulus input, but can depend on top-down search goals, especially during later processing stages. Moreover, while V1 seems to respond to specific features, relational information is apparently derived as early as V3 and dominates throughout the rest of the visual processing hierarchy. This dominance may explain why relational search is more efficient and generally preferred to feature-specific search.
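To make the contrast between the two search strategies concrete, here is a minimal, purely illustrative Python sketch; the item labels, "redness" values, and function names are invented for exposition and are not the authors' stimuli or analysis. Feature-specific search matches a stored exact feature value, while relational search selects the extreme feature in the relevant direction (e.g., the reddest item); the sketch also shows why relational search tolerates the trial-by-trial colour changes described above.

    # Illustrative sketch only: items are (label, redness) pairs; values and
    # names are assumptions, not the study's actual stimuli or analysis.
    items = [("nontarget-1", 0.30), ("nontarget-2", 0.45), ("target", 0.60)]

    def feature_specific_search(items, template):
        # Pick the item whose feature is closest to a stored exact value.
        return min(items, key=lambda item: abs(item[1] - template))

    def relational_search(items):
        # Pick the item with the extreme feature in the relevant direction
        # (here "reddest"), regardless of its exact value.
        return max(items, key=lambda item: item[1])

    print(feature_specific_search(items, 0.60))  # picks ("target", 0.60)
    print(relational_search(items))              # picks ("target", 0.60)

    # When all colours shift (cf. trial-by-trial colour changes), a fixed
    # feature template can miss the target, but relational selection cannot:
    shifted = [(label, redness + 0.20) for label, redness in items]
    print(feature_specific_search(shifted, 0.60))  # picks a nontarget (now closest to 0.60)
    print(relational_search(shifted))              # still picks the target (reddest)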

Acknowledgement: Australian Research Council (ARC) 