Vision Sciences Society Annual Meeting Abstract  |  December 2022
Volume 22, Issue 14
Open Access
Context effects on object recognition in real world environments
Author Affiliations & Notes
  • Victoria Nicholls
    University of Cambridge
  • Kyle Alsbury-Nealy
    University of Toronto
  • Alexandra Krugliak
    University of Cambridge
  • Alex Clarke
    University of Cambridge
  • Footnotes
    Acknowledgements  This work was supported by a Royal Society and Wellcome Trust Sir Henry Dale Fellowship to AC (211200/Z/18/Z)
Journal of Vision December 2022, Vol.22, 3252. doi:https://doi.org/10.1167/jov.22.14.3252
      Victoria Nicholls, Kyle Alsbury-Nealy, Alexandra Krugliak, Alex Clarke; Context effects on object recognition in real world environments. Journal of Vision 2022;22(14):3252. https://doi.org/10.1167/jov.22.14.3252.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The environment in which objects are located impacts recognition. This occurs through initial coding of global scene context, which enables the generation of predictions about the objects likely to appear in the environment (Bar, 2004; Trapp & Bar, 2015). When correct, these predictions facilitate object recognition; when they are violated, object recognition is impeded, as shown by slower reaction times and larger N300/N400 ERP components (Mudrik et al., 2010, 2014; Lauer et al., 2020). The majority of research on object recognition and visual contexts has been conducted in controlled laboratory settings, where objects and scenes often appear simultaneously. In the real world, however, the environment is relatively stable over time while objects come and go. Research in real-world environments is the ultimate test of how context changes our perceptions, and is fundamental to determining how we understand what we see. In this research, we asked how visual context influences object recognition in real-world settings, using a combination of mobile EEG (mEEG) and augmented reality (AR). During the experiment, participants approached AR arrows placed either in an office or an outdoor environment while mEEG was recorded. When participants reached an arrow, it changed colour to indicate that a button could be pressed, which then revealed an object that was either congruent or incongruent with the environment. We analysed the ERP data (time-locked to the appearance of the objects) with hierarchical generalised linear mixed models, with congruency as a fixed factor and object and participant as random factors. As in laboratory experiments, we found that scene-object incongruence impeded object recognition, reflected in larger N300/N400 amplitudes. These findings suggest that visual contexts constrain our predictions of likely objects even in real-world environments, helping to bridge laboratory research and real-life situations.
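The mixed-model analysis described in the abstract can be sketched as follows. This is an illustrative reconstruction on simulated data, not the authors' code: the variable names (`amplitude`, `congruent`, `participant`, `object`), the simulated effect sizes, and the use of statsmodels are all assumptions. It fits ERP amplitude on congruency, with participant as the grouping factor and object as an additional variance component.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate single-trial ERP amplitudes: hypothetical data, with incongruent
# trials shifted more negative (larger N300/N400), plus random offsets per
# participant and per object.
rng = np.random.default_rng(0)
rows = []
for p in range(10):                      # participants
    p_off = rng.normal(0, 1.0)           # participant random offset
    for o in range(8):                   # objects
        o_off = rng.normal(0, 0.5)       # object random offset
        for congruent in (0, 1):
            amp = -2.0 * (1 - congruent) + p_off + o_off + rng.normal(0, 1.0)
            rows.append(dict(participant=p, object=o,
                             congruent=congruent, amplitude=amp))
df = pd.DataFrame(rows)

# Mixed model: congruency as fixed factor; participant as grouping factor,
# object as a variance component (statsmodels nests components within groups,
# so this approximates, rather than exactly matches, fully crossed effects).
model = smf.mixedlm(
    "amplitude ~ congruent",
    df,
    groups="participant",
    vc_formula={"object": "0 + C(object)"},
)
result = model.fit()
print(result.summary())
```

Formula-based mixed-model interfaces such as R's lme4 (`amplitude ~ congruent + (1|participant) + (1|object)`) express the same design with fully crossed random intercepts.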
