Abstract
The environment in which objects are located impacts their recognition. This occurs through initial coding of global scene context, which enables the generation of predictions about the objects likely to be present in the environment (Bar, 2004; Trapp & Bar, 2015). When correct, these predictions facilitate object recognition; when they are violated, object recognition is impeded, as shown by slower reaction times (RTs) and larger N300/N400 ERP amplitudes (Mudrik et al., 2010, 2014; Lauer et al., 2020). The majority of research on object recognition and visual contexts has been conducted in controlled laboratory settings, where objects and scenes are often presented simultaneously. In the real world, however, the environment is relatively stable over time while objects come and go. Research in real-world environments therefore provides a critical test of how context changes our perceptions and is fundamental to determining how we understand what we see. In this research, we asked how visual context influences object recognition in real-world settings, using a combination of mobile EEG (mEEG) and augmented reality (AR). During the experiment, participants approached AR arrows placed either in an office or an outdoor environment while mEEG was recorded. When a participant reached an arrow, it changed colour to indicate that a button could be pressed, which then revealed an object that was either congruent or incongruent with the environment. We analysed the ERP data (time-locked to the appearance of the objects) with hierarchical generalised linear mixed models, with congruency as a fixed factor and object and participant as random factors. As in laboratory experiments, we found that scene-object incongruence impeded object recognition, as shown by larger N300/N400 amplitudes. These findings suggest that visual contexts constrain our predictions of likely objects even in real-world environments, helping to bridge research in laboratory and real-life situations.
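As a rough sketch of the model structure described above (the exact error family, link function, and random-effects structure are not specified in this abstract), a minimal version with Gaussian errors and crossed random intercepts could be written as:

\[
\text{Amplitude}_{spo} = \beta_0 + \beta_1\,\text{Congruency}_{spo} + u_p + v_o + \varepsilon_{spo},
\]
\[
u_p \sim \mathcal{N}(0, \sigma_p^2), \quad v_o \sim \mathcal{N}(0, \sigma_o^2), \quad \varepsilon_{spo} \sim \mathcal{N}(0, \sigma^2),
\]

where \(s\) indexes single trials, \(p\) participants, and \(o\) objects; \(\beta_1\) is the fixed effect of scene-object congruency, and \(u_p\) and \(v_o\) are random intercepts for participants and objects, respectively.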