Abstract
Our visual environment impacts multiple aspects of cognition, including perception, memory, and attention. However, to understand the neural underpinnings of cognition, studies have traditionally removed or controlled the external environment. As a result, we have a limited understanding of neurocognitive processes beyond the lab. Here, to bridge the gap between studying cognition in the classic lab setting and in real-world environments, we used mobile EEG (mEEG) together with augmented reality (AR), which allows some control over perception to be maintained by placing virtual objects into the real world. In this research, we aimed to validate our AR and mEEG setup using a well-characterised cognitive response: the face inversion effect. Participants viewed upright and inverted faces in three settings: (1) a lab-based setting, (2) walking through an indoor environment while viewing photos of faces, and (3) walking through an indoor environment while viewing virtual faces. Results show greater low-frequency activity for inverted compared to upright faces in all three settings, demonstrating that cognitively relevant signals can be extracted with mEEG and AR paradigms. Further, we present a potential route for exploring the link between dynamic environments and EEG. Low- and mid-level visual features were extracted, using a computational model of visual cortex, from head-mounted videos recorded while participants walked through an outdoor environment, and these features were then statistically related to the continuous EEG. As expected, activity at posterior electrodes correlated with the video features, highlighting a framework for relating continuous perception to continuous neural activity. Together, this research helps pave the way towards exploring neurocognitive processes in real-world environments.
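
To make the feature-to-EEG framework concrete, the sketch below illustrates one possible way to relate frame-wise visual features from a head-mounted video to continuous EEG. It is a minimal illustration, not the pipeline used in this study: it substitutes simple gradient-based edge energy for the features of a computational model of visual cortex, and the file name, EEG sampling rate, and electrode selection are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): compute a crude low-level visual
# feature per video frame, resample it to the EEG sampling rate, and correlate
# it with a single EEG channel. All inputs are illustrative assumptions.
import cv2
import numpy as np
from scipy.stats import pearsonr


def frame_edge_energy(frame):
    """Mean gradient magnitude of a grayscale frame (stand-in low-level feature)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(np.mean(np.hypot(gx, gy)))


def video_feature_series(path):
    """Return one feature value per frame plus the video frame rate."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    feats = []
    ok, frame = cap.read()
    while ok:
        feats.append(frame_edge_energy(frame))
        ok, frame = cap.read()
    cap.release()
    return np.array(feats), fps


def correlate_with_eeg(features, fps, eeg_channel, eeg_sfreq):
    """Interpolate the frame-wise feature to EEG time points and correlate."""
    t_video = np.arange(len(features)) / fps
    t_eeg = np.arange(len(eeg_channel)) / eeg_sfreq
    feat_resampled = np.interp(t_eeg, t_video, features)
    r, p = pearsonr(feat_resampled, eeg_channel)
    return r, p


# Illustrative usage (file name, data, and channel index are placeholders):
# feats, fps = video_feature_series("headcam_walk.mp4")
# r, p = correlate_with_eeg(feats, fps, eeg_data[posterior_channel_idx], sfreq)
```

Under this framing, repeating the correlation across electrodes would be expected to yield stronger feature-related correlations at posterior sites, in line with the result reported above.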