Abstract
Visual recognition is a phenomenon that seems to occur almost instantaneously. However, this is just an impression: not only does it require hundreds of milliseconds of processing, but information from the world must also be sampled over tens of milliseconds. This means that brain activity related to the recognition of an object is in fact composed of the brain responses to information sampled in different time windows. Furthermore, we can expect activity in response to different time windows to differ, partly because different features are attended and used at different moments during recognition, and because information perceived earlier must be maintained longer to be integrated with information perceived later. In this study, we aimed to decompose brain activity according to the sampling moment of information. To do so, we randomly sampled the main face features across 200 ms on each trial while subjects performed a gender or expression recognition task and while their EEG activity was recorded. We then reverse-correlated EEG amplitude in occipito-temporal sensors at all time points with the information presented in different time windows: this allowed us to uncover the processing time course of information sampled at specific moments. We observed that processing was significantly different across presentation moments at several latencies, and that the time windows leading to high activity correlated with the time windows leading to accurate responses. We also found that presentation moment modulated the durations of the P1 and P3 components. Importantly, these differences were not the same across tasks, indicating that their origin is partly top-down. In summary, we uncovered for the first time the processing of information sampled at different moments during recognition. We showed that sampling moment modulates the processing of information in more than one way, and that this modulation is partly related to top-down routines of information extraction.
Meeting abstract presented at VSS 2018
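The reverse-correlation analysis described above can be illustrated with a minimal sketch. All variable names, dimensions, and the simulated data below are hypothetical assumptions for illustration, not the authors' actual pipeline: the core idea is to correlate, across trials, a binary matrix indicating which presentation windows contained information with the EEG amplitude at each post-stimulus latency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials, stimulus presentation windows (within the
# 200 ms sampling period), and post-stimulus EEG latencies.
n_trials, n_windows, n_latencies = 1000, 10, 300

# Binary sampling matrix: which presentation windows showed face information
# on each trial (randomly sampled, as in the paradigm described above).
samples = rng.integers(0, 2, size=(n_trials, n_windows)).astype(float)

# Simulated single-sensor EEG amplitude (trials x post-stimulus latencies);
# in the real analysis this would come from occipito-temporal sensors.
eeg = rng.standard_normal((n_trials, n_latencies))

# Reverse correlation: Pearson correlation, across trials, between the presence
# of information in each presentation window and EEG amplitude at each latency.
samples_z = (samples - samples.mean(axis=0)) / samples.std(axis=0)
eeg_z = (eeg - eeg.mean(axis=0)) / eeg.std(axis=0)
classification_images = samples_z.T @ eeg_z / n_trials  # (windows x latencies)
```

Each row of `classification_images` is then a processing time course for information sampled in one presentation window, which can be compared across windows and across tasks.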