Linda Henriksson, Kaisu Elander, Riitta Hari; Understanding visual scenes: a combined MEG and eye-tracking study. Journal of Vision 2016;16(12):522. doi: https://doi.org/10.1167/16.12.522.
© ARVO (1962-2015); The Authors (2016-present)
Natural visual scenes are rich in information, and scanning eye movements are needed to gather this information. But how do we select where to look next? Previous studies suggest that faces attract attention and saccades especially effectively (Yarbus, 1967; Crouzet et al., 2010). In this combined magnetoencephalography (MEG) and eye-tracking study, we aimed to identify the saccade and brain dynamics underlying target selection during free viewing of natural scenes. The stimuli were 199 grayscale photographs of natural scenes, including landscapes, scenes with a single person, and cluttered scenes with several persons, presented for 1 s at 6.3–7.1 s inter-stimulus intervals. The subjects (N = 18) were instructed to fixate a cross at the centre of the screen before stimulus onset, and to look freely at the images when they appeared. Each stimulus was followed by a question about its content (e.g., 'Was there water?'). We applied representational similarity analysis (RSA; Kriegeskorte et al., 2008) to study the MEG responses and to relate the MEG and eye-tracking results. From ~50 ms after stimulus onset, the similarity of the MEG responses correlated with the low-level similarity of the images, indicating that the evoked single-trial MEG responses contained information about the natural-scene stimuli. At later latencies, but before the onset of the first saccade, the similarity of the MEG responses also correlated with the target of the first saccade and with the upcoming scan path of eye movements. These results suggest that combining MEG with eye tracking can shed light on the cortical dynamics of visual-scene understanding.
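The core RSA computation the abstract describes can be sketched as follows: build a representational dissimilarity matrix (RDM) for each measurement modality (here, MEG sensor patterns per stimulus and a low-level image descriptor per stimulus), then correlate the two RDMs. This is a minimal illustration, not the authors' analysis pipeline; the data shapes, the 64-sensor and 256-feature dimensions, and the random inputs are all hypothetical placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix:
    1 - Pearson correlation between each pair of stimulus patterns."""
    return pdist(patterns, metric="correlation")

# Hypothetical stand-in data for the 199 natural-scene stimuli
rng = np.random.default_rng(0)
meg = rng.standard_normal((199, 64))      # e.g., MEG sensor amplitudes at one latency
images = rng.standard_normal((199, 256))  # e.g., low-level image features

# Spearman correlation between the two RDMs: the RSA statistic
rho, p = spearmanr(rdm(meg), rdm(images))
```

In a time-resolved analysis such as the one reported, this correlation would be computed separately at each latency of the MEG response, tracing when stimulus information (and, later, saccade-related information) emerges.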
Meeting abstract presented at VSS 2016