James Tanaka, Carley Piatt, Javid Sadr; The Visual Aha!: Insights into object and face perception using event related potentials. Journal of Vision 2006;6(6):84. doi: https://doi.org/10.1167/6.6.84.
In these experiments, a continuous presentation paradigm was used to investigate the temporal dynamics of object and face perception with event-related potentials (ERPs). A sequence of noise-to-object image frames was generated using the Random Image Structure Evolution (RISE) program (Sadr & Sinha, 2001, 2004). RISE allowed the phase spectrum of the object image to be parametrically manipulated while maintaining the low-level visual properties (e.g., luminance, spatial frequency, contrast) of the stimulus. When the RISE sequence was shown in a continuous presentation paradigm (500 ms per frame), there was one frame (the “Aha!” frame) in the series at which the object appeared abruptly out of the noise background. ERPs were then used to examine the neural correlates of the visual Aha! frame. The Aha! frame was accompanied by the early onset of visual ERP components at posterior recording sites and a later semantic ERP component at central locations. Activation at central sites returned to pre-recognition levels by the next frame in the sequence (Aha! +1), whereas posterior activity returned to baseline levels two frames later (Aha! +2). These distinct patterns of activation and adaptation suggest separable contributions of visual and semantic processes to object recognition. In subsequent experiments, the RISE technique and ERPs were used to examine top-down effects in object recognition and category differences between the perception of faces and non-face objects. More generally, this line of research suggests a novel and powerful paradigm for studying the temporal dynamics of high-level vision.
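The core idea of the stimulus manipulation described above — varying the phase spectrum while holding the amplitude spectrum constant — can be sketched in a few lines of NumPy. This is only an illustrative approximation, not the authors' actual RISE implementation (which uses its own, more careful phase-interpolation procedure); the function name and the linear phase blend are assumptions made for this sketch.

```python
import numpy as np

def rise_like_sequence(image, n_frames=10, seed=0):
    """Illustrative noise-to-image sequence: interpolate the phase
    spectrum from random noise toward the image's own phase while
    keeping the amplitude spectrum fixed, so that low-level properties
    (spatial-frequency content, overall contrast) are roughly preserved.

    NOTE: this linear phase blend is a simplification; the actual RISE
    program (Sadr & Sinha, 2001, 2004) interpolates phase differently.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)       # held constant across frames
    phase = np.angle(spectrum)         # target phase (the object)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        blended = (1 - t) * random_phase + t * phase
        frame = np.real(np.fft.ifft2(amplitude * np.exp(1j * blended)))
        frames.append(frame)
    return frames

# Example with a small synthetic "image"
img = np.outer(np.hanning(64), np.hanning(64))
seq = rise_like_sequence(img, n_frames=5)
# At t = 1 the original spectrum is restored, so the last frame
# reconstructs the input image (up to floating-point error).
assert np.allclose(seq[-1], img, atol=1e-8)
```

In an experiment like the one described, each frame would then be displayed for a fixed duration (here, 500 ms per frame) while ERPs are recorded.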