Vision Sciences Society Annual Meeting Abstract | June 2006
The Visual Aha!: Insights into object and face perception using event related potentials
Author Affiliations
  • James Tanaka
    Dept. of Psychology, University of Victoria, British Columbia, Canada
  • Carley Piatt
    Dept. of Psychology, University of Victoria, British Columbia, Canada
  • Javid Sadr
    Vision Sciences Laboratory, Dept. of Psychology, Harvard University, Cambridge, Massachusetts, USA
Journal of Vision June 2006, Vol.6, 84. https://doi.org/10.1167/6.6.84
Abstract

In these experiments, a continuous presentation paradigm was used to investigate the temporal dynamics of object and face perception with event-related potentials (ERPs). A sequence of noise-to-object image frames was generated using the Random Image Structure Evolution (RISE) program (Sadr & Sinha, 2001, 2004). RISE allowed the phase spectrum of the object image to be parametrically manipulated while maintaining the low-level visual properties (e.g., luminance, spatial frequency, contrast) of the stimulus. When the RISE sequence was shown in a continuous presentation paradigm (500 ms per frame), there was one frame (the “Aha!” frame) in the series where the object appeared abruptly out of the noise background. ERPs were then employed to examine the neural correlates of the visual Aha! frame. The Aha! frame was accompanied by the early onset of visual ERP components at posterior recording sites and a later semantic ERP component at central locations. Activation at central sites returned to pre-recognition levels by the next frame in the sequence (Aha! +1), whereas posterior activity returned to baseline levels two frames later (Aha! +2). The distinct patterns of activation and adaptation suggest separable contributions of visual and semantic processes to object recognition. In subsequent experiments, the RISE technique and ERPs were used to examine top-down effects in object recognition and category differences between the perception of faces and non-face objects. More generally, this line of research suggests a novel and powerful paradigm for studying the temporal dynamics of high-level vision.
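The phase manipulation at the heart of this paradigm can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' published RISE code: it assumes a grayscale image as a NumPy array and blends the image's phase spectrum with a random phase field while reusing the original amplitude spectrum, so low-level properties such as spatial-frequency content and contrast energy stay fixed as coherence changes from noise to object.

```python
import numpy as np

def rise_frame(image, coherence, rng=None):
    """Return one frame of a noise-to-object sequence (illustrative sketch only).

    The image's phase spectrum is blended with random phase while the amplitude
    spectrum -- which carries the spatial-frequency and contrast content -- is
    left untouched. coherence = 0.0 gives pure phase noise, 1.0 the intact image.
    """
    # Re-seeding with the same value yields the same noise field on every call,
    # so successive frames morph smoothly toward the object.
    rng = np.random.default_rng(0) if rng is None else rng

    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)

    # Naive linear blend between a random phase field and the original phase.
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    blended = (1.0 - coherence) * random_phase + coherence * phase
    blended[0, 0] = phase[0, 0]  # keep the DC term so mean luminance is preserved

    return np.real(np.fft.ifft2(amplitude * np.exp(1j * blended)))

# A presentation sequence (one frame per 500 ms step) could be built by
# stepping coherence from 0 (noise) to 1 (object), e.g.:
# frames = [rise_frame(img, c) for c in np.linspace(0.0, 1.0, 20)]
```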

Tanaka, J., Piatt, C., & Sadr, J. (2006). The Visual Aha!: Insights into object and face perception using event related potentials [Abstract]. Journal of Vision, 6(6):84, 84a, http://journalofvision.org/6/6/84/, doi:10.1167/6.6.84.
Footnotes
This work is supported by grants from the Natural Sciences and Engineering Research Council of Canada, the National Science Foundation, and the James S. McDonnell Foundation (Perceptual Expertise Network).