Vision Sciences Society Annual Meeting Abstract  |   August 2010
Decoding of object position using magnetoencephalography (MEG)
Author Affiliations
  • Thomas Carlson
    Department of Psychology, University of Maryland
  • Ryota Kanai
    Helmholtz Institute, Experimental Psychology, University of Utrecht
  • Hinze Hogendoorn
    Institute of Cognitive Neuroscience & Department of Psychology, University College London
  • Juraj Mesik
    Department of Psychology, University of Maryland
  • Jeremy Turret
    Department of Psychology, University of Maryland
Journal of Vision August 2010, Vol.10, 1001. doi:https://doi.org/10.1167/10.7.1001
Abstract

Contemporary theories of object recognition posit that an object's position in the visual field is quickly discarded at an early stage of visual processing in favor of a high-level, position-invariant representation. The present study investigated this supposition by examining how the location of an object is encoded in the brain as a function of time. In three experiments, participants viewed images of objects while brain activity was recorded using MEG. In each trial, participants fixated a central point while images of objects were presented at variable locations in the visual field. The nature of the representation of an object's position was investigated by training a linear classifier to decode the position of the object from the recorded physiological responses. Performance of the classifier was evaluated as a function of time by training the classifier on data from a sliding 10 ms time window. The classifier's performance for decoding the position of the object rose above chance levels at roughly 75 ms, peaked at approximately 115 ms, and decayed slowly as a function of time up to 1000 ms post-stimulus onset. Within the interval of 75 to 1000 ms, classification performance correlated with the angular distance between targets, indicating a metric representation of visual space. Notably, before classification performance returned to chance, object category information could be decoded from the physiological responses, and participants were able to accurately make high-level judgments about the objects (i.e., category and, for faces, gender). These findings suggest that position may be a fundamental feature encoded in the representation of an object, in contrast to the notion that position information is discarded at an early stage of visual processing.
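
The time-resolved decoding approach described above (a linear classifier applied to MEG sensor data within a sliding 10 ms window) can be sketched roughly as follows. This is a minimal illustration, not the authors' analysis code: the choice of classifier (linear discriminant analysis), the five-fold cross-validation, the window step, the sampling rate, the sensor count, the four stimulus locations, and the synthetic data are all assumptions made for the example.

```python
# Hypothetical sketch of time-resolved position decoding from MEG epochs.
# Assumes epochs shaped (n_trials, n_sensors, n_times) and integer position labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 trials, 157 sensors, 1.1 s at 1000 Hz (-100 to 1000 ms).
n_trials, n_sensors, n_times = 200, 157, 1100
sfreq = 1000.0                                   # sampling rate in Hz (assumed)
times = np.arange(n_times) / sfreq - 0.1         # time of each sample, in seconds
positions = rng.integers(0, 4, size=n_trials)    # four stimulus locations (assumed)
epochs = rng.standard_normal((n_trials, n_sensors, n_times))

win = int(0.010 * sfreq)   # 10 ms sliding window, in samples (from the abstract)
step = win                 # window step (assumed; the abstract reports only the window length)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

# Train and cross-validate the classifier separately within each time window.
onsets = np.arange(0, n_times - win + 1, step)
scores = []
for start in onsets:
    # Features: sensor amplitudes within the window, flattened per trial.
    X = epochs[:, :, start:start + win].reshape(n_trials, -1)
    scores.append(cross_val_score(clf, X, positions, cv=5).mean())
scores = np.array(scores)

best = scores.argmax()
print(f"peak accuracy {scores[best]:.2f} at {times[onsets[best]] * 1000:.0f} ms (chance = 0.25)")
```

In practice, the per-window accuracies would be compared against the chance level (0.25 for four locations in this sketch) to estimate when position information first becomes decodable and when it returns to baseline.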

Carlson, T., Kanai, R., Hogendoorn, H., Mesik, J., & Turret, J. (2010). Decoding of object position using magnetoencephalography (MEG) [Abstract]. Journal of Vision, 10(7):1001, 1001a, http://www.journalofvision.org/content/10/7/1001, doi:10.1167/10.7.1001.