Thomas Carlson, Ryota Kanai, Hinze Hogendoorn, Juraj Mesik, Jeremy Turret; Decoding of object position using magnetoencephalography (MEG). Journal of Vision 2010;10(7):1001. doi: https://doi.org/10.1167/10.7.1001.
Contemporary theories of object recognition posit that an object's position in the visual field is quickly discarded at an early stage of visual processing in favor of a high-level, position-invariant representation. The present study investigated this supposition by examining how the location of an object is encoded in the brain as a function of time. In three experiments, participants viewed images of objects while brain activity was recorded using MEG. In each trial, subjects fixated a central point while images of objects were presented at variable locations in the visual field. The nature of the representation of an object's position was investigated by training a linear classifier to decode the position of the object from the recorded physiological responses. Performance of the classifier was evaluated as a function of time by training the classifier on data from a sliding 10 ms time window. The classifier's performance for decoding the position of the object rose above chance levels at roughly 75 ms, peaked at approximately 115 ms, and decayed slowly up to 1000 ms post-stimulus onset. Within the interval of 75 to 1000 ms, classification performance correlated with the angular distance between targets, indicating a metric representation of visual space. Notably, before classification performance returned to chance, object category information could be decoded from the physiological responses, and participants were able to accurately make high-level judgments about the objects (i.e., category, and gender for faces). These findings suggest that position may be a fundamental feature encoded in the representation of an object, in contrast to the notion that position information is discarded at an early stage of visual processing.
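The sliding-window decoding analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the data here are synthetic, the sensor count, sampling rate, number of stimulus locations, and choice of linear classifier (logistic regression) are all assumptions, since the abstract specifies only a linear classifier and a 10 ms window.

```python
# Hypothetical sketch of sliding-window MEG decoding, assuming a 1 kHz
# sampling rate (so a 10 ms window = 10 samples), 50 sensors, and 4
# stimulus locations. Synthetic data stand in for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Trials x sensors x time samples (e.g., 200 ms of post-stimulus data).
n_trials, n_sensors, n_times = 120, 50, 200
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 4, n_trials)  # stimulus position label per trial

win = 10  # window length in samples (10 ms at the assumed 1 kHz rate)
scores = []
for start in range(0, n_times - win + 1, win):
    # Features: all sensor values within the current window, flattened.
    Xw = X[:, :, start:start + win].reshape(n_trials, -1)
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated decoding accuracy for this time window.
    scores.append(cross_val_score(clf, Xw, y, cv=5).mean())

scores = np.array(scores)  # decoding accuracy as a function of time
```

With real data, plotting `scores` against window onset would yield the time course reported in the abstract: accuracy rising above chance around 75 ms, peaking near 115 ms, then decaying slowly. Here the labels are random, so accuracy should hover near the 25% chance level for four locations.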