Abstract
In an event-related fMRI-A paradigm, switching the relative positions of two separated objects, so that an elephant above a bus is followed by the bus above the elephant, results in a much greater release from adaptation in LOC than an equal-magnitude translation of the original scene (Hayworth et al., 2008).
Could this greater sensitivity to relative rather than absolute position be evident in the kinds of rotation paradigms that have been used to assess retinotopic organization in V1-V4? Because the wedges used in standard retinotopic mapping can be perceived as highlighted portions of a larger object (the screen), such techniques do not discriminate between object-centered and retinotopic coding. We devised a new paradigm to test whether LOC might contain an object-centered coordinate map, which would be consistent with the possibility that relative position is derived from an object-centered coordinate system.
While maintaining central fixation, subjects viewed an object that rotated around fixation at a constant eccentricity of approximately 3°. As the object rotated around fixation, subjects performed one of two shape judgments (one-back matching or a fit-to-gap similarity judgment) on a region at the perimeter of the object that changed shape at a frequency of 2 Hz. The task location rotated around the object faster than the object itself rotated around fixation (a 24 s period vs. a 32 s period). In the posterior fusiform gyrus (which shows the strongest relative-position effects), no BOLD signal modulation was observed at either rotation frequency. The signal in LO was modulated more strongly at the frequency of rotation around fixation than at the frequency of rotation around the object, indicating that LO cannot be characterized by an object-centered map at the scale measurable by fMRI. Any effects of relative position must therefore arise from sub-voxel neural circuits or from interactions with other visual areas.
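The logic of the frequency-tagged analysis can be illustrated with a brief sketch (hypothetical run length, sampling rate, and voxel model; not the actual analysis pipeline): a voxel belonging to a retinotopic map should show power at the frequency of the object's orbit around fixation (32 s period), whereas a voxel in an object-centered map should show power at the frequency of the task region's travel around the object (24 s period).

import numpy as np

# Minimal sketch (assumed parameters, not the authors' pipeline): test whether a
# voxel time course is modulated at the fixation-centered rotation frequency
# (32 s period) or the object-centered rotation frequency (24 s period).
# The run length (192 s) and repetition time (2 s) are hypothetical, chosen so
# that both periods divide the run evenly.
T_FIXATION = 32.0   # s per revolution of the object around fixation
T_OBJECT = 24.0     # s per revolution of the task region around the object
TR = 2.0            # assumed repetition time (s)
t = np.arange(0, 192.0, TR)

def modulation_amplitude(signal, period, times):
    # Fourier amplitude of the time course at frequency 1/period.
    basis = np.exp(-2j * np.pi * times / period)
    return np.abs(np.sum(signal * basis)) / len(times)

# Toy voxel locked to a fixed screen position: it responds as the object's
# orbit around fixation carries the stimulus past that position.
voxel = np.cos(2 * np.pi * t / T_FIXATION)

print(modulation_amplitude(voxel, T_FIXATION, t))  # large (~0.5)
print(modulation_amplitude(voxel, T_OBJECT, t))    # ~0: no object-centered modulation

In this toy example the synthetic voxel is locked to a screen position, so it is modulated only at the fixation-centered frequency; the same comparison of amplitudes at the two tagged frequencies underlies the pattern reported for LO above.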