Article | October 2011

Viewed actions are mapped in retinotopic coordinates in the human visual pathways

Yuval Porat, Yoni Pertzov, Ehud Zohary

Journal of Vision October 2011, Vol. 11(12):17. doi: https://doi.org/10.1167/11.12.17
Abstract

Viewed object-oriented actions elicit widespread fMRI activation in the dorsal and ventral visual pathways. This activation is typically stronger in the hemisphere contralateral to the visual field in which the action is seen. However, since in previous studies participants kept fixation at the same screen position throughout the scan, it was impossible to infer whether the viewed actions are represented in retina-based coordinates or in a more elaborate coordinate system. Here, participants changed their gaze between experimental conditions, such that some pairs of conditions shared the same retinotopic coordinates (but differed in their screen position), while other pairs shared the same screen position (but differed in their retinotopic coordinates). The degree of similarity between the patterns of activation elicited by the various conditions was assessed using multivoxel pattern analysis (MVPA) methods. Regions of interest, showing robust overall activation, included the intraparietal sulcus (IPS) and the occipitotemporal cortex. In these areas, the correlation between activation patterns for conditions sharing the same retinotopic coordinates was significantly higher than for conditions having different retinotopic coordinates. In contrast, the correlations between activation patterns for conditions with the same spatiotopic coordinates were not significantly greater than for non-spatiotopic conditions. These results suggest that viewed object-oriented actions are likely to be maintained in retinotopic coordinates.

Introduction
Observation and execution of action elicit enhanced fMRI activation in widely overlapping cortical regions (Buccino et al., 2001; Sakreida, Schubotz, Wolfensteller, & von Cramon, 2005; Shmuelof & Zohary, 2005). These results, together with evidence for a dual visuomotor preference for the same actions in single cell recordings (Fogassi et al., 2005; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996), suggest that observing an object-manipulation action activates (at least to some extent) cortical networks that are used when performing the same action. However, the elementary visual representation of an object is in retina-based coordinates, while muscle activation is necessarily defined in body-centered coordinates. The aim of this study is to investigate whether activity elicited by action observation (in the absence of actual action) is conveyed in a strictly retinotopic coordinate frame or in other egocentric reference frames, which are more directly related to the motor output. 
Previous fMRI experiments suggested that visuomotor parietal regions are organized in a retinotopic fashion (Schluppeck, Glimcher, & Heeger, 2005; Sereno, Pitzalis, & Martinez, 2001; Silver, Ress, & Heeger, 2005; Swisher, Halko, Merabet, McMains, & Somers, 2007), similar to that found in earlier occipital areas of the dorsal pathway (V1d–V3d). These studies, requiring participants to make a delayed saccade from a central fixation to peripheral targets in various locations (or attend to stimuli without gaze shifting), indicated that parietal regions display an orderly mapping of the target's position on the retina. A recent fMRI study in our laboratory, in which the same saccadic eye movements were performed from various starting points, suggests that although the prevalent representation of saccadic targets is indeed retinotopic, a specific region within the intraparietal sulcus (IPS) may carry information about both the saccade vector (which is in retinotopic coordinates) and the saccade endpoint, which is encoded with respect to the head, body, or screen (Pertzov, Avidan, & Zohary, 2011). 
While the abovementioned study investigated the coordinate system in which prospective oculomotor movements (and/or spatial attention) are coded, the human parietal cortex is also involved in the spatial representation of visually guided actions, such as reaching (Connolly, Andersen, & Goodale, 2003; Connolly, Goodale, Desouza, Menon, & Vilis, 2000) and grasping (Binkofski, Buccino, Posse et al., 1999; Culham et al., 2003; Frey, Vinton, Norlund, & Grafton, 2005). The complex nature of visuomotor transformations suggests that information represented by the BOLD signal is likely to be multiplexed. For example, during visually guided action, the level of BOLD activation along the IPS shows clear preference for the (contralateral) position of the image in the visual field. However, in the anterior-most parts of the IPS, the BOLD response is also dependent on the self-acting or viewed hand identity (Shmuelof & Zohary, 2006; Stark & Zohary, 2008), thereby displaying motor-like specificity. A spatial representation that is invariant to changes in gaze direction may be extremely useful when interacting with the surroundings. It seems plausible, therefore, that these regions may contain spatial information about visual objects, which is encoded in a non-retinotopic coordinate system. 
In the current experiment, we exploited the fact that various cortical areas in the human parietal and frontal cortices are activated by grasping movements as well as during observation of grasping (Binkofski, Buccino, Stephan et al., 1999; Buccino et al., 2001; Shmuelof & Zohary, 2005). Framing the task as an action observation task enabled us to inspect activation patterns throughout the visual system, including in areas that are involved in visuomotor integration, while simplifying matters by avoiding motor action components. Thus, we could pinpoint more precisely whether and under what behavioral demands transformation from retinotopic to non-retinotopic coordinates may occur. Previous work has demonstrated that anterior parietal regions are sensitive to the stimulus position during action observation (Shmuelof & Zohary, 2005, 2006). Critically, since gaze position was the same in these experiments, one could not discern if this visual field effect depended on the stimulus position in retinotopic coordinates or in a different coordinate frame (non-retinotopic mapping could be in either head, body, screen, or world-based coordinates, termed here as spatiotopic mapping). To discriminate between these possibilities, we utilized a similar action observation paradigm in the current study, with the addition that gaze position was altered across conditions. We used a multivoxel pattern analysis (MVPA) approach, which previously revealed the presence of information about specific visual features (e.g., motion direction or object category) in the fMRI response patterns across occipital cortex (Chan, Kravitz, Truong, Arizpe, & Baker, 2010; Kamitani & Tong, 2005; Williams et al., 2008) and visuomotor representations in the parietal cortex (Dinstein, Gardner, Jazayeri, & Heeger, 2008; Gallivan, McLean, Valyear, Pettypiece, & Culham, 2011; Pertzov et al., 2011). 
While it is debated whether MVPA has subvoxel sensitivity (Op de Beeck, 2010), it can clearly be sensitive to subtle information that might be undetected by the classic ROI analysis, which focuses on the average activation across all voxels within a region of interest. 
Material and methods
Subjects
Eighteen right-handed, healthy subjects gave their informed consent to participate in the fMRI study, which was approved by the Helsinki Ethics Committee of Hadassah Hospital, Jerusalem, Israel. None of the subjects had any history of neurological, psychiatric, or visual deficits. Three subjects were excluded from further analysis due to poor task performance (not being able to maintain stable fixation as required, see Supplementary materials). Thus, the fMRI data from fifteen subjects (mean age = 26 ± 4 years; 9 females) served as the database for this study. 
Experimental design
The study consisted of two parts: the main experiment and an auxiliary localizer scan. Each subject participated in one MRI session, which included two runs of the main experiment and two runs of the localizer scan that were later used to define regions of interest for the analysis. In addition, five of the subjects participated in three additional scans to functionally localize brain regions involved in the perception of motion, objects, and hands, as detailed below. 
Main experiment
The main experiment was a block-design fMRI paradigm, in which subjects viewed video clips of a right hand reaching and grasping objects. The clips appeared to the right or left of one of three possible fixation points (red squares, 0.19° × 0.19°), which were located either at the screen center, 6.3° to the left of the screen center, or 6.3° to the right of the screen center. The clips were presented 3.15° to the left or right of the fixation point. The experiment thereby included a total of six conditions (see Figure 1). Each "active task" block consisted of a 10-s video clip, made of ten 1-s grasping movements (a hand grasping a single different object in each movement) concatenated together. Each block was terminated by movement of the fixation point to its new location and was followed by a rest period that lasted 10 s. The ten objects in each block were pseudorandomly selected from a pool of seventeen objects, and their order of appearance within the block was different in each version; there were a total of 10 versions of the 10-s clips. The clips were constructed in such a way that each type of MVPA comparison (RS, R, S, D; see Multivoxel pattern analysis section) included all 17 objects in both of the compared conditions. This was done to ensure that correlation values between the abovementioned conditions would not be biased by object-identity similarity. The experiment included a total of 24 task blocks (four repetitions for each of the six conditions) in each of the two runs, presented in pseudorandom order. The subjects were instructed to covertly count the number of fingers that participated in each grasping movement while keeping their gaze at the red fixation point. Stimulus size was 192 × 144 pixels (4.5° × 3.3°), and all clips were monochromatic. An eye-tracking device was used to monitor the subjects' eye position online. 
Figure 1
 
Main task. (A) Temporal aspects: A 10-s resting block in which a fixation point appeared in one of 3 possible locations preceded the trial block. The trial block itself lasted 10 s and consisted of a series of video clips featuring hands grasping objects, which appeared to the right (in this example) or to the left of the fixation point. The trial block was followed by another resting block, in which the fixation point moved (in its beginning) to one of the two other possible locations, and so on. In the example shown here, the stimuli in the two conditions have identical locations on the screen but opposite locations on the retina. (B) Spatial aspects: Stimuli appeared at one of four possible locations, 3.15° to the right or left of one of three possible fixation points, making a total of six possible conditions (combinations of stimulus and fixation locations). Some subgroups of conditions have the same location on the retina (conditions 1, 3, and 5 as well as conditions 2, 4, and 6), others are matched in their screen position (conditions 2 and 3 and conditions 4 and 5), while others differ in both their screen position and their retinal location (conditions 1 and 2, 1 and 4, 1 and 6, 2 and 5, 3 and 4, 3 and 6, and 5 and 6).
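The condition geometry described in the caption can be made concrete with a short sketch. The condition numbering and coordinates below (in degrees of visual angle) are inferred from the caption and Methods, so treat them as a hypothetical reconstruction:

```python
# Hypothetical reconstruction of the six conditions (degrees of visual
# angle; condition numbering inferred from the Figure 1 caption).
FIXATIONS = [-6.3, 0.0, 6.3]   # left, center, right fixation points
OFFSET = 3.15                  # stimulus offset from the fixation point

conditions = {}                # condition number -> coordinates
k = 1
for fix in FIXATIONS:
    for side in (-1, +1):      # stimulus left (-) or right (+) of fixation
        conditions[k] = {"fixation": fix,
                         "screen": fix + side * OFFSET,   # spatiotopic
                         "retina": side * OFFSET}         # retinotopic
        k += 1

# Recover the groupings stated in the caption
same_retina = [[k for k, c in conditions.items() if c["retina"] < 0],
               [k for k, c in conditions.items() if c["retina"] > 0]]
same_screen = [sorted(k for k, c in conditions.items()
                      if abs(c["screen"] - pos) < 1e-9)
               for pos in (-OFFSET, OFFSET)]
# same_retina -> [[1, 3, 5], [2, 4, 6]]; same_screen -> [[2, 3], [4, 5]]
```

All remaining pairs differ in both coordinates, matching the "different" pairs listed in the caption.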
Localizer task
During this scan, the subjects maintained their fixation on the red square (0.19° × 0.19° in size) at the center of the screen throughout the experiment. The scan was based on a similar block design paradigm, consisting of ten 1-s clips of a hand grasping objects. In each block, the clips showed either the right or the left hand and appeared to the left or right of the central fixation point (6.56° from stimulus center to fixation). The subjects were instructed to covertly count the number of fingers that participated in the grasping action, without shifting their gaze from the central fixation point. Two extra conditions were added, in which a scrambled version of the clips consistently appeared in one of the two positions. In total, the task included six conditions (left location–left hand, left location–right hand, left location–scrambled, right location–left hand, right location–right hand, and right location–scrambled) that were repeated six times in a pseudorandom, counterbalanced fashion in two runs. The stimuli were monochromatic, 448 × 304 pixels (10.44° × 6.95°) in size. As in the main task, an eye-tracking device was used to monitor the subjects' gaze position. 
Additional localizers
The localizer task resulted in widespread activation in the lateral occipital cortex (LOC), bilaterally. However, due to the complex nature of the video clips that included movement, hands, and objects, it was not possible to pinpoint the exact region within the LOC and its related functionality. We therefore performed additional localizing sessions for 5 of the 15 subjects, in an effort to parse the widespread activation regions into functional regions that are more sensitive to motion (e.g., MT and MST), hands (e.g., EBA), or visual objects. In all additional localizer scans, an eye-tracking device was used to monitor the subjects' gaze position. 
Motion localizer
This localizer scan consisted of two sessions, based on a design introduced by Huk, Dougherty, and Heeger (2002). In the first session, subjects fixated on a point (yellow crosshair) located on the right side of the screen, 4° from the screen's edge. The stimuli (an expanding and contracting random assortment of dots, 12° in diameter) appeared 10° to the left of the fixation point. Each dynamic stimulus appeared for 10 s, followed by 10 s of a static stimulus (random dots, same contrast as the moving stimuli). There were a total of fifteen dynamic and fifteen static stimuli in the session. During the two sessions, the fixation point blinked 32 times, for 100 ms each time. Subjects were instructed to keep fixation and covertly count the number of fixation point blinks. They were required to report the number of blinks at the end of the session. Their average report was N = 32 ± 1, suggesting that they were indeed attentive to the fixation point at all times. The second session was the exact mirror image of the first, with the fixation point placed at the left end of the screen and the stimuli appearing 10° to the right of the fixation point. This design was chosen to position the moving stimuli as far as possible in the visual periphery, so that the motion-sensitive areas MT and MST could be differentiated. Typically, MST receptive fields are much larger than those of MT and include a larger portion of the ipsilateral visual field. The farther the stimulus is in the peripheral visual field, the more likely it is that only neurons in MST will respond to motion on the ipsilateral side. 
Object and hand localizer
The localizer contained one session that consisted of 32 blocks, divided into four conditions (images of hands, scrambled versions of the hand images, objects, and scrambled object images, each repeated eight times in pseudorandom order). Each block lasted 8 s and consisted of eight images, each appearing for 800 ms, with a 200-ms blank interval between images. A fixation point (red square, 0.19° × 0.19°) appeared in the center of the screen at all times. Subjects were instructed to keep fixation and press a button whenever the same image (or scrambled image) appeared twice in a row (i.e., a 1-back task). All images were centered on the fovea and spanned 10° × 10°. The scrambled versions of the images were created such that they contained the same frequency components as the original images (Rainer, Augath, Trinath, & Logothetis, 2001). The images were transformed to the frequency domain using Fourier transform, separately for each RGB channel. Next, the phase spectra were randomized and recombined with the original amplitude components using an inverted Fourier transform to generate the noisy patterns. 
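The phase-scrambling procedure can be sketched in Python with NumPy. Applying one common random phase field to all three channels is our assumption (it keeps the channels spatially aligned); the exact implementation of Rainer et al. (2001) may differ:

```python
import numpy as np

def phase_scramble(img, rng=None):
    """Phase-scramble an image: keep each channel's Fourier amplitude
    spectrum, randomize the phases, and invert the transform (a sketch
    of the procedure of Rainer et al., 2001)."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.asarray(img, dtype=float)
    out = np.empty_like(img)
    # One common random phase field, applied to every RGB channel,
    # so the channels stay spatially aligned (an assumption here).
    rand_phase = np.angle(np.fft.fft2(rng.random(img.shape[:2])))
    for ch in range(img.shape[2]):
        spec = np.fft.fft2(img[:, :, ch])
        amp = np.abs(spec)
        scrambled = amp * np.exp(1j * (np.angle(spec) + rand_phase))
        out[:, :, ch] = np.real(np.fft.ifft2(scrambled))
    return out
```

Because `rand_phase` is itself the phase of a real image's spectrum, the scrambled spectrum stays conjugate-symmetric and the result is real, with the original amplitude spectrum preserved.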
Stimuli presentation
Stimuli were presented using Presentation software (Neurobehavioral Systems, Albany, CA). Visual stimuli were back projected via an MR shielded projector (Sharp XGC335X, Osaka, Japan; Navitar SST150 and Navitar 485MCZ500 lenses, Navitar, Rochester, NY) onto a screen located 95 cm behind the participants. The screen was made visible to the participants via a tilted mirror, positioned above the participants' faces. The screen was 39 cm wide and 31 cm high, and the display resolution was 1024 × 768 pixels (23.2° × 17.52°). 
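The reported angular size of the display follows from the viewing geometry. A minimal check (the helper function is ours, not part of the original methods):

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle subtended by an extent of `size_cm`
    viewed from `distance_cm` away."""
    return 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))

# A 39-cm-wide screen viewed from 95 cm subtends about 23.2 deg,
# matching the reported horizontal extent.
```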
fMRI data acquisition
The BOLD fMRI measurements were obtained using a whole-body 3-T Magnetom Trio Siemens scanner and a 32-channel head coil. The functional MRI protocols were based on a multislice gradient echo-planar imaging and obtained under the following timing parameters: TR = 2 s, TE = 30 ms, flip angle = 90 degrees, imaging matrix = 64 × 64, FOV = 192 mm; 30–32 slices with 3-mm slice thickness were oriented in the oblique position, covering the whole brain (with a gap of 0.45 mm between slices). The functional voxels were thus of 3 × 3 × 3 mm. In addition, high-resolution T1-weighted magnetization-prepared rapid acquisition gradient-echo (MPRAGE) images were acquired (1 × 1 × 1 mm in resolution). 
fMRI data processing
Data analysis was conducted using the BrainVoyager QX software package (Brain Innovation, Maastricht, The Netherlands), SPSS (version 16.0 for Windows, SPSS, Chicago, IL), and in-house analysis tools developed in Matlab (The MathWorks, Natick, MA). The first resting block and task block (10 TRs) of each functional scan were discarded. Preprocessing of functional scans included 3D motion correction, slice scan time correction, and removal of low frequencies up to three cycles per scan (linear trend removal and high-pass filtering). No spatial smoothing was applied. The smoothed appearance of the ROI activation in the relevant figures is the result of an interpolation procedure, which was carried out for visualization purposes only. The anatomical and functional images were transformed to the Talairach coordinate system using a trilinear interpolation. The cortical surface was reconstructed from the high-resolution anatomical images using standard procedures implemented by the BrainVoyager software. 
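The temporal filtering step (removing frequencies of up to three cycles per scan) can be sketched as a DFT-domain high-pass filter; BrainVoyager's actual implementation, which also removes the linear trend, may differ in detail:

```python
import numpy as np

def remove_low_frequencies(ts, max_cycles=3):
    """Zero the DC term and the first `max_cycles` harmonics
    (in cycles per scan) of a single voxel's time course."""
    spec = np.fft.rfft(np.asarray(ts, dtype=float))
    spec[: max_cycles + 1] = 0.0
    return np.fft.irfft(spec, len(ts))
```

Slow scanner drift and the mean signal level are removed, while task-frequency fluctuations pass through unchanged.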
Region-of-interest (ROI) selection
To select task-relevant voxels on an individual basis, we used the conventional general linear model (GLM) implemented in the BrainVoyager software. This model estimates the neural response as a boxcar function for each condition block, convolved with a standard hemodynamic response function (sum of two gamma functions; Friston et al., 1998). An independent localizer scan was used to identify voxels responsive during viewing of object manipulation acts. To that end, conditions containing object-grasping clips (irrespective of the viewed acting hand and of the visual field in which they were presented) were contrasted with conditions containing scrambled versions of the same clips (again, irrespective of their position on the screen; q(FDR) < 0.05). Since the scrambled version had the same luminance distribution (but higher local contrast), this procedure effectively screened out low-level visual areas (which are typically more responsive to the scrambled version of the images). The resulting ROIs were isolated clusters of voxels confined within the following anatomical boundaries. In the parietal cortex, activation fell within the intraparietal sulcus (IPS), extending into the superior and inferior parietal lobules; the IPS activation was further divided into its anterior, middle, and posterior parts (aIPS, mIPS, and pIPS, respectively), since, in most participants, these were clearly distinguishable clusters. Activation in the ventral pathway included the posterior inferior temporal sulcus, middle occipital gyrus, and the inferior temporal gyrus, termed here jointly as MOG/pITS. We focus here mainly on these regions of interest. Activation was also found in the frontal cortex at the junction of the precentral and superior frontal sulci (termed here dorsal premotor cortex or dPM) and in the interhemispheric fissure adjacent to the paracentral sulcal branch of the cingulate sulcus (termed SMA). 
An example of the main interest ROIs in one participant is depicted in Figure 2A (see detailed specifications in Table 1). 
Figure 2
 
MVPA analysis. (A) Selected ROIs. The localizer task elicited several distinct clusters of activation that served as ROIs for the MVPA analysis. MOG/pITS (1), pIPS (2), mIPS (3), and aIPS (4) of a single subject's right hemisphere are displayed (Subject S4). Dorsal premotor (dPM, 5) activity is also apparent. Abbreviations: IPS = intraparietal sulcus, CS = central sulcus, STS = superior temporal sulcus. (B) A matrix represents all possible pairing of conditions, color coded by pairing type (see legend). The icons on the leftmost column and first row depict the specific condition: X represents the position of the stimulus on the screen (in black); the red point represents the location of the fixation point. To minimize the possible confounding effect of gaze position, the analysis was done only on correlations between conditions with neighboring fixation points (marked by checkmarks), apart from the “same” conditions (RS) that necessarily involved the same gaze position. The results from the same pairing (e.g., same color) were then pooled and averaged across subjects to obtain the mean correlation for each of the four pairing types.
Table 1
 
Regions of interest and Talairach coordinates (mean and STD across subjects) of the center of mass of each selected ROI.
Region of interest   Hemisphere   X         Y         Z        Volume (mm3)   Subjects
MOG/pITS             L            −47 ± 3   −69 ± 6    3 ± 6   747 ± 39       15
MOG/pITS             R             47 ± 4   −64 ± 5    1 ± 5   763 ± 37       15
pIPS                 L            −25 ± 4   −75 ± 5   28 ± 4   755 ± 35       13
pIPS                 R             28 ± 5   −74 ± 7   28 ± 6   750 ± 39       13
mIPS                 L            −26 ± 6   −59 ± 7   51 ± 5   746 ± 34       15
mIPS                 R             25 ± 4   −59 ± 4   52 ± 4   738 ± 33       15
aIPS                 L            −38 ± 5   −41 ± 4   48 ± 5   775 ± 34       13
aIPS                 R             32 ± 6   −45 ± 6   53 ± 4   754 ± 35       12
dPM                  L            −28 ± 4   −10 ± 3   55 ± 3   765 ± 35       12
dPM                  R             31 ± 5   −10 ± 5   53 ± 5   768 ± 33       12
SMA                  (midline)     −2 ± 3     0 ± 5   51 ± 4   775 ± 31       10
For each subject, activation patterns in all ROIs were analyzed using multivoxel pattern analysis: Since the correlation between such patterns may be affected by the number of voxels in the ROI, we varied the statistical threshold of the localizer GLM contrast, such that the ROIs in all subjects were of equal size (∼750 mm3, see Table 1 for exact volumes). Importantly, none of the thresholds exceeded a false discovery rate of q(FDR) < 0.05. For most subjects, all ROIs were of sufficient size to be included in the analysis. In a few subjects, some “highlighted” areas were too small (using our statistical threshold that was corrected for multiple comparisons) and were left out to preserve a uniform ROI size across subjects (posterior IPS, bilaterally, in two subjects; left anterior IPS, in two subjects; right anterior IPS, in three subjects). To further ensure that the results were not biased by the number of selected voxels, the same regions were defined again using a more lenient threshold, resulting in six ROIs of size ∼1500 mm3 along the IPS (pIPS, mIPS, and aIPS, in the two hemispheres) encapsulating the previous ROIs with the same names, as well as two ∼3000-mm3 ROIs in the lateral occipitotemporal cortex (encapsulating MOG/pITS). Note that while the ROI voxel inclusion selection was conducted at the anatomical spatial scale (1 mm), all further analysis was conducted on the corresponding 3-mm functional voxels. 
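Equalizing ROI size by adjusting the statistical threshold amounts to keeping only the most significant voxels up to a target volume. A minimal sketch follows; note that spatial contiguity of the cluster, which the actual selection respects, is not enforced here:

```python
import numpy as np

def fixed_size_roi(t_values, target_mm3=750, voxel_mm3=1.0):
    """Return indices of the most significant voxels, keeping only as
    many as fit in the target volume (equivalent to raising the
    statistical threshold until the ROI reaches the desired size)."""
    n_keep = int(round(target_mm3 / voxel_mm3))
    order = np.argsort(t_values)[::-1]   # descending significance
    return np.sort(order[:n_keep])
```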
ROI average activation analysis
The data analysis was done for ROIs that were determined by an independent localizer session. The data of each subject were divided into two separate data sets, collected in the two independent runs. Each run contained all conditions, with the same number of repetitions per condition. First, the activation time course of each voxel from each run was Z-transformed to yield a normalized activation profile. Each voxel's activation time course (within a specific ROI) was modeled with 6 parameters, one for each condition. Each predictor was created by convolving a standard hemodynamic response function (see above) with a square function representing the 4 blocks (repetitions) of the specific condition. The GLM resulted in six activation values (beta weights) that yielded the best-fitting model of the activation time course. All further analysis was done separately for each of the two data sets. The first (standard) ROI analysis focused on differences in the average activation level for the various conditions within a specific ROI. To that end, beta values for each condition were averaged across all voxels in each ROI. This was done for each subject separately, in that subject's individual ROI. We then performed a two-way repeated measures analysis of variance (ANOVA, left/right screen position × left/right retinotopic position) on the four central conditions (2, 3, 4, and 5 in Figure 1). Conditions 1 and 6 were excluded from this analysis because each of these spatiotopic positions was observed from only one gaze position. 
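The per-condition GLM described above can be sketched as follows. The two-gamma HRF parameters (peak at ~6 s, undershoot at ~16 s) are the common canonical values and are an assumption here; the exact BrainVoyager defaults may differ:

```python
import numpy as np
from math import gamma

TR = 2.0  # s, per the acquisition parameters

def double_gamma_hrf(t):
    """Canonical two-gamma hemodynamic response function (assumed
    parameters: peak ~6 s, undershoot ~16 s)."""
    return (t ** 5 * np.exp(-t) / gamma(6)
            - t ** 15 * np.exp(-t) / (6.0 * gamma(16)))

def design_matrix(onsets, n_scans, block_trs=5):
    """One boxcar regressor per condition (10-s blocks = 5 TRs each),
    convolved with the HRF, plus a constant column.
    `onsets` maps condition -> list of block onsets in TR units."""
    hrf = double_gamma_hrf(np.arange(0, 30, TR))
    X = np.zeros((n_scans, len(onsets) + 1))
    for j, cond in enumerate(sorted(onsets)):
        box = np.zeros(n_scans)
        for onset in onsets[cond]:
            box[onset : onset + block_trs] = 1.0
        X[:, j] = np.convolve(box, hrf)[:n_scans]
    X[:, -1] = 1.0                        # constant term
    return X

def fit_betas(timecourse, X):
    """Least-squares GLM fit; returns one beta per condition."""
    beta, *_ = np.linalg.lstsq(X, timecourse, rcond=None)
    return beta[:-1]                      # drop the constant's weight
```

In a noiseless simulation, the fit recovers the amplitudes assigned to each condition regressor exactly.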
Multivoxel pattern analysis
Our next objective was to obtain a pattern of activation (per condition) across all voxels within a specific ROI, in order to see the degree to which these patterns (vectors) are correlated. To that end, we first normalized the response amplitude (beta value) of each voxel, by subtracting its mean response across all 6 conditions: 
For each condition \(i \in \{\mathrm{cond}_1, \mathrm{cond}_2, \ldots, \mathrm{cond}_6\}\):
\[
\beta_{i,\mathrm{run1}}^{\mathrm{normalized}} = \beta_{i,\mathrm{run1}} - \frac{1}{6}\sum_{j=\mathrm{cond}_1}^{\mathrm{cond}_6} \beta_{j,\mathrm{run1}},
\qquad
\beta_{i,\mathrm{run2}}^{\mathrm{normalized}} = \beta_{i,\mathrm{run2}} - \frac{1}{6}\sum_{j=\mathrm{cond}_1}^{\mathrm{cond}_6} \beta_{j,\mathrm{run2}}.
\]
 
This normalization was conducted, as in similar studies (Haxby et al., 2001; Williams, Dang, & Kanwisher, 2007), to eliminate correlations due to mere differences in the mean response level between voxels (within each data set). Each ROI's response pattern was therefore represented as a one-dimensional vector of size N, where N is the number of active voxels in that ROI. The similarity between different patterns of response (obtained for the different experimental conditions) was then assessed by a simple voxel-wise correlation between the two vectors. (Note that the voxel-wise correlation should be treated as a complementary method to the standard ROI analysis, taking advantage of the information contained in the subtle differences in the activation pattern of voxels within the ROI that is lost when averaging the activation of all voxels within a region.) To assess the statistical significance of the correlation results, we first applied the Fisher's Z transformation (Z = (1/2)ln((1 + r)/(1 − r))) to the correlation coefficients (thereby converting the correlation coefficient values into a normally distributed variable, amenable for parametric statistical testing) and then performed ANOVAs. As in the conventional (average signal ROI) analyses, this process was done for each subject in each of the ROIs defined specifically for that subject. 
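The normalization and pattern-correlation steps can be sketched directly:

```python
import numpy as np

def normalize_patterns(betas):
    """betas: (n_voxels, 6) array of per-condition responses from one
    run; subtract each voxel's mean response across the six conditions."""
    return betas - betas.mean(axis=1, keepdims=True)

def pattern_similarity(pattern_a, pattern_b):
    """Voxel-wise Pearson correlation between two activation patterns,
    Fisher Z-transformed for parametric testing."""
    r = np.corrcoef(pattern_a, pattern_b)[0, 1]
    return 0.5 * np.log((1 + r) / (1 - r))
```

After normalization, each voxel's mean response is zero, so two patterns correlate only to the extent that their condition-specific deviations agree.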
The design allowed for four different categories of comparisons, assessing the similarity between response patterns of the various conditions. Correlations were calculated between conditions where stimuli appeared in the same retinotopic and the same spatiotopic (real world) location (denoted by “RS”; correlating the activation pattern for each condition with its counterpart from the other run), conditions in which the stimuli appeared in the same retinotopic location but at different screen positions (conditions 1, 3, and 5 as well as conditions 2, 4, and 6; denoted by “R”), conditions in which the stimuli appeared at the same spatiotopic location but their retinotopic position differed (conditions 2 and 3 and again conditions 4 and 5; denoted by “S”), and conditions in which the stimuli appeared at different retinotopic and spatiotopic locations (conditions 1 and 2, 1 and 4, 1 and 6, 2 and 5, 3 and 4, 3 and 6, and 5 and 6; denoted by “D”). 
One must be aware of the possibility that the direction of the subject's gaze (which determines which parts of the (uncontrolled) visual surround inside the bore are seen) may affect the activation level of individual voxels and thereby affect the correlation values. It is therefore important to control for this effect when assessing the effect of the location of the stimuli on the screen or relative to the retina. Note that comparisons within group D can be divided into three separate groups: comparisons between conditions that have the same fixation point, neighboring fixation points, or fixation points further apart (see Figure S1). That is, even though all comparisons in this group are similar in the sense that both the retinotopic and spatiotopic locations of the stimuli in these conditions are different, the distance between the fixation positions varies. The same holds for comparisons between conditions of the same retinotopic location (groups R + RS). Conditions with the same spatiotopic location (S), which differ in their retinotopic location, must necessarily have different gaze positions. In our specific experimental design, these conditions (S) always had neighboring fixation points and therefore had a fixed difference in gaze position. 
To assess the effect of gaze position on the degree of correlation between the multivoxel patterns of the various conditions (in groups R, RS, and D in which the gaze difference was variable), we performed a two-way repeated measures ANOVA on the correlation values (with factors: (1) retinotopic identical, yes/no and (2) gaze difference between conditions, 0°/6.3°/12.6°) in each of the ROIs. A significant effect of the subject's gaze position was found in most of the ROIs (full details and results are shown in the Supplementary materials). Therefore, in our main text, the analysis of the multivoxel pattern correlations was limited to comparisons between conditions that had neighboring fixation points, thus excluding correlations between conditions 1 and 2, 1 and 6, 1 and 5, 2 and 5, 2 and 6, and 3 and 4. Figure 2B shows the full matrix of correlations and the selected comparisons. 
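Under the same hypothetical layout (conditions 1–2, 3–4, and 5–6 sharing the left, central, and right fixation points, with 6.3° between neighboring fixations), the restriction to neighboring fixation points might be sketched as:

```python
# Hypothetical fixation index per condition (-1 = left, 0 = center, 1 = right).
FIXATION = {1: -1, 2: -1, 3: 0, 4: 0, 5: 1, 6: 1}
SPACING_DEG = 6.3  # distance between neighboring fixation points

def gaze_distance_deg(i, j):
    """Angular distance between the gaze positions of two conditions."""
    return abs(FIXATION[i] - FIXATION[j]) * SPACING_DEG

# Retain same-condition (RS) comparisons plus comparisons between
# conditions whose fixation points are immediate neighbors.
all_pairs = [(i, j) for i in range(1, 7) for j in range(i, 7)]
kept = [(i, j) for (i, j) in all_pairs
        if i == j or gaze_distance_deg(i, j) == SPACING_DEG]
```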
Comparison between ROIs
In order to assess the significance level of the changes in the retinotopic effect between the parietal ROIs, we performed an additional repeated measures ANOVA on the averaged ROI activation, with four factors (left/right hemisphere × pIPS/mIPS/aIPS × left/right spatiotopic location × left/right retinotopic location), resulting in a 2 × 3 × 2 × 2 design. Subjects whose BOLD activation was not robust enough to include all parietal ROIs were excluded from this analysis (see Region-of-interest (ROI) selection section). This assured an equal sample size (n = 12 subjects) in each ROI. The ANOVA yielded a significant triple interaction between the hemisphere, ROI, and retinotopic location, and therefore, two 3-level ANOVAs (pIPS/mIPS/aIPS × left/right spatiotopic location × left/right retinotopic location) were performed, one for each hemisphere. The whole process was repeated for the MVPA data (pIPS/mIPS/aIPS × same/different retinotopic location × same/different spatiotopic location). 
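For concreteness, the within-subject cells of the 2 × 3 × 2 × 2 design can be enumerated as follows (the factor labels are illustrative, not taken from the original analysis):

```python
from itertools import product

# Factor levels of the repeated measures design (labels are illustrative).
HEMISPHERE = ["left", "right"]
ROI = ["pIPS", "mIPS", "aIPS"]
SPATIOTOPIC = ["left", "right"]
RETINOTOPIC = ["left", "right"]

def design_cells():
    """Enumerate every within-subject cell of the 2 x 3 x 2 x 2 design."""
    return list(product(HEMISPHERE, ROI, SPATIOTOPIC, RETINOTOPIC))
```

Each subject contributes one averaged value per cell, giving 24 repeated measures per subject.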
Eye tracking
All subjects were trained on the task while maintaining fixation prior to the scanning session and reported that they had no trouble performing the task during the scan. To verify that our results did not stem from atypical eye movements inside the scanner, the subjects' eye position was monitored during the fMRI scan using an infrared video camera equipped with a telephoto lens. The data of three subjects were discarded from further analysis due to problematic eye movements during the scan, which clearly indicated that they did not follow the behavioral task requirements. The eye-tracking device, iView X MRI-LR (SensoMotoric Instruments, Teltow, Germany), was located at the foot of the subject's bed and sampled the eye position at 50 Hz with a gaze position accuracy of ∼0.5°. A built-in calibration routine of 9 points covering the screen was employed at the beginning of each scan. Eye position was tracked during the whole experiment, including the main experiment and the localizer scans. Data were analyzed with BeGaze software (SensoMotoric Instruments, Teltow, Germany). 
Results
Our participants watched short clips of hands grasping objects in a block design fMRI paradigm. In different blocks, the video clips appeared to the left or right of three possible fixation locations, thereby allowing differentiation between the retinotopic and spatiotopic locations of the stimuli (see Figure 1B). During the experiment, subjects were asked to covertly count the number of fingers participating in each grasp (that is, the number of fingers that made contact with the objects) while maintaining fixation. This combination of stimuli and task was previously shown to elicit robust fMRI activation in both the dorsal and ventral visual streams (Shmuelof & Zohary, 2005). We verified, using online eye tracking (as well as post hoc analysis), that, following a change in the location of the fixation point, gaze was redirected to the new position and that the subjects maintained stable and unbiased fixation during the block conditions. 
Regions of interest
An ANOVA performed on the whole brain data of all subjects (random effects, aligned in Talairach space) revealed that while retinotopically selective voxels were abundant in the occipital and parietal cortices, no voxels were sensitive to the spatiotopic location of the stimuli. However, such spatiotopic sensitivity might be annulled by intersubject anatomical differences. We therefore defined functional regions of interest (ROIs) using an independent localizer scan. This scan included similar object manipulation video clips, presented to the left or right of the fixation point. Activation during these conditions was contrasted against activation during the observation of scrambled versions of these clips in order to filter out early visual areas. This resulted in three distinct clusters of activation in the parietal cortex (in each hemisphere), mainly inside the intraparietal sulcus (IPS), termed posterior, middle, and anterior IPS (pIPS, mIPS, and aIPS, respectively) according to their relative position within the IPS: The pIPS is located near the parieto-occipital sulcus (POS) and area V7, while the mIPS is located anterior to the pIPS along the IPS, extending dorsally (into the superior parietal lobule, SPL). The aIPS lies in the most anterior portion of the IPS, at the junction of the IPS and the postcentral sulcus. These areas are known to be important for visuomotor interaction and are specifically active during object manipulation, grasping, reaching, eye movements, and action observation (for a review, see Culham & Valyear, 2006). The same contrast yielded wide bilateral activation in the lateral occipital cortex, namely, in the middle occipital gyrus, the lateral occipital sulcus, and the posterior inferior temporal sulcus. 
This region roughly corresponds to area hMT+ and the extrastriate body area (EBA), which participate in the representation of visual motion (Tootell et al., 1995) and body parts (Downing, Jiang, Shuman, & Kanwisher, 2001; for a review, see Peelen & Downing, 2007), respectively. In this study, hMT+ was found to be partly overlapping with, and encapsulated by, the EBA, as previously reported elsewhere (Weiner & Grill-Spector, 2011). Under more lenient thresholds (never exceeding the threshold of q(FDR) < 0.05), the activation extended ventrally to the occipitotemporal sulcus (see Figure 3). We elaborate on the location and possible roles of the occipital and parietal areas in the Discussion section. Finally, activations were found bilaterally in the dorsal premotor area (dPM) and in the supplementary motor area (SMA). These areas, together with the IPS, are often regarded as the frontoparietal attention network (Corbetta et al., 1998; Thompson & Bichot, 2005). Importantly, similar areas have recently been suggested to play a role in the coordination of reaching and grasping (Cavina-Pratesi et al., 2010). Our video clips consisted of hands grasping objects in different postures, including the final approach of the hand toward the object. As such, it is likely that both reaching and grasping circuits are utilized during action observation. A full analysis of the frontal area data can be found in the Supplementary materials. To control for the effect of ROI size on the MVPA results, we defined additional parietal ROIs of 1500 mm3 in size, each encapsulating one of the original parietal ROIs. The results from these ROIs were not significantly different from those of the 750-mm3 ROIs, and therefore, only the latter will be discussed below. The Talairach coordinates of all ROIs and further details are given in Table 1. 
Figure 3
 
The ventral region of interest and its spatial relationship to other specific regions within MOG/pITS. The selected voxels in the MOG/pITS (shown in black lines; 3000-mm3 ROIs, grasping videos > scrambled videos) are shown together with regions representing motion (red, contralateral activation; blue, ipsilateral activation; moving dots > static dots), hands (hands > scrambled hands ∩ hands > objects; yellow), and objects (objects > scrambled objects ∩ objects > hands; green) in two representative subjects (S1 and S2) from a posterior lateral view. Orange voxels were active both during random dot motion and arms presentation. Purple voxels were active both during the presentation of moving stimuli in the contralateral and in the ipsilateral hemifields. STS denotes the superior temporal sulcus.
Contralateral retinotopic representation along the IPS
Mean fMRI activation was computed for each ROI, for each of the six conditions. We previously reported that, in general, the IPS is sensitive to the visual field position of the stimulus during action observation (Shmuelof & Zohary, 2006). With the current task, a dissociation between retinotopic and spatiotopic effects could be achieved. The results of this analysis for the regions defined along the IPS are displayed (separately for each hemisphere) in Figures 4A–4F. The most dominant feature, determining the level of activation in both hemispheres, is the laterality of the stimulus in retinotopic coordinates: Contralateral stimuli evoke greater fMRI activation than ipsilateral ones (thus causing the reversal of preferred stimuli between the two hemispheres). To assess this quantitatively, we performed a two-way repeated measures ANOVA using the activation level (beta weight) for the four central conditions (conditions 2, 3, 4, and 5 in Figure 1B). The first factor was the retinotopic location of the stimulus (i.e., left/right of the fixation point) and the second factor was its spatiotopic location (i.e., left/right of the central fixation point). All parietal regions exhibited a clear retina-based contralateral preference, in terms of their regional mean activation level, as can be seen in Figures 4A–4F [left pIPS, F(1,12) = 82.24, p < 0.001; right pIPS, F(1,12) = 58.53, p < 0.001; left mIPS, F(1,14) = 53.01, p < 0.001; right mIPS, F(1,14) = 53.46, p < 0.001; left aIPS, F(1,12) = 7.36, p = 0.019; right aIPS, F(1,8) = 21.58, p = 0.001]. None of the above regions showed any effect of the spatiotopic position on the level of fMRI response (p > 0.42) or any interaction between the retinotopic and spatiotopic locations of the stimuli (p > 0.08). These results suggest that the representation in these regions is retinotopic. 
However, the size of the retinotopic effect (i.e., the relative difference in activation level between the retinotopically different conditions) declines systematically as one moves from posterior to anterior regions (see Figure 4). This was confirmed in two additional 3 × 2 × 2 repeated measures ANOVAs performed on all three parietal ROIs in each hemisphere (significant ROI × retinotopic interaction; left hemisphere, F(2,10) = 16.05, p = 0.001; right hemisphere, F(2,10) = 11.05, p = 0.003). Notably, this is observed without any indication of a spatiotopic effect or interaction (p > 0.55). That is, while the retinotopic location of the stimulus accounts for less of the variance in the elicited signal in the anterior IPS, it is not replaced by a spatiotopic representation. However, it is still possible that spatiotopic sensitivity is only apparent at a finer spatial scale or in the multivoxel pattern of responses, which has previously been shown to reveal subtle visual information unseen in univoxel analysis (Kamitani & Tong, 2005). 
Figure 4
 
Mean activation level for the various conditions in the IPS. As in Figure 2, the icons below the bars depict the specific stimulus location (X) and the fixation point (in red) in each of the 6 conditions. (A, B) Mean beta values of each of the six task conditions in the left and right aIPS, respectively. (C, D) Mean beta values of each of the six task conditions in the left and right mIPS, respectively. (E, F) Mean beta values of each of the six task conditions in the left and right pIPS, respectively. In all cases, error bars denote SEM. The icons at the central column show a horizontal slice of the brain of one representative subject with an overlaid activity map (S4 for aIPS, S5 for mIPS, and S6 for pIPS). Arrows point to the specific region of interest.
Multivoxel pattern analysis
To further study whether any of these regions carried information about the location of the stimulus in various coordinate frames, we analyzed the multivoxel activation patterns elicited by the different conditions using multivoxel pattern analysis (MVPA; Haxby et al., 2001). We reasoned that areas mapped in a retinotopic coordinate frame would show higher correlations between the multivoxel patterns of conditions sharing the same retinotopic location, irrespective of the position of the stimuli on the screen. In contrast, areas that encode information in spatiotopic coordinates should reveal similar response patterns for stimuli that share the same screen position (note that these two options are by no means mutually exclusive). This analysis method complements the conventional one reported above, as it is not sensitive to the overall level of response across the region (which is subtracted from the responses of all the voxels) but rather to the differential response between voxels. It also allows for a more careful estimation of the similarity between the patterns of fMRI activation evoked by the different conditions (as described below). 
Data were therefore divided into two independent data sets, each containing an equal number of repetitions of each condition, taken from different scans. In each ROI, the multivoxel patterns of responses for each pair of conditions were correlated to generate a similarity index for these conditions (Dinstein et al., 2008; Haxby et al., 2001; Williams et al., 2007). Specifically, each of the six conditions was represented by a vector comprised of the response values (beta) of all the voxels within the ROI. This response vector was computed separately for the two independent data sets (scans), thus allowing for thirty-six distinctive correlation values of similarity between all possible pairs (see Material and methods section). We divided the obtained similarity indices into four distinct categories (see Figure 2A): (1) conditions in which the stimuli appeared in the same retinotopic and the same spatiotopic (screen) location (termed RS), (2) conditions in which the stimuli shared the same retinotopic location (R) but had different spatiotopic locations, (3) conditions in which they appeared in the same spatiotopic location (S) but at different retinotopic locations, and (4) conditions in which the stimuli appeared at different retinotopic and spatiotopic locations (D). 
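A minimal sketch of this similarity computation, assuming NumPy. Whether the mean is removed per voxel across conditions (as in the cocktail-blank approach of Haxby et al., 2001) or as a regional mean is an implementation detail not fully specified here; the per-voxel variant below is one plausible reading:

```python
import numpy as np

def similarity_matrix(betas_run1, betas_run2):
    """Pearson-correlate the multivoxel response pattern of every condition
    in one data set with every condition in the other, after removing each
    voxel's mean response across conditions (so the index reflects the
    differential pattern rather than the overall activation level).

    betas_run1, betas_run2 : arrays of shape (n_conditions, n_voxels)
    Returns an (n_conditions, n_conditions) matrix of similarity indices.
    """
    a = betas_run1 - betas_run1.mean(axis=0)  # demean each voxel across conditions
    b = betas_run2 - betas_run2.mean(axis=0)
    n_cond = a.shape[0]
    sim = np.empty((n_cond, n_cond))
    for i in range(n_cond):
        for j in range(n_cond):
            sim[i, j] = np.corrcoef(a[i], b[j])[0, 1]
    return sim
```

The diagonal of the resulting 6 × 6 matrix holds the RS comparisons (each condition correlated with itself across the two runs); the off-diagonal entries are sorted into the R, S, and D categories.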
The similarity between pairs of multivoxel patterns of responses may be affected by covarying factors other than the stimulus location in the two coordinate frames. One clear candidate is a possible gaze effect (e.g., the distance between gaze positions in the compared conditions). Different gaze directions not only may affect the response directly (indicating gaze position sensitivity at the voxel level) but also may affect it indirectly, since diverse gaze positions are necessarily associated with dissimilar visual backgrounds of the scanner surround (extending beyond the projection zone seen through the mirror). To minimize this problem, the three categories differing in spatiotopic or retinotopic position (categories R, S, and D) were matched in that they included only comparisons between conditions in which gaze was directed to neighboring fixation points (i.e., distance level 1, or 6.3°; see Material and methods section). Thus, some of the possible comparisons in categories R and D were ruled out due to a greater or smaller distance between fixation points. Category RS necessarily included only comparisons between pairs of conditions in which gaze was at the same position in the two conditions. A complete diagram of the analyzed comparisons is shown in the Material and methods section (Figure 2B; see also a detailed discussion of this issue in the Supplementary materials). 
Figures 5A–5F depict the mean similarity index in both hemispheres of all areas along the IPS (averaged across subjects) for each of the four categories. In all areas (in both hemispheres), the similarity index was positive between condition pairs that shared the same retinotopic location (RS and R) and negative for pairs of conditions with different retinotopic locations (S and D), suggesting that retinotopic position was a major factor in determining the patterns of activation. A 2 × 2 repeated measures ANOVA (retinotopic stimulus location × spatiotopic stimulus location) performed on the similarity indices confirmed this: a strong retinotopic effect was found bilaterally in the pIPS [left pIPS, F(1,12) = 29.75, p < 0.001; right pIPS, F(1,12) = 77.74, p < 0.001], the mIPS [left mIPS, F(1,14) = 13.48, p = 0.003; right mIPS, F(1,14) = 28.67, p < 0.001], and the aIPS [left aIPS, F(1,12) = 12.03, p = 0.005; right aIPS, F(1,11) = 9.92, p = 0.009], whereas no spatiotopic effect was found (p > 0.25 in all ROIs). Importantly, if the spatiotopic coordinates of the stimulus affect the multivoxel fMRI signal, on top of the dominant retinotopic representation, the similarity index for categories RS and S should be greater than that for categories R and D, respectively. We therefore performed additional statistics (two-tailed t-tests, Bonferroni corrected for multiple comparisons). These tests revealed no significant difference between comparisons of the same retinotopic and spatiotopic locations (RS) and those with the same retinotopic location but different spatiotopic position (R; p > 0.12 for all areas). 
In addition, there was no significant difference between the similarity index of pairs of conditions having different retinotopic but the same spatiotopic location (S) and condition pairs having different retinotopic and different spatiotopic locations (D; p > 0.25 for all areas). This latter result is important, since if the similarity indices for group S were significantly higher than those of group D, this would suggest a subtle, multiplexed effect of the spatiotopic location on the fMRI signal. Such an effect, had it existed, could be overlooked when inspecting only the mean activation level across all voxels within the ROI. Finally, no interaction between the retinotopic and spatiotopic locations was found in any of the ROIs (p > 0.09 for all regions), with the exception of a marginal effect in the left aIPS [F(1,9) = 5.09, p = 0.05]. Indeed, in this region, the average correlation for pairs belonging to category RS was somewhat higher than that for category R (p = 0.12), suggesting that the left aIPS may be more prone to gaze position effects than the other regions (see Supplementary materials for more details). Taken together, these results point toward a retinotopic representation of viewed object manipulation along the IPS. 
Figure 5
 
MVPA results for the various comparison groups in the IPS. (A, B) Similarity indices of the four different comparison types in the left and right aIPS, respectively. (C, D) Similarity indices of the four different comparison types in the left and right mIPS, respectively. (E, F) Similarity indices of the four different comparison types in the left and right pIPS, respectively. In all cases, error bars denote SEM. The icons at the central column show a horizontal slice of the brain of one representative subject with an overlaid activity map (S4 for aIPS, S5 for mIPS, and S6 for pIPS). Arrows denote activation in the specific region of interest.
Two points can be made regarding the retinotopic effects found in the abovementioned analysis: First, while inspection of the mean ROI activation in the left aIPS revealed weaker retinotopic effects, the MVPA shows that sensitivity to the retinotopic location of the stimuli does exist in this region and that it is comparable to that of its right-hemisphere counterpart. Second, and not surprisingly, as in the mean ROI signal analysis, the retinotopic effect was strongest in the posterior parts of the IPS and declined systematically along the posterior–anterior axis, becoming only marginally significant in the left aIPS. This can be seen in Figure 5, in which the difference between the combined values of conditions R and RS and the combined values of conditions D and S is greater in the more posterior regions. Again, this was confirmed in two additional 3 × 2 × 2 repeated measures ANOVAs performed on all three parietal ROIs in each hemisphere (significant ROI × retinotopic interaction; left hemisphere, F(2,10) = 8.89, p = 0.006; right hemisphere, F(2,10) = 15.53, p = 0.001). As was the case in the counterpart analysis of the averaged ROI signal, no indication of a spatiotopic effect across the parietal regions was found (p > 0.36). Figure 6 illustrates this point by displaying the spatial distribution of activation from one selected condition in the first data set (left column) against the spatial patterns of activation from the second data set for an identical retinotopic condition (central column) and for the same spatiotopic condition (rightmost column). Note that the matched retinotopic conditions generate similar patterns of activation (coloring in the figure) while the spatiotopic-matched conditions generate a very different pattern of activation in both exemplary subjects (upper and lower rows). However, the difference in the activation level between opposite retinotopic locations is greater in the more posterior areas. 
Thus, the data suggest that the retinotopic location of the stimuli carries less weight in explaining the elicited BOLD signal in the anterior IPS than in the more posterior parts of the IPS. We elaborate more on this in the Discussion section. 
Figure 6
 
Illustrative examples of the spatial patterns of voxel activation in the parietal regions of interest during various stimulus conditions, depicted from a dorsal view of the inflated brain in two representative subjects (S1: top row and S3: bottom row). The color scale denotes the normalized activation level after subtraction of the average voxel activation across all stimulus conditions. Left column: The pattern of activation in condition 3 (denoted by X in the screen icon, with central fixation), taken from data set I, showing contralateral preference (i.e., orange colors in the right hemisphere and dark green in the left hemisphere). Middle column: The pattern of activation elicited in the retinotopically matching condition 5, taken from data set II, showing a similar contralateral preference (note the similarity of the color patterns in each subject to that in the leftmost column). Right column: The patterns of activation in the spatiotopically matching (but retinotopically opposite) condition 2 (indicated by the same position of X in the screen icon), taken from data set II, showing quite the opposite preference. Generally, the contralateral retinotopic preference was more pronounced in the more posterior regions. PCS denotes the postcentral sulcus.
Retinotopic effect in the occipitotemporal cortex
The same analysis was applied to the ventral pathway regions (MOG/pITS). When investigating the overall regional response level, a clear (contralateral) retinotopic main effect was found in both hemispheres of the MOG/pITS [left MOG/pITS, F(1,14) = 114.90, p < 0.001; right MOG/pITS, F(1,14) = 103.82, p < 0.001], while no evidence for a spatiotopic main effect was found in these areas (p > 0.44). Figures 7A and 7B show the mean activation level in the different conditions for the left and right MOG/pITS, respectively. The analysis of similarity between the various response patterns yielded similar results, which are depicted in Figures 7C and 7D (left and right hemispheres, respectively). Both hemispheres showed strong positive correlations between conditions sharing the same retinotopic location (RS and R) and negative correlations between conditions with different retinotopic locations (either S or D). This was clearly demonstrated by the strong retinotopic effect found in the 2 × 2 repeated measures ANOVA (retinotopic stimulus location × spatiotopic stimulus location): left MOG/pITS, F(1,14) = 59.34, p < 0.001; right MOG/pITS, F(1,14) = 55.99, p < 0.001. No spatiotopic effect (p > 0.69) was found in either hemisphere. 
Figure 7
 
MOG/pITS results. (A, B) Mean activation level (betas) of each of the six task conditions in the left and right LOC, respectively. The pattern of activity suggests contralateral retinal representation of the stimuli. (C, D) Similarity indices of the four different comparison types in the left and right LOC, respectively. In all cases, error bars denote SEM. The icon at the central column shows a horizontal slice of the brain of one representative subject with an overlaid activity map (Subject S1).
As was the case in the parietal areas, no significant difference was found between RS and R, or between S and D, indicating that the fMRI activation in the MOG/pITS during observation of actions is not modulated by the spatiotopic location of the stimulus (post hoc t-tests, Bonferroni corrected for multiple comparisons). The left MOG/pITS exhibited some difference between the R and RS similarity indices (p = 0.06), while no such difference was found in the right MOG/pITS. As in the pIPS, an interaction effect between the retinotopic and spatiotopic locations was found in this region [F(1,14) = 8.21, p = 0.012], indicating that this difference is likely due to differences related to gaze position (see Supplementary materials). 
The spread of activation was generally much larger in the occipital parts of the visual system than in their parietal counterparts, under the same statistical threshold. This allowed us to define larger ROIs of 3000 mm3 in size, in addition to the MOG/pITS ROIs described above. Averaged signal analysis yielded a strong retinotopic effect [left MOG/pITS, F(1,14) = 122.04, p < 0.001; right MOG/pITS, F(1,14) = 114.78, p < 0.001] and no spatiotopic effect [left MOG/pITS, F(1,14) = 0.89, p = 0.362; right MOG/pITS, F(1,14) = 0.07, p = 0.793]. The MVPA showed the same result with a strong retinotopic effect [left MOG/pITS, F(1,14) = 65.08, p < 0.001; right MOG/pITS, F(1,14) = 73.66, p < 0.001] and no spatiotopic effect [left MOG/pITS, F(1,14) = 2.21, p = 0.160; right MOG/pITS, F(1,14) = 0.14, p = 0.718]. 
These results show that, as a whole, the MOG/pITS was very sensitive to the retinotopic location of the stimuli and not sensitive to their spatiotopic location. To better estimate the variability of the retinotopic sensitivity within the area, we calculated the activation amplitude (beta) for each voxel in each of the six conditions, as was done for all other ROIs. Figure 8 shows these values plotted on the cortical map of one subject (additional MVPA maps can be found in the Supplementary materials, Figure S9). As can be seen, the difference between the activation amplitudes of conditions in which the stimuli appeared on opposite retinotopic sides (expressed by a greater change in color in this figure) is greater in the posterior voxels than in the anterior ones. In fact, some of the voxels in the proximity of the middle temporal sulcus show similar activation values regardless of the retinotopic location of the stimuli, consistent with the existence of large receptive fields that may cover parts of the ipsilateral hemifield (Huk et al., 2002). To test whether this apparent gradient is statistically significant, we divided the MOG/pITS ROI (in each hemisphere) into anterior and posterior portions, separated at the median Y coordinate of the ROI (separately for each subject). Next, we performed a repeated measures ANOVA that yielded a significant anteriority × retinotopic interaction effect [left MOG/pITS, F(1,14) = 56.39, p < 0.001; right MOG/pITS, F(1,14) = 95.01, p < 0.001]. Specifically, this analysis showed that the retinotopic sensitivity was greater in the posterior parts of the MOG/pITS ROI (main effects of F(1,14) = 148.41, p < 0.001 for the left posterior part; F(1,14) = 140.83, p < 0.001 for the right posterior part) than in the anterior parts (main effects of F(1,14) = 78.52, p < 0.001 for the left anterior part; F(1,14) = 66, p < 0.001 for the right anterior part) and is thus consistent with the abovementioned literature. 
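The posterior/anterior split at the median Y coordinate might look like the following sketch (in Talairach space, more negative Y is more posterior; the function name and interface are illustrative):

```python
import numpy as np

def split_roi_by_anteriority(voxel_coords_y, voxel_betas):
    """Split an ROI's voxels into posterior and anterior halves at the
    subject-specific median Talairach Y coordinate (more negative Y is
    more posterior).

    voxel_coords_y : (n_voxels,) Y coordinate of each voxel
    voxel_betas    : (n_conditions, n_voxels) activation amplitudes
    Returns (posterior_betas, anterior_betas).
    """
    y = np.asarray(voxel_coords_y)
    median_y = np.median(y)
    posterior = voxel_betas[:, y < median_y]   # voxels behind the median
    anterior = voxel_betas[:, y >= median_y]   # voxels at or ahead of it
    return posterior, anterior
```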
Figure 8
 
Spatial patterns of voxel activation in the MOG/pITS region in one representative subject (S1) shown from a lateral posterior viewpoint of a partially inflated left hemisphere. Top row: Patterns of activation in conditions 1, 3, and 5, which share the same retinotopic position (left side). Bottom row: Patterns of activation in conditions 2, 4, and 6 (retinotopic right). The two middle columns therefore represent conditions that match spatiotopically (left, 2 and 3; right, 4 and 5) but are in symmetrically opposite sides in their retinotopic location. Note the clear reversal of colors between the top and bottom spatial patterns. This clearly demonstrates that, as in the lateral occipital cortex, the location of the image in retinotopic coordinates governs the response in the ventral regions, although once again the retinotopic preference is stronger in the more posterior parts. STS denotes the superior temporal sulcus.
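The anterior/posterior median split used in this analysis can be sketched as follows. This is a toy NumPy illustration, not the study's data: the voxel coordinates, effect sizes, and the posterior-to-anterior decline are all hypothetical, chosen only to show the mechanics of splitting an ROI at its median Talairach Y coordinate and comparing the retinotopic effect in the two halves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Talairach Y coordinates for one subject's ROI voxels
# (more negative Y = more posterior).
y = rng.uniform(-95.0, -55.0, size=500)

# Hypothetical per-voxel retinotopic effect (contralateral minus
# ipsilateral beta), assumed here to weaken toward anterior voxels.
effect = 0.02 * (-y) + rng.normal(0.0, 0.3, size=y.size)

# Split the ROI at this subject's median Y coordinate, as in the paper.
median_y = np.median(y)
posterior = y < median_y
anterior = ~posterior

# The posterior half shows the larger mean retinotopic effect.
print(effect[posterior].mean() > effect[anterior].mean())  # True
```

In the actual analysis this per-subject split feeds a repeated measures ANOVA across subjects; the sketch stops at the descriptive comparison.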
Together, these results corroborate earlier work suggesting that the relevant coordinates of mapping in LOC are strictly retinotopic (Gardner, Merriam, Movshon, & Heeger, 2008; Hemond, Kanwisher, & Op de Beeck, 2007; Larsson & Heeger, 2006). They are somewhat at odds with a previous study from our group (McKyton & Zohary, 2007), which indicated that the stimulus' spatiotopic coordinates affect the degree of fMRI adaptation in LOC, albeit in more ventral areas. We address this issue in detail in the Discussion section. 
Discussion
We investigated whether the encoding of information in the human visual pathways is in a retinotopic or a spatiotopic coordinate system. Displaying video clips of hands grasping objects enabled us to effectively activate and study both ventral and dorsal visual areas. In all regions of interest, in both pathways, the retinotopic location was the dominant factor determining the similarity between the multivoxel patterns of response. In contrast, no trace of enhanced pattern similarity could be found when the stimuli appeared at the same location on the screen. These results indicate that the dominant representation (at least at a coarse scale) for the perception of object manipulation, throughout the visual system, is likely to be retinotopic. Note that planning or initiation of (ocular or limb) movements toward the objects of interest may elicit different results. In the following discussion, we interpret these results in light of previous findings, focusing separately on the dorsal and ventral pathways. 
Parietal activity
The parietal areas reported here are known to play a key role in visuomotor interaction and sensory integration and, more specifically, in grasping, reaching, and eye movements (for a review, see Culham, Cavina-Pratesi, & Singhal, 2006). Activation in the pIPS was previously associated with reaching (Gallivan, Cavina-Pratesi, & Culham, 2009), representation of body parts (Weiner & Grill-Spector, 2011), observation of object grasping, or even viewing graspable items without any action (Creem-Regehr & Lee, 2005; Rice, Valyear, Goodale, Milner, & Culham, 2007). The region termed mIPS is in proximity to areas previously shown to be active during grasping observation, especially when the task required specific attention to the number of fingers engaged in the action (Shmuelof & Zohary, 2005). Activation in parts of this region was previously reported in tasks involving pointing to targets (Astafiev et al., 2003), as well as visually guided reaching for grasping (with activation extending along the anterior axis into aIPS; Hinkley, Krubitzer, Padberg, & Disbrow, 2009). The aIPS was previously found to be active during grasping (Cavina-Pratesi et al., 2010; Culham et al., 2003; Frey et al., 2005; Hinkley et al., 2009) and also during grasping observation (Shmuelof & Zohary, 2005, 2006, 2008). The regions of parietal activation in our study also partially overlap with areas involved in spatial attention (Saygin & Sereno, 2008; Silver et al., 2005), as well as in planning and execution of saccadic eye movements (Schluppeck et al., 2005; Swisher et al., 2007). Since peripheral attention was necessary to perform the current task, it is conceivable that some of the activation reported here was indeed attention related: object-oriented actions are most often coupled with visual attention in everyday situations (Table S1 in the Supplementary materials lists previously mapped regions that approximately match the foci of parietal and frontal activation found here). 
A representation of viewed actions in (at least) a head-centered coordinate frame may be useful, given the complex nature of visuomotor processes. Alternatively, representations can be held in retina-centered coordinates and calculations can be performed on demand when performing an action (Buneo, Jarvis, Batista, & Andersen, 2002; Henriques, Klier, Smith, Lowy, & Crawford, 1998). Previous human imaging studies found posterior parietal areas to be organized according to a topographic mapping principle (e.g., Schluppeck et al., 2005; Sereno et al., 2001; Silver et al., 2005; Swisher et al., 2007), but their experimental design precluded dissociation between retinotopic and spatiotopic reference frames. 
Our initial results, using conventional analysis methods (i.e., whole-brain analysis and averaged ROI activation), showed no sign of spatiotopic encoding. However, spatiotopic mapping effects may go undetected, as they might be averaged out by the conventional analysis techniques. We therefore applied multivoxel pattern analysis (MVPA) methods that take advantage of intervoxel differences within a region (although MVPA does not necessarily derive its power from an organization principle at subvoxel resolution; see Op de Beeck, 2010). The MVPA yielded results similar to those obtained by the conventional methods: positive correlations between conditions with the same retinotopic location (regardless of their spatiotopic location). The fact that the MVPA failed to detect spatiotopic sensitivity in the various parietal ROIs suggests that spatiotopic modulations during action observation are not strong, at least at the (macroscopic) millimeter scale characteristic of functional imaging. It is possible, however, that spatiotopic representations emerge only when the location of the viewed action is behaviorally important or when actions are explicitly programmed toward that location. 
The interpretation of negative similarity indices
Our MVPA typically yielded negative correlations between conditions sharing the same spatiotopic locations. Negative similarity indices have also been found in other MVPA studies (for example, Haxby et al., 2001; Williams et al., 2007). While a positive correlation between conditions can be taken as an indicator of similar brain activity (at a coarse scale), the interpretation of negative correlations is somewhat ambiguous. Note that, in the process of analysis, the response estimates (beta values) were normalized by subtracting the mean response level (i.e., the average beta, across conditions) from the obtained betas. This procedure was implemented to emphasize intercondition differences in the multivoxel response patterns and to ignore intervoxel differences in activation level regardless of the specific condition (see Material and methods section). However, it has some drawbacks: Consider a hypothetical situation in which some conditions generate robust BOLD activation while others do not (responses more or less around zero). Subtracting each voxel's mean response (across conditions) yields negative values for the latter conditions, especially in the most active voxels. As a result, the correlation between the normalized patterns of activation for conditions eliciting positive betas and those eliciting zero activation is likely to become negative, although the two conditions do not elicit opposite patterns of activation. Our experimental design may have accentuated such an effect: The similarity indices for categories S and D were all based on comparisons between conditions in which the video clips were presented in opposite retinotopic locations. As neurons typically have a strong preference for the contralateral side, the response at the voxel level is likely to mirror this. Thus, comparisons between conditions of opposite retinotopic locations are likely to yield a negative similarity index. 
We therefore do not assign much importance to the level of correlation per grouped comparisons but to the difference between these correlation values across groups (R, RS, S, and D). 
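The artifact described above is easy to reproduce in a toy simulation (entirely hypothetical betas, not the study's data): two conditions driving the same voxels correlate positively after per-voxel mean subtraction, while a condition that elicits roughly zero response correlates negatively with them, even though it is not their "opposite".

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical per-voxel response gains for stimuli presented on the
# voxels' preferred (contralateral) retinotopic side.
gain = rng.uniform(0.5, 2.0, size=n_voxels)

# Betas for three conditions: two drive the voxels (same retinotopic
# side), one elicits roughly zero response (opposite side).
contra_1 = gain + rng.normal(0.0, 0.1, n_voxels)
contra_2 = gain + rng.normal(0.0, 0.1, n_voxels)
ipsi = rng.normal(0.0, 0.1, n_voxels)
betas = np.stack([contra_1, contra_2, ipsi])  # conditions x voxels

# Normalization used in the paper: subtract each voxel's mean beta
# across conditions, then correlate the residual spatial patterns.
normalized = betas - betas.mean(axis=0)

r_same = np.corrcoef(normalized[0], normalized[1])[0, 1]
r_diff = np.corrcoef(normalized[0], normalized[2])[0, 1]
print(r_same > 0, r_diff < 0)  # True True
```

The zero-response condition is pushed below zero in the most active voxels by the mean subtraction, which is why r_diff comes out negative without any "anti-activation" in the data; this is the reason we read differences between the group correlations rather than their absolute levels.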
A gradient of the retinotopic effect strength
The immediate observation is that the retinotopic location of the stimulus played a substantial role in determining the level of activation, as well as the degree of similarity of multivoxel patterns, throughout the parietal cortex. However, we found a clear gradient in the strength of the retinotopic effect: It was more prominent in the posterior than in the anterior parts of the parietal cortex. This matches the posterior-to-anterior gradient of the visual field effect observed in a previous experiment involving visually guided object grasping (Stark & Zohary, 2008). It also accords with the existence of clear retinotopic maps in the more posterior parts of the parietal cortex in the context of attention and saccade preparation (Corbetta et al., 1998; Silver et al., 2005; Swisher et al., 2007). Note that, in the current experiment, the reduction in the magnitude of the retinotopic effect was observed without any evidence for an emerging non-retinal (spatiotopic) effect. Thus, factors other than the spatiotopic location of the stimuli are likely to affect the observed BOLD response. For example, we have previously shown that, during visually guided object grasping, the identity of the grasping hand (left or right) becomes more prominent in determining the BOLD response the more anterior the region is along the IPS (Stark & Zohary, 2008). The identity of the grasping hand was also shown to affect the BOLD signal in the anterior-most parts of the IPS, but not in more posterior areas, during grasping observation (Shmuelof & Zohary, 2006). Further experiments are needed to determine the relative weights of the various non-retinotopic factors that affect the BOLD signal in the parietal cortex during action observation. 
Relation to previous research in non-human primates
The current parietal regions lie within or near areas that have been previously identified as the human homologues to known areas in primates that participate in various aspects of visual and visuomotor processing: The pIPS is a good candidate for the human parallel of area DP (Tootell et al., 1998), in which reach-related activity has been found in the monkey (Heider, Karnik, Ramalingam, & Siegel, 2010). The aIPS roughly covers areas that parallel VIP and AIP in monkey and is sensitive to multimodal representations of position (Sereno & Huang, 2006) and tool manipulation and grasping (Culham et al., 2003; Grefkes, Weiss, Zilles, & Fink, 2002). The potential human analogue of the parietal reach region (PRR; Filimon, Nelson, Huang, & Sereno, 2009) falls within the boundaries of aIPS as well, extending into mIPS (for full reviews, see Culham & Valyear, 2006; Silver & Kastner, 2009). It is important to mention, however, that the functional mapping of primate parietal regions onto the human brain is not conclusive. This is especially true in the case of mIPS that lies posterior to aIPS but anterior to regions that are associated with LIP in the macaque. It is likely that mIPS covers both the human homologues of VIP and LIP (Astafiev et al., 2003). It is also possible that this region does not have a direct primate counterpart, as suggested elsewhere (Silver & Kastner, 2009). Both aIPS and mIPS have been shown to be functionally connected with the dorsal premotor cortex and hMT+ in humans, in resemblance to cortical connections that exist in the macaque (Mars et al., 2011). 
Evidence from single neuron recordings in the parietal cortex of monkeys suggests that both retinotopic and non-retinotopic factors determine their activity. Specifically, a considerable proportion of neurons in the posterior parietal cortex (∼50%) is gain-modulated by the direction of gaze (Andersen, Bracewell, Barash, Gnadt, & Fogassi, 1990) or subserves a hybrid retino-centric and head-centric representation (Mullette-Gillman, Cohen, & Groh, 2005, 2009). The activity of some neurons in area VIP in the macaque is best explained by non-retinotopic representation of space (Duhamel, Bremmer, BenHamed, & Graf, 1997). Furthermore, the response of neurons in the posterior parts of the parietal cortex that are involved in reaching is best explained by the existence of both hand-centered and eye-centered coordinate frames (Buneo et al., 2002). How, then, does one explain the exclusive sensitivity to retinal factors in our study? 
Representations of space: A matter of scale
One possible explanation is that spatiotopic properties of neurons are simply not evident at the voxel level because of the huge difference in spatial scale between the two techniques (a voxel's activation indirectly reflects the pooled activity of ∼1 million neurons). Even if many parietal neurons are sensitive to the subject's gaze direction, in addition to their more dominant retinotopic sensitivity (Andersen et al., 1990; Andersen, Essick, & Siegel, 1985), no gaze-dependent modulation will be observed at the voxel level unless such neurons are topographically organized in large clusters sharing the same gain fields of signal modulation. We do, in fact, find a clear gaze modulation effect in the parietal cortex. However, this effect is even stronger in the calcarine sulcus, suggesting that it is probably due to changes in visual features outside the display area (inside the scanner bore), which depend on the gaze position, rather than true sensitivity to gaze angle (see Supplementary materials for detailed discussion). 
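This spatial-scale argument can be made concrete with a toy pooling simulation (all parameter values hypothetical): each simulated neuron is gain-modulated by gaze, but when the sign of the modulation varies randomly across the many neurons pooled into a voxel, the voxel-level gaze dependence largely cancels out; when the gain fields are clustered (same sign within the voxel), the modulation survives pooling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100_000  # order of magnitude of neurons pooled into a voxel

# Hypothetical baseline firing plus a gaze-dependent gain per neuron.
baseline = rng.uniform(5.0, 15.0, n_neurons)
gain = rng.normal(0.0, 0.3, n_neurons)  # random-sign gain fields

def voxel_response(gaze, gains):
    """Pooled (summed) response of all neurons at a given gaze angle."""
    return np.sum(baseline * (1.0 + gains * gaze))

# Mixed gain fields: gaze modulation averages out across the population.
mixed = abs(voxel_response(+1.0, gain) - voxel_response(-1.0, gain))

# Clustered gain fields (all the same sign): modulation survives pooling.
clustered = abs(voxel_response(+1.0, np.abs(gain)) -
                voxel_response(-1.0, np.abs(gain)))

print(mixed / clustered < 0.05)  # True
```

The residual "mixed" modulation shrinks roughly as 1/sqrt(n_neurons), which is why strong single-neuron gain fields can leave essentially no trace at the millimeter scale of fMRI unless they are spatially clustered.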
Action preparation vs. passive observation
It is also possible that non-retinal representations are evoked only under certain behavioral requirements. Our recent work (Pertzov et al., 2011) indicates that spatiotopic mapping may be stronger when considering oculomotor responses to memorized locations. That study required participants to plan eye movements in a number of different directions from various starting points. Using MVPA techniques similar to those in the present study, a parietal region (which generally corresponds to mIPS; Talairach coordinates: x = −23, y = −67, z = 44; x = 24, y = −66, z = 45) was found to be sensitive both to the impending saccade vector (i.e., its angle and amplitude from the current eye position) and to its endpoint (regardless of the current eye position). Thus, while eye-centered coordinates of the target location were the most dominant factor determining the similarity of multivoxel response patterns, head-centered (or possibly body- or world-centered) effects were also significant. Other human imaging studies involving motor preparation found that BOLD activity is modulated by eye position (DeSouza et al., 2000; Medendorp, Goltz, Vilis, & Crawford, 2003) and by head position relative to the body (Brotchie et al., 2003). Collectively, these results suggest that preparation of action (which is usually associated with an initial allocation of spatial attention to the saccadic target) may be an important component in evoking non-retinal responses. 
Our task required no eye movements and no arm movements per se. It is plausible that the observation of grasping movements activates parts of the parietal cortex that are also active during the act of grasping. However, during action viewing, in the absence of self-action, the retinal position of the object may suffice for its accurate representation. Similarly, when no saccades are made or planned, there is no need to remap the world, taking into account the eye's future position. This distinction between encoding for action vs. encoding for perception without action seems to be corroborated by recent results from behavioral studies: In strictly perceptual tasks, attention seems to be encoded in a retinotopic coordinate frame (Golomb, Chun, & Mazer, 2008), while in similar tasks requiring motor responses to the object of interest (e.g., saccades) spatiotopic effects can be clearly observed (Pertzov, Zohary, & Avidan, 2010). Recent work on the nature of spatial representations during reaching movements also indicates that specific behavioral settings are needed for the initiation of spatial transformations in parietal regions: Reaching to visual targets was framed in a mix of retinotopic and head-based coordinates, while reaching toward targets defined proprioceptively was framed solely in head-based coordinates (Bernier & Grafton, 2010). Thus, it seems that, during reaching, the spatial frame of the action depends on task requirements. 
Spatial representations in the occipitotemporal cortex
The current task elicited robust bilateral activation in ventral areas of the visual system. Widespread activation was observed in the dorsal and anterior parts of the LOC (roughly within the middle occipital gyrus and the posterior part of the inferior temporal sulcus; MOG/pITS). This includes the extrastriate body area (EBA, involved in the representation of body parts; Downing et al., 2001) and the middle temporal complex (hMT+, implicated in motion perception; Tootell et al., 1995). Weaker fMRI activation was also evident more ventrally, in the occipitotemporal sulcus, and more posteriorly, in object-selective area LO (Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Malach et al., 1995). The magnitude of activation within the MOG/pITS as a whole showed a clear preference for contralateral presentation of the stimuli (in retinotopic coordinates): The mean activation in the right MOG/pITS was significantly higher when stimuli appeared left of fixation than when they appeared right of fixation, while the left MOG/pITS showed the opposite pattern. In addition, our multivoxel pattern analysis indicated that the correlation between activation patterns elicited by different stimulation conditions was positive and highly significant if the retinotopic location of the projected images matched. In contrast, no effect of the spatiotopic location was found. These activation patterns thus accord with earlier reports of a retinotopic (contralateral-biased) representation in the LOC (Gardner et al., 2008; Hemond et al., 2007; Larsson & Heeger, 2006). 
Motion-related activation
The active MOG/pITS regions included substantial parts of hMT+, which was defined individually using a separate motion localizer scan. Substantial voxel clusters within MOG/pITS were sensitive to random dot motion. However, the overlap between hMT+ (defined using random dot motion) and MOG/pITS was not complete, suggesting that other regions within MOG/pITS are only active during viewing of a specific type of movement (i.e., reaching and grasping by the hands). Within hMT+, the majority of voxels responded to motion in the contralateral hemifield, but some also responded to motion in the ipsilateral hemifield. These voxels presumably compose the human analogue of macaque area MST (Huk et al., 2002), whose neurons typically have large receptive fields extending into the ipsilateral side. The dorsal part of MST (MSTd) is connected to area VIP in the macaque (Lewis & Van Essen, 2000), whose human homologue lies very close to aIPS. Interestingly, cortical microstimulation of MST neurons alters the movement of the hand during reaching for a moving target (Ilg & Schumann, 2007). Our pattern of activation is therefore in general accordance with previous accounts of visually guided hand action representations. 
A gradient of the retinotopic effect strength
We show that the preference for the contralateral retinotopic representation is stronger in the posterior MOG/pITS than in its anterior parts, much like the posterior-to-anterior gradient we found in the parietal regions. Similar results were previously found using static limb stimuli (Weiner & Grill-Spector, 2011). The most likely cause is larger receptive fields in the anterior parts of MOG/pITS that extend into the ipsilateral hemifield. A small number of voxels were indeed responsive to ipsilateral motion stimuli, supporting this hypothesis. Alternatively, but probably less likely, this gradient could indicate the emergence of a multiplexed coordinate frame representation in the more anterior regions (though no spatiotopic effects were detected). Prior reports regarding the coordinate frame of representation in hMT+ stand in contradiction to one another: An earlier study found evidence for spatiotopic coding in hMT+ (area MT; d'Avossa et al., 2007), but this was later disputed (Gardner et al., 2008). It is possible that the two diametrically opposed results stem from the two studies' different requirements for spatial attention allocation (Burr & Morrone, 2011; Crespi et al., 2009). Further research will be necessary to determine whether non-retinotopic representations exist in hMT+. 
Recruitment of non-retinotopic representation for remapping
Finally, it should be noted that the results in the MOG/pITS are inconsistent with a previous study from our group, in which the degree of fMRI adaptation in nearby regions depended on the spatiotopic location of the stimuli (McKyton & Zohary, 2007). However, the focus of activation in that study was more posterior (lateral occipital sulcus) and more ventral (collateral sulcus) than the current activation. In addition, a dynamic condition, in which eye movements are constantly made between two alternating fixation targets (as used in McKyton & Zohary, 2007), may be more likely to induce non-retinal representations than the static gaze conditions used here. Indeed, single-unit recordings in the parietal cortex of monkeys show that neurons respond to a stimulus in the classical retinotopic receptive field, as well as to its future receptive field, during a brief time period just before an impending saccade. This phenomenon, called spatial remapping, was suggested to underlie the creation of a non-retinal representation of space and to support the known visual stability of the scene across eye movements (Colby & Goldberg, 1999; Melcher & Colby, 2008). Clearly, further research is required to clarify the nature of spatial representations in the ventral stream. 
Conclusion
Using multivoxel pattern analysis, we demonstrate that action observation elicits activity in the parietal and occipital cortices that is sensitive to the position of the manipulated object in eye-centered coordinates rather than to its position with respect to the head, body, or world. This suggests that information about the observed action is held in a retinotopic reference frame. While these results could be due to the inherent limitations of the fMRI signal (low spatial resolution), we propose that they are more likely related to the behavioral demands of the task: Motor actions may facilitate representation in body-centered or world-centered coordinates, but merely observing actions may not. Discovering which specific conditions evoke a non-retinotopic representation should be the goal of future research. 
Supplementary Materials
Supplementary PDF 
Acknowledgments
We thank Tanya Orlov for her help with data acquisition and her insightful comments. This research was supported by an Israel Science Foundation Grant 39/09 to E.Z. 
Commercial relationships: none. 
Corresponding author: Yuval Porat. 
Email: yporat@gmail.com. 
Address: Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University, Jerusalem 91904, Israel. 
References
Andersen, R. A., Bracewell, R. M., Barash, S., Gnadt, J. W., & Fogassi, L. (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10, 1176–1196.
Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456–458.
Astafiev, S. V., Shulman, G. L., Stanley, C. M., Snyder, A. Z., Van Essen, D. C., & Corbetta, M. (2003). Functional organization of human intraparietal and frontal cortex for attending, looking, and pointing. Journal of Neuroscience, 23, 4689–4699.
Bernier, P. M., & Grafton, S. T. (2010). Human posterior parietal cortex flexibly determines reference frames for reaching based on sensory context. Neuron, 68, 776–788.
Binkofski, F., Buccino, G., Posse, S., Seitz, R. J., Rizzolatti, G., & Freund, H. (1999). A fronto-parietal circuit for object manipulation in man: Evidence from an fMRI-study. European Journal of Neuroscience, 11, 3276–3286.
Binkofski, F., Buccino, G., Stephan, K. M., Rizzolatti, G., Seitz, R. J., & Freund, H. J. (1999). A parieto-premotor network for object manipulation: Evidence from neuroimaging. Experimental Brain Research, 128, 210–213.
Brotchie, P. R., Lee, M. B., Chen, D. Y., Lourensz, M., Jackson, G., & Bradley, W. G., Jr. (2003). Head position modulates activity in the human parietal eye fields. Neuroimage, 18, 178–184.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13, 400–404.
Buneo, C. A., Jarvis, M. R., Batista, A. P., & Andersen, R. A. (2002). Direct visuomotor transformations for reaching. Nature, 416, 632–636.
Burr, D. C., & Morrone, M. C. (2011). Spatiotopic coding and remapping in humans. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366, 504–515.
Cavina-Pratesi, C., Monaco, S., Fattori, P., Galletti, C., McAdam, T. D., Quinlan, D. J., et al. (2010). Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. Journal of Neuroscience, 30, 10306–10323.
Chan, A. W., Kravitz, D. J., Truong, S., Arizpe, J., & Baker, C. I. (2010). Cortical representations of bodies and faces are strongest in commonly experienced configurations. Nature Neuroscience, 13, 417–418.
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Reviews in Neuroscience, 22, 319–349.
Connolly, J. D., Andersen, R. A., & Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153, 140–145.
Connolly, J. D., Goodale, M. A., Desouza, J. F., Menon, R. S., & Vilis, T. (2000). A comparison of frontoparietal fMRI activation during anti-saccades and anti-pointing. Journal of Neurophysiology, 84, 1645–1655.
Corbetta, M., Akbudak, E., Conturo, T. E., Snyder, A. Z., Ollinger, J. M., Drury, H. A., et al. (1998). A common network of functional areas for attention and eye movements. Neuron, 21, 761–773.
Creem-Regehr, S. H., & Lee, J. N. (2005). Neural representations of graspable objects: Are tools special? Brain Research. Cognitive Brain Research, 22, 457–469.
Crespi, S. A., Biagi, L., Burr, D. C., d'Avossa, G., Tosetti, M., & Morrone, M. C. (2009). Spatial attention modulates the spatiotopicity of human MT complex. Perception, 38, ECVP Abstract Supplement, 6.
Culham, J. C., Cavina-Pratesi, C., & Singhal, A. (2006). The role of parietal cortex in visuomotor control: What have we learned from neuroimaging? Neuropsychologia, 44, 2668–2684.
Culham, J. C., Danckert, S. L., DeSouza, J. F., Gati, J. S., Menon, R. S., & Goodale, M. A. (2003). Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Experimental Brain Research, 153, 180–189.
Culham, J. C., & Valyear, K. F. (2006). Human parietal cortex in action. Current Opinion in Neurobiology, 16, 205–212.
d'Avossa, G., Tosetti, M., Crespi, S., Biagi, L., Burr, D. C., & Morrone, M. C. (2007). Spatiotopic selectivity of BOLD responses to visual motion in human area MT. Nature Neuroscience, 10, 249–255.
DeSouza, J. F., Dukelow, S. P., Gati, J. S., Menon, R. S., Andersen, R. A., & Vilis, T. (2000). Eye position signal modulates a human parietal pointing region during memory-guided movements. Journal of Neuroscience, 20, 5835–5840.
Dinstein, I., Gardner, J. L., Jazayeri, M., & Heeger, D. J. (2008). Executed and observed movements have different distributed representations in human aIPS. Journal of Neuroscience, 28, 11231–11239.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470–2473.
Duhamel, J. R., Bremmer, F., BenHamed, S., & Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389, 845–848.
Filimon, F., Nelson, J. D., Huang, R. S., & Sereno, M. I. (2009). Multiple parietal reach regions in humans: Cortical representations for visual and proprioceptive feedback during on-line reaching. Journal of Neuroscience, 29, 2961–2971.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G. (2005). Parietal lobe: From action organization to intention understanding. Science, 308, 662–667.
Frey, S. H., Vinton, D., Norlund, R., & Grafton, S. T. (2005). Cortical topography of human anterior intraparietal cortex active during visually guided grasping. Brain Research. Cognitive Brain Research, 23, 397–405.
Friston, K. J., Fletcher, P., Josephs, O., Holmes, A., Rugg, M. D., & Turner, R. (1998). Event-related fMRI: Characterizing differential responses. Neuroimage, 7, 30–40.
Gallivan, J. P., Cavina-Pratesi, C., & Culham, J. C. (2009). Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. Journal of Neuroscience, 29, 4381–4391.
Gallivan, J. P., McLean, D. A., Valyear, K. F., Pettypiece, C. E., & Culham, J. C. (2011). Decoding action intentions from preparatory brain activity in human parieto-frontal networks. Journal of Neuroscience, 31, 9599–9610.
Gardner, J. L., Merriam, E. P., Movshon, J. A., & Heeger, D. J. (2008). Maps of visual space in human occipital cortex are retinotopic, not spatiotopic. Journal of Neuroscience, 28, 3988–3999.
Golomb, J. D., Chun, M. M., & Mazer, J. A. (2008). The native coordinate system of spatial attention is retinotopic. Journal of Neuroscience, 28, 10654–10662.
Grefkes, C., Weiss, P. H., Zilles, K., & Fink, G. R. (2002). Crossmodal processing of object features in human anterior intraparietal cortex: An fMRI study implies equivalencies between humans and monkeys. Neuron, 35, 173–184.
Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202.
Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430.
Heider, B., Karnik, A., Ramalingam, N., & Siegel, R. M. (2010). Neural representation during visually guided reaching in macaque posterior parietal cortex. Journal of Neurophysiology, 104, 3494–3509.
Hemond, C. C., Kanwisher, N. G., & Op de Beeck, H. P. (2007). A preference for contralateral stimuli in human object- and face-selective cortex. PLoS One, 2, e574.
Henriques, D. Y., Klier, E. M., Smith, M. A., Lowy, D., & Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18, 1583–1594.
Hinkley, L. B., Krubitzer, L. A., Padberg, J., & Disbrow, E. A. (2009). Visual-manual exploration and posterior parietal cortex in humans. Journal of Neurophysiology, 102, 3433–3446.
Huk, A. C., Dougherty, R. F., & Heeger, D. J. (2002). Retinotopy and functional subdivision of human areas MT and MST. Journal of Neuroscience, 22, 7195–7205.
Ilg, U. J., & Schumann, S. (2007). Primate area MST-l is involved in the generation of goal-directed eye and hand movements. Journal of Neurophysiology, 97, 761–771.
Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8, 679–685.
Larsson, J., & Heeger, D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. Journal of Neuroscience, 26, 13128–13142.
Lewis, J. W., & Van Essen, D. C. (2000). Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey. Journal of Comparative Neurology, 428, 112–137.
Malach R. Reppas J. B. Benson R. R. Kwong K. K. Jiang H. Kennedy W. A. et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America, 92, 8135–8139. [CrossRef] [PubMed]
Mars R. B. Jbabdi S. Sallet J. O'Reilly J. X. Croxson P. L. Olivier E. et al. (2011). Diffusion-weighted imaging tractography-based parcellation of the human parietal cortex and comparison with human and macaque resting-state functional connectivity. Journal of Neuroscience, 31, 4087–4100. [CrossRef] [PubMed]
McKyton A. Zohary E. (2007). Beyond retinotopic mapping: The spatial representation of objects in the human lateral occipital complex. Cerebral Cortex, 17, 1164–1172. [CrossRef] [PubMed]
Medendorp W. P. Goltz H. C. Vilis T. Crawford J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23, 6209–6214. [PubMed]
Melcher D. Colby C. L. (2008). Trans-saccadic perception. Trends in Cognitive Sciences, 12, 466–473. [CrossRef] [PubMed]
Mullette-Gillman O. A. Cohen Y. E. Groh J. M. (2005). Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. Journal of Neurophysiology, 94, 2331–2352. [CrossRef] [PubMed]
Mullette-Gillman O. A. Cohen Y. E. Groh J. M. (2009). Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cerebral Cortex, 19, 1761–1775. [CrossRef] [PubMed]
Op de Beeck H. P. (2010). Against hyperacuity in brain reading: Spatial smoothing does not hurt multivariate fMRI analyses? Neuroimage, 49, 1943–1948. [CrossRef] [PubMed]
Peelen M. V. Downing P. E. (2007). The neural basis of visual body perception. Nature Reviews Neuroscience, 8, 636–648. [CrossRef] [PubMed]
Pertzov Y. Avidan G. Zohary E. (2011). Multiple reference frames for saccadic planning in the human parietal cortex. Journal of Neuroscience, 31, 1059–1068. [CrossRef] [PubMed]
Pertzov Y. Zohary E. Avidan G. (2010). Rapid formation of spatiotopic representations as revealed by inhibition of return. Journal of Neuroscience, 30, 8882–8887. [CrossRef] [PubMed]
Rainer G. Augath M. Trinath T. Logothetis N. K. (2001). Nonmonotonic noise tuning of BOLD fMRI signal to natural images in the visual cortex of the anesthetized monkey. Current Biology, 11, 846–854. [CrossRef] [PubMed]
Rice N. J. Valyear K. F. Goodale M. A. Milner A. D. Culham J. C. (2007). Orientation sensitivity to graspable objects: An fMRI adaptation study. Neuroimage, 36, T87–T93. [CrossRef] [PubMed]
Rizzolatti G. Fadiga L. Gallese V. Fogassi L. (1996). Premotor cortex and the recognition of motor actions. Brain Research. Cognitive Brain Research, 3, 131–141. [CrossRef] [PubMed]
Sakreida K. Schubotz R. I. Wolfensteller U. von Cramon D. Y. (2005). Motion class dependency in observers' motor areas revealed by functional magnetic resonance imaging. Journal of Neuroscience, 25, 1335–1342. [CrossRef] [PubMed]
Saygin A. P. Sereno M. I. (2008). Retinotopy and attention in human occipital, temporal, parietal, and frontal cortex. Cerebral Cortex, 18, 2158–2168. [CrossRef] [PubMed]
Schluppeck D. Glimcher P. Heeger D. J. (2005). Topographic organization for delayed saccades in human posterior parietal cortex. Journal of Neurophysiology, 94, 1372–1384. [CrossRef] [PubMed]
Sereno M. I. Huang R. S. (2006). A human parietal face area contains aligned head-centered visual and tactile maps. Nature Neuroscience, 9, 1337–1343. [CrossRef] [PubMed]
Sereno M. I. Pitzalis S. Martinez A. (2001). Mapping of contralateral space in retinotopic coordinates by a parietal cortical area in humans. Science, 294, 1350–1354. [CrossRef] [PubMed]
Shmuelof L. Zohary E. (2005). Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron, 47, 457–470. [CrossRef] [PubMed]
Shmuelof L. Zohary E. (2006). A mirror representation of others' actions in the human anterior parietal cortex. Journal of Neuroscience, 26, 9736–9742. [CrossRef] [PubMed]
Shmuelof L. Zohary E. (2008). Mirror-image representation of action in the anterior parietal cortex. Nature Neuroscience, 11, 1267–1269. [CrossRef] [PubMed]
Silver M. A. Kastner S. (2009). Topographic maps in human frontal and parietal cortex. Trends in Cognitive Sciences, 13, 488–495. [CrossRef] [PubMed]
Silver M. A. Ress D. Heeger D. J. (2005). Topographic maps of visual spatial attention in human parietal cortex. Journal of Neurophysiology, 94, 1358–1371. [CrossRef] [PubMed]
Stark A. Zohary E. (2008). Parietal mapping of visuomotor transformations during human tool grasping. Cerebral Cortex, 18, 2358–2368. [CrossRef] [PubMed]
Swisher J. D. Halko M. A. Merabet L. B. McMains S. A. Somers D. C. (2007). Visual topography of human intraparietal sulcus. Journal of Neuroscience, 27, 5326–5337. [CrossRef] [PubMed]
Thompson K. G. Bichot N. P. (2005). A visual salience map in the primate frontal eye field. Progress in Brain Research, 147, 251–262. [PubMed]
Tootell R. B. Hadjikhani N. Hall E. K. Marrett S. Vanduffel W. Vaughan J. T. et al. (1998). The retinotopy of visual spatial attention. Neuron, 21, 1409–1422. [CrossRef] [PubMed]
Tootell R. B. Reppas J. B. Kwong K. K. Malach R. Born R. T. Brady T. J. et al. (1995). Functional analysis of human MT and related visual cortical areas using magnetic resonance imaging. Journal of Neuroscience, 15, 3215–3230. [PubMed]
Weiner K. S. Grill-Spector K. (2011). Not one extrastriate body area: Using anatomical landmarks, hMT+, and visual field maps to parcellate limb-selective activations in human lateral occipitotemporal cortex. Neuroimage, 56, 2183–2199. [CrossRef] [PubMed]
Williams M. A. Baker C. I. Op de Beeck H. P. Shim W. M. Dang S. Triantafyllou C. et al. (2008). Feedback of visual object information to foveal retinotopic cortex. Nature Neuroscience, 11, 1439–1445. [CrossRef] [PubMed]
Williams M. A. Dang S. Kanwisher N. G. (2007). Only some spatial patterns of fMRI response are read out in task performance. Nature Neuroscience, 10, 685–686. [CrossRef] [PubMed]
Figure 1
 
Main task. (A) Temporal aspects: A 10-s resting block, in which a fixation point appeared in one of three possible locations, preceded the trial block. The trial block itself lasted 10 s and consisted of a series of video clips featuring hands grasping objects, which appeared to the right (in this example) or to the left of the fixation point. The trial block was followed by another resting block, at the start of which the fixation point moved to one of the two other possible locations, and so on. In the example shown here, the stimuli in the two conditions have identical locations on the screen but opposite locations on the retina. (B) Spatial aspects: Stimuli appeared at one of four possible locations, 3.15° to the right or left of one of three possible fixation points, yielding a total of six possible conditions (combinations of stimulus and fixation locations). Some subgroups of conditions share the same location on the retina (conditions 1, 3, and 5 as well as conditions 2, 4, and 6), others are matched in their screen position (conditions 2 and 3 and conditions 4 and 5), while others differ in both their screen position and their retinal location (conditions 1 and 2, 1 and 4, 1 and 6, 2 and 5, 3 and 4, 3 and 6, and 5 and 6).
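The condition geometry described in the caption can be made concrete with a small sketch (an illustration only, not the authors' code: the even fixation spacing and the integer position units, hundredths of a degree for exact arithmetic, are assumptions; only the relative geometry from the caption matters):

```python
# Three fixation points and a stimulus 3.15 deg to either side of each,
# giving six conditions over four screen positions, as in Figure 1B.
FIX = [-630, 0, 630]   # fixation x-positions in 0.01 deg (spacing assumed)
OFFSET = 315           # 3.15 deg stimulus offset from fixation

# Condition k -> (fixation position, stimulus screen position).
# Odd conditions: stimulus left of fixation; even: right of fixation.
conditions = {}
for i, fx in enumerate(FIX):
    conditions[2 * i + 1] = (fx, fx - OFFSET)
    conditions[2 * i + 2] = (fx, fx + OFFSET)

def pair_type(a, b):
    """Classify a pair of conditions by the coordinates they share."""
    (fa, sa), (fb, sb) = conditions[a], conditions[b]
    if sa - fa == sb - fb:
        return "same retina"   # identical stimulus-to-fixation offset
    if sa == sb:
        return "same screen"   # identical (spatiotopic) screen position
    return "different"         # neither retinal nor screen match
```

Running `pair_type` over the pairs reproduces the caption's groupings: conditions 1, 3, and 5 (and 2, 4, and 6) match retinally, conditions 2 and 3 (and 4 and 5) match on the screen, and the remaining pairs match on neither.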
Figure 2
 
MVPA analysis. (A) Selected ROIs. The localizer task elicited several distinct clusters of activation that served as ROIs for the MVPA analysis. MOG/pITS (1), pIPS (2), mIPS (3), and aIPS (4) of a single subject's right hemisphere are displayed (Subject S4). Dorsal premotor (dPM, 5) activity is also apparent. Abbreviations: IPS = intraparietal sulcus, CS = central sulcus, STS = superior temporal sulcus. (B) The matrix represents all possible pairings of conditions, color coded by pairing type (see legend). The icons in the leftmost column and first row depict the specific condition: X (in black) represents the position of the stimulus on the screen; the red point represents the location of the fixation point. To minimize the possible confounding effect of gaze position, the analysis was restricted to correlations between conditions with neighboring fixation points (marked by checkmarks), apart from the "same" conditions (RS), which necessarily involved the same gaze position. The results from the same pairing type (i.e., the same color) were then pooled and averaged across subjects to obtain the mean correlation for each of the four pairing types.
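The pooling step in the caption, correlating condition-wise voxel patterns and averaging the correlations within each pairing type, can be sketched as follows (a minimal illustration under assumed names and data layout, not the authors' analysis code):

```python
import numpy as np

def pattern_correlations(patterns, pair_types):
    """patterns: dict condition -> 1-D voxel activation vector (one ROI).
    pair_types: dict (cond_a, cond_b) -> pairing label, e.g. "same retina".
    Returns the mean Pearson r across pairs, per pairing type."""
    sums, counts = {}, {}
    for (a, b), label in pair_types.items():
        r = np.corrcoef(patterns[a], patterns[b])[0, 1]  # pattern similarity
        sums[label] = sums.get(label, 0.0) + r
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

# Toy demonstration: identical patterns correlate at r = 1,
# sign-flipped patterns at r = -1.
v = np.array([1.0, 2.0, 4.0, 8.0])
mean_r = pattern_correlations({1: v, 3: v.copy(), 2: -v},
                              {(1, 3): "same retina", (1, 2): "different"})
```

In the study itself this computation would run per subject and per ROI, with the per-type means then averaged across subjects as the caption describes.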
Figure 3
 
The ventral region of interest and its spatial relationship to other specific regions within MOG/pITS. The selected voxels in the MOG/pITS (outlined in black; 3000-mm3 ROIs, grasping videos > scrambled videos) are shown together with regions representing motion (red, contralateral activation; blue, ipsilateral activation; moving dots > static dots), hands (hands > scrambled hands ∩ hands > objects; yellow), and objects (objects > scrambled objects ∩ objects > hands; green) in two representative subjects (S1 and S2) from a posterior lateral view. Orange voxels were active both during random-dot motion and during the presentation of arms. Purple voxels were active during the presentation of moving stimuli in both the contralateral and the ipsilateral hemifields. STS denotes the superior temporal sulcus.
Figure 4
 
Mean activation level for the various conditions in the IPS. As in Figure 2, the icons below the bars depict the specific stimulus location (X) and the fixation point (in red) in each of the six conditions. (A, B) Mean beta values of each of the six task conditions in the left and right aIPS, respectively. (C, D) Mean beta values of each of the six task conditions in the left and right mIPS, respectively. (E, F) Mean beta values of each of the six task conditions in the left and right pIPS, respectively. In all cases, error bars denote SEM. The icons in the central column show a horizontal slice of the brain of one representative subject with an overlaid activity map (S4 for aIPS, S5 for mIPS, and S6 for pIPS). Arrows point to the specific region of interest.
Figure 5
 
MVPA results for the various comparison groups in the IPS. (A, B) Similarity indices of the four different comparison types in the left and right aIPS, respectively. (C, D) Similarity indices of the four different comparison types in the left and right mIPS, respectively. (E, F) Similarity indices of the four different comparison types in the left and right pIPS, respectively. In all cases, error bars denote SEM. The icons in the central column show a horizontal slice of the brain of one representative subject with an overlaid activity map (S4 for aIPS, S5 for mIPS, and S6 for pIPS). Arrows denote activation in the specific region of interest.
Figure 6
 
Illustrative examples of the spatial patterns of voxel activation in the parietal regions of interest during various stimulus conditions, depicted from a dorsal view of the inflated brain in two representative subjects (S1, top row; S3, bottom row). The color scale denotes the normalized activation level after subtraction of the average voxel activation across all stimulus conditions. Left column: The pattern of activation in condition 3 (denoted by X in the screen icon, with central fixation), taken from data set I, showing a contralateral preference (i.e., orange colors in the right hemisphere and dark green in the left hemisphere). Middle column: The pattern of activation elicited in the retinotopically matching condition 5, taken from data set II, showing a similar contralateral preference (note the similarity of the color patterns in each subject to those in the leftmost column). Right column: The patterns of activation in the spatiotopically matching (but retinotopically opposite) condition 2 (indicated by the same position of X in the screen icon), taken from data set II, showing the opposite preference. Generally, the contralateral retinotopic preference was more pronounced in the more posterior regions. PCS denotes the postcentral sulcus.
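The normalization mentioned in the caption (subtracting each voxel's average activation across all stimulus conditions, so that the maps show only condition-specific deviations) amounts to a per-voxel demeaning. A minimal sketch, assuming a (conditions × voxels) array layout and an illustrative function name:

```python
import numpy as np

def demean_across_conditions(betas):
    """betas: array of shape (n_conditions, n_voxels) of activation levels.
    Removes each voxel's mean across conditions, leaving the
    condition-specific deviations that the maps display."""
    return betas - betas.mean(axis=0, keepdims=True)

# Toy example: 3 conditions x 2 voxels; per-voxel means are [3, 4],
# so the normalized values in each column sum to zero.
normed = demean_across_conditions(np.array([[1.0, 2.0],
                                            [3.0, 4.0],
                                            [5.0, 6.0]]))
```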
Figure 7
 
MOG/pITS results. (A, B) Mean activation level (betas) of each of the six task conditions in the left and right LOC, respectively. The pattern of activity suggests a contralateral retinal representation of the stimuli. (C, D) Similarity indices of the four different comparison types in the left and right LOC, respectively. In all cases, error bars denote SEM. The icon in the central column shows a horizontal slice of the brain of one representative subject with an overlaid activity map (Subject S1).
Figure 8
 
Spatial patterns of voxel activation in the MOG/pITS region in one representative subject (S1) shown from a lateral posterior viewpoint of a partially inflated left hemisphere. Top row: Patterns of activation in conditions 1, 3, and 5, which share the same retinotopic position (left side). Bottom row: Patterns of activation in conditions 2, 4, and 6 (retinotopic right). The two middle columns therefore represent conditions that match spatiotopically (left, 2 and 3; right, 4 and 5) but lie on symmetrically opposite sides in their retinotopic location. Note the clear reversal of colors between the top and bottom spatial patterns. This clearly demonstrates that, as in the lateral occipital cortex, the location of the image in retinotopic coordinates governs the response in the ventral regions, although once again the retinotopic preference is stronger in the more posterior parts. STS denotes the superior temporal sulcus.
Table 1
 
Regions of interest and Talairach coordinates (mean ± SD across subjects) of the center of mass of each selected ROI.
| Region of interest | Hemisphere | X | Y | Z | Volume (mm³) | Subjects |
|---|---|---|---|---|---|---|
| MOG/pITS | L | −47 ± 3 | −69 ± 6 | 3 ± 6 | 747 ± 39 | 15 |
| MOG/pITS | R | 47 ± 4 | −64 ± 5 | 1 ± 5 | 763 ± 37 | 15 |
| pIPS | L | −25 ± 4 | −75 ± 5 | 28 ± 4 | 755 ± 35 | 13 |
| pIPS | R | 28 ± 5 | −74 ± 7 | 28 ± 6 | 750 ± 39 | 13 |
| mIPS | L | −26 ± 6 | −59 ± 7 | 51 ± 5 | 746 ± 34 | 15 |
| mIPS | R | 25 ± 4 | −59 ± 4 | 52 ± 4 | 738 ± 33 | 15 |
| aIPS | L | −38 ± 5 | −41 ± 4 | 48 ± 5 | 775 ± 34 | 13 |
| aIPS | R | 32 ± 6 | −45 ± 6 | 53 ± 4 | 754 ± 35 | 12 |
| dPM | L | −28 ± 4 | −10 ± 3 | 55 ± 3 | 765 ± 35 | 12 |
| dPM | R | 31 ± 5 | −10 ± 5 | 53 ± 5 | 768 ± 33 | 12 |
| SMA | — | −2 ± 3 | 0 ± 5 | 51 ± 4 | 775 ± 31 | 10 |