Different areas within the cortical smooth pursuit eye movement (SPEM) network of monkeys and humans have been functionally described as participating in the integration of different modalities. The posterior parietal cortex (PPC; within and near the intraparietal sulcus, IPS), including the ventral intraparietal cortex (VIP; for a review, see Bremmer, 2005; Bremmer et al., 2001) and the lateral intraparietal cortex (LIP; Andersen, 1997), has been described as multimodal; that is, these areas integrate visual, somatosensory, auditory, and vestibular signals. Furthermore, the IPS has been shown to code visual motion and eye movements in retinal or head/space coordinates (Andersen, Essick, & Siegel, 1985; Bremmer, 2005) and to subserve visuomotor (visual to head-centered coordinates) transformations (Andersen, Snyder, Batista, Buneo, & Cohen, 1998; Grefkes & Fink, 2005; Grefkes, Ritzl, Zilles, & Fink, 2004). In contrast to the human occipital cortex, where maps of visual space are retinotopic rather than spatiotopic (Gardner, Merriam, Movshon, & Heeger, 2008), the motion-sensitive area MT+ has been shown to process visual (retinal) input together with head-centric eye movement signals (Bremmer, Distler, & Hoffmann, 1997; Dukelow et al., 2001; Goossens, Dukelow, Menon, Vilis, & van den Berg, 2006; Ilg, Schumann, & Thier, 2004).