Article | November 2014
Auditory motion processing after early blindness
Author Affiliations
  • Fang Jiang
    Department of Psychology, University of Washington, Seattle, WA, USA
    fjiang@uw.edu
  • G. Christopher Stecker
    Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
    Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville TN, USA
    g.christopher.stecker@vanderbilt.edu
  • Ione Fine
    Department of Psychology, University of Washington, Seattle, WA, USA
    ionefine@uw.edu
Journal of Vision November 2014, Vol. 14(13):4. doi: https://doi.org/10.1167/14.13.4
Abstract

Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion.

Introduction
Numerous studies show that occipital cortex neurons respond to auditory and tactile stimuli after early blindness, and that these responses are functionally important (for review see L. B. Lewis & Fine, 2011). Based on these data, it has been tempting to speculate that early blind subjects “see” auditory and tactile stimuli. However, to argue that blind subjects see auditory or tactile stimuli using the occipital cortex requires the further demonstration that responses within the occipital cortex mediate the perception of auditory stimuli, rather than simply modulating or augmenting responses within other auditory or tactile areas. We chose to examine this question using auditory motion, a stimulus of high ecological importance for blind individuals. Specifically, using fMRI pattern classification techniques, we tested whether the perceived direction of motion for both coherent and ambiguous auditory motion can be categorized based on neural responses within auditory areas and/or hMT+ in both normally sighted and early blind individuals. 
A variety of specialized subregions have been associated with auditory motion processing in sighted individuals, including the posterior superior temporal sulcus (e.g., Griffiths et al., 1998), the inferior parietal lobule (e.g., Griffiths et al., 1997; Krumbholz et al., 2005), and the planum temporale (e.g., Baumgart, Gaschler-Markefski, Woldorff, Heinze, & Scheich, 1999; Bremmer et al., 2001; J. W. Lewis, Beauchamp, & DeYoe, 2000; Warren, Zielinski, Green, Rauschecker, & Griffiths, 2002). The only previous study to examine the ability to classify the direction of motion of an auditory stimulus based on multivariate blood oxygenation-level dependent (BOLD) responses (and thereby specifically test for tuning for direction of motion) found that decoding was most reliable within the planum temporale and a region within right lateral occipital cortex (aLOC; Alink, Euler, Kriegeskorte, Singer, & Kohler, 2012). 
A wide collection of evidence, including animal (Newsome, Wurtz, & Dürsteler, 1985) and human lesion studies (Zeki et al., 1991), electrophysiology and microstimulation (Salzman, Britten, & Newsome, 1990; Salzman, Murasugi, Britten, & Newsome, 1992; Shadlen, Britten, Newsome, & Movshon, 1996), and BOLD imaging (Huk, Dougherty, & Heeger, 2002; Tootell et al., 1995), implicates hMT+ (including the middle temporal [MT] area, the medial superior temporal [MST] area, and possibly additional adjacent motion-selective areas; Beauchamp, Cox, & DeYoe, 1997) as playing an important role in visual motion perception. It has also been suggested in a number of fairly recent papers and reviews that hMT+ may in fact be supramodal, responding to tactile and auditory as well as visual motion in sighted subjects (for review see Kupers, Pietrini, Ricciardi, & Ptito, 2011; Renier, De Volder, & Rauschecker, 2014). However, as described more fully in the Discussion, many of the papers reporting supramodal responses used stereotaxic group averaging to define hMT+ and may therefore have included adjacent areas within their definition of hMT+. 
In early blind individuals a variety of studies have found responses to auditory motion stimuli within hMT+ (Bedny, Konkle, Pelphrey, Saxe, & Pascual-Leone, 2010; Poirier et al., 2006; Saenz, Lewis, Huth, Fine, & Koch, 2008), and multivariate pattern classification analysis has shown that activations within the hMT+ complex in early blind participants contain selective information about auditory motion (Strnad, Peelen, Bedny, & Caramazza, 2013; Wolbers, Zahorik, & Giudice, 2011). However, it has not yet been clearly established whether such responses directly mediate the perception of auditory motion stimuli, or simply modulate or augment responses within other sensory areas. 
Using an ambiguous auditory motion stimulus allowed us to examine the relationship between neural signals and behavioral choice, independently of the effects of stimulus driven responses (Britten, Newsome, Shadlen, Celebrini, & Movshon, 1996). Our motivation for using ambiguous auditory motion is that it is possible for selective responses to a given feature to be distributed relatively broadly across the visual system, while the conscious experience of that feature may be primarily based on activity within specialized cortical areas (Kilian-Hütten, Valente, Vroomen, & Formisano, 2011; Serences & Boynton, 2007). For example, with an analogous approach to that used here, it has been shown that for unambiguous stimuli the direction of visual motion can be classified based on BOLD responses across much of the visual cortex; however, the reported perceptual state of the observer for ambiguous visual motion stimuli can only be classified based on activity patterns in the human MT complex (Serences & Boynton, 2007). Examining neural responses for ambiguous stimuli therefore allows us to associate responses within the areas of interest with the perceptual state of the observer. 
Materials and methods
Auditory classification stimuli
Auditory stimuli were delivered via MRI-compatible stereo headphones (S14, Sensimetrics, Malden, MA), and sound amplitude was adjusted to each participant's comfort level. Auditory motion was simulated using stimuli that contained dynamic interaural time differences (ITD), interaural level differences (ILD), and Doppler shift. The stimuli consisted of eight spectrally and temporally overlapping bands of noise, each 1000 Hz wide, with center frequencies evenly spaced between 1500–3500 Hz. Subjects were presented with the sum of eight such bands (Figure 1A). Unambiguous stimuli (50% coherence) contained six bands moving to the right and two to the left (or vice versa); ambiguous stimuli (0% coherence) had four bands moving to the left and four to the right, resulting in no net applied motion signal. For further details of the stimulus, see the Supplementary Materials, including Supplementary Figure S1.
Figure 1
 
Experimental design. (A) Schematic of the auditory motion stimulus: The 50% coherence condition is shown. Frequency noise bands were generated by filtering white noise in the Fourier domain. (B) Early blind subjects were significantly better at determining the direction of the 50% coherence auditory motion stimulus. Error bars show SEM. *** p < 0.001, Wilcoxon rank sum test, two-tailed.
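To make the stimulus construction concrete, the sketch below (in Python; a minimal illustration under our own assumptions, not the authors' code) generates one band-limited noise carrier by zeroing Fourier components outside a 1000-Hz band, as described in the Figure 1 caption, and simulates lateral motion with a linearly ramping ITD. The sample rate, ITD range, and function names are placeholders, and the ILD and Doppler cues present in the actual stimulus are omitted.

```python
import numpy as np

fs = 44100                      # sample rate in Hz; assumed, not stated in the paper
dur = 0.9                       # one 900-ms motion burst
n = int(fs * dur)

def noise_band(center_hz, width_hz=1000.0):
    """Band-limit white noise by zeroing Fourier components outside the band."""
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[np.abs(freqs - center_hz) > width_hz / 2.0] = 0.0
    band = np.fft.irfft(spec, n)
    return band / np.max(np.abs(band))

def apply_itd(band, itd_start_s, itd_end_s):
    """Simulate lateral motion by delaying the left channel with a linearly
    ramping interaural time difference (fractional delay via interpolation)."""
    itd = np.linspace(itd_start_s, itd_end_s, n)
    t = np.arange(n) / fs
    left = np.interp(t - itd, t, band, left=0.0, right=0.0)
    return np.stack([left, band], axis=1)          # (n, 2) stereo signal

# One rightward-moving band; a full stimulus would sum eight such bands with
# center frequencies evenly spaced between 1500 and 3500 Hz.
stereo = apply_itd(noise_band(1500.0), 0.0, 600e-6)  # ITD ramps 0 -> 600 us
```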
Each trial lasted 18 s and contained 6 s of silence and 12 s of auditory stimulus presentation. The stimulus consisted of twelve 900-ms auditory motion bursts, each separated by a silent interval of 100 ms. In addition, two brief probe beeps occurred roughly 4 s and 10 s after sound onset. Participants listened to the auditory stimuli with their eyes closed and were asked to report the apparent direction of motion after each probe beep by pressing the corresponding right or left button with their index or middle finger. We alternated the hand of response between scans so that no specific association developed between the right/left buttons and the index/middle fingers. For unambiguous motion, a trial was counted as correct and considered for subsequent analysis if the observer correctly identified the global direction of auditory motion and did not switch his or her answer during the trial. For ambiguous motion, a trial was considered for subsequent analysis if the observer did not switch his or her answer during that trial. 
Auditory localizer stimuli
Auditory localizer stimuli included coherent motion (100% coherence, all bands moving in the same direction), ambiguous motion (0% coherence, four bands in each direction), static (sound bursts were presented in the center of the head), and silence. These four experimental conditions were repeated in a block design with a fixed order (coherent motion, ambiguous motion, static, and silence). Participants were asked to passively listen to the auditory stimuli with their eyes closed. Every participant (both early blind and sighted) performed six scans. 
Note that our sighted subjects were not blindfolded during the auditory scans; they were asked to keep their eyes closed throughout the auditory scans in the darkened scanner room. A previous study (L. B. Lewis, Saenz, & Fine, 2010) showed no difference in response in sighted participants between blindfolded and eyes-closed conditions across a variety of auditory tasks, including auditory frequency, auditory motion, and auditory letter discrimination. It is therefore unlikely that the difference between blindfolding and eyes closed was critical to the current study. 
Visual hMT+ localizer stimuli
To localize hMT+ in sighted participants, we used a traditional hMT+ localizer stimulus consisting of a circular aperture (radius 8°) of moving dots with a central fixation cross surrounded by a gap in the dot field (radius 1.5°, to minimize motion-induced eye movements). Dots were white on a black background and each subtended 0.3° (dot density one per degree). All the dots moved coherently in one of eight directions (spaced evenly between 0° and 360°) at a speed of 8°/s. To prevent the tracking of individual dots, dots had a limited lifetime (200 ms). 
Each block lasted 10 s, during which one of the three visual stimulation conditions (motion, static, and fixation) was presented. In the motion block, dots moved coherently in one of the eight directions and the direction of motion changed once per second (the same direction was prevented from appearing twice in a row). In the static block, dots were presented without motion, and the positions of the dots were reset once per second. In the fixation condition, participants were presented with only a fixation cross but no dots. The three conditions were presented in a fixed order (motion, static, and fixation). Participants were asked to fixate throughout the scan and performed no task. 
Participants
Participants included seven young sighted adults (three males; 27 ± 3.2 years old) and seven early blind (EB) adults (four males; 50 ± 12 years old) with low or no light perception. Demographic data and the causes of blindness are summarized in Table 1. Note that EB1 reported very poor vision even before 5 years of age and has no visual memories. We included EB1 because her data are very typical of the other early blind adults. 
Table 1
 
Blind participants' characteristics.
Participant  Sex  Age  Blindness onset                                        Cause of blindness            Light perception
EB1          F    63   Right eye ruptured 2 months; detached retina 5 years   Detached retina               No
EB2          M    59   Born blind                                             Retinopathy of prematurity    No
EB3          F    60   1.5 years                                              Optic nerve virus infection   Low
EB4          M    47   Born blind                                             Congenital glaucoma           Low
EB5          F    52   Born blind                                             Retinopathy of prematurity    No
EB6          M    38   Born blind                                             Congenital glaucoma           Low in right eye
EB7          M    31   Born blind                                             Leber's congenital amaurosis  No
All participants reported normal hearing and no history of psychiatric illness. Written informed consent was obtained from all participants prior to the experiment, following procedures approved by the University of Washington. 
MRI scanning
Scanning was performed with a 3T Philips system (Philips, Eindhoven, The Netherlands) at the University of Washington Diagnostic Imaging Sciences Center (DISC). Three-dimensional (3-D) anatomical images were acquired at 1 × 1 × 1-mm resolution using a T1-weighted magnetization-prepared rapid gradient echo (MPRAGE) sequence. BOLD functional scans were acquired with the following common parameters: 2.75 × 2.75 × 3-mm voxels; flip angle = 76°; field of view = 220 × 220 mm. 
For the auditory motion localizer experiment, we used a sparse block design (repetition time (TR) 10 s, echo time (TE) 16.5 ms): Each 10-s block consisted of an 8-s stimulus presentation interval containing eight sound bursts (during which there was no scanner noise), followed by a 2-s acquisition period in which 32 transverse slices were acquired. Each scan lasted approximately 5 min and included thirty-two 8-s auditory stimulus presentation intervals, each followed by an fMRI data acquisition. 
Based on pilot data, we used a continuous (rather than sparse) imaging paradigm (Hall et al., 1999) for the auditory motion classification experiment: A repetition time of 2 s was used to acquire 30 transverse slices (TE 20 ms). Every participant (both early blind and sighted) performed six scans. Each scan lasted approximately 7 min and included 24 trials. Sparse scanning techniques limit the effects of acoustic noise and provide higher signal to noise for individual volumes, but allow many fewer volumes to be acquired (Hall et al., 1999; Petkov, Kayser, Augath, & Logothetis, 2009). We therefore used a continuous sequence and averaged across four volumes: Our pilot data suggested that the increase in signal to noise for each individual sparse acquisition does not fully compensate for the loss in the number of acquisitions, so that sparse techniques require more scanning time than continuous acquisition to obtain comparable signal to noise after averaging across acquisitions. 
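As a back-of-the-envelope illustration of this trade-off (our own reasoning, not a calculation reported in the paper): if a single volume has signal-to-noise ratio $s$ and noise is independent across volumes, averaging $N$ volumes yields

```latex
\mathrm{SNR}_{\mathrm{avg}} = s\sqrt{N}
```

so the four averaged continuous volumes recover roughly a factor of two in signal to noise, and a sparse sequence, acquiring one volume per trial rather than four usable volumes, would need roughly double the per-volume SNR to match the averaged continuous data.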
A similar continuous block design was used for the hMT+ localizer experiment: A repetition time of 2 s was used to acquire 30 transverse slices (TE 30 ms). Every sighted participant performed two scans. Each scan lasted approximately 5 min, and included thirty 10-s blocks. 
Region of interest selection
Regions of interest (ROIs) were defined functionally within anatomical constraints. Because the number of voxels entered into the analysis can potentially influence peak classification accuracy, for every ROI described below we restricted the maximum cluster spread for each subject to 7 mm across all three dimensions (i.e., x, y, and z). We chose this maximum cluster spread because the resulting voxel number was close to the lowest common number of voxels available across all functionally defined ROIs (all ps < 0.05, uncorrected). This resulted in an average of 25 contiguous gray matter voxels per ROI (at functional voxel resolution). Wilcoxon rank sum tests did not find significant differences in the number of gray matter voxels across subject groups for any of the ROIs after Bonferroni-Holm correction. 
hMT+/V5 was defined functionally using a separate visual (sighted subjects) or auditory (blind subjects) motion localizer. For sighted subjects, we selected voxels near the posterior part of the inferior temporal sulcus that were significantly more activated by moving than by static dots. Following Bedny et al. (2010) and Saenz et al. (2008), for blind subjects we selected voxels in the same location that were significantly more activated by 100% coherent auditory motion than by silence in the auditory motion localizer. We used auditory motion versus silence because responses to auditory motion versus static did not reliably localize hMT+ in two individual early blind subjects (EB6 and EB7). To verify the location of hMT+, participants' anatomical images (AC-PC aligned; AC = anterior commissure, PC = posterior commissure) were affine-registered to MNI152 space (MNI = Montreal Neurological Institute) using a linear image registration tool (FLIRT, FSL; Jenkinson & Smith, 2001), and the resulting transforms were then applied to the functionally defined hMT+. These functional hMT+ ROIs were then cross-referenced to the Jülich probabilistic atlas (Eickhoff et al., 2007; Malikovic et al., 2007; Wilms et al., 2005). 
To ensure that our results were not influenced by differences in ROI selection across subject groups, we also analyzed our pattern classification data using group hMT+ ROIs created by finding the voxels significantly activated by 100% coherent auditory motion versus static at the group level. Using voxel selection criteria similar to Strnad et al. (2013), for each subject we found the spherical cluster (radius 2 mm, 33 contiguous voxels) that had the highest average t value for the contrast 100% coherent auditory motion > static (see Supplementary Materials). 
Primary auditory cortex (PAC) was identified using a combination of anatomical and functional criteria. Each subject's Heschl's gyrus was defined as the most anterior transverse gyrus on the supratemporal plane, following a variety of previous studies (Morosan et al., 2001; Penhune, Zatorre, MacDonald, & Evans, 1996; Rademacher, Caviness, Steinmetz, & Galaburda, 1993; see Saenz & Langers, 2014, for a review). We defined PAC as the contiguous cluster of voxels in Heschl's gyrus showing the most significant activation to 100% coherent motion versus silence using the auditory localizer stimulus. 
Planum temporale (PT) was defined as the voxels in the triangular region lying caudal to Heschl's gyrus on the supratemporal plane that showed the most significant activation for 100% coherent motion versus silence. There was no overlap between hMT+ and PT in any individual subject or using group definitions of hMT+ and PT. 
To check that our results did not depend on our particular choice of PT localizer, we reanalyzed our data using a PT defined in individual subjects based on the contrast both motion (coherent and ambiguous) versus static. A similar approach was used previously by Warren et al. (2002), who defined PT based on a similar contrast (all motion vs. stationary; see Supplementary Materials). 
We noticed that at the group level, the activation of right PT extended posteriorly into the temporoparietal junction. However, we did not find any significant activation for 100% coherent motion versus silence in the parietal regions, even at a liberal statistical threshold (p < 0.05, uncorrected), and we were not able to define any parietal ROIs at an individual level. Note that we designed our experiment with a focus on hMT+ and PT, so our 32 transverse slices did not extend over the full parietal region and as a consequence did not include some of the parietal regions that have previously been reported to show responses selective for auditory motion (Griffiths et al., 1998). 
Right LOC was defined based on Talairach coordinates (35, −67, −8) as reported by Alink et al. (2012). These coordinates were then converted into each individual's AC-PC space via an inverse Talairach transformation (BVQX Toolbox). This ROI has been previously confirmed by Alink et al. (2012) to be nonoverlapping with the location typically reported for object-selective LOC (Larsson & Heeger, 2006; Malach et al., 1995) as well as the location typically reported for hMT+ (Dumoulin et al., 2000). 
We confirmed that right LOC did not substantially overlap with hMT+ in three ways: (a) right LOC and hMT+ ROIs did not overlap in any sighted individual; (b) in sighted individuals, while the LOC ROI did show significant responses to visual moving dots (p < 0.05, Wilcoxon signed rank tests) and marginally significant responses to static dots (p = 0.0781, Wilcoxon signed rank tests), there was no differential response to the two types of stimuli, as would be the case if right LOC were motion selective (Wilcoxon rank sum tests; see Supplementary Figure S2 in Supplementary Materials); and (c) we calculated the percentage overlap between the right LOC ROI and the mean Talairach coordinates of the individually defined hMT+ ROI ±2 SD (see Table 2). There was no overlap for early blind subjects, and there was a 4% overlap for sighted subjects. There was also no overlap between right LOC and an alternative definition of hMT+ (see below) for either blind or sighted subjects. These three measures confirmed that our right LOC ROI was unlikely to be a subdivision of hMT+. 
Table 2
 
Talairach coordinates of individually defined ROIs.
                      Mean                   SD
                   x     y     z        x     y     z
Sighted control
 Right hMT+       44   −66     1      4.8   4.5   5.0
 Left hMT+       −46   −65     0      4.2   2.9   5.0
 Right PT         50   −29    12      8.5   4.4   4.3
 Left PT         −49   −31    11      5.5   6.0   3.3
 Right PAC        46   −19     6      5.5   4.9   3.0
 Left PAC        −42   −21     7      5.8   5.7   4.3
Early blind
 Right hMT+       44   −68     2      3.4   5.0   5.9
 Left hMT+       −47   −70     1      3.4   3.3   5.5
 Right PT         49   −28    11      3.3   5.5   3.9
 Left PT         −50   −29    10      5.4   6.7   2.1
 Right PAC        48   −15     4      2.4   5.1   2.6
 Left PAC        −49   −17     4      5.3   5.3   3.7
Data analysis
Data were analyzed using BrainVoyager QX (Version 2.3, Brain Innovation, Maastricht, the Netherlands) and MATLAB (MathWorks, Natick, MA). Prior to statistical analysis, functional data underwent preprocessing steps that included 3-D motion correction, linear trend removal, and high-pass filtering. Slice scan time correction was performed for functional data acquired with continuous sequences but not for functional data acquired using sparse sequences. For each individual participant, anatomical and functional data were first transformed into that participant's own AC-PC space (rotating the cerebrum into the anterior commissure–posterior commissure plane) for ROI-based classification. 
ROI classification
Classification was performed within ROIs defined in subjects' own AC-PC space. Raw time series were extracted from all voxels within each ROI (e.g., right PAC) or each spherical searchlight during a period extending from 4 to 12 s (four volumes) after the onset of the auditory stimulus in the auditory motion classification experiment. For each voxel, raw time series from each trial were averaged across the four volumes and then normalized by the mean BOLD response of all included trials from the same scan. We then carried out a leave-one-out jackknife procedure in which normalized temporal epochs from both unambiguous- and ambiguous-motion trials from all but one scan formed the training dataset for the classification analysis; normalized temporal epochs from the remaining scan were defined as the test set. This was repeated across the six scans for each participant, with each scan serving as the test set once. 
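A minimal sketch of this epoch extraction and leave-one-scan-out splitting (in Python; the array shapes, names, and onset bookkeeping are our assumptions, not the authors' code; the classifier itself is sketched after the next paragraph):

```python
import numpy as np

TR = 2.0  # s, continuous acquisition

def trial_epochs(ts, trial_onsets):
    """Average volumes 4-12 s after sound onset, then normalize per scan.

    ts           : (n_volumes, n_voxels) raw ROI time series for one scan
    trial_onsets : volume index of sound onset for each included trial
    """
    first = int(4 / TR)                              # skip 2 volumes (4 s)
    epochs = np.array([ts[on + first : on + first + 4].mean(axis=0)
                       for on in trial_onsets])      # average 4 volumes
    return epochs / epochs.mean(axis=0)              # voxelwise scan mean

def leave_one_scan_out(scans):
    """scans: list of (epochs, labels) tuples, one per scan; yields
    (train, test) splits with each scan serving as the test set once."""
    for i in range(len(scans)):
        train = [s for j, s in enumerate(scans) if j != i]
        yield train, scans[i]
```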
Following O'Toole, Jiang, Abdi, and Haxby (2005), we classified each test pattern (rightward vs. leftward) using linear discriminant classifiers after carrying out principal component analysis (PCA + LDA). Principal components analysis was performed on the training set, and the coordinates of individual training pattern projections onto these principal components (PCs) were used as input to the linear discriminant analysis. The usefulness of individual PCs in discriminating training patterns from the two auditory motion directions was assessed using the signal detection measure d′. A d′ threshold of 0.25 was used to select the PCs combined into an optimal low-dimensional subspace classifier for classifying the test dataset. This threshold ensured that across all participants the optimal classifier included approximately 5–10 individual PCs. 
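The classifier itself might look like the following sketch (a generic reconstruction of PCA + LDA with d′-based component selection from the description above, not the authors' implementation; we assume the standard two-class d′, the absolute difference of class means divided by the pooled standard deviation):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_pca_lda(X_train, y_train, d_prime_thresh=0.25):
    """PCA on training patterns; keep PCs whose projections separate the two
    motion directions with d' above threshold; fit an LDA on the kept PCs.
    Assumes numpy arrays and at least one PC passing the threshold."""
    pca = PCA().fit(X_train)
    Z = pca.transform(X_train)                        # trials x components
    a, b = Z[y_train == 0], Z[y_train == 1]
    d_prime = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(
        (a.var(0, ddof=1) + b.var(0, ddof=1)) / 2.0)
    keep = d_prime > d_prime_thresh                   # ~5-10 PCs in practice
    lda = LinearDiscriminantAnalysis().fit(Z[:, keep], y_train)
    return pca, keep, lda

def classify(pca, keep, lda, X_test):
    """Predict left/right motion for held-out test patterns."""
    return lda.predict(pca.transform(X_test)[:, keep])
```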
For each of the four functionally and/or anatomically defined ROIs, the classification procedure was applied to unambiguous- and ambiguous-motion test patterns separately, and the reported classification accuracy was averaged across the six scans for each participant and then averaged separately across sighted and early blind participants. To measure our ability to classify the reported direction of motion based on the pattern of responses within each ROI, a subject-level t test was performed for each group to test whether classification accuracy from that ROI was consistently higher than chance level (50%) across the seven participants. 
Spherical-searchlight classification
In addition to our ROI analysis, we also examined classification performance using a hypothesis-free spherical-searchlight approach (Kriegeskorte, Goebel, & Bandettini, 2006). To do this, the functional data were transformed into Talairach space (Talairach & Tournoux, 1988). Spherical searchlights were centered on each single voxel in Talairach space and were sized to contain the 33 surrounding voxels (2-mm radius). For each searchlight, reported classification accuracy was averaged across the six scans and was stored in Talairach space with each searchlight projecting its average accuracy to the position of its center voxel. This analysis resulted in a total of 14 (seven early blind and seven sighted) individual classification accuracy maps aligned in Talairach space. 
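A schematic of the searchlight loop is sketched below (in Python; the gray matter masking and the classifier are assumed to come from elsewhere, e.g., the PCA + LDA sketch above). Note that in integer voxel units a sphere of radius 2 contains exactly the 33 voxels described above, which is what the sketch uses.

```python
import numpy as np

def sphere_offsets(radius_vox=2):
    """Integer offsets within a sphere of radius 2 voxels (33 voxels,
    matching the searchlight size described in the text)."""
    r = radius_vox
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    return grid[(grid ** 2).sum(axis=1) <= r ** 2]

def searchlight_map(patterns, classify_fn, shape):
    """patterns: dict mapping voxel coordinate (x, y, z) -> per-trial values;
    classify_fn: function returning cross-validated accuracy for a voxel list.
    Each searchlight projects its average accuracy to its center voxel."""
    offsets = sphere_offsets()
    acc = np.full(shape, np.nan)
    for center in patterns:
        sphere = [tuple(np.add(center, o)) for o in offsets]
        if all(v in patterns for v in sphere):
            acc[center] = classify_fn(sphere)
    return acc
```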
Following Alink et al. (2012), individual classification accuracy maps were spatially smoothed with a Gaussian kernel (3-mm full width at half maximum [FWHM]). A subject-level t test was performed for each group to test which regions of Talairach space showed classification accuracy consistently higher than chance level (50%) across the seven participants of each group. A t threshold of 5.96 was used in conjunction with a cluster threshold of four voxels (i.e., at least four adjacent voxels needed to exceed the t threshold), corresponding to p < 0.001 (corrected for multiple comparisons; see Figure S2 for thresholded clusters). 
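The group-level smoothing and cluster thresholding could be sketched as follows (using scipy; the FWHM-to-sigma conversion is standard, but the connectivity rule defining "adjacent" voxels is our assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label
from scipy.stats import ttest_1samp

def group_cluster_map(acc_maps, voxel_mm=3.0, fwhm_mm=3.0,
                      t_thresh=5.96, min_cluster=4):
    """acc_maps: (n_subjects, x, y, z) classification accuracy maps."""
    # FWHM = sigma * 2 * sqrt(2 * ln 2); convert to voxel units
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    smoothed = np.array([gaussian_filter(m, sigma) for m in acc_maps])
    t, _ = ttest_1samp(smoothed, 0.5, axis=0)   # test against 50% chance
    supra = t > t_thresh
    labels, n = label(supra)                    # face-adjacent by default
    for k in range(1, n + 1):
        if (labels == k).sum() < min_cluster:   # drop clusters < 4 voxels
            supra[labels == k] = False
    return supra
```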
We chose to use a relatively simple classification procedure: PCA + LDA. Because our goal was to compare classification accuracy between blind and sighted individuals, an attractive feature of the PCA + LDA method is that the only model parameter under the control of the experimenter is the d′ threshold used to select PCs (see O'Toole et al., 2007, for a review). The overall classification accuracy for auditory motion that we achieved using PCA + LDA is comparable to the accuracy achieved in previous studies using similar stimuli and more powerful classification approaches such as linear support vector machines (Alink et al., 2012; Strnad et al., 2013). 
Results
Behavioral performance
Blind subjects were better at the task: In the auditory classification experiment, a Wilcoxon rank sum test showed that early blind subjects were significantly better than sighted subjects at identifying the apparent direction of unambiguous motion (i.e., 50% coherence) trials (Z = −3.721, p < 0.001, two-tailed; Figure 1B). 
Localization of hMT+
hMT+ was functionally defined individually for sighted and early blind subjects (see Table 2 for mean Talairach coordinates and their standard deviations). A potential concern is that the difficulty inherent in defining hMT+ in early blind subjects might result in a systematic bias or inaccuracy of definition in that group. We examined this in three ways: 
(a) We cross-registered each individual hMT+ ROI to the MNI152 coordinate system. Using resampling statistics (a sketch of such a test appears after this list), we confirmed that the Euclidean distance between the blind and sighted centroid hMT+ locations was not significantly larger than would be expected by chance (p > 0.05 for both left and right hemispheres, conservatively uncorrected for multiple comparisons), showing that there was no systematic offset between the estimated locations of hMT+ in blind and sighted subjects. 
(b) We compared each individual's hMT+ ROI with the Jülich probabilistic atlas for hMT+/V5 (Eickhoff et al., 2007; Malikovic et al., 2007; Wilms et al., 2005). This atlas provides the probability of any given voxel belonging to hMT+. We extracted histograms of these probability values for each subject's functionally defined hMT+ ROI and found no statistical difference in the probability distributions across blind and sighted subjects (left hemisphere: p = 0.3739; right hemisphere: p = 0.9099, two-sample Kolmogorov-Smirnov test, N = 105). 
(c) Finally, we repeated most of our analyses using an alternative definition of hMT+ (as described in Methods) and obtained similar results. These results are described in the Supplementary Materials (Supplementary Figures S4 and S5). 
Thus we do not believe systematic biases in hMT+ ROI selection can explain the differences we find between blind and sighted subjects. 
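The resampling test mentioned in (a) might look like the following (our generic reconstruction; the paper does not specify the exact resampling scheme, so the label-permutation approach and iteration count are assumptions):

```python
import numpy as np

def centroid_distance_pval(coords_blind, coords_sighted, n_perm=10000, seed=0):
    """coords_*: (n_subjects, 3) MNI centroids of individually defined hMT+.
    Permute group labels and ask how often the between-group centroid
    distance is at least as large as the observed one."""
    rng = np.random.default_rng(seed)

    def centroid_dist(a, b):
        return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

    observed = centroid_dist(coords_blind, coords_sighted)
    pooled = np.vstack([coords_blind, coords_sighted])
    n_blind = len(coords_blind)
    null = np.array([centroid_dist(p[:n_blind], p[n_blind:])
                     for p in (rng.permutation(pooled) for _ in range(n_perm))])
    return (null >= observed).mean()             # one-sided p value
```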
Beta weights within ROIs
Figure 2 shows our ROIs in the right hemisphere in a representative early blind (top row) and sighted (bottom row) subject: visual motion area hMT+ (red), PAC (green), PT (blue), and LOC (cyan). 
Figure 2
 
Sagittal views of the four ROIs in the right hemisphere, defined using a combination of anatomical and functional criteria, shown in AC-PC space in a representative blind (top row) and sighted (bottom row) subject. Red: hMT+; green: PAC; blue: PT; cyan: LOC.
Figure 3 shows beta weight responses to 100% coherent auditory motion versus silence in the localizer experiment. Because our goal was simply to identify ROIs for further analysis, we did not correct for multiple comparisons. A Wilcoxon signed rank test found that early blind subjects showed significant BOLD responses to auditory motion in hMT+ (p < 0.05, both hemispheres), whereas sighted subjects did not show a significant BOLD response (p > 0.21 and p > 0.37 for the right and left hemispheres, respectively). Both early blind and sighted subjects showed significant BOLD responses to auditory motion versus silence within PAC (p < 0.05, both hemispheres) and PT (p < 0.05, both hemispheres). Sighted subjects showed a small but significant suppression of response in right LOC (p < 0.05), whereas in early blind subjects the response was significantly positive (p < 0.05). 
Figure 3
 
Responses to 100% coherent auditory motion in the auditory motion localizer experiment. (A) hMT+, (B) PAC, (C) PT, and (D) LOC. Error bars show SEM. Wilcoxon signed rank tests (uncorrected for multiple comparisons) were used to examine whether responses were significantly different from zero. Wilcoxon rank sum tests (PAC, PT: two-tailed, Bonferroni-Holm corrected for hemisphere; right LOC: two-tailed, uncorrected; hMT+: one-tailed, uncorrected) were used to test for differences between subject groups. * p < 0.05; ** p < 0.01; *** p < 0.001.
We carried out separate nonparametric ANOVAs (Wobbrock, Findlater, Gergle, & Higgins, 2011) on beta weights for each area, testing subject group (blind vs. sighted) and hemisphere (LH vs. RH, for bilateral ROIs only). In area hMT+ there was a significant main effect of group, blind versus sighted: F(1, 24) = 66.45, p < 0.001, but no effect of hemisphere (LH vs. RH, p > 0.98) and no significant interaction between group and hemisphere (p > 0.56). In areas PAC and PT there were no significant main effects or interactions (all ps > 0.12). In the right LOC there was a significant effect of group, F(1, 12) = 15.3, p < 0.01. 
Group comparisons on beta weights were carried out separately for each ROI, using Wilcoxon rank sum tests. In the case of hMT+, one-tailed tests uncorrected for multiple comparisons were used because our initial experimental prediction was that we would find higher activation bilaterally to auditory motion within hMT+ in early blind individuals. In other bilateral ROIs we used two-tailed tests, with Bonferroni-Holm correction for the number of hemispheres (Holm, 1979). Early blind subjects showed significantly higher beta weights than sighted subjects within both left and right hMT+ (p < 0.001 for both hemispheres). Early blind subjects also showed significantly higher beta weights than sighted subjects within right LOC (p < 0.01). There was no difference in activation across subject groups in PT or PAC (all ps > 0.14). 
For results using hMT+ and PT ROIs defined using alternative methods, see the Supplementary Materials. 
Classification performance within ROIs
We began by examining which ROIs showed classification performance significantly better than chance. Because our goal was to identify ROIs for further analysis, we did not correct for multiple comparisons. Figure 4 shows classification performance for both unambiguous and ambiguous motion. In early blind subjects, a one-tailed Wilcoxon signed rank test found that classification performance within hMT+ was significantly above chance for unambiguous motion (50% coherent) in both hemispheres (p < 0.05, uncorrected for multiple comparisons). Similarly, classification performance for ambiguous motion was above chance within left hMT+ (p < 0.05) but not within right hMT+ (p > 0.10). Within early blind subjects, the apparent direction of auditory motion could not be decoded within either PT (all ps > 0.68) or the right LOC ROI (p > 0.10). 
Figure 4
 
fMRI pattern classification performance. Left panels show classification accuracy for the direction of the unambiguous motion stimulus (50% coherence); right panels show classification accuracy for the direction of the ambiguous motion stimulus (0% coherence). (A–B) hMT+, (C–D) PT, and (E–F) LOC. Error bars show SEM. Wilcoxon signed rank tests (uncorrected for multiple comparisons) were used to examine whether classification performance was significantly above chance. Wilcoxon rank sum tests (PAC, PT: two-tailed, Bonferroni-Holm corrected for hemisphere; right LOC: two-tailed, uncorrected; hMT+: one-tailed, uncorrected) were used to test for differences between subject groups. * p < 0.05; ** p < 0.01.
In contrast, within sighted subjects the apparent direction of motion could not be classified within hMT+ for either unambiguous or ambiguous motion, in either hemisphere. Within the PT ROI, classification performance for both ambiguous and unambiguous motion was above chance in the right hemisphere (p < 0.01, both types of motion) but not the left (both ps > 0.10). Within the right LOC ROI, classification performance was above chance for unambiguous (p < 0.05) but not ambiguous motion (p > 0.10). 
The direction of motion could not be successfully classified within PAC in either hemisphere for either subject group. Note that our failure to classify auditory motion direction in PAC is consistent with previous findings that PAC is not specifically involved in the perception of sound movement (Griffiths et al., 1998; Warren et al., 2002). The PAC ROI was therefore excluded from further statistics and is not shown in Figure 4. 
We then carried out post hoc nonparametric ANOVAs on classification performance, testing subject group (blind vs. sighted) × motion coherence level (50% vs. 0%) × hemisphere (LH vs. RH, for bilateral ROIs only) for each ROI separately. In our hMT+ ROI there was a significant main effect of subject group, F(1, 48) = 19.68, p < 0.0001, with no other significant main effects or interactions (all ps > 0.23). In our PT ROI there was once again a significant main effect of subject group, F(1, 48) = 8.09, p < 0.01, and a significant interaction between subject group and hemisphere, F(1, 48) = 6.91, p < 0.02, reflecting the right hemisphere lateralization of PT classification in sighted participants. In our right LOC ROI there was no significant effect of group (p > 0.35) or motion coherence level (p > 0.16), and no interaction between them (p > 0.63). 
Group comparisons were carried out on classification performance separately for each ROI, using Wilcoxon rank sum tests. In the case of hMT+, one-tailed tests uncorrected for multiple comparisons were used because our experimental prediction was that we would find better classification for blind subjects bilaterally in hMT+. For unambiguous motion, classification in early blind subjects was significantly better than in sighted subjects in the right hemisphere (p < 0.05) but not the left (p > 0.10). For ambiguous motion, early blind subjects' classification performance was significantly better than sighted subjects' in the left hemisphere (p < 0.01) and marginally better in the right hemisphere (p = 0.0641). Further research is needed to confirm whether there is a dissociation between left and right hMT+ in early blind versus sighted participants as a function of motion coherence. 
In the case of the PT ROI, two-tailed tests corrected for multiple comparisons were used. Sighted subjects showed significantly better classification than blind subjects for both unambiguous and ambiguous motion in the right hemisphere (p < 0.05, both motion coherence levels) but not the left (p > 0.60). 
There was no significant difference in classification performance between sighted subjects and early blind subjects within right LOC (p > 0.76, both motion coherence levels). 
For results using hMT+ and PT ROIs defined using alternative methods, see the Supplementary Materials. 
Classification performance using a spherical searchlight
Our spherical-searchlight analysis revealed additional cortical regions that contained directional information for auditory motion (p < 0.001, corrected for multiple comparisons; see Supplementary Figure S3 in Supplementary Materials). In sighted subjects, we identified one region located in the right superior temporal gyrus (Talairach coordinates: x = 43, y = −50, z = 13) that contained directional information for unambiguous motion (mean classification performance 53%, SEM = 0.004). An additional region in the right middle occipital gyrus was selective for ambiguous motion in sighted subjects (Talairach coordinates: x = 34, y = −78, z = 12; mean classification performance 53%, SEM = 0.004). Note that this region is lateral and superior to the region of the right middle occipital gyrus (BA19; x = 51, y = −64, z = −5) that has previously been shown to exhibit a preference for sound localization in early blind subjects (Renier et al., 2010). 
In early blind subjects, two areas were identified as successfully classifying ambiguous motion, including one in the left fusiform gyrus (Talairach coordinates: x = −42, y = −54, z = −12, mean classification performance 52%, SEM = 0.007) and one in the right parahippocampal gyrus (Talairach coordinates: x = 28, y = −28, z = −21, mean classification performance 52%, SEM = 0.002). 
Effects of age
There was a substantial difference in mean age between our early blind and control subjects (50 vs. 27 years), and it is well established that performance for many types of auditory processing deteriorates as a function of age (for review see Tun, Williams, Small, & Hafter, 2012). Thus, it is possible that our behavioral results underestimate the behavioral superiority of early blind individuals. 
Our fMRI results cannot easily be explained by age differences between subject groups. We saw no significant correlation between age and performance, for either BOLD or pattern classification performance, that passed Bonferroni-Holm correction in either subject group. In particular, (a) we saw no indication of a negative correlation between pattern classification performance and age in either LOC or PT. Indeed, correlations were nonsignificantly positive across both areas and subject groups. Thus, our failure to classify direction of motion in these areas within blind subjects is unlikely to be due to their being older. (b) The only correlations that were significant before Bonferroni-Holm correction were a negative correlation between BOLD responses and age in hMT+ in sighted individuals (r = −0.61, p = 0.02) and a positive correlation between BOLD responses and age in hMT+ in blind individuals (r = 0.64, p = 0.01). Thus, our results in hMT+ are unlikely to be due to performance in this area improving with age. Nevertheless, we acknowledge that there was a substantial age difference between groups. 
Discussion
Superior motion processing in early blind individuals
A wide range of previous studies suggests (though not uniformly; Fisher, 1964; L. A. Renier et al., 2010) improved spatial localization of static auditory stimuli in blind subjects (Juurmaa & Suonio, 1975; Muchnik, Efrati, Nemeth, Malin, & Hildesheimer, 1991; Rice, Feinstein, & Schusterman, 1965; Roder et al., 1999; see also Rauschecker, 1995, for a review of the animal literature). There is also evidence that blind subjects have lower minimum audible movement angles for a single moving sound source (Lewald, 2013). Here we show that blind subjects performed significantly better than sighted subjects at determining the direction of auditory motion using a relatively naturalistic stimulus consisting of a number of sound sources that included ILD, ITD, and Doppler cues. Our stimulus was designed to be analogous to the visual global motion stimuli varying in coherence that have classically been used to evoke responses in MT and MST (Britten, Shadlen, Newsome, & Movshon, 1992). These stimuli are designed to minimize the ability of subjects to track the local movement over time of individual dots or sound sources (as can be done when a single sound source or visual dot is used as a stimulus), and thereby to maximize reliance on global motion mechanisms. 
hMT+ and PT
Using fMRI pattern classification, we found that in sighted individuals the perceived direction of motion for both coherent and ambiguous auditory motion stimuli could be accurately categorized based on neural responses within right PT. Sighted subjects did not show significant modulation of the BOLD response to auditory motion within hMT+, and the direction of auditory motion could not be classified from responses in that region. 
Within early blind individuals, hMT+ responded to auditory motion, and auditory motion decisions could be successfully categorized from its responses. Surprisingly, in blind subjects the ability to classify the direction of auditory motion within PT was significantly worse than in sighted subjects. 
Thus, our results are suggestive of a double dissociation whereby in early blind subjects classification for auditory motion direction is enhanced within visual area hMT+ and reduced within PT compared to sighted subjects. 
Right LOC
In sighted subjects, LOC showed robustly positive responses to both moving and static visual dots with no significant difference in response between them (Supplementary Figure S2), indicating weak or no motion selectivity. Right LOC showed a small but significant suppression of response to the auditory motion stimulus (Figure 3). However, despite this lack of a positive response to auditory motion, the direction of motion could be decoded from responses in right LOC. This finding is generally consistent with Alink et al. (2012), who found successful multivariate classification of the direction of auditory motion in right LOC without any significant overall (univariate) modulation of response. 
In contrast, within early blind subjects we see larger responses to auditory motion than to silence, but the direction of motion could not be decoded from those responses. It should be noted that our finding of a group difference in classification is still somewhat provisional: An ANOVA did reveal a significant effect of group within right LOC, whereby sighted subjects showed better classification than blind subjects; however, individual post hoc tests did not show a significant difference across groups. 
The effects of blindness on right LOC are difficult to interpret without a better understanding of the role of this region in sighted subjects. This ROI is nonoverlapping with the object-selective LOC (Larsson & Heeger, 2006; Malach et al., 1995) and with regions associated with 3-D processing (Amedi, Jacobson, Hendler, Malach, & Zohary, 2002; Brouwer, van Ee, & Schwarzbach, 2005; Georgieva, Peeters, Kolster, Todd, & Orban, 2009; Orban, Sunaert, Todd, Van Hecke, & Marchal, 1999). Moreover, it is currently unclear to what extent this particular region of LOC is supramodal in sighted subjects, and whether it supports classification of visual as well as auditory motion direction. 
Lack of successful classification in PAC
Although neurons sensitive to cues for auditory motion have been found within PAC in both cats and monkeys (Ahissar, Ahissar, Bergman, & Vaadia, 1992; Stumpf, Toronchuk, & Cynader, 1992; Toronchuk, Stumpf, & Cynader, 1992), a wide variety of previous groups have shown that PAC does not differentiate in the magnitude of the BOLD response between moving and stationary stimuli (for review see Warren et al., 2002). There are a number of ways in which neurons selective for auditory motion might exist within PAC without eliciting robust univariate BOLD responses for moving vs. stationary stimuli: For example, tuning preferences might be distributed relatively evenly across moving and stationary stimuli, excitatory and inhibitory responses might be fairly evenly balanced, or tuning might be extremely broad. All of these would be likely to minimize an overall difference in BOLD response to moving versus stationary stimuli within PAC. 
We could not classify the direction of motion of our auditory motion stimulus within PAC. However, our results should not be taken as evidence against sensitivity to local auditory motion cues in PAC, given that we used a complex stimulus that minimized the ability to track local sound sources. Our findings do provide limited evidence for either a lack of global motion selectivity in PAC or a difference in its spatial organization and/or hemodynamic connectivity (Anderson & Oates, 2010) compared to neighboring PT, where classification was successful in sighted subjects. 
Is hMT+ supramodal in sighted individuals?
In the data shown here, sighted subjects showed no significant modulation of the BOLD response to auditory motion within individually defined hMT+ (if anything, there was a slight suppression of hMT+ responses when subjects listened to auditory motion), and the direction of auditory motion could not be classified. As far as the previous literature on auditory motion is concerned, two studies have reported auditory responses within hMT+ in normally sighted subjects. Poirier et al. (2005) reported larger BOLD responses to auditory motion (vs. static) stimuli in blindfolded sighted subjects using a definition of hMT+ based on group averaging in stereotaxic coordinates. This group also reported the positions of clusters that showed significant activation to moving versus static auditory stimuli. While these clusters were reported as being in the expected anatomical location of hMT+, only two of the eight reported coordinates of individual clusters fall within 2 SDs of the expected location of hMT+ across individuals as reported by Dumoulin et al. (2000; also see Watson et al., 1993). Using multivoxel pattern analysis, Strnad et al. (2013) recently showed that while the overall BOLD response to auditory motion was negative (in contrast to Poirier et al., 2005, but similar to L. Lewis, Saenz, & Fine, 2007), a region defined as hMT+ did contain classification information about different auditory motion conditions in sighted individuals. However, in that study hMT+ was defined as all voxels within a relatively generous 10-mm radius of the MNI group peak coordinates. The size of the ROIs was then reduced by a feature-selection criterion, retaining only the 50 voxels (out of ∼1,000 in the ROI) with the highest t values for the contrast task > rest. This analysis is therefore likely to be highly susceptible to the inclusion of voxels from neighboring areas. 
In contrast, a variety of studies that have not relied on stereotaxic alignment to define hMT+ have failed to find evidence of auditory motion responses in hMT+. Indeed, the extent to which auditory responses within hMT+ are an artifact of alignment was specifically examined by Saenz et al. (2008), who found that group-averaged methods (surface-space alignment rather than stereotaxic alignment) resulted in the appearance of auditory responses within hMT+ in sighted subjects. However, inspection of the same data using individual hMT+ ROIs (based on individual visual functional localizers) demonstrated that the vast majority of individually defined hMT+ ROIs did not respond to auditory motion, and that these responses were in fact primarily located in a neighboring region. Thus, in that study the finding of BOLD responses to auditory motion within a group-average hMT+ was largely attributable to intersubject averaging. This finding was replicated in a very similar study on the same subjects by L. B. Lewis et al. (2010). 
Similarly, Alink et al. (2012), who defined ROIs based on individual anatomies, found no response to auditory motion and could not classify the direction of auditory motion within hMT+. Like us, they did find auditory motion responses within neighboring LOC. Bedny et al. (2010), who projected a conservatively defined group ROI onto individual anatomies, similarly did not see auditory motion responses in hMT+ in sighted or late blind subjects. J. W. Lewis et al. (2000), who aligned data on the cortical surface in a study specifically designed to examine overlap between visual and auditory motion processing, found negative BOLD responses in occipital cortex, including subregions of hMT+, but did see positive BOLD responses to the auditory motion stimulus in the neighboring superior temporal sulcus (STS). 
Like others (J. W. Lewis et al., 2000; Strnad et al., 2013), we see some indication of suppressive modulation of hMT+ when subjects perform an auditory motion task. It remains to be seen whether this modulation is due to cross-modal attention (Ciaramitaro, Buracas, & Boynton, 2007), and whether it is selective for motion. 
The failure to find auditory responses in hMT+ in sighted individuals is somewhat surprising given the substantial literature reporting tactile responses (e.g., Hagen et al., 2002; Matteau, Kupers, Ricciardi, Pietrini, & Ptito, 2010; Proulx, Brown, Pasqualotto, & Meijer, 2014; Ricciardi et al., 2007) selective for the direction of tactile motion (van Kemenade et al., 2014) in hMT+ within sighted subjects, as well as disruption of tactile processing by inhibition induced by repetitive transcranial magnetic stimulation (rTMS) over the expected site of hMT+ (Ricciardi et al., 2011). 
One possibility is that the tasks and stimuli used to look for auditory responses in hMT+ have not been ideal for eliciting such responses. Failures to find auditory responses in hMT+ are necessarily negative evidence. Moreover, although the studies that have failed to find auditory responses in hMT+ were carried out in four different laboratories, all used relatively similar auditory stimuli and tasks (similar to ours or less complex). It is possible that more complex naturalistic auditory motion stimuli (or some other variation of task design) would be successful in eliciting auditory motion responses in hMT+. A second possibility is that hMT+ is less multisensory than the prevailing literature suggests. Although tactile responses have been reported in hMT+ across a wide range of studies, as cited above, many of these studies localized hMT+ using stereotaxic coordinates, which may have resulted in the contribution of tactile motion responses from neighboring polysensory areas (Beauchamp, Yasar, Kishan, & Ro, 2007). Results from studies that used individual visual localizers to define hMT+ suggest that tactile responses may be primarily limited to a subregion of MST (Beauchamp et al., 2007; Ricciardi et al., 2007; van Kemenade et al., 2014). Moreover, since none of these studies controlled for visual attention, the contribution of visualized or implied motion (e.g., hMT+ responds to static pictures of moving objects; Kourtzi & Kanwisher, 2000) to tactile responses in hMT+ is not yet fully understood. Finally, it is possible that hMT+ is multimodal for tactile but not auditory stimuli. 
Although our findings clearly show a difference between blind and sighted individuals, how one interprets our data is influenced by whether or not hMT+ is eventually shown to have auditory responses in sighted individuals. If auditory motion responses within hMT+ are shown to exist in adult sighted subjects, then the responses we see in blind individuals likely provide an example of cross-modal plasticity mediated by an enhancement or unmasking of responses within a naturally multisensory area (see Kupers & Ptito, 2014, for a recent review; Pascual-Leone, Amedi, Fregni, & Merabet, 2005; also Pascual-Leone & Hamilton, 2001). However, if hMT+ proves not to respond to auditory motion in adult sighted subjects, our data provide evidence that cross-modal plasticity can involve a categorical change of adult input modality (Hunt et al., 2005; Kahn & Krubitzer, 2002; Karlen, Kahn, & Krubitzer, 2006), possibly as a result of a failure of normal pruning in development (for reviews see Huberman, Feller, & Chapman, 2008; Innocenti & Price, 2005; Katz & Shatz, 1996; O'Leary, 1992). 
In blind individuals auditory hMT+ responses are associated with the perceptual experience of auditory motion
Our results provide an important link between the perceptual experience of auditory motion and responses within hMT+ in early blind individuals. A variety of studies of visual motion in sighted subjects have shown that a number of visual areas not thought to be associated with visual motion perception nonetheless contain direction-specific information for unambiguous motion stimuli (Kamitani & Tong, 2006; Serences & Boynton, 2007). However, the reported perceptual state of the observer for ambiguous visual motion stimuli can only be classified based on activity patterns in the human MT complex and possibly V3A (Serences & Boynton, 2007). We show here that the perceptual experience of ambiguous auditory motion in early blind subjects can be classified from hMT+ responses, demonstrating that responses within this area are correlated not only with the physical auditory motion stimulus but also with the perceptual experience of auditory motion in early blind individuals. 
hMT+ seems to replace rather than augment processing within PT
These data provide further support for the suggestion that the multimodal responses of hMT+ in congenitally blind individuals are not driven by connections from the PT (Bedny et al., 2010), since we saw a reduced ability to decode auditory motion direction from PT in early blind subjects. Indeed, for our task, hMT+ appears to supplant rather than augment motion-selective responses within PT. Blind subjects showed BOLD modulations in PT to 100% coherent auditory motion versus silence that were as robust as those found in sighted subjects, yet direction of motion could no longer be classified successfully from this area, suggesting a loss of auditory directional tuning. 
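This dissociation between overall response amplitude and pattern information can be illustrated with a toy simulation (hypothetical data, not drawn from this study): every voxel responds strongly to sound, yet the response pattern is identical for both directions, so decoding stays at chance.

```python
# Toy illustration: robust overall activation without direction information.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 100
labels = rng.integers(0, 2, n_trials)                 # hypothetical direction labels

# Strong mean response in every voxel, identical for both directions.
patterns = 2.0 + rng.standard_normal((n_trials, n_voxels))

acc = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(f"Mean response amplitude: {patterns.mean():.1f} (well above zero)")
print(f"Decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```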
One potential source of these bilateral hMT+ responses might be feedback from parietal or prefrontal areas. A variety of polysensory or supramodal areas that show functional responses to a combination of auditory and visual motion or spatial position have been identified, including (but not restricted to) the inferior parietal lobule (Bushara et al., 1999; Lewald, Staedtgen, Sparing, & Meister, 2011; J. W. Lewis et al., 2000), lateral frontal cortex including the precentral sulcus (J. W. Lewis et al., 2000), and the superior temporal sulcus (Baumgart et al., 1999; Beauchamp, Argall, Bodurka, Duyn, & Martin, 2004; Griffiths, Bench, & Frackowiak, 1994; J. W. Lewis et al., 2000). In particular, in sighted subjects the inferior parietal lobule shows functional responses to both auditory and visual motion and spatial position information (Bushara et al., 1999; Lewald et al., 2011; J. W. Lewis et al., 2000), and correlations between activation in the inferior parietal lobule and occipital regions are enhanced as a result of blindness (Weeks et al., 2000). Similarly, the lateral prefrontal cortex, which contains polysensory areas in sighted subjects (J. W. Lewis et al., 2000), has previously been shown to have enhanced functional connectivity with hMT+ as a result of early blindness (Bedny et al., 2010). 
Summary
These data suggest that the plasticity resulting from early visual deprivation extends to nondeprived (auditory) regions of cortex. One model of developmental specialization holds that organization is determined by competition between modules, whereby computational tasks are more likely to be assigned to those brain regions whose innate characteristics best suit them to the task (Jacobs, 1997; Jacobs & Kosslyn, 1994). Consistent with this model, it has been suggested that many sensory areas may be multisensory at birth and become increasingly unimodal through axonal and dendritic refinements that are at least partially driven by competitive interactions during development (for reviews see Huberman et al., 2008; Innocenti & Price, 2005; Katz & Shatz, 1996; O'Leary, 1992). It has also been suggested that the differences in gray matter thickness observed in early blind individuals using T1-weighted imaging (Anurova, Renier, De Volder, Carlson, & Rauschecker, in press; Bridge, Cowey, Ragge, & Watkins, 2009; Jiang et al., 2009; Park et al., 2009; Voss, Lepore, Gougoux, & Zatorre, 2011) reflect differential pruning as a result of the lack of visual experience. 
This competitive model is also consistent with previous studies in early blind individuals suggesting that the reorganization of function resulting from loss of sensory input may be modulated by aspects of computational fitness, such as similarity to the original computational function of the area (a theory described in the literature as "functional constancy" or "metamodal plasticity"; Amedi et al., 2007; Mahon, Anzellotti, Schwarzbach, Zampini, & Caramazza, 2009; Saenz et al., 2008) and/or connectivity to other areas such as motor systems (Mahon et al., 2007). Thus our findings provide further support for the idea that the exquisite specialization of hMT+ for motion processing may generalize to auditory motion, presumably because the processing of auditory motion shares computational similarities with the processing of visual motion. 
Finally, the reduced capacity for auditory motion direction classification in PT within blind individuals for our task suggests that, in the absence of visual motion input, hMT+ may actually usurp some of the functions of nondeprived auditory motion areas. Interestingly, Sadato et al. (1998) reported a highly analogous result for Braille reading: ventral occipital regions, including primary visual cortex and the fusiform gyri bilaterally, were activated in blind Braille readers, whereas secondary somatosensory areas were less activated by Braille reading in blind individuals than in sighted controls. 
“You guys can see with your eyes, we see with our ears” (Juan Ruiz, PopTech 2011). This paper provides further evidence that the enhanced auditory motion performance of early blind individuals may reflect, at least in part, the fact that they “see” auditory motion. 
Acknowledgments
This work was supported by the National Institutes of Health (EY-014645 to Ione Fine). Fang Jiang was supported by the Human Frontier Science Program Long-Term Fellowship (LT00103/2008), the Auditory Neuroscience Training Program (T32DC005361), and the Pathway to Independence Award (K99EY023268). 
Commercial relationships: none. 
Corresponding author: Fang Jiang. 
Email: fjiang@uw.edu; fjiang@u.washington.edu. 
Address: Department of Psychology, University of Washington, Seattle, WA, USA. 
References
Ahissar M. Ahissar E. Bergman H. Vaadia E. (1992). Encoding of sound-source location and movement: Activity of single neurons and interactions between adjacent neurons in the monkey auditory cortex. Journal of Neurophysiology, 67 (1), 203–215. [PubMed]
Alink A. Euler F. Kriegeskorte N. Singer W. Kohler A. (2012). Auditory motion direction encoding in auditory cortex and high-level visual cortex. Human Brain Mapping, 33 (4), 969–978. [CrossRef] [PubMed]
Amedi A. Jacobson G. Hendler T. Malach R. Zohary E. (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex, 12 (11), 1202–1212. [CrossRef] [PubMed]
Amedi A. Stern W. M. Camprodon J. A. Bermpohl F. Merabet L. Rotman S. Hemond C. Pascual-Leone A. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience, 10 (6), 687–689. [CrossRef] [PubMed]
Anderson M. L. Oates T. (2010). A critique of multi-voxel pattern analysis. In Ohlsson S. Catrambone R. (Eds.), Proceedings of the 32nd Annual Meeting of the Cognitive Science Society (pp. 1511–1516), Portland, OR.
Anurova I. Renier L. A. De Volder A. G. Carlson S. Rauschecker J. P. (in press). Relationship between cortical thickness and functional activation in the early blind. Cerebral Cortex.
Baumgart F. Gaschler-Markefski B. Woldorff M. G. Heinze H. J. Scheich H. (1999). A movement-sensitive area in auditory cortex. Nature, 400 (6746), 724–726. [CrossRef] [PubMed]
Beauchamp M. S. Argall B. D. Bodurka J. Duyn J. H. Martin A. (2004). Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nature Neuroscience, 7 (11), 1190–1192. [CrossRef] [PubMed]
Beauchamp M. S. Cox R. W. DeYoe E. A. (1997). Graded effects of spatial and featural attention on human area MT and associated motion processing areas. Journal of Neurophysiology, 78 (1), 516–520. [PubMed]
Beauchamp M. S. Yasar N. E. Kishan N. Ro T. (2007). Human MST but not MT responds to tactile stimulation. Journal of Neuroscience, 27 (31), 8261–8267. [CrossRef] [PubMed]
Bedny M. Konkle T. Pelphrey K. Saxe R. Pascual-Leone A. (2010). Sensitive period for a multimodal response in human visual motion area MT/MST. Current Biology, 20 (21), 1900–1906. [CrossRef] [PubMed]
Bremmer F. Schlack A. Shah N. J. Zafiris O. Kubischik M. Hoffmann K. (2001). Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron, 29 (1), 287–296. [CrossRef] [PubMed]
Bridge H. Cowey A. Ragge N. Watkins K. (2009). Imaging studies in congenital anophthalmia reveal preservation of brain architecture in 'visual' cortex. Brain, 132 (Pt. 12), 3467–3480. [CrossRef] [PubMed]
Britten K. H. Newsome W. T. Shadlen M. N. Celebrini S. Movshon J. A. (1996). A relationship between behavioral choice and the visual responses of neurons in macaque MT. Visual Neuroscience, 13 (1), 87–100. [CrossRef] [PubMed]
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12 (12), 4745–4765. [PubMed]
Brouwer G. J. van Ee R. Schwarzbach J. (2005). Activation in visual cortex correlates with the awareness of stereoscopic depth. Journal of Neuroscience, 25 (45), 10403–10413. [CrossRef] [PubMed]
Bushara K. O. Weeks R. A. Ishii K. Catalan M. J. Tian B. Rauschecker J. P. (1999). Modality-specific frontal and parietal areas for auditory and visual spatial localization in humans. Nature Neuroscience, 2 (8), 759–766. [PubMed]
Ciaramitaro V. M. Buracas G. T. Boynton G. M. (2007). Spatial and cross-modal attention alter responses to unattended sensory information in early visual and auditory human cortex. Journal of Neurophysiology, 98 (4), 2399–2413. [CrossRef] [PubMed]
Dumoulin S. O. Bittar R. G. Kabani N. J. Baker C. L. Jr. Le Goualher G. Bruce Pike G. (2000). A new anatomical landmark for reliable identification of human area V5/MT: A quantitative analysis of sulcal patterning. Cerebral Cortex, 10 (5), 454–463. [CrossRef] [PubMed]
Eickhoff S. B. Paus T. Caspers S. Grosbras M.-H. Evans A. C. Zilles K. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. NeuroImage, 36 (3), 511–521. [CrossRef] [PubMed]
Fisher G. H. (1964). Spatial localization by the blind. American Journal of Psychology, 77, 2–14. [CrossRef] [PubMed]
Georgieva S. Peeters R. Kolster H. Todd J. T. Orban G. A. (2009). The processing of three-dimensional shape from disparity in the human brain. Journal of Neuroscience, 29 (3), 727–742. [CrossRef] [PubMed]
Griffiths T. D. Bench C. J. Frackowiak R. S. (1994). Human cortical areas selectively activated by apparent sound movement. Current Biology, 4 (10), 892–895. [CrossRef] [PubMed]
Griffiths T. D. Rees G. Rees A. Green G. G. Witton C. Rowe D. (1998). Right parietal cortex is involved in the perception of sound movement in humans. Nature Neuroscience, 1 (1), 74–79. [CrossRef] [PubMed]
Griffiths T. D. Rees A. Witton C. Cross P. M. Shakir R. A. Green G. G. (1997). Spatial and temporal auditory processing deficits following right hemisphere infarction. A psychophysical study. Brain, 120 (Pt. 5), 785–794. [CrossRef] [PubMed]
Hagen M. C. Franzen O. McGlone F. Essick G. Dancer C. Pardo J. V. (2002). Tactile motion activates the human middle temporal/V5 (MT/V5) complex. European Journal of Neuroscience, 16 (5), 957–964. [CrossRef] [PubMed]
Hall D. A. Haggard M. P. Akeroyd M. A. Palmer A. R. Summerfield A. Q. Elliott M. R. (1999). “Sparse” temporal sampling in auditory fMRI. Human Brain Mapping, 7 (3), 213–223. [CrossRef] [PubMed]
Holm S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6, 65–70.
Huberman A. D. Feller M. B. Chapman B. (2008). Mechanisms underlying development of visual maps and receptive fields. Annual Review of Neuroscience, 31, 479–509. [CrossRef] [PubMed]
Huk A. C. Dougherty R. F. Heeger D. J. (2002). Retinotopy and functional subdivision of human areas MT and MST. Journal of Neuroscience, 22 (16), 7195–7205. [PubMed]
Hunt D. L. King B. Kahn D. M. Yamoah E. N. Shull G. E. Krubitzer L. (2005). Aberrant retinal projections in congenitally deaf mice: How are phenotypic characteristics specified in development and evolution? The Anatomical Record. Part A, Discoveries in Molecular, Cellular, and Evolutionary Biology, 287 (1), 1051–1066. [CrossRef] [PubMed]
Innocenti G. M. Price D. J. (2005). Exuberance in the development of cortical networks. Nature Reviews Neuroscience, 6 (12), 955–965. [CrossRef] [PubMed]
Jacobs R. A. (1997). Nature, nurture, and the development of functional specializations: A computational approach. Psychonomic Bulletin & Review, 4 (3), 299–309. [CrossRef]
Jacobs R. A. Kosslyn S. M. (1994). Encoding shape and spatial relations: The role of receptive field size in coordination complementary representations. Cognitive Science, 18 (3), 361–368. [CrossRef]
Jenkinson M. Smith S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5 (2), 143–156. [CrossRef] [PubMed]
Jiang J. Zhu W. Shi F. Liu Y. Li J. Qin W. (2009). Thick visual cortex in the early blind. Journal of Neuroscience, 29 (7), 2205–2211. [CrossRef] [PubMed]
Juurmaa J. Suonio K. (1975). The role of audition and motion in the spatial orientation of the blind and the sighted. Scandinavian Journal of Psychology, 16 (3), 209–216. [CrossRef]
Kahn D. M. Krubitzer L. (2002). Massive cross-modal cortical plasticity and the emergence of a new cortical area in developmentally blind mammals. Proceedings of the National Academy of Sciences, USA, 99 (17), 11429–11434. [CrossRef]
Kamitani Y. Tong F. (2006). Decoding seen and attended motion directions from activity in the human visual cortex. Current Biology, 16 (11), 1092–1102.
Karlen S. J. Kahn D. M. Krubitzer L. (2006). Early blindness results in abnormal corticocortical and thalamocortical connections. Neuroscience, 142 (3), 843–858. [CrossRef] [PubMed]
Katz L. C. Shatz C. J. (1996). Synaptic activity and the construction of cortical circuits. Science, 274 (5290), 1133–1138. [CrossRef] [PubMed]
Kilian-Hütten N. Valente G. Vroomen J. Formisano E. (2011). Auditory cortex encodes the perceptual interpretation of ambiguous sound. Journal of Neuroscience, 31 (5), 1715–1720. [CrossRef] [PubMed]
Kourtzi Z. Kanwisher N. (2000). Activation in human MT/MST by static images with implied motion. Journal of Cognitive Neuroscience, 12 (1), 48–55. [CrossRef] [PubMed]
Kriegeskorte N. Goebel R. Bandettini P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences, USA, 103, 3863–3868. [CrossRef]
Krumbholz K. Schonwiesner M. Rubsamen R. Zilles K. Fink G. R. von Cramon D. Y. (2005). Hierarchical processing of sound location and motion in the human brainstem and planum temporale. European Journal of Neuroscience, 21 (1), 230–238. [CrossRef] [PubMed]
Kupers R. Pietrini P. Ricciardi E. Ptito M. (2011). The nature of consciousness in the visually deprived brain. Frontiers in Psychology, 2, 19. [CrossRef] [PubMed]
Kupers R. Ptito M. (2014). Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neuroscience & Biobehavioral Reviews, 41, 36–52. [CrossRef]
Larsson J. Heeger D. J. (2006). Two retinotopic visual areas in human lateral occipital cortex. Journal of Neuroscience, 26 (51), 13128–13142. [CrossRef] [PubMed]
Lewald J. (2013). Exceptional ability of blind humans to hear sound motion: Implications for the emergence of auditory space. Neuropsychologia, 51 (1), 181–186. [CrossRef] [PubMed]
Lewald J. Staedtgen M. Sparing R. Meister I. G. (2011). Processing of auditory motion in inferior parietal lobule: Evidence from transcranial magnetic stimulation. Neuropsychologia, 49, 209–215. [CrossRef] [PubMed]
Lewis J. W. Beauchamp M. S. DeYoe E. A. (2000). A comparison of visual and auditory motion processing in human cerebral cortex. Cerebral Cortex, 10 (9), 873–888. [CrossRef] [PubMed]
Lewis L. Saenz M. Fine I. (2007). Patterns of cross-modal plasticity in the visual cortex of early blind human subjects across a variety of tasks and input modalities. Journal of Vision, 7 (9): 875, http://www.journalofvision.org/content/7/9/875, doi:10.1167/7.9.875. [Abstract]
Lewis L. B. Fine I. (2011). The effects of visual deprivation after infancy. In Levin L. A. Nilsson S. F. E. Ver Hoeve J. Wu S. Kaufman P. L. Albert A. (Eds.), Adler's physiology of the eye: Expert consult (11th ed.). St. Louis, MO: Saunders.
Lewis L. B. Saenz M. Fine I. (2010). Mechanisms of cross-modal plasticity in early-blind subjects. Journal of Neurophysiology, 104 (6), 2995–3008. [CrossRef] [PubMed]
Mahon B. Z. Anzellotti S. Schwarzbach J. Zampini M. Caramazza A. (2009). Category-specific organization in the human brain does not require visual experience. Neuron, 63 (3), 397–405. [CrossRef] [PubMed]
Mahon B. Z. Milleville S. C. Negri G. A. Rumiati R. I. Caramazza A. Martin A. (2007). Action-related properties shape object representations in the ventral stream. Neuron, 55 (3), 507–520. [CrossRef] [PubMed]
Malach R. Reppas J. B. Benson R. R. Kwong K. K. Jiang H. Kennedy W. A. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences, USA, 92 (18), 8135–8139. [CrossRef]
Malikovic A. Amunts K. Schleicher A. Mohlberg H. Eickhoff S. B. Wilms M. (2007). Cytoarchitectonic analysis of the human extrastriate cortex in the region of V5/MT+: A probabilistic, stereotaxic map of area hOc5. Cerebral Cortex, 17, 562–574. [CrossRef] [PubMed]
Matteau I. Kupers R. Ricciardi E. Pietrini P. Ptito M. (2010). Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals. Brain Research Bulletin, 82 (5–6), 264–270. [CrossRef] [PubMed]
Morosan P. Rademacher J. Schleicher A. Amunts K. Schormann T. Zilles K. (2001). Human primary auditory cortex: Cytoarchitectonic subdivisions and mapping into a spatial reference system. NeuroImage, 13 (4), 684–701. [CrossRef] [PubMed]
Muchnik C. Efrati M. Nemeth E. Malin M. Hildesheimer M. (1991). Central auditory skills in blind and sighted subjects. Scandinavian Audiology, 20 (1), 19–23. [CrossRef]
Newsome W. T. Wurtz R. H. Dürsteler M. R. (1985). Deficits in visual motion processing following ibotenic acid lesions of the middle temporal visual area of the macaque monkey. Journal of Neuroscience, 5 (3), 825–840. [PubMed]
O'Leary D. D. (1992). Development of connectional diversity and specificity in the mammalian brain by the pruning of collateral projections. Current Opinion in Neurobiology, 2 (1), 70–77. [CrossRef]
O'Toole A. J. Jiang F. Abdi H. Haxby J. V. (2005). Partially distributed representations of objects and faces in ventral temporal cortex. Journal of Cognitive Neuroscience, 17 (4), 580–590. [CrossRef] [PubMed]
O'Toole A. J. Jiang F. Abdi H. Penard N. Dunlop J. P. Parent M. A. (2007). Theoretical, statistical, and practical perspectives on pattern-based classification approaches to the analysis of functional neuroimaging data. Journal of Cognitive Neuroscience, 19 (11), 1735–1752. [CrossRef] [PubMed]
Orban G. A. Sunaert S. Todd J. T. Van Hecke P. Marchal G. (1999). Human cortical regions involved in extracting depth from motion. Neuron, 24 (4), 929–940. [CrossRef] [PubMed]
Park H. J. Lee J. D. Kim E. Y. Park B. Oh M. K. Lee S. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47 (1), 98–106. [CrossRef] [PubMed]
Pascual-Leone A. Amedi A. Fregni F. Merabet L. B. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28, 377–401. [CrossRef] [PubMed]
Pascual-Leone A. Hamilton R. (2001). The metamodal organization of the brain. In Casanova C. Ptito M. (Eds.), Progress in brain research, Vol. 134 (pp. 427–445). San Diego, CA: Elsevier Science. [PubMed]
Penhune V. B. Zatorre R. J. MacDonald J. D. Evans A. C. (1996). Interhemispheric anatomical differences in human primary auditory cortex: Probabilistic mapping and volume measurement from magnetic resonance scans. Cerebral Cortex, 6 (5), 661–672. [CrossRef] [PubMed]
Petkov C. I. Kayser C. Augath M. Logothetis N. K. (2009). Optimizing the imaging of the monkey auditory cortex: Sparse vs. continuous fMRI. Magnetic Resonance Imaging, 27 (8), 1065–1073. [CrossRef] [PubMed]
Poirier C. Collignon O. Devolder A. G. Renier L. Vanlierde A. Tranduy D. (2005). Specific activation of the V5 brain area by auditory motion processing: An fMRI study. Brain Research: Cognitive Brain Research, 25 (3), 650–658. [CrossRef] [PubMed]
Poirier C. Collignon O. Scheiber C. Renier L. Vanlierde A. Tranduy D. (2006). Auditory motion perception activates visual motion areas in early blind subjects. NeuroImage, 31 (1), 279–285. [CrossRef] [PubMed]
Proulx M. J. Brown D. J. Pasqualotto A. Meijer P. (2014). Multisensory perceptual learning and sensory substitution. Neuroscience & Biobehavioral Review, 41, 16–25. [CrossRef]
Rademacher J. Caviness V. S. Jr. Steinmetz H. Galaburda A. M. (1993). Topographical variation of the human primary cortices: Implications for neuroimaging, brain mapping, and neurobiology. Cerebral Cortex, 3 (4), 313–329. [CrossRef] [PubMed]
Rauschecker J. P. (1995). Compensatory plasticity and sensory substitution in the cerebral cortex. Trends in Neuroscience, 18 (1), 36–43. [CrossRef]
Renier L. De Volder A. G. Rauschecker J. P. (2014). Cortical plasticity and preserved function in early blindness. Neuroscience & Biobehavioral Review, 41, 53–63. [CrossRef]
Renier L. A. Anurova I. De Volder A. G. Carlson S. VanMeter J. Rauschecker J. P. (2010). Preserved functional specialization for spatial processing in the middle occipital gyrus of the early blind. Neuron, 68 (1), 138–148. [CrossRef] [PubMed]
Ricciardi E. Basso D. Sani L. Bonino D. Vecchi T. Pietrini P. (2011). Functional inhibition of the human middle temporal cortex affects non-visual motion perception: A repetitive transcranial magnetic stimulation study during tactile speed discrimination. Experimental Biology and Medicine, 236 (2), 138–144. [CrossRef] [PubMed]
Ricciardi E. Vanello N. Sani L. Gentili C. Scilingo E. P. Landini L. (2007). The effect of visual experience on the development of functional architecture in hMT+. Cerebral Cortex, 17 (12), 2933–2939. [CrossRef] [PubMed]
Rice C. E. Feinstein S. H. Schusterman R. J. (1965). Echo-detection ability of the blind: Size and distance factors. Journal of Experimental Psychology, 70, 246–255. [CrossRef] [PubMed]
Roder B. Teder-Salejarvi W. Sterr A. Rosler F. Hillyard S. A. Neville H. J. (1999). Improved auditory spatial tuning in blind humans. Nature, 400 (6740), 162–166. [CrossRef] [PubMed]
Sadato N. Pascual-Leone A. Grafman J. Deiber M. P. Ibanez V. Hallett M. (1998). Neural networks for Braille reading by the blind. Brain, 121 (Pt. 7), 1213–1229. [CrossRef] [PubMed]
Saenz M. Langers D. R. (2014). Tonotopic mapping of human auditory cortex. Hearing Research, 307, 42–52. [CrossRef] [PubMed]
Saenz M. Lewis L. B. Huth A. G. Fine I. Koch C. (2008). Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. Journal of Neuroscience, 28 (20), 5141–5148. [CrossRef] [PubMed]
Salzman C. D. Britten K. H. Newsome W. T. (1990). Cortical microstimulation influences perceptual judgements of motion direction. Nature, 346 (6280), 174–177. [CrossRef] [PubMed]
Salzman C. D. Murasugi C. M. Britten K. H. Newsome W. T. (1992). Microstimulation in visual area MT: Effects on direction discrimination performance. Journal of Neuroscience, 12 (6), 2331–2355. [PubMed]
Serences J. T. Boynton G. M. (2007). The representation of behavioral choice for motion in human visual cortex. Journal of Neuroscience, 27 (47), 12893–12899. [CrossRef] [PubMed]
Shadlen M. N. Britten K. H. Newsome W. T. Movshon J. A. (1996). A computational analysis of the relationship between neuronal and behavioral responses to visual motion. Journal of Neuroscience, 16 (4), 1486–1510. [PubMed]
Strnad L. Peelen M. V. Bedny M. Caramazza A. (2013). Multivoxel pattern analysis reveals auditory motion information in MT+ of both congenitally blind and sighted individuals. PLoS ONE, 8 (4), e63198. [CrossRef] [PubMed]
Stumpf E. Toronchuk J. M. Cynader M. S. (1992). Neurons in cat primary auditory cortex sensitive to correlates of auditory motion in three-dimensional space. Experimental Brain Research, 88 (1), 158–168. [CrossRef] [PubMed]
Talairach J. Tournoux P. (1988). Co-planar stereotaxic atlas of the human brain. New York: Thieme Medical Publishers.
Tootell R. B. Reppas J. B. Dale A. M. Look R. B. Sereno M. I. Malach R. (1995). Visual motion aftereffect in human cortical area MT revealed by functional magnetic resonance imaging. Nature, 375 (6527), 139–141. [CrossRef] [PubMed]
Toronchuk J. M. Stumpf E. Cynader M. S. (1992). Auditory cortex neurons sensitive to correlates of auditory motion: Underlying mechanisms. Experimental Brain Research, 88 (1), 169–180. [CrossRef] [PubMed]
Tun P. A. Williams V. A. Small B. J. Hafter E. R. (2012). The effects of aging on auditory processing and cognition. American Journal of Audiology, 21 (2), 344–350. [CrossRef] [PubMed]
van Kemenade B. M. Seymour K. Wacker E. Spitzer B. Blankenburg F. Sterzer P. (2014). Tactile and visual motion direction processing in hMT+/V5. NeuroImage, 84, 420–427. [CrossRef] [PubMed]
Voss P. Lepore F. Gougoux F. Zatorre R. J. (2011). Relevance of spectral cues for auditory spatial processing in the occipital cortex of the blind. Frontiers in Psychology, 2, 48. [CrossRef] [PubMed]
Warren J. D. Zielinski B. A. Green G. G. Rauschecker J. P. Griffiths T. D. (2002). Perception of sound-source motion by the human brain. Neuron, 34 (1), 139–148. [CrossRef] [PubMed]
Watson J. D. Myers R. Frackowiak R. S. Hajnal J. V. Woods R. P. Mazziotta J. C. (1993). Area V5 of the human brain: Evidence from a combined study using positron emission tomography and magnetic resonance imaging. Cerebral Cortex, 3 (2), 79–94. [CrossRef] [PubMed]
Weeks R. A. Horwitz B. Aziz-Sultan A. Tian B. Wessinger C. M. Cohen L. (2000). A positron emission tomographic study of auditory localisation in the congenitally blind. Journal of Neuroscience, 20, 2664–2672. [PubMed]
Wilms M. Eickhoff S. B. Specht K. Amunts K. Shah N. J. Malikovic A. (2005). Human V5/MT+: Comparison of functional and cytoarchitectonic data. Anatomy and Embryology, 210, 485–495. [CrossRef] [PubMed]
Wobbrock J. O. Findlater L. Gergle D. Higgins J. J. (2011). The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2011), Vancouver, British Columbia.
Wolbers T. Zahorik P. Giudice N. A. (2011). Decoding the direction of auditory motion in blind humans. NeuroImage, 56 (2), 681–687. [CrossRef] [PubMed]
Zeki S. Watson J. D. Lueck C. J. Friston K. J. Kennard C. Frackowiak R. S. (1991). A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11 (3), 641–649. [PubMed]
Figure 1
 
Experimental design. (A) Schematic of the auditory motion stimulus; the 50% coherence condition is shown. Frequency noise bands were generated by filtering white noise in the Fourier domain. (B) Early blind subjects are significantly better at determining the direction of the 50% coherence auditory motion stimulus. Error bars show SEM. ***p < 0.001, Wilcoxon rank sum test, two-tailed.
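As a rough illustration of the Fourier-domain filtering mentioned in the caption, the sketch below generates one band-limited noise carrier; the sample rate, duration, and band edges are placeholder values, not the study's stimulus parameters.

```python
# Hedged sketch: band-limited noise via Fourier-domain filtering of white noise.
import numpy as np

fs, dur = 44100, 1.0                 # sample rate (Hz) and duration (s); assumed values
lo, hi = 500.0, 1000.0               # noise band edges (Hz); assumed values

n = int(fs * dur)
white = np.random.default_rng(0).standard_normal(n)

spectrum = np.fft.rfft(white)                   # one-sided spectrum of the white noise
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum[(freqs < lo) | (freqs > hi)] = 0.0     # zero all energy outside the band
band_noise = np.fft.irfft(spectrum, n=n)        # back to the time domain
band_noise /= np.abs(band_noise).max()          # normalize peak amplitude
```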
Figure 2
 
Sagittal views of the four ROIs in the right hemisphere defined using a combination of anatomical and functional criteria are shown in ACPC space in a representative blind (top row) and sighted (bottom row) subject. Red: hMT+; Green: PAC; Blue: PT; Cyan: LOC.
Figure 3
 
Responses to 100% coherent auditory motion in the auditory motion localizer experiment. (A) hMT+, (B) PAC, (C) PT, and (D) LOC. Error bars show SEM. Wilcoxon signed rank tests (uncorrected for multiple comparisons) were used to examine whether responses were significantly different from zero. Wilcoxon rank sum tests (PAC, PT: two-tailed, Bonferroni-Holm corrected for hemisphere; right LOC: two-tailed, uncorrected; hMT+: one-tailed, uncorrected) were used to test for differences between subject groups. *p < 0.05; **p < 0.01; ***p < 0.001.
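For readers who wish to reproduce the style of tests described in this caption, the sketch below applies a Wilcoxon signed rank test (response vs. zero), a Wilcoxon rank sum test (blind vs. sighted), and a Bonferroni-Holm correction (Holm, 1979) to simulated per-subject values; all numbers are hypothetical, not the study's data.

```python
# Hedged sketch of the caption's statistics on simulated per-subject values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
blind = rng.normal(0.4, 0.2, 7)      # hypothetical BOLD responses, blind group
sighted = rng.normal(0.1, 0.2, 8)    # hypothetical BOLD responses, sighted group

stat, p_vs_zero = stats.wilcoxon(blind)          # group response different from zero?
stat, p_groups = stats.ranksums(blind, sighted)  # do the two groups differ?

def holm(pvals):
    """Bonferroni-Holm adjusted p values (Holm, 1979)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adj = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Each sorted p value is multiplied by the number of remaining tests;
        # the running maximum enforces monotonicity of the adjusted values.
        running_max = max(running_max, (m - rank) * p[idx])
        adj[idx] = min(running_max, 1.0)
    return adj

# Correcting across the two hemispheres tested for a given ROI:
print(holm([0.012, 0.034]))          # -> [0.024, 0.034]
```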
Figure 4
 
fMRI pattern classification performance. Left panels show classification accuracy for the direction of the unambiguous motion stimulus (50% coherence); right panels show classification accuracy for the direction of the ambiguous motion stimulus (0% coherence). (A–B) hMT+, (C–D) PT, and (E–F) LOC. Error bars show SEM. Wilcoxon signed rank tests (uncorrected for multiple comparisons) were used to examine whether classification performance was significantly above chance. Wilcoxon rank sum tests (PAC, PT: two-tailed, Bonferroni-Holm corrected for hemisphere; right LOC: two-tailed, uncorrected; hMT+: one-tailed, uncorrected) were used to test for differences between subject groups. *p < 0.05; **p < 0.01.
Table 1
 
Blind participants' characteristics.
Subject | Sex | Age | Blindness onset | Cause of blindness | Light perception
EB1 | F | 63 | Right eye ruptured at 2 months; retina detached at 5 years | Detached retina | No
EB2 | M | 59 | Born blind | Retinopathy of prematurity | No
EB3 | F | 60 | 1.5 years | Optic nerve virus infection | Low
EB4 | M | 47 | Born blind | Congenital glaucoma | Low
EB5 | F | 52 | Born blind | Retinopathy of prematurity | No
EB6 | M | 38 | Born blind | Congenital glaucoma | Low in right eye
EB7 | M | 31 | Born blind | Leber's congenital amaurosis | No
Table 2
 
Talairach coordinates of individually defined ROIs.
ROI | M (x, y, z) | SD (x, y, z)
Sighted control
 Right hMT+ | 44, −66, 1 | 4.8, 4.5, 5.0
 Left hMT+ | −46, −65, 0 | 4.2, 2.9, 5.0
 Right PT | 50, −29, 12 | 8.5, 4.4, 4.3
 Left PT | −49, −31, 11 | 5.5, 6.0, 3.3
 Right PAC | 46, −19, 6 | 5.5, 4.9, 3.0
 Left PAC | −42, −21, 7 | 5.8, 5.7, 4.3
Early blind
 Right hMT+ | 44, −68, 2 | 3.4, 5.0, 5.9
 Left hMT+ | −47, −70, 1 | 3.4, 3.3, 5.5
 Right PT | 49, −28, 11 | 3.3, 5.5, 3.9
 Left PT | −50, −29, 10 | 5.4, 6.7, 2.1
 Right PAC | 48, −15, 4 | 2.4, 5.1, 2.6
 Left PAC | −49, −17, 4 | 5.3, 5.3, 3.7
Supplementary Material
Supplementary Figure S1
Supplementary Figure S2
Supplementary Figure S3
Supplementary Figure S4
Supplementary Figure S5