Research into the neural basis of multisensory integration has focused predominantly on the special case of reliability-weighted integration under forced-fusion assumptions (Beauchamp, Pasalar, & Ro, 2010; Fetsch, DeAngelis, & Angelaki, 2013; Helbig et al., 2012). For instance, elegant neurophysiological work in macaques has demonstrated that single neurons integrate sensory inputs linearly, weighted by their reliability (Morgan, DeAngelis, & Angelaki, 2008), in line with theories of probabilistic population coding (Ma, Beck, Latham, & Pouget, 2006). Furthermore, in a visuo-vestibular heading task, decoding of neuronal activity in the dorsal medial superior temporal area (MSTd) largely accounted for the sensory weights that the nonhuman primates employed at the behavioral level (Fetsch, Pouget, DeAngelis, & Angelaki, 2012). A recent fMRI study suggests that a cortical hierarchy performs Bayesian CI for spatial localization by representing multiple spatial estimates, corresponding to forced fusion (e.g., Ŝ_{AV,C=1}), full segregation (e.g., Ŝ_{AV,C=2}), and Bayesian CI (e.g., Ŝ_A; Rohe & Noppeney, 2015). Future neurophysiology studies in primates are needed to investigate how single neurons or populations of neurons implement Bayesian CI.
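The computations discussed above can be made concrete with a minimal numerical sketch. The example below assumes the standard Gaussian generative model of Bayesian CI for audiovisual spatial localization (Gaussian cue likelihoods, a central spatial prior, and model averaging across causal structures); the function name `bci_localize` and all parameter values are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

def bci_localize(x_a, x_v, sigma_a=2.0, sigma_v=1.0, sigma_p=10.0, p_common=0.5):
    """Bayesian causal inference (CI) for audiovisual spatial localization.

    Returns the model-averaged auditory estimate S_A together with the
    common-cause posterior and the intermediate forced-fusion and
    full-segregation estimates. All parameter values are illustrative.
    """
    sa2, sv2, sp2 = sigma_a**2, sigma_v**2, sigma_p**2

    # Forced fusion (C = 1): reliability-weighted (inverse-variance-weighted)
    # average of both cues and a central spatial prior N(0, sigma_p^2).
    s_av_c1 = (x_a / sa2 + x_v / sv2) / (1 / sa2 + 1 / sv2 + 1 / sp2)

    # Full segregation (C = 2): the auditory cue is combined with the
    # prior alone, ignoring the visual signal.
    s_a_c2 = (x_a / sa2) / (1 / sa2 + 1 / sp2)

    # Marginal likelihoods of the noisy signals under each causal structure
    # (closed forms obtained by integrating out the true source locations).
    var_c1 = sa2 * sv2 + sa2 * sp2 + sv2 * sp2
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * sp2 + x_a**2 * sv2 + x_v**2 * sa2)
                     / var_c1) / (2 * np.pi * np.sqrt(var_c1))
    like_c2 = np.exp(-0.5 * (x_a**2 / (sa2 + sp2) + x_v**2 / (sv2 + sp2))) \
              / (2 * np.pi * np.sqrt((sa2 + sp2) * (sv2 + sp2)))

    # Posterior probability of a common cause, then model averaging:
    # the final auditory estimate mixes the fused and segregated estimates
    # in proportion to that posterior.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    s_a = post_c1 * s_av_c1 + (1 - post_c1) * s_a_c2
    return s_a, post_c1, s_av_c1, s_a_c2
```

Coincident signals yield a high common-cause posterior, so the final estimate stays close to the reliability-weighted fused estimate; widely discrepant signals drive the posterior toward zero, and the auditory estimate falls back to the segregated solution.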