August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
An Investigation of Sound Content in Early Visual Areas
Author Affiliations
  • Angus Paton
    Institute of Neuroscience and Psychology, University of Glasgow
  • Lucy Petro
    Institute of Neuroscience and Psychology, University of Glasgow
  • Lars Muckli
    Institute of Neuroscience and Psychology, University of Glasgow
Journal of Vision September 2016, Vol. 16, 153.

Early visual cortical neurons receive non-feedforward input from lateral and top-down connections (Muckli & Petro, 2013). Auditory input to early visual cortex has been shown to carry contextual information about complex natural sounds (Vetter, Smith, & Muckli, 2014). To date, contextual auditory information in early visual cortex has only been examined in the absence of visual input (i.e., subjects were blindfolded). Therefore, the representation of contextual auditory information in visual cortex during concurrent visual stimulation remains unknown. Using functional brain imaging and multivoxel pattern analysis, we investigated whether auditory information can be discriminated in early visual areas during an eyes-open fixation paradigm, while subjects were independently stimulated with complex auditory and visual scenes. We investigated similarities between auditory and visual stimuli in eccentricity-mapped V1, V2, and V3 by comparing contextually matched top-down auditory input with feedforward visual input. Lastly, we compared top-down auditory input to V1, V2, and V3 with top-down visual input, by presenting visual scene stimuli with the lower-right quadrant occluded. We find that contextual auditory information is discriminable in the periphery of early visual areas, in line with previous research (Vetter, Smith, & Muckli, 2014). We also report contextual similarity between sound and visual feedback to occluded visual areas. We suggest that top-down expectations are shared between modalities and contain abstract contextual information. Such cross-modal information could facilitate spatio-temporal expectations by amplifying and disamplifying feedforward input based on context (Phillips et al., 2015).
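The decoding approach described above (multivoxel pattern analysis of category information in retinotopically mapped visual areas) can be sketched as a leave-one-run-out correlation classifier: voxel patterns from held-out runs are assigned to whichever condition's mean training pattern they correlate with most strongly. The sketch below is illustrative only, with synthetic data; the condition labels ("forest", "traffic"), voxel count, and run count are hypothetical assumptions, not the stimuli or parameters used in the study.

```python
import random
import statistics

def correlate(a, b):
    # Pearson correlation between two voxel patterns.
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def loro_decode(patterns):
    # patterns: {run: {condition: voxel_vector}}.
    # Leave-one-run-out cross-validation: classify each held-out pattern
    # by its highest correlation with the mean training-run pattern
    # (template) of each condition.
    runs = list(patterns)
    conditions = list(patterns[runs[0]])
    n_voxels = len(patterns[runs[0]][conditions[0]])
    correct = total = 0
    for test_run in runs:
        train_runs = [r for r in runs if r != test_run]
        templates = {
            c: [statistics.mean(patterns[r][c][v] for r in train_runs)
                for v in range(n_voxels)]
            for c in conditions
        }
        for c in conditions:
            guess = max(conditions,
                        key=lambda k: correlate(patterns[test_run][c],
                                                templates[k]))
            correct += guess == c
            total += 1
    return correct / total

# Synthetic data: 40 voxels, 4 runs, 2 sound categories, each with a
# distinct (randomly drawn) mean pattern plus per-run Gaussian noise.
random.seed(0)
base = {c: [random.gauss(0, 1) for _ in range(40)]
        for c in ("forest", "traffic")}
data = {run: {c: [m + random.gauss(0, 0.8) for m in base[c]] for c in base}
        for run in range(4)}
print(loro_decode(data))  # decoding accuracy; 0.5 = chance for two classes
```

Accuracy above chance on held-out runs is the evidence that the region carries condition information; cross-validating across runs, as here, avoids the optimistic bias of testing on training data.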

Meeting abstract presented at VSS 2016
