September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Decoding semantic sound categories in Early Visual Cortex
Author Affiliations & Notes
  • Giusi Pollicina
    Royal Holloway, University of London
  • Assaf Weksler
    University of Haifa
  • Petra Vetter
    University of Fribourg
  • Footnotes
    Acknowledgements  This project was supported by a grant from the John Templeton Foundation (Prime Award No 48365) as part of the Summer Seminars in Neuroscience and Philosophy (SSNAP, subcontract No 283-2608).
Journal of Vision September 2021, Vol.21, 2516.
      Giusi Pollicina, Assaf Weksler, Petra Vetter; Decoding semantic sound categories in Early Visual Cortex. Journal of Vision 2021;21(9):2516.

      © ARVO (1962-2015); The Authors (2016-present)

Early visual cortex is linked to several other cortical areas by a large number of feedback connections. Auditory cortex also sends feedback to early visual cortex, but the content of this information flow has not yet been fully explored. Vetter, Smith & Muckli (2014, Current Biology) found that feedback from auditory cortex is sufficient to produce distinguishable neural activity in early visual cortex when participants listen to different natural sounds in the absence of visual stimulation. The current study aimed to characterise the information content of this auditory feedback to visual cortex. We presented 36 natural sounds to 20 blindfolded participants while acquiring functional MRI data. Sounds were selected according to a hierarchy of semantic categories: animate sounds, subdivided into human and animal sounds and further into specific species or types of sound, and inanimate sounds, subdivided analogously. The boundaries of V1, V2 and V3 were delineated using individual retinotopic mapping. We analysed the fMRI activity patterns evoked by these sounds in each early visual region using Multivoxel Pattern Analysis (MVPA). The MVPA classifier could distinguish sounds belonging to different semantic categories on the basis of activity patterns in V1, V2 and V3. In particular, animate sounds were generally better distinguished than inanimate sounds. Thus, auditory feedback to early visual cortex appears to follow some categorical distinctions but not others. Our results show that auditory feedback relays certain categorical information to areas once believed to be exclusively specialised for vision. We hypothesise that early visual cortex may use this categorical information from sounds to predict visual input, enhance visual perception, or resolve visual ambiguities.
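The MVPA decoding step described above can be sketched in a generic form. The snippet below is a minimal illustration, not the authors' actual pipeline: it uses a simple correlation-based nearest-template classifier with leave-one-run-out cross-validation (a classic MVPA approach in the spirit of Haxby et al., 2001) on simulated voxel patterns. All numbers here (run count, voxel count, noise level) are hypothetical assumptions for the demonstration.

```python
# Hedged sketch of correlation-based MVPA with leave-one-run-out
# cross-validation on SIMULATED voxel patterns (not real fMRI data).
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_voxels = 8, 100  # hypothetical values, not from the study

# Each semantic category gets a fixed "template" pattern; each run's
# observation is that template plus independent noise.
templates = {"animate": rng.normal(0, 1, n_voxels),
             "inanimate": rng.normal(0, 1, n_voxels)}
patterns = {cat: np.stack([t + rng.normal(0, 1.5, n_voxels)
                           for _ in range(n_runs)])
            for cat, t in templates.items()}

def decode_loro(patterns, n_runs):
    """Leave-one-run-out decoding: classify each held-out pattern by its
    Pearson correlation with category means from the remaining runs."""
    correct, total = 0, 0
    for test_run in range(n_runs):
        # Training templates: mean pattern over all runs except test_run
        train_means = {c: np.delete(p, test_run, axis=0).mean(axis=0)
                       for c, p in patterns.items()}
        for true_cat, p in patterns.items():
            test = p[test_run]
            corrs = {c: np.corrcoef(test, m)[0, 1]
                     for c, m in train_means.items()}
            pred = max(corrs, key=corrs.get)  # highest-correlation category
            correct += (pred == true_cat)
            total += 1
    return correct / total

acc = decode_loro(patterns, n_runs)
print(f"Decoding accuracy: {acc:.2f} (chance = 0.50)")
```

Above-chance accuracy here simply reflects that the simulated patterns carry a consistent category signal; in the study, an analogous above-chance result in V1, V2 and V3 is what indicates that sound-category information reaches early visual cortex.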

