August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Comparison of Signal to Noise in Vision and Imagery for qualitatively different kinds of stimuli
Author Affiliations & Notes
  • Tiasha Saha Roy
    University of Minnesota
  • Jesse Breedlove
    University of Minnesota
  • Ghislain St-Yves
    University of Minnesota
  • Kendrick Kay
    University of Minnesota
  • Thomas Naselaris
    University of Minnesota
  • Footnotes
    Acknowledgements: Collection of the NSD dataset was supported by NSF IIS-1822683 and NSF IIS-1822929.
Journal of Vision August 2023, Vol.23, 5961. doi:https://doi.org/10.1167/jov.23.9.5961
Citation: Tiasha Saha Roy, Jesse Breedlove, Ghislain St-Yves, Kendrick Kay, Thomas Naselaris; Comparison of Signal to Noise in Vision and Imagery for qualitatively different kinds of stimuli. Journal of Vision 2023;23(9):5961. https://doi.org/10.1167/jov.23.9.5961.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Brain activity during mental imagery is often characterized as a reactivation of visual activity. Brain areas vary considerably in their response to qualitatively different visual stimuli, but it is currently unknown whether these effects are preserved during mental imagery. To investigate this issue, we tested whether the activity profile across different visually responsive brain areas remains stable when subjects imagine two qualitatively different kinds of stimuli. Specifically, we conducted a 7T fMRI experiment in which subjects viewed and imagined simple (bars and crosses) and complex (natural scene images and artwork) stimuli, and we calculated signal-to-noise ratios (SNR) in individual voxels during imagery and vision. All 8 subjects of the Natural Scenes Dataset (NSD) experiment (Allen et al., 2022) took part in this additional scan session. For every vision run, there were 2 corresponding imagery runs. Significant differences in the SNR profile were observed across the two imagery runs for simple stimuli, suggesting a potential practice effect. We thus focused subsequent analyses on data from the second run only. We used an AlexNet-based encoding model to sort voxels according to their preferred network layer. We then calculated median SNR for each stimulus type, during both vision and imagery, as a function of network layer preference. During vision, median voxelwise SNR for simple stimuli was greater than for complex stimuli in voxels that preferred lower network layers, whereas for voxels that preferred higher network layers, SNR for complex stimuli was greater. We observed the same trend during imagery, although the SNR mean and variance across layers were greatly reduced relative to vision. We conclude that while vision enjoys much higher SNR than imagery, the effect of stimulus type on SNR is preserved by the transformation from seen to imagined representations.
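To make the analysis above concrete, the sketch below shows how a repeat-based per-voxel SNR and its layer-wise summary could be computed in Python with NumPy. This is an illustration rather than the authors' code: the array layout, the SNR estimator (in the spirit of the repeat-based noise-ceiling SNR used for NSD in Allen et al., 2022), and the helper names voxelwise_snr and median_snr_by_layer are assumptions.

import numpy as np

def voxelwise_snr(betas):
    # betas: array of shape (n_stimuli, n_repeats, n_voxels) holding
    # single-trial response estimates (e.g., GLM betas), with every
    # stimulus presented n_repeats times. (Assumed layout.)
    n_rep = betas.shape[1]
    # Noise variance: variance across repeats of the same stimulus,
    # averaged over stimuli.
    noise_var = betas.var(axis=1, ddof=1).mean(axis=0)
    # Variance of the trial-averaged responses across stimuli.
    total_var = betas.mean(axis=1).var(axis=0, ddof=1)
    # Signal variance: variance of the mean responses minus the noise
    # that survives averaging (noise_var / n_rep), floored at zero.
    signal_var = np.clip(total_var - noise_var / n_rep, 0.0, None)
    # SNR as signal SD over noise SD (small epsilon for stability).
    return np.sqrt(signal_var) / (np.sqrt(noise_var) + 1e-12)

def median_snr_by_layer(snr, preferred_layer, n_layers):
    # snr: shape (n_voxels,); preferred_layer: integer layer index per
    # voxel, as assigned by an encoding model (here, AlexNet layers).
    return np.array([np.median(snr[preferred_layer == k])
                     for k in range(n_layers)])

# Hypothetical usage: compute one curve per condition, e.g.
# snr_vision_simple = voxelwise_snr(betas_vision_simple)
# curve = median_snr_by_layer(snr_vision_simple, preferred_layer, n_layers=5)

Applying this separately to the simple- and complex-stimulus trials, and to the vision and imagery runs, would yield the layer-wise median-SNR comparisons described in the abstract.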
