September 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
The Functional Role of Imagery in Generative Models of Visual Perceptions
Author Affiliations
  • Ghislain St-Yves
    Department of Neurosciences, Medical University of South Carolina
  • Thomas Naselaris
    Department of Neurosciences, Medical University of South Carolina
Journal of Vision September 2016, Vol. 16, 1434.
There is growing empirical evidence that visual perception and mental imagery, i.e. the conjuring of a visual percept in the absence of external visual stimuli, operate on the same neural substrate [Albright, 2012]. This strongly suggests that imagery is an integral part of seeing, and favors generative models of visual processing in which the generative machinery can be engaged even in the absence of stimuli. In addition, generative models can be adapted to explain many otherwise puzzling observations of neural activity, such as extra-classical receptive-field responses [Rao and Ballard, 1999] and the emergence of vivid hallucinations under certain conditions of degraded visual input [Reichert et al., 2013]. Here, we are interested in the consequences of generative models for the process of mental imagery. In these models, the neural activity generated during mental imagery results entirely from predictive signaling. We show how interpreting mental imagery in this way explains why mental imagery decoding can work at all [Naselaris et al., 2015] and why it is not easy, and we explore the factors that limit the precision of mental images. In our framework, mental imagery decoding is possible because the independent activation of high-level visual areas during imagery induces activity in early visual areas that is strongly correlated with the activity during perception. We also show that, in accord with common observations of BOLD activity in fMRI studies of mental imagery, the modeled activity in regions that encode generic (low-level) image features is generally much weaker than during the corresponding perception task. This contributes to the difficulty of decoding mental images. More importantly, however, we show that the character of the high-level features encoded in a deep generative model places an inherent limitation on how faithfully a mental image can emulate a corresponding visual percept.
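The core claim above — that imagery-evoked low-level activity is top-down prediction alone, hence correlated with perception but weaker — can be illustrated with a toy linear sketch. This is not the authors' model; the dimensions, the noise level, and the top-down gain of 0.6 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_low, n_high = 100, 10  # sizes of low- and high-level populations (assumed)

# Generative (top-down) weights mapping high-level features to low-level activity.
W = rng.standard_normal((n_low, n_high))

# Perception: a stimulus drives high-level features, plus bottom-up stimulus
# detail that the high-level code does not capture.
z = rng.standard_normal(n_high)            # high-level feature activation
detail = 0.5 * rng.standard_normal(n_low)  # residual bottom-up detail
perception = W @ z + detail

# Imagery: the same high-level code is reactivated, but the only signal
# reaching low-level areas is the (attenuated) top-down prediction.
imagery = 0.6 * (W @ z)

# Imagery-evoked activity correlates strongly with perception-evoked activity,
# which is what makes imagery decoding possible at all...
r = np.corrcoef(perception, imagery)[0, 1]
# ...but its amplitude is weaker, which is part of what makes decoding hard.
ratio = np.linalg.norm(imagery) / np.linalg.norm(perception)
print(f"correlation = {r:.2f}, amplitude ratio = {ratio:.2f}")
```

The residual `detail` term also mirrors the abstract's last point: any stimulus information not captured by the high-level code is absent from the imagery signal, bounding how faithfully the mental image can emulate the percept.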

Meeting abstract presented at VSS 2016
