September 2019, Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual representations outside of conscious awareness can support sensory preconditioning
Author Affiliations & Notes
  • Cody A Cushing
    Department of Psychology, University of California Los Angeles
  • Mouslim Cherkaoui
    Department of Psychology, University of California Los Angeles
  • Mitsuo Kawato
    Department of Decoded Neurofeedback, Computational Neuroscience Laboratories, Advanced Telecommunications Research Institute International
    Faculty of Information Science, Nara Institute of Science and Technology
  • Jesse Rissman
    Department of Psychology, University of California Los Angeles
    Brain Research Institute, University of California Los Angeles
    Department of Psychiatry & Biobehavioral Sciences, University of California Los Angeles
    Integrative Center for Learning and Memory, University of California Los Angeles
  • Hakwan Lau
    Department of Psychology, University of California Los Angeles
    Brain Research Institute, University of California Los Angeles
    Department of Psychology, University of Hong Kong
    State Key Laboratory of Brain and Cognitive Sciences, University of Hong Kong
Journal of Vision September 2019, Vol.19, 188. doi:https://doi.org/10.1167/19.10.188
Abstract

When one of two previously paired neutral visual stimuli subsequently begins to predict reward, subjects treat the other neutral item as if it similarly predicts reward, despite the lack of direct reinforcement for that particular stimulus. This phenomenon, termed sensory preconditioning, is often used to explore model-based learning. However, it has been suggested that sensory preconditioning may occur even for stimuli that are not consciously perceived. We tested this hypothesis in a decoded neurofeedback (DecNef) experiment. To form an association between a viewed stimulus and an unrelated non-conscious visual representation, online fMRI data were analyzed with an MVPA classifier while participants (N=5) viewed a fullscreen dot-motion display at 100% motion coherence. Visual feedback was given based on the likelihood that BOLD activity evoked by the dot-motion display represented an unrelated target image category (the DecNef target), which was never consciously seen. After 3 days of neurofeedback, participants completed a betting task with feedback that trained them to value the previously presented dot-motion display as a significant financial loss. Participants then completed another round without feedback, in which a critical decision was made between two previously unseen objects: the DecNef target and a neutral control. Participants bet on the neutral control significantly more often than on the DecNef target (individual χ² tests, all p's < 0.05, with 3 subjects showing the maximum possible effect at χ²(1, N=30) = 30, p < 0.001), indicating successful preconditioning of a visual stimulus outside of conscious awareness. These results suggest that associations can indeed be formed and conditioned between visual stimuli outside of conscious awareness, questioning whether consciousness is necessary for model-based learning.
This opens a discussion of how neurofeedback-driven subliminal visual presentations may complement traditional methods of rendering visual stimuli unconscious, such as masking, continuous flash suppression, and crowding.
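To make the reported statistic concrete, the following sketch reproduces the χ²(1, N=30) = 30 value for a participant showing the maximum possible effect. The counts are hypothetical and assume 30 forced-choice bets per participant (consistent with N=30 in the abstract), all placed on the neutral control; the expected counts come from the null hypothesis of no preference between the two options.

```python
import math

# Hypothetical counts for one participant with the maximum effect:
# all 30 bets on the neutral control, none on the DecNef target.
observed = [30, 0]                 # [neutral control, DecNef target]
n = sum(observed)
expected = [n / 2, n / 2]          # chance baseline: no preference

# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# p-value for 1 degree of freedom: P(chi2_1 > x) = erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(chi2 / 2))

print(chi2, p)                     # 30.0, p well below 0.001
```

With these counts the statistic is (30āˆ’15)²/15 + (0āˆ’15)²/15 = 30, matching the abstract's χ²(1, N=30) = 30, p < 0.001.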
