Vision Sciences Society Annual Meeting Abstract | August 2012
Sharpening Orientation Tuning with Reward
Author Affiliations
  • Jeongmi Lee
    Psychology, George Washington University
  • Sarah Shomstein
    Psychology, George Washington University
Journal of Vision August 2012, Vol. 12, 9. doi: https://doi.org/10.1167/12.9.9

      Jeongmi Lee, Sarah Shomstein; Sharpening Orientation Tuning with Reward. Journal of Vision 2012;12(9):9. https://doi.org/10.1167/12.9.9.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Previous studies indicate that attention enhances the gain of neurons tuned for the relevant feature, maximizing perceptual acuity in the specific context. Recent evidence suggests that reward also enhances perceptual representations of valuable stimuli. However, the precise nature of reward-based modulation of neurons tuned for the rewarded feature is not yet clear. Here, we investigated the effect of reward on orientation tuning and the degree to which it is affected by context (i.e., target-distractor similarity). To this end, a reward-based variant of the Navalpakkam and Itti (2007) paradigm was employed. During the training phase, subjects searched for a target grating oriented at either 45° or 135° among dissimilar (45° apart) or similar (10° apart) distractor gratings. One of the targets was more highly rewarded than the other. The reward-based tuning effect was then measured in a randomly intermixed test phase in which subjects located the target embedded among a spectrum of distractor gratings. When the target and distractors in the training phase were dissimilar, the target orientation received attentional gain proportional to the amount of reward. When the target and distractors were similar, however, the shape of the tuning function varied with the amount of reward. For the low-rewarded target, gain was applied to the exaggerated target orientation, shifting the tuning function slightly away from the actual target orientation (i.e., optimal gain). For the high-rewarded target, interestingly, gain was applied to both the target and the exaggerated target orientations, making the tuning function asymmetric but still sharply tuned to the actual target orientation. This reward-based sharpening made performance optimal not only in the training phase but also in the test phase. These results suggest that rewarded stimuli are robustly represented by sharpening the response profile of the neurons selective for the relevant feature in the specific context, which, in turn, optimizes performance.
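
To make the gain-modulation logic concrete, the following is a minimal sketch (not the authors' code) of the idea behind the Navalpakkam and Itti (2007) "optimal gain" account referenced above: a population of Gaussian orientation-tuned channels, with gain applied either at the actual target orientation or at an exaggerated orientation shifted away from a similar distractor. All parameter values (tuning width, channel spacing, gain strength, the 35° exaggerated orientation) are illustrative assumptions, not values reported in the abstract.

```python
# Sketch of gain modulation over orientation-tuned channels.
# Assumptions: Gaussian tuning, 180-deg circular orientation space,
# and a simple signal-to-noise-style discriminability index.
import numpy as np

def channel_responses(stimulus_deg, channel_prefs_deg, sigma_deg=20.0):
    """Gaussian responses of orientation-tuned channels to one grating."""
    # Orientation is circular with a 180-deg period.
    diff = (stimulus_deg - channel_prefs_deg + 90.0) % 180.0 - 90.0
    return np.exp(-0.5 * (diff / sigma_deg) ** 2)

prefs = np.arange(0.0, 180.0, 5.0)       # channel preferred orientations
target, distractor = 45.0, 55.0          # 10 deg apart: the "similar" condition

r_target = channel_responses(target, prefs)
r_distr = channel_responses(distractor, prefs)

def discriminability(gain):
    """Contrast between gained target and distractor population responses."""
    return np.sum(gain * (r_target - r_distr)) / np.sqrt(np.sum(gain ** 2))

# 1) Gain centered on the actual target orientation (simple attentional gain).
gain_on_target = 1.0 + 1.5 * channel_responses(target, prefs, sigma_deg=15.0)

# 2) "Optimal" gain centered on an exaggerated orientation, shifted away
#    from the similar distractor (hypothetically 35 deg here).
gain_exaggerated = 1.0 + 1.5 * channel_responses(35.0, prefs, sigma_deg=15.0)

print(f"gain at target orientation:     d = {discriminability(gain_on_target):.3f}")
print(f"gain at exaggerated orientation: d = {discriminability(gain_exaggerated):.3f}")
```

Under these assumptions, the exaggerated-gain profile yields higher target-distractor discriminability when the distractor is similar, because the largest response difference between target and distractor falls on channels tuned away from the distractor side; this is the sense in which the abstract's "optimal gain" can outperform gain centered on the target itself.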

Meeting abstract presented at VSS 2012
