Vision Sciences Society Annual Meeting Abstract  |   June 2006
Hold it there and let's have a look: Extracting shift-invariance templates and sub-template features from signal-clamped classification images
Author Affiliations
  • Bosco S. Tjan
    Department of Psychology, University of Southern California, and Neuroscience Graduate Program, University of Southern California
  • Anirvan S. Nandy
    Department of Psychology, University of Southern California
Journal of Vision June 2006, Vol. 6, 1098. https://doi.org/10.1167/6.6.1098
      Bosco S. Tjan, Anirvan S. Nandy; Hold it there and let's have a look: Extracting shift-invariance templates and sub-template features from signal-clamped classification images. Journal of Vision 2006;6(6):1098. https://doi.org/10.1167/6.6.1098.
Abstract

Invariance or constancy is a hallmark of visual processing. Linear techniques such as classification images and spike-triggered averaging are thought to be incapable of recovering the front-end template or receptive-field structure of a higher-order visual mechanism whose response may be invariant to the position, size, or orientation of a target. Using the max-pooling property of a typical uncertainty model, we show analytically, in simulations, and with human experiments (single-letter identification in fovea and periphery, with and without positional uncertainty) that the effect of intrinsic uncertainty (i.e., invariance) can be reduced or even eliminated by embedding a signal of sufficient strength in the masking noise of a classification-image experiment. We refer to this technique as “signal clamping”. We argue against combining the classification images across stimulus-response categories, as is typically done. We show that the signal-clamped classification images from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal; they also contain a low-contrast “haze” that is positively correlated with the superposition of all the templates associated with the erroneous responses. In the case of positional uncertainty, we show that this “haze” provides an estimate of the spatial extent of the uncertainty. With the effect of intrinsic uncertainty significantly reduced by signal clamping, we further show that a covariance analysis can be applied to different regions of a classification image to reveal the elementary features that are the components of the perceptual template seen in the classification image.
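The signal-clamping idea can be sketched in a minimal simulation. The following is an illustrative toy model, not the authors' actual procedure: it assumes a 1-D stimulus, unit-norm Gaussian-bump templates, and a max-pooling observer that monitors a small range of positions (the intrinsic positional uncertainty). Embedding a strong copy of signal 0 in the noise on every trial ("clamping") and then averaging the noise fields from the error trials alone should yield a classification image negatively correlated with the presented template.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                # 1-D stimulus length (illustrative)

def bump(center, width=2.0):
    """Unit-norm Gaussian bump used as a stand-in perceptual template."""
    t = np.exp(-0.5 * ((np.arange(N) - center) / width) ** 2)
    return t / np.linalg.norm(t)

t0, t1 = bump(10), bump(22)           # templates for the two response categories
shifts = range(-2, 3)                 # intrinsic positional uncertainty: +/- 2 px

def respond(stim):
    """Max-pooling uncertainty model: each template's decision variable is
    its best match over all monitored positions; respond with the winner."""
    d = [max(np.roll(t, s) @ stim for s in shifts) for t in (t0, t1)]
    return int(np.argmax(d))

contrast, n_trials = 1.5, 20000       # strong embedded signal = "signal clamping"
noise = rng.standard_normal((n_trials, N))
stims = contrast * t0 + noise         # signal 0 is presented on every trial

resp = np.array([respond(s) for s in stims])
err = resp == 1                       # error trials: observer reported signal 1

# Signal-clamped classification image from the error trials only: average
# the noise fields that drove the observer to the wrong response.
ci_err = noise[err].mean(axis=0)

r = float(np.corrcoef(ci_err, t0)[0, 1])
print(f"error rate = {err.mean():.2f}, corr(CI_err, t0) = {r:.2f}")
```

In the same spirit, correlating `ci_err` with shifted copies of `t1` would trace out the positively correlated "haze" described in the abstract, whose spatial spread estimates the extent of the positional uncertainty.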

Tjan, B. S., & Nandy, A. S. (2006). Hold it there and let's have a look: Extracting shift-invariance templates and sub-template features from signal-clamped classification images [Abstract]. Journal of Vision, 6(6):1098, 1098a, http://journalofvision.org/6/6/1098/, doi:10.1167/6.6.1098.
Footnotes
 Supported by: NIH/NEI R03-EY016391