Vision Sciences Society Annual Meeting Abstract | August 2012
Key object feature dimensions modulate texture filling-in
Author Affiliations
  • Chang mao Chao
    National Yang-Ming University, Institute of Neuroscience and Brain Research Center, Taipei, Taiwan
  • Li-Feng Yeh
    National Yang-Ming University, Institute of Neuroscience and Brain Research Center, Taipei, Taiwan
  • Chou P. Hung
    National Yang-Ming University, Institute of Neuroscience and Brain Research Center, Taipei, Taiwan
Journal of Vision August 2012, Vol. 12, 1063. https://doi.org/10.1167/12.9.1063
Citation: Chang mao Chao, Li-Feng Yeh, Chou P. Hung; Key object feature dimensions modulate texture filling-in. Journal of Vision 2012;12(9):1063. https://doi.org/10.1167/12.9.1063.

Abstract

Filling-in is a perceptual phenomenon in which visual attributes such as color, brightness, texture, or motion are replaced by those of a neighboring region of the visual field. Although many studies have explored lower-order filling-in in early areas of visual cortex (Spillmann), the mechanisms underlying higher-order filling-in remain unknown. Previously, we showed that neighboring columnar-scale clusters in macaque inferior temporal cortex encode opposing features, or 'key dimensions' (Lin et al. 2009 SFN), and that these key dimensions are measurable in human LOC (Yeh et al. 2010 SFN). Here, we asked whether the macaque key dimensions predict the speed of texture filling-in in humans. We show that textures with matched key dimensions fill in significantly faster than textures with different key dimensions. This difference in filling-in latency was not attributable to low-level features or object size. We suggest that texture filling-in is modulated by features encoded in higher visual cortex, and that these feature dimensions are consistent across monkeys and humans. These results strengthen the case for common representations and mechanisms underlying form vision in primates.

Meeting abstract presented at VSS 2012
