Vision Sciences Society Annual Meeting Abstract  |   June 2006
View-invariant object category learning: How spatial and object attention are coordinated using surface-based attentional shrouds
Author Affiliations
  • Arash Fazl
    Department of Cognitive and Neural Systems, Boston University
  • Stephen Grossberg
    Department of Cognitive and Neural Systems, Boston University
  • Ennio Mingolla
    Department of Cognitive and Neural Systems, Boston University
Journal of Vision June 2006, Vol. 6, 315. https://doi.org/10.1167/6.6.315
Abstract

When learning a view-invariant object category, the brain does not incorrectly bind together views of all the objects that the eyes scan in a scene. The ARTSCAN model predicts how spatial and object attention in the What and Where streams cooperate to selectively learn multiple views of an attended object. The model predicts that spatial attention employs an “attentional shroud” that is derived from an object's surface representation (Tyler & Kontsevich, 1995). This shroud persists during active scanning of the object. The Where stream shroud modulates view-invariant object category learning in the What stream. Surface representations compete for spatial attention to select winning shrouds. When the eyes move off an object, its shroud collapses, releasing a reset signal that stops learning of that object's category in the What stream before a new shroud forms in the Where stream and a new object category is selected. The new shroud enables the multiple view categories that it covers to be bound together into a new object category, while top-down expectations that realize object attention within the What stream stabilize object category learning. The model learns with 96% accuracy on a letter database. It also simulates reaction-time data on object-based attention: responses are faster to the non-cued end of an attended object than to a location outside the object, and engagement of attention to a new object is slower when attention must first be disengaged from another object (Brown et al., 2005).
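The shroud-gated learning cycle described above can be summarized as a simple control loop. The Python sketch below is illustrative only: it assumes a winner-take-all competition over scalar salience values and discrete view labels, whereas ARTSCAN itself is defined by continuous neural dynamics; all names (Surface, shroud_winner, learn_object_categories) are hypothetical, not part of the model's published equations.

    # Minimal sketch of the shroud-gated learning cycle described in the
    # abstract. All names are illustrative; ARTSCAN's actual dynamics are
    # continuous differential equations, not this discrete loop.

    from dataclasses import dataclass


    @dataclass
    class Surface:
        name: str           # identity of the object this surface belongs to
        salience: float     # strength with which it competes for attention
        views: list         # view labels produced as the eyes scan the object


    def shroud_winner(surfaces):
        """Surfaces compete for spatial attention; the most salient wins
        and its form defines the attentional shroud (Where stream)."""
        return max(surfaces, key=lambda s: s.salience)


    def learn_object_categories(scene):
        """Bind the views seen under one persisting shroud into a single
        object category; shroud collapse emits a reset that closes the
        category before the next shroud forms."""
        categories = {}
        remaining = list(scene)
        while remaining:
            attended = shroud_winner(remaining)   # new shroud forms
            bound_views = list(attended.views)    # What stream links each view
                                                  # category to one object category
            # Eyes leave the object: the shroud collapses and a reset signal
            # stops learning, so views of the next object start a fresh category.
            categories[attended.name] = bound_views
            remaining.remove(attended)
        return categories


    if __name__ == "__main__":
        scene = [
            Surface("letter_A", salience=0.9, views=["A_0deg", "A_15deg"]),
            Surface("letter_B", salience=0.6, views=["B_0deg", "B_15deg"]),
        ]
        print(learn_object_categories(scene))

In this toy run, the views of letter_A are bound into one category and the views of letter_B into another; the reset between shrouds is what prevents views of the two letters from being merged.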

Fazl, A., Grossberg, S., & Mingolla, E. (2006). View-invariant object category learning: How spatial and object attention are coordinated using surface-based attentional shrouds [Abstract]. Journal of Vision, 6(6):315, 315a, http://journalofvision.org/6/6/315/, doi:10.1167/6.6.315.
Footnotes
 Supported in part by NSF and ONR.