Vision Sciences Society Annual Meeting Abstract | October 2003
Object appearance from integration of 3D and 2D cues in real scenes
Author Affiliations
  • Jan-Olof Eklundh
    Dept. of Computer Science, Royal Institute of Technology, Stockholm, Sweden
Journal of Vision, October 2003, Vol. 3, 646. https://doi.org/10.1167/3.9.646
Abstract

Humans looking around in the world can, seemingly without effort, segment out and distinguish different objects. The corresponding capability has largely eluded the efforts of researchers in computer vision. Figure-ground segmentation in general requires both context and task to be well defined, i.e., it may not be addressed using information in the visual scene alone. However, 3D cues play a special role: they indicate physical chunks that can in turn be ascribed visually observable 3D properties, such as position and motion, as well as object-intrinsic properties such as shape, colour, and perhaps surface and material characteristics.

In the paper we will discuss segmentation of the scene into figure and ground and, more generally, into layers. Cues from stereo and motion will be used together with monocular cues such as colour and texture. The goal is to acquire appearance models of the objects that can be used for subsequent processing, such as recognition. We will consider both moving and static objects, in the latter case assuming that 3D cues are available from either binocular stereo or observer motion.
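As a hedged illustration of what such an appearance model might look like, the sketch below builds a normalised colour histogram from the pixels selected by a binary figure-ground mask. The function name, the histogram representation, and the assumption that a mask is already available (for instance from thresholding a disparity or motion-layer map) are illustrative choices, not the specific model used in the paper.

```python
import numpy as np

def colour_appearance_model(image, mask, bins=16):
    """Normalised joint RGB histogram over the pixels marked as figure.

    image : H x W x 3 uint8 array.
    mask  : H x W boolean array (True = figure), e.g. obtained by
            thresholding a disparity or motion-layer map.
    Illustrative sketch only, not the model used in the paper.
    """
    foreground = image[mask].astype(float)      # N x 3 foreground pixel samples
    hist, _ = np.histogramdd(
        foreground,
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    return hist / max(hist.sum(), 1.0)          # normalise; guard against empty masks
```

Such a histogram could then be matched against stored object models, for instance by histogram intersection, when the segmented object is passed on to recognition.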

Integrating multiple cues is a key aspect of our approach, and two techniques for this will be compared. One is a probabilistic approach in which the likelihood of observing the data given a model of each layer is computed, followed by a classification of each pixel using Bayes' rule. The second scheme is a voting method; the key difference is that each cue makes an independent decision regarding layer membership before these decisions are combined using a weighted sum. The advantage of voting in data fusion is that measurements drawn from very different spaces can easily be combined. With probabilistic methods more care must be taken in designing the model of each cue so that the different cues combine in the desired manner.
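To make the contrast between the two fusion schemes concrete, the sketch below implements both in a simplified form. It is an assumption-laden illustration rather than the paper's implementation: it presumes that every cue has already produced a per-pixel likelihood map over the layers, that the cues are treated as conditionally independent given the layer in the probabilistic case, and that the voting weights are chosen by hand.

```python
import numpy as np

def bayes_fusion(likelihoods, priors):
    """Probabilistic fusion: per-pixel MAP layer label via Bayes' rule.

    likelihoods : dict mapping cue name -> (L, H, W) array of
                  p(cue data | layer); cues are assumed conditionally
                  independent given the layer (an illustrative choice).
    priors      : (L,) array of layer priors.
    """
    joint = np.ones_like(next(iter(likelihoods.values())))
    for lik in likelihoods.values():
        joint = joint * lik                            # product over cues
    posterior = joint * priors.reshape(-1, 1, 1)
    posterior /= posterior.sum(axis=0, keepdims=True)  # normalise per pixel
    return posterior.argmax(axis=0)                    # MAP layer label per pixel

def voting_fusion(likelihoods, weights):
    """Voting fusion: each cue commits to a layer on its own, then the
    hard decisions are combined as a weighted sum of votes.

    weights : dict mapping cue name -> scalar vote weight.
    """
    L, H, W = next(iter(likelihoods.values())).shape
    votes = np.zeros((L, H, W))
    for cue, lik in likelihoods.items():
        decision = lik.argmax(axis=0)                  # independent decision per pixel
        one_hot = decision[None] == np.arange(L).reshape(-1, 1, 1)
        votes += weights[cue] * one_hot                # weighted vote for chosen layer
    return votes.argmax(axis=0)                        # layer with the most weight
```

The voting scheme discards each cue's graded confidence before combination, which is what makes it easy to mix measurements drawn from very different spaces; the Bayesian scheme multiplies full likelihoods and therefore requires the per-cue models to be designed so that they combine sensibly.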

Experiments on everyday scenes will show the performance of our methods and the type of object appearance models that can be acquired.

Eklundh, J.-O., Bjorkman, M., & Hayman, E. (2003). Object appearance from integration of 3D and 2D cues in real scenes [Abstract]. Journal of Vision, 3(9):646, 646a, http://journalofvision.org/3/9/646/, doi:10.1167/3.9.646.