Vision Sciences Society Annual Meeting Abstract | May 2008
Fixation locations during three-dimensional object recognition are predicted by image segmentation points at concave surface intersections
Author Affiliations
  • Charles Leek
    Centre for Cognitive Neuroscience, School of Psychology, University of Wales, Bangor, UK
  • Stephen Johnston
    Centre for Cognitive Neuroscience, School of Psychology, University of Wales, Bangor, UK
Journal of Vision May 2008, Vol. 8, 216. https://doi.org/10.1167/8.6.216
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Eye movements have been extensively studied in a variety of domains, including reading, scene perception and visual search. Here we show how fixation patterns can also provide unique insights into how the human visual system accomplishes three-dimensional (3D) object recognition. Fixation patterns were recorded while observers memorised sets of novel surface-rendered 3D objects and then performed a recognition memory task. Instead of pre-defining areas of interest (AOIs), analyses of fixation data were based on a new data-driven approach in which the fixation patterns themselves were used to define AOIs, which were then subjected to detailed analyses of shape information content. The analysis methodology contrasts fixation region overlap between the observed data patterns, a random distribution, and any number of predicted patterns derived from theoretical models of shape analysis. The results showed that the distributions of fixation regions were not random but structured and consistent across subjects: observers fixated the same image locations between the learning and test phases and tracked similar geometric shape features across changes in object viewpoint. We contrasted the locations of fixation regions from the recognition task against a random model of fixation region location, a visual saliency model, and a model based on the localization of 3D segmentation points at negative minima of curvature at surface intersections. The visual saliency model did no better than the random distribution in accounting for fixation region overlap. In contrast, the fixation regions predicted by the 3D segmentation model accounted for significantly more overlap than the random model. This suggests that, contrary to some current 2D image-based models of object recognition, relatively high-level local 3D shape properties defined by negative minima of curvature constrain fixation patterns during shape analysis for object recognition.
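
To make the overlap analysis concrete, the following is a minimal sketch, not the authors' code, of how fixation-defined AOIs might be compared against model-predicted locations. All names, parameters and coordinates (circular AOIs, a 256-by-256 stimulus, the example points) are illustrative assumptions.

# Hypothetical sketch of a data-driven AOI overlap comparison.
# Observed fixations define circular AOIs; each candidate model supplies
# predicted locations, and we measure how much of the observed AOI area
# falls inside the model's predicted AOIs.

import numpy as np

RNG = np.random.default_rng(0)
IMAGE_SHAPE = (256, 256)   # assumed stimulus resolution
AOI_RADIUS = 12            # assumed AOI radius in pixels


def aoi_mask(points, shape=IMAGE_SHAPE, radius=AOI_RADIUS):
    """Binary mask of circular AOIs centred on (x, y) points."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for x, y in points:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return mask


def overlap(observed_points, predicted_points):
    """Proportion of observed AOI area covered by predicted AOIs."""
    obs = aoi_mask(observed_points)
    pred = aoi_mask(predicted_points)
    return (obs & pred).sum() / obs.sum()


# Illustrative data: observed fixations cluster near two locations that a
# 3D segmentation model might mark as concave surface intersections.
observed = [(60, 80), (62, 84), (180, 150), (176, 148)]
segmentation_model = [(61, 82), (178, 149)]          # model-predicted points
random_model = RNG.integers(0, 256, size=(2, 2))     # random baseline points

print("segmentation model overlap:", overlap(observed, segmentation_model))
print("random model overlap:     ", overlap(observed, random_model))

In the study itself, the predicted locations would come from the visual saliency and 3D segmentation models, and the observed overlap would be tested statistically against the random baseline rather than simply printed.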

Leek, C., & Johnston, S. (2008). Fixation locations during three-dimensional object recognition are predicted by image segmentation points at concave surface intersections [Abstract]. Journal of Vision, 8(6):216, 216a, http://journalofvision.org/8/6/216/, doi:10.1167/8.6.216.