July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   July 2013
Understanding the nature of the visual representations underlying rapid categorization tasks.
Author Affiliations
  • Imri Sofer
    Department of Cognitive, Linguistic & Psychological Sciences, and the Institute for Brain Sciences, Brown University, Providence, RI, USA
  • Kwang Ryeol Lee
    Dept. of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  • Pachaya Sailamul
    Dept. of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  • Sébastien Crouzet
    Department of Cognitive, Linguistic & Psychological Sciences, and the Institute for Brain Sciences, Brown University, Providence, RI, USA
    Institute of Medical Psychology, Charité University Medicine Berlin, Germany
  • Thomas Serre
    Department of Cognitive, Linguistic & Psychological Sciences, and the Institute for Brain Sciences, Brown University, Providence, RI, USA
Journal of Vision July 2013, Vol. 13, 658. https://doi.org/10.1167/13.9.658
Abstract

Unlike simple artificial visual stimuli, which can be readily parameterized, natural image datasets are difficult to control in psychophysics experiments. For instance, we do not yet have metrics for natural images that capture the intrinsic visual similarity between real-world object categories. Task difficulty is therefore hard to control when dealing with natural object categories, which may confound existing experiments. Here we introduce a measure of the discriminability between natural scene categories based on visual features. The measure is obtained by first extracting low-, mid-, and high-level image features using existing computational models of visual processing. A classifier is then trained to discriminate between pairs of categories, and its error rate is computed on an independent dataset. This procedure was applied to 10 categories of four-legged mammals (dog, bear, gorilla, zebra, etc.) and tested on participants performing 2AFC rapid categorization tasks using all possible category pairs (average performance = 83%, SD = 7.9, n = 4). Surprisingly, we found that low-level visual features were already highly correlated with the pattern of performance (both accuracy and reaction time) of individual participants (r > 0.80, p < 0.001). Our results suggest that visual similarities based on low-level visual features may account for various effects observed in visual categorization experiments that are typically attributed to semantic aspects of visual processing. Overall, such visual similarity measurements may offer a unique tool for controlling image sets in visual recognition experiments and for improving our understanding of the underlying visual processes.
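The discriminability procedure described in the abstract (train a classifier on each category pair, score its error rate on held-out data, then correlate the resulting pairwise scores with human performance) can be sketched as a toy in Python. This is an illustrative assumption-laden sketch, not the authors' code: the Gaussian synthetic "features" stand in for unspecified model outputs, the nearest-centroid classifier stands in for whatever classifier was actually used, the simulated "human" error rates are fabricated for illustration only, and all sizes and names are hypothetical.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 categories with synthetic feature vectors
# standing in for low-level model responses to natural images.
n_categories, n_train, n_test, n_dims = 10, 50, 50, 8
centers = rng.normal(scale=0.5, size=(n_categories, n_dims))

def sample(category, n):
    """Draw n synthetic feature vectors for one category."""
    return centers[category] + rng.normal(size=(n, n_dims))

def pair_error_rate(a, b):
    """Train a nearest-centroid classifier on one split and report its
    error rate on an independent split: the discriminability score."""
    mu_a = sample(a, n_train).mean(axis=0)
    mu_b = sample(b, n_train).mean(axis=0)
    test_a, test_b = sample(a, n_test), sample(b, n_test)
    err_a = np.mean(np.linalg.norm(test_a - mu_a, axis=1)
                    > np.linalg.norm(test_a - mu_b, axis=1))
    err_b = np.mean(np.linalg.norm(test_b - mu_b, axis=1)
                    > np.linalg.norm(test_b - mu_a, axis=1))
    return (err_a + err_b) / 2

# Score every category pair (45 pairs for 10 categories).
pairs = list(combinations(range(n_categories), 2))
model_err = np.array([pair_error_rate(a, b) for a, b in pairs])

# Simulated stand-in for per-pair human error rates, used only to
# illustrate the correlation analysis reported in the abstract.
human_err = model_err + rng.normal(scale=0.02, size=len(pairs))
r = np.corrcoef(model_err, human_err)[0, 1]
print(f"{len(pairs)} pairs, correlation r = {r:.2f}")
```

In the actual study, `human_err` would be each participant's per-pair error rate (or reaction time) from the 2AFC task, and the features would come from computational models of low-, mid-, and high-level visual processing rather than Gaussian draws.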

Meeting abstract presented at VSS 2013
