Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Shape features learned for object classification can predict behavioral discrimination of written symbols
Author Affiliations & Notes
  • Daniel Janini
    Department of Psychology, Harvard University
  • Talia Konkle
    Department of Psychology, Harvard University
Journal of Vision September 2019, Vol.19, 32d. doi:https://doi.org/10.1167/19.10.32d

      Daniel Janini, Talia Konkle; Shape features learned for object classification can predict behavioral discrimination of written symbols. Journal of Vision 2019;19(10):32d. https://doi.org/10.1167/19.10.32d.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

After years of experience, humans become experts at recognizing written symbols. This learning process may form a new visual feature space highly specialized for distinguishing between the letters of one’s alphabet. Alternatively, recognizing written symbols may simply involve general shape features previously learned for object classification. Here, we assess the plausibility of the latter hypothesis. We measured the perceptual dissimilarity of all pairs of letters using a visual search paradigm. On each trial, participants (n=220) identified a target letter as quickly as possible among five distractors, for example identifying the letter ‘a’ among five ‘b’s. This procedure was completed for all letter pairs (325 pairings) across twenty fonts. Next, we determined whether general shape features could predict the perceptual similarity space measured by this task. We used AlexNet trained on object classification as a model of a general shape space, as its learned features were never directly trained to distinguish between written symbols. We recorded responses within AlexNet to all 26 letters across the twenty fonts used in the behavioral experiment, then constructed a representational dissimilarity matrix (RDM) for each layer. Each RDM predicted variance in the perceptual similarity of letters (R2 = 0.21–0.50, noise ceiling = 0.73–0.83), with the best predictions made by the mid-to-late layers. Next, we predicted the behavioral data using a weighted combination of features across all layers of AlexNet, accounting for most of the explainable variance (R2 = 0.66, noise ceiling = 0.73–0.83). These results provide a plausibility argument that perceiving and distinguishing written symbols can rely on the same general shape features as object recognition. Future work will determine whether a feature space highly specialized for representing written symbols can predict human letter recognition as well as the general shape features used here.
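For concreteness, the analysis pipeline described above can be sketched in a few lines of Python. The sketch below is illustrative, not the authors' code: it uses torchvision's pretrained AlexNet, assumes one rendered image per letter at placeholder paths, loads the behavioral RDM from a placeholder file, and substitutes a simple non-negative reweighting of layer-level RDMs for the feature-level weighting the abstract describes.

```python
# Minimal sketch of the RDM analysis described in the abstract (illustrative
# assumptions throughout: image paths, behavioral RDM file, correlation
# distance as the dissimilarity measure, and RDM-level reweighting).
import itertools
import numpy as np
import torch
from PIL import Image
from scipy.optimize import nnls
from torchvision import models, transforms

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def all_layer_features(img):
    """Flattened activations after each module in AlexNet.features."""
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    feats = []
    with torch.no_grad():
        for module in model.features:
            x = module(x)
            feats.append(x.flatten().numpy())
    return feats

# Hypothetical inputs: one image per letter (the study used 26 letters x 20 fonts).
letters = "abcdefghijklmnopqrstuvwxyz"
images = [Image.open(f"letters/{c}.png") for c in letters]  # placeholder paths

per_image = [all_layer_features(im) for im in images]
n_layers = len(per_image[0])
pairs = list(itertools.combinations(range(len(letters)), 2))  # 325 letter pairs

def rdm_for_layer(k):
    """Correlation-distance RDM over all letter pairs for layer k."""
    f = np.stack([per_image[i][k] for i in range(len(letters))])
    return np.array([1 - np.corrcoef(f[i], f[j])[0, 1] for i, j in pairs])

layer_rdms = np.stack([rdm_for_layer(k) for k in range(n_layers)])  # (layers, 325)

behavioral_rdm = np.load("behavioral_rdm.npy")  # placeholder, shape (325,)

# Per-layer fit: variance in behavioral dissimilarity explained by each layer.
for k in range(n_layers):
    r2 = np.corrcoef(layer_rdms[k], behavioral_rdm)[0, 1] ** 2
    print(f"layer {k}: R^2 = {r2:.2f}")

# Cross-layer fit: non-negative weights on layer RDMs, a common stand-in for
# the feature-level reweighting the abstract describes.
w, _ = nnls(layer_rdms.T, behavioral_rdm)
pred = layer_rdms.T @ w
print(f"all layers: R^2 = {np.corrcoef(pred, behavioral_rdm)[0, 1] ** 2:.2f}")
```

In the actual study, each letter appeared in twenty fonts and the fits were evaluated against a noise ceiling estimated from the behavioral data; the sketch omits both steps for brevity.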
