October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Modeling effects of blurred vision on category learning
Author Affiliations
  • William Charles
    Fordham University
  • Rohan Agarwal
    Hunter College High School
  • Daniel Leeds
    Fordham University
Journal of Vision October 2020, Vol.20, 833. doi:https://doi.org/10.1167/jov.20.11.833
Abstract

Human visual acuity sharpens during the first several months of life. Changing eye shape causes vision to develop from 20/800 to 20/20 over these months. Visual object learning develops in tandem with visual acuity. Children with congenital cataracts removed after months of visual development show impairments in a variety of visual tasks, including integration of contour segments (Putzer 2007) and facial recognition (de Heering 2002). Recently, Vogelsang (2018) reported a benefit for face learning in the AlexNet Convolutional Neural Network (CNN) when training first on blurred images followed by clear images. We explore effects of blurred vision on broad object class discrimination, compared against fine-grained dog breed discrimination, and test the bounds of advantageous blurring. We train CNN models (including AlexNet and SqueezeNet) on two image datasets drawn from ImageNet (Russakovsky 2015): “Imagewoof” features ten breeds of dogs, and “Imagenette” features ten visually distinct object types. CNNs were trained after Xavier initialization using images with each of five Gaussian blur settings – windows of 1, 3, 5, 11, and 23 pixels. These windows capture the span of visual acuities over development. We test each network separately on images from each blur level, using five-fold cross validation. We find networks perform best when trained and tested on the same level of blur. Notably, training with higher-blur images allows relatively robust recognition for lower-blur images, while lower-blur learning does not equivalently benefit higher-blur recognition. The benefits of blur training extend to the highest blur training windows for object recognition, but are confined to smaller levels of blur (3 and 5 pixels) for dog breed discrimination. These benefits were more pronounced in the larger AlexNet architecture, compared to SqueezeNet. Our findings support the utility of learning from blurred images for broad object recognition, particularly in larger networks.
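The stimulus manipulation above (Gaussian blur with window sizes of 1, 3, 5, 11, and 23 pixels) can be sketched as a separable 2-D convolution. This is a minimal illustration only: the choice of sigma (tied here to window size), the edge-padding mode, and the grayscale input are assumptions for the sketch, not the authors' exact preprocessing.

```python
import numpy as np

# Blur windows spanning the range of acuities tested in the study.
BLUR_WINDOWS = [1, 3, 5, 11, 23]

def gaussian_kernel(window):
    """Normalized 1-D Gaussian kernel of the given (odd) window size.
    Sigma is tied to window size here as an illustrative assumption."""
    sigma = window / 6.0 if window > 1 else 0.5
    x = np.arange(window) - (window - 1) / 2.0
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(image, window):
    """Separable Gaussian blur of a 2-D grayscale array.
    A window of 1 leaves the image unblurred."""
    if window <= 1:
        return image.astype(float)
    k = gaussian_kernel(window)
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Convolve each row, then each column (separable 2-D Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

In the train/test design described in the abstract, each network would be trained on images passed through one of these blur levels and then evaluated on images at every blur level, yielding a 5x5 grid of train-blur by test-blur accuracies.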
