Vision Sciences Society Annual Meeting Abstract  |  September 2019
Volume 19, Issue 10  |  Open Access
Invariance of Human Image Recognition Measured Using Generative Adversarial Nets
Author Affiliations & Notes
  • Jaykishan Y Patel
    Department of Psychology and Centre for Vision Research, York University, Toronto, ON
  • Elee D Stalker
    Department of Psychology and Centre for Vision Research, York University, Toronto, ON
  • Ingo Fruend
    Department of Psychology and Centre for Vision Research, York University, Toronto, ON
Journal of Vision September 2019, Vol.19, 124d. doi:https://doi.org/10.1167/19.10.124d
Abstract

Humans can perform fine discriminations between different natural images. Previous studies of visual discrimination mechanisms have often used grating stimuli. However, gratings look quite different from real-world images, which have complex, nonlinear structure. Quantitative characterization of discrimination performance requires precise manipulation of stimuli, yet the required precision can be difficult to achieve for naturalistic images without distorting them. Generative image models based on deep neural networks, known as Generative Adversarial Nets (GANs), represent an image as a vector of highly nonlinear latent image features. Here, we attempted to generalize classical oblique masking experiments to this highly nonlinear feature space. We have found previously that rotations of these latent feature vectors correspond to changes in image content, while the length of a feature vector appears to correspond to the image's contrast. In a two-alternative forced-choice task, 4 observers were asked to match one of two probe stimuli to a target stimulus. Crucially, one of the probes was rotated in feature space towards the target, making its features slightly more similar to the target's features. This allowed us to measure thresholds for rotations in the latent feature space. Thresholds were on average 29.24 ± 1.99 degrees (mean ± SEM) and did not change significantly with our overall measure of global contrast. An analysis of trial-by-trial responses showed that sensitivity to these feature-space rotations was approximately independent of both the length and the exact identity of the feature vectors corresponding to the presented stimuli. This indicates that selectivity to image content was not only independent of global contrast, but in fact did not change throughout the entire space of natural images captured by our model.
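To make the probe manipulation concrete, the following is a minimal sketch (in Python/NumPy, not the authors' actual code) of rotating a latent vector toward a target vector by a fixed angle within the plane spanned by the two vectors. The rotation preserves the vector's length, so the putative global contrast stays fixed while content shifts toward the target. All names here (rotate_toward, z_probe, z_target) are illustrative assumptions.

    import numpy as np

    def rotate_toward(z, z_target, theta_deg):
        """Rotate z toward z_target by theta_deg degrees in the 2-D plane
        spanned by the two vectors, preserving ||z|| (length ~ contrast)."""
        z = np.asarray(z, dtype=float)
        t = np.asarray(z_target, dtype=float)
        u = z / np.linalg.norm(z)               # unit vector along z
        t_hat = t / np.linalg.norm(t)
        # Component of the target direction orthogonal to z (Gram-Schmidt)
        v = t_hat - np.dot(t_hat, u) * u
        v_norm = np.linalg.norm(v)
        if v_norm < 1e-12:                      # vectors already collinear
            return z.copy()
        v = v / v_norm
        theta = np.deg2rad(theta_deg)
        # Rotate within span{u, v}; the length of z is unchanged
        return np.linalg.norm(z) * (np.cos(theta) * u + np.sin(theta) * v)

    # Example: rotate a 100-D latent vector by 2 degrees toward a target.
    # ||z_rotated|| equals ||z_probe||, and the angle between the probe and
    # the target shrinks by exactly 2 degrees.
    rng = np.random.default_rng(0)
    z_probe = rng.standard_normal(100)
    z_target = rng.standard_normal(100)
    z_rotated = rotate_toward(z_probe, z_target, theta_deg=2.0)

Feeding z_rotated and z_probe through the GAN's generator would then yield the two probe images for one 2AFC trial; varying theta_deg across trials traces out the psychometric function from which the rotation threshold is estimated.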
