Vision Sciences Society Annual Meeting Abstract | August 2023
Journal of Vision, Volume 23, Issue 9
Open Access
Scene representations underlying categorization behaviour emerge 100 to 200 ms after stimulus onset
Author Affiliations & Notes
  • Agnessa Karapetian
    Freie Universitaet Berlin, Germany
    Einstein Center for Neurosciences Berlin, Germany
    Bernstein Centre for Computational Neuroscience Berlin, Germany
  • Antoniya Boyanova
    Freie Universitaet Berlin, Germany
  • Muthukumar Pandaram
    Bernstein Centre for Computational Neuroscience Berlin, Germany
  • Klaus Obermayer
    Einstein Center for Neurosciences Berlin, Germany
    Bernstein Centre for Computational Neuroscience Berlin, Germany
    Technische Universitaet Berlin, Germany
    Berlin School of Mind and Brain, Germany
  • Tim C. Kietzmann
    Universitaet Osnabrueck, Germany
  • Radoslaw M. Cichy
    Freie Universitaet Berlin, Germany
    Einstein Center for Neurosciences Berlin, Germany
    Bernstein Centre for Computational Neuroscience Berlin, Germany
    Berlin School of Mind and Brain, Germany
  • Footnotes
    Acknowledgements  Einstein Center for Neurosciences, German Research Council (DFG), European Research Council (ERC)
Journal of Vision August 2023, Vol.23, 4689. doi:https://doi.org/10.1167/jov.23.9.4689
Abstract

Humans constantly process scene information from their environment, requiring quick and accurate decisions and behavioural responses. Despite the importance of this process, it remains unknown which cortical representations underlie this function. Additionally, to date there is no unifying model of scene categorization that can predict both neural and behavioural correlates as well as their relationship. Here, we approached these questions empirically and via computational modelling using deep neural networks. First, to determine which scene representations are suitably formatted for behaviour, we collected electroencephalography (EEG) data and reaction times from human subjects during a scene categorization task (natural vs. man-made) and an orthogonal task (fixation cross colour discrimination). Then, we linked the neural representations with reaction times in a within-task or a cross-task analysis using the distance-to-hyperplane approach, a multivariate extension of signal detection theory. We observed that neural data and categorization reaction times were correlated between ~100 ms and ~200 ms after stimulus onset, even when the neural data were from the orthogonal task. This identifies the time window in which post-stimulus representations suitably formatted for behaviour emerge. Second, to provide a unified model of scene categorization, we evaluated a recurrent convolutional neural network in terms of its capacity to predict a) human neural data, b) human behavioural data, and c) the brain-behaviour relationship. We observed similarities between the network and humans at all levels: the network correlated strongly with humans with respect to both neural representations and reaction times. In terms of the brain-behaviour relationship, EEG data correlated with network reaction times between ~100 ms and ~200 ms after stimulus onset, mirroring the results of the empirical analysis. Altogether, our results provide a unified empirical and computational account of scene categorization in humans.
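
The abstract does not spell out the distance-to-hyperplane analysis itself. As a rough illustration only, the sketch below shows one common variant of the approach (cross-validated linear-classifier decision values at each EEG time point, correlated with per-scene reaction times); all array names, shapes, and parameter choices are hypothetical and are not taken from the study.

```python
# Minimal sketch of a distance-to-hyperplane analysis, assuming hypothetical
# inputs: eeg[scene, channel, time] (trial-averaged EEG patterns), labels[scene]
# in {0, 1} (natural vs. man-made), and rt[scene] (median categorization RTs).
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

def dth_correlation(eeg, labels, rt, n_folds=5):
    """At each time point, train a linear natural-vs-man-made classifier, take
    each scene's cross-validated distance to the decision hyperplane, and
    correlate those distances with reaction times. Scenes far from the boundary
    are expected to be categorized faster, giving a negative correlation."""
    n_times = eeg.shape[-1]
    rho = np.zeros(n_times)
    for t in range(n_times):
        X = eeg[:, :, t]                       # scenes x channels at time t
        # Decision values from held-out folds, so a scene's distance never
        # comes from a classifier trained on that scene.
        dist = cross_val_predict(LinearSVC(), X, labels,
                                 cv=n_folds, method="decision_function")
        rho[t], _ = spearmanr(np.abs(dist), rt)  # |distance| vs. RT
    return rho  # expected to dip below zero ~100-200 ms after stimulus onset
```

In the cross-task variant described above, the classifier distances would be computed from EEG recorded during the orthogonal fixation-cross task, while the reaction times come from the categorization task.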
