Vision Sciences Society Annual Meeting Abstract  |   August 2012
Automatic neural coding of open and closed scenes in RSC and PPA during visual search
Author Affiliations
  • Fei Guo
    Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA
  • Tim Preston
    Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA
  • Barry Giesbrecht
    Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA
  • Miguel P. Eckstein
    Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA
Journal of Vision August 2012, Vol.12, 595. doi:10.1167/12.9.595
Citation: Fei Guo, Tim Preston, Barry Giesbrecht, Miguel P. Eckstein; Automatic neural coding of open and closed scenes in RSC and PPA during visual search. Journal of Vision 2012;12(9):595. doi: 10.1167/12.9.595.
Abstract

Scene processing produces neural representations in a number of cortical regions that support various tasks, including visual search. Recent evidence suggests that during scene processing the responses of the parahippocampal place area (PPA) are modulated by spatial boundaries but not by categorical or content properties of scenes (Kravitz et al., 2011; Park et al., 2011). It is unknown, however, whether this neural coding of spatial layout persists when observers are engaged in a visual search task that requires a decision orthogonal to judging the spatial layout of scenes or recognizing them. Using event-related fMRI and a single-trial multivariate pattern analysis (MVPA), we show that several visual areas can reliably discriminate scene spatial layout (open vs. closed space) while observers perform a visual search task. Observers viewed 640 briefly presented (250 ms) diverse real-world scenes and searched for a target object specified by a cue word that preceded each scene. Each scene was presented only once, and targets were present in 50% of the scenes. Observers reported their present/absent decision using an 8-point confidence rating. Importantly, the observers' discrimination (present/absent) was orthogonal to the MVPA discrimination (open vs. closed scene). Single-trial MVPA was used to discriminate between the spatial-layout categories of scenes (open vs. closed) in regions identified by standard localizers. Classification accuracy in RSC, PPA, and V3B was significantly above chance for the open/closed discrimination, with RSC and PPA performing best. Low-level visual areas (V1-V4, MT) and object- and face-selective areas did not predict spatial layout above chance.
The finding that RSC and PPA discriminate between open and closed scenes while observers are engaged in an orthogonal task supports the conclusion that areas within the scene-processing network automatically code the spatial layout of natural scenes.

Meeting abstract presented at VSS 2012
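The single-trial MVPA described in the abstract can be sketched as follows. This is an illustrative reconstruction only, using synthetic data in place of real single-trial fMRI patterns; the linear classifier, cross-validation scheme, ROI size, and effect size are all assumptions, not the authors' documented pipeline.

```python
# Illustrative single-trial MVPA sketch: decoding open vs. closed scenes
# from (synthetic) voxel patterns in a hypothetical scene-selective ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 640, 200            # 640 scenes; ROI size is assumed
labels = rng.integers(0, 2, n_trials)    # 0 = closed scene, 1 = open scene

# Synthetic patterns: trial noise plus a weak, fixed "layout" signal added
# to open-scene trials, mimicking an ROI (e.g., PPA) that carries
# open/closed information.
signal = rng.normal(0.0, 1.0, n_voxels)
patterns = rng.normal(0.0, 1.0, (n_trials, n_voxels))
patterns[labels == 1] += 0.25 * signal

# Linear classifier with stratified 10-fold cross-validation; accuracy is
# compared against the 50% chance level for a two-class discrimination.
clf = LinearSVC()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, patterns, labels, cv=cv).mean()
print(f"mean decoding accuracy: {acc:.2f} (chance = 0.50)")
```

In a real analysis, `patterns` would be single-trial response estimates from a localizer-defined ROI, and significance against chance would typically be assessed with a permutation test rather than by inspection.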
