September 2018 | Volume 18, Issue 10 | Open Access
Vision Sciences Society Annual Meeting Abstract
Neural representation of object-scene scale consistency
Author Affiliations
  • Lauren Welbourne
    Psychological and Brain Sciences, University of California, Santa Barbara; Institute for Collaborative Biotechnologies, University of California, Santa Barbara
  • Barry Giesbrecht
    Psychological and Brain Sciences, University of California, Santa Barbara; Institute for Collaborative Biotechnologies, University of California, Santa Barbara
  • Miguel Eckstein
    Psychological and Brain Sciences, University of California, Santa Barbara; Institute for Collaborative Biotechnologies, University of California, Santa Barbara
Journal of Vision September 2018, Vol.18, 1243. doi:10.1167/18.10.1243
© ARVO (1962-2015); The Authors (2016-present)
Abstract

To optimize finding objects in scenes, the human brain guides search towards likely target locations (Torralba et al., 2006). Recent work shows that the size consistency of searched objects relative to their scenes ("scale consistency") contributes to search optimization (Eckstein et al., 2017), but little is known about the neural representation of scale consistency. Here, we used fMRI to determine brain-region and voxel-level selectivity for scale consistency.

Methods: Fourteen subjects viewed 120 computer-generated images per scan (5 scans), each containing 1 of 10 objects in scenes at different scale-consistency levels (from normally scaled to extremely mis-scaled). We manipulated four properties: perceived scale consistency, real-world object size, object retinal size, and scene field-of-view. Subjects adapted to the scene (2000-3500 ms) prior to object presentation (500 ms). Functional ROIs (e.g., LO, PPA, TOS) were identified using localizer scans (contrast threshold p < 10^-5).

Results: Timecourse estimates were produced for each perceived scale-consistency level using Finite Impulse Response (FIR) GLM analyses. Object-selective region LO responded strongly to all levels but, on average, showed no difference in peak beta values between the normally scaled and extremely mis-scaled levels (paired t-tests, Bonferroni corrected; t(13) = 0.6186, n.s.). Conversely, significant differences were found in scene-selective regions PPA (t(13) = 11.7982, p < 10^-6) and TOS (t(13) = 7.4838, p < 10^-4). MVPA, using voxel-specific HRF parameters, demonstrated high prediction accuracy for perceived scale consistency: PPA (80%), TOS (87%), LO (87%). Cross-training on the other properties and testing on perceived scale consistency yielded lower but above-chance decoding (<60%), suggesting an overlap of neuronal populations selective for each property. Voxel-wise encoding models also identified voxel clusters with high selectivity for perceived scale consistency within each ROI.
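The FIR GLM analysis described in the Results can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline: the TR, event onsets, estimation window, and response shape are all assumed placeholders. The key idea is that an FIR design matrix has one regressor per post-stimulus delay, so the fitted betas trace out the response timecourse without assuming a canonical HRF shape.

```python
import numpy as np

# Hedged sketch of a Finite Impulse Response (FIR) GLM.
# TR, onsets, and window length are illustrative assumptions.
TR = 2.0          # repetition time in seconds (assumed)
n_scans = 100
n_delays = 10     # number of post-onset TRs modeled per condition

def fir_design(onsets_sec, n_scans, tr, n_delays):
    """One column per post-stimulus delay: 1 at onset+delay, else 0."""
    X = np.zeros((n_scans, n_delays))
    for onset in onsets_sec:
        start = int(round(onset / tr))
        for d in range(n_delays):
            if start + d < n_scans:
                X[start + d, d] = 1.0
    return X

# Synthetic data: one condition with a triangular "HRF-like" response.
onsets = [10.0, 60.0, 110.0, 150.0]
X = fir_design(onsets, n_scans, TR, n_delays)
true_timecourse = np.array([0, 1, 2, 3, 2, 1, 0.5, 0.25, 0, 0])
rng = np.random.default_rng(1)
y = X @ true_timecourse + rng.normal(scale=0.1, size=n_scans)

# Ordinary least squares: each beta estimates the response at one delay.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
peak_beta = betas.max()
print(f"estimated peak beta: {peak_beta:.2f}")
```

Because the synthetic events are spaced farther apart than the estimation window, the FIR regressors do not overlap and each beta is simply the average response at that delay; peak betas recovered this way correspond to the peak values compared across scale-consistency levels in the t-tests above.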
Conclusions: Our results suggest that most voxels in PPA and TOS are selective for perceived scale consistency, whereas LO selectivity is driven by specific voxel clusters. The MVPA cross-training and voxel-wise encoding models suggest that voxels can be jointly selective for multiple scene properties.
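The MVPA cross-training logic can be sketched as below: train a classifier on voxel patterns labeled by one property, then test it on patterns labeled by another. This is an illustration on synthetic patterns with a simple nearest-centroid classifier, not the authors' voxel-specific-HRF pipeline; the labels, voxel counts, and shared-signal construction are assumptions made to show why overlapping neuronal populations yield above-chance cross-decoding.

```python
import numpy as np

# Hedged sketch of MVPA cross-training on synthetic voxel patterns.
rng = np.random.default_rng(0)

def make_patterns(n_trials, n_voxels, labels, signal):
    """Synthetic voxel patterns: class-specific mean + unit noise."""
    X = rng.normal(size=(n_trials, n_voxels))
    for lab in np.unique(labels):
        X[labels == lab] += signal[lab]
    return X

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

n_voxels, n_train, n_test = 50, 200, 100
# A shared class-specific response simulates neuronal populations that are
# jointly selective to both properties (the overlap suggested above).
shared = {0: rng.normal(scale=0.5, size=n_voxels),
          1: rng.normal(scale=0.5, size=n_voxels)}

y_train = rng.integers(0, 2, n_train)   # e.g. labels for another property
y_test = rng.integers(0, 2, n_test)     # e.g. perceived scale-consistency labels
X_train = make_patterns(n_train, n_voxels, y_train, shared)
X_test = make_patterns(n_test, n_voxels, y_test, shared)

classes, centroids = nearest_centroid_fit(X_train, y_train)
pred = nearest_centroid_predict(X_test, classes, centroids)
accuracy = (pred == y_test).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

If the two properties drove fully disjoint voxel signals, cross-decoding would sit at chance (0.5 here); shared signal pushes it above chance, mirroring the inferior-but-above-chance (<60%) cross-training result reported in the abstract.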

Meeting abstract presented at VSS 2018
