Vision Sciences Society Annual Meeting Abstract | December 2022
Journal of Vision, Volume 22, Issue 14 (Open Access)
PCA Reveals Common Spatial Patterns of Motion Energy in Diverse Stimulus Sets and in Scene-Selective Area Voxel Tuning
Author Affiliations & Notes
  • Yu Zhao
    University of Nevada, Reno
  • Matthew W. Shinkle
    University of Nevada, Reno
  • Arnab Biswas
    University of Nevada, Reno
  • Mark D. Lescroart
    University of Nevada, Reno
  • Footnotes
    Acknowledgements  NSF EPSCoR RII Track2 #1920896 to M.G., M.D.L., P.M., B.B.
Journal of Vision December 2022, Vol. 22, 4486. doi: https://doi.org/10.1167/jov.22.14.4486
Citation: Yu Zhao, Matthew W. Shinkle, Arnab Biswas, Mark D. Lescroart; PCA Reveals Common Spatial Patterns of Motion Energy in Diverse Stimulus Sets and in Scene-Selective Area Voxel Tuning. Journal of Vision 2022;22(14):4486. https://doi.org/10.1167/jov.22.14.4486.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Many studies have characterized regions in the brain, including MT and MST, that are sensitive to local and global patterns of motion contrast. However, in naturalistic stimuli visual motion also carries information about three-dimensional scene structure. Encoding models based on motion energy accurately predict human BOLD responses in areas outside MT and MST, including regions selective for scenes. Whether this reflects spurious correlations or an intrinsic relationship between the spatial distribution of motion and scene features remains unclear. To investigate motion selectivity in multiple regions, we used a biologically inspired motion-energy model to extract motion features from several naturalistic video datasets, including film clips, moving rendered scenes, and first-person video recordings. Using a combination of previously collected and newly acquired BOLD fMRI data, we fit regression models to predict voxel responses from motion energy in these stimuli. To probe how the experimental stimuli varied in motion content and how different visual areas are tuned to patterns of motion, we computed principal components of variation in both stimulus motion and voxel regression weights in place-selective areas. The primary dimension of variability in our stimuli corresponds to the general presence or absence of visual motion. Other patterns of motion selectivity in the top nine principal components of all three datasets include contrasts in the location (upper versus lower, left versus right visual field), orientation (horizontal versus vertical), and temporal frequency of motion. We find similar dimensions of variability in voxel tuning across the scene-selective areas RSC, PPA, and OPA, with the primary dimension of variability being selectivity for high temporal frequency motion at horizontal versus vertical orientations.
Our results suggest that the known selectivity of scene-selective areas for vertical and horizontal orientations is due to robust statistics of motion that are present in diverse stimulus sets.
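The analysis pipeline described in the abstract (voxelwise regression of BOLD responses on motion-energy features, followed by PCA of both the stimulus features and the fitted voxel weights) can be sketched as follows. This is an illustrative sketch on simulated data, not the authors' code: the array sizes, the ridge penalty, and the use of plain ridge regression are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated motion-energy features: T timepoints x F motion-energy channels.
# In the study these would come from a motion-energy model applied to video;
# here they are random stand-ins.
T, F, V = 500, 40, 100
X = rng.standard_normal((T, F))

# Simulated BOLD responses for V voxels, generated from hidden tuning weights.
true_W = rng.standard_normal((F, V))
Y = X @ true_W + 0.5 * rng.standard_normal((T, V))

# Ridge regression: one vector of motion-energy weights per voxel.
lam = 10.0  # assumed penalty; in practice chosen by cross-validation
W = np.linalg.solve(X.T @ X + lam * np.eye(F), X.T @ Y)  # shape (F, V)

def pca(A, k):
    """Top-k principal axes of the rows of A, plus variance ratios."""
    A = A - A.mean(axis=0)
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k], (s ** 2) / (s ** 2).sum()

# PCA of stimulus features: dimensions of motion variability in the stimuli.
stim_pcs, stim_var = pca(X, 9)
# PCA of voxel weights: dimensions of motion tuning across voxels.
wt_pcs, wt_var = pca(W.T, 9)

print(stim_pcs.shape, wt_pcs.shape)  # (9, 40) (9, 40)
```

Comparing `stim_pcs` with `wt_pcs` (e.g. by correlating the component loadings) is one way to ask whether the dominant dimensions of stimulus motion reappear in voxel tuning, which is the comparison the abstract describes.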
