August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Intermediate human visual areas represent the locations of silhouette edges in natural movies
Author Affiliations
  • Mark D. Lescroart
    Helen Wills Neuroscience Institute, University of California, Berkeley
  • Shinji Nishimoto
    National Institute of Information and Communications Technology, Osaka, Japan
  • Jack L. Gallant
    Helen Wills Neuroscience Institute, University of California, Berkeley
Journal of Vision August 2014, Vol. 14, 716.
Intermediate visual areas (V4 and areas in lateral occipital cortex) respond selectively to variation in color, texture, motion, and shape. One goal of vision research is to build computational models that can predict responses to arbitrary stimuli varying in all of these dimensions. However, most studies of these areas have examined only one or two dimensions in isolation. Therefore, the information that these areas represent about complex natural scenes is poorly understood. To address this issue, we created a novel set of computer-generated movies to use as stimuli in a voxel-wise modeling fMRI experiment. The movies contained realistic objects in random settings, with naturalistic variation in color, texture, lighting, and camera motion. We quantified two stimulus parameters, silhouette edges and motion energy, using meta-information from the rendering software and a spatiotemporal Gabor wavelet model (Nishimoto et al., 2011). Because the locations of silhouette edges were often correlated with high contrast in motion energy when the camera moved in 3D, we also created a stimulus set containing only 2D motion. We used a 3T MRI scanner to record brain activity while subjects viewed both sets of rendered movies. We then used the silhouette edge and motion energy features and L2-regularized linear regression to fit voxel-wise models to the data from each individual subject. Finally, we used an independent data set to test predictions of the fitted models. For the movies with 3D camera motion, the motion energy model gave better predictions than the silhouette model. However, for the movies that contained only 2D motion, the silhouette edge model gave better predictions in V4 and LO. Thus, although motion energy is correlated with the presence of silhouette edges in stimuli rendered using naturalistic camera motion, V4 and LO are best described by a model that explicitly represents the locations of silhouette edges.
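The motion-energy features referred to above can be illustrated with a minimal sketch: a quadrature pair of spatiotemporal Gabor filters applied to a space-time luminance pattern, with the two squared outputs summed to give a phase-invariant, direction-selective response. This toy version uses 1D space plus time for brevity; the actual model (Nishimoto et al., 2011) applies a large bank of 2D spatiotemporal filters to movies. All parameter values and function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gabor_pair(n, sf, tf, sigma_x, sigma_t):
    """Even/odd (quadrature) spatiotemporal Gabors on an n x n space-time grid."""
    x = np.arange(n) - n // 2
    t = np.arange(n) - n // 2
    X, T = np.meshgrid(x, t, indexing="ij")
    envelope = np.exp(-X**2 / (2 * sigma_x**2) - T**2 / (2 * sigma_t**2))
    # The tilt of the carrier in space-time sets the preferred velocity.
    phase = 2 * np.pi * (sf * X + tf * T)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(patch, even, odd):
    """Sum of squared quadrature filter outputs for one space-time patch."""
    return np.sum(patch * even) ** 2 + np.sum(patch * odd) ** 2

n = 32
even, odd = gabor_pair(n, sf=0.125, tf=0.125, sigma_x=4.0, sigma_t=4.0)

# Probe with drifting gratings: one matched to the filter's preferred
# direction and speed, one drifting the opposite way.
x = np.arange(n) - n // 2
t = np.arange(n) - n // 2
X, T = np.meshgrid(x, t, indexing="ij")
me_pref = motion_energy(np.cos(2 * np.pi * 0.125 * (X + T)), even, odd)
me_opp = motion_energy(np.cos(2 * np.pi * 0.125 * (X - T)), even, odd)
print(f"preferred direction: {me_pref:.1f}, opposite direction: {me_opp:.1f}")
```

Squaring and summing the even and odd outputs discards the phase of the stimulus while preserving direction selectivity, analogous to complex-cell responses.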

Meeting abstract presented at VSS 2014
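The model-fitting and evaluation steps described in the abstract (L2-regularized linear regression fit per voxel, then prediction on an independent data set) can be sketched as follows. This is a minimal illustration on synthetic data: the array shapes, the noise level, the regularization strength, and the use of a single fixed alpha are all assumptions, not the authors' actual pipeline (which would, for example, select alpha by cross-validation).

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 300, 100   # fMRI time points (TRs); assumed sizes
n_features = 50              # e.g. silhouette-edge or motion-energy channels
n_voxels = 20

# Synthetic stimulus features and voxel responses with known weights.
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_w + 0.5 * rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + 0.5 * rng.standard_normal((n_test, n_voxels))

def fit_ridge(X, Y, alpha):
    """Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'Y, all voxels at once."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Fit one weight vector per voxel, then predict held-out responses.
w = fit_ridge(X_train, Y_train, alpha=10.0)
Y_pred = X_test @ w

# Evaluate each voxel model as the correlation between predicted and
# measured responses on the independent data set.
pred_corr = np.array([
    np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)
])
print(f"median held-out prediction correlation: {np.median(pred_corr):.2f}")
```

Comparing such per-voxel prediction correlations between two feature spaces (silhouette edges vs. motion energy) is one common way to ask which model better describes an area, as the abstract does for V4 and LO.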

