Vision Sciences Society Annual Meeting Abstract | August 2012
A pattern classification approach to discriminating neural responses to faces and bodies in motion
Author Affiliations
  • Alice O'Toole
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Vaidehi Natu
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Allyson Rice
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • P. Jonathon Phillips
    National Institute of Standards and Technology
  • Xiaobo An
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
Journal of Vision August 2012, Vol.12, 1181. doi:https://doi.org/10.1167/12.9.1181
Abstract

The specialized neural functions associated with the visual processing of faces and bodies span multiple cortical regions in ventral temporal (VT) and superior temporal (ST) cortex. We used pattern classification to separate the neural activity elicited in response to viewing moving and static presentations of faces and bodies. Using fMRI, participants (n = 8) viewed 12-second blocks of videos of people walking toward a camera and 12-second sequences of the "best" static images from the videos. For both dynamic and static presentation types, participants saw: (1) videos/images of the whole person (WP); (2) videos/images of the body with the face pixelated (B); and (3) videos/images of the face with the body obscured (F). Pattern classifiers were implemented to discriminate all pairs of conditions (e.g., dynamic F vs. dynamic B) in each participant’s brain. Neural activation patterns were highly discriminable for static-to-static (Mean d’ = 2.22, se = .21) and motion-to-motion (Mean d’ = 2.15, se = .24) comparisons (F vs. B, F vs. WP, B vs. WP) in functionally localized face/object-selective regions of VT cortex. In both cases, F vs. B and F vs. WP activations were more separable than B vs. WP. In ST cortex, these conditions were also separable, but the motion-to-motion F vs. B comparison was significantly more discriminable than the other comparisons. For motion vs. static conditions, neural activity maps in VT cortex were separable only for cross-condition comparisons that included the face (F vs. B, F vs. WP; Mean d’ = 1.58, se = .35). Within a condition (e.g., moving F vs. static F), the neural activation maps could not be separated at levels above chance (Mean d’ = -0.23, se = .48). This study offers a systematic dissection of the neural representations of human face and body motion in a natural stimulus.
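
For illustration only, the sketch below shows one common way an analysis like the one described above can be set up: a linear classifier is trained on voxel activation patterns from two conditions (e.g., dynamic F vs. dynamic B) with leave-one-run-out cross-validation, and discriminability is summarized as d’ computed from the classifier’s hit and false-alarm rates. The abstract does not specify the classifier, cross-validation scheme, or data dimensions, so the choice of a linear SVM, the `classify_conditions` and `dprime` helpers, and the synthetic data shapes are all assumptions, not the authors’ actual pipeline.

```python
# Minimal sketch of a two-condition MVPA analysis summarized with d'.
# Assumptions (not from the abstract): linear SVM classifier,
# leave-one-run-out cross-validation, synthetic stand-in data.
import numpy as np
from scipy.stats import norm
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

def dprime(hit_rate, fa_rate, eps=1e-3):
    """Convert hit and false-alarm rates to d', clipping away from 0 and 1."""
    hit_rate = np.clip(hit_rate, eps, 1 - eps)
    fa_rate = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def classify_conditions(patterns, labels, runs):
    """patterns: (n_blocks, n_voxels) ROI activation patterns.
    labels: 0/1 condition codes (e.g., B vs. F).
    runs: scan-run index per block, used for leave-one-run-out CV."""
    hits, fas = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(patterns, labels, runs):
        clf = LinearSVC().fit(patterns[train_idx], labels[train_idx])
        pred = clf.predict(patterns[test_idx])
        true = labels[test_idx]
        hits.append(np.mean(pred[true == 1] == 1))  # condition-1 blocks labeled 1
        fas.append(np.mean(pred[true == 0] == 1))   # condition-0 blocks labeled 1
    return dprime(np.mean(hits), np.mean(fas))

# Synthetic data standing in for two conditions across 6 runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 500)) + np.repeat([0.0, 0.3], 24)[:, None] * rng.normal(size=500)
y = np.repeat([0, 1], 24)
runs = np.tile(np.arange(6), 8)
print(f"d' = {classify_conditions(X, y, runs):.2f}")
```

In this kind of setup, chance performance corresponds to d’ near zero, which is how a within-condition comparison such as moving F vs. static F failing to separate (Mean d’ = -0.23) would be read.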

Meeting abstract presented at VSS 2012
