July 2013, Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
The time course of chromatic and achromatic information extraction in a face-gender discrimination task
Author Affiliations
  • Kim Dufresne
    Psychology, University of Montreal
  • Laurent Caplette
    Psychology, University of Montreal
  • Valérie English
    Psychology, University of Montreal
  • Maxime Fortin
    Psychology, University of Montreal
  • Mélissa Talbot
    Psychology, University of Montreal
  • Daniel Fiset
Psychoeducation and Psychology, Université du Québec en Outaouais
  • Frederic Gosselin
    Psychology, University of Montreal
  • Nicolas Dupuis-Roy
    Psychology, University of Montreal
Journal of Vision July 2013, Vol. 13, 414. https://doi.org/10.1167/13.9.414
Citation: Kim Dufresne, Laurent Caplette, Valérie English, Maxime Fortin, Mélissa Talbot, Daniel Fiset, Frederic Gosselin, Nicolas Dupuis-Roy; The time course of chromatic and achromatic information extraction in a face-gender discrimination task. Journal of Vision 2013;13(9):414. https://doi.org/10.1167/13.9.414.

A previous study using the Bubbles technique (Dupuis-Roy et al., 2009) showed that the eyes, the eyebrows, and the mouth were the most potent features for face-gender discrimination (see also Brown & Perrett, 1993; Russell, 2003, 2005). Intriguingly, the results also revealed a large positive correlation between use of the mouth region and rapid correct answers. Given the highly discriminative color information in this region, we hypothesized that the extraction of color and luminance cues may follow different time courses. Here, we tested this possibility by sampling the chromatic and achromatic face cues independently with spatial and temporal Bubbles (see Gosselin & Schyns, 2001; Blais et al., 2009). Ninety participants (45 men) completed 900 trials of a face-gender discrimination task with briefly presented (200 ms) sampled faces. To create a stimulus, we first isolated the S and V channels of the HSV color space for 300 color pictures of frontal-view faces (average interpupillary distance of 1.03 deg of visual angle) and adjusted the S channel so that every color was isoluminant (±5 cd/m²); we then sampled the S and V channels independently through space and time with 3D Gaussian windows. The group classification image computed on response accuracy revealed that within the first 60 ms participants used the color information in the right eye-eyebrow and mouth regions, and that they relied mostly on the luminance information in the eye-eyebrow regions thereafter (>60 ms). Further classification images were computed for each gender-stimulus category. These indicate that chromatic information in the mouth region led to systematic categorization errors. An analysis of the chromatic information available in this facial area suggests that these errors do not stem from our face database but rather reflect a perceptual bias. Altogether, these results help to disentangle the relative contributions of chromatic and luminance information to face-gender discrimination.
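The sampling procedure described above — revealing a channel through randomly placed 3D Gaussian windows, then weighting the revealed locations by response accuracy to build a classification image — can be sketched as follows. This is an illustrative reconstruction following the general Bubbles method of Gosselin & Schyns (2001), not the authors' code; the array sizes, bubble counts, Gaussian sigmas, and simulated responses are all assumptions.

```python
import numpy as np

def bubble_mask(shape, centers, sigmas):
    """Sum of 3D Gaussian windows over (time, y, x), clipped to [0, 1]."""
    t, y, x = np.meshgrid(*(np.arange(s) for s in shape), indexing="ij")
    mask = np.zeros(shape)
    for ct, cy, cx in centers:
        mask += np.exp(-0.5 * (((t - ct) / sigmas[0]) ** 2
                               + ((y - cy) / sigmas[1]) ** 2
                               + ((x - cx) / sigmas[2]) ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(0)
shape = (12, 64, 64)             # (frames, height, width) -- assumed sizes
n_trials, n_bubbles = 200, 15    # assumed values

# One mask per trial: where (and when) the channel is revealed.
masks = np.stack([
    bubble_mask(shape,
                rng.integers(0, np.array(shape), size=(n_bubbles, 3)),
                sigmas=(1.5, 4.0, 4.0))
    for _ in range(n_trials)
])

# Sample a channel movie through each mask (mid-gray elsewhere); the S and
# V channels would each get their own independent set of masks.
channel = rng.random(shape)      # stand-in for the V (or S) channel movie
stimuli = masks * channel + (1 - masks) * 0.5

# Classification image: masks weighted by mean-centered accuracy, so
# locations revealed on correct trials accumulate positive weight.
accuracy = rng.integers(0, 2, size=n_trials).astype(float)  # fake responses
ci = np.tensordot(accuracy - accuracy.mean(), masks, axes=1)
```

In the actual experiment, `accuracy` would come from participants' responses, and the resulting `ci` volume could be examined frame by frame to trace which regions drive correct answers over the 200 ms presentation.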

Meeting abstract presented at VSS 2013

