Abstract
Previous research demonstrates that individuals can extract a single summary statistic from a crowd of faces, even when the faces are viewed rapidly at 20 Hz (Haberman & Whitney, 2009). However, it remains unclear whether individuals can extract multiple high-level ensemble percepts in a similarly efficient manner. To address this question, we created a set of face stimuli morphed along two dimensions: emotion (from happy to sad to angry) and identity (from Caucasian to Asian to African American). The stimulus set was psychophysically controlled to be equally discriminable along both dimensions. Participants viewed a sequentially displayed crowd of 18 faces, each presented for 50 ms. After the display disappeared, participants were cued to report either the average emotion or the average identity of the crowd using a method-of-adjustment task. Participants extracted both the average emotion and the average identity of the crowd without a pre-cue, indicating that they successfully ensemble coded both dimensions. Additionally, participants estimated the average emotion and identity more accurately when presented with the full crowd than with a single-face subset, suggesting that they integrated information across multiple faces rather than sampling a single emotion or identity from the crowd. Results from additional experimental controls, such as crowds composed of scrambled faces, indicate that participants were not merely relying on low-level features to extract multiple dimensions but were instead engaging higher-level processing strategies. Taken together, these results indicate that individuals may utilize multiple ensemble percepts to efficiently analyze complex scenes.
Meeting abstract presented at VSS 2014