Abstract
Faces with emotional expressions attract and hold visual attention more strongly than faces with neutral expressions. Moreover, emotional faces are thought to have prioritised access to visual awareness. However, images of facial expressions differ on many low-level image properties, which poses a problem for interpretation: any such difference between image conditions is a valid candidate explanation for an assumed cognitive effect, so accounting for these differences is crucial. Here we set out to identify the image features of expressive faces that affect their access to awareness. In the current experiment, we presented two face images, expressing anger, happiness, or a neutral expression, to the left and right of fixation in one eye, while dynamic masks were presented at the corresponding locations in the other eye. Participants reported which of the two faces they perceived first. Consistent with previous literature, the results show that happy expressions have prioritised access to awareness. More importantly, using a combination of machine learning and feature-selection methods, we show that contrast-energy differences between the two simultaneously presented images predict which expression will be perceived first. Interestingly, the contrast energy that predicts prioritised access to awareness is not the same as the contrast energy that allows the image category (i.e. angry or happy) to be decoded. To our knowledge, we are the first to show that the race for access to awareness between two images can be predicted using feature-selection and machine learning methods. Moreover, we show that the image features that predict relative access to awareness differ from those that define the facial expressions used in our task. This suggests that the image properties that determine access to awareness do not reflect the expression of a face.