Abstract
A recent study (Gao & Wilson, 2014) suggested that humans automatically extract the similarities and the most significant dimensions of difference among faces, through a mechanism that resembles Principal Component Analysis. Here we investigated the neural underpinnings of this mechanism. Ten adults studied 16 faces while their neural responses were recorded with fMRI. The 16 faces were synthesized from 3D-scanned adult male faces and were at equal physical distances from the average face in a multidimensional image space. The first principal component of the 16 faces explained 50% of the variance among them. In six sessions, each of the 16 faces was presented six times in a fast event-related design. In the last two sessions, we also presented the average face and two faces representing the first principal component (PC) of the studied faces. We analyzed the similarity of neural response patterns (Kriegeskorte, Mur, & Bandettini, 2008) among the studied faces, and between the studied faces and the average and PC faces, in four cortical areas identified in an independent face localizer scan: the bilateral fusiform face areas (FFA) and occipital face areas (OFA). Pattern similarity among the studied faces decreased with learning in the left OFA and bilateral FFA (ps < .05). Pattern similarity between the studied faces and the PC faces remained stable throughout learning in all areas (n.s.). Pattern similarity between the studied faces and the average face increased with learning in the left OFA and right FFA (ps < .05). These results suggest that, with learning, the neural representations of individual faces become more differentiated, while the relation between the studied faces and the faces capturing their similarities (the average face) and their most significant differences (the PC faces) remains stable. Thus, we provide the first evidence of the neural mechanisms underlying the implicit learning of average and PC faces.
Meeting abstract presented at VSS 2015
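The abstract refers to two analyses: extracting the first principal component of a face set embedded in an image space, and computing pattern similarity between voxel-wise neural responses (representational similarity analysis). The following is a minimal sketch of both steps on synthetic data, assuming faces are vectors in an image space centred on an average face and neural patterns are per-face voxel-response vectors; it is an illustration, not the authors' actual pipeline, and all dimensions and variable names are hypothetical.

```python
# Minimal, hypothetical sketch of the two analyses mentioned in the abstract:
# (1) PCA over a face set centred on the average face, (2) pattern similarity
# between voxel-wise response patterns (RSA-style, Kriegeskorte et al., 2008).
import numpy as np

rng = np.random.default_rng(0)

# --- 1) PCA over a synthetic face set ---------------------------------------
# 16 "faces" as points in a 50-dimensional image space, scaled to equal
# distance from the average face (mirroring the stimulus construction).
n_faces, n_dims = 16, 50
faces = rng.normal(size=(n_faces, n_dims))
faces -= faces.mean(axis=0)                             # centre on the average face
faces /= np.linalg.norm(faces, axis=1, keepdims=True)   # equal distance to the average

# Principal components via SVD of the (re-)centred data matrix.
centred = faces - faces.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
print(f"variance explained by PC1: {var_explained[0]:.2f}")

# Faces representing the first PC: the average face +/- a step along PC1.
average_face = faces.mean(axis=0)
pc1_plus = average_face + Vt[0]
pc1_minus = average_face - Vt[0]

# --- 2) Pattern similarity of neural responses ------------------------------
# Synthetic voxel patterns (one per face) for a single ROI; similarity is the
# pairwise Pearson correlation between the patterns.
n_voxels = 200
patterns = rng.normal(size=(n_faces, n_voxels))
similarity = np.corrcoef(patterns)                      # 16 x 16 similarity matrix

# Mean off-diagonal similarity among the studied faces: the kind of quantity
# one would track across sessions to measure changes with learning.
off_diag = similarity[~np.eye(n_faces, dtype=bool)]
print(f"mean pattern similarity among studied faces: {off_diag.mean():.3f}")
```

With real stimuli and fMRI data, the face matrix would hold image coordinates of the synthesized faces and the pattern matrix would hold beta estimates per face within each ROI; the random data here only demonstrate the shape of the computation.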