Abstract
Gaze patterns towards faces typically concentrate in a region bounded by the eyes above and the mouth below (e.g. van Belle et al., 2010). This implies a natural retinotopic bias: eyes will appear more often in the upper than the lower visual field, and vice versa for mouths. We asked whether this bias is reflected in perceptual sensitivity and cortical processing of face features. In a behavioral experiment we tested whether recognition performance for eyes and mouths varied with retinotopic location. On each trial, healthy human participants (n=18) saw a brief (200 ms) image of a single eye or mouth, accompanied by a noise mask. Recognition performance was tested in a match-to-sample task. In a canonical condition, eye and mouth stimuli were presented in their typical upper and lower visual field locations, while in a second condition these locations were reversed. We found strong evidence for the predicted feature-by-location interaction (F=21.87, P<0.001). Recognition of eyes was significantly better in the upper than the lower visual field (t=3.34, P<0.01), while the reverse was true for mouth recognition (t=3.40, P<0.01). We speculated that this might reflect a correlation between the spatial and feature preferences of neural populations in face-sensitive cortex. Based on this hypothesis, we performed an fMRI experiment (n=21) using identical stimuli. Preliminary results indicate that patterns evoked by eyes vs. mouths could be separated significantly better than chance in the inferior occipital gyrus (IOG) and fusiform face area (FFA) of either hemisphere. Crucially, separability of patterns was significantly better in the canonical condition in right IOG (t=2.20, P<0.05), and a similar trend was observed for right FFA (t=1.92, P=0.07). These results indicate that sensitivity to face features is spatially heterogeneous, both across the visual field and in human face-sensitive cortex. Face feature sensitivity thus likely reflects input statistics.
Meeting abstract presented at VSS 2014