Abstract
Recent work on neural-based image reconstruction has established that fMRI patterns can support this enterprise. However, these attempts rely on prespecified image features motivated by their general biological plausibility. Here, we derive facial image features directly from patterns of empirical data and use those features for face reconstruction. Further, we confirm the robustness of this approach by applying it separately to fMRI, EEG, and psychophysical data. More specifically, we collect behavioral and neuroimaging data from healthy adults during individual face recognition (e.g., during a one-back identity task). These data are subjected to a method akin to reverse correlation that derives facial features separately from behavioral and neural responses by exploiting the confusability patterns associated with different facial identities. We then combine those features to reconstruct novel face images based on the responses they elicit. This approach allows us (i) to estimate an entire gallery of visual features associated with different neural signals, (ii) to support significant levels of reconstruction accuracy for each of the empirical modalities considered (i.e., fMRI, EEG, and psychophysical data), and (iii) to relate homologous representations derived from different modalities. From a theoretical perspective, the present findings provide key insights into the nature of high-level visual representations, into their reliance upon specific neural resources (e.g., different cortical areas), and into joint brain-behavior models of face processing. At the same time, they enable a broad range of image-reconstruction applications via a general, multimodal methodological approach.
Meeting abstract presented at VSS 2016
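To make the described pipeline concrete, the sketch below illustrates one plausible reading of it: deriving a face space from a confusability matrix, obtaining image-based features via a reverse-correlation-style weighted average, and reconstructing a face as a linear combination of those features. This is not the authors' code; it uses classical MDS as a stand-in for their confusability-based feature derivation, toy random "images" in place of real stimuli, and all names, dimensions, and parameter choices are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout, not the authors' actual pipeline):
# (1) derive a "face space" from a confusability matrix via classical MDS,
# (2) obtain image-based features by a reverse-correlation-style weighted
#     sum of face images along each dimension,
# (3) reconstruct a face as a linear combination of those features.

import numpy as np

rng = np.random.default_rng(0)

n_faces, h, w = 20, 32, 32            # toy stimulus set of 32x32 "face" images
faces = rng.random((n_faces, h * w))  # placeholder images (rows = vectorized faces)

# --- Step 1: confusability -> face space (classical MDS) -------------------
# 'confusion' stands in for a behavioral or neural confusability matrix:
# higher values mean two identities are more often confused / more similar.
confusion = np.corrcoef(faces) + 0.05 * rng.random((n_faces, n_faces))
confusion = (confusion + confusion.T) / 2            # enforce symmetry
dissim = 1.0 - confusion                             # convert to dissimilarity
J = np.eye(n_faces) - np.ones((n_faces, n_faces)) / n_faces
B = -0.5 * J @ (dissim ** 2) @ J                     # double-centering
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1][:5]                  # keep top 5 dimensions
coords = evecs[:, order] * np.sqrt(np.clip(evals[order], 0, None))

# --- Step 2: reverse-correlation-style visual features ---------------------
# Each face-space dimension yields one image-based feature: the average of
# the face images weighted by their standardized coordinates on that axis.
z = (coords - coords.mean(0)) / coords.std(0)
features = z.T @ faces / n_faces                     # (5, h*w) feature images

# --- Step 3: reconstruct a face from its estimated coordinates -------------
# In the experiment, a target's coordinates would be estimated from the
# responses it elicits; here we simply reuse its MDS coordinates.
target = 0
recon = faces.mean(0) + z[target] @ features
print("reconstruction correlation:",
      np.corrcoef(recon, faces[target])[0, 1].round(3))
```

Under this reading, swapping in a confusability matrix derived from fMRI, EEG, or behavioral responses would leave Steps 2-3 unchanged, which is one way the same procedure could apply across modalities as the abstract reports.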