Abstract
Neurons selectively responsive to faces exist in the ventral visual stream of both monkeys and humans. However, the characteristics of face cell receptive fields are poorly understood. Here we use tensor decompositions of faces to model a range of possibilities for the neural coding of faces that may inspire future experimental work. Tensor decomposition is, in a sense, a generalization of principal component analysis from 2-D matrices to higher-dimensional arrays. For this study the input face set was a 4-D array, with two spatial dimensions, color as the third dimension, and the population of different faces as the fourth dimension. Tensor decomposition of a population of faces produces a set of components called tensorfaces. Different faces can be reconstructed by forming different weighted combinations of these components, so a set of tensorfaces constitutes a population code for the representation of faces. A special feature of the tensor decomposition algorithm we used was the ability to specify the complexity of the tensorface components, measured as Kolmogorov complexity (algorithmic information). High-complexity tensorfaces have clearly face-like appearances, while low-complexity tensorfaces have blob-like appearances that only crudely approximate faces. For a fixed population size, high-complexity tensorfaces produced smaller reconstruction errors than low-complexity tensorfaces for familiar faces (faces in the training set). However, high-complexity tensorfaces generalized more poorly to novel face stimuli that were very different from the training set. This raises the possibility that it may be advantageous for biological face cell populations to contain a diverse range of complexities rather than a single optimal complexity.
Meeting abstract presented at VSS 2017
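The following is a minimal sketch, not the authors' code, of the kind of decomposition the abstract describes: a standard CP (PARAFAC) decomposition of a 4-D face array using the TensorLy library. The array sizes and the random placeholder data are illustrative assumptions, and this sketch does not implement the Kolmogorov-complexity constraint on the components that is special to the authors' algorithm.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# 4-D face array (assumed sizes): height x width x color channels x number of faces.
# Random data stands in for an actual face training set.
faces = tl.tensor(np.random.rand(64, 64, 3, 200))

# Decompose into a fixed number of rank-1 components ("tensorfaces" in the
# abstract's terminology); the rank plays the role of the population size.
rank = 30  # illustrative choice
weights, factors = parafac(faces, rank=rank, n_iter_max=200)

# Reconstruct the face set as weighted combinations of the components and
# measure the relative reconstruction error on the familiar (training) faces.
reconstruction = tl.cp_to_tensor((weights, factors))
rel_error = tl.norm(faces - reconstruction) / tl.norm(faces)
print(f"relative reconstruction error: {rel_error:.3f}")

In this standard CP setting, each component is an outer product of one vector per dimension (row, column, color, face identity); testing generalization to novel faces, as in the abstract, would amount to projecting held-out faces onto the learned spatial/color factors and measuring the resulting reconstruction error.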