Sidney Lehky, Anh Huy Phan, Andrzej Cichocki, Keiji Tanaka; Coding of faces by tensor components. Journal of Vision 2017;17(10):243. doi: 10.1167/17.10.243.
Neurons selectively responsive to faces exist in the ventral visual stream of both monkeys and humans. However, the characteristics of face cell receptive fields are poorly understood. Here we use tensor decompositions of faces to model a range of possibilities for the neural coding of faces that may inspire future experimental work. Tensor decomposition is in some sense a generalization of principal component analysis from 2-D to higher dimensions. For this study the input face set was a 4-D array, with two spatial dimensions, color as the third dimension, and the population of different faces as the fourth dimension. Tensor decomposition of a population of faces produces a set of components called tensorfaces. Different faces can be reconstructed by forming different weighted combinations of those components. A set of tensorfaces thus forms a population code for the representation of faces. A special feature of the tensor decomposition algorithm we used was the ability to specify the complexity of the tensorface components, measured as Kolmogorov complexity (algorithmic information). High-complexity tensorfaces have clear face-like appearances, while low-complexity tensorfaces have blob-like appearances that crudely approximate faces. For a fixed population size, high-complexity tensorfaces produced smaller reconstruction errors than low-complexity tensorfaces on familiar faces. However, high-complexity tensorfaces generalized more poorly to novel face stimuli that were very different from the input face training set. This raises the possibility that it may be advantageous for biological face cell populations to contain a diverse range of complexities rather than a single optimal complexity.
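The pipeline the abstract describes — stack a face population into a 4-D array (height × width × color × face identity), decompose it into rank-1 components, and reconstruct individual faces as weighted combinations of those components — can be sketched with a plain CP (PARAFAC) decomposition fitted by alternating least squares. The sketch below is a toy NumPy illustration on synthetic data, not the authors' algorithm: their method additionally controls the Kolmogorov complexity of the components, which plain CP does not, and `cp_als_4d` and the array sizes here are illustrative choices.

```python
import numpy as np

def cp_als_4d(T, rank, n_iter=100, seed=0):
    """Rank-`rank` CP decomposition of a 4-way tensor by alternating least squares.
    Returns one factor matrix per mode; column r of the factors jointly defines
    the r-th rank-1 component (a "tensorface" when the modes are
    height x width x color x face identity)."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((s, rank)) for s in T.shape]
    subs = 'ijkl'
    for _ in range(n_iter):
        for mode in range(4):
            others = [m for m in range(4) if m != mode]
            # MTTKRP: contract the tensor with every factor except this mode's
            spec = (subs + ',' + ','.join(subs[m] + 'r' for m in others)
                    + '->' + subs[mode] + 'r')
            G = np.einsum(spec, T, *[A[m] for m in others])
            V = np.ones((rank, rank))        # Hadamard product of Gram matrices
            for m in others:
                V *= A[m].T @ A[m]
            A[mode] = G @ np.linalg.pinv(V)  # least-squares update for this mode
    return A

def reconstruct(A):
    """Sum of rank-1 terms: the weighted combination that rebuilds each face."""
    return np.einsum('ir,jr,kr,lr->ijkl', *A)

# Toy "face population": 16x16 pixels, 3 color channels, 20 faces, built to be
# exactly rank 3 so the fit should recover it closely (real faces are not).
rng = np.random.default_rng(1)
true_factors = [rng.standard_normal((s, 3)) for s in (16, 16, 3, 20)]
faces = reconstruct(true_factors)

factors = cp_als_4d(faces, rank=3, n_iter=200)
rel_err = np.linalg.norm(reconstruct(factors) - faces) / np.linalg.norm(faces)
```

Here each face is encoded by 3 weights (one per component), mirroring the abstract's point that a small set of tensorfaces forms a population code; the familiar-vs-novel trade-off the authors report would correspond to how well `factors` fitted on a training population reconstruct faces held out of `faces`.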
Meeting abstract presented at VSS 2017