Abstract
How do infants see the world? The object is the unit of visual perception and attention. In the adult visual cortex, object representations are spatially organized by category, with a broad distinction between animate and inanimate objects encompassing finer-grained distinctions between human and nonhuman faces and bodies, and between big and small, natural and artificial objects. While some cortical hallmarks of this organization seem to be in place already at birth, it remains unknown when that organization becomes functional enough to drive infants' looking behavior.
We addressed this question with eye-tracking. We measured the differential looking time (DLT) as 4- and 19-month-olds looked at pairs of exemplars drawn from two of the above eight categories. From the DLTs, for each age group, we built a representational dissimilarity matrix (RDM) capturing infants' perceived similarity/dissimilarity for each stimulus pair. Using representational similarity analysis (RSA), we found that 4-month-olds showed an overall preference for human faces, while they discriminated all other objects based on image size (number of pixels; Figure 1A). The broad animate-inanimate distinction emerged only when, in a second study, we controlled for image size (Figure 1C).
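As a rough illustration of this analysis pipeline, the sketch below shows how a behavioral RDM could be assembled from pairwise DLT-derived dissimilarities and compared to another RDM via a Spearman correlation on their upper triangles, as is common in RSA; the function names, the (i, j)-pair input format, and the choice of correlation are our assumptions, not the study's actual code.

```python
import numpy as np
from scipy.stats import spearmanr

def build_rdm(pair_dissimilarity, n_stimuli):
    """Assemble a symmetric RDM from pairwise dissimilarities.

    pair_dissimilarity: dict mapping (i, j) stimulus-index pairs to a
        dissimilarity score derived from differential looking time
        (hypothetical input format).
    """
    rdm = np.zeros((n_stimuli, n_stimuli))
    for (i, j), d in pair_dissimilarity.items():
        rdm[i, j] = rdm[j, i] = d
    return rdm

def rsa_correlation(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu])
```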
In contrast, the RDM of 19-month-olds matched the categorical organization of the mature visual cortex. Confirming this, their RDM also correlated with object representations in the highest layers of the AlexNet deep neural network (DNN), a model of visual object recognition (Figure 1B).
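For the DNN comparison, a minimal sketch of how an AlexNet layer RDM might be computed is given below; the use of torchvision's pretrained AlexNet, the correlation-distance metric, and the read-out after the second fully connected layer are illustrative assumptions.

```python
import torch
from torchvision.models import alexnet
from scipy.spatial.distance import pdist, squareform

def alexnet_layer_rdm(images, n_classifier_layers=6):
    """Model RDM from activations in an upper layer of AlexNet.

    images: preprocessed stimuli, tensor of shape (n_stimuli, 3, 224, 224).
    n_classifier_layers: how many classifier layers to run before reading
        out activations; 6 stops after the second fully connected layer's
        ReLU (a hypothetical read-out point).
    """
    model = alexnet(weights="IMAGENET1K_V1").eval()
    with torch.no_grad():
        x = model.avgpool(model.features(images)).flatten(1)
        for layer in model.classifier[:n_classifier_layers]:
            x = layer(x)
    # Correlation distance between every pair of stimulus activations
    return squareform(pdist(x.numpy(), metric="correlation"))
```

The resulting model RDM could then be compared to the infants' behavioral RDM with the same Spearman-based RSA correlation sketched above.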
Thus, infants first show a preference for human faces and then form the broad categories of animate and inanimate objects (by the 4th month of life). The adult-like categorical organization of visual object representations is fully functional by the 19th month of life.