Abstract
Real-world size is a behaviorally relevant object property that is automatically encoded, is reflected in the organization of the human ventral temporal cortex, and can be decoded from neural responses as early as 150 ms after stimulus onset. However, while real-world size is a distinct, conceptual object property, it strongly correlates with at least two other object properties: rectilinearity (large objects typically have more rectilinear features) and fixedness (large objects are more often fixed in the environment). Here, we aimed to dissociate the temporal profile of object size processing from that of the covarying shape and fixedness properties. During EEG recording, participants (N=33) viewed isolated objects drawn from a 2 (real-world size: large, small) × 2 (shape: rectilinear, curvilinear) × 2 (fixedness: fixed, transportable) design. This design allowed us to decode each dimension (e.g., size) across the other dimensions (e.g., shape, fixedness). For example, we tested whether (and when) a classifier trained to distinguish large from small objects that were fixed and/or rectilinear (e.g., bed vs. mailbox) successfully generalized to distinguishing large from small objects that were transportable and curvilinear (e.g., hot-air balloon vs. balloon). Across posterior electrodes, cross-decoding of real-world size was significant from 350 ms after stimulus onset for all cross-decoding splits. Similar cross-decoding analyses of the other two object properties revealed cross-decoding of shape from 170 ms onward and no significant cross-decoding of fixedness at any time point. These results indicate that higher-level (shape-invariant) representations of real-world object size emerge relatively late during visual processing.
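
To make the cross-decoding scheme concrete, the following is a minimal, illustrative sketch of time-resolved cross-decoding, not the authors' analysis code: a classifier is trained at each time point to separate large from small objects in one cell of the design (e.g., fixed/rectilinear trials) and tested on a held-out cell (e.g., transportable/curvilinear trials). The simulated data, array shapes, classifier choice (LDA), and variable names are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' pipeline) of time-resolved cross-decoding:
# train on one cell of the 2x2x2 design, test generalization to another cell,
# independently at each time sample.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Simulated epoched EEG: trials x posterior channels x time samples (hypothetical sizes).
n_trials, n_channels, n_times = 200, 32, 120
X_train = rng.normal(size=(n_trials, n_channels, n_times))  # e.g., fixed/rectilinear trials
y_train = rng.integers(0, 2, n_trials)                      # 0 = small, 1 = large
X_test = rng.normal(size=(n_trials, n_channels, n_times))   # e.g., transportable/curvilinear trials
y_test = rng.integers(0, 2, n_trials)

# Fit on the training split and score on the held-out split at every time point;
# above-chance accuracy indicates generalization across the covarying dimensions.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    clf.fit(X_train[:, :, t], y_train)
    accuracy[t] = clf.score(X_test[:, :, t], y_test)

print("peak cross-decoding accuracy:", accuracy.max())
```

In this scheme, reliably above-chance accuracy at a given latency when training and test trials differ in shape and fixedness would indicate a representation of real-world size at that latency that is invariant to those covarying properties, which is the logic behind the cross-decoding splits reported above.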