Abstract
Recent findings suggest that semantic memories may be stored in multiple modalities, rather than in a single amodal code, and that semantic information from different modalities may be stored in modality-specific processing regions of the cortex. We tested these ideas using novel objects and artificial concepts. Objects were divided into sets of four; all were highly visually similar, biological-looking, and three-dimensional. Artificial concepts were triads of semantic features (presented as words) from the auditory (AUD) modality (e.g., howls, squeals), the tactile (TAC) modality (e.g., hard, cold), or a combination (SEM) of modalities (e.g., noisy, soft, friendly). Objects and artificial concepts were associated during two one-hour training sessions prior to scanning. During scanning, participants performed a simultaneous match task that could be completed using only the information in the visual images, without explicit reference to the trained concepts. As expected, objects in the AUD condition produced more activation along the superior temporal gyrus than objects in a non-trained (NON) condition. TAC objects produced less activation in parietal and occipital cortex than NON objects; one speculation is that the TAC concepts interfered with visual processing because they invoked associated visual features that were incongruent with the visual stimulus. SEM objects produced more activation in the inferior frontal and fusiform gyri than objects in a name-only condition and the NON condition. In conclusion, associating artificial concepts with novel objects produced activation in regions similar to those found with familiar objects, even though the associations were acquired over a short period and the test task was implicit. These regions respond in a modality-specific manner and lie near modality-specific perceptual processing regions.