Visual word representations provide a unique opportunity to study the effect of visual training on neural selectivity throughout the ventral visual pathway. Even though it has been suggested that the processing of letter strings has inherited many characteristics of object processing (Dehaene & Cohen, 2005, Trends in Cognitive Sciences), no computational models have been developed to generate specific neuroscientific predictions and test them empirically. Here we adapt a neural object recognition model to include case-specific and case-invariant letter string representations, resulting in the model HMAX-WORD. Population-level analyses of the responses within different layers of HMAX-WORD yield specific predictions about a progression from a representational space defined by case to one defined by case-invariant word identity. This case-invariant coding emerged only when high-level units in the model were tuned to letter groups rather than to single letters. We compared the predictions of HMAX-WORD with data from a functional magnetic resonance imaging (fMRI) experiment in which 8 participants viewed words presented in both upper- and lower-case (e.g., ‘HAAN’ versus ‘haan’; rooster). The functional imaging data, analysed with multi-voxel pattern analyses, show a progression from case-dependent to case-invariant representational spaces similar to that found in HMAX-WORD with units tuned to letter groups. This direct comparison of modelling and fMRI evidence confirms that word recognition is consistent with the implementation of abstract orthographic units in a feed-forward architecture. These findings reveal the potential of a combined computational and pattern-analysis-based approach to understand how visual word forms are constructed in the brain and, more generally, to investigate how stimulus dimensions are represented in specific brain regions.
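The core mechanism described above — high-level units tuned to letter groups that pool over case-specific inputs, yielding case-invariant population responses — can be illustrated with a minimal HMAX-style sketch. Everything here is a hypothetical stand-in, not the actual HMAX-WORD implementation: random vectors play the role of case-specific letter features, a tiny word set defines the bigram vocabulary, and max-pooling over two case-specific templates per bigram unit supplies the invariance.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16                      # assumed feature dimensionality per letter
CASES = ("lower", "upper")

# Hypothetical early, case-specific visual features: each (letter, case)
# pair gets an independent random feature vector.
FEATS = {(ch, c): rng.normal(size=DIM)
         for ch in "abcdefghijklmnopqrstuvwxyz" for c in CASES}

def early_layer(word, case):
    """Case-specific stage: concatenated per-letter features."""
    return np.concatenate([FEATS[(ch, case)] for ch in word])

def bigrams(word):
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Bigram vocabulary drawn from a tiny hypothetical stimulus set.
VOCAB = sorted({g for w in ("haan", "boom", "kers") for g in bigrams(w)})

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def high_layer(word, case):
    """Letter-group stage: each unit is tuned to one bigram and pools
    (max) over its two case-specific templates, so its response is
    largely independent of the case the word is presented in."""
    resp = []
    for g in VOCAB:
        templates = [np.concatenate([FEATS[(ch, c)] for ch in g])
                     for c in CASES]
        resp.append(max(cosine(np.concatenate([FEATS[(ch, case)]
                                               for ch in bg]), t)
                        for bg in bigrams(word) for t in templates))
    return np.array(resp)
```

Correlating population response patterns across case then mimics the pattern analyses at the two levels: at the early layer, the patterns for ‘HAAN’ and ‘haan’ are essentially uncorrelated, whereas at the letter-group layer they correlate strongly while different words remain distinguishable.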
Meeting abstract presented at VSS 2012