Abstract
Inverted encoding models (IEMs) have recently become a popular method for investigating neural representations by reconstructing the contents of perception, attention, and memory from neuroimaging data. Here we present a more interpretable and flexible approach, “enhanced inverted encoding modeling” (eIEM), that results in improved reconstructions of visual features across a wide range of perceptual and mnemonic applications. eIEM incorporates several methodological improvements, including proper consideration of the encoder’s population-level tuning functions. Improved interpretability is further gained via a trial-by-trial prediction error-based metric: reconstruction quality can be measured in meaningful units that are directly comparable across experiments, rather than in the arbitrary units that are the current standard. Improved flexibility is gained via eIEM’s new goodness-of-fit feature: for trial-by-trial reconstructions, goodness-of-fit values are obtained independently of (and non-circularly with respect to) prediction error. Incorporating this trial-wise goodness-of-fit information can reliably improve reconstruction quality and brain-behavior correlations. We validate the improved utility of eIEM on methodological grounds and across three pre-existing fMRI datasets: (1) decoding the horizontal position of a perceived stimulus, (2) decoding an attended item’s orientation from a multi-item stimulus array, and (3) decoding the orientation of an item held in working memory. Researchers can also benefit from partial adoption of eIEM: e.g., goodness-of-fit values from eIEM can be used to improve results obtained from any IEM procedure or decoding metric. Notably, our enhanced IEM procedure is easy to apply and broadly accessible; our publicly available Python package implements our recommended approach (on simulated or real neuroimaging data, including fMRI, EEG, MEG, etc.) in one line of code, and is easily modifiable to compare performance metrics and/or scale up to more complex models.