Abstract
Adaptation aftereffects have been widely used to infer the mechanisms of visual coding. In the context of face processing, aftereffects have been interpreted in terms of two alternative models: 1) norm-based codes, in which a facial dimension is represented by the relative activity of a pair of broadly tuned mechanisms with opposing sensitivities; or 2) exemplar codes, in which the dimension is sampled by multiple channels narrowly tuned to different levels of the stimulus. Evidence for or against these alternatives has rested on the different patterns of aftereffects they predict (e.g., whether there is adaptation to the norm, and how adaptation grows with stimulus strength). However, these predictions often hinge on implicit assumptions about both the encoding and decoding stages of the models. We evaluated these latent assumptions to better understand how the alternative models depend on factors such as the number, selectivity, and decoding strategy of the channels, and thereby to clarify the consequential differences between the coding schemes and the adaptation effects that are most diagnostic for discriminating between them. We show that the distinction between norm and exemplar codes depends more on how the information is decoded than on how it is encoded, and that some aftereffect patterns commonly proposed to distinguish the models fail to do so in principle. We also compare how the models depend on assumptions about the stimulus (e.g., broadband vs. punctate) and on the impact of noise. These analyses point to the fundamental distinctions between different coding strategies and to the patterns of visual aftereffects best suited to revealing them.
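To make the two coding schemes concrete, the following is a minimal illustrative sketch, not the paper's actual model: a norm-based code with two broadly tuned opponent channels read out by their difference, and an exemplar code with a bank of narrowly tuned Gaussian channels read out by a population-vector average. Adaptation is modeled as a gain reduction proportional to each channel's response to the adaptor; all parameter values (tuning widths, slopes, adaptation strength) are arbitrary assumptions chosen for illustration.

```python
import numpy as np

def exemplar_responses(x, centers, sigma=1.0):
    """Exemplar code: narrowly tuned Gaussian channels at different levels."""
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def norm_responses(x, slope=0.5):
    """Norm code: two broadly tuned channels with opposing sensitivities."""
    return np.array([1 / (1 + np.exp(-slope * x)),   # prefers positive levels
                     1 / (1 + np.exp(+slope * x))])  # prefers negative levels

def decode_exemplar(r, centers):
    """Population-vector readout: response-weighted mean of channel centers."""
    return np.sum(r * centers) / np.sum(r)

def decode_norm(r):
    """Opponent readout: decoded level is the difference of the two channels."""
    return r[0] - r[1]

def adapt_gains(responses_to_adaptor, strength=0.5):
    """Gain loss proportional to each channel's response to the adaptor."""
    return 1 - strength * responses_to_adaptor / responses_to_adaptor.max()

centers = np.linspace(-5, 5, 21)     # preferred levels of exemplar channels
adaptor, test = 2.0, 0.0             # adapt to a non-norm face, test the norm

# Exemplar code: decoded test level before vs. after adaptation
g_ex = adapt_gains(exemplar_responses(adaptor, centers))
before_ex = decode_exemplar(exemplar_responses(test, centers), centers)
after_ex = decode_exemplar(g_ex * exemplar_responses(test, centers), centers)
print("exemplar aftereffect:", after_ex - before_ex)  # repelled from adaptor

# Norm code: same comparison with the opponent-channel readout
g_nm = adapt_gains(norm_responses(adaptor))
after_nm = decode_norm(g_nm * norm_responses(test))
print("norm aftereffect:", after_nm - decode_norm(norm_responses(test)))
```

In both toy codes, adapting to a positive-level face repels the decoded test away from the adaptor (a negative shift here), while in the norm code adapting at the norm itself leaves both channels equally attenuated and so predicts no aftereffect, illustrating why the decoding stage, not just the tuning, determines which patterns are diagnostic.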