The central role of learning in the visual recognition of complex shapes motivates the hypothesis that the recognition of complex movement patterns might also be based on learning. Evidence supporting this hypothesis was provided by studies showing that human observers learn to recognize individuals based on their facial or full-body movements (e.g., Hill & Pollick, 2000; Kozlowski & Cutting, 1977; O'Toole, Roark, & Abdi, 2002; Troje, Westhoff, & Lavrov, 2005). Moreover, the detection of point-light walkers in dynamic noise can be improved through visual learning (Grossman, Blake, & Kim, 2004; Hiris, Krebeck, Edmonds, & Stout, 2005). Furthermore, the recognition of biological motion depends on stimulus orientation, as does the recognition of stationary objects (Bertenthal, Proffitt, & Kramer, 1987; Pavlova & Sokolov, 2000; Sumi, 1984). Consistent with these psychophysical findings, biological-motion-sensitive neurons in the superior temporal sulcus (STS) of monkeys show view-dependent modulation of their firing rate (Perrett et al., 1985), and imaging studies indicate reduced fMRI activity in human STS for the presentation of inverted point-light walkers (Grossman & Blake, 2001). This suggests that complex movements and static shapes might be encoded by similar orientation-dependent and, potentially, view-dependent mechanisms (Verfaillie, De Troy, & Van Rensbergen, 1994). That such learning mechanisms provide a computationally powerful explanation of biological motion recognition is suggested by theoretical models that account for a variety of experimental results (Giese, 2000; Giese & Poggio, 2003; Lee & Wong, 2004).