Here, we introduce a complementary approach to determine and quantify the microstructure of motion correspondences by taking advantage of the recently discovered non-retinotopic feature attributions in motion displays (see also Nishida, Watanabe, Kuriki, & Tokimoto, 2007; Otto, Öğmen, & Herzog, 2006; Shimozaki, Eckstein, & Thomas, 1999). With the Ternus–Pikler display, for example, we showed previously that feature attribution for single elements is in accordance with the global motion percept of either element or group motion (Öğmen et al., 2006). Here, we determine and quantify the line-to-line correspondences by measuring feature attribution systematically for all possible combinations of lines in a Ternus–Pikler display. In Experiment I, our findings indicate primarily one-to-one “correspondences” between lines in accordance with the global group motion percept (Figure 2B). However, we also found a small amount of feature attribution that was not in accordance with group motion, which might have occurred for two reasons: first, these cases may reflect feature attribution that violates the established motion correspondence; second, they may point to a correspondence ambiguity that causes an unspecific feature attribution. Interestingly, most of these “erroneous” feature attributions occurred retinotopically, that is, according to element motion. This might indicate that, with an ISI of 100 ms, the visual system interprets element motion as a second but much less likely solution to the motion correspondence problem. Moreover, we never found feature attribution in the direction opposite to the global motion percept (e.g., c → b′), which, from a purely spatial perspective, is as likely as attribution in the direction of global motion (e.g., b → c′). Hence, we think it is justified to assume that cases with ambiguous feature attribution point to ambiguous motion correspondences.