When reaching to pick up an object, how are contact points visually selected? Target points are partially determined by the reaching task (e.g., grasp points roughly symmetric about the object's center of gravity), but some ambiguity generally remains: a set of equivalent reach points. Current models of reaching (including Rosenbaum et al. 2000; Todorov 2002) propose that a feedback controller governs movements by minimizing a cost that depends on distance to the target. In the case of ambiguity, however, it is unknown whether the controller picks a particular point from the set before movement or whether it “knows” the ambiguity. These two strategies behave differently under a perturbing force field applied mid-reach: the first corrects for the perturbation, while the second adapts by contacting the new closest viable point. We tested these possibilities psychophysically. Subjects used a PhanTom to virtually touch graphically rendered lines that appeared at various orientations on a virtual surface. During some trials, a force (oriented in various directions with respect to the line) perturbed the movement. If subjects use adaptive control, they should exploit the perturbing force, letting its component along the line carry their hand. Conversely, if subjects touch nearly the same point as in the no-force condition, they must have chosen that point before movement began. We found that reaches “go with the flow,” adapting to external perturbations. This suggests that the brain visually encodes, and adaptively exploits, the set of viable contact points.
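The two candidate strategies can be contrasted with a minimal simulation sketch. This is not the authors' model or analysis code; it is an illustrative discrete-time point-mass with proportional feedback and hypothetical parameters (gain, time step, perturbing force are all assumptions). The "fixed" controller commits to one contact point before movement and corrects against the perturbation; the "adaptive" controller re-targets the currently closest point on the line, so the along-line force component carries the hand to a different endpoint.

```python
import numpy as np

def closest_point_on_line(p, a, d):
    """Project point p onto the line through a with unit direction d."""
    return a + np.dot(p - a, d) * d

def simulate(adaptive, steps=200, dt=0.01, gain=8.0,
             force=np.array([0.3, 0.0])):
    # Hypothetical setup: target line y = 1 (direction along x),
    # constant perturbing force directed along the line.
    a = np.array([0.0, 1.0])            # a point on the target line
    d = np.array([1.0, 0.0])            # unit direction of the line
    pos = np.array([0.0, 0.0])          # hand starts below the line
    fixed_target = closest_point_on_line(pos, a, d)  # chosen pre-movement
    for _ in range(steps):
        # Adaptive control re-selects the nearest viable point every step;
        # fixed control keeps the pre-selected point throughout.
        target = closest_point_on_line(pos, a, d) if adaptive else fixed_target
        vel = gain * (target - pos) + force          # feedback + perturbation
        pos = pos + dt * vel
    return pos

end_fixed = simulate(adaptive=False)  # corrects back toward the chosen point
end_adapt = simulate(adaptive=True)   # drifts along the line with the force
print(end_fixed, end_adapt)
```

Both controllers reach the line, but the adaptive endpoint is displaced along it while the fixed endpoint stays near the pre-selected point; this displacement is the "go with the flow" signature the experiment tests for.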