Abstract
To examine whether targets defined by depth-from-motion information pop out, we conducted visual search experiments in which the target was defined by motion parallax, that is, by motion disparities yoked to observers' voluntary head movements. Stimulus displays were presented for 200 msec (exps. 1 and 2) or 500 msec (exps. 3–6), and observers reported whether the target was present or absent. Display sizes were 4, 9, 16, or 25 items, and the target was present on half of the trials. In the head-moving (HM) conditions, observers performed the task while making head movements to which the item motions were yoked; we expected depth information to be available for target search in these conditions. In the head-fixed (HF) conditions, observers performed the task with their head fixed; the item motions were not yoked to head movement but simply controlled by a PC, so only the motion information was available for target search. In the first two experiments, the target was defined by motion (exp. 1) or speed (exp. 2); these manipulations yielded depth between the target and distracters from motion parallax in the HM condition. In the subsequent experiments, using motion parallax, the target was defined by slant (exp. 3), magnitude of slant (exp. 4), direction of slant (exp. 5), or shape of the depth surface (exp. 6). In all cases, search efficiency in the HM condition was equal to or lower than that in the HF condition. Moreover, almost all observers reported that they hardly saw any depth in the stimulus displays even when the item motion was yoked to their head movements (the HM condition). These results indicate that depth-from-motion information does not produce pop-out. The observers may have used the motion information itself in target search, regardless of the depth from motion. This suggests that depth processing from motion operates at lower temporal and/or spatial frequencies than motion processing. Supported by an HFSP grant.
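
For readers who want the geometry behind the yoking, the following is a minimal sketch, not part of the original study: the function name, parameter values, and the similar-triangles projection are illustrative assumptions. It shows one way an item's on-screen displacement could be tied to the measured head position so that the yoking gain specifies the item's simulated depth; in the HF condition the same displacement profile would be replayed under PC control without head movement, leaving only the motion cue.

```python
# Hedged sketch (assumptions, not the authors' implementation): yoking an
# item's on-screen position to lateral head translation so that the gain
# of the yoking specifies a simulated depth behind (+) or in front of (-)
# the screen plane.

def yoked_displacement(head_x_cm: float,
                       depth_offset_cm: float,
                       viewing_distance_cm: float = 57.0) -> float:
    """Lateral on-screen displacement (cm) of an item simulated at
    `depth_offset_cm` from the screen, for a head translation of
    `head_x_cm` from the starting position.

    Similar triangles for a world-fixed point at distance
    (viewing_distance + depth_offset): the screen image moves by
        head_translation * depth_offset / (viewing_distance + depth_offset),
    i.e., with the head for items behind the screen and against it for
    items in front of the screen.
    """
    return head_x_cm * depth_offset_cm / (viewing_distance_cm + depth_offset_cm)


if __name__ == "__main__":
    # HM condition: displacement updated each frame from measured head position.
    # HF condition: the same displacements replayed with the head fixed.
    for head_x in (-2.0, 0.0, 2.0):  # cm of lateral head translation
        print(head_x, yoked_displacement(head_x, depth_offset_cm=5.0))
```

Under this sketch, identical image motion is shown in both conditions; only the coupling to the observer's own head movement, and hence the parallax-specified depth, differs.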