Ahmad Yoonessi, Curtis L. Baker; Boundary segmentation from dynamic occlusion-based motion parallax. Journal of Vision 2014;14(4):15. doi: https://doi.org/10.1167/14.4.15.
Active observer movement results in retinal image motion that is highly dependent on the scene layout. This retinal motion, often called motion parallax, can yield significant information about the boundaries between objects and their relative depth differences. Previously we examined segmentation from shear-based motion parallax, which consists of only relative motion information. Here, we examine segmentation from dynamic occlusion-based motion parallax, which contains both relative motion and accretion-deletion. We utilized random dots whose motion was modulated by vertical low-spatial-frequency envelopes and synchronized to head movements (Head Sync), or recreated for the same stationary observer using previously recorded head movement data (Playback). Observers judged the orientation of a boundary between regions of oppositely moving dots in a 2AFC task. The results demonstrate that observers perform more poorly when the stimulus motion is synchronized to head movement, particularly at smaller relative depths, even though that head movement provides significant information about depth. Both expansion-compression and accretion-deletion in isolation could support segmentation, albeit with reduced performance. Therefore, unlike our previous results for depth ordering, expansion-compression and accretion-deletion contribute similarly to segmentation. Furthermore, human observers do not appear to utilize depth information to improve segmentation performance.