Abstract
Background. Humans can use local cues to help distinguish edges caused by a change in depth from other types of edges (Vilankar et al., 2014). But which local cues? Here we use the SYNS database (Adams et al., 2016) to automatically label image edges as depth or non-depth, and we use these labels to compare the edge cues used by human and deep neural network (DNN) observers for this task. Labelling. We employed a multi-scale algorithm (Elder & Zucker, 1998) to detect edges in both 2D color imagery and registered 3D range images, and used a probabilistic method to associate image and range edges that match in location and orientation. Image edges with depth contrast >0.1 were labelled as depth edges; image edges without a matching range edge were labelled as non-depth edges. Methods. Observers viewed square image patches, each centered on an image edge and ranging in width from 0.6 to 2.4 degrees (8-32 pixels). Human judgements (depth/non-depth) were compared with the responses of a DNN trained on the same task. Results. Human performance increased with patch size from 65% to 74% correct but remained well below DNN performance (82-86% correct). Agreement between human and DNN observers was above chance but below the agreement between pairs of human observers. For both human and DNN observers, depth edge responses increased with luminance contrast. However, for human observers, darker and bluer patches were more likely to be judged as depth edges, whereas for DNN observers, greener patches were more likely to be judged as depth edges. Also, for humans the role of color increased with patch size, whereas for the DNN it decreased. Conclusion. Several local luminance and color features provide useful cues for depth edge detection. A DNN model provides a partial account of how human observers employ these cues.
Meeting abstract presented at VSS 2018
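The labelling rule described in the abstract can be summarized in a short sketch. This is an illustrative reconstruction rather than the authors' code: the probabilistic matching of image and range edges is stood in for by a simple nearest-match rule with hypothetical location and orientation tolerances, and the depth-contrast measure (only the >0.1 criterion is stated in the abstract) is assumed here to be a Michelson-style ratio.

```python
import numpy as np

# Only the depth-contrast criterion (> 0.1) comes from the abstract;
# the spatial and orientation tolerances below are illustrative assumptions.
DEPTH_CONTRAST_THRESH = 0.1
MAX_DIST_PX = 2.0            # assumed localization tolerance (pixels)
MAX_ORI_DIFF = np.pi / 8.0   # assumed orientation tolerance (radians, mod pi)


def depth_contrast(near, far):
    """Michelson-style contrast between the depths on either side of a range edge.
    (One plausible choice; the abstract does not define the measure.)"""
    return abs(far - near) / max(far + near, 1e-9)


def label_image_edges(image_edges, range_edges):
    """Label each image edge as 'depth', 'non-depth', or None (ambiguous).

    image_edges: list of dicts with keys 'x', 'y', 'theta'
    range_edges: list of dicts with keys 'x', 'y', 'theta', 'depth_near', 'depth_far'
    """
    labels = []
    for e in image_edges:
        # Find the closest range edge that agrees in location and orientation.
        best = None
        for r in range_edges:
            dist = np.hypot(e['x'] - r['x'], e['y'] - r['y'])
            # Angular difference modulo pi (edge orientation is unsigned).
            dori = abs((e['theta'] - r['theta'] + np.pi / 2) % np.pi - np.pi / 2)
            if dist <= MAX_DIST_PX and dori <= MAX_ORI_DIFF:
                if best is None or dist < best[0]:
                    best = (dist, r)

        if best is None:
            labels.append('non-depth')   # no range edge at this location
        elif depth_contrast(best[1]['depth_near'], best[1]['depth_far']) > DEPTH_CONTRAST_THRESH:
            labels.append('depth')       # matched range edge with sufficient depth contrast
        else:
            labels.append(None)          # matched but low depth contrast: treated as ambiguous
    return labels
```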