Ching-Fan Chu, Chien-Chung Chen, Yei-Yu Yeh; Object identification in spatially filtered scene background. Journal of Vision 2010;10(7):1269. https://doi.org/10.1167/10.7.1269.
© ARVO (1962-2015); The Authors (2016-present)
In a previous study, we investigated the role of low-passed scene information in object identification. We showed that, at short presentation durations, identification of a low-passed object embedded in a low-passed scene was no better than identification of the object alone (Chu et al., 2009, VSS). However, this might be due to lateral masking from a scene that shares the object's spatial frequency spectrum. If so, identification of an object should improve when the power spectrum of the spatially filtered object differs from that of the spatially filtered scene. In the present experiments, the objects and the scenes were processed by different filters.

In Experiment 1, photos of 20 target objects were presented on 20 natural scenes. The objects were either low-passed or high-passed with a 2 cpd cut-off frequency. The scene backgrounds were either low-passed or high-passed with six possible cut-off frequencies (0.4, 0.7, 1.4, 2.8, 4, and 5.6 cpd), resulting in a 2 x 2 design. The viewing duration was 36 ms. The observers' task was to name the target object. We found that identification of high-passed objects on low-passed scenes was better than in the other three target-scene combinations. To investigate the critical role of low-passed scene information, spatial filtering was applied to natural scenes or to phase-scrambled scenes while the objects were left unfiltered. The low-passed scrambled background produced a greater masking effect than the low-passed scene background. Our results suggest that the low spatial frequency information in a scene background benefits the processing of the high spatial frequency components of objects by reducing lateral masking in the frequency domain.
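The two stimulus manipulations described above, spatial filtering with a hard cut-off in cycles per degree (cpd) and phase scrambling, can be sketched in the Fourier domain. The following is an illustrative sketch, not the authors' actual stimulus-generation code; the pixels-per-degree value is a hypothetical assumption needed to convert pixel frequencies to cpd, and the stimuli here are random arrays standing in for the photographs.

```python
import numpy as np

def spatial_filter(img, cutoff_cpd, pixels_per_degree, mode="low"):
    """Low- or high-pass a grayscale image with a hard Fourier-domain
    cut-off specified in cycles per degree (cpd)."""
    h, w = img.shape
    # np.fft.fftfreq gives cycles per pixel; multiplying by pixels per
    # degree converts the axes to cycles per degree of visual angle.
    fy = np.fft.fftfreq(h) * pixels_per_degree
    fx = np.fft.fftfreq(w) * pixels_per_degree
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency, cpd
    mask = radius <= cutoff_cpd if mode == "low" else radius > cutoff_cpd
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def phase_scramble(img, rng):
    """Randomize Fourier phases while preserving the amplitude spectrum.
    Drawing the random phases from the FFT of a real-valued noise image
    keeps them conjugate-symmetric, so the output is exactly real."""
    amplitude = np.abs(np.fft.fft2(img))
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

# Example: the 2 cpd cut-off used for the objects in Experiment 1,
# assuming a hypothetical resolution of 32 pixels per degree.
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
low = spatial_filter(img, 2.0, 32, mode="low")
high = spatial_filter(img, 2.0, 32, mode="high")
# The low- and high-pass bands partition the spectrum, so they sum
# back to the original image.
assert np.allclose(low + high, img)

scrambled = phase_scramble(img, rng)
# Scrambling leaves the amplitude spectrum unchanged.
assert np.allclose(np.abs(np.fft.fft2(scrambled)), np.abs(np.fft.fft2(img)))
```

Because a hard cut-off partitions the spectrum, the low- and high-passed versions of an image reconstruct it exactly when added, which is what makes the 2 x 2 object-by-scene filter crossing a clean manipulation of spectral overlap.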