Pei Ying Chua, Kenneth Kwok; Using V1-Based Models for Change Detection in Natural Scenes. Journal of Vision 2014;14(10):372. doi: https://doi.org/10.1167/14.10.372.
© ARVO (1962-2015); The Authors (2016-present)
Using models of the visual system, it is possible to investigate the features of human visual processing that enable sensitive and accurate change detection across a range of ambient conditions. V1-based models were used to detect the presence or absence of changes in natural scenes. All models were based on the same basic constructs of the human visual system, incorporating mechanisms of visual processing such as colour opponency, receptive field tuning, linear and non-linear behaviour, and response pooling. The natural scene images were obtained from the publicly available Change Detection Benchmark Dataset (Bourdis, Marraud, & Sahbi, 2011). The models' performances were evaluated by their sensitivity and accuracy in correctly detecting the presence or absence of changes in the given scene. Previously, we found that models tuned to be more sensitive to high spatial frequencies were both more accurate and more sensitive, likely because high spatial frequencies represent the fine details within the image and thus have a greater probability of containing information about the presence of targets. Here, we studied this in greater detail by also varying the size of the receptive fields, and found that using smaller receptive fields increased a model's sensitivity, possibly because smaller fields are more sensitive to small changes. We also studied the effect of an attention spotlight. Implementation of a single stationary attention spotlight resulted in poor performance, whilst allowing the model to shift attention and consider cues from all regions of the image equally gave much better performance.
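The abstract does not give implementation details, but the components it names (receptive field tuning, a non-linearity, and response pooling) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' model: it uses a bank of Gabor filters (the standard model of V1 simple-cell receptive fields) at several spatial frequencies and orientations, half-wave rectification as the non-linearity, and a pooled absolute response difference between two images as a change score. The wavelengths, the coupling of receptive-field size to wavelength, and the pooling rule are all hypothetical choices.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor filter: a standard model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()  # zero mean: no response to uniform regions

def v1_response(img, kernel):
    """Linear filtering (via FFT) followed by half-wave rectification."""
    pad = np.zeros_like(img, dtype=float)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel  # kernel must fit inside the image
    resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
    return np.maximum(resp, 0.0)  # simple non-linearity

def change_score(img_a, img_b, wavelengths=(4, 8, 16), n_thetas=4):
    """Pool absolute response differences across the Gabor bank.

    Shorter wavelengths correspond to higher spatial frequencies and,
    in this sketch, to smaller receptive fields (sigma scales with
    wavelength) -- the two factors the abstract links to sensitivity.
    """
    score = 0.0
    for wl in wavelengths:
        sigma = wl / 2.0              # hypothetical size/frequency coupling
        size = int(4 * sigma) | 1     # odd kernel size covering the envelope
        for t in range(n_thetas):
            k = gabor_kernel(size, wl, np.pi * t / n_thetas, sigma)
            score += np.abs(v1_response(img_a, k) - v1_response(img_b, k)).mean()
    return score
```

In this sketch, a scene pair with a genuine change yields a larger pooled score than an identical pair, and dropping the longer wavelengths (keeping only small, high-frequency receptive fields) mimics the high-frequency tuning the abstract describes; a stationary attention spotlight would correspond to weighting the difference map by a fixed spatial window before pooling.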
Meeting abstract presented at VSS 2014