Chou Hung, Andre Harrison, Anthony Walker, Min Wei, Barry Vaughan; Feature interactions under high dynamic range (HDR) luminance visual recognition. Journal of Vision 2017;17(10):774. doi: 10.1167/17.10.774.
Visual search in the real world occurs under luminance contrast ratios of up to 1,000,000:1, but models of search behavior are based on laboratory tests at contrast ratios of ~100:1. Recent reports on brightness perception have revealed non-linear effects of luminance normalization at contrast ratios over 1000:1 (termed high dynamic range, or HDR, luminance), expanding the perceived shades of gray at the mode of the luminance distribution (Allred et al., 2012). We hypothesize that, because visual neurons encode both luminance/color and shape features, luminance and shape processing interact non-linearly during visual recognition under HDR luminance. We predict that target/distractor discriminability increases (camouflage is weaker) when both target and distractors are at modal luminance than when both are antimodal. Here, we propose a framework to test this hypothesis and to model the underlying cognitive mechanisms. We are measuring EEG, eye tracking, and visual recognition behavior under rapid serial visual presentation (RSVP, 1-2 Hz). Stimuli consist of Gabors and grayscale-rendered objects presented on a 5 × 5 grid of luminance patches. Subjects indicate target detection (orientation or object category) via keypress. The primary independent variables are the HDR luminance distribution of the patches (whether the target and distractor patch luminances are at the mode or antimode of the distribution) and target/distractor similarity (Gabor orientation similarity or object feature similarity). Secondary independent variables include the eccentricity of the target and the eccentricity and temporal dynamics of the luminance patches. Dependent variables include behavioral response time and accuracy; stimulus- and ocular-locked EEG amplitude, latency, and frequency; and pupil size. The primary effect of interest is the dependence of these variables on the interaction of HDR luminance distribution and target/distractor similarity.
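The stimulus design above can be sketched in code. The snippet below is a minimal illustration, not the authors' stimulus-generation procedure: it draws a 5 × 5 grid of background-patch luminances from a hypothetical bimodal log-luminance distribution, so that patches cluster near the mode while the trough between the two peaks serves as the antimode. All parameter values (modes, mixture weight, spread) are assumptions chosen only to reproduce an HDR-scale contrast ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the abstract): two log-luminance
# modes roughly 5 log units apart, approaching the ~1,000,000:1
# real-world contrast range described in the text.
LOW_MODE, HIGH_MODE = 0.5, 5.5   # log10 cd/m^2, illustrative
SIGMA = 0.4                      # spread of each mixture component

def sample_patch_luminances(n=25):
    """Sample log10 luminances for a 5 x 5 grid of background patches
    from a two-component mixture.  The dominant peak plays the role of
    the distribution's mode; the trough between peaks, its antimode."""
    near_low = rng.random(n) < 0.7   # most patches near the low mode
    log_lum = np.where(near_low,
                       rng.normal(LOW_MODE, SIGMA, n),
                       rng.normal(HIGH_MODE, SIGMA, n))
    return log_lum.reshape(5, 5)

grid = sample_patch_luminances()
contrast_ratio = 10 ** (grid.max() - grid.min())
```

A target patch at modal luminance would then be assigned a value near `LOW_MODE`, and an antimodal one a value near the trough between the two peaks.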
We model this effect by varying local adaptation levels across the visual field according to the distribution of background versus target luminance at different eccentricities.
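One common way to express such local adaptation is divisive gain control, in which the response to a target is normalized by the adaptation level set by the surrounding background. The sketch below uses a Naka-Rushton-style form purely for illustration; the function name, parameters, and the use of the mean local background as the adaptation level are assumptions, not the authors' model.

```python
import numpy as np

def adapted_response(target_lum, local_background, semi_sat=1.0):
    """Illustrative divisive normalization: the response to a target
    luminance is scaled by the local adaptation level, taken here
    (as an assumption) to be the mean of nearby background patches."""
    adapt_level = np.mean(local_background)
    return target_lum / (target_lum + semi_sat * adapt_level)

# The same physical target on a dim vs. a bright local background:
dim_bg = adapted_response(10.0, np.array([1.0, 2.0, 1.5]))
bright_bg = adapted_response(10.0, np.array([500.0, 800.0, 650.0]))
# The target yields a weaker normalized response on the bright
# background, i.e., local adaptation compresses its visibility.
```

Varying `local_background` with eccentricity would then capture the abstract's point that adaptation state, and hence target discriminability, differs across the visual field.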
Meeting abstract presented at VSS 2017