September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Feature interactions under high dynamic range (HDR) luminance visual recognition
Author Affiliations
  • Chou Hung
    Human Research and Engineering Directorate, US Army Research Laboratory
  • Andre Harrison
    Human Research and Engineering Directorate, US Army Research Laboratory
  • Anthony Walker
    Human Research and Engineering Directorate, US Army Research Laboratory
    DCS Corp
  • Min Wei
    Human Research and Engineering Directorate, US Army Research Laboratory
    DCS Corp
  • Barry Vaughan
    Human Research and Engineering Directorate, US Army Research Laboratory
Journal of Vision August 2017, Vol.17, 774. https://doi.org/10.1167/17.10.774
Abstract

Visual search in the real world occurs under luminance contrast ratios of up to 1,000,000:1, but models of search behavior are based on laboratory tests at a ~100:1 contrast ratio. Recent studies of brightness perception have revealed non-linear effects of luminance normalization at contrast ratios over 1000:1 ('high dynamic range (HDR) luminance'), expanding the perceived shadings of gray at the mode of the luminance distribution (Allred et al., 2012). We hypothesize that, because visual neurons encode both luminance/color and shape features, luminance and shape processing interact non-linearly during visual recognition under HDR luminance. We predict that target/distractor discriminability increases (camouflage is weaker) when both target and distractors are at modal luminance versus when both are antimodal. Here, we propose a framework to test this hypothesis and to model the underlying cognitive mechanisms. We are measuring EEG, eye tracking, and visual recognition behavior under rapid serial visual presentation (RSVP, 1-2 Hz). Stimuli consist of Gabors and grayscale-rendered objects presented on a 5 × 5 grid of luminance patches. Subjects indicate target detection (orientation or object category) via keypress. The primary independent variables are the HDR luminance distribution of the patches (whether the target and distractor patch luminance are at the mode or antimode of the distribution) and target/distractor similarity (Gabor orientation similarity or object feature similarity). Secondary independent variables include the eccentricity of the target and the eccentricity and temporal dynamics of the luminance patches. Dependent variables include behavioral response time and accuracy; stimulus- and ocular-locked EEG amplitude, latency, and frequency; and pupil size. The primary effect of interest is the dependence of these variables on the interaction of HDR luminance distribution and target/distractor similarity.
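The modal/antimodal manipulation described above can be sketched in code. The following is an illustrative sketch only, not the authors' stimulus code: the peak locations, mixture weights, and grid layout are all assumptions chosen to produce a clearly bimodal HDR log-luminance distribution with an identifiable mode and antimode.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch_grid(condition="modal", n=5):
    """Return an n x n grid of patch luminances in cd/m^2 (illustrative).

    Log-luminance is drawn from a two-peaked mixture so the distribution
    has a clear mode (the taller peak) and an antimode (the trough between
    the peaks). 'modal' places the center (target) patch at the mode;
    'antimodal' places it at the antimode. All numbers are assumptions.
    """
    # Mixture of two Gaussians in log10 luminance: dominant peak near
    # 10 cd/m^2, minority peak near 10,000 cd/m^2; the antimode (trough)
    # falls between them, near 10^2.5 cd/m^2.
    which = rng.random(n * n) < 0.7
    log_lum = np.where(which,
                       rng.normal(1.0, 0.3, n * n),   # dominant peak (mode)
                       rng.normal(4.0, 0.3, n * n))   # minority peak
    grid = 10.0 ** log_lum.reshape(n, n)
    mode_lum, antimode_lum = 10.0, 10.0 ** 2.5
    grid[n // 2, n // 2] = mode_lum if condition == "modal" else antimode_lum
    return grid

modal = make_patch_grid("modal")        # target patch at the mode (10 cd/m^2)
antimodal = make_patch_grid("antimodal")  # target patch at the antimode (~316 cd/m^2)
```

The same grid generator could serve both the Gabor and object conditions, with the target/distractor stimulus rendered onto the center patch.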
We model this effect by varying local adaptation levels across the visual field based on the distribution of background versus target luminance at different eccentricities.
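One minimal form such an eccentricity-dependent adaptation model could take is sketched below. This is our reading of the idea, not the authors' implementation: the pooling-rate parameter `k`, the log-domain blending, and the use of the background's geometric mean are all assumptions.

```python
import numpy as np

def adaptation_level(target_lum, background_lums, ecc_deg, k=0.3):
    """Local adaptation level (cd/m^2) at a given eccentricity (illustrative).

    Assumption: near the fovea, adaptation tracks the target patch itself;
    with increasing eccentricity it is pulled toward the geometric mean of
    the background luminance distribution. 'k' (per degree) is assumed.
    """
    w = 1.0 - np.exp(-k * ecc_deg)               # 0 at fovea -> ~1 in periphery
    log_bg = np.mean(np.log10(background_lums))  # background geometric mean (log10)
    log_adapt = (1.0 - w) * np.log10(target_lum) + w * log_bg
    return 10.0 ** log_adapt

def weber_contrast(target_lum, background_lums, ecc_deg):
    """Weber-like contrast of the target against the local adaptation level."""
    L_a = adaptation_level(target_lum, background_lums, ecc_deg)
    return (target_lum - L_a) / L_a

bg = np.array([1.0, 10.0, 10.0, 100.0, 1e4])  # toy HDR background, cd/m^2
foveal = weber_contrast(10.0, bg, 0.0)        # ~0: fully adapted to the target
peripheral = weber_contrast(10.0, bg, 10.0)   # nonzero: pulled toward background
```

Under this sketch, a modal-luminance target yields near-zero contrast against the pooled background in the periphery, while an antimodal target retains contrast, which is one way the predicted discriminability difference could arise.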

Meeting abstract presented at VSS 2017
