Abstract
Two recent studies used similar stimulus sequences to investigate mechanisms of brightness perception. Anstis and Greenlee (2014) demonstrated that adaptation to a flickering black-and-white outline erased the visibility of a subsequent target shape defined by a luminance increment or decrement. Robinson and de Sa (2013) used a large flickering annulus to show a similar effect when the target was the same size as the inner edge of the annulus. Here, a neural network model (Francis & Kim, 2012), which previously explained properties of scene fading, is shown to also explain most of the erasure effects reported by Anstis and Greenlee and by Robinson and de Sa. The model proposes that in normal viewing conditions a brightness filling-in process is constrained by oriented boundaries, which define separate regions of a visual scene. Contour adaptation can weaken these boundaries and thereby allow brightness signals to merge together, which renders target stimuli indistinguishable from the background. New simulations with the stimuli used by Anstis and Greenlee and by Robinson and de Sa produce model output that closely matches the perceptual experience of human observers. Importantly, Robinson and de Sa interpreted their findings as evidence against a filling-in process, but the new simulations demonstrate that their findings support at least one type of filling-in process.
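The general mechanism described above (brightness filling-in gated by oriented boundaries, with contour adaptation weakening those boundaries) can be illustrated with a toy diffusion simulation. The Python sketch below is not the Francis and Kim (2012) model; the grid size, diffusion rate, iteration count, and adaptation factor are illustrative assumptions chosen only to show how weakening the boundaries around a luminance-increment target lets its brightness merge with the background.

```python
# Minimal toy sketch of boundary-gated brightness filling-in with contour
# adaptation. NOT the Francis & Kim (2012) model; all parameters are
# arbitrary illustrative assumptions.
import numpy as np


def fill_in(luminance, boundary_strength, n_iters=2000, rate=0.25):
    """Diffuse brightness between neighboring pixels; flow across a border is
    blocked in proportion to the boundary strength on either side."""
    b = luminance.astype(float).copy()
    perm = 1.0 - np.clip(boundary_strength, 0.0, 1.0)  # boundary permeability
    for _ in range(n_iters):
        flow = np.zeros_like(b)
        for axis in (0, 1):
            for shift in (1, -1):
                nb = np.roll(b, shift, axis=axis)        # neighbor brightness
                nperm = np.roll(perm, shift, axis=axis)  # neighbor permeability
                # brightness flows only where both sides are permeable
                flow += perm * nperm * (nb - b)
        b += rate * flow / 4.0
    return b


# Toy stimulus: a small luminance-increment target on a uniform gray field.
size = 64
stim = np.full((size, size), 0.5)
stim[24:40, 24:40] = 0.6

# Oriented boundaries along the target's edges (strong before adaptation).
boundaries = np.zeros_like(stim)
boundaries[24:40, 24] = boundaries[24:40, 39] = 1.0
boundaries[24, 24:40] = boundaries[39, 24:40] = 1.0

before = fill_in(stim, boundaries)        # intact boundaries: target stays distinct
after = fill_in(stim, 0.1 * boundaries)   # adapted boundaries: target merges with field

print("target vs. background, intact boundaries :",
      round(before[32, 32] - before[5, 5], 3))
print("target vs. background, adapted boundaries:",
      round(after[32, 32] - after[5, 5], 3))
```

In this sketch the target-background brightness difference remains large while the boundaries are intact but falls to near zero once the boundaries are weakened, qualitatively mirroring the erasure effect described in the abstract.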
Meeting abstract presented at VSS 2015