Abstract
A fundamental issue in perception is how the visual system resolves the luminance inverse problem. A wholly empirical approach that contends with this problem, and rationalizes human lightness perception, is based on the cumulative probability of natural luminance patterns in human experience (Yang and Purves, 2004). Encoding stimuli according to their cumulative probability is also an efficient way for biological neurons to represent environmental luminance (Laughlin, 1981). What remains unclear, however, is the visual circuitry needed to resolve the inverse problem in wholly empirical terms. To address this issue, we empirically evolved two-layer artificial neural networks to match the cumulative probability of naturally occurring 2-D luminance patterns. In the first layer, each of the network's 37 sensors received luminance intensity from a 0.12 deg² region of visual space. The outputs of the sensor neurons were forwarded via evolvable synaptic connections to a layer of integrating neurons. In the second layer, each integrating neuron projected the algebraic sum of its sensor inputs to an output neuron via a further evolvable synaptic connection. Every synapse in the network was modeled as a sigmoidal transfer function whose strength was randomly initialized near zero. The artificial neurons evolved receptive fields with a classical center-surround organization and automatic adaptation to ambient light. They also showed suppressive modulation, a neuronal property that has been used to explain a number of additional phenomena observed in experimental animals (Carandini, 2004; Bonin et al., 2005). These results suggest that biological visual circuitry uses a similar empirical strategy.
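To make the architecture concrete, the sketch below (Python/NumPy) implements a toy version of the two-layer network and a bare-bones evolutionary loop. The number of integrating neurons, the exact synapse model (a sigmoid of strength × input), the fitness function, the mutation scheme, and the synthetic log-normal "luminance" patches are all assumptions for illustration; the abstract specifies none of these details beyond what is stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 37      # one sensor per 0.12 deg^2 region, as in the abstract
N_INTEGRATORS = 8   # hypothetical count; the abstract does not specify it

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_weights():
    # Evolvable synaptic strengths, randomly initialized near zero.
    return {"w1": rng.normal(0.0, 0.01, (N_INTEGRATORS, N_SENSORS)),
            "w2": rng.normal(0.0, 0.01, N_INTEGRATORS)}

def forward(patches, w):
    """Sensors -> integrators (algebraic sum of synaptic outputs) -> output.
    Each synapse is taken to be a sigmoid of (strength * input), one
    plausible reading of the abstract's synapse model."""
    integ = sigmoid(patches[:, None, :] * w["w1"]).sum(axis=2)  # (n, N_INTEGRATORS)
    return sigmoid(integ @ w["w2"])                             # (n,) in (0, 1)

def fitness(w, patches, target):
    # Illustrative fitness: negative squared error between network output
    # and the empirical cumulative probability of each luminance pattern.
    return -np.mean((forward(patches, w) - target) ** 2)

def evolve(patches, target, generations=100, offspring=10, sigma=0.05):
    """Bare-bones (1+lambda) evolutionary loop; the details of the original
    evolutionary algorithm are not given in the abstract."""
    best = init_weights()
    best_fit = fitness(best, patches, target)
    for _ in range(generations):
        for _ in range(offspring):
            cand = {k: v + rng.normal(0.0, sigma, v.shape)
                    for k, v in best.items()}
            f = fitness(cand, patches, target)
            if f > best_fit:
                best, best_fit = cand, f
    return best, best_fit

# Toy demonstration with synthetic log-normal patches standing in for
# natural-scene samples; the target for each patch is the empirical
# cumulative-probability rank of its central luminance.
patches = rng.lognormal(0.0, 0.5, (500, N_SENSORS))
center = patches[:, N_SENSORS // 2]
target = center.argsort().argsort() / (len(center) - 1)  # empirical CDF

weights, best_fit = evolve(patches, target)
print(f"best fitness (neg. MSE): {best_fit:.5f}")
```

Because every strength starts near zero, every sigmoid initially outputs roughly 0.5, so any structure in the evolved weights, such as a center-surround opponency, must arise from selection against the cumulative-probability target rather than from the initialization; any hill-climbing variant would serve equally well for this sketch.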
Meeting abstract presented at VSS 2013