Abstract
Experts in the appraisal of material worth, such as pearl appraisers, can instantaneously evaluate an object's material qualities on sight. This raises the possibility that the perception of material qualities can be acquired by repeatedly viewing objects and learning the image characteristics associated with their material qualities. Here we examined whether training on a material categorization task improves performance, and which other visual sensitivities are affected by the training. The stimuli were photographs of various materials (Sharan et al., 2009) divided into two material groups: a glossy group (glass, plastic, water, and metal) and a non-glossy group (wood, paper, fabric, and stone). Two observer groups were each assigned to one of the material groups. On each training trial, four photographs, one from each material category in the assigned group, were presented simultaneously for 50 ms. The observer then indicated, in a four-alternative forced-choice (4AFC) procedure, which photograph showed a previously instructed material, and received correct/incorrect feedback. Training lasted 15 days, with 200 trials per observer per day. We also ran complementary experiments (specular, texture, and color discrimination tasks) before and after training to assess transfer to other visual tasks. Material categorization performance improved with training in both groups (from 40.9% to 59.9% correct on average), indicating that, as expected, the perception of material qualities can be improved by perceptual learning. In addition, the training-related improvements on the color and texture discrimination tasks were larger for the glossy material group. This difference in learning effects suggests that the improvement in material categorization is based, at least in part, on learning at relatively low-level stages of visual processing associated with the image features that underlie material categorization for each material group.
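To make the trial structure concrete, the following is a minimal sketch of one training session in Python. It is illustrative only, not the authors' experimental code: stimulus presentation and response collection are replaced by a simulated observer, and the function names and the p_correct parameter are assumptions introduced for the example.

```python
# Sketch of the 4AFC training procedure (illustrative; in the experiment,
# photographs were shown for 50 ms and key-press responses were collected).
import random

GLOSSY = ["glass", "plastic", "water", "metal"]      # glossy material group
NON_GLOSSY = ["wood", "paper", "fabric", "stone"]    # non-glossy material group

def run_session(materials, n_trials=200, p_correct=0.5, seed=0):
    """Simulate one day's session of 200 4AFC trials with feedback.

    p_correct is a hypothetical observer parameter, not a quantity
    from the study.
    """
    rng = random.Random(seed)
    n_hits = 0
    for _ in range(n_trials):
        # One photograph of each material in the assigned group is
        # presented simultaneously, in randomized positions.
        positions = rng.sample(materials, k=len(materials))
        target = rng.choice(materials)  # previously instructed material
        # Placeholder observer: picks the target with probability
        # p_correct, otherwise guesses among the other alternatives.
        if rng.random() < p_correct:
            response = target
        else:
            response = rng.choice([m for m in positions if m != target])
        n_hits += (response == target)
        # Correct/incorrect feedback followed every trial in the study.
    return n_hits / n_trials

if __name__ == "__main__":
    print("glossy group accuracy:", run_session(GLOSSY))
    print("non-glossy group accuracy:", run_session(NON_GLOSSY))
```

Note that chance performance in this 4AFC design is 25%, so the reported pre-training accuracy of 40.9% is already above chance.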
This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas (22300076).