Research Article | October 2007

Modification of the convexity prior but not the light-from-above prior in visual search with shaded objects

Rebecca A. Champion, Wendy J. Adams

Journal of Vision, October 2007, Vol. 7(13):10. doi:10.1167/7.13.10
Abstract

Studies of visual search performance with shaded stimuli, in which the target is rotated by 180° relative to the distracters, typically demonstrate more efficient performance in stimuli with vertical compared to horizontal shading gradients. In addition, performance is usually better for vertically shaded stimuli with top-light (seen as convex) distracters compared to those with bottom-light (seen as concave) distracters. These findings have been cited as evidence for the use of the prior assumptions of overhead lighting and convexity in the interpretation of shaded stimuli and suggest that these priors affect preattentive processing. Here we attempt to modify these priors by providing observers with visual–haptic training in an environment inconsistent with their priors. Observers' performance was measured in a visual search task and a shape judgment task before and after training. Following training, we found a reduced asymmetry between visual search performance with convex and concave distracters, suggesting a modification of the convexity prior. However, although evidence of a change in the light-from-above prior was found in the shape judgment task, no change was found in the visual search task. We conclude that experience can modify the convexity prior at a preattentive stage in processing; however, our training did not modify the light-from-above prior that is measured via visual search.

Introduction
The shading information in a scene is inherently ambiguous. However, shading often produces a compelling perception of 3D shape because our visual system employs prior knowledge or assumptions about the statistical regularities in the environment to interpret the 2D information. Evidence suggests that observers assume a single light source (Kleffner & Ramachandran, 1992; Ramachandran, 1988) that is positioned roughly overhead (Adams, Graf, & Ernst, 2004; Brewster, 1826; Kleffner & Ramachandran, 1992; Mamassian & Goutcher, 2001; Ramachandran, 1988) to recover shape from shading. Such assumptions are consistent with the majority of our experience in naturally and artificially lit scenes. In addition, it has been proposed that observers use an assumption of convexity, in line with the predominance of convex over concave objects in the world (Langer & Bülthoff, 2001; Sun & Perona, 1997). Experimental studies into the processing of shading information have used both shape judgment tasks (Adams et al., 2004; Chacón, 2004; Kleffner & Ramachandran, 1992; Mamassian & Goutcher, 2001; Ramachandran, 1988) and visual search tasks (Aks & Enns, 1992; Chacón, 2004; Enns & Rensink, 1990; Kleffner & Ramachandran, 1992; Sun & Perona, 1996, 1997, 1998). The use of shaded stimuli in visual search tasks is particularly interesting; performance in these tasks suggests that 3D shape can act as a preattentive feature, a controversial claim given that preattentive features are traditionally considered to be based on 2D image properties rather than properties of the 3D scene (Kleffner & Ramachandran, 1992).
The study of visual search performance using shaded objects (such as those shown in Figure 1A) has generated two robust findings. Firstly, observers' search performance is substantially more efficient for stimuli with vertical rather than horizontal shading gradients. For vertical gradients, search is faster (Kleffner & Ramachandran, 1992; Sun & Perona, 1998) and independent of set size (Kleffner & Ramachandran, 1992). From shape judgment tasks, we know that perceived depth is reduced and more ambiguous for disks with horizontal shading (Adams et al., 2004; Ramachandran, 1988). Kleffner and Ramachandran (1992) therefore argue that the difference between search performance in the horizontal and vertical conditions demonstrates that target detection is not based on differences in luminance polarity per se but rather on 3D shape, reconstructed in accordance with the light-from-above prior. In other words, depth perception is impaired with horizontal gradients, and this makes target detection more difficult. Recently, Adams (2007) showed that the stimulus orientation (or lighting direction) for optimal visual search varies substantially across individuals. However, these individual variations are coupled with variations in shape perception: observers who see the most unambiguous 3D shape in objects illuminated from the top-left and bottom-right also perform best in visual search displays illuminated from those directions. This strongly suggests, again, that visual search is closely related to perceived shape.
Figure 1. Examples of stimuli used in (A) the visual search task and (B) the shape judgment task.
The second reliable finding to emerge from visual search studies is that within vertical (or near-vertical) gradient stimuli, performance is significantly better for targets which are dark at the top among top-light distracters compared to stimuli with the opposite arrangement (Chacón, 2004; Kleffner & Ramachandran, 1992; Sun & Perona, 1998). In other words, a concave target amongst convex distracters is more easily detected than a convex target amongst concave distracters. Enns and Rensink (1990) reported a similar asymmetry with cube stimuli (although inverting a cube target relative to its distracters generally affects perceived reflectance, rather than shape). This search asymmetry is usually explained in terms of more efficient processing of distracters that conform to the assumptions or preferences for convex, top-lit objects (e.g., Sun & Perona, 1997). Enns and Rensink also propose that search is based on shape (or associated reflectance) but suggest that pop-out corresponds to deviation from the light-from-above direction, and thus concave targets are easier to detect. In contrast, Chacón (2004) proposed that the asymmetry is due to the difference in perceived contrast of stimuli shaded in the two directions, presumably as a result of the reduced perceived depth in concave stimuli. 
In summary, visual search behavior observed with shaded stimuli has led many researchers to suggest that the preattentive features driving pop-out may include not only 2D image properties but also perceived 3D shape and/or the associated perceived surface reflectance. This proposal would further suggest that the priors for light-from-above and convexity that guide the interpretation of shading information are incorporated in early, preattentive visual processing. 
In the present study, we aimed to modify observers' priors for light-from-above and convexity by providing visual–haptic training in an environment inconsistent with these prior assumptions. We then investigated the effect of this training on subsequent visual search behavior. During training, our observers interacted, using both visual and haptic (touch) information, with an environment in which the average lighting direction was shifted by ±27.5° relative to the observer's original light-prior. In addition, convex and concave objects were equally prevalent in the trained environment. Adams et al. (2004) previously demonstrated that the light-from-above prior can be modified by visual–haptic training. In that study, training affected subsequent shape perception of the trained stimuli, but also generalized to affect the perceived reflectance of novel stimuli. Here we investigate whether trained changes in shape perception will be mirrored by changes in visual search behavior. Such a finding would have two implications: firstly, that perceived 3D shape (derived using prior assumptions) is the preattentive feature driving visual search with shaded stimuli, and secondly, that visual–haptic training can modify priors at an early, preattentive stage of processing.
Experiment 1
Methods
Observers
Twelve naïve observers completed the experiment. All were undergraduate students and had normal or corrected-to-normal vision. Ethical approval was gained for this study from the University of Southampton ethics committee and all observers gave informed consent. 
Apparatus
Visual stimuli were generated and displayed using OpenGL. The stimuli were presented on a CRT monitor (30.5 × 40.5 cm), viewed via a mirror as shown in Figure 2. The observer's line of sight was orthogonal to the screen. Haptic stimuli were generated using the GHOST software and presented using a PHANToM force feedback device, which was attached to the observer's forefinger. The observer's head was kept stationary using a bite bar and he/she wore an eye-patch over one eye to eliminate binocular depth cues. The experiment was conducted in a darkened room. 
Figure 2. Apparatus. Observers viewed the visual stimuli on a CRT via a mirror; haptic presentation was via a PHANToM force-feedback device.
Stimuli
The visual stimuli were shaded circular disks consistent with convex or concave squashed hemispheres illuminated by a single light source. The objects had a diameter of 2.7° at the viewing distance of 48 cm. The orientation of the shading gradient (or direction of the light source) was varied.
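The paper specifies only that the disks were rendered with OpenGL as squashed hemispheres under a single light source. The following is a minimal sketch of this stimulus class, assuming Lambertian shading of a height field; the image size, depth scale, and angle convention are illustrative choices of ours, not the authors' parameters.

```python
import numpy as np

def shaded_disk(size=128, light_angle_deg=0.0, convex=True, depth_scale=0.3):
    """Render a shaded disk consistent with a squashed hemisphere lit by a
    single distant light source. Illustrative only: the experiment used
    OpenGL, and the depth scale, image size, and sign conventions here are
    assumptions, not the authors' parameters."""
    # Image-plane coordinates in [-1, 1]; row 0 is the top of the image.
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = x ** 2 + y ** 2
    inside = r2 <= 1.0

    # Height field of a squashed hemisphere (a dimple if concave).
    z = np.zeros_like(x)
    z[inside] = depth_scale * np.sqrt(1.0 - r2[inside])
    if not convex:
        z = -z

    # Surface normals from the height-field gradient.
    dz_dy, dz_dx = np.gradient(z, 2.0 / size)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(z)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Oblique light: 0 deg = from above (top of image), with a component
    # toward the viewer so no region is fully dark (convention assumed).
    a = np.deg2rad(light_angle_deg)
    light = np.array([np.sin(a), -np.cos(a), 1.0])
    light /= np.linalg.norm(light)

    # Lambertian shading inside the disk, mid-grey background outside.
    lum = np.clip(normals @ light, 0.0, 1.0)
    img = np.full((size, size), 0.5)
    img[inside] = lum[inside]
    return img
```

With this convention, shaded_disk(light_angle_deg=0.0, convex=True) produces a top-light disk of the kind typically seen as a convex bump, and rotating the gradient by 180° produces the top-dark counterpart.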
The haptic stimuli were smooth convex or concave surfaces, of the same dimensions as the visual disks, protruding from or indented into a smooth surface. Thus, during visual–haptic training, the observers' perception was of viewing a surface with perturbations while running their finger over that surface. A small matt black visual dot indicated finger position. 
Procedure
This experiment used three different tasks: a shape judgment task, a visual search task, and a haptic training task. Observers completed blocks of the shape judgment task and the visual search task before and after haptic training. Observers were allocated to one of two haptic training conditions: trained with a positive light-direction shift (six observers) or with a negative shift (six observers).
Shape judgment task
In the shape judgment task, visual stimuli were presented without haptic feedback. Four objects were displayed 2.8° from a central fixation cross, as shown in Figure 1B. One or two of the objects, selected at random, were rotated by 180° relative to the other objects. Halfway through the 1200-ms presentation, a red star appeared, indicating which object's shape should be judged (the target). Observers then responded either "IN" or "OUT" by pressing virtual buttons with the PHANToM.
In the pre-training shape task, 48 evenly spaced shading gradient orientations were presented. Within a block, each orientation was repeated 4 times. Target position and orientation were randomly interleaved. Observers completed three blocks of 192 trials. The first block served as a practice block and was discarded. The proportion of "convex" responses, as a function of orientation, was fit with two cumulative Gaussians (see Equation A1), each centered at a 50% point. These 50% points represent the gradient orientations with maximum concave/convex ambiguity; the orientations of the perceptual transitions from convex to concave and vice versa, henceforth termed "transition orientations." The observer's light-prior was taken as the average of the two transition orientations. The mean pre-training light-prior was 4.5° to the left of vertical, consistent with previous reports of a bias towards the left (Mamassian & Goutcher, 2001; Sun & Perona, 1998); however, there was substantial variation in the light-prior across observers (σ = 7.0°), a result that is also consistent with previous findings (Adams, 2007). The pre-training transition orientations were used to determine the orientations used in subsequent tasks, as described below.
The effect of visual–haptic training is to shift the transition orientations in the trained direction (see Figure 3A and Adams et al., 2004) and it is around these orientations that we expected the greatest change in performance following training. Therefore, to limit the total number of trials, yet accurately assess the impact of training on shape and visual search judgments, we selected a limited range of orientations around these transition orientations to measure pre- and post-training visual search behavior and post-training shape perception. In the post-training shape task, 36 orientations were presented, either −45° to +75° or −75° to +45° relative to the pre-training transition orientations for the positive and negative training conditions, respectively (i.e., a range of 45° either side of the pre-training and maximum expected post-training transition orientation (pre-training ±30°)). The estimated light-prior direction and the light-prior + 180° were also presented. Within a block, each orientation was repeated 4 times, once at each of the 4 locations. Observers completed three blocks (432 trials in total) in alternation with two post-training visual search blocks. Data were analyzed as previously described to obtain the post-training transition orientations. 
Figure 3. (A and B) Proportion convex judgments as a function of shading gradient orientation in the shape judgment task. (A) Data are shown for one typical observer, in the positive shift training condition, before (red stars) and after (blue circles) training. The lines show the Gaussian fits to the data, and the two arrows represent the directions of the pre- and post-training light-priors extracted from the fits. An orientation of 0° corresponds to stimuli light at the top. (B) Average fit across observers before (red solid) and after (blue dashed) training. All orientations were normalized to the mean baseline prior, and data from the positive training condition have been reversed to allow pooling across conditions. (NB: Data could not be directly averaged across observers, as different orientations were presented to each observer.) The red arrow shows the average baseline light-prior and the blue arrow the average light-prior following training. (C) Consistency of shape judgments measured before (red stripes) and after (blue solid) training, at the pre- and post-training transition orientations and at the pre-training light-prior and light-prior + 180°, averaged across twelve observers. Error bars represent ±1 standard error across observers.
Visual search task
In the visual search task, visual stimuli were presented without haptic feedback. The stimulus consisted of an array of 16 objects arranged as an inner ring of 6 objects with radius 3.4° and an outer ring of 10 objects with radius 6.8° (see Figure 1A). Position on the ring was slightly jittered. On half of the trials, one of the objects (the target) was rotated by 180° relative to the other objects. The stimulus was displayed until the observer made a response. The observer's task was to indicate whether an odd-one-out was present or not as quickly and as accurately as possible. Responses were made by pressing the left or right mouse button. 
Sixteen shading gradient orientations were presented in the visual search task: seven evenly spaced orientations around each of the two pre-training transition orientations (ranging from −7.5° to +37.5°, or −37.5° to +7.5°, in the positive and negative shift conditions, respectively), plus the estimated light-prior and the light-prior + 180°. As for the post-training shape judgment task, this range was selected so as to concentrate on the orientations of most interest, i.e., those around the transition orientations. For each orientation, 16 "target-present" and 16 "target-absent" trials were presented with orientation and target position randomized, making 512 trials per block. Pre-training, observers completed two consecutive blocks, lasting approximately 30 min in total. Post-training, observers completed two blocks, presented in alternation with the three post-training shape judgment blocks. Pre- and post-training visual search data were analyzed separately. For each orientation, data were combined and a "perf" value (Santhi & Reeves, 2004) was obtained, where perf = d′² / (RT_correct − RT_motor). RT_correct is the mean reaction time of correct responses, RT_motor is the estimated motor component of the reaction time, and d′ is sensitivity (d-prime). As the perf measure combines both reaction time and error rate, it provides a more complete description of performance.
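For concreteness, here is a minimal sketch of the perf computation described above. The log-linear correction for extreme hit and false-alarm rates and the placeholder motor-RT value are our assumptions; the paper does not report either detail.

```python
from scipy.stats import norm

def perf(hits, misses, false_alarms, correct_rejections,
         rt_correct_mean_s, rt_motor_s=0.25):
    """perf = d'^2 / (RT_correct - RT_motor), as described in the text.

    rt_motor_s is a placeholder for the estimated motor component of the
    reaction time (in seconds); the paper does not report how it was
    estimated. The log-linear correction below, which keeps hit and
    false-alarm rates away from 0 and 1, is also our assumption."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # z(hit) - z(FA)
    return d_prime ** 2 / (rt_correct_mean_s - rt_motor_s)
```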
From the pre- and post-training visual search data, two measures were obtained for each observer. Firstly, the perf values at the transition orientations were calculated. (Transition orientations were determined by observers' pre- and post-shape task data. Perf values at the post-training transition orientations were obtained by interpolation from adjacent search orientations.) Secondly, the size of the convex/concave asymmetry was calculated for each observer as: log(perf at light-prior / perf at light-prior + 180°), where a value of 0 indicates no asymmetry. 
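Both measures are straightforward to compute. The sketch below assumes simple linear interpolation between sampled orientations; the paper states only that perf values at the post-training transition orientations were interpolated from adjacent search orientations.

```python
import numpy as np

def perf_at(orientation_deg, sampled_orientations_deg, perf_values):
    """Perf at an arbitrary orientation, linearly interpolated from the
    sampled search orientations (linear interpolation is our assumption)."""
    order = np.argsort(sampled_orientations_deg)
    return np.interp(orientation_deg,
                     np.asarray(sampled_orientations_deg)[order],
                     np.asarray(perf_values)[order])

def convex_concave_asymmetry(perf_at_prior, perf_at_prior_plus_180):
    """log(perf at light-prior / perf at light-prior + 180 deg);
    a value of 0 indicates no asymmetry."""
    return np.log(perf_at_prior / perf_at_prior_plus_180)
```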
Haptic training task
Stimuli for the visual–haptic training consisted of a visual stimulus of 4 objects, identical to those used in the shape judgment task. In addition, a haptic surface was presented, with convex bumps and concave dimples of the same dimensions as the visual stimulus. Importantly, however, these visual–haptic scenes were consistent with a range of light-source positions whose mean was shifted by either ±27.5° (for positive or negative shift conditions) relative to the baseline light-prior for each observer (measured in the shape judgment task). Therefore, visual stimuli whose orientations were up to +27.5° or −27.5° (for positive and negative shift conditions) from the transition orientations now had haptically defined curvature opposite to their pre-judged shape; stimuli previously judged as convex on more than 50% of the trials now felt concave and vice versa. 
On each training trial, observers viewed and explored the four objects haptically for at least 5 seconds. After the observer pressed a button to indicate that they had finished exploring, a single test object, identical to one of the previous four objects, was presented centrally for 1200 ms. Based solely on its visual appearance (without haptics), observers judged this object as "IN" or "OUT" via a PHANToM button press and subsequently touched the object to gain feedback regarding its haptic shape. The same 16 orientations were used as in the visual search task. Each orientation was repeated 4 times, with target position and orientation randomly interleaved, making 64 trials per block. Observers each completed 4 blocks of training, lasting roughly 1 hour in total.
Throughout the experiment, observers were given regular breaks. However, only after completing the initial 30-min session (the baseline shape judgment task) were they allowed to leave the lab. The rest of the experiment was completed in a second session lasting 2–2.5 hours, during which observers did not leave the lab or turn the lights on.
Results
Figure 3A shows the proportion of convex judgments in the shape judgment task, as a function of shading gradient orientation, for one typical observer, before (red stars) and after (blue circles) training. Figure 3B shows the average of the Gaussian fits to all individual observers' data. The fits accounted for nearly all of the variance in the data (mean R² = 0.985, σ = 0.01). As expected, before training observers saw top-light objects as convex. The difference between the two arrows demonstrates the shift, following training, of the range of orientations predominantly perceived as convex. The mean shift in measured light-prior across observers was 10.0° (σ = 6.8°) in the direction of training, which was significantly different from zero (t₁₁ = 5.09, p < 0.001). The mean shifts for the two separate training conditions were 6.1° (σ = 8.2°) and −12.9° (σ = 3.9°) in the positive and negative conditions, respectively.
The change in performance following training can also be measured by the change in consistency of judgments made at the orientations of greatest ambiguity (the transition orientations), measured before and after training, as shown in Figure 3C. We define consistency as the proportion of convex or concave judgments, whichever is greater than 50%. At the pre-training transition orientations, we expected consistency to improve following training, as observers received feedback that these shading orientations were either consistently convex or consistently concave. In contrast, we expected that stimuli that were ambiguous post-training (the post-training transition orientations) would previously have been perceived more consistently as either convex or concave. We expected this reduction in consistency because training stimuli presented a conflict between visual and haptic perception at these orientations. At the light-prior and the light-prior + 180°, we expected very little change in consistency, as haptic feedback was always consistent with the original visual percept at these stimulus orientations. Figure 3C shows the consistency of shape judgments in pre- and post-training performance at (i) the pre-training transition orientations, (ii) the post-training transition orientations, (iii) the pre-training light-prior, and (iv) the pre-training light-prior + 180°. Our predictions were confirmed: at the pre-training transition orientations consistency improved, and at the post-training transition orientations consistency was reduced. The difference between the changes was significant (t₁₁ = 4.29, p < 0.01). This result again shows an effect of visual–haptic training on shape perception and, in particular, demonstrates large changes in shape perception around the transition orientations.
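In code, this consistency measure is simply the proportion of the majority response at a given orientation:

```python
def consistency(prop_convex):
    """Consistency of shape judgments at one orientation: the proportion of
    the majority response (convex or concave), i.e. max(p, 1 - p)."""
    return max(prop_convex, 1.0 - prop_convex)
```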
Results from the visual search task are summarized in Figure 4. Figure 4A shows visual search performance, averaged across observers, as a function of shading gradient orientation. As noted in the Methods section, the range of orientations used in this task was selected specifically to assess the change in performance at the transition orientations and was therefore not optimal for fitting individual observers' data. However, a fit to the average group data is shown to illustrate key results. The fit consists of a combination of two sine waves plus a constant (see Equation A2). Figure 4A clearly demonstrates that the best performance was achieved when the shading gradient was roughly vertical with top-light (convex) distracters (i.e., around 0°). The pre-training data show a large convex/concave asymmetry (μ = 0.45); performance with convex distracters (around 0°) is far better than with concave distracters (around 180°), a result that is consistent with previous reports of a strong asymmetry in this task (Chacón, 2004; Kleffner & Ramachandran, 1992; Sun & Perona, 1998). The post-training data show a significantly reduced convex/concave asymmetry (μ = 0.18; t₁₁ = 2.7, p < 0.05). Vertical gradients (around 0° and 180°) produced better performance than roughly horizontal gradients, a result that is also consistent with previous findings (Kleffner & Ramachandran, 1992; Sun & Perona, 1998). As can be seen from Figure 4A, the reduction in asymmetry is not due to a generalized practice effect at all orientations except 0°; there is very little change in performance at any orientation except 180°. This suggests a significant improvement specifically in the processing of the concave distracter/convex target stimuli.
Figure 4. (A) Visual search performance as a function of shading gradient orientation, averaged across twelve observers, before (red stars) and after training (blue circles). Orientations were normalized to the average baseline light-prior from the shape task, and data from observers trained with a positive shift have been reversed. A stimulus at 0° consisted of top-light (convex) distracters and a top-dark (concave) target. Due to the wide variation across observers in the mean and variance of perf scores, individual observers' perf values for the pre- and post-training blocks at each orientation were converted to z-scores, calculated across the two blocks, before group averaging. The lines show the fit to the group data before (red solid) and after (blue dashed) training. (B) Perf z-scores measured before (red stripes) and after training (blue solid), at the pre- and post-training transition orientations, at the pre-training light-prior, and at the pre-training light-prior + 180°, averaged across twelve observers. Error bars represent ±1 standard error across observers. The horizontal line at z = 0 shows the average performance across orientations.
Figure 4B shows the perf scores measured before and after training at the pre- and post-training transition orientations and at the orientations of the pre-training light-prior and the light-prior + 180°. If, as proposed, visual search performance is modulated by, or driven by, shape perception, then we would expect a pattern of results similar to the shape judgment data (Figure 3C): an improvement at the pre-training transition orientations and a deterioration at the post-training transition orientations. However, we also expected a general improvement in performance due to practice; therefore, our prediction was of a greater improvement at the pre-training transition orientations than at the post-training transition orientations. Figure 4B demonstrates that there was very little change in performance at either the pre- or post-training transition orientations and no greater improvement at the pre- compared to the post-training orientations (t₁₁ = 0.06, p = 0.95). The four right-most bars of the figure also clearly illustrate the previously discussed reduction in the convex/concave asymmetry following training, by comparison of performance at the light-prior and light-prior + 180° orientations.
Discussion
Our data demonstrate that visual–haptic training had a large effect on the pre-training convex/concave asymmetry in visual search performance with vertical shading gradients. Before training, concave targets were detected more efficiently than convex targets. However, after training this asymmetry was virtually eliminated. In contrast, although training had a significant effect on shape perception around the transition orientations, there was no effect on visual search performance around these orientations. In other words, the shift in light-prior that was observed in the shape judgment task was not mirrored in the visual search performance. 
To ensure that the null result found in the visual search task was not due to a lack of power, a second experiment was conducted. In Experiment 2, we presented a large number of stimulus orientations that were uniformly sampled from the full 360° range. This enabled us to accurately measure individual observers' light-priors for both tasks and gave us greater power to detect a difference in the light-prior shift in the two tasks induced by training. 
Experiment 2
Methods
Observers
Four participants were tested: the two authors and two naïve but experienced psychophysical observers. All four had previously taken part in experiments involving shape judgments and visual search with shaded stimuli very similar to those used in the current study.
Apparatus and stimuli
The apparatus and stimuli were almost identical to those used in Experiment 1. Changes were made only to the shading gradient orientations presented in each task as detailed below. In addition, pre- and post-training sessions were completed on consecutive days at roughly the same time of day, with the training blocks split between the 2 days. 
Shape judgment task and visual search task
Twenty-four evenly spaced shading gradient orientations were presented. The numbers of repetitions and blocks were identical to those in Experiment 1.
Haptic training task
Four repetitions of each of forty-eight evenly spaced shading gradient orientations were presented per block. Two blocks were completed per observer, one at the end of the session on day 1 and the second at the beginning of the session on day 2; each session lasted approximately 1.5 hours. As in Experiment 1, visual–haptic stimuli were consistent with a shift in light-prior of either +27.5° (2 observers) or −27.5° (2 observers).
Results
Each observer's pre- and post-training shape data were fit with a double Gaussian function (Equation A1) to extract their light-prior. Similarly, each individual's visual search data were fit with a sine wave function (Equation A2) and a light-prior extracted. The fits accounted for a very high proportion of the variance in the data (R²: shape judgment task μ = 0.997, σ = 0.005; visual search task μ = 0.933, σ = 0.03).
For purposes of display (not for analysis), the average of the fits to the shape judgment data is displayed in Figure 5A. The proportion of convex judgments is plotted as a function of shading gradient orientation, before (red solid) and after training (blue dashed). The arrows represent the mean light-priors before (μ = −10.4°, σ = 10.9°) and after training (μ = −23.1°, σ = 13.3°). As in Experiment 1, a significant shift in measured light-prior followed training (mean shift = 12.8°, t₃ = 4.6, p < 0.05).
Figure 5. (A) Average Gaussian fits to the shape judgment data from Experiment 2. The lines represent proportion convex judgments as a function of shading gradient orientation, before (red solid) and after (blue dashed) training. All orientations were normalized to the mean baseline prior, and data from the positive training condition have been reversed to allow pooling across conditions. An orientation of 0° corresponds to stimuli light at the top, with a vertical shading gradient. The red arrow shows the average baseline light-prior and the blue arrow the average light-prior following training. (B) Average fits to the visual search data from Experiment 2. The lines represent perf z-scores as a function of shading gradient orientation before (red solid) and after (blue dashed) training. Orientations were normalized to the average baseline light-prior, and data from observers trained with a positive shift have been reversed. A stimulus at 0° consisted of a top-dark (concave) target and top-light (convex) distracters. (C) Average light-prior shift following training in the two tasks from Experiment 2. Error bars represent ±1 standard error across observers.
The average of the fits to the observers' visual search data is displayed in Figure 5B: search performance as a function of distracter orientation is shown before (red solid) and after training (blue dashed). The arrows represent the mean light-priors before (μ = −15.2°, σ = 15.0°) and after training (μ = −16.6°, σ = 13.4°); there was no significant change in light-prior (mean shift = 1.4°, t₃ = 0.8, p = 0.47). Notably, in this second experiment (contrary to Experiment 1), there was no convex/concave asymmetry before training (μ = −0.095, σ = 0.20). This is almost certainly because the observers in Experiment 2 had significant previous experience with these stimuli; we address this further in the General discussion. Not surprisingly, training produced no significant change in asymmetry (post-training μ = −0.003, σ = 0.11, t₃ = 0.95, p = 0.41).
The light-prior shifts in the two tasks induced by training are summarized in Figure 5C. Training had a much greater effect on the light-prior measured in the shape judgment task than in the visual search task. A paired-samples t-test showed that this difference in shifts was significant (t₃ = 3.3, p < 0.05). Given the data from both experiments, we can confidently conclude that the reliable shift in the light-prior for shape perception is not accompanied by analogous changes in search performance.
General discussion
The aim of this study was to use visual–haptic training to modify the priors that guide the perception of shape from shading. Furthermore, we wished to investigate the effect of modifying these priors on subsequent visual search performance with shaded stimuli. We proposed that changes in visual search behavior would indicate that perceived 3D shape, rather than a 2D property of the image, is the preattentive feature driving visual search. In addition, it would indicate that the modification of prior assumptions occurs at an early, preattentive stage of processing. 
With regard to the convexity prior, Experiment 1 demonstrated a significant reduction, following training, in the asymmetry between convex and concave target detection. Sun and Perona (1997) proposed that convex distracters are processed faster because they are consistent with the convexity prior. Accordingly, our observed reduction of this asymmetry could be interpreted as an increase in the speed of processing of concave distracters. This finding supports the proposal that visual search is driven by 3D shape and indicates that interaction with an environment in which concave and convex objects were equally prevalent led to a weakening of the convexity prior. The experienced observers who completed Experiment 2 displayed no pre-training asymmetry (and hence no subsequent reduction in asymmetry). A reduced convex/concave asymmetry for experienced vs. naïve observers has previously been noted by Kleffner and Ramachandran (1992). Taking all of these results together, it appears that prolonged exposure to these types of stimuli (with balanced numbers of convex and concave objects) can reduce the convexity prior underlying search behavior. Furthermore, it may be that extended visual exposure to these stimuli, even without haptic feedback, is sufficient to reduce the asymmetry. 
With regard to the light-from-above prior, Experiments 1 and 2 both provided evidence that visual–haptic training significantly modified the light-prior used in the shape judgment task, but not in the visual search task. Perhaps the simplest explanation for this discrepancy between the tasks would be that training did not actually modify shape perception and that the post-training changes in shape judgments reflected a cognitive strategy. In other words, observers might have learnt to match particular responses to particular visual stimuli, without any change in shape perception. We argue against such an explanation for two reasons: Firstly, in the shape judgment task, observers were asked to report perceived shape. They had no motivation to do otherwise, and any cognitive strategy to modify responses to stimuli only within ≈10° (the size of the observed light-prior shift) of their previous transition orientations would be quite challenging. Secondly, Adams et al. (2004) demonstrated that similar visual–haptic training not only influenced subsequent performance in a shape judgment task, but also performance in a novel reflectance judgment task, thus ruling out explanations based on simple cognitive strategies. 
We assert therefore that training modified observers' shape perception. The subsequent absence of any change in the light-prior measured by visual search behavior supports proposals that visual search performance is not based on perceived 3D shape, but rather on 2D image properties, i.e., shading gradient orientation. However, this interpretation is at odds with the observed reduction in the convex-concave asymmetry. The change in asymmetry suggests that visual search performance is based on perceived shape and that the convexity prior has been weakened. A 2D interpretation also conflicts with previous findings by Enns and Rensink (1990) and Kleffner and Ramachandran (1992). Enns and Rensink compared visual search performance with shaded 3D cube stimuli to performance with stimuli with very similar 2D image properties, but no clear 3D interpretation. They found that performance with the cube stimuli was significantly faster than with the other stimuli and was invariant with set-size. Similarly, Kleffner and Ramachandran found a significantly greater dependence on set-size in visual search performance for circular stimuli with a vertical step change in luminance compared to stimuli with a gradual change in luminance (similar to those used in the present study). These findings are hard to reconcile with an explanation of visual search performance based solely on the 2D image property of shading orientation. In addition, a range of studies using other depth cues, for example, occlusion (He & Nakayama, 1992; Rauschenberger & Yantis, 2001), perspective (Aks & Enns, 1996) and structure from stereo and motion (Rushton, Bradshaw, & Warren, 2007), provide further support for the notion of preattentive features based on the 3D representation. 
How to resolve the mixed results outlined above? Our reduction in the convex/concave asymmetry after training in Experiment 1, combined with previous data on visual search, strongly suggests that 3D shape guides search. However, we did not see the additional changes in search behavior that would be predicted by our trained changes in perceived shape associated with the shifted light-prior. We propose a hybrid explanation involving multiple stages of shape construction. Quantitative, explicit perceptual judgments of shape rely on a fully reconstructed 3D representation. In contrast, pop-out in visual search tasks is, by definition, driven by a preattentive representation of the image or scene. This does not, however, necessitate that pop-out is driven by a 2D representation. We propose that the two tasks used in this study are performed based on different stages of shape reconstruction. Shape may initially be constructed in a “quick and dirty” fashion and this initial preattentive, probably unconscious, representation may drive visual search. The light-prior measured using visual search stimuli will therefore only reflect this first-pass crude analysis. Subsequent processing may involve the refinement of the estimated lighting direction, taking into account additional information, such as other shape cues or recent experience, to produce an improved representation of shape used for explicit shape judgments. We suggest that our brief visual–haptic training did not modify the light-prior at the preattentive stage, but affected a subsequent stage of processing. 
The proposal of the sequential processing of shape-from-shading information is supported by recent findings by Adams (in press). In this study, Adams demonstrated that the strong correlation across observers between light-priors used for visual search and shape judgments (Adams, 2007) is reduced when the head is tilted. Results showed that visual search with shaded stimuli is entirely retinally based and unaffected by head tilt. However, performance in a shape judgment task was affected by head tilt; the assumed light-source was somewhere between retinally and gravitationally based “up.” Evaluation of these previous findings together with the present results suggests that in a search task the visual system uses a “quick and dirty” strategy to compute shape, whereas in a shape judgment task, the measured light-prior reflects additional processing which takes into account head tilt and also recent experience with the world. 
In summary, we found evidence for the modification of the light-from-above prior in a shape judgment task but not in a visual search task. In addition, we found evidence for the modification of the convexity prior in the visual search task. Taking these findings together with previously reported evidence, we propose that the preattentive feature driving visual search with shaded stimuli is 3D shape and that the convexity prior, but not the light-from-above prior, may be modified at a preattentive stage of processing. It appears that the convexity prior is more flexible and easily modified than the light-from-above prior, a sensible strategy for a visual system operating in a world where the predominance of convex objects is less universal than the occurrence of top-lighting.
Appendix A
For the shape task, observers' data (proportion convex as a function of shading gradient orientation, Figure 3A) were modelled using two cumulative Gaussians:  
p(convex(θ)) = 1 + CumulGauss(θ, [μ₂, σ]) − CumulGauss(θ, [μ₁, σ]),  (A1)
where CumulGauss(θ, [μ, σ]) is the cumulative distribution function (the integral from −∞ to θ) of the Gaussian with mean μ and standard deviation σ. μ₁, μ₂, and σ were free parameters, fit individually for each observer. The two Gaussians have equal standard deviation, and their means correspond to the orientations at which perception changed from convex to concave or vice versa: the transition orientations. The light-prior was taken as the mean of the two transition orientations.
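As an illustration, Equation A1 can be fit by nonlinear least squares. The starting values below, and the circular handling of the light-prior average, are our assumptions; the paper does not report its fitting procedure beyond the model itself.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_convex(theta_deg, mu1, mu2, sigma):
    """Equation A1: proportion convex as a function of gradient orientation."""
    return (1.0 + norm.cdf(theta_deg, loc=mu2, scale=sigma)
                - norm.cdf(theta_deg, loc=mu1, scale=sigma))

def fit_shape_judgments(orientations_deg, prop_convex):
    """Fit mu1, mu2 (the transition orientations) and the shared sigma.
    Starting values assume orientations spanning 0-360 deg with transitions
    near 90 and 270 deg; these are illustrative, not the authors' values."""
    p0 = (90.0, 270.0, 20.0)
    (mu1, mu2, sigma), _ = curve_fit(p_convex, orientations_deg,
                                     prop_convex, p0=p0)
    # The light-prior is the average of the two transition orientations;
    # because the convex region wraps around 0 deg in this parameterization,
    # we take the circular midpoint (an interpretive assumption).
    light_prior = ((mu1 + mu2) / 2.0 + 180.0) % 360.0
    return mu1, mu2, sigma, light_prior
```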
For the visual search task, observers' data (perf score as a function of distracter orientation, Figure 4A) were fit with a simple function consisting of two sine waves plus a constant:  
perf(θ) = a·cos(2(θ − α)) + b·cos(θ − α) + c.  (A2)
The two amplitudes (a and b), the constant (c), and the phase (α) were fit as free parameters, individually for each observer. The first component of the function has two peaks and corresponds to the better performance at vertical relative to horizontal orientations. The second, single-peaked component corresponds to the asymmetry in performance between convex and concave distracters. The phase (α, the peak in performance) gives the observer's light-prior.
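A corresponding sketch for Equation A2, again with assumed starting values:

```python
import numpy as np
from scipy.optimize import curve_fit

def perf_model(theta_deg, a, b, c, alpha_deg):
    """Equation A2. The two-cycle cosine captures the vertical-vs-horizontal
    advantage; the one-cycle cosine captures the convex/concave asymmetry;
    alpha (the peak of the fit) is the light-prior."""
    t = np.deg2rad(theta_deg - alpha_deg)
    return a * np.cos(2.0 * t) + b * np.cos(t) + c

def fit_search(orientations_deg, perf_values):
    """Least-squares fit of a, b, c, alpha; starting values are assumptions."""
    p0 = (1.0, 1.0, float(np.mean(perf_values)), 0.0)
    params, _ = curve_fit(perf_model, orientations_deg, perf_values, p0=p0)
    return params  # a, b, c, alpha_deg (alpha = light-prior, in degrees)
```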
Acknowledgments
This work was supported by a Leverhulme Trust early career fellowship (RAC) and an EPSRC project grant EP/D039916/1 (WJA). Thanks to Erich Graf for helpful comments on the manuscript. 
Commercial relationships: none. 
Corresponding author: Rebecca A. Champion. 
Present address: School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3AT, UK. 
References
Adams, W. J. (2007). A common light-prior for visual search, shape and reflectance judgements. Journal of Vision, 7(11):11, 1–7, http://journalofvision.org/7/11/11/, doi:10.1167/7.11.11.
Adams, W. J. (in press). Cognition.
Adams, W. J., Graf, E. W., & Ernst, M. O. (2004). Experience can change the "light-from-above" prior. Nature Neuroscience, 7, 1057–1058.
Aks, D. J., & Enns, J. T. (1992). Visual search for direction of shading is influenced by apparent depth. Perception & Psychophysics, 52, 63–74.
Aks, D. J., & Enns, J. T. (1996). Visual search for size is influenced by a background texture gradient. Journal of Experimental Psychology: Human Perception and Performance, 22, 1467–1481.
Brewster, D. (1826). On the optical illusion of the conversion of cameos into intaglios, and of intaglios into cameos, with an account of other analogous phenomena. Edinburgh Journal of Science, 4, 99–108.
Chacón, J. (2004). Perceived contrast explains asymmetries in visual-search tasks with shaded stimuli. Perception, 33, 1499–1509.
Enns, J. T., & Rensink, R. A. (1990). Influence of scene-based properties on visual search. Science, 247, 721–723.
He, Z. J., & Nakayama, K. (1992). Surfaces versus features in visual search. Nature, 359, 231–233.
Kleffner, D. A., & Ramachandran, V. S. (1992). On the perception of shape from shading. Perception & Psychophysics, 52, 18–36.
Langer, M. S., & Bülthoff, H. H. (2001). A prior for global convexity in local shape-from-shading. Perception, 30, 403–410.
Mamassian, P., & Goutcher, R. (2001). Prior knowledge on the illumination position. Cognition, 81, B1–B9.
Ramachandran, V. S. (1988). Perception of shape from shading. Nature, 331, 163–166.
Rauschenberger, R., & Yantis, S. (2001). Masking unveils pre-amodal completion representation in visual search. Nature, 410, 369–372.
Rushton, S. K., Bradshaw, M. F., & Warren, P. A. (2007). The pop-out of scene-relative object movement against retinal motion due to self-movement. Cognition, 105, 237–245.
Santhi, N., & Reeves, A. (2004). The roles of distractor noise and target certainty in search: A signal detection model. Vision Research, 44, 1235–1256.
Sun, J., & Perona, P. (1996). Early computation of shape and reflectance in the visual system. Nature, 379, 165–168.
Sun, J., & Perona, P. (1997). Shading and stereo in early perception of shape and reflectance. Perception, 26, 519–529.
Sun, J., & Perona, P. (1998). Where is the sun? Nature Neuroscience, 1, 183–184.