Article | October 2015
Visual pop-out in barn owls: Human-like behavior in the avian brain
Journal of Vision October 2015, Vol.15, 4. doi:https://doi.org/10.1167/15.14.4
Julius Orlowski, Christian Beissel, Friederike Rohn, Yair Adato, Hermann Wagner, Ohad Ben-Shahar; Visual pop-out in barn owls: Human-like behavior in the avian brain. Journal of Vision 2015;15(14):4. https://doi.org/10.1167/15.14.4.
Abstract

Visual pop-out is a phenomenon by which the latency to detect a target in a scene is independent of the number of other elements, the distractors. Pop-out is an effective form of visual-search guidance that typically occurs when the target is distinct from the distractors in a single feature, thus facilitating fast detection of predators or prey. However, apart from studies on primates, pop-out has been examined in only a few species and demonstrated thus far in rats, archer fish, and pigeons only. To fill this gap, here we study pop-out in barn owls. These birds are a unique model system for such exploration because their lack of eye movements dictates visual behavior dominated by head movements. Head saccades and interspersed fixation periods can therefore be tracked and analyzed with a head-mounted wireless microcamera, the OwlCam. Using this methodology we confronted two owls with scenes containing search arrays of one target among varying numbers (15–63) of similar-looking distractors. We tested targets distinct either by orientation (Experiment 1) or by luminance contrast (Experiment 2). Search time and the number of saccades until the target was fixated remained largely independent of the number of distractors in both experiments. This suggests that barn owls can exhibit pop-out during visual search, thus expanding the group of species and brain structures that can cope with this fundamental visual behavior. We further discuss the utility of our automatic analysis method for other species and scientific questions.

Introduction
Attentional selection of salient or task-relevant information (Tsotsos, 1990; Yarbus, 1967) helps to focus sensory processing. For example, animals and humans often direct their gaze toward conspicuous objects during visual search. In standard visual-search tasks in the laboratory, observers are asked to search for a target item in a scene that also contains other items, the distractors. Such a task may be classified as easy/parallel or difficult/serial (Wolfe & Horowitz, 2004). In easy search tasks, search time and the number of fixations until the target is found do not depend on the total number of items present, a phenomenon referred to as parallel search or pop-out (Treisman & Gelade, 1980; Wolfe, 1994; Zelinsky & Sheinberg, 1997). Pop-out may thus be regarded as a very effective search strategy. Targets tend to pop out if they are distinctly different from the distractors in at least one feature, such as color, motion, or orientation. In difficult search tasks (e.g., when the target is specified by a distinct combination of features), search time and the number of fixations until the target is found increase linearly with the number of items in a display (Treisman & Gelade, 1980; Williams, Reingold, Moscovitch, & Behrmann, 1997). Here, serial focusing of attention on single items or groups of items at a time is required until the target is found. Therefore, this case is often called serial search.
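This distinction is commonly quantified by the slope of the function relating reaction time to set size. In a standard formulation from the visual-search literature (a generic model, not tied to any single study cited here),

\[
\mathrm{RT}(N) = \mathrm{RT}_0 + s \cdot N,
\]

where \(N\) is the number of items in the display, \(\mathrm{RT}_0\) is the set-size-independent overhead (e.g., stimulus encoding and response preparation), and \(s\) is the search slope. Pop-out corresponds to \(s \approx 0\), whereas serial search yields slopes that grow with task difficulty.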
Since pop-out facilitates the detection of predators and prey, it should have evolved whenever the ecological conditions provided enough selection pressure and enough brain capacity was available. Indeed, pop-out is present in humans and other primates (Hochstein & Ahissar, 2002; Nothdurft, Pigarev, & Kastner, 2009; Treisman & Gelade, 1980; Wolfe & Horowitz, 2004). By contrast, less is known about it in nonprimate animals. Free-flying bees are able to find a target among differently colored objects, but their search time increases with set size (Spaethe, Tautz, & Chittka, 2006). Archer fish are able to shoot at targets displayed on screens based on saliency (Mokeichev, Segev, & Ben-Shahar, 2010) and can match human performance in a serial search task (Rischawy & Schuster, 2013). Barn owls fixate an odd target faster, longer, and more often than a randomly chosen distractor item (Harmening, Orlowski, Ben-Shahar, & Wagner, 2011). These findings demonstrate capabilities of visual search in different species, but are not sufficient to demonstrate pop-out. The only nonprimate species for which pop-out has been demonstrated are the rat (Botly & De Rosa, 2012), the pigeon (Allan & Blough, 1989; Blough, 1977), the zebrafish (Proulx, Parker, Tahir, & Brennan, 2014), and very recently also the archer fish (Ben-Tov, Donchin, Ben-Shahar, & Segev, 2015); this already indicates that a cortex is not necessary for the implementation of pop-out sensitivity. Moreover, focusing on motion cues rather than static pictorial cues, Zahar, Wagner, and Gutfreund (2012) reported motion pop-out–sensitive neurons in owl tectal cells, while Ben-Tov et al. (2015) reported similar findings in the optic tectum of archer fish. Since more information is needed at both the behavioral and neural levels, it is especially interesting to examine nonprimate species in more depth to find out what types of visual-search strategies are present, whether these species have evolved pop-out sensitivity, and how this sensitivity is implemented in the brain.
The barn owl is an excellent model system for such studies. This species is a keen hunter that uses both the auditory and visual systems to locate prey (Harmening & Wagner, 2011; Orlowski, Harmening, & Wagner, 2012; Wagner, Kettler, Orlowski, & Tellers, 2012). It possesses stereopsis (van der Willigen, Frost, & Wagner, 1998) and motion parallax (van der Willigen, Frost, & Wagner, 2002) that both help to unmask camouflaged objects. Crossmodal attentional advantage also has been demonstrated (Hausmann, Plachta, Singheiser, Brill, & Wagner, 2008). Furthermore, a big advantage of using the barn owl for such studies is that their gaze may easily be tracked by monitoring head movements (Masino & Knudsen, 1990; Ohayon, van der Willigen, Wagner, Katsman, & Rivlin, 2006). This is possible because barn owl eye movements are limited to less than 2° (Steinbach & Money, 1973). One way to monitor these head movements is a head-mounted camera, known as the OwlCam (Ohayon, Harmening, Wagner, & Rivlin, 2008). The scenes recorded by the OwlCam offer a unique first person view from the owl's perspective and facilitate analysis of its visual decisions during visual search (or other) tasks. 
Instead of measuring reaction times, as is commonly done in humans (Duncan & Humphreys, 1989; Treisman & Gelade, 1980; Wolfe, 1998), the findings of Harmening, Orlowski, Ben-Shahar, and Wagner (2011) were based on measures better suited to the free-viewing situation, in particular the number of saccades and the time it takes the owl to fixate the odd target. Using similar measures, in the following we report a series of feature-search experiments designed to examine pop-out capacity in barn owls. We report that both search time and the number of saccades until the target was fixated remained largely independent of the number of distractors, both in a search task in which target orientation was the discriminating feature and in a search task in which luminance contrast discriminated the target from the distractors. Taken together, these findings are the first to suggest that, similar to humans, barn owls can exhibit pop-out during visual search.
Methods
Animal subjects
Two American barn owls, Tyto furcata pratincola (subjects WH and HB), from the breeding colony of the Department of Zoology at RWTH Aachen University were used for the experiments. Both animals were hand-raised and tame. Experiments were conducted under a permit issued by the Landespräsidium für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen, Recklinghausen, Germany. During the experiments, the owls' body weight was kept at about 90% of their free-feeding weight (420 g and 480 g). They were rewarded with pieces of chicken meat during the experiments and were fed additional chicken meat after an experiment to maintain body weight irrespective of behavioral performance. The owls participated in experiments 5–6 days a week, approximately 2 hr a day, and were fed in their aviaries when no experiment was conducted. No attempt was made to reverse their nocturnal cycle. Both owls had a small aluminum head post fixed to their skull, to which the OwlCam could be affixed during experiments. This head post was put on the skull under anesthesia before the experiments started (for details, see Vonderschen & Wagner, 2009).
Setup and experimental sequence
Experimental procedures and the basic setup followed Harmening et al. (2011). We recorded first-person–view videos from barn owls wearing the head-mounted OwlCam. The birds were confronted with arrays of items arranged on the floor of the experimental chamber. All arrays contained one odd item (the target) among several similar items (the distractors). In the orientation feature search (Experiment 1), the items were rectangular bars made from white cardboard, measuring 15 × 5 cm. The target was slanted 45° clockwise relative to the prevalent distractor orientation. In the luminance search (Experiment 2), the items were round discs, 5 cm in radius. Here, the target was cut from white cardboard, while the distractors were grey. Arrays of both stimulus types were rectangular in shape and could contain 16, 25, 36, 49, or 64 items. The experimental chamber measured 545 × 405 × 265 cm, and its walls were coated with pyramidal foam for sound attenuation. The owls' perch was placed 200 cm above the floor, close to the smaller wall. From there, the owls could observe the arrays placed on the floor. Between experimental trials, an opaque retractable curtain was lowered in front of the perch to block the animal's view. The target item was placed at a random internal location in the array (i.e., targets were never placed in the outer ring of the array, to avoid margin effects). Interitem distance on the floor was kept constant at 15 cm except for a small positional jitter. Thus, after perspective projection, the retinal image of the arrays varied from an average of 30° × 15° in 4 × 4 = 16-item arrays to 55° × 30° in 8 × 8 = 64-item arrays.
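The reported retinal sizes follow from simple projective geometry. The minimal MATLAB sketch below computes the vertical angle subtended by an array as seen from the perch; perch height and interitem spacing are taken from the text, whereas the horizontal offset of the array from the perch is an assumed free parameter (it is not reported exactly), so the printed angles are illustrative only.

```matlab
% Sketch: vertical visual angle subtended by a search array viewed
% from the perch. Perch height (200 cm) and interitem spacing (15 cm)
% are from the Methods; the horizontal offset of the array's near
% edge is an assumption for illustration.
perchHeight = 200;      % cm (from Methods)
spacing     = 15;       % cm (from Methods)
nearEdge    = 100;      % cm (assumed horizontal offset of the array)

for n = [4 5 6 7 8]     % 16-, 25-, 36-, 49-, and 64-item arrays
    extent    = (n - 1) * spacing;                  % array depth, cm
    thetaNear = atand(perchHeight / nearEdge);      % near-edge elevation
    thetaFar  = atand(perchHeight / (nearEdge + extent));
    fprintf('%d x %d array subtends ~%.1f deg vertically\n', ...
            n, n, thetaNear - thetaFar);
end
```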
Prior to the experiments, the owls were trained to search for the target item. In this training phase, which lasted up to a month, food items were placed on the target to make the owls fixate it as fast as possible. Trials were conducted using the following procedure: First, the owl was placed on the perch with its view blocked by the curtain while the experimenter arranged the stimulus array on the floor. Then the experimenter left the room, retracted the curtain, and switched the light on, thereby starting a trial. The owl would then start searching for the target. A trial was terminated either after the owl flew from the perch to fetch a target or after it had looked around freely for a maximum amount of time—3 min in orientation trials and 1 min in luminance trials. Up to 15 trials per day were performed; approximately 20% (3–4) of these trials were reinforcement trials with food placed on the target bar to keep the owl motivated for the duration of the experimental session. These reinforcement trials were excluded from analysis. Overall, the experimentation period lasted 71 days, for a total of 980 trials.
OwlCam calibration
During all experiments the owls wore the OwlCam, a lightweight wireless microcamera specifically designed to be worn by barn owls without restricting their head movements (Harmening et al., 2011). The OwlCam's digital video signal was stored at 30 frames per second in a 640 × 480 pixel video format. The videos were segregated into fixations (static video segments) and saccades (video segments showing significant motion) using a custom-written algorithm. Due to the barn owls' lack of eye movements (Steinbach & Money, 1973) and the fixed relation of the OwlCam to the gaze of each barn owl, a “first person” representation of the owls' field of view was obtained. However, the location of the owls' “functional fixation spot” (i.e., its region of visual attention in camera coordinates) had to be determined. For that we followed Ohayon et al. (2008) and Harmening et al. (2011), and in a preliminary step presented the owls with a few (3–5) interesting items (food items or food-item dummies) on the floor of the experimental room. To detect the food items, the owls would repeatedly fixate them. By design, these food items were much brighter than the floor, such that the video frames containing fixations could be converted into binary black-and-white images leaving the locations of the targets marked in white. These frames (5,336 fixation frames in owl HB, 6,579 fixation frames in owl WH) were then overlaid, and the quantitative occurrence of items in camera coordinates was determined. This resulted in a circular area on which most of the fixations occurred, which we call the fixation spot (Figure 1a). The center of the fixation spot of owl HB was located at camera coordinate 334 × 315 (in pixels; horizontal, vertical) and was 2.17° wide, while owl WH's fixation spot was at 344 × 319 and was 2.51° wide.
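For concreteness, a minimal MATLAB sketch of this thresholding-and-accumulation step follows. It reflects our reading of the procedure rather than the authors' code; the function name, the threshold parameter, and the final centroid step are illustrative assumptions.

```matlab
% Sketch of the fixation-spot calibration: binarize bright food items
% in each fixation frame, accumulate their occurrences in camera
% coordinates, and take the centroid of the accumulated map.
% fixFrames is assumed to be an H x W x K stack of grayscale fixation
% frames showing bright food items on a dark floor.
function [center, heatMap] = fixationSpot(fixFrames, threshold)
    K = size(fixFrames, 3);
    heatMap = zeros(size(fixFrames, 1), size(fixFrames, 2));
    for k = 1:K
        bw = fixFrames(:, :, k) > threshold;  % food items -> white
        heatMap = heatMap + double(bw);       % accumulate occurrences
    end
    % Items pile up at the preferred retinal position across fixations;
    % estimate its center as the intensity-weighted centroid.
    [X, Y] = meshgrid(1:size(heatMap, 2), 1:size(heatMap, 1));
    center = [sum(X(:) .* heatMap(:)), sum(Y(:) .* heatMap(:))] ...
             / sum(heatMap(:));
end
```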
Figure 1
 
Functional fixation spot and classification of fixations. (a) The fixation map is a heat map with blue colors specifying locations of low target probability and red colors specifying regions of high target probability in the image. Assuming the bird has no reason to consistently fixate at “nothing,” this map thus represents where in the image plane (or retina) the owl prefers to place targets (by proper head movements), a retinal position we consider as the functional fovea or functional fixation spot. Shown here is the result for subject HB after applying the calibration procedure described in the text (also in Harmening et al., 2011). Note the approximately circular shape. (b) Typical stimulus scene, containing a 25-item orientation feature search array on the floor. Note the single target among 24 distractors. Labels mark the three content categories for classification used in this study. Fixations are classified as “target” if they intersect the target (marked by blue box), “inside” if the fixation spot is not in the target area but inside the array area, and “outside” if the fixation spot lies outside the stimulus array. Note that the inside category includes fixations on distractors or anywhere between items in the stimulus array.
Video analysis
OwlCam videos were several minutes (up to 3) long and contained numerous (up to 140) fixations per trial. Since processing these videos for visual search characteristics required accurate and laborious operations, we developed video analysis software that provided fully automatic analysis of various aspects of the data. Implemented in MATLAB (MathWorks, Natick, MA), the system is also equipped with a graphical user interface (GUI) and semiautomatic tools allowing verification and correction of the results by a human inspector (if needed). 
The video-analysis pipeline consisted of the computation of fixations, registration and room stitching, room analysis, and scan-path computation. Each stage in this sequence was designed as a “plug-and-play” module, allowing easy extension to future and different experiments. More specifically, a given OwlCam video trial was segregated into fixations (video segments with no or negligible motion) and saccades (video segments with significant rapid motion). All fixations were classified either as “inside,” “outside,” or, in the rare cases where their content could not be identified due to noise, as “noise.” Representative frames from inside fixations were stitched together to create a panoramic view of the scene from the owls' vantage point. Then, the fixation-spot location of each fixation was mapped to this global panoramic view and the distance to the nearest array item was calculated. Using this information, a scan path was generated, each inside fixation was classified as “target” or not (Figure 1b), and the distance to the target was calculated. Adding the duration of each fixation and the time that elapsed between them, the system stored all scan-path information and exported the data to Microsoft Excel for further analysis. Note that this entire process was done completely automatically; a full description of the computational methodology is provided in the Appendix.
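As an illustration of the first stage, a minimal motion-thresholding sketch is shown below. The authors' actual algorithm is specified in the Appendix; the global frame-differencing rule and the smoothing step used here are simplifying assumptions.

```matlab
% Sketch of fixation/saccade segmentation by global frame differencing
% (a simplified stand-in for the algorithm described in the Appendix).
% frames: H x W x T grayscale video stack from the OwlCam.
function isFixation = segmentFixations(frames, motionThresh)
    T = size(frames, 3);
    motion = zeros(1, T);
    for t = 2:T
        d = abs(double(frames(:, :, t)) - double(frames(:, :, t - 1)));
        motion(t) = mean(d(:));             % global interframe motion
    end
    isFixation = motion < motionThresh;     % low motion -> fixation frame
    % Majority vote over a 3-frame window removes single-frame glitches
    isFixation = conv(double(isFixation), [1 1 1] / 3, 'same') > 0.5;
end
```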
Data analysis
Once extracted from the video trials, the owls' scan paths and viewing behavior were studied with respect to several criteria. In particular, we examined the relative and absolute number of fixations directed at certain items (e.g., the target) or regions of interest, and the search time and number of head saccades performed until these items were first looked at. Unless otherwise stated, we used the following statistical analyses, available as functions in MATLAB: Data groups were analyzed with the Kruskal-Wallis test for significant differences; in the case of significant differences, we used Bonferroni post hoc analysis (with Dunn-Šidák correction) to determine which conditions differed significantly.
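In MATLAB terms, this analysis corresponds to the following sketch, in which the data variables are illustrative placeholders:

```matlab
% Sketch of the statistical analysis described above, using the MATLAB
% functions named in the text. saccadeCounts: vector of per-trial
% saccades to the target; setSizeLabels: matching vector of set sizes
% (16, 25, 36, 49, or 64). Both are illustrative placeholders.
[p, ~, stats] = kruskalwallis(saccadeCounts, setSizeLabels, 'off');
if p < 0.05
    % Post hoc pairwise comparisons with Dunn-Sidak correction to
    % locate the conditions (set sizes) that differ.
    c = multcompare(stats, 'CType', 'dunn-sidak', 'Display', 'off');
end
```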
Results
After training (see Methods), our two owl subjects, HB and WH, performed visual search trials for more than 40 days for each experiment. The results in this section, first for orientation (Experiment 1) and then for luminance contrast based targets (Experiment 2), are reported by the individual subject (using the HB/WH notation) and later pooled over subjects when applicable. In all cases we report mean performance and the standard error of the mean. 
For each subject, the data collected included at least 40 trials per experiment (not including reinforcement trials) and set size (16, 25, 36, 49, or 64). This accumulated to 458 min = 27,472 s = 824,170 frames of OwlCam video for owl HB and 197 min = 11,806 s = 354,195 frames for owl WH in Experiment 1. In Experiment 2, the respective numbers were 147 min = 8,830 s = 264,600 frames and 139 min = 8,356 s = 250,696 frames. For Experiment 1, we recorded a total of 241/233 trials. The owls terminated 189/228 trials by flying from the perch. The average duration of a trial was 114.7/47.7 s, during which a new fixation was selected every 2.6/3.4 s. In most cases, the owls did not stop scanning immediately after first detecting the target but returned to the target's location after a few fixations. On average, the owls made 43.9 ± 29.9/14.1 ± 12.3 fixations during a trial (i.e., in each video). For Experiment 2 we recorded 231/237 videos, and 172/194 of these were terminated by flying. Here, the average trial duration was shorter, 38.2/38.3 s, with a new fixation selected every 4.1/3.8 s. On average, 9.3 ± 0.43/9.7 ± 0.47 fixations were made per trial.
Once fixations were collected, they were categorized as outside, inside, or target hits (Figure 2), as discussed in Methods. In both experiments, only a small fraction of the fixations fell outside the array area (2.2%–27.2%). Most of the remaining fixations were inside, and of these, a large portion were on the target. While the overall proportion of distractor fixations was higher than the proportion of target fixations, this comparison does not reveal much. Instead, the proportion of target fixations has to be set in relation to the expected mean proportion of fixations on a random item. Ideally, these expected proportions range from 6.25% (set size = 16) to 1.56% (set size = 64), i.e., 1/N for set size N. However, because there were inside fixations that were neither on the target nor on a distractor, these numbers are upper bounds and conservative estimates of the expectations. Nevertheless, using this criterion, the observed proportions of target fixations were much higher than the expectations (Figure 2). It is also obvious that the proportions of target fixations were higher in Experiment 2 than in Experiment 1 in both owls. These proportions range from a minimum of 29.5% (owl WH) to a maximum of 36.7% (owl HB) in Experiment 2. The proportions of target fixations in Experiment 2 did not depend on set size (Figure 2). In Experiment 1, the minimum proportion of target fixations was 6.9% (owl WH), while the maximum was 20.7% (owl HB). Here, the proportion of target fixations depended on set size. All in all, approximately twice as many fixations were on the target in luminance-search trials as in orientation-search trials.
Figure 2
 
Proportion of target fixations during experiments. Upper row: results shown for each set size and owl when all fixations are considered. Orientation feature search is color coded blue (owl HB)/light blue (owl WH); luminance feature search is coded green (owl HB)/pale green (owl WH). Lower row: ratio of target fixations with outside fixations discounted. The black dashed line shows the expected proportion of fixations on a random item for each array size. Data from owl HB are based on 10,557/2,151 (orientation/luminance) fixations in 241/255 videos. Data from owl WH are from 3,245/2,293 fixations in 231/237 videos.
Recall that the analysis of OwlCam videos started from the first fixation on the target array (see Methods). From this first fixation, the owls started to look for the target by making saccades across the array (Figure 3a). In the example with set size 16 (Figure 3a, left), the recording started at an outer item. The bird then fixated an inner item before it looked at the target. Thus, the target was first fixated with the second saccade. Likewise, the scan path shown in the middle panel starts at an inner item and passes through two inner items before the owl turns to the target with the third saccade. A similar sequence is shown in the right panel, despite an increase in set size from 36 to 64.
Figure 3
 
Panoramic scene reconstruction and cumulative occurrences. (a) Panoramic scene reconstruction of OwlCam videos showing scan paths and fixation-spot locations until the first hit on the target in arrays containing 16, 36, and 64 items for orientation feature search. Overall luminance differences between the videos are due to different camera angles and battery charge. Fixations are numbered sequentially. Fixation spots are filled blue at the target location, gray if they cover an inner item, and outlined otherwise. Dashed lines represent the scan paths. (b) Normalized cumulative occurrences of saccades until the first target hit (owl HB, top, blue; owl WH, bottom, light blue lines), and of the averaged saccades to all other items (light gray lines) for each array size. All data are normalized to the number of trials for each array size. Orientation feature search is color coded blue (owl HB) and light blue (owl WH); luminance feature search is green (owl HB) and pale green (owl WH). Target saccades are solid lines; dashed lines are average item saccades. The target plot is shifted left and up from the distractors in each condition for both owls, demonstrating that the owls look at the target sooner and in more trials. This effect is stronger in the luminance feature search, although it is clearly present for orientation feature search as well.
As mentioned before, the target bar was fixated much more frequently than any individual distractor item. The analysis of cumulative probabilities (Figure 3b) yields more information than the data presented in Figure 2. For example, in both experiments the owls made at least one saccade to the target in most of the cases. The numbers range from 52% (set size = 64, Experiment 1, owl WH) to 100% (many set sizes, both experiments, owl HB or owl WH). The observed percentages were at least two times higher than the average numbers calculated from saccades toward all other items (compare the dashed with the solid lines in Figure 3b). However, it has to be noted that our analysis program could not detect all distractor items in Experiment 2 due to their lower contrast to the background in the videos. Therefore, the observed percentages might be slightly higher than shown in Figure 3b. This analysis also demonstrated that owl WH, in particular, fixated the target in more trials in the luminance-search task than in the orientation-search task (compare the pale green solid lines with the light blue solid lines in Figure 3b). Moreover, when comparing the normalized cumulative occurrences of saccades toward the target in both feature searches, it is evident that the luminance curves are shifted leftward compared to the orientation curves. This means that the first fixation of the target occurred earlier in the fixation sequence in Experiment 2 than in Experiment 1. Specifically, in the orientation-search task the target was fixated with the first saccade in 7%–48% of trials by owl HB and in 5%–20% of trials by owl WH. These percentages were much higher in the luminance search, with 67%–75% for owl HB and 57%–75% for owl WH. In other words, in more than half of all luminance-search trials, the target was fixated with the first saccade. These data also indicate that the orientation-search task was more difficult for the owls than the luminance-search task.
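A minimal sketch of how such normalized cumulative occurrences can be computed from per-trial data follows; the variable firstHit is an illustrative placeholder holding, for each trial, the saccade index of the first target fixation (NaN if the target was never fixated).

```matlab
% Sketch of the normalized cumulative-occurrence curves of Figure 3b.
% firstHit(i): saccade index of the first target fixation in trial i,
% or NaN if the target was never fixated (illustrative variable).
maxSacc = 30;
cumOcc  = zeros(1, maxSacc);
for s = 1:maxSacc
    cumOcc(s) = sum(firstHit <= s);  % trials with a hit by saccade s
end                                  % (NaN trials never count as hits)
cumOcc = cumOcc / numel(firstHit);   % normalize by the number of trials
plot(1:maxSacc, cumOcc);
xlabel('Saccade number');
ylabel('Cumulative proportion of trials');
```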
When comparing the number of saccades at each set size, we see no differences for owl WH in either experiment (orientation, p = 0.46; luminance, p = 0.46). Owl HB exhibited the same behavior in general, except for set size 16 in Experiment 1 (orientation, p < 0.01; luminance, p = 0.07; Figure 4a). On average across all set sizes, the two owls fixated the target after nearly the same number of saccades: 3.72 ± 0.2 saccades for owl HB and 3.76 ± 0.17 saccades for owl WH in Experiment 1, and 1.11 ± 0.07 saccades for owl HB and 1.83 ± 0.09 saccades for owl WH in Experiment 2. In Experiment 1, HB's number of saccades before the first target fixation increased slightly with set size (best linear fit: y(HBsaccades) = 0.038x + 2.36), while for owl WH the slope was slightly negative: y(WHsaccades) = −0.01x + 4.12 (Figure 4a). When pooled across subjects, the saccades versus set size function had a small increase with set size: y(saccades) = 0.013x + 3.24. The data from Experiment 2 look similar for both owls. The number of saccades did not increase with set size; in fact, it even decreased slightly for owl HB: y(HBsaccades) = −0.009x + 1.44 and y(WHsaccades) = 0.00x + 1.77 (Figure 4b). When pooled, the slope was slightly negative: −0.002 saccades/item.
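The linear best fits reported here reduce to a first-order polynomial fit of the per-set-size means; a MATLAB sketch with hypothetical numbers (not the measured data) is:

```matlab
% Sketch of the set-size slope estimation (linear best fit) behind
% Figure 4. The means below are hypothetical placeholders.
setSizes     = [16 25 36 49 64];
meanSaccades = [3.0 3.3 3.7 3.5 3.9];   % hypothetical per-size means
coeffs = polyfit(setSizes, meanSaccades, 1);
fprintf('slope = %.3f saccades/item, offset = %.2f saccades\n', ...
        coeffs(1), coeffs(2));
% A slope near zero is the pop-out signature; a clearly positive slope
% would indicate an item-by-item (serial) search component.
```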
Figure 4
 
Influence of set size on the number of saccades and search time for both feature searches. The upper row (a, b) shows the number of saccades until target detection; the lower row (c, d) shows the search time until detection. Orientation feature search (a and c) is color coded blue (owl HB)/light blue (owl WH); luminance feature search (b and d) is green (owl HB)/pale green (owl WH). Error bars are standard error of the mean. Lines are the linear best fit to the data. If these features do pop out, the slopes for both saccades and search time should not increase with set size. Only the orientation feature search for owl HB has a positive search slope: 0.038 saccades/item and 0.087 s/item. All other slopes are negative.
So far, we have analyzed the number of saccades it takes until the target is first fixated. The performance in visual search tasks is usually expressed by the reaction time (Palmer, 1995; Treisman & Gelade, 1980). Reaction time is typically measured by a button press or by the latency of a saccade after the initiation of a trial. Following these criteria, we measured the time until the first target hit after the owl had initiated a trial by looking inside the target array. In general, the search-time results closely resemble the findings for the number of saccades until the target was first hit, as presented above. However, some differences seem worth mentioning. In Experiment 1, owl HB was usually faster in fixating the target, with an average time to fixate the target of 6.88 ± 0.6 s. Owl WH fixated the target after 11.07 ± 0.07 s. Again, only set size 16 in owl HB differed significantly (HB, p < 0.01; WH, p = 0.59). Owl HB's search time increased slightly with set size, y(HBtime) = 0.087x + 3.64, while owl WH's search time decreased, y(WHtime) = −0.09x + 14.15 (Figure 4c). In other words, owl HB's search time increased by 87 ms per item in the array. When pooled across subjects, average search time hardly changed: y(time) = 0.012x + 8.08. The search-independent overhead (i.e., the minimum delay until a response is initiated by the owl) was 8.08 s. This corresponds to the saccadic overhead of 3.24 saccades (see above), once the average fixation interval of 2.6/3.4 s is taken into account. In Experiment 2, search time for both owls decreased: y(HBtime) = −0.02x + 3.04 and y(WHtime) = −0.01x + 4.28 (Figure 4d). There were no significant differences between the set sizes for either owl (p = 0.07, p = 0.30). The pooled function was y(time) = −0.014x + 3.75.
In summary, the number of saccades increased slightly with set size in the orientation feature search, but not in the luminance feature search. Search time in both feature searches was largely independent of set size, indicating that barn owls do exhibit a pop-out effect. However, in the orientation feature search, search time and the number of saccades increased slightly for one owl but not for the other; in the luminance feature search, both measures were unaffected by set size.
Discussion and conclusions
We presented data from two experiments that tested visual pop-out in barn owls. In Experiment 1, the target differed from the distractors in orientation; in Experiment 2, it differed in luminance. Pop-out was demonstrated by search time and the number of saccades until the target was fixated, two measures that remained largely independent of the number of distractors in both experiments. In the following sections, we discuss these findings in relation to what is known about visual pop-out in humans and in other animals. We also discuss the differences between the two experiments and finally speculate about the neural substrate underlying visual pop-out.
Pop-out in barn owl in comparison to humans
In human visual search, the pop-out effect is well established and usually occurs in very easy feature-search tasks (Wolfe & Horowitz, 2004). It is characterized by rapid detection and fixation of a salient object, which occurs independently of the number of distractors and is explained by the involvement of parallel processes across the visual field (Treisman & Gelade, 1980). For our studies of barn owls we chose two features (orientation and luminance contrast) that are known to produce pop-out in humans (Nothdurft, 1991, 1992; Sagi & Julesz, 1985; Theeuwes, 1994). In such studies, it is common to measure reaction time from stimulus onset until the detection of the salient target. In barn owls this type of measurement is more problematic because we have little control over the actual time at which the owls start the trial; in particular, it may take some time from stimulus onset until the animal even directs its gaze at the stimulus. To remain as close as possible to the criteria used in humans, we set the beginning of the trial as the time the stimulus first appears in view in the OwlCam video and then measured both search time and the number of saccades until the target was fixated. Indeed, both remained largely independent of the number of distractors in both experiments, suggesting pop-out–like behavior at the phenomenological level. In human visual search, searches with slopes in the range of 20–40 ms per item are considered inefficient (Wolfe & Horowitz, 2004). The slopes for our two experiments are below this range when the data from both owls are pooled (Experiment 1, 12 ms/item; Experiment 2, −14 ms/item). However, we note that the individual slope for owl HB in Experiment 1 (orientation search) was 87 ms/item and thus should be considered inefficient by human visual-search standards.
Moreover, it is known from human visual search that reaction time and the number of saccades and fixations are closely related. Quantitatively, the ratio between the number of fixations and the response time is mostly unaffected by set size, especially in easy searches (Williams et al., 1997; Zelinsky & Sheinberg, 1997); we find this effect in our barn owl experiments as well. In orientation feature search the owls made an average of one saccade every 2.3 ± 0.22 s. This ratio is nearly identical in luminance search, with one saccade every 2.1 ± 0.14 s.
When comparing our results to human visual search, one difference is nevertheless striking: barn owls needed a rather long time to detect (i.e., fixate) the pop-out target—approximately 8 s in Experiment 1 (orientation) and approximately 3 s in Experiment 2 (luminance). Human reaction time in similar experiments is at least one and sometimes up to two orders of magnitude faster, especially in easy search tasks (Williams et al., 1997; Young & Hulleman, 2013). On the other hand, the number of saccades until target fixation in our second experiment was not noticeably different from what is commonly observed in human feature-search experiments (Williams et al., 1997; Young & Hulleman, 2013). The speed of saccades is also comparable, with 800°/s peak speed in barn owl head saccades compared to 900°/s peak speed in human eye saccades (du Lac & Knudsen, 1990). Still, human fixations during search experiments, while somewhat dependent on task difficulty, last approximately 0.25 s, with 3–4 new fixation points selected every second (Vlaskamp, Over, & Hooge, 2005; Young & Hulleman, 2013), which is a rate approximately 10 times higher than that of the owls. Therefore, the differences in reaction times, and more generally in pop-out behavior, between owls and humans can be attributed to a large extent to the duration of the fixations between the saccades. This conclusion indicates that the search for differences between the two species should focus on this particular stage of the behavioral sequence (i.e., the fixations).
Pop-out in other animals
There have been few studies testing visual-search behavior in nonprimate animals. Among the animals investigated are pigeons (Blough, 1984), blue jays (Bond & Kamil, 1999), rats (Botly & De Rosa, 2012), archer fish (Mokeichev et al., 2010; Rischawy & Schuster, 2013), zebrafish (Proulx et al., 2014), and bees (Spaethe et al., 2006). All these species were shown to exhibit visual search using methods that reflect their behavioral capacities, while pop-out–like behavior has thus far been demonstrated in pigeons, rats, zebrafish, and most recently the archer fish (Ben-Tov et al., 2015). One drawback of the research demonstrating pop-out in rats, zebrafish, and archer fish is that only a small number of items (10 or fewer) could be used, probably due to the low visual acuity or constraints on the field of view in these species; thus, what would happen with larger set sizes remains unclear. Indeed, our barn owls exhibited longer response times than rats or pigeons in feature-search tasks. This might be attributed to various factors, such as differences in experimental design, levels of training, or computational capabilities; however, in itself, this fact does not confound our main result that barn owls exhibit pop-out sensitivity for set sizes up to 64 items.
Differences between the two experiments
We used two different experiments to show that barn owls exhibit behavioral pop-out. Although both experiments indicated similar behavior, we found differences in the corresponding results. While the luminance feature did pop out for both owls, only one owl showed efficient search in the orientation feature search. In general, the target was fixated relatively more often per trial in the luminance search than in the orientation search. Also, in the luminance task, the target was detected approximately twice as fast and in half as many saccades. Thus, while both feature searches indicated a type of parallel search mechanism, the barn owl's visual system appears to solve the luminance task more efficiently than the orientation task. The reason might be intrinsic, namely more efficient processing of luminance than of orientation, but in our case it could also have resulted from the fact that our owls were trained for and tested on the orientation feature search before being trained for and tested on the luminance feature search. A reduction in task difficulty, and therefore in response time, is a common training effect in visual search (Schneider & Shiffrin, 1977; Wolfe, Alvarez, & Horowitz, 2000); thus, the improved performance in the luminance visual-search experiment may be attributed to the animals' longer familiarity and greater expertise in coping with visual-search tasks in general. This possibility would in itself be a remarkable and interesting finding because, although we know that barn owls are capable of transferring information from motion parallax to stereo (van der Willigen et al., 2002), the transfer of acquired knowledge between domains is considered a cognitive achievement (Blaisdell & Cook, 2005; Zentall & Hogan, 1976). At the same time, this explanation may be confounded by the mere fact that our training phase was lengthy, lasting several months before each experiment. It is therefore possible that the familiarization curve had already hit a ceiling before the first experiment started.
Broader impacts: From behavior to neural substrate of pop-out
There is a substantial number of studies investigating how the primate brain performs visual-search tasks, often focusing on cortical structures (e.g., Bichot, Rossi, & Desimone, 2005; Chelazzi, Miller, Duncan, & Desimone, 1993). While it was speculated for some time that only animals with a large neocortex may have mechanisms of visual search, it now seems clear that a structure like a neocortex is not necessary for pop-out sensitivity. But what, then, are the minimal requirements? Clearly, the responses within the classical receptive field of neurons are not enough; there must also be interactions between cells beyond the classical receptive field. In fact, the saliency model of Li (2002) proposes that horizontal connections between neurons in V1 provide enough contextual information to mediate the saliency of a stimulus. In birds, such substrates may be found in the avian visual Wulst, which resembles the mammalian visual cortex in many respects (Nieder & Wagner, 1999; Pettigrew & Konishi, 1976; Wagner & Frost, 1993). While not much is known about lateral connections in the visual Wulst of barn owls, and the Wulst is not layered like the mammalian cortex, the experiments of Nieder and Wagner (1999) on subjective contours demonstrate a high level of connectivity that may also underlie pop-out, while its rich connectivity to other areas may facilitate the interaction of bottom-up and top-down mechanisms for visual attention in general (Connor, Egeth, & Yantis, 2004). However, no studies related to pop-out sensitivity have been conducted in the visual Wulst. Typical candidate areas for visual search are also those involved in saccade control and target selection, such as the cortical lateral intraparietal area and frontal eye fields, or the midbrain superior colliculus (Bichot, Schall, & Thompson, 1996; Bisley & Goldberg, 2010; Fecteau & Munoz, 2006; Shen, Valero, Day, & Paré, 2011). The avian homologue of the superior colliculus is the optic tectum. Neurons in the optic tectum are sensitive to the intensity of stimuli in their receptive fields but are less selective for individual features (Knudsen, 1982). The tectum's involvement in stimulus competition by inhibition of weaker stimuli suggests a role in saliency computation as well (Mysore, Asadollahi, & Knudsen, 2010, 2011). Indeed, some pop-out–like sensitivity has been observed in tectal cells of barn owls (Zahar et al., 2012): tectal cells were sensitive to contrasting motion stimuli. However, the data of Zahar et al. (2012) did not show pop-out sensitivity for orientation as was found here. Therefore, it seems that more interactions than those present in the optic tectum are necessary to create pop-out sensitivity for orientation. More experiments are clearly necessary to find out what that substrate is and, perhaps more interestingly, what the minimal circuitry that can support pop-out sensitivity may be.
Methodological contribution
While the OwlCam employed in this study was already introduced in our previous work, here it was used in conjunction with a novel algorithmic system that analyzes OwlCam videos automatically, thus facilitating the collection and analysis of the large amounts of data associated with studies that require many trials and defy manual analysis. Clearly, the methodological implications of this combined system are not limited to the study of pop-out or visual search, as many types of visual behavior could benefit from the reconstruction of the panoramic visual field and of the scan path by which the bird explores it. For instance, head-mounted camera systems have been used in recent studies with peahens and falcons (Kane & Zamani, 2014; Yorzinski, Patricelli, Babcock, Pearson, & Platt, 2013). The same methodology is highly useful for studying visual behavior in other species as well; in particular, it is directly applicable to other species whose eyes are relatively immobile in their sockets, including mammals like tarsiers and quite a few bird species, and, upon further miniaturization, animals with compound eyes.
Acknowledgments
This research was supported in part by the National Institute for Psychobiology in Israel (Grant No. 9-2012/2013) founded by the Charles E. Smith Family, by the Israel Science Foundation (ISF Grants 259/12 and 1274/11), and the German-Israeli Foundation Grant 1-1117-114.1/2010. We also thank the Frankel Fund, the ABC Robotics initiative, and the Zlotowski Center for Neuroscience at Ben-Gurion University for their generous support. 
Commercial relationships: none. 
Corresponding author: Julius Orlowski. 
Email: Julius@bio2.rwth-aachen.de. 
Address: Institute of Biology II, RWTH Aachen University, Aachen, Germany. 
References
Allan S. E., Blough D. S. (1989). Feature-based search asymmetries in pigeons and humans. Perception & Psychophysics, 46, 456–464, doi:10.3758/BF03210860.
Ben-Tov M., Donchin O., Ben-Shahar O., Segev R. (2015). Pop-out in visual search of moving targets in the archer fish. Nature Communications, 6, 1–11, doi:10.1038/ncomms7476.
Bichot N. P., Rossi A. F., Desimone R. (2005). Parallel and serial neural mechanisms for visual search in macaque area V4. Science, 308, 529–534, doi:10.1126/science.1109676.
Bichot N. P., Schall J. D., Thompson K. G. (1996). Visual feature selectivity in frontal eye fields induced by experience in mature macaques. Nature, 381, 697–699, doi:10.1038/381697a0.
Bisley J. W., Goldberg M. E. (2010). Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience, 33, 1–21, doi:10.1146/annurev-neuro-060909-152823.
Blaisdell A. P., Cook R. G. (2005). Two-item same-different concept learning in pigeons. Animal Learning & Behavior, 33, 67–77, doi:10.3758/BF03196051.
Blough D. S. (1977). Visual search in the pigeon: Hunt and peck method. Science 196, 1013–1014, doi:10.1126/science.860129.
Blough P. M. (1984). Visual search in pigeons: Effects of memory set size and display variables. Perception & Psychophysics, 35, 344–352, doi:10.3758/BF03206338.
Bond A. B., Kamil A. C. (1999). Searching image in blue jays: Facilitation and interference in sequential priming. Animal Learning & Behavior, 27, 461–471, doi:10.3758/BF03209981.
Botly L. C. P., De Rosa E. (2012). Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention. Cerebral Cortex, 22, 2441–2453, doi:10.1093/cercor/bhr331.
Chelazzi L., Miller E. K., Duncan J., Desimone R. (1993). A neural basis for visual search in inferior temporal cortex. Nature, 363, 345–347, doi:10.1038/363345a0.
Connor C. E., Egeth H. E., Yantis S. (2004). Visual attention: Bottom-up versus top-down. Current Biology, 14, 850–852, doi:10.1016/j.cub.2004.09.041.
du Lac S., Knudsen E. I. (1990). Neural maps of head movement vector and speed in the optic tectum of the barn owl. Journal of Neurophysiology, 63, 131–146, doi:10.1152/jn.01142.2009.
Duncan J., Humphreys G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458, doi:10.1037/0033-295X.96.3.433.
Fecteau J. H., Munoz D. P. (2006). Salience, relevance, and firing: A priority map for target selection. Trends in Cognitive Sciences, 10, 382–390, doi:10.1016/j.tics.2006.06.011.
Harmening W. M., Orlowski J., Ben-Shahar O., Wagner H. (2011). Overt attention toward oriented objects in free-viewing barn owls. Proceedings of the National Academy of Sciences, USA, 108, 8461–8466, doi:10.1073/pnas.1101582108.
Harmening W. M., Wagner H. (2011). From optics to attention: Visual perception in barn owls. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 197, 1031–1042, doi:10.1007/s00359-011-0664-3.
Hausmann L., Plachta D. T. T., Singheiser M., Brill S., Wagner H. (2008). In-flight corrections in free-flying barn owls (Tyto alba) during sound localization tasks. Journal of Experimental Biology, 211, 2976–2988, doi:10.1242/jeb.020057.
Hochstein S., Ahissar M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36, 791–804, doi:10.1016/S0896-6273(02)01091-7.
Kane S. A., Zamani M. (2014). Falcons pursue prey using visual motion cues: New perspectives from animal-borne cameras. Journal of Experimental Biology, 217, 225–234, doi:10.1242/jeb.092403.
Knudsen E. I. (1982). Auditory and visual maps of space in the optic tectum of the owl. Journal of Neuroscience, 2, 1177–1194.
Li Z. (2002). A saliency map in primary visual cortex. Trends in Cognitive Sciences, 6, 9–16, doi:10.1016/S1364-6613(00)01817-9.
Masino T., Knudsen E. (1990). Horizontal and vertical components of head movement are controlled by distinct neural circuits in the barn owl. Nature, 345, 434–437, doi:10.1038/345434a0.
Mokeichev A., Segev R., Ben-Shahar O. (2010). Orientation saliency without visual cortex and target selection in archer fish. Proceedings of the National Academy of Sciences, USA, 107, 16726–16731, doi:10.1073/pnas.1005446107.
Mysore S. P., Asadollahi A., Knudsen E. I. (2010). Global inhibition and stimulus competition in the owl optic tectum. Journal of Neuroscience, 30, 1727–1738, doi:10.1523/JNEUROSCI.3740-09.2010.
Mysore S. P., Asadollahi A., Knudsen E. I. (2011). Signaling of the strongest stimulus in the owl optic tectum. Journal of Neuroscience, 31, 5186–5196, doi:10.1523/JNEUROSCI.4592-10.2011.
Nieder A., Wagner H. (1999). Perception and neuronal coding of subjective contours in the owl. Nature Neuroscience, 2, 660–663, doi:10.1038/10217.
Nothdurft H.-C. (1991). Texture segmentation and pop-out from orientation contrast. Vision Research, 31, 1073–1078, doi:10.1016/0042-6989(91)90211-M.
Nothdurft H.-C. (1992). Feature analysis and the role of similarity in preattentive vision. Perception & Psychophysics, 52, 355–375, doi:10.3758/BF03206697.
Nothdurft H.-C., Pigarev I. N., Kastner S. (2009). Overt and covert visual search in primates: Reaction times and gaze shift strategies. Journal of Integrative Neuroscience, 8, 137–174, doi:10.1142/S0219635209002101.
Ohayon S., Harmening W. M., Wagner H., Rivlin E. (2008). Through a barn owl's eyes: Interactions between scene content and visual attention. Biological Cybernetics, 98, 115–132, doi:10.1007/s00422-007-0199-4.
Ohayon S., van der Willigen R. F., Wagner H., Katsman I., Rivlin E. (2006). On the barn owl's visual pre-attack behavior: I. Structure of head movements and motion patterns. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 192, 927–940, doi:10.1007/s00359-006-0130-9.
Orlowski J., Harmening W. M., Wagner H. (2012). Night vision in barn owls: Visual acuity and contrast sensitivity under dark adaptation. Journal of Vision, 12 (13): 4, 1–8, doi:10.1167/12.13.4 [PubMed] [Article].
Palmer J. (1995). Attention in visual search: Distinguishing four causes of a set-size effect. Current Directions in Psychological Science, 4, 118–123, doi:10.1111/1467-8721.ep10772534.
Pettigrew J. D., Konishi M. (1976). Neurons selective for orientation and binocular disparity in the visual Wulst of the barn owl (Tyto alba). Science, 193, 675–678, doi:10.1126/science.948741.
Proulx M. J., Parker M. O., Tahir Y., Brennan C. H. (2014). Parallel mechanisms for visual search in zebrafish. PLoS One, 9, e111540, doi:10.1371/journal.pone.0111540.
Rischawy I., Schuster S. (2013). Visual search in hunting archerfish shares all hallmarks of human performance. Journal of Experimental Biology, 216, 3096–3103, doi:10.1242/jeb.087734.
Sagi D., Julesz B. (1985). “Where” and “what” in vision. Science, 228, 1217–1219, doi:10.1126/science.4001937.
Schneider W., Shiffrin R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66, doi:10.1037/0033-295X.84.1.1.
Shen K., Valero J., Day G. S., Paré M. (2011). Investigating the role of the superior colliculus in active vision with the visual search paradigm. European Journal of Neuroscience, 33, 2003–2016, doi:10.1111/j.1460-9568.2011.07722.x.
Spaethe J., Tautz J., Chittka L. (2006). Do honeybees detect colour targets using serial or parallel visual search? Journal of Experimental Biology, 209, 987–993, doi:10.1242/jeb.02124.
Srinivasa Reddy B., Chatterji B. N. (1996). An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE Transactions on Image Processing, 5, 1266–1271, doi:10.1109/83.506761.
Steinbach M. J., Money K. E. (1973). Eye movements of the owl. Vision Research, 13, 889–891, doi:10.1016/0042-6989(73)90055-2.
Theeuwes J. (1994). Parallel search for a conjunction of shape and contrast polarity. Vision Research, 34, 3013–3016.
Treisman A. M., Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136, doi:10.1016/0010-0285(80)90005-5.
Tsotsos J. (1990). Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13, 423–469, doi:10.1017/S0140525X00079577.
van der Willigen R. F., Frost B. J., Wagner H. (1998). Stereoscopic depth perception in the owl. Neuroreport, 9, 1233–1237, doi:10.1097/00001756-199804200-00050.
van der Willigen R. F., Frost B., Wagner H. (2002). Depth generalization from stereo to motion parallax in the owl. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology, 187, 997–1007, doi:10.1007/s00359-001-0271-9.
Vlaskamp B. N. S., Over E., Hooge I. T. C. (2005). Saccadic search performance: the effect of element spacing. Experimental Brain Research, 167, 246–259, doi:10.1007/s00221-005-0032-z.
Vonderschen K., Wagner H. (2009). Tuning to interaural time difference and frequency differs between the auditory arcopallium and the external nucleus of the inferior colliculus. Journal of Neurophysiology, 101, 2348–2361, doi:10.1152/jn.91196.2008.
Wagner H., Frost B. (1993). Disparity-sensitive cells in the owl have a characteristic disparity. Nature, 364, 796–798, doi:10.1038/364796a0.
Wagner H., Kettler L., Orlowski J., Tellers P. (2012). Neuroethology of prey capture in the barn owl (Tyto alba L.). Journal of Physiology, Paris, 107, 51–61, doi:10.1016/j.jphysparis.2012.03.004.
Williams D. E., Reingold E. M., Moscovitch M., Behrmann M. (1997). Patterns of eye movements during parallel and serial visual search tasks. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 51, 151–164, doi:10.1037/1196-1961.51.2.151.
Wolfe J. M. (1994). Guided search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe J. M. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9, 33–39, doi:10.1111/1467-9280.00006.
Wolfe J. M., Alvarez G. A., Horowitz T. S. (2000). Attention is fast but volition is slow. Nature, 406, 691, doi:10.1038/35021132.
Wolfe J. M., Horowitz T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5, 495–501, doi:10.1038/nrn1411.
Yarbus A. (1967). Eye movements and vision. New York: Plenum.
Yorzinski J. L., Patricelli G. L., Babcock J. S., Pearson J. M., Platt M. L. (2013). Through their eyes: Selective attention in peahens during courtship. Journal of Experimental Biology, 216, 3035–3046, doi:10.1242/jeb.087338.
Young A. H., Hulleman J. (2013). Eye movements reveal how task difficulty moulds visual search. Journal of Experimental Psychology: Human Perception & Performance, 39, 168–190, doi:10.1037/a0028679.
Zahar Y., Wagner H., Gutfreund Y. (2012). Responses of tectal neurons to contrasting stimuli: An electrophysiological study in the barn owl. PLoS One, 7, e39559, doi:10.1371/journal.pone.0039559.
Zelinsky G., Sheinberg D. (1997). Eye movements during parallel–serial visual search. Journal of Experimental Psychology: Human Perception & Performance, 23, 244–262.
Zentall T. R., Hogan D. E. (1976). Pigeons can learn identity or difference, or both. Science, 191, 408–409, doi:10.1126/science.191.4225.408.
Zitová B., Flusser J. (2003). Image registration methods: A survey. Image and Vision Computing, 21, 977–1000, doi:10.1016/S0262-8856(03)00137-9.
Figure 1
 
Functional fixation spot and classification of fixations. (a) The fixation map is a heat map with blue colors specifying locations of low target probability and red colors specifying regions of high target probability in the image. Assuming the bird has no reason to consistently fixate at “nothing,” this map thus represents where in the image plane (or retina) the owl prefers to place targets (by proper head movements), a retinal position we consider as the functional fovea or functional fixation spot. Shown here is the result for subject HB after applying the calibration procedure described in the text (also in Harmening et al., 2011). Note the approximately circular shape. (b) Typical stimulus scene, containing a 25-item orientation feature search array on the floor. Note the single target among 24 distractors. Labels mark the three content categories for classification used in this study. Fixations are classified as “target” if they intersect the target (marked by blue box), “inside” if the fixation spot is not in the target area but inside the array area, and “outside” if the fixation spot lies outside the stimulus array. Note that the inside category includes fixations on distractors or anywhere between items in the stimulus array.
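For readers who want to reproduce this step, the following Python sketch (a minimal illustration, not the authors' code; the function name, smoothing width, and input format are assumptions) shows how the target's image coordinates recorded at each fixation could be accumulated into such a heat map, whose peak is then taken as the functional fixation spot:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_map(target_xy, frame_shape, sigma=15.0):
        # Accumulate the target's (x, y) image coordinates at each fixation
        # into a 2-D count map; sigma (in pixels) is an assumed smoothing width.
        heat = np.zeros(frame_shape)
        for x, y in target_xy:
            r, c = int(round(y)), int(round(x))
            if 0 <= r < frame_shape[0] and 0 <= c < frame_shape[1]:
                heat[r, c] += 1
        heat = gaussian_filter(heat, sigma)                # smooth the counts
        peak = np.unravel_index(np.argmax(heat), heat.shape)
        return heat / heat.max(), peak                     # normalized map, (row, col) of peak

The peak of the smoothed map marks the retinal location the owl preferentially aligns with targets, corresponding to the roughly circular hot spot in panel (a).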
Figure 2
 
Proportion of target fixations during experiments. Upper row: Results shown for each set size and owl when all fixations are considered. Orientation feature search is color coded blue (owl HB)/light blue (owl WH); luminance feature search is coded green (owl HB)/pale green (owl WH). Lower row: Proportion of target fixations with outside fixations discounted. The black dashed line shows the expected proportion of fixations on a random item for each array size. Data from owl HB are based on 10,557/2,151 (orientation/luminance) fixations in 241/255 videos. Data from owl WH are from 3,245/2,293 fixations in 231/237 videos.
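The dashed chance line follows directly from the set size: with N equally likely items, a random fixation on an item hits the target with probability 1/N. A one-line check in Python (array sizes assumed from the 15–63 distractor range plus one target):

    # Chance level for the proportion of target fixations among item fixations.
    for n_items in (16, 25, 36, 49, 64):   # assumed array sizes, one target each
        print(f"{n_items} items: chance = {1 / n_items:.3f}")

Proportions well above this line therefore indicate that target fixations are not explained by random sampling of items.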
Figure 3
 
Panoramic scene reconstruction and cumulative occurrences. (a) Panoramic scene reconstruction of OwlCam videos showing scan paths and fixation spot locations until the first hit on the target in arrays containing 16, 36, and 64 items for orientation feature search. Overall luminance differences between the videos are due to different camera angles and battery charge. Fixations are numbered sequentially. Fixation spots are filled blue at the target location, gray if they cover an inner item, and outlined otherwise. Dashed lines represent the scan paths. (b) Normalized cumulative occurrences of saccades until the first target hit (owl HB, top, blue; owl WH, bottom, light blue lines), and of the averaged saccades to all other items (light gray lines) for each array size. All data are normalized to the number of trials for each array size. Orientation feature search is color coded blue (owl HB) and light blue (owl WH); luminance feature search is green (owl HB) and pale green (owl WH). Target saccades are solid lines; dashed lines are average item saccades. In each condition and for both owls, the target curve is shifted left of and above the distractor curve, showing that the owls fixated the target sooner and in a larger proportion of trials. This effect is stronger in the luminance feature search, although it is clearly present in the orientation feature search as well.
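Stitching OwlCam frames into such a panorama requires frame-to-frame registration. The sketch below shows the translational core of FFT-based phase correlation, in the spirit of the Srinivasa Reddy and Chatterji (1996) method cited above; it is an illustrative sketch rather than the pipeline actually used, and the rotation and scale handling that the full method adds via log-polar resampling is omitted:

    import numpy as np

    def phase_correlation(frame_a, frame_b):
        # Estimate the (row, col) shift mapping frame_b onto frame_a
        # from the normalized cross-power spectrum of the two frames.
        F_a = np.fft.fft2(frame_a)
        F_b = np.fft.fft2(frame_b)
        cross = F_a * np.conj(F_b)
        cross /= np.abs(cross) + 1e-12                  # epsilon avoids division by zero
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks beyond the midpoint wrap around to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

Chaining the estimated shifts across consecutive frames places every fixation in a common panoramic coordinate frame, which is what panel (a) displays.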
Figure 4
 
Influence of set size on the number of saccades and on search time for both feature searches. The upper row (a, b) shows the number of saccades until target detection; the lower row (c, d) shows the search time until detection. Orientation feature search (a and c) is color coded blue (owl HB)/light blue (owl WH); luminance feature search (b and d) is green (owl HB)/pale green (owl WH). Error bars are standard errors of the mean. Lines are the linear best fit to the data. If these features pop out, neither the number of saccades nor the search time should increase with set size, that is, the slopes should be near zero. Only the orientation feature search for owl HB has a positive search slope: 0.038 saccades/item and 0.087 s/item. All other slopes are negative.
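The reported search slopes are simply the first-order coefficient of a least-squares line through the per-set-size means; a near-zero slope is the operational signature of pop-out. A minimal Python sketch with made-up numbers (not the owls' data):

    import numpy as np

    set_sizes = np.array([16, 25, 36, 49, 64])            # assumed array sizes
    mean_time = np.array([2.1, 2.0, 2.3, 2.2, 2.1])       # hypothetical mean search times (s)

    slope, intercept = np.polyfit(set_sizes, mean_time, 1)
    print(f"search slope: {slope:+.3f} s/item")           # near zero -> consistent with pop-out

The same fit applied to the number of saccades per trial yields the saccades/item slopes quoted in the caption.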