Visual control of an action discrimination in pigeons
Author Affiliations
  • Muhammad A. J. Qadri
    Department of Psychology, Tufts University, Medford, MA, USA
    Muhammad.Qadri@tufts.edu
  • Yael Asen
    Department of Psychology, Tufts University, Medford, MA, USA
  • Robert G. Cook
    Department of Psychology, Tufts University, Medford, MA, USA
Journal of Vision May 2014, Vol.14, 16. doi:https://doi.org/10.1167/14.5.16
Abstract
Recognizing and categorizing behavior is essential for all animals. The visual and cognitive mechanisms underlying such action discriminations are not well understood, especially in nonhuman animals. To identify the visual bases of action discriminations, four pigeons were tested in a go/no-go procedure to examine the contribution of different visual features in a discrimination of walking and running actions by different digital animal models. Two different tests with point-light displays derived from studies of human biological motion failed to support transfer of the learned action discrimination from fully figured models. Tests with silhouettes, contours, and the selective deletion or occlusion of different parts of the models indicated that information about the global motions of the entire model was critical to the discrimination. This outcome, along with earlier results, suggests that the pigeons' discrimination of these locomotive actions involved a generalized categorization of the sequence of configural poses. Because the motor systems for locomotion and flying in pigeons share little in common with quadruped motions, the pigeons' discrimination of these behaviors creates problems for motor theories of action recognition based on mirror neurons or related notions of embodied cognition. It suggests instead that more general motion and shape mechanisms are sufficient for making such discriminations, at least in birds.

Introduction
The detection, recognition, categorization, and interpretation of the behavior of other animals are vital to the survival of many species. An essential social skill in humans, our capabilities for interpreting behaviors are highly developed and potentially specialized. In the last decade, a marked upsurge in research examining action recognition in humans has been inspired in part by the discovery of mirror neurons in monkeys (Buccino, Binkofski, & Riggio, 2004; Decety & Grèzes, 1999; di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992). This has given rise to the development of a number of motor-based theories of human action, embodied cognition, language, intentionality, and social cognition (Arbib, 2005; Engel, Maye, Kurthen, & König, 2013; Gallese, 2007; Grafton, 2009; Iacoboni, 2009; Jeannerod, 2001; Rizzolatti, Fogassi, & Gallese, 2001; Wilson & Knoblich, 2005). One recent idea, for instance, has centered on the notion that humans have an action observation network that is critically tied to the embodied simulation of the movements of others and is essential to understanding conspecific actions and intentions (e.g., Grafton, 2009). It has been suggested that this system uses species-specific motor-based knowledge to recognize actions by internally simulating or emulating them (Buccino, Lui, et al., 2004; Jeannerod, 2001; Wilson & Knoblich, 2005). 
Recognizing and interpreting the behavior of both conspecifics and heterospecifics is equally critical for nonhuman animals, serving important functions in courtship, mate selection, communication, territory defense, learning by imitation, and social foraging (Byrne & Russon, 1998; Fernández-Juricic, Erichsen, & Kacelnik, 2004). Not all animals have the luxury of the analytical power in a human brain, however. Grafton's (2009) proposed human action observation network, for example, exceeds the total brain size of many birds, whose neural hardware has likely been limited in size by the evolutionary demands of muscle-powered flight. Therefore, understanding how action recognition and its neural mechanisms work in other nonhuman species may provide important insight into our own action recognition. The theoretical analysis of action recognition by nonhuman animals has progressed much more slowly because of the difficulty of controlling and using “behavior” in experimentally analytic situations. Animals just don't take direction well. Digital software used to create animated displays of behavior, however, holds considerable promise for moving beyond this problem. 
Recently, we successfully taught pigeons to discriminate and group the walking and running actions of eight different digital animals using life-like, articulated, animated models in a go/no-go task (Asen & Cook, 2012). Because these locomotor activities are likely salient natural action categories (Malt et al., 2008), they provided a good starting place for building on the prior results of video-based action recognition (Dittrich & Lea, 1993; Dittrich, Lea, Barrett, & Gurr, 1998; Jitsumori, Natori, & Okuyama, 1999). In that study, each digital animal model ran or walked in place on a textured background (see examples in Figure 1). To encourage action categorization, the digital models were rendered from 12 different camera perspectives (combinations of elevation, azimuth, and distance). It was found that (a) this type of action discrimination was easily acquired; (b) it showed significant transfer to novel species moving in biologically appropriate but distinct ways; (c) it exhibited viewpoint invariance over camera distance, elevation, and perspective; (d) it did not vary substantially with variations in presentation speed; and (e) it showed selective interference with the inversion of the video or the randomization of its sequential frames. The results seemed most consistent with the hypothesis that the pigeons learned action categories for the different behaviors as a series of sequenced poses. Given that pigeon locomotion likely does not share motor representations in common with the different quadruped actions that they were discriminating, these results suggest that behaviors can sometimes be visually discriminated without their embodiment in the observer. 
Figure 1
Example of one of the eight animal models used in these experiments to exemplify the actions of walking and running. It is shown as rendered from a low, close, side perspective. Superimposed on the displays are the different motion paths of five body parts (nose, neck junction, tail junction, fore right foot, and rear right foot). These paths were not present in the stimuli tested with the pigeons.
Computational models have explored the problem of human behavior recognition based exclusively on visual information for a variety of functions (Aggarwal & Cai, 1999; Poppe, 2010; Wang, Hu, & Tan, 2003). One computer vision approach focuses on the higher-level global or configural organization among different body parts to recognize action. The representation used in these theories often involves hierarchical, geometry-based, configural models coding the relative motion of body limbs and joints (Aggarwal & Cai, 1999). A second approach codes nonconfigural, and often nonparametric, representations to sufficiently discriminate among behaviors. These theories vary in many ways, including how and what information is encoded, from global representations, such as space-time volumes or integrated silhouettes, to more localized features, such as optic flow or periodic motion trajectories (e.g., Bobick & Davis, 2001; Polana & Nelson, 1997; Schindler & Van Gool, 2008). One nonconfigural account of the actions in Figure 1, for example, might isolate the localized movement of the five different points traced in each example. Given any one of these paths, but especially those of the feet, it would be possible to determine if the model were running or walking without processing the entire figure. Some models of human biological motion perception utilize both configural/top-down and local/bottom-up information with some success in reproducing experimental outcomes (Giese & Poggio, 2003). It is not possible to precisely discriminate among the wide variety of proposed computer models, but a key feature in these methods is the use of global or local cues. Identifying which cues pigeons use would help to determine the visual and cognitive mechanisms involved in avian action recognition and in identifying the relevant class of computational models for comparison. 
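As a concrete illustration of such a local, nonconfigural strategy, a minimal sketch follows: it estimates the cycle frequency of a single tracked point (e.g., a foot) from its vertical trajectory and labels the action by comparing that frequency to a cutoff placed between the walking and running cycle rates reported in the Methods (roughly 0.82 vs. 1.7 cps). The function names, sampling rate, and cutoff are illustrative assumptions, not part of any published model.

```python
import numpy as np

def cycle_frequency(y, fps=33.3):
    """Estimate the dominant cycle frequency (Hz) of one tracked point
    from its vertical trajectory via the largest FFT component."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    spectrum = np.abs(np.fft.rfft(y))
    spectrum[0] = 0.0                                  # ignore the DC term
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

def classify_gait(y, fps=33.3, cutoff=1.25):
    """Label a single trajectory 'run' or 'walk' by comparing its cycle
    frequency to a cutoff midway between the reported mean cycle rates
    (~1.7 cps running vs. ~0.82 cps walking)."""
    return "run" if cycle_frequency(y, fps) > cutoff else "walk"
```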
Asen and Cook's (2012) evidence seemed to favor the pigeons' use of the more generalized higher-level organization of the digital models' actions. First, inverting the video disrupted the birds' discrimination, much as in humans (Dittrich, 1993). Such stimulus inversions only minimally disturb the localized nonconfigural features of the displays—although effects like this have been attributed to spatially relevant local motion detection (Hirai, Chang, Saunders, & Troje, 2011). Second, the gaits of the animated animal models varied greatly (i.e., ponderous elephants, lithe cats), but each of the models supported good transfer of the locomotion discrimination suggesting a higher-level recognition of the actions than species-specific features. Finally, the viewpoint invariance of the pigeons' discrimination across perspectives suggests that the precise appearance of the features or their relationships was not particularly critical. Although these results suggest that the pigeons use larger-scale or global features, the question can be investigated directly and more precisely using digitally altered stimuli. 
The present experiments specifically investigated the representation of these locomotive actions in the pigeons by testing them with different display manipulations. The goal of the experiments was to determine if the pigeons relied more on global or local information to discriminate the actions of the digital models. Pigeons previously trained to discriminate walking and running actions were tested. In Experiment 1, we investigated whether the pigeons could transfer their established action discrimination to point-light displays (PLDs) similar to those studied in human tests of “biological motion” (Johansson, 1973). In Experiment 2, the pigeons were tested with models in which internal local features were eliminated by using only the contour or silhouette of the acting models. Finally, in Experiment 3, different portions of the models were digitally occluded or deleted to determine which parts of the models were critical in mediating the discrimination. Better understanding how the pigeons processed and classified precisely controlled movements that simulate the actions of different animals may offer insights into how they represent behaviors generally and the nature of the visual mechanisms involved. 
Experiment 1
In humans, “biological motion” displays are regularly used to examine the discrimination and recognition of actions (Blake & Shiffrar, 2007; Johansson, 1973). Consisting of coordinated moving points or dots corresponding to the articulated motions of different behaviors, such PLDs powerfully invoke the perception of a behaving actor in humans. Humans can easily classify a wide variety of actions and recognize many socially relevant features (e.g., age, gender, emotion) from these simple moving elements (Blake & Shiffrar, 2007). Thus, PLD perception seems to require the spatial and temporal integration of the separated and discrete elements into a global perception of action without corresponding form information (however, see Thirkettle, Benton, & Scott-Samuel, 2009). Importantly, this perception is derived from both global and local features of the stimuli (Beintema & Lappe, 2002; Hirai et al., 2011). 
This ease of recognizing biological motion in PLD stimuli by humans has, in turn, generated sharp interest in whether such displays similarly generate the same type of perception in nonhuman mammals (Blake, 1993; J. Brown, Kaplan, Rogers, & Vallortigara, 2010; Oram & Perrett, 1994; Parron, Deruelle, & Fagot, 2007; Puce & Perrett, 2003; Tomonaga, 2001) and birds (Dittrich et al., 1998; Regolin, Tommasi, & Vallortigara, 2000; Troje & Aust, 2013; Vallortigara, Regolin, & Marconato, 2005). For birds, the results have been mixed. 
Dittrich et al. (1998) trained pigeons to discriminate between pecking and nonpecking behaviors of conspecifics using video playback. Following fully detailed video training, those birds that learned the discrimination showed limited transfer to PLDs of the same behaviors. A follow-up experiment revealed that four of eight naïve pigeons could learn to discriminate PLD displays of these same behaviors, but they showed no transfer of the discrimination to fully detailed videos. These mixed results suggest that the processing of the PLDs by pigeons does not readily generate the same percept of a behaving animal as expressed via video playback. 
Troje and Aust (2013) recently trained eight pigeons with PLDs in a direction discrimination, in which they had to discriminate left-walking from right-walking human or pigeon PLD walkers in a choice task. The authors then tested the discrimination with globally and locally inconsistent displays and various inversion controls. They found two pigeons that appeared to be responding to the globally facing direction of the walkers, and the remaining six attended primarily to the dots corresponding to the movement of the feet. The latter pattern of results suggests that, for the majority of the pigeons, their perception and discrimination of these biological motion displays was locally biased. 
Young chicks have been tested several times with biological motion animations (Regolin et al., 2000; Vallortigara et al., 2005). Using an imprinting procedure, Regolin et al. (2000) imprinted a large number of chicks on PLD stimuli of either a walking or a scrambled hen. When later tested for preference with both displays, females displayed a slightly greater preference for the imprinted animations (walking or scrambled), and males showed a small amount of avoidance to the imprinted displays (walking or scrambled). Using a preferential proximity paradigm, Vallortigara et al. (2005) investigated PLD perception in newly hatched chicks by examining their distance from two possible test displays. Testing a large number of chicks, their experiment revealed a small, but significant, proximity preference for an articulated hen PLD, an articulated cat PLD, and a scrambled hen PLD when compared to random or rigid dot motion displays. While it is not clear that these displays are perceived exactly as intended, the authors suggest these findings imply that chicks have an innate predisposition for processing the types of features that underlie biological motion perception. 
In the current experiment, we created PLD stimuli that retained the articulated structure and motion features of our already established walking and running digital models. We then examined how discrimination with these articulated PLD stimuli compared to the discrimination of full-figured stimuli and the discrimination of several important controls. One of these controls was a scrambled condition, which had identically moving dots positioned randomly about the display. This control contains all of the same motion information but lacks the structural articulation, coordination, and coherence of the motion that promotes biological motion perception. The second was an inversion condition in which the dot pattern was inverted, resulting in the “legs” of the PLD model pointing up and the “head” and “torso” positioned toward the bottom. This control provides the same coordinated articulation and periodic timing but disrupts location-specific motion features (e.g., as in Blake & Shiffrar, 2007; Troje & Aust, 2013). Typically, this control has been interpreted as disrupting global processing, but recent research shows that this arguably global manipulation affects the weight given to local motion cues during PLD displays (Hirai et al., 2011). Thus, if inversion disrupts otherwise capable PLD discrimination, it would need to be determined if the disruption resulted from an interaction of local and global properties. Last, we tested a randomized frame condition, in which all of the same frames were presented as in the normally articulated PLD stimulus but in a random sequence. This condition disrupts motion-based cuing while retaining the same static frames during presentation (Asen & Cook, 2012; Cook & Roberts, 2007; Koban & Cook, 2009). Any discrimination of this condition suggests that the coherent pattern of motion and the form features are irrelevant and that some static cue, such as the presence of the figure in a certain region of the display, can be sufficient for discrimination (Cook & Roberts, 2007). Experiment 1 consisted of two different tests of these PLD stimuli with the pigeons. Any greater degree of transfer from the ongoing fully figured locomotion discrimination to the normally articulated PLD stimuli relative to the different controls would be consistent with the hypothesis that the pigeons see the biological motion in these displays like humans do. 
Methods
Animals
Four male pigeons (Columba livia) were tested: #G1, #G2, #S3, and #Y4. They were maintained at 80%–85% of their free-feeding weights with free access to grit and water. These pigeons were already trained to discriminate walking and running actions and did not require any additional training. All procedures were approved by the Tufts University Institutional Animal Care and Use Committee, which adheres to ARVO guidelines. 
Apparatus
Testing was conducted in a computer-controlled chamber. Stimuli were presented on an LCD monitor (NEC Accusync 51VM, 1024 × 768, 60-Hz refresh rate) recessed 8 cm behind a 33 × 22 cm infrared touchscreen (EZscreen EZ-150-Wave-USB). A 28-V ceiling light was illuminated at all times except during time-outs. A central food hopper (Coulbourn Instruments) under the touchscreen delivered mixed grain. 
Procedure
Go/no-go discrimination testing:
Each trial was initiated by a peck to a centrally presented, 2.5-cm, white ready signal. This signal was replaced by a video of a digital model animal started from a randomly selected frame and repeatedly looped from there for 20 s. Two pigeons were reinforced for pecking at “running” models and two for pecking at “walking” models (described below). Pecks during these correct S+ actions were reinforced with 2.9-s access to mixed grain on a variable interval schedule (VI-10) so that a single peck would result in reinforcement with uniform probability from 0 to 20 s after the peck occurred. An additional 2.9-s reward was provided at the end of S+ presentations. Pecks to the incorrect S− action resulted in no reward and a variable dark time-out at the end of the presentation (0.5 s per peck; for #G1, 1 s per peck was used). During baseline trials, a small percentage of S+ trials were randomly selected to be probe trials during which no reinforcement was delivered. These trials allowed for the uncontaminated measurement of the positive peck rate without the interruption or signaling of the food presentations. All baseline S+-dependent measures were calculated from these probe trials. 
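As a rough sketch of these contingencies (our own reading, not code from the study), the following encodes one interpretation of the S+ reinforcement scheduling and the S− time-out rule described above; the exact timing of the VI-10 reward draw is an assumption.

```python
import random

def s_plus_trial(peck_times, trial_len=20.0):
    """One reading of the S+ contingency: a peck schedules 2.9-s grain
    access at a delay drawn uniformly within the 20-s presentation
    (the VI-10 schedule as described), and every S+ presentation ends
    with an additional 2.9-s reward."""
    reward_times = []
    if peck_times:
        reward_times.append(min(peck_times[0] + random.uniform(0.0, trial_len),
                                trial_len))
    reward_times.append(trial_len)          # end-of-presentation reward
    return reward_times

def s_minus_timeout(n_pecks, seconds_per_peck=0.5):
    """S- contingency: no reward; a dark time-out at the end of the
    presentation of 0.5 s per peck (1 s per peck for bird #G1)."""
    return n_pecks * seconds_per_peck
```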
Baseline:
The digital stimuli were 11.5 × 11.5 cm compressed AVI (using Microsoft Video1 compression) videos of three-dimensionally rendered animal models that were running or walking in a continuous loop. The stimuli were created and rendered with 3-D figural animation software (Poser 7 and 8, Smithmicro.com) using third-party models of the animals and their actions (Daz 3D, www.daz3d.com, and Eclipse Studios, www.es3d.com/index2.html). Six animal models available from prior training were used in the baseline: buck, camel, cat, dog, elephant, and human (see Supplementary Movies 1–6 for examples with the dog and buck). The dog and buck action models were also rendered with two different skins, bringing the total number of training “animals” to eight. 
Using different biomechanical motion models characteristic of the species depicted, each animal model moved in a fixed central position (i.e., walking or running “in place”). The number of frames and their presentation rate (frames per second) varied according to the digital model and action. Across the models used in the baseline set, the “running” stimuli appropriately cycled faster (M = 1.7 behavioral cycles per second, cps) than the “walking” stimuli (M = .82 cps). 
All model animals were rendered from a combination of six camera directions (body focus: side = 0°, front = −45°, rear = +45° and direction: left-facing and right-facing), two camera elevations (low ∼5.5° and high ∼26.3° relative to the surface), and two camera distances (close, far). Visual angle was calculated using a viewing distance of 8.5 cm to accommodate the 8 cm that the screen was recessed and 0.5 cm for the pigeon's viewing distance of the stimuli at its closest point. This yields a horizontal visual angle of approximately 26° to 40°, depending on the model, in the close perspective and 8° to 12° in the far perspective. 
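The reported visual angles follow from simple trigonometry at the 8.5-cm effective viewing distance; the following minimal check illustrates the calculation (the example model widths are our own illustrative values, not measurements from the stimuli).

```python
import math

def visual_angle_deg(width_cm, distance_cm=8.5):
    """Full horizontal visual angle of a stimulus of the given width at
    the effective viewing distance (8 cm recession + 0.5 cm to the eye)."""
    return 2.0 * math.degrees(math.atan((width_cm / 2.0) / distance_cm))

# The whole 11.5-cm display subtends ~68 deg; a model spanning roughly
# 4 to 6 cm of the display in the close views gives the reported ~26-40 deg.
print(round(visual_angle_deg(11.5), 1),
      round(visual_angle_deg(4.0), 1),
      round(visual_angle_deg(6.2), 1))
```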
Each digital animal was illuminated from a fixed overhead light source and rendered in one of two contexts. One context was the receding green-textured flat “ground” surface below a pale blue “sky” used in Asen and Cook (2012), and all possible combinations of animals and perspectives were shown in this context. The second context contained no ground, so the model was surrounded by just a pale blue “sky” bounding box (RGB value = [191, 252, 252]). This no-ground context had only been added to training in order to familiarize the pigeons with the context used to test the transfer stimuli, so only a restricted set of stimulus configurations were displayed in this context: the dog and buck models in the low, close, side perspective. 
A total of 192 different videos of each of the two actions on the grass context were thus used in baseline (eight digital animals × 24 perspectives). Four videos of the buck and dog in the no-ground context were included to acclimate the pigeons to the no-ground context used in the tests, all from the low, close, side perspective. Prior to the experiment, the four pigeons were very good at discriminating these actions across all of these models, perspectives, and contexts. 
Baseline sessions consisted of 84 trials (42 walking/42 running). The animal, camera distance, and elevation varied randomly for 72 trials, but equivalent counts of camera perspective (canonical side, three quarters front, three quarters rear, but not facing direction) were presented. Of the S+ trials in this set, 15% were designated as nonreinforced S+ probes as described above. Further, 12 additional no-ground trials (six walking/six running) were randomly mixed into this set, depicting the dog or buck from the right-facing low and close perspectives. 
PLD test 1:
The PLD stimuli were created by placing 1.2-mm (.8°) flat black dots at the key joints of the models in the digital software. These dots moved in a coordinated fashion in the same positions as the models' joints. PLD stimuli were created for both the buck (27 dots: five per limb, four for the torso, two for the neck, and one for the head; see Supplementary Movie 7) and dog (28 dots: four per limb, five for the torso, four for the tail, two for the neck, and one for the head) models. To control for interstimulus variability, only these two stimuli were manipulated throughout these experiments. To maximize the visibility of the dots, the ground context was omitted, and the stimuli were rendered from the low, close, side camera position. The resulting articulated PLD “figures” subtended a visual angle of 25° to 40°, matching the fully detailed animations for overall spatial extent. One complete cycle of the PLD displays contained the same number of frames as the fully figured displays (buck running: 19 frames, walking: 33 frames; dog running: 16 frames, walking: 50 frames), and the stimuli were presented at the rate of 30 ms per frame (33.3 frames per second), again matching baseline values. 
For comparison, three control conditions were tested. The inverted control consisted of a 180° rotation of the articulated PLD stimuli. The scrambled control had the dots randomly positioned in the videos, eliminating their configural motion but otherwise having each individual dot moving along the same local pathway and temporal synchrony as in the articulated condition. As this condition was generated off-line, only one scrambled version of each model was tested. Last, a randomized frame condition was tested. Here the frames of the articulated PLD stimuli were randomly scrambled during their presentation, breaking up its coherent motion. The randomized order of the frames was changed for each presentation but fixed for the duration of the presentation. 
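For concreteness, a minimal sketch of how the randomized-frame and scrambled controls could be constructed is shown below, assuming a simple list-of-coordinates representation of the dot trajectories (our convention, not the authors' implementation).

```python
import random

def randomized_frame_order(n_frames, n_loops):
    """Randomized-frame control: draw one shuffled frame order per
    presentation and hold it fixed while the clip loops for the trial."""
    order = list(range(n_frames))
    random.shuffle(order)
    return order * n_loops

def scramble_dots(dot_paths, width, height):
    """Scrambled control: each dot keeps its own local trajectory and
    timing, but the whole trajectory is translated to a random display
    position, removing the configural arrangement. dot_paths is a list
    of per-dot [(x, y), ...] frame coordinates (an assumed format)."""
    scrambled = []
    for path in dot_paths:
        x0, y0 = path[0]
        dx = random.uniform(0.0, width) - x0
        dy = random.uniform(0.0, height) - y0
        scrambled.append([(x + dx, y + dy) for x, y in path])
    return scrambled
```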
The four PLD conditions were tested as both walk and run actions in each session. A testing session consisted of the same 72 ground-context trials as in baseline sessions; two no-ground context trials with the tested animal from the low, close, side perspective; and eight transfer trials testing the above experimental conditions, yielding 82 total trials. The eight transfer trials and the two no-ground trials were all tested as nonreinforced probes. A total of 10 sessions, testing the buck and dog PLD models five times each, were conducted. 
PLD test 2:
Following the unsuccessful transfer of the discrimination in the previous test (see Results), we created and tested new PLD stimuli that might better support the perceptual grouping of the dots. For this goal, two properties were manipulated. First, dots that were two (2.4 mm, 1.6°) and three times (3.6 mm, 2.4°; see Supplementary Movie 8) larger than in the prior test were added to reduce the distances between display elements. Second, the overall size and visual angle were reduced by shifting the perspective to be roughly two times farther from the model (see Supplementary Movies 9 and 10; overall visual angle 13° to 25°, max dot size 1.4°). To reduce the number of conditions, only the buck model was used in this test, and only the inverted and randomized frame control conditions were included. Each test session presented the six PLD transfer stimuli (walking and running; articulated, inverted, and randomized frames) for a fixed distance (close or far) and dot size (1.2 mm, 2.4 mm, or 3.6 mm). The six trials testing these transfer conditions were conducted as nonreinforced probes, and they were randomly mixed into a session with 72 ground-context baseline trials and two no-ground buck trials, yielding 80 total trials per test session. Six sessions were required to test all combinations once, comprising a single experimental block. Three blocks of testing were conducted, totaling 18 test sessions. The order of tests within a block was randomized. 
Metrics:
The infrared touchscreen in the chamber used for these experiments was unusually sensitive, picking up small differences among pecks but also chest and feather entries. Although we will continue to refer to measuring peck responses, the peck counts more honestly reflect the degree of total activity directed by the pigeons toward the displays. The primary dependent variable analyzed was the discrimination ratio (DR), the proportion of total pecks that occurred during an S+ stimulus (i.e., S+ pecks / [S− pecks + S+ pecks]). This adjusts for each bird's individual rate of responding and scales nicely from 0 to 1, such that .5 is chance performance and 1 is perfect discrimination. To best illustrate the birds' discrimination as it relates to the DR, the reported peck rates have been adjusted for each bird's base rate of pecking by normalizing each bird's data to the total average pecking to all positive baseline trials. The baseline peck rates reported and analyzed, however, concern only the suitable comparison stimuli (i.e., the same perspective as test stimuli). Consequently, their values are typically near 1 as a result of the normalization, but they are not fixed there. 
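Both measures reduce to a few lines of arithmetic; a minimal sketch follows (the function and variable names are ours).

```python
import numpy as np

def discrimination_ratio(s_plus_pecks, s_minus_pecks):
    """DR = S+ pecks / (S+ pecks + S- pecks); 0.5 is chance, 1.0 is
    perfect discrimination."""
    total = s_plus_pecks + s_minus_pecks
    return s_plus_pecks / total if total else 0.5   # 0.5 if no pecks (a convention we assume)

def normalized_peck_rate(condition_pecks, baseline_s_plus_pecks):
    """Scale a bird's peck counts by its mean pecking across all
    positive baseline trials, making rates comparable across birds."""
    return np.asarray(condition_pecks, dtype=float) / np.mean(baseline_s_plus_pecks)
```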
Results
Baseline:
All four pigeons were very good at discriminating the different actions of all eight animal models on the baseline trials of the first experiment, exhibiting greater pecking to the S+ action (mean pecks = 81.0, SE = 16.1) than the S− action (mean pecks = 15.4, SE = 2.3). Mean discrimination ratio (see Metrics) when computed for each bird individually and then averaged together was 0.83 (SE = 0.05). A one-sample t test confirms that this is significantly above the chance level of 0.5, t(3) = 6.9, p = 0.006, d = 3.4 (all p values and t tests reflect two-tail comparisons; alpha = 0.05 for this and all comparisons). This action discrimination was significantly affected by the model animal displaying the action as analyzed using a one-way repeated-measures ANOVA on DR, F(5, 15) = 10.8, p < 0.001, η2p = 0.783, which paired comparisons indicate is caused by a lower DR with the elephant model. One sample t tests comparing DR to chance for each model separately confirmed, however, that all models supported significantly above-chance levels of discrimination of the two actions, ts(3) > 3.7, ps < 0.032, ds > 1.8. This action discrimination was also invariant across azimuth, elevation, and distance. Because randomized and combinatorial complexity in the baseline displays resulted in untested combinations of factors, separate one-way repeated-measures ANOVAs (azimuth without facing direction, elevation, distance) were used to separately analyze these different aspects of perspective. No effects of these factors were found. These results match those previously reported for these same birds (Asen & Cook, 2012) in which camera perspective also had no influence on discrimination. 
PLD test 1:
The pigeons showed no capacity to discriminate among the actions when depicted as PLD stimuli despite their continued and excellent discrimination of complete models. Shown in Figure 2 are the normalized peck rates for S+ and S− actions for the matched baseline and different PLD test conditions. Baseline DR was computed from all comparable trials, testing the buck and dog models from the perspective configuration used for the PLD stimuli, and it was significantly above chance, t(3) = 10.2, p = 0.002, d = 5.1. Importantly, there was no significant discrimination of the articulated PLD stimuli where the dots mirrored the motions of complete figures, t(3) = 0.02. Given the lack of discrimination in the articulated condition, it is perhaps not surprising that none of the PLD controls supported significant discrimination either, ts(3) < 1.5. 
Figure 2
Mean normalized peck rates for the four pigeons in Experiment 1 tested with different types of PLDs. Error bars indicate the standard error of each condition.
We also examined the results for each pigeon separately to determine if any individual bird may have perceived the actions within the PLD stimuli. Similar analyses were conducted using session as the repeated factor for each bird, but the results were identical to the group analysis. Each bird individually discriminated the complete models, ts(9) > 8.5, ps < 0.001, but not the articulated PLD condition or different controls, ts(9) < 2.2. 
PLD test 2:
This second test also failed to reveal any evidence of discrimination mediated by the PLD stimuli, despite modifications to better support perceptual grouping and configural perception in the displays. Table 1 lists the mean normalized pecking for the baseline and PLD test conditions. The baseline stimuli continued to support excellent discrimination with both close and far versions of the complete models, ts(3) > 9.0, ps < 0.03. A repeated-measures ANOVA (distance × dot size) of pecks to the articulated PLD stimuli revealed no significant effects. Similar analyses of performance with the PLD controls also found no evidence that the various conditions affected discrimination. Further, one-sample t tests suggest no discrimination was found for any control or display condition, ts(3) < 2.1, except for the random frame condition, t(3) = −3.3, p = 0.044. Post-hoc corrections for these multiple comparisons using the Holm-Bonferroni method indicated that this was likely not a significant result. Again, analyses of the individual birds with block as the repeated factor also failed to find evidence of discrimination in these conditions. 
Table 1
Mean normalized pecks in PLD condition during second test of Experiment 1.
Condition     Dot size    Close S+   Close S−   Far S+   Far S−
Baseline      —           1.08       0.13       0.90     0.18
Articulated   1.2 mm      0.09       0.09       0.07     0.10
              2.4 mm      0.04       0.04       0.09     0.06
              3.6 mm      0.13       0.10       0.12     0.05
Inverted      1.2 mm      0.09       0.10       0.10     0.09
              2.4 mm      0.08       0.05       0.09     0.08
              3.6 mm      0.14       0.08       0.09     0.10
Randomized    1.2 mm      0.06       0.12       0.09     0.09
              2.4 mm      0.06       0.10       0.11     0.06
              3.6 mm      0.10       0.13       0.08     0.10
Discussion
Pigeons trained to discriminate the walking and running actions of a wide variety of complete, fully figured, articulated models showed no capacity to transfer this discrimination to PLD stimuli corresponding to these actions and models. This was found in two different tests during Experiment 1. Although easily discriminated by humans, this type of “biological motion” display failed to support the discrimination of these well-trained actions in pigeons. This was true across considerable variations in the size of the defining dots and the visual angle of the display that attempted to promote the perceptual grouping of the separate points. 
There are several possible reasons for this difficulty. First, the overall appearance of the PLD stimuli and the complete models are different. One of the reasons for testing PLDs is that they make it possible to examine the independent contribution of articulated and coordinated motion by eliminating form information. The resulting large alteration in the form information (model to dots), however, may have caused a degree of neophobic nonresponding. Supporting this hypothesis, three of four pigeons in this experiment pecked less to the PLD stimuli than to full-featured displays, and furthermore, Dittrich and colleagues' (1998) pigeons similarly showed reduced levels of pecking during their PLD transfer. Such reduced pecking suggests that some degree of generalization decrement related to unfamiliarity likely contributes to the poor performance of the pigeons with biological motion stimuli. 
Another potentially important reason for the pigeons' inability to discriminate PLDs is that these displays require the perceptual grouping of widely separated and disconnected points into a unified configuration. In this case, the lack of connected, form-based cues may prevent the activation of the motion cues required for the discrimination. Pigeons have frequently exhibited problems grouping separated elements into larger wholes (Lea, Goto, Osthaus, & Ryan, 2006). With hierarchically arranged stimuli, they frequently show a bias to initially process local elements over global ones (Cavoto & Cook, 2001). In their study of PLD direction perception, Troje and Aust (2013) found the majority of pigeons exhibited a local bias. Correspondingly, pigeons also have had trouble detecting the larger global structure of Glass dot patterns, completing separated amodal displays, and the larger symmetry of line-based patterns (Huber et al., 1999; Kelly, Bischof, Wong-Wylie, & Spetch, 2001; Sekuler, Lee, & Shettleworth, 1996). 
Although pigeons can detect and group global patterns under the right conditions (Cook, 2001; Cook, Goto, & Brooks, 2005), this appears to emerge with experience or secondarily to initial attention to local elements. A similar local bias has also been suggested about the visual cognition of human individuals diagnosed as being on the autism spectrum. This may possibly be the reason for their increased difficulty in detecting biological motion, actions, and social information in PLDs (Kaiser & Shiffrar, 2009). While some animal studies of biological motion have found more intriguing results than these with PLD stimuli, our results are also part of a general trend suggesting that pigeons, and perhaps other animals, just do not find the coordinated actions in such dotted stimuli as easy to perceive as humans do (Dittrich et al., 1998; Troje & Aust, 2013). 
Experiment 2
Experiment 2 examined if the pigeons required more complete or connected form information to detect the motion patterns that they use to discriminate these actions. To explore how varying types of form information contributed to the discrimination, we tested contour-only and silhouette versions of the animal models (examples included in Figures 3 and 4; see Supplementary Movies 11 and 12). These stimuli retained the global motions of the models using an exterior contour that was completed and connected while concurrently reducing interior texture and form information. 
Figure 3
Mean normalized peck rates for the four pigeons in Experiment 2 tested with different contour displays. Error bars indicate the standard error of each condition.
Figure 4
Mean normalized peck rates for the three pigeons in Experiment 2 tested with different silhouette displays. Error bars indicate the standard error of each condition.
Several previous studies have examined the use of this information for object perception in pigeons. They have suggested that pigeons are able to interpret silhouettes of objects at least partially correctly (e.g., Cook, Wright, & Drachman, 2012; Peissig, Young, Wasserman, & Biederman, 2005; Young, Peissig, Wasserman, & Biederman, 2001). Besides being able to discriminate objects in part based on their silhouette, this discrimination may incrementally improve if dynamically presented in a manner consistent with the rigid structure of an underlying object (Cook & Katz, 1999). Tests with contour-only static stimuli have generally suggested that this type of stimulus is more difficult to discriminate than silhouettes (Cabe & Healey, 1979; Cook et al., 2012; Peissig et al., 2005). 
We conducted two separate tests, first testing contour-only models and then silhouette models. The contour test stimuli consisted of black outlines of the models running or walking. The silhouette test stimuli consisted of the models with their interior solidly filled in with the average color of the fully rendered model. Hence, these two types of stimuli removed the local, internal detail of models while retaining their global, connected form and associated motion information. Three additional conditions were tested. To evaluate the role of temporal and spatial features, controls with rotated and inverted versions of the stimuli were included. The role of coherent motion features in the discrimination was again evaluated using tests in which the frames of the video stimuli were randomized. The main question was whether such models, having both bounded and connected form information, would be sufficient to support the established action discrimination. 
Methods
Animals and apparatus
The same pigeons and apparatus were used as in Experiment 1. After the contour test, #G1 was no longer tested for reasons unrelated to the experiment. 
Procedure
Contour test:
This first set of test stimuli consisted of the contoured outline of the buck animal model. The baseline buck model without the ground was modified using MATLAB. Using the close and far, low, side videos on the blue background, a two-pixel (.4°) black contour was generated at the border of the figure on a frame-by-frame basis with a simple edge-detection algorithm. Briefly, if any of the eight pixels surrounding a nonbackground pixel was the background color, that pixel was considered a border pixel; if none were the background color, it was considered an interior pixel. All border pixels were colored black on a frame-by-frame basis, and the interior pixels that touched those border pixels were also colored black. These frames were then combined into an AVI video (Cinepak codec; see Supplementary Movie 11). 
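A minimal reimplementation sketch of this edge-detection rule is given below in Python/NumPy rather than the MATLAB actually used; the treatment of pixels at the image border and the recoloring of the figure interior to the background color are our assumptions.

```python
import numpy as np

def contour_frame(frame, bg_color):
    """Apply the described rule to one RGB frame: a nonbackground pixel
    whose 8-neighborhood contains any background pixel is a border
    pixel; border pixels and the interior pixels touching them are
    painted black, and everything else is set to the background color."""
    bg_color = np.asarray(bg_color, dtype=frame.dtype)
    is_bg = np.all(frame == bg_color, axis=-1)
    is_fg = ~is_bg
    h, w = is_bg.shape

    def any_neighbor(mask, pad_value):
        """True wherever any of a pixel's 8 neighbors is True in mask."""
        padded = np.pad(mask, 1, constant_values=pad_value)
        out = np.zeros_like(mask)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        return out

    border = is_fg & any_neighbor(is_bg, pad_value=True)
    near_border = is_fg & any_neighbor(border, pad_value=False)

    out = np.empty_like(frame)
    out[:] = bg_color                       # interior and background in sky blue (assumption)
    out[border | near_border] = 0           # ~2-pixel black contour
    return out
```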
Two types of control stimuli were also created for this set. The control stimulus in the rotated condition contained the contour figure rotated 90° so that the buck's head pointed upward. This stimulus could support a locomotion discrimination that specifically attended to the periodic nature of the display. The other control rotated the contoured figure 180°, which retained the original stimuli's nonconfigural motion along the vertical dimension in addition to features of timing. Finally, one control condition of properly oriented but randomized frames was also tested to evaluate the use of coherent motion. 
The contour test stimuli and the three control stimuli (four conditions × two actions = eight contour stimuli) for a given distance were tested as probe trials randomly mixed into sessions of 72 ground-context baseline trials and two buck no-ground trials, totaling 82 trials in the test sessions. Two sessions were required to test all conditions once, forming a two-session block. Four blocks of testing were conducted with one baseline session between blocks two and three. 
Silhouette test:
The silhouette set of stimuli was generated in the same way as the contour stimuli except that the border and all interior pixels were colored to create a uniform silhouette. The color of the silhouette was the mean of the red, green, and blue channels of the original model as averaged across all frames in the video (see Supplementary Movie 12). The same four conditions as for the contour test were used: upright, rotated, inverted, and randomized frame presentation. With a fixed distance for each session, these four conditions (analogous to above, total eight silhouette stimuli) were tested as probe trials randomly mixed into sessions of 72 ground-context baseline trials and two buck no-ground trials. Two sessions were required to test all conditions once (a two-session block), and two blocks of testing were conducted, separated by one baseline session. 
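Analogously, a minimal sketch of the silhouette construction as we read it: the fill color is the per-channel mean of the model's (nonbackground) pixels across all frames, and every model pixel is then repainted with that color. The data format and helper names are assumptions.

```python
import numpy as np

def silhouette_color(frames, bg_color):
    """Per-channel mean of all model (nonbackground) pixels, pooled
    across every frame of the video."""
    bg_color = np.asarray(bg_color)
    pixels = [frame[~np.all(frame == bg_color, axis=-1)] for frame in frames]
    return np.concatenate(pixels).mean(axis=0)

def silhouette_frame(frame, bg_color, fill_color):
    """Repaint every model pixel with the uniform fill color to produce
    the silhouette version of one frame."""
    out = frame.copy()
    model = ~np.all(frame == np.asarray(bg_color), axis=-1)
    out[model] = np.asarray(fill_color, dtype=frame.dtype)
    return out
```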
Results
Contour test:
Shown in Figure 3 are the test results for the contour conditions as a function of mean normalized peck rate. The four pigeons continued to discriminate the actions with the baseline buck stimuli used for the contour test, t(3) = 8.3, p = 0.004, d = 4.1. As shown in the peck rates to S+ and S− stimuli, the contour-only displays seemed to support above-chance discrimination although it was clearly reduced relative to the baseline conditions, and the average DR was nonsignificant, t(3) = 2.7, p = 0.08. This reduction and nonsignificance were partly due to #Y4 failing to discriminate among these contour stimuli because this pigeon did not peck much at these stimuli. The DR of the remaining three pigeons showed that they discriminated the actions of the contour stimuli, t(2) = 7.5, p = 0.017, d = 4.4. We found no differences in performance as a function of overall visual angle. 
These three contour-discriminating pigeons failed to discriminate the actions of the model in any of the three control conditions (as did pigeon #Y4 with its low peck rates; see Figure 3). Presentations of the S+ and S− actions produced peck rates reflecting nondiscrimination when rotated 90°, inverted 180°, or the frames were randomly scrambled. One-sample t tests of DR compared to chance indicated no discrimination was present for the control displays (ts(2) < 1). 
Silhouette test:
Shown in Figure 4 are the test results for the silhouette stimuli as a function of mean normalized peck rate for the three pigeons that were tested. Baseline DR with the buck model continued to be excellent, t(2) = 13.1, p = 0.006, d = 7.1. The model's silhouette also supported discrimination of the actions, t(2) = 10.5, p = 0.009, d = 6.1. As with the contour test, the pigeons were unable to discriminate the actions during any of the three control conditions. Consistent with this, one-sample t tests confirmed that discrimination ratios for the rotated, inverted, and randomized control conditions were not significantly above chance levels, both when considered across birds, ts(2) < 1.7, or when considered for each pigeon individually, ts(3) < 2.1, using sessions as the repeated observation for each bird. 
Discussion
This experiment revealed that walking and running models depicted in contour and silhouette retained sufficient information for the majority of the pigeons to discriminate the depicted actions. The silhouettes supported discrimination at a level nearly comparable to that with the fully featured models from the same perspectives, and the contour supported a significant, but slightly reduced, level of performance. For both types, stimulus rotation eliminated the discrimination, suggesting that local features, such as the localized speed of legs or head movement, were not a critical part of the discrimination as these cues were present in both types of rotations. The disruption with inverted stimuli suggests that simple vertical motion of the overall figure and timing cues from the period of the stimuli are also insufficient for the discrimination. Finally, frame randomization continued to disrupt the discrimination, indicating that motion coherence is needed for these feature-reduced stimuli. 
These results in conjunction with the PLD results from the previous experiment suggest several important facts about the pigeons' action discrimination. The most important is that the acting models likely require connected edges or boundaries. This would readily explain the failure of PLD stimuli in Experiment 1. It further appears that internally filled boundaries facilitate discrimination as performance with silhouettes was better than with just contours. Asen and Cook (2012) found that these pigeons were also able to transfer from one action model to another across visually discriminable novel types of “skin.” While, traditionally, silhouettes are black, in the present case, we used the average color of the baseline model, which may have contributed to their continued recognition. The exact contribution of simply filling in the figure versus the nature of coloring remains to be resolved. The slight reduction in performance relative to the complete model may suggest that internal color features and texture within the models are encoded and represented by the pigeons (Cook et al., 2012). Perhaps the pigeons use these internal features to help distinguish among the models' different limbs or parts, such as the head, torso, and legs. The relative contribution of these different parts is the focus of Experiment 3. 
Experiment 3
The goal of Experiment 3 was to determine the relative contributions of the models' different limbs or body parts to the discrimination. Understanding which parts of the display are critical to performance helps to evaluate the suggestion from the previous experiment that the pigeons' action recognition operates on more global or large-scale characteristics. If the pigeons could discriminate these action stimuli without leg motion, then the inversion effect was likely the result of global feature disruption and not the result of expecting critical information in the lower region of the stimuli (cf. Hirai et al., 2011). Thus, in this experiment, different portions of the digital animals were made unavailable by manipulating the visibility of selected portions of the model across different conditions. We conducted two different tests in which we either used occlusion or deletion to examine the pigeons' reliance on the specific components of the animal models. 
In the occlusion test, digital “rocks” were introduced and added to the scenes to obscure the visibility of different amounts of the models' legs (see examples in Figure 5). Although changes in relative speed of movement within a video (Asen & Cook, 2012) and the mere presence of motion in the area of the legs (Experiment 1) have proven to be insufficient to mediate the discrimination, leg-related speed, positioning, and spatial extent are the most salient carriers of walking and running information (see Figure 1). As a result, we used variably sized rocks to hide the front, the back, or both pairs of legs to further examine their possible contribution to the discrimination. 
Figure 5
Mean normalized peck rates for the three pigeons in Experiment 3 tested in different types of occlusion conditions. Error bars indicate the standard error of each condition.
For a controlled comparison, we tested the same conditions with the rock placed behind the model's legs. This was done for two reasons. First, it is an appropriate and natural experimental control for the occlusion test. Second, however, is that prior studies with pigeons have found this type of “behind” condition appears to be disruptive to ongoing discrimination performance in several settings (e.g., see DiPietro, Wasserman, & Young, 2002; Koban & Cook, 2009; Lazareva, Wasserman, & Biederman, 2007). This effect seems to be driven by a difficulty of decomposing novel edge relationships that are still present when the “occluder” is in a nonoccluding position (Lazareva et al., 2007). We wanted to test the reliability of this “behind masking” effect in this different context and to help determine whether the movement of the models' limbs might help to overcome these apparent processing difficulties by supporting better segregation of the model from the background elements. 
A similar strategy was employed in the deletion test. Here, we digitally removed specific parts of the models without altering the movement of the remaining parts by using the figural software to not render these components (see examples in Figure 6). This digital amputation yields a similar effect as the occlusion test but with greater precision when removing targets. For example, occluding both legs clearly disrupts the visibility of the leg motions, but it simultaneously deprives the pigeon of partial information about the torso. The deletion method allows for the removal of the legs without influencing the visibility of the torso. Finally, this also allowed us to delete additional portions of the models (head, torso) that would not have been appropriate without putting an occluder in unnatural and unusual positions (although such digital deletions could also be considered unnatural). 
Figure 6
Mean normalized peck rates for the three pigeons in Experiment 3 tested in different types of feature-deletion conditions, with various portions of the stimulus deleted. Error bars indicate the standard error of each condition.
Thus, testing and comparing both occlusion and deletion versions of these stimuli would best determine the relative contributions of the different body parts to the discrimination. The resulting patterns of the outcomes can provide insight into how different portions of the models support the discrimination and their relative importance and how global/configural and local/featural information potentially work together in mediating the discrimination of the models' actions by the pigeons. 
Methods
Animals and apparatus
The same three pigeons and apparatus were used as at the end of Experiment 2. 
Procedure
Occlusion test:
For the occlusion test, a digital rock was placed in the scene with either the buck or dog model. This rock was placed in one of seven possible locations (see Figure 5 samples and Supplementary Movies 13–19). Horizontally, the rock was positioned at the model's rear legs, fore legs, or extending across both pairs of legs. This horizontal rock position was factorially combined with the rock's depth, such that the rock was either in front of the model and occluding the digital model's limbs or behind the digital model and being occluded. Finally, to acclimate the pigeons to the rock-containing displays and to evaluate the absolute effect of the rock's presence, a beside condition was also used in which the rock was placed ahead of the model so that at no point during locomotion did the model reach the rock. For the running buck model, 24 frames were used to comprise one cycle instead of the 19 frames in baseline to create a slightly smoother motion, but this did not seem to affect performance. For this test, perspective was restricted to the low, close, side perspective. 
In the eight sessions prior to the tests, the pigeons were given training with the beside condition. These were added as eight trials to the baseline 72 ground-context trials: two S−, one reinforced S+, and one probe S+ for both the buck and dog models. 
For test sessions, the six rock-placement stimuli (behind vs. occluding × rear, fore, and both) for a given model were conducted as nonreinforced probes. Twelve trials testing these conditions (six rock placements × two actions), the four trials of the beside condition (two S−, one S+, one nonreinforced S+), and the 72-trial baseline trials comprised an 88-trial session. Eight test sessions were conducted with the dog model, and four were conducted with the buck model. After every two experimental sessions, one baseline session was given to reduce memorization of the experimental test stimuli. Probe data in the rock-beside condition was not available for the first four sessions with the dog model, so the peck rates from the interleaved baseline sessions were used for those data points. 
Deletion test:
In this test, five different part-deleted stimuli were tested to evaluate the pigeons' use of the rear legs, the fore legs, all legs, the torso midsection, and the head (see Figure 6 and Supplementary Movies 20–24). These stimuli were generated by marking the indicated components of the buck and dog models invisible to the rendering algorithm so that either the ground or the background appeared where previously the body part was present. These stimuli were then rendered using the low, close, side perspective. 
Two different test sessions were composed for this experiment. For each model, one session tested the three leg deletions (six test trials), and the second tested the head and torso deletions (four test trials). All test trials were conducted as nonreinforced probes and mixed in 72-trial baseline trials (78-trial or 76-trial sessions, respectively). Four two-session blocks were tested for each model (16 total sessions) with single baseline sessions separating each testing block. 
Results
Occlusion test:
Shown in Figure 5 are the results for the different occlusion test conditions as a function of mean normalized pecks to the display. All three pigeons showed continued strong baseline discrimination of the tested models and perspective as measured by DR, t(2) = 16.9, p = 0.003, d = 9.7. The addition of a nonoccluding rock beside the model resulted in a small decline in DR (from M = .87, SE = .04 to M = .80, SE = .12), but discrimination continued significantly above chance, t(2) = 4.4, p < 0.049, d = 2.5. 
Averaging together the three occluding and three behind rock conditions, each of the three pigeons showed significantly above-chance discrimination of the transfer stimuli (#G2 DR = .89, #S3 DR = .65, #Y4 DR = .68; ts(11) > 3.9, ps < 0.002). Whether the rock was placed in front of or behind the model made no difference to the pigeons' reactions, as discrimination was very similar across this manipulation. This equivalence was supported by a repeated-measures ANOVA (occluding vs. behind × horizontal rock position) on DR, which revealed no main effect of the occluding versus behind factor, F(1, 2) = 1.4, and no interaction with horizontal rock position, F(2, 4) < 1. Horizontal rock position did show a significant main effect, F(2, 4) = 7.7, p = 0.043, ηp² = .79, indicating that the location of the rock relative to the model affected discrimination. Individual differences among the pigeons, however, are essential for a complete understanding of this effect. 
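The structure of that analysis can be sketched as follows. The DR values are random placeholders; only the factor structure (occluding vs. behind × three horizontal positions, with bird as the repeated-measures subject) mirrors the reported design, and the code is illustrative rather than the analysis actually run.

# Illustrative 2 x 3 repeated-measures ANOVA on DR with placeholder values.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = [{"bird": b, "depth": dep, "position": pos, "DR": rng.uniform(0.6, 0.9)}
        for b in ["G2", "S3", "Y4"]
        for dep in ["occluding", "behind"]
        for pos in ["rear", "fore", "both"]]

anova = AnovaRM(pd.DataFrame(rows), depvar="DR", subject="bird",
                within=["depth", "position"]).fit()
print(anova)  # F tests for depth, position, and their interaction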
To better evaluate how the position of the rock relative to the body affected the discriminability of the actions, the left half of Table 2 reports DR and its analyses for each horizontal position, averaged across the occluding and behind conditions because performance across that manipulation seemed equivalent. This analysis revealed that both #G2 and #S3 could discriminate the actions regardless of the rock's horizontal position, although the all legs condition was numerically worse. Pigeon #Y4 had difficulty when the rear legs were unavailable, as discrimination was lower in both the rear legs and all legs conditions (in both cases, the statistical probabilities were marginal). Correspondingly, this pigeon could easily discriminate the actions when the rear legs remained visible (i.e., the fore legs condition). 
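The Holm-Bonferroni correction applied to the per-condition tests in Table 2 can be illustrated with the sketch below, which runs the step-down procedure on one family of p values (#S3's deletion-test values from the table, with the tabled "<0.001" entered as 0.001 for illustration). The code is illustrative; the exact families of tests corrected per bird follow the original analysis.

# Illustrative Holm-Bonferroni step-down correction for one family of p values.
from statsmodels.stats.multitest import multipletests

conditions = ["rear", "fore", "all", "torso", "head"]
p_values = [0.038, 0.168, 0.141, 0.035, 0.001]

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for cond, p, r in zip(conditions, p_adjusted, reject):
    print(f"{cond:5s}  adjusted p = {p:.3f}  significant = {r}")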
Table 2
 
Analysis of individual bird performance in Experiment 3.
 
                 Rock placement                         Deletions
        Beside  Rear    Fore    All     Rear    Fore    All     Torso   Head
#G2
  DR    .91     .91     .93     .87     .89     .92     .89     .88     .89
  t(11) 32.6    42.0    37.3    15.8
  t(7)                                  19.3    16.3    15.8    7.2     7.0
  p     <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001  <0.001
  d     9.4     12.1    10.8    4.6     6.8     5.8     5.6     2.6     2.5
#S3
  DR    .67     .66     .71     .59     .67     .61     .63     .69     .79
  t(11) 3.9     4.5     7.3     2.7
  t(7)                                  2.6     1.5     1.7     2.6     6.9
  p     0.002   0.001   0.001   0.022   0.038   0.168   0.141   0.035   <0.001
  d     1.1     1.3     2.1     0.8     0.9     0.5     0.6     0.9     2.4
#Y4
  DR    .83     .67     .83     .67     .88     .96     .79     .89     .90
  t(11) 5.9     2.0     7.2     2.1
  t(7)                                  6.6     32.0    3.2     8.3     13.3
  p     <0.001  0.074   <0.001  0.058   <0.001  <0.001  0.025   <0.001  <0.001
  d     1.7     0.6     2.1     0.6     2.3     11.3    1.1     2.9     4.7
 
Notes: The left half of the table shows performance with a digital rock beside the acting model or covering its rear legs, fore legs, or both pairs of legs, averaged across the occluding/behind manipulation. The right half shows performance when the specified parts were deleted from the model. All refers to the all legs occluded or deleted conditions. All bolded p values are significant after using a Holm-Bonferroni multiple test correction for each bird.
Deletion test:
Figure 6 shows mean normalized peck rates for the different deletion test conditions. The results reveal that selective deletion of the different parts of the models minimally affected discrimination. Baseline DR with just the models tested in the deletion tests continued to be excellent, t(2) = 22.5, p = 0.004, d = 13.0. Discrimination with the part-deleted stimuli overall was well above chance for each individual bird (#G2 DR = .89, #S3 DR = .67, #Y4 DR = .88; ts(7) > 5.0, ps < 0.002, ds > 1.8). A one-way repeated-measures ANOVA (deletion condition) on DR revealed no differences among the deletion conditions, both when considered across the three pigeons, F(4, 8) = 1.0, and when considered individually for each bird, Fs(4, 28) < 1.4. This experiment, however, also benefits from close attention to individual bird performance. As Table 2 shows, although #G2 and #Y4 discriminated the actions across all deletion conditions, #S3 needed the legs and the torso to be present to discriminate accurately. 
Discussion
The results of these two analytic tests indicate that no specific localized part of the model was the critical determinant of the discrimination. Although removing both pairs of legs of a walking or running model was likely most disruptive for all birds in the occlusion test, there was still enough information that two of three pigeons could discriminate between the actions. Similarly, as the different parts of the model were selectively removed in the deletion test, the remainder of the model retained sufficient information to mediate the discrimination. Even when the legs were completely removed, most of the pigeons continued to recognize the model's actions. Thus, it appears that as long as the majority of the model was available, the pigeons were able to continue identifying the models' actions. They clearly were not exclusively relying on a specific localized stimulus region or a particular body part, such as the seemingly salient legs, to discriminate these actions. The implication of this pattern is that the pigeons were globally or configurally evaluating the models' bodily articulated movements over time. 
One curious outcome of the experiment was that placing the “occluding” rock behind the models' legs had approximately the same impact on performance as placing it in front. Occlusion of the model, in which features were clearly hidden, did not strongly disrupt the discrimination except perhaps when both pairs of legs were invisible, and placing the rock behind both pairs of legs had a similar impact. As mentioned in the Introduction, this is not the first time that this type of “behind-masking” effect has been found for pigeons (DiPietro et al., 2002; Koban & Cook, 2009; Lazareva et al., 2007). It is not clear why information that remains fully visible is nevertheless disrupted in such conditions for pigeons, although experience can help reduce this effect (Lazareva et al., 2007). Even the model's movements did not help to disambiguate the critical shape and pose information from the mask. For reasons that are not entirely clear at the moment, it appears that the introduction of novel background regions creates problems for pigeons in decomposing or segregating even familiar elements into separate parts. Better understanding the edge and segregation processes by which pigeons visually decompose overlapping objects is an important question for future research, as these processes may diverge from those in mammals. 
General discussion
These three experiments investigated, for the first time, the visual control of a locomotive action discrimination by pigeons. By testing different feature-altered digital models demonstrating each action, the experiments provide important new information about avian action recognition and motion processing. Experiment 1's two tests revealed that “biological motion” stimuli (i.e., biomechanically coordinated moving dots) could not support discriminative transfer following extensive training with complete models. Experiment 2 revealed that connected edge information and interior shading were likely critical elements of the discrimination, as both silhouette and contour-only stimuli supported an above-chance but reduced level of transfer. Experiment 3 suggested that motion information from both the body and the legs was involved in the pigeons' determination of the entire model's actions. Together, this evidence indicates that the pigeons likely used motion information derived from the entire model to discriminate these walking and running actions. As a consequence, the results support the hypothesis that pigeons can extract and classify the globally organized locomotive actions of nonavian animal models. 
This outcome is consistent with the different set of feature tests reported earlier (Asen & Cook, 2012). That study manipulated presentation rate and eliminated local rate of motion as a potential feature, indicating that the pigeons were not simply tracking how fast parts of the displays moved. As found here as well, inversion of the displays disrupted discrimination. Such inversions retain the vast majority of local features, and Experiment 3 provided evidence that the discrimination is not solely a product of attention to the leg motions, which are most disrupted by inversion (cf. Hirai et al., 2011). Combined with the present results, this strongly suggests that the pigeons were using the global configuration of the moving model rather than relying on local features. If so, these outcomes suggest that the pigeons had learned to recognize sequences of spatially oriented poses, or that the relative configurations of body parts within the model are the bases of this locomotion discrimination (e.g., Singer & Sheinberg, 2010). The results of Experiments 2 and 3 seem consistent with this hypothesis, as the remaining parts of the model's body were sufficient to mediate the discrimination in the absence of specific parts. Although the relative motion and positioning of the legs likely contributed significantly to the discrimination (it is locomotion, after all), the pigeons were still able to identify the model's actions when the legs were entirely eliminated in Experiment 3, indicating that the body's movement also carries useful information. Recognizing such pose sequences seems to require the encoding of the relative global positions and motions of multiple body parts. This type of configural encoding was likely facilitated by our use of numerous models, multiple orientations, and camera distances during training. Such extensive stimulus variability likely encouraged the pigeons to use generalized, global information from the model's movements, rather than localized parts or specific locations, to classify this large number of possible displays. 
If the pigeons attended to and integrated global movement information from across the complete model, there were some limitations in its application. In particular, this moving form information may need to be bounded by a contour or filled in. This is suggested by the complete failure of the PLD stimuli to support this action discrimination. Despite several manipulations designed to enhance the perceptual grouping of these coordinated but disconnected elements, the pigeons failed to see any actions in this type of display. The pigeons' failure to see “biological motion” could represent either a perceptual or a cognitive deficit. One possibility is that pigeons have visual or perceptual limitations on integrating unconnected parts into a coordinated whole. Although pigeons can integrate global visual information (Cook, 1992, 2001; Cook et al., 2005; Troje & Aust, 2013), there are also numerous instances of pigeons showing difficulties in grouping separated elements (Aust & Huber, 2006; Kelly et al., 2001; Sekuler et al., 1996), exhibiting greater attention to isolated local elements relative to the larger global information (Cavoto & Cook, 2001; Lea et al., 2006), and having difficulty attending to separated information (M. F. Brown, Cook, Lamb, & Riley, 1984; Cook, Riley, & Brown, 1992). The latter difficulties suggest that perceptually integrating separated information is not necessarily easy for them. A second possibility is that pigeons do not retain or use the same kind of higher-order cognitive models for action as humans might. It is the possession of these top-down expectations and attention for actions that may allow humans to readily see biological motion in such impoverished stimuli (Dittrich, 1999; Shiffrar, Lichtey, & Chatterjee, 1997; Thornton, Rensink, & Shiffrar, 2002). It is not clear what, if any, top-down expectations the pigeons may have had here. Finally, because of the considerable difference in the visual appearance of the complete models and the PLD stimuli, a third possibility is that the pigeons simply did not associate the PLD displays with their previously reinforced discrimination, and the resulting generalization decrement limited their transfer. Determining the conditions that may facilitate pigeons' global representation of actions in PLD stimuli is an important goal for future research. 
With the discovery of mirror neurons in monkeys, there has been renewed interest in motor-based theories of human action, embodied cognition, language, intentionality, and social cognition (Engel et al., 2013; Gallese, 2007; Grafton, 2009; Iacoboni, 2009; Jeannerod, 2001; Rizzolatti et al., 2001; Wilson & Knoblich, 2005). The current study of action discrimination by a nonhuman animal adds significantly to the ongoing debate regarding the nature and mechanisms of human action recognition and the role of motor simulation (Decety & Grèzes, 1999; Heyes, 2010; Hickok, 2009). One collective theme of many proposals is that action execution and action observation are commonly coded, sometimes suggested to be mediated by an independent, species-specific action network (Grafton, 2009; Jeannerod, 2001). Given that the motor systems for locomotion and flying in pigeons share little in common with the different quadruped motions tested here, our results carry the implication that actions can be discriminated without simple embodiment within the observer and without large computational and linguistic capacities. Thus, the current results seem to run counter to a major prediction of theories that assume an overlap between action execution and action perception. Whatever species-specific action system or even mirror-like neurons (Prather, Peters, Nowicki, & Mooney, 2008) birds may have, it likely did not evolve for the present discrimination or the models tested. The results instead suggest that the visual processes generally available for motion perception are likely sufficient to extract and recognize complex, sequentially moving forms without requiring activation of or simulation by the motor system. Such a conclusion is consistent with recent human findings regarding the discrimination of biologically consistent and artificial trajectories (Jastorff, Kourtzi, & Giese, 2006) and related criticisms of such motor-based theories (Hickok, 2009). Animals regularly need to recognize and react to the behaviors of a wide variety of species with which they may share few motor programs. While embodied cognition makes good evolutionary sense when thinking about the origins of cognition generally, making action recognition specifically dependent on one's own species' motor representations would prevent effective recognition of nonconspecific behavior. 
Examinations across different animal species will add considerably to our understanding of the mechanisms and evolution of behavioral recognition and advance the development of an expanded comparative science of visual cognition. With increasing success, studies have suggested that pigeons are able to form motion-based action categories (Asen & Cook, 2012; Cook, Beale, & Koban, 2011; Cook, Shaw, & Blaisdell, 2001; Dittrich et al., 1998; Mui et al., 2007) despite a size-limited nervous system. The stimuli in these experiments focused on locomotor categories because they are likely salient and tractable natural categories. These actions are also simple, periodic, and repetitive. In nature, however, there are many examples of temporally extended, complex action sequences that make up single behaviors, such as grooming or courting (Shimizu, 1998). The capacity to discriminate between such complex and nonrepetitive behaviors is clearly an important extension of the present research requiring further investigation. 
Supplementary Materials
Acknowledgments
This research was supported by a grant from the National Eye Institute (R01-EY022655). E-mail: Robert.Cook@tufts.edu. Home Page: www.pigeon.psy.tufts.edu
Commercial relationships: none. 
Corresponding author: Muhammad A. J. Qadri. 
Email: Muhammad.Qadri@tufts.edu. 
Address: Department of Psychology, Tufts University, Medford, MA, USA. 
References
Aggarwal J. K. Cai Q. (1999). Human motion analysis: A review. Computer Vision and Image Understanding, 73, 428–440. [CrossRef]
Arbib M. A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behavioral and Brain Sciences, 28, 105–124. [PubMed]
Asen Y. L. Cook R. G. (2012). Discrimination and categorization of actions by pigeons. Psychological Science, 23, 617–624, doi:10.1177/0956797611433333. [CrossRef] [PubMed]
Aust U. Huber L. (2006). Does the use of natural stimuli facilitate amodal completion in pigeons? Perception, 35, 333–349. [CrossRef] [PubMed]
Beintema J. A. Lappe M. (2002). Perception of biological motion without local image motion. Proceedings of the National Academy of Sciences, USA, 99 (8), 5661–5663, doi:10.1073/pnas.082483699. [CrossRef]
Blake R. (1993). Cats perceive biological motion. Psychological Science, 4, 54–57. [CrossRef]
Blake R. Shiffrar M. (2007). Perception of human motion. Annual Review of Psychology, 58, 47–73. [CrossRef] [PubMed]
Bobick A. F. Davis J. W. (2001). The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 (3), 257–267. [CrossRef]
Brown J. Kaplan G. Rogers L. J. Vallortigara G. (2010). Perception of biological motion in common marmosets (Callithrix jacchus): By females only. Animal Cognition, 13 (3), 555–564, doi:10.1007/s10071-009-0306-0. [CrossRef] [PubMed]
Brown M. F. Cook R. G. Lamb M. R. Riley D. A. (1984). The relation between response and attentional shifts in pigeon compound matching-to-sample performance. Animal Learning & Behavior, 12, 41–49, doi:10.3758/BF03199811. [CrossRef]
Buccino G. Binkofski F. Riggio L. (2004). The mirror neuron system and action recognition. Brain and Language, 89 (2), 370–376. [CrossRef] [PubMed]
Buccino G. Lui F. Canessa N. Patteri I. Lagravinese G. Benuzzi F. Rizzolatti G. (2004). Neural circuits involved in the recognition of actions performed by nonconspecifics: An fMRI study. Journal of Cognitive Neuroscience, 16 (1), 114–126. [CrossRef] [PubMed]
Byrne R. W. Russon A. E. (1998). Learning by imitation: A hierarchical approach. Behavioral and Brain Sciences, 21, 667–684. [PubMed]
Cabe P. A. Healey M. L. (1979). Figure-background color differences and transfer of discrimination from objects to line drawings with pigeons. Bulletin of the Psychonomic Society, 13 (3), 124–126. [CrossRef]
Cavoto K. K. Cook R. G. (2001). Cognitive precedence for local information in hierarchical stimulus processing by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 27 (1), 3–16, doi:10.1037/0097-7403.27.1.3. [CrossRef] [PubMed]
Cook R. G. (1992). Dimensional organization and texture-discrimination in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18 (4), 354–363. [CrossRef]
Cook R. G. (2001). Hierarchical stimulus processing by pigeons. In Cook R. G. (Ed.), Avian visual cognition, Available at www.pigeon.psy.tufts.edu/avc/cook/.
Cook R. G. Beale K. Koban A. C. (2011). Velocity-based motion categorization by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 37, 175–188, doi:10.1037/a0022105. [CrossRef] [PubMed]
Cook R. G. Goto K. Brooks D. I. (2005). Avian detection and identification of perceptual organization in random noise. Behavioural Processes, 69 (1), 79–95, doi:10.1016/j.beproc.2005.01.006. [CrossRef] [PubMed]
Cook R. G. Katz J. S. (1999). Dynamic object perception by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 25 (2), 194–210, doi:10.1037/0098-7403.25.2.194. [CrossRef] [PubMed]
Cook R. G. Riley D. A. Brown M. F. (1992). Spatial and configural factors in compound stimulus-processing by pigeons. Animal Learning & Behavior, 20 (1), 41–55. [CrossRef]
Cook R. G. Roberts S. (2007). The role of video coherence on object-based motion discriminations by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 33 (3), 287–298, doi:10.1037/0097-7403.33.3.287. [CrossRef] [PubMed]
Cook R. G. Shaw R. Blaisdell A. P. (2001). Dynamic object perception by pigeons: Discrimination of action in video presentations. Animal Cognition, 4, 137–146, doi:10.1007/s100710100097. [CrossRef] [PubMed]
Cook R. G. Wright A. A. Drachman E. E. (2012). Categorization of birds, mammals, and chimeras by pigeons. Behavioural Processes, 93, 98–110, doi:10.1016/j.beproc.2012.11.006. [CrossRef] [PubMed]
Decety J. Grèzes J. (1999). Neural mechanisms subserving the perception of human actions. Trends in Cognitive Sciences, 3 (5), 172–178. [CrossRef] [PubMed]
di Pellegrino G. Fadiga L. Fogassi L. Gallese V. Rizzolatti G. (1992). Understanding motor events: A neurophysiological study. Experimental Brain Research, 91 (1), 176–180. [CrossRef] [PubMed]
DiPietro N. T. Wasserman E. A. Young M. E. (2002). Effects of occlusion on pigeons' visual object recognition. Perception, 31, 1299–1312. [CrossRef] [PubMed]
Dittrich W. H. (1993). Action categories and the perception of biological motion. Perception, 22, 15–22. [CrossRef] [PubMed]
Dittrich W. H. (1999). Seeing biological motion - Is there a role for cognitive strategies? In Braffort A. Gherbi R. Gibet S. Richardson J. Teil D. (Eds.), Gesture-based communication in human-computer interaction (Vol. 1739) (pp. 3–22). Berlin, Germany: Springer.
Dittrich W. H. Lea S. E. G. (1993). Motion as a natural category for pigeons: Generalization and a feature-positive effect. Journal of the Experimental Analysis of Behavior, 59, 115–129, doi:10.1901/jeab.1993.59-115. [CrossRef] [PubMed]
Dittrich W. H. Lea S. E. G. Barrett J. Gurr P. R. (1998). Categorization of natural movements by pigeons: Visual concept discrimination and biological motion. Journal of the Experimental Analysis of Behavior, 70, 281–299. [CrossRef] [PubMed]
Engel A. K. Maye A. Kurthen M. König P. (2013). Where's the action? The pragmatic turn in cognitive science. Trends in Cognitive Sciences, 17 (5), 202–209 , doi:10.1016/j.tics.2013.03.006. [CrossRef] [PubMed]
Fernández-Juricic E. Erichsen J. T. Kacelnik A. (2004). Visual perception and social foraging in birds. Trends in Ecology & Evolution, 19, 25–31. [CrossRef] [PubMed]
Gallese V. (2007). Before and below ‘theory of mind': Embodied simulation and the neural correlates of social cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 362 (1480), 659–669. [CrossRef]
Giese M. A. Poggio T. (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4, 179–192. [CrossRef] [PubMed]
Grafton S. T. (2009). Embodied cognition and the simulation of action to understand others. Annals of the New York Academy of Sciences, 1156 (1), 97–117. [CrossRef] [PubMed]
Heyes C. (2010). Mesmerising mirror neurons. NeuroImage, 51 (2), 789–791. [CrossRef] [PubMed]
Hickok G. (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21 (7), 1229–1243. [CrossRef] [PubMed]
Hirai M. Chang D. H. F. Saunders D. R. Troje N. F. (2011). Body configuration modulates the usage of local cues to direction in biological-motion perception. Psychological Science, 22 (12), 1543–1549, doi:10.1177/0956797611417257. [CrossRef] [PubMed]
Huber L. Aust U. Michelbach G. Ölzant S. Loidolt M. Nowotny R. (1999). Limits on symmetry conceptualization in pigeons. The Quarterly Journal of Experimental Psychology, 52B, 351–379. [CrossRef]
Iacoboni M. (2009). Imitation, empathy, and mirror neurons. Annual Review of Psychology, 60, 653–670. [CrossRef] [PubMed]
Jastorff J. Kourtzi Z. Giese M. A. (2006). Learning to discriminate complex movements: Biological versus artificial trajectories. Journal of Vision, 6 (8): 3, 791–804, http://www.journalofvision.org/content/6/8/3, doi:10.1167/6.8.3. [PubMed] [Article]
Jeannerod M. (2001). Neural simulation of action: A unifying mechanism for motor cognition. NeuroImage, 14 (1), S103–S109. [CrossRef] [PubMed]
Jitsumori M. Natori M. Okuyama K. (1999). Recognition of moving video images of conspecifics by pigeons: Effects of individuals, static and dynamic motion cues, and movement. Animal Learning & Behavior, 27, 303–315. [CrossRef]
Johansson G. (1973). Visual perception of biological motion and a model of its analysis. Perception and Psychophysics, 14, 201–211. [CrossRef]
Kaiser M. D. Shiffrar M. (2009). The visual perception of motion by observers with autism spectrum disorders: A review and synthesis. Psychonomic Bulletin & Review, 16 (5), 761–777, doi:10.3758/pbr.16.5.761. [CrossRef] [PubMed]
Kelly D. M. Bischof W. F. Wong-Wylie D. R. Spetch M. L. (2001). Detection of Glass patterns by pigeons and humans: Implications for differences in higher-level processing. Psychological Science, 12 (4), 338–342. [CrossRef] [PubMed]
Koban A. C. Cook R. G. (2009). Rotational object discrimination by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 35, 250–265, doi:10.1037/a0013874. [CrossRef] [PubMed]
Lazareva O. F. Wasserman E. A. Biederman I. (2007). Pigeons' recognition of partially occluded objects depends on specific training experience. Perception, 36 (1), 33–48. [CrossRef] [PubMed]
Lea S. E. G. Goto K. Osthaus B. Ryan C. M. E. (2006). The logic of the stimulus. Animal Cognition, 9 (4), 247–256, doi:10.1007/s10071-006-0038-3. [CrossRef] [PubMed]
Malt B. C. Gennari S. Imai M. Ameel E. Tsuda N. Majid A. (2008). Talking about walking: Biomechanics and the language of locomotion. Psychological Science, 19 (3), 232–240. [CrossRef] [PubMed]
Mui R. Haselgrove M. McGregor A. Futter J. Heyes C. Pearce J. M. (2007). The discrimination of natural movement by budgerigars (Melopsittacus undulatus) and pigeons (Columba livia). Journal of Experimental Psychology: Animal Behavior Processes, 33, 371–380. [CrossRef] [PubMed]
Oram M. Perrett D. (1994). Responses of anterior superior temporal polysensory (STPa) neurons to “biological motion” stimuli. Journal of Cognitive Neuroscience, 6 (2), 99–116. [CrossRef] [PubMed]
Parron C. Deruelle C. Fagot J. (2007). Processing of biological motion point-light displays by baboons (Papio papio). Journal of Experimental Psychology: Animal Behavior Processes, 33, 381–391. [CrossRef] [PubMed]
Peissig J. J. Young M. E. Wasserman E. A. Biederman I. (2005). The role of edges in object recognition by pigeons. Perception, 34, 1353–1374. [CrossRef] [PubMed]
Polana R. Nelson R. (1997). Detection and recognition of periodic, nonrigid motion. International Journal of Computer Vision, 23 (3), 261–282. [CrossRef]
Poppe R. (2010). A survey on vision-based human action recognition. Image and Vision Computing, 28 (6), 976–990. [CrossRef]
Prather J. F. Peters S. Nowicki S. Mooney R. (2008). Precise auditory-vocal mirroring in neurons for learned vocal communication. Nature, 451 (7176), 305–310, doi:10.1038/nature06492. [CrossRef] [PubMed]
Puce A. Perrett D. (2003). Electrophysiology and brain imaging of biological motion. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358 (1431), 435–445. [CrossRef]
Regolin L. Tommasi L. Vallortigara G. (2000). Visual perception of biological motion in newly hatched chicks as revealed by an imprinting procedure. Animal Cognition, 3 (1), 53–60. [CrossRef]
Rizzolatti G. Fogassi L. Gallese V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2, 661–670. [CrossRef] [PubMed]
Schindler K. Van Gool L. (2008). Action snippets: How many frames does human action recognition require? Proceedings of the Conference on Computer Vision and Pattern Recognition (pp. 1–8).
Sekuler A. B. Lee J. A. J. Shettleworth S. J. (1996). Pigeons do not complete partly occluded figures. Perception, 25, 1109–1120. [CrossRef] [PubMed]
Shiffrar M. Lichtey L. Chatterjee S. H. (1997). The perception of biological motion across apertures. Perception & Psychophysics, 59 (1), 51–59. [CrossRef] [PubMed]
Shimizu T. (1998). Conspecific recognition in pigeons (Columba livia) using dynamic video images. Behaviour, 135, 43–53. [CrossRef]
Singer J. M. Sheinberg D. L. (2010). Temporal cortex neurons encode articulated actions as slow sequences of integrated poses. Journal of Neuroscience, 30, 3133–3145. [CrossRef] [PubMed]
Thirkettle M. Benton C. P. Scott-Samuel N. E. (2009). Contributions of form, motion and task to biological motion perception. Journal of Vision, 9 (3): 28, 1–11, http://www.journalofvision.org/content/9/3/28, doi:10.1167/9.3.28. [PubMed] [Article]
Thornton I. M. Rensink R. A. Shiffrar M. (2002). Active versus passive processing of biological motion. Perception, 31 (7), 837–853. [CrossRef] [PubMed]
Tomonaga M. (2001). Visual search for biological motion patterns in chimpanzees (Pan troglodytes). Psychologia: An International Journal of Psychology in the Orient, 44, 46–59.
Troje N. F. Aust U. (2013). What do you mean with “direction”? Local and global cues to biological motion perception in pigeons. Vision Research, 79, 47–55, doi:10.1016/j.visres.2013.01.002. [CrossRef] [PubMed]
Vallortigara G. Regolin L. Marconato F. (2005). Visually inexperienced chicks exhibit spontaneous preference for biological motion patterns. PLoS Biology, 3, 1312–1316. [CrossRef]
Wang L. Hu W. Tan T. (2003). Recent developments in human motion analysis. Pattern Recognition, 36 (3), 585–601. [CrossRef]
Wilson M. Knoblich G. (2005). The case for motor involvement in perceiving conspecifics. Psychological Bulletin, 131 (3), 460–473. [CrossRef] [PubMed]
Young M. E. Peissig J. J. Wasserman E. A. Biederman I. (2001). Discrimination of geons by pigeons: The effects of variations in surface depiction. Animal Learning & Behavior, 29 (2), 97–106. [CrossRef]
Figure 1
 
Example of one of the eight animal models used in these experiments to exemplify the actions of walking and running. It is shown as rendered from a low, close, side perspective. Superimposed on the displays are the different motion paths of five body parts (nose, neck junction, tail junction, fore right foot, and rear right foot). These paths were not present in the stimuli tested with the pigeons.
Figure 2
 
Mean normalized peck rates for the four pigeons in Experiment 1 tested with different types of PLDs. Error bars indicate the standard error of each condition.
Figure 3
 
Mean normalized peck rates for the four pigeons in Experiment 2 tested with different contour displays. Error bars indicate the standard error of each condition.
Figure 4
 
Mean normalized peck rates for the three pigeons in Experiment 2 tested with different silhouette displays. Error bars indicate the standard error of each condition.
Figure 5
 
Mean normalized peck rates for the three pigeons in Experiment 3 tested in different types of occlusion conditions. Error bars indicate the standard error of each condition.
Figure 6
 
Mean normalized peck rates for the three pigeons in Experiment 3 tested with the different feature-deletion conditions (i.e., with various portions of the stimulus deleted). Error bars indicate the standard error of each condition.
Table 1
 
Mean normalized pecks in the PLD conditions during the second test of Experiment 1.
 
Condition     Dot size   Close           Far
                         S+      S−      S+      S−
Baseline                 1.08    0.13    0.90    0.18
Articulated              0.09    0.09    0.07    0.10
                         0.04    0.04    0.09    0.06
                         0.13    0.10    0.12    0.05
Inverted                 0.09    0.10    0.10    0.09
                         0.08    0.05    0.09    0.08
                         0.14    0.08    0.09    0.10
Randomized               0.06    0.12    0.09    0.09
                         0.06    0.10    0.11    0.06
                         0.10    0.13    0.08    0.10