Open Access
Article  |   December 2019
Face categorization and behavioral templates in rats
Author Affiliations & Notes
  • Anna Elisabeth Schnell
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
    annaelisabeth.schnell@kuleuven.be
  • Gert Van den Bergh
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
    gert.vandenbergh@kuleuven.be
  • Ben Vermaercke
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
    ben.vermaercke@kuleuven.vib.be
  • Kim Gijbels
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
  • Christophe Bossens
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
    christophe.bossens@kuleuven.be
  • Hans Op de Beeck
    Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
    Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
    hans.opdebeeck@kuleuven.be
  • Footnotes
    *  AES, GVdB, and BV contributed equally to this article.
Journal of Vision December 2019, Vol.19, 9. doi:https://doi.org/10.1167/19.14.9
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Rodents have become a popular model in vision science. It is still unclear how vision in rodents relates to primate vision when it comes to complex visual tasks. Here we report on the results of training rats in a face-categorization and generalization task. Additionally, the Bubbles paradigm is used to determine the behavioral templates of the animals. We found that rats are capable of face categorization and can generalize to previously unseen exemplars. Performance is affected—but remains above chance—by stimulus modifications such as upside-down and contrast-inverted stimuli. The behavioral templates of the rats overlap with a pixel-based template, with a bias toward the upper left parts of the stimuli. Together, these findings significantly expand the evidence about the extent to which rats learn complex visual-categorization tasks.

Introduction
Until recently, visual perception and its neural underpinnings at the cortical level were mostly investigated in monkeys (Mishkin & Ungerleider, 1982; Felleman & Van Essen, 1991; Fabre-Thorpe, Richard, & Thorpe, 1998; Vogels, 1999a, 1999b; Rainer, Augath, Trinath, & Logothetis, 2002; Kiani, Esteky, Mirpour, & Tanaka, 2007; Orban, 2008), cats (Berman & Cynader, 1972; Stryker & Blakemore, 1972; Blake & Holopigian, 1985; Chino, Kaas, Smith, Langston, & Cheng, 1992; Blake, 1993), and pigeons (Gibson, Wasserman, Gosselin, & Schyns, 2005; Gibson, Lazareva, Gosselin, Schyns, & Wasserman, 2007). In contrast, in many other domains of experimentation, rodents, in particular rats and mice, have been the primary animal model. One of the main reasons that these rodents are not the model of choice in vision research is that they are nocturnal animals, and thus their visual sense is less advanced, with, for example, lower visual acuity compared to primates such as humans (Prusky, Harker, Douglas, & Whishaw, 2002). Recently, more and more vision studies have started to focus on rodents because of the wide range of genetic and systems-level tools available in these species, including genetic knockout models, two-photon imaging, and optogenetics. Nevertheless, it is still unclear to what extent the rodent model can serve as a useful model for complex forms of visual cognition. Here we will focus on a particularly important domain of primate vision: the ability to detect faces. 
Several studies have already suggested that rodents are capable of learning complex discrimination tasks (Simpson & Gaffan, 1999; Minini & Jeffery, 2006; Zoccolan, Oertelt, DiCarlo, & Cox, 2009; Tafazoli, Di Filippo, & Zoccolan, 2012; Vermaercke & Op de Beeck, 2012; Alemi-Neissi, Rosselli, & Zoccolan, 2013; for a review, see Zoccolan, 2015). For example, Zoccolan et al. (2009) used a transform-invariant object-recognition task to show that rats are capable of higher level visual processing. The study of Vinken, Vermaercke, and Op de Beeck (2014) suggested that rats can be trained in a complex visual-categorization task with natural stimuli. These natural stimuli included movies of rats, nonrat objects, and a phase-scrambled version of rat movies. Another study applying a visual shape-discrimination task presented static and dynamic stimuli that differed in both first-order luminance information and second-order cues (De Keyser, Bossens, Kubilius, & Op de Beeck, 2015). The results showed that rats were able to use complex strategies when necessary to solve the task. 
However, even though multiple studies are in favor of using the rodent model for relatively complex forms of visual cognition, it is clear—also from behavioral discrimination tasks—that the rodent visual system works differently than the human visual system. In many cases rodents perform worse than humans, and there are some tasks that they do not seem to be able to learn (Minini & Jeffery, 2006; Bossens & Op de Beeck, 2016). In the study by Bossens and Op de Beeck (2016), the animals failed to learn the nonlinear part in the object-discrimination task that was used. The human participants, however, did not have any difficulties, and even performed best on this nonlinear task. In other situations, surprisingly, rodents can sometimes perform better on visual tasks than humans. For example, Vermaercke, Cop, Willems, D'Hooge, and Op de Beeck (2014) have shown that rats significantly outperformed humans in terms of generalization in an information-integration categorization task. This study provides evidence that category learning and generalization in humans are dimension based, whereas rats use a similarity-based generalization strategy. Taken together, these studies highlight that more research on the rodent visual system is necessary. 
An important step in rodent-vision research is to investigate how the animals solve visual tasks. A paradigm that can aid in this quest is the Bubbles paradigm, first described by Gosselin and Schyns (2001). This paradigm can be used to understand the mechanisms of discrimination and categorization, and has been previously used, with success, in rats (Vermaercke & Op de Beeck, 2012; Alemi-Neissi et al., 2013; Rosselli, Alemi, Ansuini, & Zoccolan, 2015; Djurdjevic, Ansuini, Bertolini, Macke, & Zoccolan, 2018) and in monkeys (Nielsen, Logothetis, & Rainer, 2006, 2008). It is important to understand whether rodents focus on higher level features to solve a visual task or whether they adhere to easier strategies. These higher level features could include oriented edges and corners as well as (un)oriented local contrast patterns (Zoccolan, 2015). Some studies suggest that rats indeed use higher level visual processing (Zoccolan et al., 2009; Zoccolan, 2015), which stands in contrast with the work of Minini and Jeffery (2006), who state that rats adopt a lower level strategy. Several studies have taken a position in between (e.g., Vermaercke & Op de Beeck, 2012; Rosselli et al., 2015; Djurdjevic et al., 2018). Vermaercke and Op de Beeck showed that the behavioral templates of rats are context and position tolerant, and suggested that the animals use a flexible combination of easier, midlevel strategies. Their study first found that rats focused on the lower part of the stimuli to distinguish between the target and the distractor, which is in line with the research of Minini and Jeffery. However, when rats were forced to also use the top part of the stimuli, because the lower part was masked by Bubbles, they were able to do so. This suggests that rats adopt a flexible strategy that is context dependent. This hypothesis is supported by the study of Vinken et al. (2014), who discuss the possibility of explaining the results in terms of a complex combination of midlevel features such as local contrast cues. Djurdjevic et al. expanded on Vermaercke and Op de Beeck's study by changing the experimental design. They found that the perceptual strategies varied largely among rats, and concluded that rat object vision is transformation tolerant. Rosselli et al. trained rats to discriminate two objects that have a highly similar structure, and found that the complexity of the discrimination affects the animals' perceptual strategy. When the stimuli were highly similar, and thus the task harder, the animals adopted more variable strategies. 
Here we plan to use a combination of categorization tasks, generalization tests, and the Bubbles paradigm to investigate the possibility of face categorization in rats. Face stimuli have often been used in primate-vision research; here we will only briefly mention a few findings. Maurer, Le Grand, and Mondloch (2002) found that configural processing—that is, perceiving relations among the features of visual objects—is affected by inversion, and that this effect is particularly strong with faces. Second, contrast polarity is very important for detecting faces as a computational strategy; this is supported by findings from Ohayon, Freiwald, and Tsao (2012), who showed significant contrast polarity preferences in face-selective neurons within the inferotemporal cortex of macaque monkeys (see also Tsao, Freiwald, Tootell, & Livingstone, 2006). Previous research has suggested that such contrast effects are only found in faces (Galper, 1970; Subramaniam & Biederman, 1997; Vuong, Peissig, Harrison, & Tarr, 2005; Nederhouser, Yue, Mangini, & Biederman, 2007). 
The aims of the present study are twofold. The first part investigates face categorization and generalization in rats. We will determine whether rats are able to master a face/nonface categorization task and whether they can generalize to new, unseen stimuli. Additionally, in line with previous primate research, we will investigate generalization through the effect of modifications to the stimuli, such as size and position changes, inverted luminance, and upside-down stimuli. The second part focuses on determining the behavioral templates of the rats, using the Bubbles paradigm as described by Gosselin and Schyns (2001) and Vermaercke and Op de Beeck (2012). By using this paradigm, we wish to reveal the strategy or template of the rats during categorization. 
Materials and methods
Animals
Six male outbred Long Evans rats (Janvier Labs, Le Genest-Saint-Isle, France) were used for behavioral training. At the start of the training, these rats were at least 3 months old. This strain has a visual acuity of 1.0 c/° (Prusky et al., 2002). Rats were housed in groups of three per cage and held in a light cycle of 12 hr on, 12 hr off. Each cage was enriched with a plastic toy (Bio-Serv, Flemington, NJ). During training, rats were food deprived and obtained enough food to maintain a body weight of 80%–90% of their original body weight. Rats received water ad libitum. The training lasted for 1 year. One of the animals died during the months between the face-categorization part and the Bubbles part. All experiments and procedures involving living animals were approved by the Ethical Committee of the University of Leuven and were in accordance with the European Commission Directive of September 22, 2010 (2010/63/EU). 
Setup
Rats were trained in two automated touch-screen rat-testing chambers (Campden Instruments Ltd., Leicester, UK) with ABET II controller software. Each chamber was contained within a sound-attenuating box. Rats were placed within a trapezoid operant chamber (30.5 × 25.1 × 8.25 cm) with black Perspex side walls and a stainless-steel grid floor. At the large base of the trapezoid, an infrared touchscreen monitor was installed. A black Perspex mask containing two square response windows (10.0 × 10.0 cm) was placed in front of the monitor. To force the animals to attend to the stimuli and to view the stimuli within their central visual fields, a shelf (5.4 cm wide) supported by springs was installed onto the mask 16.5 cm above floor level. In order to respond, the animals had to stand on their hind paws, press the shelf down, and stretch toward the stimuli. Due to the infrared touch screen, no pressure had to be applied onto the screen—close proximity was enough to elicit a touch. On the opposite side of the chamber was a reward tray in which food pellets (45-mg sucrose pellets; TestDiet, St. Louis, MO) could be delivered. The chamber was further equipped with a house light and a tone generator. 
Stimuli
Images measured 340 × 340 pixels on a screen of 10.0 × 10.0 cm. Rats made their response by touching the stimulus on the screen. However, it is possible that they made their choice already when they were still away from the screen, possibly even at the reward tray. This precludes us from defining the properties of the stimuli in visual degrees at the moment of making the response choice. The stimuli are therefore defined in measures of pixels and centimeters. 
Face categorization
The initial stimulus pair consisted of a face image as the target stimulus and an image of a computer-generated three-dimensional object as the distractor stimulus, both in gray scale on a black background. In all phases, the general luminance levels, defined as the sum of pixel intensities, were similar between target and distractor stimuli. Additional faces and objects were generated similarly to obtain a final number of 10 faces and 10 different objects (see Figure 1). Half of these stimuli (five faces, five objects) were used during the discrimination training, whereas the other half were shown only during the generalization test. During all phases, the stimuli were presented in every possible combination. Faces were generated as three-dimensional models with the open-source software MakeHuman and rendered in the open-source software Blender. Objects were generated and rendered in Blender. 
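The luminance matching described above, equating the summed pixel intensities of target and distractor, can be sketched as follows. This is a minimal Python illustration rather than the study's own code (the original pipeline used MATLAB and Photoshop), and the function name is our own:

```python
def match_luminance(image, target_sum):
    """Rescale pixel intensities so that the image's summed luminance
    (the sum of all pixel intensities) equals target_sum.

    `image` is a list of rows of intensities in [0, 1]; black background
    pixels (value 0) contribute nothing to the sum and stay black.
    """
    total = sum(sum(row) for row in image)
    scale = target_sum / total
    return [[pixel * scale for pixel in row] for row in image]
```

Applying this to both members of a stimulus pair with the same `target_sum` equates their overall luminance while preserving their relative contrast patterns.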
Figure 1
 
An overview of all stimuli that were used in the face-categorization part of this study. The first five faces and objects were used for training. The second set of five faces and objects was used for both generalization tests.
The second set of stimuli consisted of the same faces and objects as used in the training phases, but presented on a different background. First, the luminance level of the background was increased from black to light gray in three steps. Next, the gray background was replaced with a noisy background of the same mean luminance in two steps: first at 50% contrast, then at 100% contrast. The stimuli for the second generalization test consisted of the same stimuli as for the first generalization test, but with a noisy background with 100% contrast. Backgrounds were generated using MATLAB (MathWorks, Natick, MA) and added to the objects and faces with Adobe Photoshop. 
The third set of stimuli was used to test size and position invariance. The stimuli used in these phases consisted of the training stimuli. First, the necks were removed from the faces. Next, the size of the faces and objects was decreased in two steps (80% and 75% of the original size). Finally, the faces and objects were presented at five different positions (center, upper left, upper right, lower left, lower right). Stimuli were always shown on a noisy background with 100% contrast for these tests of size and position invariance. During position invariance, all stimuli were always shown at 70% of their original size. 
A final set of stimuli consisted of stimuli that were shown upside down or with inverted luminance. Here, stimuli were always shown on a gray background. Figure 2 presents examples of stimuli used in each phase. 
Figure 2
 
Examples of stimuli used in each phase. Note that only one face and one object per phase are shown here. These were chosen randomly, and their only purpose is to give an example of the stimuli. For a full set, see Figure 1.
Bubbles paradigm
In this part of the study, only three faces and three objects were used (see Figure 3a). These stimuli were shown on a gray background and masked by a number of Gaussian blobs, or Bubbles (the terms will be used interchangeably; see Figure 3b). Here, we applied the variation on this approach used by Vermaercke and Op de Beeck (2012), in which the Bubbles function as occluders. 
Figure 3
 
(a) The set of stimuli that were used for the Bubbles part of this study. Note that only three faces and three objects were used. All stimuli are shown on a gray background. (b) Example of stimuli that are masked with a number of Gaussian blobs. (c) Unsmoothed differential image.
The Gaussian blobs that were used had a size given by a sigma of 60 pixels, which is a similar size, relative to the stimulus size, as was used by Vermaercke and Op de Beeck (2012, figure 1). This Bubbles size is large enough to mask only one facial feature, as visualized in Figure 4. The performance of the rats was primarily determined by the number of Bubbles. A performance of approximately 70% correct in each session was set as the criterion level. To achieve this, the blob number for the next session was redefined after each session. If the performance in the previous session was below 70%, the number of Bubbles was decreased. Blobs were added if the animals reached the 70% performance criterion. The number of Bubbles varied between 30 and 40, and all Bubbles were randomly placed in the 340 × 340 pixel space, forming a Bubble mask. The value of each pixel in the Bubbles mask was used to define the contrast—that is, the deviation from the mean luminance value of the whole image—of each pixel in the stimulus. Pixel contrast in the center of the Bubble is zero when the Bubble reaches its maximum. This can be defined as stimulus = (source stimulus − mean background level) × Bubbles mask + mean background level, where × denotes point-wise multiplication and the values all range between 0 and 1. For each stimulus pair shown, the Bubbles mask was identical for the target and distractor images. 
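The masking equation and the session-by-session adjustment of the Bubble count can be sketched as follows. This is a hedged Python illustration, not the study's MATLAB code: the toy grid and sigma, the rule for combining overlapping blobs (taking the per-pixel minimum), the one-Bubble step size of the staircase, and all function names are our own assumptions.

```python
import math
import random

SIZE = 32    # toy grid; the study used 340 x 340 pixel stimuli
SIGMA = 6    # toy blob size; the study used a sigma of 60 pixels

def bubbles_mask(n_bubbles, size=SIZE, sigma=SIGMA, rng=random):
    """Occluding mask: 1 leaves the stimulus untouched, 0 (a blob center at
    full strength) drives the local contrast to zero."""
    mask = [[1.0] * size for _ in range(size)]
    for _ in range(n_bubbles):
        cx, cy = rng.uniform(0, size - 1), rng.uniform(0, size - 1)
        for y in range(size):
            for x in range(size):
                g = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
                # Assumption: overlapping blobs keep the strongest occlusion.
                mask[y][x] = min(mask[y][x], 1.0 - g)
    return mask

def apply_mask(stimulus, mask, mean_bg=0.5):
    """stimulus = (source - mean background) * mask + mean background,
    with all values in [0, 1], as defined in the text."""
    return [[(s - mean_bg) * m + mean_bg for s, m in zip(s_row, m_row)]
            for s_row, m_row in zip(stimulus, mask)]

def next_bubble_count(n, last_session_pct, target=70, lo=30, hi=40):
    """Staircase toward ~70% correct: add a Bubble (more occlusion) after a
    session at or above criterion, remove one after a poor session, clamped
    to the 30-40 range reported in the text."""
    n = n + 1 if last_session_pct >= target else n - 1
    return max(lo, min(hi, n))
```

Note that the same mask is applied to both the target and the distractor of a pair, as stated above.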
Figure 4
 
Examples of Bubbles masks. With this Bubbles size, it is still possible to mask only one facial feature while still showing all other features.
As a benchmark for the obtained behavioral templates, we also computed the differential image that indicated which parts of the images were different between the stimuli (see Figure 3c). This image was obtained by taking, per pixel, the absolute value of the difference in each pair of stimuli and then summing across the stimulus pairs. The redder parts indicate areas where the stimuli differed more. 
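The differential image described above can be computed as in the following sketch (Python rather than the study's MATLAB; the function name is illustrative):

```python
def differential_image(faces, objects):
    """Per pixel: the absolute intensity difference, summed over every
    face/object pairing. Larger values mark regions where the two
    stimulus categories differ more."""
    height, width = len(faces[0]), len(faces[0][0])
    diff = [[0.0] * width for _ in range(height)]
    for face in faces:
        for obj in objects:
            for y in range(height):
                for x in range(width):
                    diff[y][x] += abs(face[y][x] - obj[y][x])
    return diff
```

For the three faces and three objects of the Bubbles experiment, the sum therefore runs over all nine pairings.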
Protocols
Shaping
Before the actual training procedure started, the animals went through a shaping period in which they were accustomed to the behavioral setup in a series of five different phases. First, they were placed in the touch-screen chamber with all lights extinguished and no stimuli on the screen. In the reward tray, 4.5 g of reward pellets were delivered. In this habituation phase, the animals remained for 30 min and proceeded to the next phase once they had eaten all pellets in a single session. Second, rats proceeded with the initial touch phase. Here, a stimulus was presented on one of the screens while the other screen remained black. The stimulus was a random black-and-white drawing. If the animals reacted to the stimulus by touching either touch screen within 30 s, the stimulus disappeared, a tone sounded, and three reward pellets were delivered. If they did not respond to the stimulus, they received only one reward pellet. In both cases a time-out period of 20 s started before the next trial began. After one session, the animals proceeded to the must-touch phase. In this third phase, the same stimuli as in the previous phase were shown. Now, however, a stimulus remained on the screen until animals made a response to it by touching the correct screen. If they succeeded, a single food pellet was delivered and the time-out period started. If the animals touched the black screen, nothing happened. Some jam was applied to the Perspex mask to motivate the animals to approach the stimulus. As soon as an animal performed 100 correct trials in a single session, it proceeded to the fourth phase. During the must-initiate phase, a rat had to learn to initiate a trial by sticking its head into the reward tray. To signal the possibility to initiate a trial, a light was turned on in the reward tray after the time-out period. 
Finally, in the punish-incorrect phase, touching the incorrect black screen instead of the white screen caused the house light in the operant chamber to illuminate for 5 s, after which a time-out period of 20 s started. After this, the same trial was repeated until the animal made the correct choice. This type of trial is a correction trial, and these trials are not included in the total trial count or the analysis procedures. 
Face categorization and generalization
An overview of all phases for the first part of this study can be found in Table 1. This table also shows how many animals were used in each phase. Supplementary Table S1 gives an overview of the reward scheme as well as stimuli used in each phase. 
Table 1
 
An overview of all phases of the first part of the study.
Rats started with the face-versus-object discrimination training after they completed the shaping procedure. They performed a single session each day in the operant chamber and were taken out of the chamber when they completed 100 trials or had been in the setup for 60 min. As an indication, rats performed on average 72 trials per training phase (Phases 1–5). Outliers with low numbers of trials occurred; in individual learning curves these points are marked in gray (see Figure 5). In these cases, there were mostly difficulties with the food-restriction schedule, due to which rats were not sufficiently motivated to do the task. 
Figure 5
 
Learning curves of the first five phases, including both the new and old stimuli for each phase. There is one subplot for each rat, and each line indicates the learning curve of one phase. The gray data points in the plots of Rats 1, 4, and 6 indicate outliers due to a low number of trials in a session (< 8 trials). The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
An intertrial interval of 20 s separated each pair of trials. Correction trials were presented after each incorrect response to reduce the chance that animals would develop a response bias to one of the two screens. To proceed to the next training phase, rats had to perform at or above 80% correct for two consecutive sessions. Some of the rats, however, seemed to have problems reaching this criterion level in later phases. For this reason, the criteria were relaxed to a more gradual scheme (see Table 2), similar to De Keyser et al. (2015). To determine when the threshold was lowered, two conditions had to be met. First, the average performance of the rats had to be above a certain threshold (Table 2, left column) during a fixed number of sessions (Table 2, right column). Second, performance during the last session had to be above this predetermined threshold. 
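One possible reading of the two conditions above can be sketched in Python; the specific threshold values and session counts come from Table 2, and the function name and exact logic are our own interpretation rather than the study's implementation:

```python
def criterion_met(session_pcts, threshold, n_sessions):
    """Two conditions, per the text: (1) mean percentage correct over the
    last n_sessions reaches the threshold, and (2) the most recent session
    itself reaches the threshold."""
    if len(session_pcts) < n_sessions:
        return False
    recent = session_pcts[-n_sessions:]
    mean_ok = sum(recent) / n_sessions >= threshold
    last_ok = recent[-1] >= threshold
    return mean_ok and last_ok
```

For example, a rat averaging above threshold but dipping below it on the final session would not yet satisfy the criterion under this reading.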
Table 2
 
Performance criteria. Rats had to perform at or above the threshold (left column) during a predetermined number of sessions (right column). During the last session, they had to be at or above threshold; if they were not, the threshold was lowered.
In the first training phase (Phase 1), the animals had to discriminate between the first face and object pair. In all phases, touching the face was rewarded. After reaching the criterion levels in Phase 1, rats proceeded to the second phase, where a new target (face) and distractor (object) were introduced. All possible combinations of new and old faces and objects were randomly presented, with the frequency of the new faces and objects at 50% of total presentations. In Phases 3, 4, and 5, a third, fourth, and fifth target and distractor were added to the stimulus set. In these first five training phases, criterion performance was calculated across all trials, including both new and old stimuli. Rats were trained until they could discriminate five different faces from five different objects at criterion level. 
After the initial discrimination and categorization training (Phases 1–5), animals proceeded to the first generalization test (Phase 6), in which generalization to new, unseen stimuli was investigated. Rats were therefore presented with only five new faces and five new objects on the same black background (see Figure 1). Stimuli were presented in all possible combinations. In this generalization test, no correction trials were used and rats were always rewarded. This prevented the rats from using reward feedback to learn which stimuli were correct. 
To reduce the possibility that generalization of the animals for faces was due to local contrast cues at the edges of the faces or objects, the complexity of the background was gradually increased in the next five phases. Here only the old stimuli were presented, that is, the same as in Phases 1–5, but with modified backgrounds. Rats were rewarded only for a correct choice. First, the luminance of the background was increased in three steps until the background was midgray (Phases 7–9). In each step, animals had to reach the criterion level again to continue to the next phase. Next, a random Gaussian noise background was added for both the targets and distractors. Each target and distractor had a single, fixed background; within a stimulus pair, however, the background differed between the target and distractor. Mean luminance was kept equal to the brightest background from the previous phase. In Phase 10, the Michelson contrast of the background noise, here defined in terms of pixel values, was set to 50%, whereas in the next phase (Phase 11) it was increased to 100%. Note that only four of the original six rats were used in Phase 11: this part of the study was run as part of an internship, and because two of the rats were slow learners, time ran out and only the four fastest rats were included. 
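A noise background with a given Michelson contrast, defined in pixel values as (Lmax - Lmin)/(Lmax + Lmin), can be generated as in the following sketch. For simplicity this uses uniform rather than Gaussian noise, so the stated contrast holds exactly at the distribution's extremes; the function name and this simplification are our own:

```python
import random

def noise_background(size, mean_lum, contrast, seed=None):
    """Square field of uniform noise whose extreme pixel values lo and hi
    satisfy Michelson contrast (hi - lo) / (hi + lo) = contrast, centered
    on mean_lum (all values in [0, 1])."""
    rng = random.Random(seed)
    lo = mean_lum * (1 - contrast)
    hi = mean_lum * (1 + contrast)
    return [[rng.uniform(lo, hi) for _ in range(size)] for _ in range(size)]
```

With a midgray mean luminance of 0.5, a 50% contrast yields values in [0.25, 0.75], and a 100% contrast spans the full [0, 1] range.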
After rats reached the criterion level for the stimuli in Phase 11, a second generalization test (Phase 12) was performed with the same new targets and new distractors as in the first generalization test (Phase 6). However, stimuli were now presented on a noisy background. Again, no correction trials were used and rats were rewarded in every trial. Because of this reward pattern that offers no feedback to the animal, it is assumed that animals did not learn to associate the faces and objects from the first generalization test with the reward scheme, unless they had already done so through generalizing from the first five faces and objects. 
The next four phases included further modifications to the stimuli. Phase 13 included stimuli in which the necks of the faces were removed. The distractors remained the same. The following two phases (14 and 15) were used to investigate size invariance. For Phases 13–15, the stimuli were presented on a noisy background (with 100% contrast). Phase 16 investigated position invariance. After the 100%-contrast background seemed to be too difficult, a variant of this phase was used in which stimuli included a noisy background of 50% contrast (Phase 16b). After rats reached criterion level, the original stimuli with 100%-contrast noisy background were used again (Phase 16c). In Phases 13–16c, rats were rewarded only for a correct choice, and all possible combinations of the stimuli were presented. In Phases 16–16c, the size remained the same for all targets and distractors. However, the background differed within a pair—that is, the target and distractor in one pair did not necessarily have the same background. 
Finally, generalization to upside-down stimuli (Phase 17) and contrast-inverted stimuli (Phase 18) was investigated. Rats were presented with modified and unmodified stimuli. The unmodified stimuli consisted of the stimuli as used in the training phases (Phases 1–5; see Figure 1), whereas the modified stimuli consisted of upside-down (Phase 17) or contrast-inverted (Phase 18) stimuli. All stimuli were always shown on a midgray background, and all possible combinations of targets and distractors were presented. Each pair consisted of either modified or unmodified stimuli. For both phases, half of the trials consisted of modified stimuli, whereas the other half consisted of unmodified stimuli. Rats were rewarded randomly in 80% of the trials. In the remaining trials, they received no reward. This reward scheme was chosen to motivate the rats. 
Bubbles paradigm
To determine the behavioral templates that the animals use to distinguish faces from nonface objects, the Bubbles paradigm (Gosselin & Schyns, 2001), as previously used in rats by Vermaercke and Op de Beeck (2012), was employed. 
In this part of the study, four rats were trained to discriminate three faces from three nonface objects. These stimuli were also used in the upside-down and contrast-inverted phases. Stimuli were presented on a midgray background with stimulus contrast reduced locally by the Bubbles mask. During the Bubbles training, rats were given a food pellet for correctly responding, but no correction trials were introduced when they gave an incorrect response. Table 3 gives an overview of the total number of sessions and trials per rat. 
Table 3
 
An overview of the number of sessions and trials per rat that were performed during the Bubbles part of this study. The last row indicates the average number (± standard error).
Data analysis of the Bubbles paradigm
Analysis of the Bubbles data was generally performed as described in previous work (Vermaercke & Op de Beeck, 2012). Bubbles templates were computed by dividing the sum of all masks used in the correct trials by the sum of all masks, as presented by Gosselin and Schyns (2001). To visualize the significant template areas used by the animals in the behavioral tasks, a permutation test was used based on the one discussed by Djurdjevic et al. (2018). This test works as follows: The rat responses were permuted 100 times while preserving the ratio between correct and incorrect responses. For each permuted data set, we computed the classification image in the same manner as the original classification image. This resulted in 100 (permuted) classification images. For each pixel, we calculated the empirical distribution and fitted it with a Gaussian distribution, as done by Djurdjevic et al. We then compared, pixel-wise, the value of the original classification image to this distribution. A pixel of the original classification image is considered significant if it falls in the right tail, beyond the 0.05 significance threshold. This thresholded behavioral template per rat, as well as for all rats combined, was then compared to the pixel-based template, derived from the differential image explained earlier and shown in Figure 3c. This pixel-based template was obtained by smoothing the differential image with a sigma of 300 × 0.2 and afterwards convolving this smoothed image with a kernel the size of the Bubbles. A threshold of 3 × 10^8 was used to retrieve the contour of this template. The exact value of the threshold is arbitrary, as long as it is low enough to result in one large region and high enough to avoid the inclusion of noise. While the exact value affects the exact numbers obtained for, say, overlap with behavioral templates, it has no impact on the conclusions drawn. All scripts were written in MATLAB (MathWorks). 
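The original scripts were written in MATLAB; as a rough illustration of the pipeline described above, a minimal Python sketch of the classification image and permutation test might look as follows (the function names and toy data are our own; 1.6449 is the one-sided z cutoff corresponding to the 0.05 significance level):

```python
import numpy as np

def classification_image(masks, correct):
    """Sum of the masks from correct trials divided by the sum of all
    masks (Gosselin & Schyns, 2001)."""
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return masks[correct].sum(axis=0) / masks.sum(axis=0)

def permutation_threshold(masks, correct, n_perm=100, seed=0):
    """Permute the responses (preserving the correct/incorrect ratio),
    fit a Gaussian to each pixel's null distribution, and keep pixels
    whose observed value lies in the right 5% tail."""
    rng = np.random.default_rng(seed)
    observed = classification_image(masks, correct)
    null = np.stack([classification_image(masks, rng.permutation(correct))
                     for _ in range(n_perm)])
    mu, sd = null.mean(axis=0), null.std(axis=0)
    z = (observed - mu) / np.maximum(sd, 1e-12)
    return observed, z > 1.6449  # one-sided z cutoff for alpha = 0.05

# Toy data: a simulated observer whose response depends only on pixel (2, 2).
rng = np.random.default_rng(1)
masks = rng.random((200, 5, 5))
correct = masks[:, 2, 2] > 0.5
ci, significant = permutation_threshold(masks, correct)
```

Because permuting the responses preserves the overall proportion correct, the null distribution at each pixel reflects only chance co-occurrence of mask values and responses.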
Results
Categorization and generalization performance
In the first part of this study, rats were tested on a face-categorization and generalization task. The stimuli consisted of images of faces (targets) and other three-dimensional objects (distractors). This task consisted of 18 phases; results are discussed per block of phases. 
Initial learning (Phases 1–5)
The first five phases consisted of the training phases. Rats were trained to master the face-categorization task. In the first phase, only one target and one distractor were shown. For each succeeding phase, another pair of stimuli was added, and every combination of old and new stimuli was presented. Figure 5 shows the learning curves for each rat for these first five phases, including both the new and old stimuli in each phase. Each line represents one phase. All rats were able to learn the task successfully, pass the criterion on all successive stages, and end with high performance in the last phase. There are a few further observations worth making. 
First, this figure highlights the individual differences of the rats. For example, Rat 6 seems to be a slow learner, as it needs twice as many sessions as the other rats for the first two phases to reach the criterion level to proceed to the next phase. Figure 6 shows the average learning curves of the first five phases, including both the new and old stimuli of each phase. This figure shows only the number of sessions that were performed by most rats—for example, Phase 1 runs only up to 11 sessions because few rats needed more. 
Figure 6
 
Learning curves of the first five phases, averaged over all rats. These learning curves include both the new and old stimuli, as each possible combination of old and new stimuli was presented. The shaded error bar corresponds to the standard error. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Second, the rats mastered the later phases much faster than the initial phases. This is already obvious in the first session of the different phases (see Figure 6, Table 4). For Phase 1, the animals start significantly lower than chance level, p < 0.01, 95% confidence interval (CI) [0.30, 0.46], possibly due to a bias of the shaping procedure. During the shaping procedure, only one stimulus was presented on one screen, while the second screen remained black. The stimulus screen was thus brighter, and the animals might have reacted to the brightness rather than to the stimulus itself. As the first target-distractor pair was the same for all animals—that is, all animals were presented with the same stimulus pair in Phase 1—this could have been caused by an idiosyncratic characteristic of this particular pair. For Phase 2, however, the performance on the first session is significantly higher than chance level, p < 0.0001, 95% CI [0.58, 0.68]. For later phases, the performance of all rats is also significantly higher than chance during the first sessions of Phase 3, p < 0.0001, 95% CI [0.80, 0.87], Phase 4, p < 0.0001, 95% CI [0.85, 0.91], and Phase 5, p < 0.0001, 95% CI [0.79, 0.86]. In particular in Phases 3 and 4, the interindividual variability was very low and all rats performed close to the criterion level of 80% already in the first session. The high performances in the later phases can be explained by the fact that old and new stimuli were presented in every possible combination. A large part of the presented stimuli therefore included old stimuli, which rats had already seen multiple times. The high performance could also partially be explained by the better-than-chance performance on new stimuli due to generalization. The occurrence of generalization was tested in the next phase. 
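The session-level statistics above are binomial tests on the pooled responses with 95% confidence intervals. A minimal sketch of such a test, assuming an exact two-sided binomial test and a normal-approximation (Wald) interval (the paper does not state which variants were used, and the counts below are hypothetical):

```python
from math import comb, sqrt

def binomial_test_vs_chance(n_correct, n_trials, p0=0.5):
    """Exact two-sided binomial test against chance level p0, plus a
    95% normal-approximation confidence interval on the proportion."""
    probs = [comb(n_trials, k) * p0**k * (1 - p0)**(n_trials - k)
             for k in range(n_trials + 1)]
    # Two-sided p value: sum outcomes no more likely than the observed one.
    p_value = sum(p for p in probs if p <= probs[n_correct] + 1e-12)
    phat = n_correct / n_trials
    half = 1.96 * sqrt(phat * (1 - phat) / n_trials)
    return p_value, (phat - half, phat + half)

# Hypothetical example: 74 correct responses out of 100 pooled trials.
p, ci = binomial_test_vs_chance(74, 100)
```

A below-chance result, as in the first session of Phase 1, shows up the same way: the observed proportion simply falls in the left tail and the confidence interval lies below 0.5.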
Table 4
 
Binomial test on the pooled response of all rats. The middle column specifies the p value for the first session.
First generalization test (Phase 6)
These results suggest that in the later phases, the rats had learned stimulus features that allowed them to generalize to previously unseen exemplars. We conducted the first generalization test to verify this explicitly. Five previously unseen targets and five previously unseen distractors were presented. All rats performed 100 trials, mostly in one session, although one rat (Rat 2) required two. The performance of all rats combined, based on the pooled response, is 73.92%, which is significantly higher than chance level, p < 0.0001, 95% CI [0.70, 0.77] (see Table 5). Figure 7 visualizes the performance matrices of all rats combined as well as for each rat individually. Interestingly, all individual rats except Rat 6, which performed at 60%, were significantly better than chance level. This suggests that rats were able to generalize to new, unseen stimuli. 
Table 5
 
The percentage correct during the first generalization test for each rat individually and for the pooled response of all rats, as well as p values and 95% confidence intervals.
Figure 7
 
(a) Matrix visualizing the performance of all rats combined on the first generalization test (Phase 6) per pair of stimuli. The average number of trials per matrix cell is 24. (b) The performance matrices of each individual rat.
Background changes (Phases 7–11)
In the next block of five phases (Phases 7–11), the complexity of the background of the stimuli changed gradually to investigate to what extent the performance of the animals would be based on local contrast cues at the edges of the faces or objects. Phases 7–9 included the old stimuli—that is, the stimuli used in Phases 1–5—in which the background changed from black to midgray in three steps. In Phases 10 and 11, the background of these stimuli consisted of a noisy Gaussian background with 50% and 100% contrast, respectively. Figure 8 shows the learning curves of each tested rat (one line per rat) in Phase 11, where stimuli were presented on the noisy background with 100% contrast. 
Figure 8
 
Learning curves of four rats for Phase 11. Each line indicates one rat. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
From Figure 8, it is clear that all four rats were able to master the face/nonface categorization task when the stimuli included a noisy background. 
Second generalization test (Phase 12)
The second generalization test (Phase 12) included the same stimuli as the first generalization test, but with a noisy background. Figure 9 shows the performance matrices of all rats. The performance of all rats combined, based on their pooled response, is 71.82%, which—similar to the first generalization test—is significantly higher than chance level, p < 0.0001, 95% CI [0.67, 0.76] (see Table 6). Looking at the performances of each rat individually, we see that all rats except Rat 4, which performed at 57%, performed significantly above chance level. This suggests that rats were able to generalize even when the background was changed to a noisy background. 
Figure 9
 
(a) Matrix visualizing the performance of all rats combined on Phase 12, per pair of stimuli. The average number of trials per matrix cell is 16. (b) The performance matrices of all individual rats. Note that only four rats (2, 3, 4, and 5) were used in this phase.
Table 6
 
The percentage correct during the second generalization test (Phase 12) of each rat individually and for the pooled response of all rats, as well as p values and 95% confidence intervals.
Size and position invariance (Phases 13–16)
In the next block of phases, several modifications to the stimuli were performed. First, the necks were removed from the faces (Phase 13). Then size invariance was tested (Phases 14 and 15), and finally position invariance was investigated (Phase 16). At first, only one rat was tested in Phase 16. However, its performance remained close to chance level and did not reach criterion. This phase was therefore simplified to an easier version (Phase 16b), in which two rats participated until their performance reached the criterion level. These rats then proceeded to the original version (Phase 16c). Figure 10 shows the learning curves of all rats that participated in this block of phases. 
Figure 10
 
Learning curves of four rats for Phases 13–16c. Each line indicates one phase. The gray data point in the plot of Rat 3 indicates an outlier. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
It is obvious that in these phases we are pushing the limits of what the animals can do. The reduction in size toward Phase 15 reduced performance substantially. Possibly this is due to the visual acuity of the rats, rather than a problem with handling size variation at a more cognitive level. The position variation on top of the size reduction was very difficult for the rats to handle. Neither of the two rats tested in Phase 16c came close to the criterion of 80% performance, even over multiple days. Nevertheless, given the large number of trials (Rat 3: 759 divided over eight sessions; Rat 5: 751 divided over eight sessions), the performance averaged across all sessions (Rat 3: 69.04%; Rat 5: 65.91%) is significantly higher than chance level in each of the animals—Rat 3: p < 0.0001, 95% CI [0.66, 0.72]; Rat 5: p < 0.0001, 95% CI [0.62, 0.69]. 
Upside down and contrast inverted (Phases 17 and 18)
Finally, the last two phases investigated generalization to upside-down stimuli (Phase 17) and contrast-inverted stimuli (Phase 18). Table 7 gives an overview of the performances of each rat individually as well as on average. 
Table 7
 
An overview of performance in Phases 17 (upside down) and 18 (contrast inverted). The last row indicates the average performance over all rats (± standard error).
Averaged over all rats, there is a significant drop in performance for the modified stimuli in both phases. However, performance remains significantly above chance level (see Tables 8 and 9). Looking at the individual performances of the animals for the upside-down test, we can see that three out of four rats show significantly lower performance on the upside-down stimuli compared to the upright stimuli (see Table 8). Rat 4, on the other hand, shows no significant change in performance, p = 0.10, 95% CI [0.74, 0.76] for Phase 17. 
Table 8
 
An overview of the p values and 95% confidence intervals (CIs) of the binomial test for Phase 17 (upside down). The last row indicates the results of the binomial test on the pooled response of all rats.
Table 9
 
An overview of the p values and 95% confidence intervals (CIs) of the binomial test for Phase 18 (contrast inverted). The last row indicates the results of the binomial test on the pooled response of all rats.
For contrast inversion, the situation is very similar, with a significant drop in three out of four rats and no significant difference—even a trend in the opposite direction—for Rat 2. 
Despite the drops in performance, overall the rats were able to perform significantly better than chance with both upside-down and contrast-inverted stimuli. Figure 11 visualizes performance on the old and new stimuli averaged over all rats for both phases. These results indicate that rats are able to generalize to both upside-down and contrast-inverted stimuli. 
Figure 11
 
Bar plots of the average performance of all rats on Phases 17 (left) and 18 (right). The error bars indicate the standard error across rats. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Bubbles paradigm
Behavioral templates
From the first part of this study, it became clear that rats are capable of face categorization. The animals were also able to generalize to new, unseen stimuli. In the second part of this study, the Bubbles paradigm was employed to determine the behavioral templates the animals use to distinguish between faces and nonface objects. 
In the second part of this study, only four rats were tested. Rat 5 died between the first and second parts of the study, and Rat 1 was not included because it was the slowest animal of the remaining five and not more than four rats could be tested in parallel. As shown in the Materials and methods section, this experiment included a very large number of sessions and trials overall and per animal (see Table 3). 
Figure 12 shows the behavioral thresholded template (left), averaged over all stimuli and rats, which was retrieved after performing the permutation test with 100 permutations. The red area corresponds to the significant area that rats used to make their decision. From this area, it is clear that the animals, on average, use the upper half and left side of the stimuli to distinguish between the faces and nonface objects. This area overlaps with a substantial part of the area of the display in which target and distractor stimuli differ and that would be used by the pixel-based template (Figure 12, white contour). To quantify this overlap, we calculated two percentages. First we calculated how much of the pixel-based template is within the behavioral template of the rats. The overlapping region corresponds to 41.83% of the pixel-based template (see Table 10). Second, we calculated how much of the behavioral template is within the pixel-based template. Here we find an overlap of 86.93%, suggesting that, on average, the behavioral template of the rats falls largely within the pixel-based template. 
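These two overlap percentages are simply the intersection of the two binary templates normalized by each template's own area; a minimal sketch with toy templates (our own, purely for illustration):

```python
import numpy as np

def overlap_percentages(template_a, template_b):
    """Return (% of A covered by B, % of B covered by A) for two binary templates."""
    a = np.asarray(template_a, dtype=bool)
    b = np.asarray(template_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 100.0 * inter / a.sum(), 100.0 * inter / b.sum()

# Toy templates: an 8-pixel "behavioral" region and a 6-pixel "pixel-based" region.
behavioral = np.zeros((4, 4), dtype=bool)
behavioral[:2, :] = True
pixel_based = np.zeros((4, 4), dtype=bool)
pixel_based[:3, :2] = True
pct_of_behavioral, pct_of_pixel_based = overlap_percentages(behavioral, pixel_based)
# The intersection is 4 pixels: 50.0% of the behavioral template
# and about 66.7% of the pixel-based template.
```

Because each percentage is normalized by a different area, the two numbers differ whenever the templates differ in size, as in the 41.83% versus 86.93% reported above.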
Figure 12
 
The behavioral thresholded template, averaged over all rats and stimuli (red). The red contour visualizes the significant area on which all rats, on average, focus to distinguish between a face and a nonface object. The white contour indicates the contour of the pixel-based template. The contours are shown on top of a combined image of all stimuli.
Table 10
 
Percentages of overlap: how much the pixel-based template (PBT) overlaps with the template of the rat and how much the template of the rat overlaps with the PBT. The last row indicates the average overlap for all rats.
There are some individual differences between the rats, as can be seen in Figure 13, which shows the templates of each rat individually together with the pixel-based template. Rat 2, and to some extent Rat 4, also use the bottom half of the stimuli, whereas the other two animals mainly focus on the top half. The extent of asymmetry between left and right also differs between animals. These individual differences also become visible when looking at the percentage overlap with the pixel-based template (see Table 10). For example, Rats 2 and 4 have the highest overlap with this template (50.04% and 50.12%, respectively). This is also visible in Figure 13, where the overlap between the behavioral templates and at least part of the pixel-based template is clear in each animal. 
Figure 13
 
The white contour indicates the pixel-based template. The turquoise, purple, orange, and green contours show the templates of each individual rat.
The next step was to compare the templates of the individual rats with each other to further explore the individual differences. Table 11 provides the percentage overlap between the rats. These percentages were calculated in the same way as previously explained. From this table it is clear that the overlapping area between Rats 3 and 6 contains a large part of the template of Rat 3 (78.72%), whereas the overlapping area between Rats 2 and 4 contains a large part of the template of Rat 2 (64.46%). 
Table 11
 
Percentages of overlap between rats.
Correlations between behavioral and pixel-based templates
In a second analysis, we further compared the unthresholded behavioral templates between animals by calculating correlation matrices between each pair of rats (see Figure 14). The diagonal indicates the correlation per stimulus pair, consisting of the presented target and distractor, between two rats. If the diagonal were clearly visible, this would suggest that the rats use a more similar template when the same stimulus pair is involved compared to between-pairs correlations. However, in none of the matrices is a diagonal clearly visible. An unpaired t test on the diagonal elements versus the nondiagonal elements largely supports this observation (see Table 12). For the correlations between Rats 3 and 4 and between Rats 3 and 6, however, the p values are significant—respectively, p < 0.05, t(79) = 2.27, and p < 0.0001, t(79) = 5.21—indicating that for these pairs of animals the diagonal elements differ significantly from the nondiagonal elements. This suggests that these animals use a similar template for each stimulus pair. 
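This analysis can be sketched as follows (with synthetic templates of our own; note that a 9 × 9 matrix has 9 diagonal and 72 off-diagonal cells, consistent with the df = 79 reported above):

```python
import numpy as np

def pairwise_template_correlations(templates_a, templates_b):
    """Pixel-wise Pearson correlation between every per-pair template of
    rat A and every per-pair template of rat B. Returns an
    n_pairs x n_pairs matrix whose diagonal holds the
    same-stimulus-pair correlations."""
    n = len(templates_a)
    corr = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            corr[i, j] = np.corrcoef(templates_a[i].ravel(),
                                     templates_b[j].ravel())[0, 1]
    return corr

def diagonal_vs_offdiagonal_t(corr):
    """Unpaired pooled-variance t statistic comparing diagonal cells to
    off-diagonal cells of the correlation matrix."""
    diag = np.diag(corr)
    off = corr[~np.eye(len(corr), dtype=bool)]
    n1, n2 = len(diag), len(off)
    sp2 = ((n1 - 1) * diag.var(ddof=1) + (n2 - 1) * off.var(ddof=1)) / (n1 + n2 - 2)
    return (diag.mean() - off.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# Synthetic example: two "rats" sharing a per-pair component, so the
# diagonal should stand out.
rng = np.random.default_rng(2)
base = [rng.standard_normal((8, 8)) for _ in range(9)]
rat_a = [b + 0.5 * rng.standard_normal((8, 8)) for b in base]
rat_b = [b + 0.5 * rng.standard_normal((8, 8)) for b in base]
corr = pairwise_template_correlations(rat_a, rat_b)
t_stat = diagonal_vs_offdiagonal_t(corr)
```

The same machinery applies to the within-rat consistency matrices and to the comparisons against the pixel-based template in the following analyses.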
Figure 14
 
Correlation matrices for the between-rats analysis. First, a behavioral template for every pair was obtained from every rat—that is, nine templates per rat. Next, we correlated each template pixel-wise between rats; each correlation then corresponds to one cell in these matrices. Top row (from left): correlation between Rats 2 and 3, Rats 2 and 4, and Rats 2 and 6. Bottom row (from left): correlation between Rats 3 and 4, Rats 3 and 6, and Rats 4 and 6. The color bar indicates the Pearson correlation coefficient. From these matrices we can see that Rats 3 and 4, as well as Rats 3 and 6, use a similar template for each pair, as indicated by their high correlations. An unpaired t test confirms this (see Table 12). The average correlations of these matrices can be found in Table 13.
Table 12
 
The results of an unpaired t test on the diagonal and nondiagonal values of the correlation matrices in Figure 14. The significance of Rats 3 and 4, as well as Rats 3 and 6, indicates that these rats use similar templates.
Overall, the correlations are relatively low (see Table 13). Given that we have already shown a good correspondence between behavioral templates averaged across all stimulus pairs (see previous section), the low correlations might be explained by the fact that the number of trials on which each behavioral template is based is nine times lower. However, the low number of trials does not affect the reliability of this second analysis, as each rat shows high consistency in its behavioral templates for each pair. Figure 15 shows the correlations within each rat between each pair. As can be seen, Rat 3 shows overall very high correlations between each pair, whereas the other three rats have somewhat lower—although still, on average, positive—correlations. To further confirm this, we compared the average correlation of the off-diagonal elements of the between-rats matrices (Figure 14) with that of the off-diagonal elements of the within-rat matrices (Figure 15). The average correlation of the off-diagonal elements of the between-rats matrices (ρ = 0.06) is lower than that of the off-diagonal elements of the within-rat matrices (ρ = 0.13). An unpaired t test indicates that the former—the average of the between-rats correlations—is significantly lower than the within-rat correlations, p < 0.0001, 95% CI [0.04, 0.10], t(574) = 4.93. This highlights the individual differences between the rats, which were already visible in the templates in Figure 13. 
Table 13
 
Average Pearson correlation coefficients between performance for each pair of rats, as well as the average correlation of the diagonal elements. Note the high average correlation between the templates of Rats 3 and 6 (which can also be seen in the correlation matrices in Figure 14).
Figure 15
 
Correlation matrices of the within-rat analysis. Each matrix shows the consistency of the templates within a rat between all pairs. For each rat, we pixel-wise correlated the template of one pair to every other pair. From these matrices we can clearly see that Rat 3 has overall high correlations, suggesting that this rat has a high consistency in its templates.
Additionally, we calculated the correlation between the behavioral template of each rat and the pixel-based template, again separately per stimulus pair and per rat. To find out whether the behavioral templates varied between image pairs, we also calculated the correlation between data from different image pairs. The correlation matrices can be found in Figure 16. All rats show positive average correlations (see Table 14 for an overview). 
Figure 16
 
Correlation matrices of each rat between its behavioral template on each stimulus pair and the pixel-based template of each pair. The numbers on the axes indicate the stimulus pair number. The color bar indicates the Pearson correlation coefficient. We can clearly see that Rat 3 has overall high correlations with the pixel-based template. Table 14 provides an overview of the average correlations of these matrices.
Table 14
 
Average correlation coefficients between the templates of the rats and the pixel-based template, as well as the average correlation of the diagonal elements. Note that on average, all rats show a positive correlation with the pixel-based template. More specifically, Rat 3 shows, on average, the highest correlation to the pixel-based template, which is also visible in Figure 16.
The diagonal of these matrices corresponds to the correlation between the template of the rat and the pixel-based template of the same stimulus pair. In these diagonal cells, the average correlation was slightly higher. We would indeed expect that the correlations on the diagonal would be higher if animals used templates that differ between image pairs in a way that relates to where images differ. If such an effect were consistent, then we would expect to see a highlighted diagonal in the matrices of Figure 16, which is not the case. This can also be seen in the results of the unpaired t test on the correlation values of the diagonal and nondiagonal values (see Table 15), as none of the p values are significant. 
Table 15
 
Results of an unpaired t test on the diagonal and nondiagonal values of the correlation matrices in Figure 16. Due to the lack of significant p values, we assume that the rats do not necessarily use templates that differ between image pairs in a way that relates to where images differ. This can also be seen in Figure 16, as there is no clearly visible diagonal.
Discussion
In this study, we first investigated face categorization and generalization in rats. The results reveal that the animals can learn such a categorization task and are able to generalize to new, unseen stimuli. Furthermore, modifying the background of the stimuli resulted in decreased performance. However, the rats can still learn to perform well above chance level, even when asked to generalize to new stimuli. In the second part, we focused on determining the behavioral templates that the rats use during categorization, by means of the Bubbles paradigm (Gosselin & Schyns, 2001). Here we found that their behavioral templates show a clear overlap with the pixel-based template. 
Face categorization
In a first block of five phases, rats were trained to distinguish faces from objects. We found that they were indeed capable of face categorization and, as expected, mastered the later phases much faster than the initial phases. This block was followed by a first generalization test, where rats were presented with five new targets and five new distractors. This first generalization test was successful—that is, the rats were able to generalize to new exemplar stimuli, which is a hallmark of categorization. 
The next set of phases was designed to investigate whether generalization in rats was driven by local contrast cues at the edges of the faces or objects. To this end, the background of the stimuli was modified in several ways. The rats were also able to learn the face-categorization task under these more challenging conditions, again including generalization to untrained exemplars. This suggests that rats can handle a large degree of background clutter. 
After the second generalization test, the rats were challenged simultaneously on several fronts: Images were presented on a noisy background, decreased in size, and shifted in position with no overlap between positions. These modifications resulted in a decrease in performance. The lowest performance was found in the position-invariance phase (Phase 16), dropping to approximately 60%. Nevertheless, this performance is still well above chance level. 
The last block consisted of two phases. First, stimuli were presented upside down. This modification resulted in a significant decrease in performance in our animals. These results can be explained by the research of Jiang et al. (2006), which suggests that both face classification and object classification can be predicted by a simple-to-complex architecture. They present a shape-based model that is able to explain the face-inversion effect because it contains internal templates for upright faces only. The same might be the case in our rats, given that they were presented with only upright faces during training. 
Second, stimuli were presented with reversed contrast, also resulting in a significant decrease in performance. This is in line with the primate research of Maurer et al. (2002), Nederhouser et al. (2007), and Ohayon et al. (2012), which found that contrast reversal significantly impairs performance and that this effect is unique to faces. However, the performance of our animals was still significantly above chance level, suggesting that they were able to generalize to these modified stimuli. 
An important difference between our design and those of other studies investigating face inversion and contrast polarity, such as Ohayon et al. (2012), is that other studies present only one stimulus at a time, whereas we presented both the target and the distractor at the same time on two different screens in the touch-screen setup. This choice was made because we trained the animals on a face-discrimination task rather than a face-recognition task. Zoccolan (2015) states in his review that this would not be a problem if the stimuli were altered independently. One drawback of our study is that this was not the case for all phases. In our last two phases, the stimuli were always transformed in the same manner—that is, they were shown either upside down (Phase 17) or in reversed contrast (Phase 18)—or they were shown with no transformation. In the phase where we tested position invariance, however, the stimuli were independently transformed, as the target and distractor were not necessarily presented in the same position. 
Even though rodents are not the animal model of choice in vision research, partly due to their lower visual acuity (Prusky et al., 2002) and differences in their visual system compared to humans (Vermaercke et al., 2014; Bossens & Op de Beeck, 2016), our findings suggest that rodents are capable of face categorization, an ability that is also present in humans (Jacques & Rossion, 2006). This study therefore provides an argument in favor of using rodents in research on complex pattern vision, as their visual system supports relatively complex categorization tasks. 
Visual strategies
The second step in our study was to unravel the behavioral templates of the rats by using the Bubbles paradigm (Gosselin & Schyns, 2001), as has been done in previous research (Vermaercke & Op de Beeck, 2012; Alemi-Neissi et al., 2013; Rosselli et al., 2015; Djurdjevic et al., 2018). When masking the stimuli with Bubbles, we found that, on average and individually, rat behavioral templates align well with the region of the stimuli that contains the most information according to a pixel-based template, as most of the behavioral-template area falls within this template. This part of the study therefore reassures us that the animals use the parts of the image that one would expect to be used in a face-detection task. The animals, individually or as a group, might in principle have relied on only a small and uninformative part of the images, such as the top of the hair; however, the average and individual templates we found suggest that this is not the case. 
In the behavioral templates of our animals, there does not seem to be a clear preference for any single feature of the targets, but there was a general bias toward the upper left quadrant of the stimuli. The former finding is not in line with the monkey research of Nielsen et al. (2006, 2008), who found that monkeys used specific features in their discrimination task. Rats therefore use a large part of the visual input to make their discrimination, similar to the human participants of Nielsen et al. (2008). The latter finding is in line with the studies of Alemi-Neissi et al. (2013), Rosselli et al. (2015), and Djurdjevic et al. (2018), which also found that rats show a bias toward the upper part of the stimuli. One study, however, reports the contrary, namely that rats focus on the bottom part of the stimuli (Minini & Jeffery, 2006). One possible explanation is the higher complexity of our stimuli compared to Minini and Jeffery's: Their stimuli consisted of simple shapes such as a triangle or a square, whereas we used faces and computer-generated three-dimensional objects. Another study that adopted the Bubbles paradigm also used (white) triangles or squares on a black background (Vermaercke & Op de Beeck, 2012). Those researchers found that, in line with Minini and Jeffery's findings, rats used the lower part of the screen to make their decision in the discrimination task. Interestingly, when this part was masked by Bubbles, the rats switched to a flexible strategy. Alemi-Neissi et al. also used the Bubbles paradigm to investigate the perceptual strategy of rats in an invariant recognition task. Their stimuli resemble ours the most, as they also used gray-scale three-dimensional objects on a black background. They concluded that rats use an advanced strategy, which appears to be shape based and transformation invariant. 
The findings from the first part of our experiment also provide evidence for this transformation invariance, as the performance of our animals did not significantly drop when the stimuli were modified in several ways. 
Two main concerns arise, however, with the Bubbles that were used in our study. First, it is important to check whether the individual facial features of our stimuli are large enough to be resolved by the animals. To this end, we calculated the pair-wise distances between features of the faces, namely the left eye, right eye, mouth, and nose. These distances range from 0.94 to 2.87 cm (see Supplementary Table S2). Furthermore, the features range in size from 0.4 cm (the vertical extent of the eyes) to 1.9 cm (the horizontal extent of the mouth; see Supplementary Table S3). A sinusoidal grating with a period occupying 1 cm would have a spatial frequency of 1 c/°, the reported visual acuity limit of rats (Prusky et al., 2002), at a distance of 57.29 cm. We do not know at what distance from the touch screen the rats make their decision, but they can easily come closer than 10 cm to the screens. If we take 10 cm as an estimate of this distance, then 1 cm corresponds to 5.72° of visual angle. For this reason, we expect that the rats have no problem resolving the individual features of the faces. 
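The visual-angle arithmetic above can be checked with the standard subtended-angle formula; the 10-cm viewing distance is, as in the text, only an assumed value:

```python
# Check of the visual-angle values quoted in the text.
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Angle (in degrees) subtended by an object of a given size
    viewed at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# At 57.29 cm, a 1-cm grating period subtends ~1 degree, i.e. 1 c/deg,
# the acuity limit reported by Prusky et al. (2002).
print(visual_angle_deg(1.0, 57.29))   # ~1.0
# At an assumed 10-cm viewing distance, 1 cm subtends ~5.72 degrees.
print(visual_angle_deg(1.0, 10.0))    # ~5.72
```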
Second, we checked whether the Bubbles size was small enough for these features to be resolved in the behavioral template. The Bubbles size in this study is similar to that of Vermaercke and Op de Beeck (2012). Figure 4 presents some example stimuli of the Bubbles phase in which only one facial feature is masked by a Bubble—for example, the right eye or the mouth—while all other features are still visible. This ensures that we can investigate which features are important for the rats to make their discrimination. 
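A Bubbles mask of this kind can be sketched as a sum of Gaussian apertures over the image (Gosselin & Schyns, 2001). The image size, bubble count, and Gaussian width below are illustrative assumptions, not the parameters of the actual experiment:

```python
# Minimal sketch of a Bubbles mask: the stimulus is viewed through a few
# Gaussian apertures at random locations. All parameter values here are
# illustrative, not those used in the experiment.
import numpy as np

def bubbles_mask(height, width, n_bubbles, sigma, rng):
    ys, xs = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width))
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, height), rng.integers(0, width)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)  # per-pixel transparency in [0, 1]

rng = np.random.default_rng(1)
mask = bubbles_mask(128, 128, n_bubbles=5, sigma=10, rng=rng)
# Applying it to a stimulus on a gray background:
# masked = gray + mask * (image - gray)
```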
Neural representations
It is important to consider which neural representations might underlie this face categorization and generalization. Research has shown that there might be a homology between the primate and rodent ventral and dorsal visual streams (Niell, 2011; Wang, Sporns, & Burkhalter, 2012), and researchers have picked up on this concept to further investigate the differences and commonalities between primates and rodents. Vermaercke et al. (2014) found that both the primate and the rat ventral visual stream show increased tolerance for stimulus position. These findings were further extended by Tafazoli et al. (2017) and Matteucci, Marotti, Riggi, Rosselli, and Zoccolan (2019). Also, Vinken, Van den Bergh, Vermaercke, and Op de Beeck (2016) found that downstream areas in this rat ventral/lateral visual stream are increasingly sensitive to the presence of image structure, even though these areas lack the categorical representations and higher responses to natural stimuli found in the primate ventral visual stream. That study therefore suggests that the rat ventral visual pathway might not behave in the same manner as the primate ventral visual stream. Nevertheless, it is a sensible hypothesis that learning a complex categorization task such as the one used in the present study would rely upon this ventral/lateral pathway (Tafazoli et al., 2017; Matteucci et al., 2019). 
Conclusions
In conclusion, we found that rats are capable of face categorization as well as generalization to other category exemplars. Furthermore, our animals showed, to some degree, tolerance for transformations such as adding background noise, turning stimuli upside down, and reversing the contrast polarity. The behavioral templates of the rats, on average and individually, overlap partly with a pixel-based template. Future research should focus on the neural underpinnings of face categorization in rodents to further investigate the (possible) ventral visual stream of rodents and compare it with the research on the primate ventral visual stream. 
Acknowledgments
The data have been made publicly available via the Open Science Framework and can be accessed at https://osf.io/w8j7u/. This work was funded by the Research Foundation Flanders (Fonds voor Wetenschappelijk Onderzoek Vlaanderen) Projects G.0A39.13, G.0882.16, and G.0D78.15, and EOS (Excellence of Science) Grant G0E8718N; Hercules Grant AKUL/13/06; and KU Leuven Research Council Project C14/16/031. We thank Joke Loyens, Shauni Nuyts, and Kim Ceulemans for their help in performing the experiments. 
Commercial relationships: none. 
Corresponding author: Anna Elisabeth Schnell. 
Address: Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium. 
References
Alemi-Neissi, A., Rosselli, F. B., & Zoccolan, D. (2013). Multifeatural shape processing in rats engaged in invariant visual object recognition. The Journal of Neuroscience, 33 (14), 5939–5956, https://doi.org/10.1523/jneurosci.3629-12.2013.
Berman, N., & Cynader, M. (1972). Comparison of receptive-field organization of the superior colliculus in Siamese and normal cats. The Journal of Physiology, 224 (2), 363–389, https://doi.org/10.1113/jphysiol.1972.sp009900.
Blake, R. (1993). Cats perceive biological motion. Psychological Science, 4 (1), 54–57, https://doi.org/10.1111/j.1467-9280.1993.tb00557.x.
Blake, R., & Holopigian, K. (1985). Orientation selectivity in cats and humans assessed by masking. Vision Research, 25 (10), 1459–1467, https://doi.org/10.1016/0042-6989(85)90224-X.
Bossens, C., & Op de Beeck, H. P. (2016). Linear and non-linear visual feature learning in rat and humans. Frontiers in Behavioral Neuroscience, 10, 235, https://doi.org/10.3389/fnbeh.2016.00235.
Chino, Y. M., Kaas, J. H., Smith, E. L.,III, Langston, A. L., & Cheng, H. (1992). Rapid reorganization of cortical maps in adult cats following restricted deafferentation in retina. Vision Research, 32 (5), 789–796, https://doi.org/10.1016/0042-6989(92)90021-A.
De Keyser, R., Bossens, C., Kubilius, J., & Op de Beeck, H. P. (2015). Cue-invariant shape recognition in rats as tested with second-order contours. Journal of Vision, 15 (15): 14, 1–15, https://doi.org/10.1167/15.15.14. [PubMed] [Article]
Djurdjevic, V., Ansuini, A., Bertolini, D., Macke, J. H., & Zoccolan, D. (2018). Accuracy of rats in discriminating visual objects is explained by the complexity of their perceptual strategy. Current Biology, 28 (7), 1005–1015, https://doi.org/10.1016/j.cub.2018.02.037.
Fabre-Thorpe, M., Richard, G., & Thorpe, S. J. (1998). Rapid categorization of natural images by rhesus monkeys. NeuroReport, 9 (2), 303–308.
Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1 (1), 1–47, https://doi.org/10.1093/cercor/1.1.1.
Galper, R. E. (1970). Recognition of faces in photographic negative. Psychonomic Science, 19 (4), 207–208.
Gibson, B. M., Lazareva, O. F., Gosselin, F., Schyns, P. G., & Wasserman, E. A. (2007). Nonaccidental properties underlie shape recognition in mammalian and nonmammalian vision. Current Biology, 17 (4), 336–340.
Gibson, B. M., Wasserman, E. A., Gosselin, F., & Schyns, P. G. (2005). Applying bubbles to localize features that control pigeons' visual discrimination behavior. Journal of Experimental Psychology: Animal Behavior Processes, 31 (3), 376.
Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41 (17), 2261–2271, https://doi.org/10.1016/S0042-6989(01)00097-9.
Jacques, C., & Rossion, B. (2006). The speed of individual face categorization. Psychological Science, 17 (6), 485–492, https://doi.org/10.1111/j.1467-9280.2006.01733.x.
Jiang, X., Rosen, E., Zeffiro, T., VanMeter, J., Blanz, V., & Riesenhuber, M. (2006). Evaluation of a shape-based model of human face discrimination using FMRI and behavioral techniques. Neuron, 50 (1), 159–172, https://doi.org/10.1016/j.neuron.2006.03.012.
Kiani, R., Esteky, H., Mirpour, K., & Tanaka, K. (2007). Object category structure in response patterns of neuronal population in monkey inferior temporal cortex. Journal of Neurophysiology, 97 (6), 4296–4309, https://doi.org/10.1152/jn.00024.2007.
Matteucci, G., Marotti, R. B., Riggi, M., Rosselli, F. B., & Zoccolan, D. (2019). Nonlinear processing of shape information in rat lateral extrastriate cortex. Journal of Neuroscience, 39 (9), 1649–1670, https://doi.org/10.1523/JNEUROSCI.1938-18.2018.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6 (6), 255–260, https://doi.org/10.1016/S1364-6613(02)01903-4.
Minini, L., & Jeffery, K. J. (2006). Do rats use shape to solve “shape discriminations”? Learning & Memory, 13 (3), 287–297, https://doi.org/10.1101/lm.84406.
Mishkin, M., & Ungerleider, L. G. (1982). Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behavioural Brain Research, 6 (1), 57–77, https://doi.org/10.1016/0166-4328(82)90081-X.
Nederhouser, M., Yue, X., Mangini, M. C., & Biederman, I. (2007). The deleterious effect of contrast reversal on recognition is unique to faces, not objects. Vision Research, 47 (16), 2134–2142, https://doi.org/10.1016/j.visres.2007.04.007.
Niell, C. M. (2011). Exploring the next frontier of mouse vision. Neuron, 72 (6), 889–892, https://doi.org/10.1016/j.neuron.2011.12.011.
Nielsen, K. J., Logothetis, N. K., & Rainer, G. (2006). Discrimination strategies of humans and rhesus monkeys for complex visual displays. Current Biology, 16 (8), 814–820, https://doi.org/10.1016/j.cub.2006.03.027.
Nielsen, K. J., Logothetis, N. K., & Rainer, G. (2008). Object features used by humans and monkeys to identify rotated shapes. Journal of Vision, 8 (2): 9, 1–15, https://doi.org/10.1167/8.2.9. [PubMed] [Article]
Ohayon, S., Freiwald, W. A., & Tsao, D. Y. (2012). What makes a cell face selective? The importance of contrast. Neuron, 74 (3), 567–581, https://doi.org/10.1016/j.neuron.2012.03.024.
Orban, G. A. (2008). Higher order visual processing in macaque extrastriate cortex. Physiological Reviews, 88 (1), 59–89, https://doi.org/10.1152/physrev.00008.2007.
Prusky, G. T., Harker, K. T., Douglas, R. M., & Whishaw, I. Q. (2002). Variation in visual acuity within pigmented, and between pigmented and albino rat strains. Behavioural Brain Research, 136 (2), 339–348, https://doi.org/10.1016/S0166-4328(02)00126-2.
Rainer, G., Augath, M., Trinath, T., & Logothetis, N. K. (2002). The effect of image scrambling on visual cortical BOLD activity in the anesthetized monkey. NeuroImage, 16 (3), 607–616, https://doi.org/10.1006/nimg.2002.1086.
Rosselli, F. B., Alemi, A., Ansuini, A., & Zoccolan, D. (2015). Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats. Frontiers in Neural Circuits, 9: 10, https://doi.org/10.3389/fncir.2015.00010.
Simpson, E. L., & Gaffan, E. A. (1999). Scene and object vision in rats. The Quarterly Journal of Experimental Psychology: Section B, 52 (1), 1–29, https://doi.org/10.1080/713932691.
Stryker, M., & Blakemore, C. (1972). Saccadic and disjunctive eye movements in cats. Vision Research, 12 (12), 2005–2013, https://doi.org/10.1016/0042-6989(72)90054-5.
Subramaniam, S., & Biederman, I. (1997). Does contrast reversal affect object identification? Investigative Ophthalmology & Visual Science, 38 (4), 4638–4638.
Tafazoli, S., Di Filippo, A., & Zoccolan, D. (2012). Transformation-tolerant object recognition in rats revealed by visual priming. The Journal of Neuroscience, 32 (1), 21–34, https://doi.org/10.1523/jneurosci.3932-11.2012.
Tafazoli, S., Safaai, H., De Franceschi, G., Rosselli, F. B., Vanzella, W., Riggi, M.,… Zoccolan, D. (2017). Emergence of transformation-tolerant representations of visual objects in rat lateral extrastriate cortex. eLife, 6, e22794, https://doi.org/10.7554/eLife.22794.
Tsao, D. Y., Freiwald, W. A., Tootell, R. B., & Livingstone, M. S. (2006, February 3). A cortical region consisting entirely of face-selective cells. Science, 311 (5761), 670–674, https://doi.org/10.1126/science.1119983.
Vermaercke, B., Cop, E., Willems, S., D'Hooge, R., & Op de Beeck, H. P. (2014). More complex brains are not always better: Rats outperform humans in implicit category-based generalization by implementing a similarity-based strategy. Psychonomic Bulletin & Review, 21 (4), 1080–1086, https://doi.org/10.3758/s13423-013-0579-9.
Vermaercke, B., & Op de Beeck, H. P. (2012). A multivariate approach reveals the behavioral templates underlying visual discrimination in rats. Current Biology, 22 (1), 50–55, https://doi.org/10.1016/j.cub.2011.11.041.
Vinken, K., Van den Bergh, G., Vermaercke, B., & Op de Beeck, H. P. (2016). Neural representations of natural and scrambled movies progressively change from rat striate to temporal cortex. Cerebral Cortex, 26 (7), 3310–3322, https://doi.org/10.1093/cercor/bhw111.
Vinken, K., Vermaercke, B., & Op de Beeck, H. P. (2014). Visual categorization of natural movies by rats. The Journal of Neuroscience, 34 (32), 10645–10658, https://doi.org/10.1523/jneurosci.3663-13.2014.
Vogels, R. (1999a). Categorization of complex visual images by rhesus monkeys. Part 1: Behavioural study. European Journal of Neuroscience, 11 (4), 1223–1238, https://doi.org/10.1046/j.1460-9568.1999.00531.x.
Vogels, R. (1999b). Categorization of complex visual images by rhesus monkeys. Part 2: Single-cell study. European Journal of Neuroscience, 11 (4), 1239–1255, https://doi.org/10.1046/j.1460-9568.1999.00531.x.
Vuong, Q. C., Peissig, J. J., Harrison, M. C., & Tarr, M. J. (2005). The role of surface pigmentation for recognition revealed by contrast reversal in faces and Greebles. Vision Research, 45 (10), 1213–1223, https://doi.org/10.1016/j.visres.2004.11.015.
Wang, Q., Sporns, O., & Burkhalter, A. (2012). Network analysis of corticocortical connections reveals ventral and dorsal processing streams in mouse visual cortex. The Journal of Neuroscience, 32 (13), 4386–4399, https://doi.org/10.1523/jneurosci.6063-11.2012.
Zoccolan, D. (2015). Invariant visual object recognition and shape processing in rats. Behavioural Brain Research, 285, 10–33, https://doi.org/10.1016/j.bbr.2014.12.053.
Zoccolan, D., Oertelt, N., DiCarlo, J. J., & Cox, D. D. (2009). A rodent model for the study of invariant visual object recognition. Proceedings of the National Academy of Sciences, USA, 106 (21), 8748–8753, https://doi.org/10.1073/pnas.0811583106.
Figure 1
 
An overview of all stimuli that were used in the face-categorization part of this study. The first five faces and objects were used for training. The second set of five faces and objects was used for both generalization tests.
Figure 2
 
Examples of stimuli used in each phase. Note that only one face and one object per phase are shown here. These were chosen randomly, and their only purpose is to give an example of the stimuli. For a full set, see Figure 1.
Figure 3
 
(a) The set of stimuli that were used for the Bubbles part of this study. Note that only three faces and three objects were used. All stimuli are shown on a gray background. (b) Example of stimuli that are masked with a number of Gaussian blobs. (c) Unsmoothed differential image.
Figure 4
 
Examples of Bubbles masks. With this Bubbles size, it is still possible to mask only one facial feature while still showing all other features.
Figure 5
 
Learning curves of the first five phases, including both the new and old stimuli for each phase. There is one subplot for each rat, and each line indicates the learning curve of one phase. The gray data points in the plots of Rats 1, 4, and 6 indicate outliers due to a low number of trials in a session (< 8 trials). The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Figure 6
 
Learning curves of the first five phases, averaged over all rats. These learning curves include both the new and old stimuli, as each possible combination of old and new stimuli was presented. The shaded error bar corresponds to the standard error. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Figure 7
 
(a) Matrix visualizing the performance of all rats combined on the first generalization test (Phase 6) per pair of stimuli. The average number of trials per matrix cell is 24. (b) The performance matrices of each individual rat.
Figure 8
 
Learning curves of four rats for Phase 11. Each line indicates one rat. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Figure 9
 
(a) Matrix visualizing the performance of all rats combined on Phase 12, per pair of stimuli. The average number of trials per matrix cell is 16. (b) The performance matrices of all individual rats. Note that only four rats (2, 3, 4, and 5) were used in this phase.
Figure 10
 
Learning curves of four rats for Phases 13–16c. Each line indicates one phase. The gray data point in the plot of Rat 3 indicates an outlier. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Figure 11
 
Bar plots of the average performance of all rats on Phases 17 (left) and 18 (right). The error bars indicate the standard error across rats. The black dashed line represents chance-level performance; the red dashed line indicates our performance criterion of 80%.
Figure 12
 
The behavioral thresholded template, averaged over all rats and stimuli (red). The red contour visualizes the significant area on which all rats, on average, focus to distinguish between a face and a nonface object. The white contour indicates the contour of the pixel-based template. The contours are shown on top of a combined image of all stimuli.
Figure 13
 
The white contour indicates the pixel-based template. The turquoise, purple, orange, and green contours show the templates of each individual rat.
Figure 14
 
Correlation matrices for the between-rats analysis. First, a behavioral template for every pair was obtained from every rat—that is, nine templates per rat. Next, we correlated each pair's template pixel-wise between rats; each such correlation corresponds to one cell in these matrices. Top row (from left): correlation between Rats 2 and 3, Rats 2 and 4, and Rats 2 and 6. Bottom row (from left): correlation between Rats 3 and 4, Rats 3 and 6, and Rats 4 and 6. The color bar indicates the Pearson correlation coefficient. From these matrices we can see that Rats 3 and 4, as well as Rats 3 and 6, use a similar template for each pair, as indicated by their high correlations. An unpaired t test confirms this; its results and the average correlations of these matrices can be found in Table 13.
Figure 15
 
Correlation matrices of the within-rat analysis. Each matrix shows the consistency of the templates within a rat between all pairs. For each rat, we pixel-wise correlated the template of one pair to every other pair. From these matrices we can clearly see that Rat 3 has overall high correlations, suggesting that this rat has a high consistency in its templates.
Figure 16
 
Correlation matrices of each rat between its behavioral template on each stimulus pair and the pixel-based template of each pair. The numbers on the axes indicate the stimulus pair number. The color bar indicates the Pearson correlation coefficient. We can clearly see that Rat 3 has overall high correlations with the pixel-based template. Table 14 provides an overview of the average correlations of these matrices.
Table 1
 
An overview of all phases of the first part of the study.
Table 2
 
Performance criteria. Rats had to perform at or above the threshold (left column) during a predetermined number of sessions (right column). During the last session, they had to be at or above threshold; if they were not, the threshold was lowered.
Table 3
 
An overview of the number of sessions and trials per rat that were performed during the Bubbles part of this study. The last row indicates the average number (± standard error).
Table 4
 
Binomial test on the pooled response of all rats. The middle column specifies the p value for the first session.
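A binomial test of this kind asks how likely the observed number of correct trials would be if the rats were responding at chance (50% in a two-alternative task). A minimal sketch using only the standard library (the function name and example counts are illustrative, not taken from the paper):

```python
from math import comb

def binomial_p_one_sided(successes, trials, p_chance=0.5):
    """One-sided binomial test: probability of observing at least
    `successes` correct responses out of `trials` under chance
    performance `p_chance`."""
    return sum(
        comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# e.g., 60 correct out of 100 trials against 50% chance
p = binomial_p_one_sided(60, 100)
```

In practice a library routine such as `scipy.stats.binomtest` would typically be used, which also provides the confidence intervals reported in the later tables.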
Table 5
 
The percentage correct during the first generalization test for each rat individually and for the pooled response of all rats, as well as p values and 95% confidence intervals.
Table 6
 
The percentage correct during the second generalization test (Phase 12) of each rat individually and for the pooled response of all rats, as well as p values and 95% confidence intervals.
Table 7
 
An overview of performance in Phases 17 (upside down) and 18 (contrast inverted). The last row indicates the average performance over all rats (± standard error).
Table 8
 
An overview of the p values and 95% confidence intervals (CIs) of the binomial test for Phase 17 (upside down). The last row indicates the results of the binomial test on the pooled response of all rats.
Table 9
 
An overview of the p values and 95% confidence intervals (CIs) of the binomial test for Phase 18 (contrast inverted). The last row indicates the results of the binomial test on the pooled response of all rats.
Table 10
 
Percentages of overlap: how much the pixel-based template (PBT) overlaps with the template of the rat, and how much the template of the rat overlaps with the PBT. The last row indicates the average overlap across all rats.
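The two overlap percentages are asymmetric: the fraction of one template's significant pixels that fall inside the other's differs depending on which template serves as the reference. A minimal sketch, assuming templates are thresholded into binary masks (the function name and toy masks are illustrative):

```python
import numpy as np

def overlap_percentages(mask_a, mask_b):
    """Asymmetric overlap between two binary masks: the percentage of
    mask_a's pixels also in mask_b, and the percentage of mask_b's
    pixels also in mask_a."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    both = np.logical_and(a, b).sum()
    return 100.0 * both / a.sum(), 100.0 * both / b.sum()

# Toy masks: a pixel-based template and a rat's behavioral template
pbt = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
rat = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 0]])
pbt_in_rat, rat_in_pbt = overlap_percentages(pbt, rat)
```

Because the two masks generally contain different numbers of pixels, the two percentages need not match, which is why the table reports both directions.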
Table 11
 
Percentages of overlap between rats.
Table 12
 
The results of an unpaired t test on the diagonal and nondiagonal values of the correlation matrices in Figure 14. The significant results for Rats 3 and 4, and for Rats 3 and 6, indicate that these pairs of rats use similar templates.
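Comparing the diagonal of a correlation matrix against its off-diagonal entries with an unpaired t test can be sketched as follows (a minimal illustration with a pooled-variance Student's t statistic; the function name and the toy matrix are assumptions, not the authors' code):

```python
import numpy as np

def diag_vs_offdiag_t(corr):
    """Student's unpaired t statistic comparing the diagonal entries of
    a square matrix against its off-diagonal entries (equal-variance
    pooling, as in a standard two-sample t test)."""
    corr = np.asarray(corr, dtype=float)
    diag = np.diag(corr)
    off = corr[~np.eye(corr.shape[0], dtype=bool)]
    n1, n2 = diag.size, off.size
    s2 = ((n1 - 1) * diag.var(ddof=1) + (n2 - 1) * off.var(ddof=1)) / (n1 + n2 - 2)
    return (diag.mean() - off.mean()) / np.sqrt(s2 * (1 / n1 + 1 / n2))

# Toy matrix with a clearly elevated diagonal plus a little noise
rng = np.random.default_rng(1)
m = 0.1 + 0.8 * np.eye(4) + 0.01 * rng.normal(size=(4, 4))
t = diag_vs_offdiag_t(m)
```

A large positive t (with a correspondingly small p value, e.g. from `scipy.stats.ttest_ind`) indicates that matched-pair correlations on the diagonal exceed the mismatched ones, which is the pattern the table tests for.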
Table 13
 
Average Pearson correlation coefficients between performance for each pair of rats, as well as the average correlation of the diagonal elements. Note the high average correlation between the templates of Rats 3 and 6 (which can also be seen in the correlation matrices in Figure 14).
Table 14
 
Average correlation coefficients between the templates of the rats and the pixel-based template, as well as the average correlation of the diagonal elements. Note that on average, all rats show a positive correlation with the pixel-based template. More specifically, Rat 3 shows, on average, the highest correlation to the pixel-based template, which is also visible in Figure 16.
Table 15
 
Results of an unpaired t test on the diagonal and nondiagonal values of the correlation matrices in Figure 16. Given the lack of significant p values, we conclude that the rats do not necessarily use templates that differ between image pairs in a way that relates to where the images differ. This is also apparent in Figure 16, where no clear diagonal is visible.
Supplement 1