Article  |  January 2012
Animal detection and identification in natural scenes: Image statistics and emotional valence
Journal of Vision January 2012, Vol. 12, 25. doi:10.1167/12.1.25

Marnix Naber, Maximilian Hilger, Wolfgang Einhäuser; Animal detection and identification in natural scenes: Image statistics and emotional valence. Journal of Vision 2012;12(1):25. doi: 10.1167/12.1.25.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans process natural scenes rapidly and accurately. Low-level image features and emotional valence affect such processing but have mostly been studied in isolation. At which processing stage these factors operate and how they interact has remained largely unaddressed. Here, we briefly presented natural images and asked observers to report the presence or absence of an animal (detection), species of the detected animal (identification), and their confidence. In a second experiment, the same observers rated images with respect to their emotional affect and estimated their anxiety when imagining a real-life encounter with the depicted animal. We found that detection and identification improved with increasing image luminance, background contrast, animal saturation, and luminance plus color contrast between target and background. Surprisingly, animals associated with lower anxiety were detected faster and identified with higher confidence, and emotional affect was a better predictor of performance than anxiety. Pupil size correlated with detection, identification, and emotional valence judgments at different time points after image presentation. Remarkably, images of threatening animals induced smaller pupil sizes, and observers with higher mean anxiety ratings had smaller pupils on average. In sum, rapid visual processing depends on contrasts between target and background features rather than overall visual context, is negatively affected by anxiety, and finds its processing stages differentially reflected in the pupillary response.

Introduction
Our visual system is optimized to detect, classify, and identify objects that we encounter in everyday life. Without this skill, we would not be able to recognize components of our visual field as, for instance, potentially useful or threatening. The well-developed character of this skill is expressed in the accuracy and speed with which we can process objects in natural scenes (Potter & Levy, 1969). For example, the human and monkey brain can accurately distinguish between specific classes of objects within 45 ms (Mouchetant-Rostaing, Giard, Delpuech, Echallier, & Pernier, 2000) to 150 ms (Thorpe, Fize, & Marlot, 1996; VanRullen & Thorpe, 2001b) and initiate a behavioral reaction to them within about 120 ms (Kirchner & Thorpe, 2006). Independent of whether brain signals reflect differences across object categories through low-level features or high-level decision making (Rousselet, Husk, Bennett, & Sekuler, 2008; VanRullen & Thorpe, 2001b), these studies demonstrate how fast information about objects can be extracted, represented, and used. 
However, why can we recognize objects so accurately and quickly, and which mechanisms underlie this rapid processing? Some studies have found evidence suggesting that visual processing of objects consists of two stages: an early perceptual stage (∼75 ms) that distinguishes between observed features and a later decision-making stage (∼150 ms; Johnson & Olshausen, 2003, 2005; VanRullen & Thorpe, 2001b). While the diagnostic act of recognizing and categorizing objects probably happens at a later stage (Smith, Gosselin, & Schyns, 2007; van Rijsbergen & Schyns, 2009), these studies stress that the feature analysis performed by our visual system is highly important for successful detection and identification. In a similar vein, several other studies suggested that the automatic feed-forward analysis of specific image features is mainly responsible for accurate and fast behavioral responses (Kirchner & Thorpe, 2006; Thorpe et al., 1996; VanRullen & Thorpe, 2001a, 2001b). Results that support this line of thought come from a variety of studies in which a specific set of image statistics, such as color, texture, and shape, was varied to study their effect on detection (e.g., Vogels, 1999). 
Image statistics that have been identified to help object or scene recognition are contrast (Brodie, Wallace, & Sharrat, 1991; Macé, Delorme, Richard, & Fabre-Thorpe, 2010), color (Brodie et al., 1991; Delorme, Richard, & Fabre-Thorpe, 2010; Elder & Velisavljević, 2009; Gegenfurtner & Rieger, 2000; Goffaux et al., 2005; Humphrey, Goodale, Jakobson, & Servos, 1994; Oliva & Schyns, 2000; Wichmann, Sharpe, & Gegenfurtner, 2002; Wurm, Legge, Isenberg, & Luebker, 1993; Yao & Einhäuser, 2008), background coherence with the object (Biederman, 1972; Biederman, Glass, & Stacy, 1973), shape or posture (Delorme et al., 2010; Elder & Velisavljević, 2009), object size (Delorme et al., 2010), texture (Delorme et al., 2010; Elder & Velisavljević, 2009; Renninger & Malik, 2004), Fourier spectra (Gaspar & Rousselet, 2009; Joubert, Rousselet, Fabre-Thorpe, & Fize, 2009; McCotter, Gosselin, Sowden, & Schyns, 2005), and, though weakly, luminance (Delorme et al., 2010; Elder & Velisavljević, 2009; Macé et al., 2010; Nandakumar & Malik, 2009). Many of these studies used data sets of images that contain objects belonging to an explicit (target) category. Rarely, however, were subcategories or image features controlled for. Depending on the specific choice, categories might be subject to a selection bias and may have distinct visual features that make them especially easy to detect and recognize (Ohman, 1993; Ohman, Flykt, & Esteves, 2001; Tipples, Young, Quinlan, Broks, & Ellis, 2002). Frequently, animals are chosen as the target category, and, indeed, they have very specific characteristics, such as eyes and elongated legs, that are particularly important for their successful identification (Delorme et al., 2010). In addition, images of animals are likely to be pre-segmented by the photographer (Wichmann, Drewes, Rosas, & Gegenfurtner, 2010) and animals are, therefore, often subject to “unnatural” positions and figure–ground separations. 
Hence, a variety of features and their constellations are important for efficient and rapid processing of objects. It has remained elusive, however, to what extent each of these features contributes to recognition performance relative to the others. Here, we integrate a large variety of features and their relations in a model to predict several measures of performance. 
There are a few indications that particular features have a significant but weaker influence on recognition performance as compared to other features. For example, some studies argue that luminance or color have less influence on performance than texture, shape, and certain diagnostic parts (e.g., eyes and mouth) of the object (Delorme et al., 2010; Elder & Velisavljević, 2009). However, “texture” is, by definition, a repetitive pattern with a specific luminance, contrast, color, and orientation and is thus not independent of these constituent features. Similarly, the shape of an object is defined by differences between the object and background. Image statistics such as shape are, hence, a high-level and more abstract description of constellations of very basic features. If this terminological hierarchy is compared with the hierarchy of the visual system (Felleman & Van Essen, 1991), we see that information about luminance, contrast, orientation, and color is indeed already available in early visual areas, while shape is processed at later stages by areas such as the LOC (e.g., Baylis & Driver, 2001; Grill-Spector, Kushnir, Edelman, Itzchak, & Malach, 1998; Grill-Spector, Kushnir, Hendler et al., 1998; Kourtzi & Kanwisher, 2000, 2001; Malach, Grill-Spector, Kushnir, Edelman, & Itzchak, 1998). Thus, direct relations between performance and basic individual features such as luminance, contrast, or color could have been overshadowed by seemingly stronger but indirect relations between performance and high-level features. More importantly, indirect and direct effects on performance cannot be disentangled unless the components of which an object consists are assessed separately. For example, an object has to be separated from its background in order to study the effects of shape on performance. 
So far, there has been little research on the features of the object itself because virtually all studies (except for Delorme et al., 2010; Elder & Velisavljević, 2009; Wichmann et al., 2010) have only looked into the overall statistics of the entire image. In addition, although the degree and causality of the contribution of such high-level features to performance is debatable, there is still a broad range of other—both low- and high-level—features for which it is unclear to what extent they affect detection and identification performance. Hence, a thorough test of how low- and high-level features affect performance and to what extent these effects depend on whole-image, background, or object statistics is needed. 
Besides the aforementioned features, high-level concepts, such as emotions associated with an image or the object depicted therein, may also influence detection or identification. Animals, for example, may provoke a widespread set of emotions such as fear or emotional affect. Although it seems reasonable to assume that objects can induce specific emotions only when they are identified, in some cases emotional processing might not require detailed sensory information. Similar to the fast and automatic feed-forward brain mechanism that processes image features (Kirchner & Thorpe, 2006; Thorpe et al., 1996; VanRullen & Thorpe, 2001a, 2001b), it has been suggested that another distinct brain mechanism (i.e., a subcortical route to the amygdala) processes fear-inducing stimuli (Adolphs et al., 2005; Isbell, 2006; LeDoux, 1996; Morris, Ohman, & Dolan, 1999) and incorporates an evolutionarily innate bias toward threatening animals or objects (Deloache & Lobue, 2009; Lobue & DeLoache, 2008, 2010; Ohman et al., 2001; Seligman, 1970). An alternative theory proposes that observers have acquired an attentional bias toward threatening stimuli through learning (Blanchette, 2006; Brosch & Sharma, 2005; Lipp, Derakshan, Waters, & Logies, 2004; Mogg & Bradley, 1998; Tipples et al., 2002; Waters, Lipp, & Spence, 2004). In either case, the emotional valence tied to an object may help improve the speed and accuracy with which it is detected and identified. More importantly, we can again question to what degree emotions help us process and behaviorally react to objects. Relating this issue back to image statistics, it is important to control for confounds induced by image statistics when measuring effects of anxiety for objects presented in images. Studies on anxiety have so far controlled for only a small part of the differences in image statistics between non-threatening and threatening categories, and this factor may have caused seemingly conflicting results on anxiety. 
Here, we try to assess the weighted effects of a variety of both low- and high-level image features on performance. As it is important to dissociate between detection and identification performance (Evans & Treisman, 2005), we designed a rapid “animal”/“no animal” detection task that includes the identification of the presented species after detection of an animal. We used images of animals with different levels of threat (e.g., snakes are of high threat and caterpillars are of low threat) to investigate the role of emotional valence and its interaction with image statistics. Besides subjective ratings of anxiety and affect per image, we use pupil size dynamics as an objective measure of emotional valence (e.g., Beatty & Lucero-Wagoner, 2000; Hess, 1975) and as a reflection of activity in the (para)sympathetic nervous system (e.g., Steinhauer & Hakerem, 1992). In a separate analysis, animals were segmented from their background so that their features could be compared with background features. With these measures, we address the following four questions: (i) Which features correlate with detection, identification, reaction times, and confidence, and which of these relations is strongest? (ii) How do features of the object itself affect performance as compared to the statistics of the whole image? (iii) How do low- and high-level features correlate, and to what extent do these correlations explain differences in performance across object categories? More specifically, does anxiety affect performance after controlling for covariance of image statistics? (iv) Is anxiety or emotional affect reflected in visible physiological signals? Specifically, does the pupil, an outwardly accessible physiological measure of neural function, reflect emotional valences? 
Methods
Observers
Eight observers (age range: 22–26, 2 females, 6 males) participated in the experiments. All had normal or corrected-to-normal vision, were naive to the purpose of the studies, and gave informed written consent before the experiments. The experiments conformed to Institutional Guidelines for experiments with human subjects and to the Declaration of Helsinki. 
Apparatus
Stimuli were presented on a 21-inch EIZO Flexscan CRT screen at a viewing distance of 70 cm. The display refresh rate of the screen was 100 Hz and the resolution was 1152 × 864 pixels. CIE color space coordinates (x, y) of the screen were (0.623, 0.344), (0.287, 0.609), and (0.151, 0.065) for the red, green, and blue guns, respectively. Stimuli were generated on an Optiplex 755 DELL computer, using Matlab (Mathworks, Natick, MA) with the PsychToolbox (Brainard, 1997; Pelli, 1997) and EyeLink toolbox (Cornelissen, Peters, & Palmer, 2002) extensions. During the rapid animal detection experiment (see Stimuli and procedure section), pupil diameter (size) and direction of gaze were tracked with an infrared-sensitive camera (EyeLink 2000, SR Research, Osgoode, ON, Canada) at a sampling rate of 1000 Hz. The eye tracker was (re-)calibrated before each experiment and after each break. Observers' heads were supported by chin and forehead rests to ensure steady viewing. 
Stimuli and procedure
Our study consisted of two separate experiments: a rapid animal detection and identification task (Figure 1A) and an emotional valence rating task (Figure 1B). The two experiments were conducted on separate days. To prevent effects of learning due to feedback (the prolonged presentation for rating could give observers feedback about their performance when done after the rapid presentation) and priming (rapid presentation could affect subsequent ratings and vice versa), substantial time elapsed between the two tasks (mean ± SD: 248 ± 110 days). 
Figure 1
 
Paradigms and stimuli. (A) In the rapid presentation task, observers were shown an image of either an animal or distracter for 10 ms that was subsequently masked by a random noise image. Observers decided as fast as possible whether they had seen an animal or not by making an eye movement to the left or right part of the screen (text is here displayed as “yes”/“no” for reasons of legibility, but the actual display text was “no animal”/“animal”). Only for trials in which observers indicated to have seen an animal, they were asked to identify the species from a list (see gray arrow; arrangement of species is here illustrated by letters for reasons of legibility, but in the actual display full species names were shown) and could indicate their confidence on a ruler (0–100). (B) In the emotional valence rating task, observers were shown images of animals until response. Observers were asked to indicate their affect (−100–100) for the animal and their level of anxiety (0–100) if they would encounter the animal in real life.
The first experiment was a rapid “animal”/“no animal” detection and identification task. Observers were presented with a variety of chromatic images of natural scenes that either did or did not contain an animal. Images were chosen from two databases (http://www.photolibrary.uk.com/ and http://visionlab.ece.uiuc.edu/datasets.html) with a total of 708 animal pictures that were divided into five object categories (Figure 2A): Phobic animals (P), predators (PR), domesticated animals (DO), pets (PE), and non-phobic animals (NP). Each animal category was divided into three subcategories of species (phobic animals: Snakes, spiders, and scorpions; predators: Bears, wolves, lions; domesticated animals: Cows, sheep, horses; pets: Rabbits, cats, dogs; non-phobic animals: Caterpillars, beetles, butterflies). The non-phobic category was intended to match phobic animals with respect to their gross visual appearance: In particular, snakes share visual features with caterpillars, and the two have been used as visually similar categories before (Lobue & DeLoache, 2008). Based on previous literature, phobic animals and predators are subsumed as “threatening” and all other categories as “non-threatening” hereafter (also see the second experiment below for the justification of this categorization). We further distinguished between the superordinate categories of mammals (predators, domesticated animals, and pets) and non-mammals (phobic and non-phobic animals). The images containing no animals (distracters, DI) consisted of 383 natural scenes that included natural objects such as flowers, meadows, trees, mushrooms, fruits, vegetables, or stones (Figure 2B). Per observer, a total of 437 (62%) images of animals and 263 (38%) images of distracters were randomly chosen for presentation. Each animal species was represented by 28–30 images that, for each experiment, were randomly chosen from a database containing between 42 and 68 images per species. All images were 10° of visual angle in width and 7° in height. 
Pixel values of images were normalized such that the entire range (0–255) of possible luminance values was covered. 
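The range normalization described above can be sketched as a min-max stretch. This is a minimal Python sketch of one plausible implementation (the authors worked in Matlab and did not specify their exact routine), operating on a flat list of pixel luminance values:

```python
# Min-max stretch so the full 0-255 range is covered.
# A sketch of the normalization described above; the authors'
# exact Matlab implementation is an assumption here.
def normalize_pixels(pixels):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [0 for _ in pixels]
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels]

print(normalize_pixels([10, 20, 30]))  # -> [0, 128, 255]
```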
Figure 2
 
Image examples. (A) Examples of presented images of animal species (columns) per category (rows). (B) Examples of several non-animal distracter images from different categories.
A rapid detection trial (Figure 1A) started with the presentation of a fixation dot (0.8° diameter) for 1000 ms, followed by an image for 10 ms and a subsequent white noise mask. Observers were instructed to decide as fast as possible whether the presented image contained an animal or not by making an eye movement to one side of the screen. Each side of the screen contained the text “animal” or “no animal.” The location of this text (i.e., left or right) was fixed per observer but counterbalanced across observers. If observers reported having seen an animal—independent of whether their observation was correct—they were asked to identify the species of the animal. Five hundred milliseconds after the animal/no animal decision, observers could choose the observed species with a computer mouse from a 4 by 4 grid containing 15 species names. The arrangement of species in the grid was randomized across observers. The identification response was followed by an indication of the observers' level of confidence on a ruler that scaled from 0 (not confident) to 100 (very confident about identification). The experiment continued with the next trial after a button press, and observers could take a break after each block of 100 trials. 
The second experiment consisted of an emotional valence rating task in which observers were asked to rate their level of affect and anxiety for the images of animals that were shown in the rapid presentation task (Figure 1B). Observers fixated for 1000 ms and were subsequently shown an image of an animal until response. The images were accompanied by a question and a ruler on which observers could indicate their response with a computer mouse. The first question was designed to measure the observers' level of emotional affect for the animal. To the question “How do you feel when looking at this picture?” observers could indicate how they felt on a scale from −100 (very bad) to 100 (very good). The next question appeared after observers had rated their affect: “How anxious would you be if you encountered this animal in real life?” Observers could indicate their level of anxiety for the animal on a scale from 0 (not anxious) to 100 (very anxious). The next trial started after a button press. 
Analysis
To separate the effects of the image as a whole, the target animal alone, and the background (non-target image area), author MH and an additional observer independently produced cutouts of the image areas containing only the animal (Figure 3A). Cutouts were consistent across both annotators, and the overlap (intersection) of both cutouts was considered the animal for further analysis. The following statistics were computed separately for the entire image, for the animal alone, and for the background alone (for extreme examples, see Figure 3B): mean and median luminance, mean luminance contrast, mean color and luminance contrast between animal and background, mean radius of animal shape, variance in radius of animal shape (i.e., spikiness and elongation of shape), animal size, horizontal and vertical positions of the animal, and distance between the center of gravity of the animal and the center of the image. Luminance contrast was computed as the standard deviation of luminance. We calculated the contrast in color between the animal and its background (hereafter “color contrast animal–background”) in the physiologically defined DKL color space (Derrington, Krauskopf, & Lennie, 1984). DKL space is spanned by three orthogonal axes: luminance, the difference between long (L) and medium (M) wavelength cone excitations (L–M axis, “red–green” or “constant blue”), and the difference between the short (S) wavelength cone excitation and the sum of the other two cone excitations (S–(L + M), “blue–yellow” or “tritanopic confusion” axis). Color contrast animal–background was then defined as the distance between the mean position of the animal's pixels and the mean position of the background's pixels in DKL space, projected on the isoluminant plane. The same measure without the projection on the isoluminant plane encompasses both luminance and color information and will be referred to as “color + luminance contrast animal–background.” 
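The two contrast measures defined above reduce to distances between mean pixel positions in DKL space. The following Python sketch assumes pixels are already expressed as (luminance, L−M, S−(L+M)) triplets; the conversion from RGB to DKL coordinates is omitted and the data layout is an assumption for illustration:

```python
import math

# Sketch of the animal-background contrast measures described above.
# Pixels are assumed to be given as DKL triplets (lum, L-M, S-(L+M));
# the RGB-to-DKL conversion itself is not shown.
def mean_coord(pixels):
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def color_contrast(animal_px, background_px):
    """Distance between mean DKL positions, projected on the
    isoluminant plane (the luminance axis is ignored)."""
    a, b = mean_coord(animal_px), mean_coord(background_px)
    return math.hypot(a[1] - b[1], a[2] - b[2])

def color_lum_contrast(animal_px, background_px):
    """The same distance without the projection: the full 3D
    distance, including the luminance axis."""
    a, b = mean_coord(animal_px), mean_coord(background_px)
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))
```

With this split, "color contrast animal–background" is `color_contrast` and "color + luminance contrast animal–background" is `color_lum_contrast`; an animal differing from its background only in luminance yields zero for the former but not the latter.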
Figure 3
 
Cutout examples and extreme image statistics. (A) Four examples (rows) of original animal images (first column) of which the animal was cut out (second column) and analyzed separately from its background (third column). Cutouts were created by two individuals. Green and red areas indicate non-overlapping parts of the individual cutouts, white areas indicate overlap between the two individual cutouts (see second column). (B) A variety of image statistics were computed for correlation with performance; the panel depicts examples of images with statistical extremes shown. For example, the Animal Rad. Var. image shows an image in which the animal has a highly variable shape (i.e., not circular or rectangular but complex). Col. Con. Ani.–Bg. shows an image of an animal whose color strongly contrasts with its background.
Statistics and notation
Statistical tests were computed with Matlab (Mathworks, Natick, MA) and its Statistics Toolbox extension. When we report multiple post-hoc comparisons across animal categories (e.g., for detection probabilities), we provide only noteworthy t-tests and correlations in the main text. To avoid overloading the main text, we additionally report only the effect that has the highest p-value (i.e., is “least significant” colloquially speaking). That is, a statement like “(t(7) > 1.90, p < 0.05)” is meant to imply that all p-values were below 0.05, all t-values were above 1.90, and the degrees of freedom were at least 7. Full tables are given in 1. Main effects or interactions of ANOVAs as well as correlations are referred to as “significant” if their p-value is below a corrected alpha level that has been adjusted to an expected false discovery rate (FDR) of 5% using Benjamini and Hochberg's (1995) method. Spearman's correlation coefficients between behavioral data sets (i.e., performance and emotional valence) were based on the average per image. A general linear model (GLM) was created with the various image statistics and emotional valence as independent variables and the measures of performance as dependent variables. Another GLM used behavioral statistics and emotional valence as independent variables and pupil size as the dependent variable. Input to the GLMs was z-score normalized. For some analyses, pupil size was normalized by subtracting the pupil size at image onset from the pupil trace per trial and subsequently dividing by the same value. This way, the normalized pupil size represents the proportional change in pupil size with respect to image onset. Some analyses were based on the “absolute” pupil size (i.e., not normalized). Note that absolute pupil size is somewhat arbitrary as it depends on the settings of the gaze-tracking camera. These settings were set individually for each observer. To enable comparison within observers, settings were kept fixed across trials. 
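The per-trial pupil normalization described above, subtracting the pupil size at image onset and dividing by that same value, can be sketched as follows. This is a minimal Python illustration; the trace format (a list of samples, with the onset sample's index known) is an assumption:

```python
# Per-trial pupil normalization: each sample becomes the proportional
# change relative to the pupil size at image onset (sketch; the
# sample layout is assumed, not taken from the authors' code).
def normalize_pupil(trace, onset_index=0):
    baseline = trace[onset_index]
    return [(s - baseline) / baseline for s in trace]

print(normalize_pupil([400.0, 420.0, 380.0]))  # -> [0.0, 0.05, -0.05]
```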
To allow comparison of absolute data between observers, the variation of settings across observers was kept to a minimum (mean and SEM detection thresholds in device units were 120.5 ± 4.8 for the pupil and 228.9 ± 2.5 for the corneal reflex). Since we are interested in relative rather than absolute reaction times, we define reaction time as the time between image onset and the moment the center of gaze crossed the 25% or 75% border of the screen (see Figure 1A). This is a more robust measure than saccade onset because it reduces the probability of miscategorizing decisions on the basis of initial erroneous saccades that preceded the final decision but were directed toward the opposite side. This measure further incorporates the time delays caused by such initial erroneous saccades and therefore includes a measure of decisional uncertainty. Trials with reaction times shorter than 50 ms (5 of 5,600 trials) or longer than 1500 ms (123 trials) were removed from analysis (2% of all trials). 
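The reaction-time definition and trial-exclusion criteria above amount to a simple scan over gaze samples. This is a hedged Python sketch; the sample format `(time_ms, x_fraction)`, with horizontal gaze expressed as a fraction of screen width, is an assumption for illustration:

```python
# RT = time from image onset until horizontal gaze first crosses the
# 25% or 75% screen border; trials with RT < 50 ms or > 1500 ms are
# discarded (sketch; the gaze-sample format is an assumption).
def reaction_time(gaze_samples):
    for t, x in gaze_samples:
        if x <= 0.25 or x >= 0.75:    # gaze crossed a decision border
            return t
    return None                        # no decision made

def keep_trial(rt, min_rt=50, max_rt=1500):
    return rt is not None and min_rt <= rt <= max_rt

rt = reaction_time([(0, 0.5), (120, 0.6), (260, 0.8)])
print(rt, keep_trial(rt))  # -> 260 True
```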
Results
Performance, emotional valence, and image statistics per animal category
Threatening animals are generally believed to be detected and identified with high accuracy (Deloache & Lobue, 2009; Lobue & DeLoache, 2008, 2010; Ohman et al., 2001; Seligman, 1970). Here, we tried to replicate these findings by computing performance and emotional valence ratings per animal category. Of all animal pictures, 70% ± 15% were correctly detected (i.e., hits) and consequently 30% ± 15% were missed. Of all distracter pictures, 73% ± 22% were correctly rejected and consequently 27% ± 22% mistakenly triggered a false alarm. 
Although observers differed in their subjective criteria (i.e., some observers were more liberal in mistakenly indicating a distracter as an animal, Figure 4A), performance per animal category was generally similar across observers (Figures 4B–4F; for box plots, see 1). There was a significant main effect of animal category on detection probabilities (hit rates, Figure 4C; F(4,7) = 12.62, p = 5.413 × 10^−6; FDR-adjusted alpha level for all ANOVAs: 0.022). Detection probabilities were higher for predators (PR), domesticated animals (DO), and pets (PE) and smaller for phobic animals (P) and non-phobic insects (NP). Remarkably, the threatening animals did not differ significantly from the non-threatening animals (t(38) = 1.41, p = 0.166, conf. = −0.19–0.03). The mammalian animal categories of predators, domesticated animals, and pets did have higher detection probabilities than phobic animals (t(38) > 2.76, p < 0.01, conf. = −0.24–0.04; see 1 for all post-hoc comparisons). Domesticated animals and pets also showed better detection performance than non-phobic animals (t(7) > 2.75, p < 0.05; see 1 for confidence intervals). A very similar pattern was found for reaction times (Figure 4D; F(4,7) = 15.06, p = 1.107 × 10^−6): Mammals were detected faster than phobic animals and non-phobic animals (t(7) > 2.95, p < 0.05). Since reaction times were qualitatively similar to accuracies across categories, the data are not affected by a speed–accuracy trade-off. Identification performance, calculated only for trials in which animals were correctly detected (i.e., the conditional probability of identification given detection), showed no significant differences across animal categories (Figure 4E; F(4,7) = 1.47, p = 0.239). Unconditionally (i.e., without taking detection performance into account), identification showed differences across animal categories (F(4,7) = 3.71, p = 0.015), which also follows directly from the results on detection and on identification conditioned on detection. 
Confidence ratings, however, depicted a pattern analogous to detection probabilities (Figure 4F; F(4,7) = 3.31, p = 0.024; t(7) > 2.64, p < 0.05). In general, data implied that detection was better (higher performance, faster reaction times) for mammals than for non-mammals. Once correctly detected, animal identification did not depend on category, while confidence followed a similar pattern as detection. This suggests a dissociation between detection and identification based on category. 
Figure 4
 
Detection and identification performance. (A) Percentage of hits (animal correctly detected) and false alarms (distracter mistaken for animal) for each of the 8 observers (colored points) and the average (blue). Observers clearly perform above chance (diagonal) and show similar performance despite variability in their criteria. (B) Percentage of false alarms by animal category (for abbreviations, see text); each observer is presented by a line (colors as in (A)). Note the similar pattern across observers. (C) Probability to correctly detect an animal (percentage of hits) by category (notation as (B)). (D) Mean reaction times per category. (E) Probability to correctly identify the animal's category, given it was correctly detected. (F) Mean confidence ratings. (G) Mean anxiety. (H) Mean emotional affect per category. (I) Identification confusion matrix given that an animal is detected correctly but incorrectly identified. Brighter patches indicate that species are more likely to be mixed up with each other during identification. Matrix is normalized per presented species (3 per category) with the main diagonal excluded (blue), such that each entry gives the probability of a presented species (column) being confused with another species (row), excluding correct identifications. White lines delineate animal categories, order of species per category as given in Figure 1A. See Figure 1 for list of abbreviations of categories. Note that the lines in (B) through (H) are meant for illustration only and do not indicate a functional dependence.
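The column-wise normalization described for the confusion matrix in Figure 4I can be sketched as follows. This is a minimal illustration with toy counts, not the authors' analysis code:

```python
import numpy as np

def normalize_confusions(counts):
    """Column-normalize a species confusion-count matrix, excluding the
    main diagonal (correct identifications), so each entry gives
    P(reported species | presented species, identification error)."""
    counts = np.asarray(counts, dtype=float).copy()
    np.fill_diagonal(counts, 0.0)            # exclude correct identifications
    col_sums = counts.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0            # guard against empty columns
    return counts / col_sums                 # each column sums to 1

# Toy counts: rows = reported species, columns = presented species
counts = np.array([[5, 2, 1],
                   [3, 6, 4],
                   [2, 2, 7]])
probs = normalize_confusions(counts)
```

With the diagonal zeroed, each column of `probs` sums to one across the remaining (confused) species, which matches the per-presented-species normalization described in the caption.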
Interestingly, the dependence of detection and confidence on category does not seem to follow an intuitive anxiety or emotional axis. To test whether the a priori categorization indeed matched the subjective experience of our observers, we assessed levels of anxiety and affect per animal category in the second, emotional valence rating experiment. As expected, anxiety ratings were substantially higher for threatening animals (i.e., phobic animals and predators) than for the other animal categories (Figure 4G; F(4,7) = 101.39, p = 3.331 * 10^−16; t(7) > 8.72, p < 0.001, for all comparisons between threatening and non-threatening animals). Emotional affect also differed significantly across categories (F(4,7) = 10.00, p = 3.717 * 10^−5) and was negative for phobic animals (Figure 4H; t(7) = 4.87, p = 0.002, conf. = 20.14–39.65), neutral for predators (t(7) = 0.25, p = 0.811, conf. = 35.46–67.95) and domesticated animals (t(7) = 2.30, p = 0.055, conf. = 49.73–68.73), and positive for pets (t(7) = 4.73, p = 0.002, conf. = 59.45–78.36) and non-phobic animals (t(7) = 2.47, p = 0.043, conf. = 50.19–58.70). These data indicate that anxiety and affect are distinct concepts and measurements. The overall pattern of anxiety across animal categories seemed to overlap neither with detection nor with identification performance; surprisingly, no effect of anxiety on performance was seen across categories. This suggests that low-level features rather than higher order emotional valences may underlie the differences in performance across animal categories. Interestingly, mammals were more easily mixed up with each other during identification, and the same held for phobic and non-phobic animals (Figure 4I; t(5) > 2.21, p < 0.05, for all comparisons of phobic and non-phobic animals versus mammals). 
At least for the species considered here, features or high-level categories (e.g., mammal) other than those defined by emotional valence contribute more strongly to identification errors. 
To summarize, detection performance and identification performance were not necessarily determined by the mean level of threat or affection for the depicted animals per category. For instance, although observers had negative affect and high anxiety for phobic animals, these animals were still difficult to detect. Predators were also not easier to detect than non-threatening animals. These findings might indicate that differences in image statistics relate to the differences in performance across animal categories. To test this hypothesis, we next assess the relative contributions of emotional valence and image statistics to performance. 
Correlations between performance, emotional valence, and image statistics
To assess how strongly emotional valences and image statistics affected performance, we calculated for a subset of dependent variables whether they correlated with image statistics (Figure 5). Correlations with image statistics were based only on images with animals (n = 708), and data were averaged across observers per image. Correlations involving identification or confidence (in identification) were based only on images that at least one observer had correctly detected (n = 681). To control for correlations across variables, we performed a partial correlation analysis for the correlations between performance and emotional valences, treating image statistics as controlling variables (for correlations between image statistics and emotional valences, see Figure 6). We report only the most noteworthy correlations (for scatter plots, see 1). We found no significant relation between anxiety and detection probability (r(706) = −0.06, p = 0.133), while correlations with identification probability (r(680) = −0.11, p = 0.006), reaction time (r(706) = 0.12, p = 0.002), and confidence (r(680) = −0.14, p = 0.001) were significant. Emotional affect similarly correlated with identification probability (r(680) = 0.10, p = 0.017), reaction time (r(706) = −0.11, p = 0.004), and confidence (r(680) = 0.12, p = 0.002). 
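The partial correlation between performance and emotional valence, with image statistics as controlling variables, can be sketched by regressing the controls out of both variables and correlating the residuals. The data below are simulated stand-ins, not the study's data:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Pearson correlation between x and y after regressing the
    controlling variables (here: image statistics) out of both."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
c = rng.normal(size=(200, 3))                        # stand-in image statistics
x = c @ [0.5, -0.2, 0.1] + rng.normal(size=200)      # e.g., anxiety rating
y = c @ [0.4, 0.3, -0.1] + rng.normal(size=200)      # e.g., detection rate
r = partial_corr(x, y, c)
```

In this toy setup, x and y share variance only through the controls, so the partial correlation is close to zero even though the raw correlation is not.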
Figure 5
 
Correlation matrix. Matrix of correlations between performance variables (vertical axis) and image statistics and emotional valences (horizontal axis); r-values are given as numbers. Only significant correlations (p smaller than an FDR-adjusted alpha level of 0.014) are annotated with numbers. The most noteworthy findings were the correlations between animal (rather than whole-image) statistics and performance and the absent or weakly negative correlations between anxiety and performance. Animal (i.e., the object of interest or target) statistics have much larger effects on performance than whole-image statistics, and anxiety has a weak effect on performance that is reversed relative to our prior expectation. See Figure 1 for the list of category abbreviations. Abbreviations of image statistics: Rad. Var. = variance in radius of animal shape outline, Col. = color, Lum. = luminance, Con. = contrast, Ani. = animal, Bg. = background.
Figure 6
 
Correlation matrix. Matrix of correlations between image statistics and emotional valences. Only significant correlations (at FDR-adjusted alpha level of 0.020) are depicted.
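An FDR-adjusted alpha level, as used for Figures 5 and 6, can be obtained with the Benjamini–Hochberg procedure. Whether the authors used exactly this procedure is an assumption, so the sketch below is illustrative only:

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Benjamini–Hochberg: return the largest p-value threshold at which
    the expected false discovery rate is at most q. Rejecting all tests
    with p <= this value yields an 'FDR-adjusted alpha level'."""
    p = np.sort(np.asarray(pvals))
    m = len(p)
    below = p <= q * np.arange(1, m + 1) / m   # step-up criterion
    if not below.any():
        return 0.0                             # nothing survives correction
    return p[below][-1]                        # reject all p <= this value

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
alpha = fdr_threshold(pvals, q=0.05)
```

Note that BH rejects all hypotheses with p-values up to the largest one satisfying the step-up criterion, even if some intermediate p-values fail it.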
Image statistics that correlated with detection included animal luminance (r(706) = 0.29, p = 1.776 * 10^−15), animal contrast (r(706) = 0.21, p = 9.628 * 10^−9), background contrast (r(706) = −0.19, p = 1.880 * 10^−7), animal color saturation (r(706) = 0.20, p = 2.982 * 10^−8), animal size (r(706) = 0.19, p = 1.579 * 10^−7), and variance of shape radius (r(706) = 0.11, p = 0.002). In addition, color contrast animal–background and color + luminance contrast animal–background had significant effects on detection (r(706) = 0.20, p = 7.371 * 10^−8; r(706) = 0.27, p = 5.362 * 10^−14). All other performance variables (i.e., identification, reaction time, and confidence) had very similar correlations with the image statistics (p < 0.01). 
The significant correlations of performance with image statistics were higher than the correlations with emotional valence for detection (t(10) = 3.10, p = 0.011, conf. = 0.04–0.22) but not for the other performance variables (t(7) < 2.31, p > 0.05). The absolute correlations of image statistics that related only to the animal cutouts were also higher than the correlations related to the whole image (t(22) = 5.59, p = 1.267 * 10^−5, conf. = 0.07–0.16) or only the background (t(22) = 3.55, p = 0.002, conf. = 0.04–0.17). Correlations with detection were furthermore higher than correlations with identification (t(14) = 5.39, p = 9.605 * 10^−5, conf. = 0.03–0.07). As for the correlations between performance and emotional valences, we also performed a partial correlation per image statistic with each performance variable while controlling for correlations across image statistics. After this control, only background contrast, color + luminance contrast animal–background, size of animal, and variance of shape radius of animal correlated significantly with performance. As color + luminance contrast animal–background incorporates information about luminance and saturation, it also correlates with these statistics (Figure 6). From this point of view, color + luminance contrast animal–background is an intermediate-level statistic and can, therefore, explain all the variance in performance without the help of the lower level statistics of luminance and saturation. Indeed, when removing the contrast animal–background factors as covariates from the partial correlation, animal luminance and saturation significantly correlate with all performance variables except reaction times (r(680) > 0.10, p < 0.01). As a final control, we created a GLM with image statistics and emotional valences as independent variables and performance as dependent variables (Table 1). 
For reasons of statistical power, we kept the model simple by selecting only the variables that significantly correlated with performance (Figure 5) and were indicated as independent factors by a factor analysis (see 1). The standardized beta coefficients were comparable with the correlations reported above (r(46) = 0.55, p = 0.001), but some remarkable deviations were observed. Although the GLM was statistically stricter than the correlation analysis, several image statistics and emotional valences that correlated significantly with performance were not significant predictors of performance in the GLM. Absolute beta coefficients for detection were not higher than those for identification (t(11) = 0.72, p = 0.486), beta coefficients of animal features were smaller, rather than larger, than those of background and whole-image statistics (t(22) = 2.20, p = 0.039), and beta coefficients of image statistics were not higher than those of emotional valences (t(10) = 0.53, p = 0.610). It was also remarkable that anxiety was not a significant factor in the model for predicting any performance variable. When comparing beta coefficients across observers (i.e., we performed another GLM on the data per observer), emotional affect was a stronger predictor than anxiety of detection (t(7) = 5.53, p = 0.001) and reaction times (t(7) = 10.12, p = 1.979 * 10^−5). No other remarkable patterns were found across predictors. In sum, the overall pattern that emotional affect and several image statistics can impair or improve performance remains evident after controlling for covariance across predictors. 
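Standardized beta coefficients, as reported for the GLM, can be obtained by z-scoring the predictors and the outcome before fitting, which makes the coefficients comparable across predictors. The sketch below uses simulated predictors as stand-ins for the image statistics and valence ratings:

```python
import numpy as np

def standardized_betas(X, y):
    """Fit a linear model on z-scored predictors and outcome; the
    returned coefficients are standardized betas."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    Z = np.column_stack([np.ones(len(yz)), Xz])      # add intercept column
    coef, *_ = np.linalg.lstsq(Z, yz, rcond=None)
    return coef[1:]                                  # drop intercept

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))     # e.g., luminance, contrast, size, affect
y = X @ [0.3, -0.25, 0.1, 0.0] + rng.normal(scale=0.5, size=300)
betas = standardized_betas(X, y)
```

Because all variables are on the same (unit-variance) scale, the absolute betas can be compared directly, as done in the text when contrasting animal, background, and whole-image predictors.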
Table 1
 
GLM—Effects of image statistics and emotional valences on performance (significant predictors are printed in bold).
Detection

Variable                      Beta      t Stat.   p-Value
Lum. Im.                       0.214    3.97      7.883 * 10^−5
Lum. Ani.                     −0.021    0.33      0.743
Con. Ani.                      0.148    2.92      0.004
Con. Bg.                      −0.271    6.25      6.902 * 10^−10
Sat. Im.                      −0.044    0.86      0.390
Sat. Ani.                      0.126    2.55      0.011
Col. Con. Ani.–Bg.             0.020    0.34      0.736
Col. + Lum. Con. Ani.–Bg.      0.163    2.87      0.004
Size Ani.                      0.082    2.22      0.027
Var. Rad. Ani.                 0.059    1.63      0.103
Anxiety                        0.030    0.66      0.513
Emotional affect               0.131    2.89      0.004

Identification

Variable                      Beta      t Stat.   p-Value
Lum. Im.                       0.122    2.11      0.035
Lum. Ani.                     −0.020    0.29      0.772
Con. Ani.                      0.077    1.42      0.158
Con. Bg.                      −0.248    5.27      1.876 * 10^−7
Sat. Im.                      −0.029    0.53      0.593
Sat. Ani.                      0.152    2.89      0.004
Col. Con. Ani.–Bg.            −0.064    1.02      0.310
Col. + Lum. Con. Ani.–Bg.      0.176    2.90      0.004
Size Ani.                     −0.003    0.08      0.936
Var. Rad. Ani.                 0.070    1.81      0.071
Anxiety                       −0.005    0.10      0.92
Emotional affect               0.025    0.51      0.608

Reaction times

Variable                      Beta      t Stat.   p-Value
Lum. Im.                      −0.145    2.49      0.013
Lum. Ani.                     −0.034    0.50      0.619
Con. Ani.                     −0.043    0.79      0.428
Con. Bg.                       0.112    2.38      0.017
Sat. Im.                       0.065    1.18      0.240
Sat. Ani.                     −0.086    1.62      0.107
Col. Con. Ani.–Bg.             0.024    0.37      0.709
Col. + Lum. Con. Ani.–Bg.     −0.060    0.98      0.327
Size Ani.                     −0.069    1.78      0.082
Var. Rad. Ani.                −0.126    3.26      0.001
Anxiety                       −0.030    0.61      0.581
Emotional affect              −0.061    1.24      0.215

Confidence

Variable                      Beta      t Stat.   p-Value
Lum. Im.                       0.202    3.78      1.739 * 10^−4
Lum. Ani.                     −0.047    0.75      0.453
Con. Ani.                      0.092    1.83      0.067
Con. Bg.                      −0.358    8.21      1.221 * 10^−15
Sat. Im.                      −0.062    1.22      0.223
Sat. Ani.                      0.150    3.05      0.002
Col. Con. Ani.–Bg.            −0.007    0.12      0.901
Col. + Lum. Con. Ani.–Bg.      0.210    3.71      2.229 * 10^−4
Size Ani.                      0.108    2.95      0.003
Var. Rad. Ani.                 0.068    1.88      0.060
Anxiety                        0.053    1.15      0.249
Emotional affect               0.133    2.90      0.004
In conclusion, both image statistics and emotional valences contribute to the recognition of animals in natural images. Features related to the contrast between animal and background, rather than to the image as a whole, generally contributed significantly to performance. In addition, background contrast decreased performance. Contrast in luminance and color between animal and background also affected performance, and this became particularly evident after controlling for correlations across image statistics. It has been suggested, however, that an animal is more likely to be presented in the center of the image because of pre-segmentation by the photographer who took the images (Wichmann et al., 2010). This bias could potentially underlie our finding that a stronger contrast between statistics of the object and background, rather than of the whole image, increased performance, because parts of the image presented in the fovea could be enhanced compared to parts in the periphery. However, the animal's position varied and had no effect on detection (r(706) = 0.04, p = 0.218) or identification performance (r(680) = 0.07, p = 0.058). Thus, the effect of low-level features on detection and identification performance was independent of object position. Levels of emotional valences, higher level features that are explicitly tied to the object, also significantly contributed to detection, identification, reaction time, and confidence. The negative effect of anxiety on performance was unexpected, as the literature generally reports a positive effect (see Introduction section). Furthermore, anxiety was not a significant predictor in the GLM, probably because emotional affect covaried with anxiety (see Figure 6) and explained more variance in performance than anxiety did. Taken together, this indicates that anxiety only weakly affects the processing of objects and that emotional affect is a better predictor of performance. 
To get a better picture of whether anxiety is really involved in physiological processes related to object recognition, we next assess how the pupil responds to the presentation of animal pictures. 
Pupil size as a window to different stages of object processing and anxiety
To investigate at which stages specific factors affect the neural processing of animal images, a physiological signal that reflects such processing is needed. We here quantify the relations between the time course of pupillary responses and performance and emotional valences (for effects of image statistics on pupillary responses, see, for example, Alpern & Campbell, 1962; Gamlin, Zhang, Harlow, & Barbur, 1998; Z. Li, Liang, & Sun, 2006; Tsujimura, Wolffsohn, & Gilmartin, 2001; Young & Alpern, 1980). We computed correlations between pupil size and these dependent variables as a function of time around image onset (Figure 7A). All dependent variables correlated with pupil size (p < 0.05) at some time point. Correlations with anxiety and identification peaked at intermediate time points (1000–1500 ms); detection probability and confidence peaked both during the pupillary response (approximately 600 ms; see Figures 7C–7G for examples of the pupillary responses) and during later time periods (>1500 ms); and correlations with reaction time and confidence peaked during the periods in which pupil dilation or constriction was fastest (i.e., around the pupillary response). Remarkably, anxiety, confidence, and reaction times correlated with normalized pupil size already before the onset of the trial (for possible interpretations, see the Discussion section). As a control for correlations among the behavioral variables, we performed a GLM with these variables as predictors and pupil size as the dependent variable (Figure 7B). GLM beta coefficients were very similar to the correlation data (compare Figure 7A with Figure 7B). To summarize, performance and the emotional valence tied to the presented objects were reflected in the pupil. 
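Correlating pupil size with a behavioral variable at every time sample can be sketched as follows; the simulated data and the injected late effect are illustrative assumptions, not the recorded pupil traces:

```python
import numpy as np

def timecourse_correlation(pupil, behavior):
    """Correlate a per-trial behavioral variable with pupil size at every
    time sample; pupil has shape (n_trials, n_samples)."""
    b = behavior - behavior.mean()
    p = pupil - pupil.mean(axis=0)
    num = (p * b[:, None]).sum(axis=0)                       # covariance terms
    den = np.sqrt((p ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum())
    return num / den                                         # Pearson r per sample

rng = np.random.default_rng(2)
n_trials, n_samples = 120, 50
behavior = rng.normal(size=n_trials)            # e.g., per-trial anxiety ratings
pupil = rng.normal(size=(n_trials, n_samples))  # stand-in pupil traces
pupil[:, 25:] += 0.8 * behavior[:, None]        # inject an effect in late samples
r_t = timecourse_correlation(pupil, behavior)
```

The result `r_t` is a correlation time course like the ones plotted in Figure 7A, with the correlation rising only at the samples where the behavioral variable actually modulates the trace.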
Figure 7
 
Pupil responses. Change in pupil size relative to image onset as a function of time after image onset. (A) Pupil size was correlated with each dependent variable at some point in time (thick lines represent periods of significant correlations at p < 0.05). (B) Beta coefficients of a GLM indicate how strongly behavioral factors predicted pupil size. Pupil size plotted (C) per anxiety level for correctly identified animals, (D) for unidentified animals, (E) per presented animal category for identified animals, (F) per presented animal category for unidentified animals, and (G) per observed (i.e., perceived animal rather than presented) animal category for unidentified animals. In sum, anxiety is reflected in the pupil after correct identification of the animal. Significance levels for the plots where pupil size is presented per animal category (E–G) were based on comparisons between pooled trials of threatening animals, on the one hand, and pooled trials of non-threatening animals, on the other hand. High and low anxiety levels in (C) and (D) were based on splitting the respective data at the median, such that each level (high anxiety and low anxiety) contains the same number of trials.
We further checked for relations between average performance or emotional valence and pupil size throughout the experiment. We calculated the mean of each dependent variable per observer over all trials and computed correlation coefficients between these means. Remarkably, the overall mean pupil size correlated with the mean anxiety rating per observer (r(6) = −0.81, p = 0.022). This negative correlation between mean pupil size and anxiety ratings could imply that pupil size is an objective indicator of an observer's general anxiety level. Anxiety might not only be reflected in the mean pupil size but also in the amplitude of the pupillary response. However, the correlation between image statistics and anxiety ratings (see Figure 6) could indicate that the reported differences in pupil size are a result of image statistics rather than anxiety. To disentangle the effects of image statistics and emotional states on pupil size, we hypothesize that fear of an animal is a subjective concept that is not necessarily determined by a particular set of image features. We therefore argue that if the pupil shows a clear response to the threat of the presented animal even when the animal could not be consciously identified, image statistics may underlie the effects of anxiety on pupil size. Conversely, if the pupil only shows a response to threatening animals when identification was successful but not when it was unsuccessful, image statistics do not underlie the effects of anxiety on pupillary responses. To test this hypothesis, we performed several additional analyses of the effects of anxiety on pupil size as a function of time around image onset (Figures 7C–7G). First, we computed pupil size for correctly identified animals that were rated with either high or low anxiety (Figure 7C). The pupil responded with a larger amplitude (i.e., more negativity in the signal) and with an overall smaller pupil size after ∼600 ms to images of animals rated with high anxiety. 
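The median split into equally sized high- and low-anxiety trial groups (Figures 7C and 7D) can be sketched as follows; the rank-based tie-breaking is an assumption, since the authors only state that both groups contain the same number of trials:

```python
import numpy as np

def median_split(ratings):
    """Split trial indices at the median rating into equally sized
    'low' and 'high' groups (ties broken by rank so sizes match)."""
    order = np.argsort(ratings, kind="stable")   # trial indices, low to high
    half = len(ratings) // 2
    low, high = order[:half], order[half:]
    return low, high

# Toy per-trial anxiety ratings
ratings = np.array([3., 7., 1., 9., 5., 5., 2., 8.])
low, high = median_split(ratings)
```

The returned index arrays can then be used to average the pupil traces separately for the low- and high-anxiety halves of the trials.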
We further computed pupillary responses for unidentified (i.e., correctly detected but incorrectly identified) animals per anxiety level of the presented animal (Figure 7D). If animals were not correctly identified, the pupil did not show a significant difference between animals of low and high threat. The level of threat per animal category also determined the amplitude of the pupillary response and the subsequent overall pupil size for correctly identified animals (Figure 7E). Threatening animals (i.e., phobic animals and predators) induced larger pupillary amplitudes at ∼600 ms than non-threatening animals. As with the anxiety levels, this effect was only present for animals that were correctly identified, because unidentified animals induced no such difference in pupil size across animal categories (Figure 7F). However, there is a significant effect if pupil traces are sorted per subjectively observed (i.e., the animal perceived as indicated by the observer) rather than truly presented animal category (Figure 7G). These results indicate that the effects of anxiety on pupil size occur only after proper identification of the presented animal. This supports the notion that anxiety is a separate mechanism that operates on object processing independently of image statistics. 
We found evidence that anxiety contributes to performance and is reflected in pupillary responses after conscious identification of the animal. The degree of an observer's overall anxiety for depicted animals is also reflected in the average size of the pupil. In other words, both short-term anxiety and long-term anxiety are reflected in the pupil (for details, see the Discussion section), and short-term anxiety only occurs after the activation of a higher object representation. 
Discussion
Performance relies on contrasts between object and background
Successful detection of animals in images relied mainly on increased levels of image luminance, animal saturation, animal size, animal shape, and color and luminance contrast between animal and background, and on decreased levels of background contrast. Except for image luminance and background contrast, features predominantly tied to the target object thus contribute to performance, a phenomenon that most studies have disregarded because they did not aim to distinguish between effects of object features and overall image features. It is unlikely that this phenomenon is explained by the predominantly foveal rather than peripheral position of animals, because position had no effect on performance. Although this could result from limited variance in animal position (animal center of gravity was 1.0 ± 0.6 deg; image size was 10 by 6.7 deg), it has been shown that the categorization of animals presented in the near periphery rather than the fovea is still highly accurate (±90% at 10°; Thorpe, Gegenfurtner, Fabre-Thorpe, & Bülthoff, 2001). Hence, we suggest that performance is predominantly affected by object features and their contrast with the background rather than by overall image features. Obviously, the background alone cannot provide definite proof of the presence or absence of an animal for any given image, especially when the distracter database is carefully selected and the potential habitat of all species under consideration is visually similar. Nonetheless, the background could, in principle, still be probabilistically informative about the presence or absence of an animal and may even aid identification (say, if the background is predominantly green, the animal is probably more likely a cow than a whale), rendering the relative importance of target features non-trivial. 
Scene gist and context can be important factors for rapid categorization (Joubert, Fize, Rousselet, & Fabre-Thorpe, 2008; Joubert, Rousselet, Fize, & Fabre-Thorpe, 2007). In addition, we found that background luminance contrast negatively affected performance. It could well be that high luminance contrasts in the background "distract" the visual processing of the object. Nonetheless, image features tied to the segmented object or background showed correlations with performance. While clearly beyond the scope of the present study, using targeted feature manipulations ('t Hart, Schmidt, Klein-Harmeyer, & Einhäuser, 2011) to address whether these results are an effect of the background per se (e.g., by inducing higher order contrasts), a general property of natural scenes, or merely an effect of the chosen databases will be an interesting issue for further research. 
Moderate contributions of all image features to performance
We found that a variety of features had significant effects on performance in a task that required detecting and identifying animals as fast as possible after the rapid presentation of natural images. If our visual system is indeed tuned to process natural stimuli efficiently (Barlow, 1961) and effortlessly (F. F. Li, VanRullen, Koch, & Perona, 2002), it seems natural that features represented abundantly early in the visual hierarchy contribute especially to rapid processing. However, each of the features investigated here explained only a moderate percentage of the variance in performance. Despite the considerable number of image features covered, there are, of course, many other features left untested that might explain the remaining variance. We expect that these will be high-level features that are characteristic or diagnostic for the objects (Delorme et al., 2010) and relate to responses higher up in the visual processing hierarchy. In addition, higher areas may also be involved in rapidly forming initial hypotheses about objects that are only later refined by processing in earlier areas (Hochstein & Ahissar, 2002). In any case, a great deal of the unexplained variance might still be caused by the frequency of feature appearance (expertise), by individual preferences for specific features, and by noise in the visual system. For example, it has been proposed that observers are flexible when it comes to learning to use new features for categorization (Schyns, 1998; Schyns, Goldstone, & Thibaut, 1998). Such flexibility is necessary because whether a feature is informative depends on the task. When, for instance, only simple features are attended during a task, higher order information could be lost despite its usefulness. Attention is another factor that could strongly determine how we process an object. Although the detection of objects does not necessarily require attention (F. F. Li et al., 2002), attention may still modulate performance, in particular for identification (Evans & Treisman, 2005). Similarly, fluctuations in arousal and alertness are likely contributors to trial-to-trial variability, which will remain largely unexplainable on the basis of image features alone. Although image features alone might not explain image processing in full, our study provides a first quantification of how they interact with each other, with object representations, and with high-level processes for rapid detection and identification of objects. 
Relative importance of luminance and luminance contrast to performance
Whereas we find an important role for luminance and contrast features, previous studies reported relatively small effects of luminance and contrast on performance compared to other features (Elder & Velisavljević, 2009; Macé et al., 2010; Nandakumar & Malik, 2009). These results may seem to contradict ours, but there are three possible explanations. (I) Some of the previous studies implied that high-level features such as shape and texture affect performance more strongly than luminance. However, shape and texture are higher level features that depend on low-level features such as luminance and contrast. Hence, it is difficult to make assumptions about their separate contributions to performance, as they were not independently assessed. (II) Other studies found high object recognition performance even after decreasing image luminance or contrast. We would like to stress that the robustness of performance after the degradation of images does not necessarily relate to the impact of image features on performance. Very low levels of luminance or contrast might still be successfully extracted and employed by our visual system and enhance performance. For example, objects in isoluminant images are hard to recognize, but only a very small deviation from the equilibrium of isoluminance substantially recovers recognition performance (Liebmann, 1927; Schiller, Logothetis, & Charles, 1991; West, Spillmann, Cavanagh, Mollon, & Hamlin, 1996). (III) In contrast to previous studies, we accounted for the contrast of the object separately from its background. As we find that background contrast rather than image contrast correlated with performance, studies that manipulated the contrast of the whole image could possibly have found larger effects on performance had the object or background been manipulated separately. 
In sum, this suggests that luminance and luminance contrast are not unimportant per se, but their importance may depend on relations to higher order features, on other features available, and on the objects they constitute. 
Dissociation between detection and identification
Though many image statistics relate to both detection and identification performance, several other statistics relate to only one of them. In contrast to detection, identification was not significantly affected by animal contrast, object size, or variance of object radius. Several other significant correlations with identification were weaker than those with detection. This dissociation, however, is subtle, and it is hard to draw conclusions regarding distinct visual processes or stages that could separately establish successful detection and identification of objects. It has been proposed, however, that correct detection and classification of animals may rely on the analysis of low-level image features rather than on high-level object and scene representations (e.g., Evans & Treisman, 2005). Importantly, note that detection is not equivalent to identification but is perhaps a prerequisite for it. Observers might be able to classify an object into a category without being able to identify, and thus report, which exact object they have seen (also see our animal species mix-up results in Figure 4I), while successful identification cannot occur unless the object as a whole is correctly detected. As such, it is tempting to suggest that after correct detection and the extraction of the object from its background, image features, especially those describing the object's shape, might lose their usefulness with respect to object processing. That object and background/scene information is processed by distinct visual areas (Epstein, Graham, & Downing, 2003; Epstein & Kanwisher, 1998) concurs elegantly with this idea. In the context of these dissociations, our data therefore stress the importance of distinguishing between detection and identification in experimental designs. 
Weak and reversed contribution of anxiety to performance
Anxiety induced by the animal depicted in the presented images correlated only slightly with performance. A partial correlation, controlling for image statistics, revealed a negative effect of anxiety on reaction times and confidence (i.e., high fear reduced performance), and a GLM showed that emotional affect rather than anxiety was a significant predictor of performance. These results appear to conflict with several earlier studies, which found that objects inducing fear or negative feelings facilitated processing. However, two possibilities may explain this apparent discrepancy. First, stronger effects of image features on performance may have shadowed effects of anxiety when performance was assessed per animal category. This could explain the contrasting results of recent studies that report significant fear-induced increases in performance for threatening animals in some cases (e.g., LoBue & DeLoache, 2008; Ohman et al., 2001) and no or only non-specific increases in others (e.g., Lipp et al., 2004; Tipples et al., 2002). When a small number of images is sorted into the categories “fearful” and “non-fearful,” it is conceivable that one or more low-level image features are overrepresented in one of these categories and eventually lead to differences in performance across categories. If this confound is not controlled for, a facilitation of processing by low-level features may shadow or even reverse the intended effect of threat. Second, it is possible that the advantage does not arise from the animals being threatening as such but rather from their being mammals. An advantage for mammals might be based on our increased expertise with this category, which in turn yields better performance. Mammals also have very large and salient eyes, which may further facilitate detection (Delorme et al., 2010). As such, some of the earlier studies may have confounded threat with expertise. 
Including expertise as a factor, in addition to threat and low-level features, may thus be an interesting line of future research. 
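The partial-correlation logic described above (assessing the anxiety–performance relation while controlling for image statistics) can be sketched in a few lines. This is not the authors' analysis code but a minimal illustration on synthetic data with made-up effect sizes: the linear effect of the covariates is regressed out of both variables, and the residuals are correlated.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing out `covariates`
    (ordinary least squares) from both variables."""
    X = np.column_stack([np.ones(len(x)), covariates])
    # Residuals of x and y after removing the linear effect of the covariates
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical example: anxiety ratings vs. reaction times, controlling
# for two image statistics (e.g., luminance and contrast) that influence both.
rng = np.random.default_rng(0)
stats = rng.normal(size=(200, 2))               # per-image statistics
anxiety = stats @ [0.8, -0.5] + rng.normal(size=200)
rt = 0.3 * anxiety + stats @ [1.0, 0.7] + rng.normal(size=200)
print(partial_corr(anxiety, rt, stats))         # residual anxiety-RT relation
```

Because the covariates are removed from both variables, the remaining correlation reflects only the anxiety-specific part of the reaction-time variance.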
The very short presentation times used in the current design may additionally have played an important role in the presence of fear-related effects. Many of the previous studies that report an effect of anxiety used a visual search paradigm in which there is sufficient time to deploy attention to the stimuli. It could be that attention cannot be deployed successfully if images are presented very briefly. There is some evidence from a recent fMRI study that attention is crucial for the processing of fear-relevant stimuli: Alpers et al. (2009) found that attention is important for the activation of the amygdala, a brain structure central to the processing of fear-relevant stimuli, during the presentation of spiders. As such, very brief presentation times might not suffice to trigger attention-related processes and therefore might not activate the mechanism that enhances the processing of fear-relevant stimuli. Nevertheless, even for brief presentation times, our results show effects of anxiety on performance and agree with the view that anxiety acts as a separate mechanism, modulating the processing of and responses to threatening objects. 
Pupil size changes reflect anxiety
Changes in pupil size have been related to a variety of psychological mechanisms, including responses to the onset of novel and/or alerting stimuli (e.g., Andreassi, 2000; Beatty & Lucero-Wagoner, 2000; Hakerem, 1967; Hess, 1975; Janisse, 1977; Kahneman, 1973). Here, we have assessed how the pupil responds to the onset of images that induced different levels of anxiety and that contained a broad range of image features. We find two ways in which anxiety is reflected in the pupil: (I) short-term anxiety (i.e., a temporary state of anxiety) was reflected in the pupillary response to the image, as frightening animals induced larger response amplitudes (i.e., more negativity in the amplitude), and (II) long-term anxiety (i.e., the overall anxiety level of an individual) was reflected in the average pupil size throughout the experiment, which correlated with observers' mean anxiety ratings. These findings agree with a previous study that showed increased pupillary response amplitudes in panic disorder patients (Kojima et al., 2004) but disagree with another study that reported decreased amplitudes for normal observers with increased state anxiety (Bitsios, Szabadi, & Bradshaw, 2002). As Bitsios et al. (2002) used auditory stimuli that were sometimes paired with unconditioned shocks, the divergent results may stem from differences in experimental design: such aversive unconditioned stimuli may affect pupillary responses differently than non-aversive stimuli (e.g., Reinhard & Lachnit, 2002). Furthermore, our results show that mean anxiety ratings were higher for observers with smaller pupil sizes. This is unexpected, as it is generally reported that raised levels of anxiety increase pupil size (Nagai, Wada, & Sunaga, 2002; Simpson & Molloy, 1971). Thus, in contrast to previous reports, our data suggest that an observer's increased level of anxiety is reflected in an overall decrement of pupil size. 
Unlike these previous studies, which used standardized psychological questionnaires to assess state and trait anxiety or audience anxiety (Simpson & Molloy, 1971), we based the anxiety level on subjective image ratings, as our primary aim was to obtain relative anxiety on an image-by-image basis. Whether higher general anxiety levels according to standardized tests yield a different internal mapping of anxiety onto our 0–100 scale, or whether image-based anxiety ratings are fundamentally different from other types of anxiety, remains an open issue. It is known, however, that both state and trait anxiety affect other physiological signals, such as heart rate (Kantor, Endler, Heslegrave, & Kocovski, 2001). It is therefore not unlikely that the pupil is similarly affected by both mechanisms and that we have measured at least some form of anxiety. As several months elapsed between pupil size and anxiety measurements (see Stimuli and procedure section), it is likely that we have measured effects of trait rather than state anxiety, because the former remains stable over time (e.g., Rule & Traver, 1983). Nonetheless, state anxiety might have changed between these measurements. Such changes could increase variance and therefore decrease correlations between anxiety and performance or between anxiety and pupil measurements. Although this hypothesis is interesting in its own right and may merit further investigation, it cannot account for the seemingly contrary effects of anxiety on pupil size or performance. 
Based on the relative image-by-image ratings, we can speculate on some of the mechanisms underlying the relation between anxiety and pupillary control. Activation of the parasympathetic nervous system (PSNS) is responsible for contracting the pupil at short latencies (around 600 ms), while an activated sympathetic nervous system (SNS) tends to dilate the pupil over longer time intervals (around 1100 ms; e.g., Steinhauer & Hakerem, 1992). The increased contraction and larger pupillary response amplitudes could thus result from an overall activation of the PSNS through anxiety-related processes. Such a relation between anxiety and the PSNS is not implausible, as there are known neuronal connections between the amygdala and the locus coeruleus (Bitsios et al., 2002), both fear-mediating nuclei (Charney, Scheier, & Redmond, 1995; Davis, 1992). These nuclei project to the midbrain Edinger–Westphal nucleus, a parasympathetic control site for pupillary contraction (e.g., Einhäuser, Stout, Koch, & Carter, 2008; Loewenfeld, 1993). Our results do not rule out the possibility that these connections are part of the subcortical “quick and dirty” pathway that has been proposed to be responsible for the processing of conditioned threatening stimuli (LeDoux, 1996). 
The correlations between confidence or reaction time and pupil size before image onset and around the pupillary response were also remarkable. We suppose that this is most likely the result of general vigilance: when people have an increased level of attention and/or arousal, the pupil is subject to more variance and reacts more vigorously to visual changes. This would lead to faster pupillary responses and a faster settling of pupil size back to its baseline. Indeed, we find that the correlations were most evident during the phases of dilation and contraction, that is, during the periods in which pupil size was decreasing or increasing at high speed (0–500 ms and 1200–2000 ms). It would be interesting to study the neurological basis of such vigilance effects and the related pupillary dynamics in more detail. 
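The time-resolved analysis described here, correlating pupil size with a performance measure separately at each time point after image onset, can be sketched as follows. This is a simplified illustration on synthetic data (hypothetical trial counts and sampling rate), not the recording pipeline actually used:

```python
import numpy as np

def timecourse_correlation(pupil, rt):
    """Correlate pupil size with reaction time across trials,
    separately at each time sample.

    pupil: (n_trials, n_samples) array of pupil traces
    rt:    (n_trials,) array of reaction times
    Returns an (n_samples,) array of Pearson r values."""
    p = pupil - pupil.mean(axis=0)                  # de-mean per time sample
    r = rt - rt.mean()
    num = (p * r[:, None]).sum(axis=0)
    den = np.sqrt((p ** 2).sum(axis=0) * (r ** 2).sum())
    return num / den

# Synthetic example: 100 trials, 2 s at 50 Hz; late samples carry an
# RT-related signal, early samples are pure noise.
rng = np.random.default_rng(2)
rt = rng.normal(0.5, 0.1, size=100)
pupil = rng.normal(size=(100, 100))
pupil[:, 60:] += rt[:, None] * 5                    # RT-related signal after 1.2 s
r = timecourse_correlation(pupil, rt)
print(r[:60].mean(), r[60:].mean())                 # near 0 early, clearly positive late
```

The per-sample correlations trace out when in the pupillary response a behavioral measure is reflected, which is the kind of time course the correlations above refer to.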
Conclusion
To summarize, features that make the object pop out from its background mainly accounted for both detection and identification performance. Furthermore, high image luminance increased and high background contrast decreased performance. Anxiety affected performance only weakly and negatively, although pupil responses to fearful images showed that anxiety is a valid concept that changes neural processing. We further reported a slight dissociation between how particular image features affect detection and identification performance. These results indicate that future studies on object processing should carefully control for relative effects of image features, their intercorrelations, and their relation to a variety of performance measures. 
Appendix A
Supplementary analyses
Analyzing data separately by observer (Figures A1A–A1D) or pooling across observers (Figures A1E–A1H) yields qualitatively similar results to the individual data presented in Figure 4. Post-hoc tests, including the respective test statistics and confidence intervals for all statistical analyses, are provided in Table A1. Inspecting the raw data underlying the correlations between performance measures and features (image statistics and emotional valences) individually (Figure A2) verifies that the reported correlations are not dominated by outliers (cf. Figure 5). A factor analysis revealed groupings of features (Figure A3). Variables that were largely redundant with other variables, namely, background luminance (grouped with background contrast and image luminance, red group in Figure A3), or that did not by themselves correlate with any performance measure (image contrast and background saturation) were excluded from the subsequent GLM analysis. 
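The variable-screening step described above, dropping predictors that are largely redundant with others before fitting the GLM, can be illustrated with a simple correlation-based check. This is a sketch with made-up feature names and data, not the factor analysis actually used:

```python
import numpy as np

def redundant_pairs(features, names, threshold=0.8):
    """Return (name_a, name_b, r) for feature pairs whose absolute
    correlation exceeds `threshold` (candidates for exclusion
    before fitting a GLM)."""
    corr = np.corrcoef(features, rowvar=False)
    n = corr.shape[0]
    return [(names[i], names[j], corr[i, j])
            for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]

# Hypothetical features: background luminance is nearly a copy of
# image luminance, so the pair should be flagged as redundant.
rng = np.random.default_rng(1)
img_lum = rng.normal(size=300)
bg_lum = img_lum + 0.1 * rng.normal(size=300)
bg_contrast = rng.normal(size=300)
feats = np.column_stack([img_lum, bg_lum, bg_contrast])
names = ["image_luminance", "background_luminance", "background_contrast"]
for a, b, r in redundant_pairs(feats, names):
    print(a, b, round(r, 2))
```

A factor analysis additionally groups features by shared variance, but flagging highly intercorrelated pairs already captures the motivation: keeping near-duplicate predictors in a GLM makes the individual coefficients unstable and hard to interpret.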
Figure A1
 
Performance details. Box plots of the data shown in Figure 4: median (red line), 25th and 75th percentiles (blue boxes), and extent of the data without outliers (black dashed lines). (A–D) Data across observers. (E–H) Pooled data. (A, E) Reaction times. (B, F) Confidence ratings. (C, G) Anxiety ratings. (D, H) Emotional affect.
Table A1
 
Post-hoc comparisons across animal categories. Only the post-hoc tests for significant differences as indicated by an ANOVA are depicted. Significant differences (p < 0.05) are highlighted in bold text.
Columns (left to right): Phobic, Predators, Domesticated, Pets
Detection probability
p-Values
    Predators 0.0046
    Domesticated 0.0022 0.0194
    Pets 0.0045 0.0933 0.2245
    Non-phobic 0.1409 0.0599 0.0011 0.0285
t Statistic
    Predators 4.0910
    Domesticated 4.6858 3.0180
    Pets 4.1098 1.9414 −1.3321
    Non-phobic 1.6598 −2.2419 −5.3225 −2.7500
Confidence intervals
    Predators 0.0510/0.1909
    Domesticated 0.1003/0.3045 0.0176/0.1453
    Pets 0.0692/0.2568 −0.0092/0.0933 −0.1093/0.0305
    Non-phobic −0.0191/0.1089 −0.1562/0.0042 −0.2274/−0.0875 −0.2197/−0.0166
Reaction times (all)
p-Values
    Predators 0.0105
    Domesticated 0.0001 0.0025
    Pets 0.0042 0.2656 0.0955
    Non-phobic 0.6386 0.0215 0.0007 0.0159
t Statistic
    Predators −3.4605
    Domesticated −8.5654 −4.6003
    Pets −4.1774 −1.2099 1.9260
    Non-phobic 0.4907 2.9485 5.8047 3.1620
Confidence intervals
    Predators −0.0389/−0.0073
    Domesticated −0.0687/−0.0390 −0.0465/−0.0149
    Pets −0.0496/−0.0137 −0.0253/0.0082 −0.0050/0.0494
    Non-phobic −0.0158/0.0241 0.0054/0.0491 0.0343/0.0816 0.0090/0.0625
Reaction times (only correctly detected)
p-Values
    Predators 0.0136
    Domesticated 0.0006 0.0028
    Pets 0.0002 0.5411 0.0885
    Non-phobic 0.9107 0.0146 0.0003 0.0220
t Statistic
    Predators −3.2719
    Domesticated −5.9072 −4.4919
    Pets −7.3335 −0.6423 1.9774
    Non-phobic −0.1163 3.2241 6.6696 2.9295
Confidence intervals
    Predators −0.0571/−0.0092
    Domesticated −0.0944/−0.0404 −0.0523/−0.0162
    Pets −0.0545/−0.0279 −0.0379/0.0217 −0.0051/0.0575
    Non-phobic −0.0257/0.0233 0.0085/0.0553 0.0427/0.0896 0.0077/0.0723
Confidence
p-Values
    Predators 0.0059
    Domesticated 0.0001 0.0838
    Pets 0.0013 0.4643 0.0432
    Non-phobic 0.4892 0.0336 0.0003 0.0043
t Statistic
    Predators 3.9023
    Domesticated 8.2335 2.0149
    Pets 5.1675 0.7739 −2.4636
    Non-phobic 0.7298 −2.6358 −6.6331 −4.1559
Confidence intervals
    Predators −2.1860/8.6082
    Domesticated 2.2756/9.9036 −3.5802/9.3371
    Pets −2.8243/7.7106 −6.9180/5.3821 −8.3628/1.0700
    Non-phobic −5.2042/3.5173 −10.3915/2.2824 −11.0435/−2.8225 −5.8813/−0.6919
Anxiety
p-Values
    Predators 0.0060
    Domesticated 0.0001 0.0000
    Pets 0.0000 0.0000 0.1430
    Non-phobic 0.0000 0.0000 0.7555 0.5044
t Statistic
    Predators 3.8816
    Domesticated −8.7288 −14.2350
    Pets −12.1246 −37.7291 −1.6498
    Non-phobic −13.0446 −12.8014 −0.3238 0.7036
Confidence intervals
    Predators 4.9700/20.4646
    Domesticated −71.020/−40.743 −79.994/−57.203
    Pets −74.762/−50.360 −79.996/−70.560 −16.253/2.894
    Non-phobic −68.482/−47.464 −83.748/−57.633 −17.366/13.182 −10.830/20.005
Emotional affect
p-Values
    Predators 0.0259
    Domesticated 0.0009 0.4824
    Pets 0.0007 0.0225 0.1478
    Non-phobic 0.0008 0.7212 0.2749 0.0101
t Statistic
    Predators 2.8155
    Domesticated 5.5024 0.7417
    Pets 5.7372 2.9147 1.6270
    Non-phobic 5.6226 0.3716 −1.1845 −3.4914
Confidence intervals
    Predators 3.4930/40.1280
    Domesticated 16.7287/41.9421 −16.4653/31.5151
    Pets 22.9300/55.0841 3.2455/31.1475 −4.3844/23.7277
    Non-phobic 14.2240/34.8715 −14.6798/20.1543 −14.3454/4.7702 −24.2520/−4.6665
Figure A2
 
Scatter plots for all correlations between performance measures and features. Plots of performance (y-axis) as a function of image statistics or emotional valences (x-axis). Each data point indicates the average performance per image. Data were z-score normalized; each axis spans z-score values between −10 and 10. Blue lines are fitted linear regressions. Order of features and performance measures as in Figure 5.
Figure A3
 
Factor analysis. Image features as a function of three factors that indicate high covariance across features. The colors indicate groups of image features that covary. Abbreviations as in Figure 5.
Acknowledgments
We thank Marius 't Hart and Susanne Klauke for support with the color space analysis. This work was supported by the German Research Foundation (DFG) through Research Training Group 885 “NeuroAct” (MN) and Grant EI852/1 (WE). 
Commercial relationships: none. 
Corresponding author: Marnix Naber. 
Email: marnixnaber@gmail.com. 
Address: AG Neurophysik, Karl-von-Frisch-Str. 8a (Altes MPI), 35032 Marburg, Germany. 
References
Adolphs R. Gosselin F. Buchanan T. W. Tranel D. Schyns P. Damasio A. R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72. [CrossRef] [PubMed]
Alpern M. Campbell F. W. (1962). The spectral sensitivity of the consensual light reflex. The Journal of Physiology, 164, 478–507. [CrossRef] [PubMed]
Alpers G. W. Gerdes A. B. Lagarie B. Tabbert K. Vaitl D. Stark R. (2009). Attention and amygdala activity: An fMRI study with spider pictures in spider phobia. Journal of Neural Transmission, 116, 747–757. [CrossRef] [PubMed]
Andreassi J. L. (2000). Pupillary response and behavior. In Andreassi J. L. (Ed.), Psychophysiology: Human behavior & physiological response (4th ed., pp. 218–233). Mahwah, NJ: Lawrence Erlbaum Assoc.
Barlow H. B. (1961). Possible principles underlying the transformations of sensory messages. In Rosenblith W. A. (Ed.), Sensory communication (pp. 217–234). Cambridge, MA: MIT Press.
Baylis G. C. Driver J. (2001). Shape-coding in IT cells generalizes over contrast and mirror reversal, but not figure–ground reversal. Nature Neuroscience, 4, 937–942. [CrossRef] [PubMed]
Beatty J. Lucero-Wagoner B. (2000). The pupillary system. In Cacioppo J. T. Berntson G. Tassinary L. G. (Eds.), Handbook of psychophysiology (2nd ed., pp. 142–162). New York: Cambridge University Press.
Benjamini Y. Hochberg Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B, 57, 289–300.
Biederman I. (1972). Perceiving real-world scenes. Science, 177, 77–80. [CrossRef] [PubMed]
Biederman I. Glass A. L. Stacy E. W. (1973). Searching for objects in real-world scenes. Journal of Experimental Psychology, 97, 22–27. [CrossRef] [PubMed]
Bitsios P. Szabadi E. Bradshaw C. M. (2002). Relationship of the ‘fear-inhibited light reflex’ to the level of state/trait anxiety in healthy subjects. International Journal of Psychophysiology, 43, 177–184. [CrossRef] [PubMed]
Blanchette I. (2006). Snakes, spiders, guns, and syringes: How specific are evolutionary constraints on the detection of threatening stimuli? Quarterly Journal of Experimental Psychology, 59, 1484–1504. [CrossRef]
Brainard D. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Brodie E. E. Wallace A. M. Sharrat B. (1991). Effect of surface characteristics and style of production on naming and verification of pictorial stimuli. American Journal of Psychology, 104, 517–545. [CrossRef] [PubMed]
Brosch T. Sharma D. (2005). The role of fear-relevant stimuli in visual search: A comparison of phylogenetic and ontogenetic stimuli. Emotion, 5, 360–364. [CrossRef] [PubMed]
Charney D. S. Scheier I. H. Redmond D. E. (1995). Noradrenergic neural substrates for anxiety and fear. In Bloom F. E. Kupfer D. J. (Eds.), Psychopharmacology: The fourth generation of progress (pp. 387–396). New York: Raven Press.
Cornelissen F. Peters E. Palmer J. (2002). The Eyelink Toolbox: Eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers, 34, 613–617. [CrossRef]
Davis M. (1992). The role of the amygdala in conditioned fear. In Aggleton J. (Ed.), The amygdala: Neurobiological aspects of emotion, memory, and mental dysfunction (pp. 255–305). New York: Wiley-Liss.
Deloache J. S. Lobue V. (2009). The narrow fellow in the grass: Human infants associate snakes and fear. Developmental Science, 12, 201–207. [CrossRef] [PubMed]
Delorme A. Richard G. Fabre-Thorpe M. (2010). Key visual features for rapid categorization of animals in natural scenes. Frontiers in Psychology, 1, 1:13.
Derrington A. M. Krauskopf J. Lennie P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 241–265. [CrossRef] [PubMed]
Einhäuser W. Stout J. Koch C. Carter O. (2008). Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry. Proceedings of the National Academy of Sciences of the United States of America, 105, 1704–1709. [CrossRef] [PubMed]
Elder J. H. Velisavljević L. (2009). Cue dynamics underlying rapid detection of animals in natural scenes. Journal of Vision, 9(7):7, 1–20, http://www.journalofvision.org/content/9/7/7, doi:10.1167/9.7.7. [PubMed] [Article] [CrossRef] [PubMed]
Epstein R. Graham K. S. Downing P. E. (2003). Viewpoint-specific scene representations in human parahippocampal cortex. Neuron, 37, 865–876. [CrossRef] [PubMed]
Epstein R. Kanwisher N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601. [CrossRef] [PubMed]
Evans K. K. Treisman A. (2005). Perception of objects in natural scenes: Is it really attention free? Journal of Experimental Psychology: Human Perception and Performance, 31, 1476–1492. [CrossRef] [PubMed]
Felleman D. J. Van Essen D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47. [CrossRef] [PubMed]
Gamlin P. D. Zhang H. Harlow A. Barbur J. L. (1998). Pupil responses to stimulus color, structure and light flux increments in the rhesus monkey. Vision Research, 38, 3353–3358. [CrossRef] [PubMed]
Gaspar C. M. Rousselet G. A. (2009). How do amplitude spectra influence rapid animal detection? Vision Research, 49, 3001–3012. [CrossRef] [PubMed]
Gegenfurtner K. R. Rieger J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10, 805–808. [CrossRef] [PubMed]
Goffaux V. Jacques C. Mouraux A. Oliva A. Schyns P. G. Rossion B. (2005). Diagnostic colours contribute to the early stages of scene categorization: Behavioral and neurophysiological evidence. Visual Cognition, 12, 878–892. [CrossRef]
Grill-Spector K. Kushnir T. Edelman S. Itzchak Y. Malach R. (1998). Cue-invariant activation in object-related areas of the human occipital lobe. Neuron, 21, 191–202. [CrossRef] [PubMed]
Grill-Spector K. Kushnir T. Hendler T. Edelman S. Itzchak Y. Malach R. (1998). A sequence of object-processing stages revealed by fMRI in the human occipital lobe. Human Brain Mapping, 6, 316–328. [CrossRef] [PubMed]
Hakerem G. (1967). Pupillography. In Venables P. H. Martin I. (Eds.), A manual of psychophysiological methods (pp. 335–349). Amsterdam, The Netherlands: North-Holland Publishing.
Hess E. (1975). The tell-tale eye. New York: Van Nostrand Reinhold.
Hochstein S. Ahissar M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36, 791–804. [CrossRef] [PubMed]
Humphrey G. K. Goodale M. A. Jakobson L. S. Servos P. (1994). The role of surface information in object recognition: Studies of a visual form agnosic and normal subjects. Perception, 23, 1457–1481. [CrossRef] [PubMed]
Isbell L. A. (2006). Snakes as agents of evolutionary change in primate brains. Journal of Human Evolution, 51, 1–35. [CrossRef] [PubMed]
Janisse M. (1977). Pupillometry: The psychology of the pupillary response. Washington, DC: Hemisphere Publishing Co.
Johnson J. S. Olshausen B. A. (2003). Timecourse of neural signatures of object recognition. Journal of Vision, 3(7):4, 499–512, http://www.journalofvision.org/content/3/7/4, doi:10.1167/3.7.4. [PubMed] [Article] [CrossRef]
Johnson J. S. Olshausen B. A. (2005). The earliest EEG signatures of object recognition in a cued-target task are postsensory. Journal of Vision, 5(4):2, 299–312, http://www.journalofvision.org/content/5/4/2, doi:10.1167/5.4.2. [PubMed] [Article] [CrossRef]
Joubert O. R. Fize D. Rousselet G. A. Fabre-Thorpe M. (2008). Early interference of context congruence on object processing in rapid visual categorization of natural scenes. Journal of Vision, 8(13):11, 1–18, http://www.journalofvision.org/content/8/13/11, doi:10.1167/8.13.11. [PubMed] [Article] [CrossRef] [PubMed]
Joubert O. R. Rousselet G. A. Fabre-Thorpe M. Fize D. (2009). Rapid visual categorization of natural scene contexts with equalized amplitude spectrum and increasing phase noise. Journal of Vision, 9(1):2, 1–16, http://www.journalofvision.org/content/9/1/2, doi:10.1167/9.1.2. [PubMed] [Article] [CrossRef] [PubMed]
Joubert O. R. Rousselet G. A. Fize D. Fabre-Thorpe M. (2007). Processing scene context: Fast categorization and object interference. Vision Research, 47, 3286–3297. [CrossRef] [PubMed]
Kahneman D. (1973). Attention and effort. New Jersey, USA: Prentice Hall.
Kantor L. Endler N. S. Heslegrave R. J. Kocovski N. L. (2001). Validating self-report measures of state and trait anxiety against a physiological measure. Current Psychology, 20, 207–215. [CrossRef]
Kirchner H. Thorpe S. J. (2006). Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46, 1762–1776. [CrossRef] [PubMed]
Kojima M. Shioiri T. Hosoki T. Kitamura H. Bando T. Someya T. (2004). Pupillary light reflex in panic disorder: A trial using audiovisual stimulation. European Archives of Psychiatry and Clinical Neuroscience, 254, 242–244. [CrossRef] [PubMed]
Kourtzi Z. Kanwisher N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20, 3310–3318. [PubMed]
Kourtzi Z. Kanwisher N. (2001). Representation of perceived object shape by the human lateral occipital complex. Science, 293, 1506–1509. [CrossRef] [PubMed]
LeDoux J. (1996). Emotional networks and motor control: A fearful view. Progress in Brain Research, 107, 437–446. [PubMed]
Li F. F. VanRullen R. Koch C. Perona P. (2002). Rapid natural scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences of the United States of America, 99, 9596–9601. [CrossRef] [PubMed]
Li Z. Liang P. Sun F. (2006). Properties of pupillary responses to dynamic random-dot stereograms. Experimental Brain Research, 168, 436–440. [CrossRef] [PubMed]
Liebmann S. (1927). Über das Verhalten farbiger Formen bei Helligkeitsgleichheit von Figur und Grund. Psychologische Forschung, 9, 300–353. [CrossRef]
Lipp O. V. Derakshan N. Waters A. M. Logies S. (2004). Snakes and cats in the flower bed: Fast detection is not specific to pictures of fear-relevant animals. Emotion, 4, 233–250. [CrossRef] [PubMed]
LoBue V. DeLoache J. S. (2008). Detecting the snake in the grass: Attention to fear-relevant stimuli by adults and young children. Psychological Science, 19, 284–289. [CrossRef] [PubMed]
LoBue V. DeLoache J. S. (2010). Superior detection of threat-relevant stimuli in infancy. Developmental Science, 13, 221–228. [CrossRef] [PubMed]
Loewenfeld I. (1993). The pupil: Anatomy, physiology, and clinical applications. Detroit, MI: Wayne State Univ. Press.
Macé M. J. Delorme A. Richard G. Fabre-Thorpe M. (2010). Spotting animals in natural scenes: Efficiency of humans and monkeys at very low contrasts. Animal Cognition, 13, 405–418. [CrossRef] [PubMed]
Malach R. Grill-Spector K. Kushnir T. Edelman S. Itzchak Y. (1998). Rapid shape adaptation reveals position and size invariance. NeuroImage, 7, S43.
McCotter M. Gosselin F. Sowden P. Schyns P. (2005). The use of visual information in natural scenes. Visual Cognition, 12, 938–953. [CrossRef]
Mogg K. Bradley B. P. (1998). A cognitive-motivational analysis of anxiety. Behaviour Research and Therapy, 36, 809–848. [CrossRef] [PubMed]
Morris J. S. Ohman A. Dolan R. J. (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proceedings of the National Academy of Sciences of the United States of America, 96, 1680–1685. [CrossRef] [PubMed]
Mouchetant-Rostaing Y. Giard M. H. Delpuech C. Echallier J. F. Pernier J. (2000). Early signs of visual categorization for biological and non-biological stimuli in humans. Neuroreport, 11, 2521–2525. [CrossRef] [PubMed]
Nagai M. Wada M. Sunaga N. (2002). Trait anxiety affects the pupillary light reflex in college students. Neuroscience Letters, 328, 68–70. [CrossRef] [PubMed]
Nandakumar C. Malik J. (2009). Understanding rapid category detection via multiply degraded images. Journal of Vision, 9(6):19, 1–18, http://www.journalofvision.org/content/9/6/19, doi:10.1167/9.6.19. [PubMed] [Article] [CrossRef] [PubMed]
Ohman A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information processing mechanisms. In Lewis M. Haviland J. M. (Eds.), Handbook of emotions (pp. 511–536). New York: Guilford Press.
Ohman A. Flykt A. Esteves F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130, 466–478. [CrossRef] [PubMed]
Oliva A. Schyns P. G. (2000). Diagnostic colors mediate scene recognition. Cognitive Psychology, 41, 176–210. [CrossRef] [PubMed]
Pelli D. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [CrossRef] [PubMed]
Potter M. C. Levy E. I. (1969). Recognition memory for a rapid sequence of pictures. Journal of Experimental Psychology, 81, 10–15. [CrossRef] [PubMed]
Reinhard G. Lachnit H. (2002). Differential conditioning of anticipatory pupillary dilation responses in humans. Biological Psychology, 60, 51–68. [CrossRef] [PubMed]
Renninger L. W. Malik J. (2004). When is scene identification just texture recognition? Vision Research, 44, 2301–2311. [CrossRef] [PubMed]
Rousselet G. A. Husk J. S. Bennett P. J. Sekuler A. B. (2008). Time course and robustness of ERP object and face differences. Journal of Vision, 8(12):3, 1–18, http://www.journalofvision.org/content/8/12/3, doi:10.1167/8.12.3. [PubMed] [Article] [CrossRef] [PubMed]
Rule W. R. Traver M. D. (1983). Test–retest reliabilities of State–Trait Anxiety Inventory in a stressful social analogue situation. Journal of Personality Assessment, 47, 276–277. [CrossRef] [PubMed]
Schiller P. H. Logothetis N. K. Charles E. R. (1991). Parallel pathways in the visual system: Their role in perception at isoluminance. Neuropsychologia, 29, 433–441. [CrossRef] [PubMed]
Schyns P. G. (1998). Diagnostic recognition: Task constraints, object information, and their interactions. Cognition, 67, 147–179. [CrossRef] [PubMed]
Schyns P. G. Goldstone R. L. Thibaut J. P. (1998). The development of features in object concepts. Behavioral and Brain Sciences, 21, 1–17; discussion 17–54.
Seligman M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77, 406–418. [CrossRef]
Simpson H. M. Molloy F. M. (1971). Effects of audience anxiety on pupil size. Psychophysiology, 8, 491–496. [CrossRef] [PubMed]
Smith M. L. Gosselin F. Schyns P. G. (2007). From a face to its category via a few information processing states in the brain. Neuroimage, 37, 974–984. [CrossRef] [PubMed]
Steinhauer S. R. Hakerem G. (1992). The pupillary response in cognitive psychophysiology and schizophrenia. Annals of the New York Academy of Sciences, 658, 182–204. [CrossRef] [PubMed]
't Hart B. M. Schmidt H. Klein-Harmeyer I. Einhäuser W. (2011). Objects in natural scenes: Do rapid detection and gaze-control utilize the same features? [Abstract]. In European Conference of Eye Movements (p. 64). Marseille, France: ECEM.
Thorpe S. J. Fize D. Marlot C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522. [CrossRef] [PubMed]
Thorpe S. J. Gegenfurtner K. R. Fabre-Thorpe M. Bülthoff H. H. (2001). Detection of animals in natural images using far peripheral vision. European Journal of Neuroscience, 14, 869–876. [CrossRef] [PubMed]
Tipples J. Young A. W. Quinlan P. Broks P. Ellis A. W. (2002). Searching for threat. Quarterly Journal of Experimental Psychology A, 55, 1007–1026. [CrossRef]
Tsujimura S. Wolffsohn J. S. Gilmartin B. (2001). A linear chromatic mechanism drives the pupillary response. Proceedings of the Royal Society of London B: Biological Sciences, 268, 2203–2209. [CrossRef]
van Rijsbergen N. J. Schyns P. G. (2009). Dynamics of trimming the content of face representations for categorization in the brain. PLoS Computational Biology, 5, e1000561.
VanRullen R. Thorpe S. J. (2001a). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception, 30, 655–668. [CrossRef]
VanRullen R. Thorpe S. J. (2001b). The time course of visual processing: From early perception to decision-making. Journal of Cognitive Neuroscience, 13, 454–461. [CrossRef]
Vogels R. (1999). Categorization of complex visual images by rhesus monkeys: Part 1. Behavioural study. European Journal of Neuroscience, 11, 1223–1238. [CrossRef] [PubMed]
Waters A. M. Lipp O. V. Spence S. H. (2004). Attentional bias toward fear-related stimuli: An investigation with nonselected children and adults and children with anxiety disorders. Journal of Experimental Child Psychology, 89, 320–337. [CrossRef] [PubMed]
West M. Spillmann L. Cavanagh P. Mollon J. Hamlin S. (1996). Susanne Liebmann in the critical zone. Perception, 25, 1451–1495. [CrossRef]
Wichmann F. A. Drewes J. Rosas P. Gegenfurtner K. R. (2010). Animal detection in natural scenes: Critical features revisited. Journal of Vision, 10(4):6, 1–27, http://www.journalofvision.org/content/10/4/6, doi:10.1167/10.4.6. [PubMed] [Article] [CrossRef] [PubMed]
Wichmann F. A. Sharpe L. T. Gegenfurtner K. R. (2002). The contributions of color to recognition memory for natural scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 509–520. [CrossRef] [PubMed]
Wurm L. H. Legge G. E. Isenberg L. M. Luebker A. (1993). Color improves object recognition in normal and low vision. Journal of Experimental Psychology: Human Perception and Performance, 19, 899–911. [CrossRef] [PubMed]
Yao A. Y. Einhäuser W. (2008). Color aids late but not early stages of rapid natural scene recognition. Journal of Vision, 8(16):12, 1–13, http://www.journalofvision.org/content/8/16/12, doi:10.1167/8.16.12. [PubMed] [Article] [CrossRef] [PubMed]
Young R. S. Alpern M. (1980). Pupil responses to foveal exchange of monochromatic lights. Proceedings of the National Academy of Sciences of the United States of America, 70, 697–706.
Figure 1
 
Paradigms and stimuli. (A) In the rapid presentation task, observers were shown an image of either an animal or a distracter for 10 ms, which was subsequently masked by a random noise image. Observers decided as fast as possible whether they had seen an animal by making an eye movement to the left or right part of the screen (text is displayed here as “yes”/“no” for legibility; the actual display read “no animal”/“animal”). Only in trials in which observers indicated having seen an animal were they asked to identify the species from a list (see gray arrow; the arrangement of species is illustrated here by letters for legibility, but the actual display showed full species names) and to indicate their confidence on a ruler (0–100). (B) In the emotional valence rating task, observers were shown images of animals until they responded. Observers were asked to indicate their affect (−100 to 100) for the animal and their level of anxiety (0–100) were they to encounter the animal in real life.
Figure 2
 
Image examples. (A) Examples of presented images of animal species (columns) per category (rows). (B) Examples of several non-animal distracter images from different categories.
Figure 3
 
Cutout examples and extreme image statistics. (A) Four examples (rows) of original animal images (first column) in which the animal was cut out (second column) and analyzed separately from its background (third column). Cutouts were created by two individuals. Green and red areas indicate non-overlapping parts of the individual cutouts; white areas indicate overlap between the two cutouts (second column). (B) A variety of image statistics were computed for correlation with performance; the panel depicts example images at the statistical extremes. For example, the Animal Rad. Var. panel shows an image in which the animal has a highly variable outline radius (i.e., a complex rather than circular or rectangular shape). Col. Con. Ani.–Bg. shows an image of an animal whose color strongly contrasts with its background.
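The Animal Rad. Var. statistic in the caption (variance in the radius of the animal's shape outline) can be illustrated with a short sketch. This is a hypothetical implementation for intuition only; the paper's exact normalization may differ:

```python
from math import cos, hypot, pi, sin
from statistics import mean, pvariance

def radial_variance(outline):
    """Scale-invariant shape-complexity measure: variance of each
    outline point's distance to the centroid, after dividing by the
    mean radius.  A circle yields ~0; irregular outlines yield more."""
    cx = mean(p[0] for p in outline)
    cy = mean(p[1] for p in outline)
    radii = [hypot(x - cx, y - cy) for x, y in outline]
    mr = mean(radii)
    return pvariance([r / mr for r in radii])

# A near-circular outline (regular 12-gon) vs. a square outline
# sampled at its corners and edge midpoints:
circle = [(cos(2 * pi * k / 12), sin(2 * pi * k / 12)) for k in range(12)]
square = [(1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1)]
```

On these outlines the 12-gon's radial variance is near zero while the square's is clearly positive, matching the caption's notion of "complex" versus "circular" shapes.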
Figure 4
 
Detection and identification performance. (A) Percentage of hits (animal correctly detected) and false alarms (distracter mistaken for an animal) for each of the 8 observers (colored points) and the average (blue). Observers clearly perform above chance (diagonal) and show similar performance despite variability in their criteria. (B) Percentage of false alarms by animal category (for abbreviations, see text); each observer is represented by a line (colors as in (A)). Note the similar pattern across observers. (C) Probability of correctly detecting an animal (percentage of hits) by category (notation as in (B)). (D) Mean reaction times per category. (E) Probability of correctly identifying the animal's category, given that it was correctly detected. (F) Mean confidence ratings. (G) Mean anxiety. (H) Mean emotional affect per category. (I) Identification confusion matrix, given that an animal was detected correctly but incorrectly identified. Brighter patches indicate species that are more likely to be mixed up with each other during identification. The matrix is normalized per presented species (3 per category) with the main diagonal excluded (blue), such that each entry gives the probability of a presented species (column) being confused with another species (row), excluding correct identifications. White lines delineate animal categories; the order of species per category is as given in Figure 1A. See Figure 1 for the list of category abbreviations. Note that the lines in (B) through (H) are meant for illustration only and do not indicate a functional dependence.
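Figure 4A separates sensitivity from response criterion in the sense of signal detection theory. As an illustration only (not the authors' analysis), the standard equal-variance indices can be computed from hit and false-alarm rates like this:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal-detection indices.  d' is sensitivity
    (distance between signal and noise distributions in z-units);
    c is the response criterion (c > 0 = conservative observer)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Two hypothetical observers with near-identical sensitivity but
# different criteria (one liberal, one conservative):
d_lib, c_lib = dprime_criterion(0.90, 0.20)
d_con, c_con = dprime_criterion(0.75, 0.09)
```

Observers lying on the same d′ contour in hit/false-alarm space differ only in criterion, which is the caption's "similar performance despite variability in their criteria."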
Figure 5
 
Correlation matrix. Matrix of correlations between performance variables (vertical axis) and image statistics and emotional valences (horizontal axis); r-values are given as numbers. Only significant correlations (p smaller than an FDR-adjusted alpha level of 0.014) are presented with numbers. The most noteworthy results are the correlations between animal (rather than whole-image) statistics and performance, and the absent or weakly negative correlations between anxiety and performance. Animal (i.e., target) statistics have much larger effects on performance than whole-image statistics, and anxiety has a weak effect on performance that is reversed relative to our prior expectation. See Figure 1 for the list of category abbreviations. Abbreviations of image statistics: Rad. Var. = variance in radius of animal shape outline, Col. = color, Lum. = luminance, Con. = contrast, Ani. = animal, Bg. = background.
Figure 6
 
Correlation matrix. Matrix of correlations between image statistics and emotional valences. Only significant correlations (at an FDR-adjusted alpha level of 0.020) are depicted.
Figure 7
 
Pupil responses. Change in pupil size relative to image onset as a function of time after image onset. (A) Pupil size was correlated with each dependent variable at some point in time (thick lines represent periods of significant correlations at p < 0.05). (B) Beta coefficients of a GLM indicate how strongly behavioral factors predicted pupil size. Pupil size plotted (C) per anxiety level for correctly identified animals, (D) for unidentified animals, (E) per presented animal category for identified animals, (F) per presented animal category for unidentified animals, and (G) per observed (i.e., perceived rather than presented) animal category for unidentified animals. In sum, anxiety is reflected in the pupil after correct identification of the animal. Significance levels for the plots in which pupil size is presented per animal category (E–G) were based on comparisons between pooled trials of threatening animals, on the one hand, and pooled trials of non-threatening animals, on the other. High and low anxiety levels in (C) and (D) were based on splitting the respective data at the median, such that each level (high and low anxiety) contains the same number of trials.
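Panels (C) and (D) split trials at the median anxiety rating so that both groups contain the same number of trials. A rank-based split of this kind could be sketched as follows (the data here are hypothetical):

```python
def median_split(trials, key):
    """Rank-based median split: sort trials by the rating returned
    by `key` and cut the sorted list in half, so the low- and
    high-rating groups contain the same number of trials."""
    ranked = sorted(trials, key=key)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

# Hypothetical (trial_id, anxiety_rating) pairs:
trials = [(1, 10), (2, 80), (3, 35), (4, 55), (5, 5), (6, 90)]
low, high = median_split(trials, key=lambda t: t[1])
```

Splitting by rank rather than by the raw median value guarantees equal group sizes even when several trials share the median rating.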
Figure A1
 
Performance details. Box plots of the data shown in Figure 4: median (red line), 25th and 75th percentiles (blue boxes), and extent of the data without outliers (black dashed lines). (A–D) Data across observers. (E–H) Pooled data. (A, E) Reaction times. (B, F) Confidence ratings. (C, G) Anxiety ratings. (D, H) Emotional affect.
Figure A2
 
Scatter plots for all correlations between performance measures and features. Plots of performance (y-axis) as a function of image statistics or emotional valences (x-axis). Each data point indicates the average performance per image. Data were z-score normalized, and each axis spans z-score values between −10 and 10. Blue lines are fitted linear regressions. The order of features and performance measures is as in Figure 5.
Figure A3
 
Factor analysis. Image features as a function of three factors that indicate high covariance across features. The colors indicate groups of image features that covary. Abbreviations as in Figure 5.
Table 1
 
GLM—Effects of image statistics and emotional valences on performance (significant predictors are printed in bold).
Detection Identification
Variable Beta t Stat. p-Value Variable Beta t Stat. p-Value
Lum. Im. 0.214 3.97 7.883 × 10⁻⁵ Lum. Im. 0.122 2.11 0.035
Lum. Ani. −0.021 0.33 0.743 Lum. Ani. −0.020 0.29 0.772
Con. Ani. 0.148 2.92 0.004 Con. Ani. 0.077 1.42 0.158
Con. Bg. −0.271 6.25 6.902 × 10⁻¹⁰ Con. Bg. −0.248 5.27 1.876 × 10⁻⁷
Sat. Im. −0.044 0.86 0.390 Sat. Im. −0.029 0.53 0.593
Sat. Ani. 0.126 2.55 0.011 Sat. Ani. 0.152 2.89 0.004
Col. Con. Ani.–Bg. 0.020 0.34 0.736 Col. Con. Ani.–Bg. −0.064 1.02 0.310
Col. + Lum. Con. Ani.–Bg. 0.163 2.87 0.004 Col. + Lum. Con. Ani.–Bg. 0.176 2.90 0.004
Size Ani. 0.082 2.22 0.027 Size Ani. −0.003 0.08 0.936
Var. Rad. Ani. 0.059 1.63 0.103 Var. Rad. Ani. 0.070 1.81 0.071
Anxiety 0.030 0.66 0.513 Anxiety −0.005 0.10 0.920
Emotional affect 0.131 2.89 0.004 Emotional affect 0.025 0.51 0.608
Reaction times Confidence
Variable Beta t Stat. p-Value Variable Beta t Stat. p-Value
Lum. Im. −0.145 2.49 0.013 Lum. Im. 0.202 3.78 1.739 × 10⁻⁴
Lum. Ani. −0.034 0.50 0.619 Lum. Ani. −0.047 0.75 0.453
Con. Ani. −0.043 0.79 0.428 Con. Ani. 0.092 1.83 0.067
Con. Bg. 0.112 2.38 0.017 Con. Bg. −0.358 8.21 1.221 × 10⁻¹⁵
Sat. Im. 0.065 1.18 0.240 Sat. Im. −0.062 1.22 0.223
Sat. Ani. −0.086 1.62 0.107 Sat. Ani. 0.150 3.05 0.002
Col. Con. Ani.–Bg. 0.024 0.37 0.709 Col. Con. Ani.–Bg. −0.007 0.12 0.901
Col. + Lum. Con. Ani.–Bg. −0.060 0.98 0.327 Col. + Lum. Con. Ani.–Bg. 0.210 3.71 2.229 × 10⁻⁴
Size Ani. −0.069 1.78 0.082 Size Ani. 0.108 2.95 0.003
Var. Rad. Ani. −0.126 3.26 0.001 Var. Rad. Ani. 0.068 1.88 0.060
Anxiety −0.030 0.61 0.581 Anxiety 0.053 1.15 0.249
Emotional affect −0.061 1.24 0.215 Emotional affect 0.133 2.90 0.004
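Table 1 reports standardized betas and t statistics from a GLM fitted over all predictors jointly. As a simplified illustration with a single predictor (the joint multi-predictor betas in the table would differ), the relation between a standardized slope and its t statistic can be sketched as:

```python
from math import sqrt
from statistics import mean

def beta_and_t(x, y):
    """Standardized slope and t statistic for a one-predictor
    linear model.  With both variables z-scored the slope equals
    Pearson's r, and t = r * sqrt(n - 2) / sqrt(1 - r**2)."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / sqrt(sxx * syy)
    n = len(x)
    return r, r * sqrt(n - 2) / sqrt(1 - r * r)

# Hypothetical feature and performance values for six images:
beta, t_stat = beta_and_t([1, 2, 3, 4, 5, 6], [2, 1, 4, 3, 6, 5])
```

The t statistic, not the beta alone, determines the p-values in the table: a modest beta can still be highly significant when estimated over many images.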
Table A1
 
Post-hoc comparisons across animal categories. Post-hoc tests are shown only for measures in which an ANOVA indicated significant differences. Significant differences (p < 0.05) are highlighted in bold text.
Phobic Predators Domesticated Pets
Detection probability
p-Values
    Predators 0.0046
    Domesticated 0.0022 0.0194
    Pets 0.0045 0.0933 0.2245
    Non-phobic 0.1409 0.0599 0.0011 0.0285
t Statistic
    Predators 4.0910
    Domesticated 4.6858 3.0180
    Pets 4.1098 1.9414 −1.3321
    Non-phobic 1.6598 −2.2419 −5.3225 −2.7500
Confidence intervals
    Predators 0.0510/0.1909
    Domesticated 0.1003/0.3045 0.0176/0.1453
    Pets 0.0692/0.2568 −0.0092/0.0933 −0.1093/0.0305
    Non-phobic −0.0191/0.1089 −0.1562/0.0042 −0.2274/−0.0875 −0.2197/−0.0166
Reaction times (all)
p-Values
    Predators 0.0105
    Domesticated 0.0001 0.0025
    Pets 0.0042 0.2656 0.0955
    Non-phobic 0.6386 0.0215 0.0007 0.0159
t Statistic
    Predators −3.4605
    Domesticated −8.5654 −4.6003
    Pets −4.1774 −1.2099 1.9260
    Non-phobic 0.4907 2.9485 5.8047 3.1620
Confidence intervals
    Predators −0.0389/−0.0073
    Domesticated −0.0687/−0.0390 −0.0465/−0.0149
    Pets −0.0496/−0.0137 −0.0253/0.0082 −0.0050/0.0494
    Non-phobic −0.0158/0.0241 0.0054/0.0491 0.0343/0.0816 0.0090/0.0625
Reaction times (only correctly detected)
p-Values
    Predators 0.0136
    Domesticated 0.0006 0.0028
    Pets 0.0002 0.5411 0.0885
    Non-phobic 0.9107 0.0146 0.0003 0.0220
t Statistic
    Predators −3.2719
    Domesticated −5.9072 −4.4919
    Pets −7.3335 −0.6423 1.9774
    Non-phobic −0.1163 3.2241 6.6696 2.9295
Confidence intervals
    Predators −0.0571/−0.0092
    Domesticated −0.0944/−0.0404 −0.0523/−0.0162
    Pets −0.0545/−0.0279 −0.0379/0.0217 −0.0051/0.0575
    Non-phobic −0.0257/0.0233 0.0085/0.0553 0.0427/0.0896 0.0077/0.0723
Confidence
p-Values
    Predators 0.0059
    Domesticated 0.0001 0.0838
    Pets 0.0013 0.4643 0.0432
    Non-phobic 0.4892 0.0336 0.0003 0.0043
t Statistic
    Predators 3.9023
    Domesticated 8.2335 2.0149
    Pets 5.1675 0.7739 −2.4636
    Non-phobic 0.7298 −2.6358 −6.6331 −4.1559
Confidence intervals
    Predators −2.1860/8.6082
    Domesticated 2.2756/9.9036 −3.5802/9.3371
    Pets −2.8243/7.7106 −6.9180/5.3821 −8.3628/1.0700
    Non-phobic −5.2042/3.5173 −10.3915/2.2824 −11.0435/−2.8225 −5.8813/−0.6919
Anxiety
p-Values
    Predators 0.0060
    Domesticated 0.0001 0.0000
    Pets 0.0000 0.0000 0.1430
    Non-phobic 0.0000 0.0000 0.7555 0.5044
t Statistic
    Predators 3.8816
    Domesticated −8.7288 −14.2350
    Pets −12.1246 −37.7291 −1.6498
    Non-phobic −13.0446 −12.8014 −0.3238 0.7036
Confidence intervals
    Predators 4.9700/20.4646
    Domesticated −71.020/−40.743 −79.994/−57.203
    Pets −74.762/−50.360 −79.996/−70.560 −16.253/2.894
    Non-phobic −68.482/−47.464 −83.748/−57.633 −17.366/13.182 −10.830/20.005
Emotional affect
p-Values
    Predators 0.0259
    Domesticated 0.0009 0.4824
    Pets 0.0007 0.0225 0.1478
    Non-phobic 0.0008 0.7212 0.2749 0.0101
t Statistic
    Predators 2.8155
    Domesticated 5.5024 0.7417
    Pets 5.7372 2.9147 1.6270
    Non-phobic 5.6226 0.3716 −1.1845 −3.4914
Confidence intervals
    Predators 3.4930/40.1280
    Domesticated 16.7287/41.9421 −16.4653/31.5151
    Pets 22.9300/55.0841 3.2455/31.1475 −4.3844/23.7277
    Non-phobic 14.2240/34.8715 −14.6798/20.1543 −14.3454/4.7702 −24.2520/−4.6665