Open Access
Article  |   April 2023
Anger is red, sadness is blue: Emotion depictions in abstract visual art by artists and non-artists
Journal of Vision April 2023, Vol.23, 1. doi:https://doi.org/10.1167/jov.23.4.1
      Claudia Damiano, Pinaki Gayen, Morteza Rezanejad, Archi Banerjee, Gobinda Banik, Priyadarshi Patnaik, Johan Wagemans, Dirk B. Walther; Anger is red, sadness is blue: Emotion depictions in abstract visual art by artists and non-artists. Journal of Vision 2023;23(4):1. https://doi.org/10.1167/jov.23.4.1.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Through the manipulation of color and form, abstract visual art is often used to convey feelings and emotions. Here, we explored how colors and lines are used to express basic emotions and whether non-artists express emotions through art in similar ways to trained artists. Both artists and non-artists created abstract color drawings and line drawings depicting six emotions (i.e., anger, disgust, fear, joy, sadness, and wonder). To test whether people represent basic emotions in similar ways, we computationally predicted the emotion of a given drawing by comparing it to a set of references created by averaging all other participants’ drawings within each emotion category. We found that prediction accuracy was higher for color drawings than for line drawings, and higher for color drawings by non-artists than by artists. In a behavioral experiment, we found that people (N = 242) could also accurately infer emotions, showing the same pattern of results as our computational predictions. Further computational analyses of the drawings revealed systematic use of certain colors and line features to depict each basic emotion (e.g., anger is generally redder and more densely drawn than other emotions; sadness is bluer and contains more vertical lines). Taken together, these results imply that abstract color and line drawings can convey certain emotions through their visual features, which human observers also use to understand the intended emotional connotation of abstract artworks.

Introduction
People have been telling stories through art for thousands of years. In fact, some of the earliest cave paintings of animals, dating back over 45,000 years (Aubert et al., 2014; Aubert et al., 2018; Brumm et al., 2021), are thought to depict hunting scene narratives and creatures that existed, or perhaps did not, in the world of the early human (Valladas et al., 2001). Through the manipulation of, for example, color, texture, and form, art communicates desired information and is used to capture our rituals and experiences of daily life. Among those experiences are events that trigger emotions such as anger, fear, and happiness. Here, we explore the visual features that allow us to convey specific emotions and whether training in the arts plays a role in one's ability to depict emotions through art. 
Several traditional theories of emotion state that emotions are brought about by physical processes of an animal's brain or body and can be explained by reactions to events in the physical world (Barrett, Mesquita, Ochsner, & Gross, 2007). For example, appraisal theories suggest that emotions are adaptive responses to appraisals of the features of an organism's environment and that these responses are necessary for the organism's well-being (Moors, Ellsworth, Scherer, & Frijda, 2013). Similarly, the basic emotion theory generally defines emotions as patterns of behavior related to certain subjective experiences (Keltner, Sauter, Tracy, & Cowen, 2019). As such, emotion judgments are an integral part of daily life and reflect the most basic and fundamental aspects of human behavior (Ledoux, 1998). Ekman, Sorenson, and Friesen (1969) suggested that there are six basic emotion categories that are universal in their production and recognition: anger, disgust, fear, joy, sadness, and surprise. In the current study, we examined how people create abstract drawings representing each of these emotions; however, we opted to replace “surprise” with “wonder” for two reasons. First, although anger, disgust, fear, and sadness are clearly negative emotions and joy is clearly a positive emotion, surprise does not have a clear positive or negative valence (Fontaine, Scherer, Roesch, & Ellsworth, 2007). Thus, we chose to include an emotion that was more obviously positive to balance out the several negative emotions in the set. Second, wonder is an underrepresented and understudied emotion within the field, despite its observed relationship to art appreciation and aesthetic pleasure (Fingerhut & Prinz, 2018). 
No matter the exact emotion in question, emotional appraisals are often triggered quickly and automatically (Barrett et al., 2007; Moors et al., 2013). Past research suggests that there are common visual features that people use as cues to infer certain emotions or to make certain affective evaluations. For example, previous work has found that angularity is associated with negative valence and threat (Bar & Neta, 2006; Damiano, Walther, & Cunningham, 2021a; Larson, Aronoff, & Stearns, 2007), symmetry or repeating patterns are associated with positive aesthetic judgments (Damiano, Wilder, Zhou, Walther, & Wagemans, 2021b; Pecchinenda, Bertamini, Makin, & Ruta, 2014), and certain colors, such as red and yellow, are often linked with certain emotions, such as anger and joy, respectively (Jonauskaite, Parraga, Quiblier, & Mohr, 2020). It is thus unsurprising that we even have sayings in English that use color to represent emotions (e.g., “seeing red” to mean “being angry,” “feeling blue” to mean “feeling sad”). Artists and designers take advantage of these associations and manipulate color or shape to achieve a certain reaction from the viewer or user. For example, in Disney's film Inside Out, specific visual features are matched to the personalities of different emotion characters (e.g., the Anger character is the color red, with spiky fire-like hair). This depiction seems to make intuitive sense to the viewer, and even children can immediately understand the intended emotion of artworks simply from visual features such as color (Pouliou, Bonoti, & Nikonanou, 2018). 
Thus, there is ample evidence that clear associations between certain visual features and emotions exist, but the exact nature of these associations could be explored further. For example, although certain contour features such as smooth long lines or short angular lines are typically associated with a general positive or negative valence, respectively (Damiano et al., 2021a), their relation to more specific emotions such as disgust, fear, and sadness is not as well known. In the current study, we asked whether there is a recognizable distinction in line usage between multiple negative emotions, such as anger and disgust or fear and sadness. A previous study that did use more specific emotion labels found that people were indeed able to predict basic emotion labels from small abstract line drawings made by artists (Stamatopoulou & Cupchik, 2017). This result suggests that there are noticeable differences in line drawing representations among several emotions, allowing viewers to successfully tell them apart; yet the specific features associated with each emotion that allowed people to make these predictions were neither explored nor discussed. 
In contrast to contour features, the relationship of color to basic emotion categories is much better studied (Jonauskaite et al., 2020; Mohr, Jonauskaite, Dan-Glauser, Uusküla, & Dael, 2018). We hypothesized that we would replicate previous findings in the current study, with colors such as red being more often associated with anger, blue with sadness, yellow with joy, etc. What is yet to be explored, however, is whether colors are better than contour features at conveying differences across emotions. On the one hand, the well-established associations between certain colors and emotions potentially mean that color is a stronger carrier of emotion information than contour features. On the other hand, accurate visual perception of objects and scenes can be achieved rapidly from viewing simple line drawings (Walther, Chai, Caddigan, Beck, & Fei-Fei, 2011) completely devoid of texture or color information. Additionally, the suggested evolutionary link between contour features and emotional appraisals (e.g., angular contours are rated as threatening because they may cue dangerous stimuli such as thorns or fangs) (Bar & Neta, 2006; Friedenberg, Lauria, Hennig, & Gardner, 2022) could mean that contour–emotion associations are even more fundamental than color–emotion associations. In the current study, we explored whether emotions are more easily inferred from color drawings or line drawings by attempting to predict emotions from these drawings and comparing prediction accuracies across image types. 
Another question we explored in parallel in this study is whether artists are better than non-artists at depicting emotions, such that other viewers can accurately infer which emotion is being depicted. Besides skill-level differences in artistic tasks such as drawing, visual artists tend to be better than non-artists at switching between local and global processing (Chamberlain & Wagemans, 2015). Research has also shown slight relationships between drawing ability and other visual–spatial tasks, such as the Embedded Figures Test and Mental Rotation Task (Chamberlain et al., 2019), as well as superior performance on imagery tasks (Calabrese & Marucci, 2006). This means that not only are artists better at drawing than non-artists but they also seemingly outperform non-artists on imagining and producing what something should look like. Therefore, a reasonable hypothesis is that artists will more skillfully depict emotion concepts using lines and colors, and thus emotions will be more easily understood from artists’ drawings compared with non-artists’ drawings. However, as artists gain more experience and training, they likely become more distinctive in their individual artistic styles. This could lead to atypical feature usage that would make the interpretation of the depicted emotions more difficult. Thus, we will compare the accuracy with which emotions can be deciphered from artists’ drawings versus non-artists’ drawings. 
To summarize our goals and approach, in the current study we used computational analysis methods to determine how people depict and interpret emotions through abstract artworks. More specifically, we explored the degree to which people convey emotions through art in similar ways and which specific visual features are associated with each emotion. In an additional behavioral study, we asked whether non-artists can successfully infer emotions from abstract artworks and what type of information they use to be able to do so. We also compared drawings made by artists and non-artists to determine whether artistic training changes the ways in which emotions are depicted through abstract art and whether those differences, if they exist, make it easier or more difficult for viewers to understand the desired emotion. 
Methods
Stimulus collection
Participants
A group of 46 artists and another group of 45 non-artists provided the stimuli for this study. The artists (mean age, 23.8 years; 19 men, 26 women) were recruited from the Ontario College of Art and Design (OCAD) University. When analyzing the data, we realized that six people from the artist group did not fit our criteria for inclusion (i.e., having formal art training and currently in an art-related program, such as drawing and illustration, at OCAD University); thus, we excluded them from the sample. The remaining participants in the artist group were art students attending OCAD University in the Drawing and Painting or Illustration programs, and most were in their third year of study in a 4-year program (first year, three people; second year, eight people; third year, 18 people; fourth year or above, 11 people). 
The other group consisted of 45 non-artists (mean age, 23.6 years; 14 men, 31 women) recruited from the University of Toronto. We excluded four non-artists because they had formal art training or were currently in an art-related program at the University of Toronto. The remaining non-artists were students in STEM programs (e.g., science, technology, engineering, math), and most were in their fourth year of study in a 4-year program (first year, two people; second year, two people; third year, eight people; fourth year or above, 29 people). 
All participants were paid CAD $10 for their participation. The experiment was approved by the ethics boards at the University of Toronto and OCAD University. 
Materials
All participants were given two white letter-sized sheets of paper to make line drawings and color drawings in designated boxes. There were six boxes per sheet and each box was labeled with an emotion (anger, disgust, fear, joy, sadness, or wonder) and drawing type (color drawing or line drawing) that should be drawn in that box. Below each emotion label, participants were also asked to briefly specify their reasons for using certain lines or colors in their drawings. The written responses are not analyzed here. 
Participants were given pencils (2B, 4B, and 6B) to create the line drawings and a pastel color set to create the color drawings. The pastel sets were Pentel Oil Pastel sets consisting of 16 colors, listed here using the conventional name, the closest Munsell color chip notation, and the corresponding approximate red, green, and blue (RGB) values: red (5R 5/18; [255, 0, 0]), orange (2.5YR 6/14; [255, 140, 0]), dark yellow (2.5Y 8/12; [255, 200, 0]), yellow (7.5Y 9/12; [255, 255, 0]), light green (7.5GY 9/14; [150, 255, 0]), green (2.5G 6/10; [0, 200, 65]), cyan/light blue (7.5B 9/8; [80, 230, 255]), blue (5PB 6/14; [0, 155, 255]), dark blue (7.5PB 5/20; [50, 125, 255]), pink (2.5RP 9/6; [255, 200, 230]), beige (5Y 9/2; [250, 240, 200]), light brown (2.5YR 5/15; [200, 70, 0]), dark brown (7.5YR 3/6; [100, 60, 10]), gray (5GY 9/1; [228, 228, 228]), black (5N 1/0; [0, 0, 0]), and white (5N 10/0; [255, 255, 255]). Note that we did not use a chromameter to measure the pastel colors; therefore, both the Munsell color chips and RGB values are the closest approximations of the colors used by participants. 
Procedure
The artists’ drawings were collected in the outdoor courtyard of OCAD University. Participants were approached individually and were seated in a location where they felt comfortable drawing. Several artists may have been drawing at the same time, but they were seated separately from each other, with sufficient distance that they could neither see nor be influenced by each other's drawings. The non-artists’ drawings were collected in a classroom setting at the University of Toronto. Again, several participants may have been drawing at the same time, but they were seated at individual tables separately from one another and could not see each other’s drawings. 
In both cases, participants were asked to sit comfortably, and all necessary art material and paper were provided to them. The experimenters gave the following instructions, which were also written on sheets of paper given to participants: “Please make one line drawing and one color drawing separately for each of the six emotions (anger, disgust, fear, joy, sadness, and wonder). You can remember previous experiences associated with these emotions, but please depict all emotions only through non-representational or abstract visual components. Do not use figurative art, meaningful shapes, or text in your artworks. You may use multiple lines and colors in your artworks. There is no set time limit, but it should not take longer than 30 minutes to complete the drawings.” 
Each participant provided one line drawing and one color drawing for each of the six emotions, providing 12 artworks in total. Overall, 552 abstract drawings of six emotions (276 line drawings + 276 color drawings) were collected from the 46 artists, and 540 abstract drawings were collected from the 45 non-artists, for a total of 1092 drawings. These drawings were then scanned (300 dpi) to be used digitally for computational image analyses and in an online behavioral experiment with a new set of participants. Due to the exclusion of six participants from the artist group and four participants from the non-artist group, subsequent analyses of the drawings were conducted on a subset of the stimuli (972 total: 480 artists’ drawings and 492 non-artists’ drawings). See Figure 1 for sample artworks from an artist and non-artist. The full set of drawings is available on the OSF page for this study (https://osf.io/b6xuq/). 
Figure 1.
 
(A) Sample color and line drawings for each emotion, made by one artist (#2) and one non-artist (#1). (B) One example of a set of color reference drawings used in the computational emotion prediction procedure with color drawings (top) and line drawings (bottom).
Computational emotion predictions
In order to determine if both artists and non-artists depict emotions in predictable ways, we ran a computational emotion prediction analysis. If we can successfully predict the depicted emotion from drawings at an above-chance level, that tells us that people's drawings tend to be similar within a certain emotion and different across emotions. 
The emotion predictions were made using a leave-one-subject-out, cross-validation procedure. In order to predict the emotion category from all drawings of each participant, we compared each drawing, one at a time, to an average reference drawing made by averaging all drawings by all other participants within one emotion category (see Figure 1B for an example of the reference drawings). Therefore, each drawing was compared to six reference drawings (one per emotion) by correlating the drawing in a pixel-wise manner to each reference drawing. The label of the reference that gave the highest correlation became the prediction for that drawing. If the prediction matched the true emotion label, then that was counted as an accurate guess; otherwise, it was inaccurate. The total accuracy for each participant was the proportion of correctly predicted emotions (chance = 1/6 because there were six emotions). We performed this procedure separately for each of the four image types (artists’ and non-artists’ color and line drawings) and then compared the prediction results using a two-way analysis of variance (ANOVA), with art experience (artists, non-artists) and drawing type (color drawing, line drawing) as factors. 
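The leave-one-subject-out procedure above can be sketched as follows. This is a minimal illustration in Python with NumPy, not the authors' actual code; the function and variable names are our own, and the sketch assumes the drawings have already been scanned and flattened into pixel vectors.

```python
import numpy as np

def predict_emotions_loso(drawings, labels, subject_ids):
    """Leave-one-subject-out emotion prediction by pixel-wise correlation.

    drawings:    array of shape (n_drawings, n_pixels), one flattened image per row
    labels:      array of emotion labels (strings), one per drawing
    subject_ids: array of participant IDs, one per drawing
    Returns per-subject prediction accuracy (chance = 1/6 with six emotions).
    """
    emotions = np.unique(labels)
    accuracies = {}
    for subj in np.unique(subject_ids):
        held_out = subject_ids == subj
        # One average reference per emotion, built from all OTHER subjects' drawings
        refs = {e: drawings[(~held_out) & (labels == e)].mean(axis=0)
                for e in emotions}
        correct = 0
        for i in np.where(held_out)[0]:
            # Predict the emotion whose reference correlates most with this drawing
            corrs = {e: np.corrcoef(drawings[i], ref)[0, 1]
                     for e, ref in refs.items()}
            if max(corrs, key=corrs.get) == labels[i]:
                correct += 1
        accuracies[subj] = correct / held_out.sum()
    return accuracies
```

The key design choice, as in the text, is that the references never include the held-out participant's own drawings, so above-chance accuracy implies commonality across individuals rather than self-consistency.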
An additional prediction procedure was performed on the line drawings only using the distributions of length, orientation, and curvature features as predictor variables (see Visual feature analysis section below for details on how the features were computed for each line drawing, and see Figure 6 for average feature histograms per emotion). In this version, we once again predicted the emotion on an image-by-image basis using a leave-one-subject-out, cross-validation procedure. However, instead of correlating the entire image to a set of reference maps, we compared the histograms of contour features on each drawing to a set of reference feature histograms (i.e., the average feature histograms of all non-left-out images, per emotion) using chi-square distance. This time, the prediction was the label of the reference histogram that gave the smallest chi-square distance. Once again, if the prediction matched the true emotion label, then that was counted as an accurate prediction; otherwise, it was inaccurate. The total accuracy for each participant was the proportion of correctly predicted emotions (chance = 1/6). 
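The histogram-based variant can be sketched in the same spirit, assuming the contour-feature histograms have already been extracted per drawing; the chi-square distance formulation and all names here are our own illustration, not the study's code.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two feature histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def predict_from_histograms(hist, true_label, ref_hists):
    """Predict the emotion whose reference histogram (averaged over all
    non-left-out drawings) is closest in chi-square distance to this
    drawing's contour-feature histogram."""
    dists = {e: chi_square_distance(hist, ref) for e, ref in ref_hists.items()}
    pred = min(dists, key=dists.get)
    return pred, pred == true_label
```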
Emotion predictions by humans
Participants
A total of 244 first-year psychology students at KU Leuven (Belgium) participated in this experiment (39 men and 205 women; mean age of 19 years). All participants could speak English and received one experimental credit for participation, which counted toward extra credit in their first-year psychology course. The experiment was approved by the Social and Societal Ethics Committee at KU Leuven. To check that participants were paying sufficient attention to the task, we calculated the variance of responses within a five-trial-long sliding window, as well as the mean reaction time across all trials, for each participant. Participants failed the attention check if their mean variance was less than 0.2 or if their mean response time was less than 500 ms. Two participants were excluded for failing our criteria. Thus, the data of a total of 242 participants were included in further analyses. 
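The attention check described above could be implemented along these lines. This is an illustrative sketch only: the exact variance definition (population vs. sample) and the numeric coding of responses are not specified in the text, so those details are assumptions.

```python
import numpy as np

def passes_attention_check(responses, rts, var_thresh=0.2, rt_thresh=500, window=5):
    """Return True if a participant passes both criteria from the text:
    mean variance of responses within a 5-trial sliding window >= 0.2,
    and mean reaction time >= 500 ms.

    responses: numerically coded emotion choices, one per trial (coding assumed)
    rts:       reaction times in milliseconds, one per trial
    """
    responses = np.asarray(responses, dtype=float)
    # Population variance within each overlapping 5-trial window
    win_vars = [np.var(responses[i:i + window])
                for i in range(len(responses) - window + 1)]
    return np.mean(win_vars) >= var_thresh and np.mean(rts) >= rt_thresh
```

A participant who holds down one response key (zero variance) or answers implausibly fast would fail under this sketch, matching the exclusion logic in the text.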
Stimuli
All drawings collected from the artists and non-artists were used as stimuli in the behavioral experiment. Each participant viewed 312 drawings in total (156 line drawings and 156 color drawings) chosen randomly from the set of 1092 drawings, with the requirement that there be an equal number of drawings in each condition—that is, 13 drawings of each combination of art training (artists vs. non-artists) × drawing type (line drawing vs. color drawing) × emotion (anger, disgust, fear, joy, sadness, or wonder). Each drawing was displayed at a resolution of 700 × 550 pixels. Recall that 120 images were removed from the analyses; thus, trials containing any of those images were not analyzed (an average of 2.58 trials per participant were removed). 
Procedure
As the behavioral experiment took place online, participants used their own computers to participate. Participants received a link to participate through KU Leuven's participant pool. Participants provided informed consent by clicking on “I agree” after reading the consent form online. Upon a participant's agreeing to participate, the experiment began with instructions stating that participants would see abstract drawings, one at a time, depicting one of six emotions (anger, disgust, fear, joy, sadness, or wonder) and that participants simply had to indicate which of the six emotions was being depicted in each drawing. Each trial consisted of a black fixation cross in the center of the screen shown for 500 ms, followed by a drawing displayed in the center of the screen against a gray background (RGB color [128, 128, 128]). The button options for each of the six emotions were displayed horizontally below the drawing in alphabetical order, along with the prompt, “What emotion is being depicted?” The images were displayed on screen until the participant made a response. The response (i.e., the chosen emotion) and reaction time were recorded on each trial. The experiment was programmed using jsPsych (de Leeuw, 2015), and consisted of two separate blocks with a short break given in between blocks for participants to rest. Approximately half of the participants saw the set of 156 color drawings first (N = 119), and the other half (N = 125) viewed the line drawings first. The experiment took an average of 22 minutes to complete. 
Visual feature analysis
To quantify any differences in color and line usage across categories and artistic expertise, we computationally extracted several features from the drawings, such as the number of colors and lines and the percentage of pixels used. The following sections detail the types of features that were extracted and how this was done. 
Color drawings
In order to analyze the color drawings, we first needed to label and quantify the colors that were used in each drawing. To do this, we compared the RGB value of each pixel to a large online table of RGB values (see https://www.rapidtables.com/web/color/RGB_Color.html#color-table). To obtain meaningful labels for our drawings, we collapsed many of the color labels from the online table into the closest label present in our color set; for example, crimson, red, and brick red would all receive the label “red” (see the final table on our OSF page for this study, https://osf.io/b6xuq/). The labels we used, based on the colors that were provided to the artists and non-artists, were red, orange, dark yellow, yellow, light green, green, cyan, blue, dark blue, pink, beige, light brown, dark brown, gray, black, and white (see Stimulus collection section above). Violet (10PB 3/10; [80, 60, 130]) was also included because participants sometimes mixed red and blue to create violet. Any pixels labeled as white were excluded from further analyses because they corresponded to the background of each drawing. 
For each pixel, the comparison was done by subtracting the pixel values from each row in the table, separately for the R, G, and B channels. We then took the sum of the absolute values of the differences (i.e., |ΔR| + |ΔG| + |ΔB|). The pixel label was the color that gave the smallest sum of differences. With this analysis, we were able to extract, from each drawing, the percentage of pixels that were a certain color, the percentage of the drawing that contained color (i.e., non-white pixels), and the number of different colors used in each drawing. This approach is not perfect, as in our study it sometimes mislabeled pixels (e.g., labeling red as pink if the red is only lightly pressed on the paper), but it provides a fairly good approximation of the colors present in each drawing. 
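The nearest-color labeling can be sketched as follows. The lookup table here is a small hypothetical subset standing in for the full online table, and the vectorized implementation is our own illustration of the |ΔR| + |ΔG| + |ΔB| rule described above.

```python
import numpy as np

# Hypothetical subset of the color lookup table (name -> approximate RGB);
# the study used a much larger table collapsed to its 16 pastel colors plus violet.
COLOR_TABLE = {
    "red":    (255, 0, 0),
    "blue":   (0, 155, 255),
    "yellow": (255, 255, 0),
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
}

def label_pixels(image):
    """Assign each pixel the color label with the smallest sum of absolute
    RGB differences, |dR| + |dG| + |dB|.

    image: array of shape (H, W, 3) with uint8 RGB values
    Returns an (H, W) array of color-name labels.
    """
    names = list(COLOR_TABLE)
    table = np.array([COLOR_TABLE[n] for n in names], dtype=float)        # (K, 3)
    # Broadcast pixels (H, W, 1, 3) against the table (K, 3) -> (H, W, K)
    diffs = np.abs(image[..., None, :].astype(float) - table).sum(axis=-1)
    return np.array(names, dtype=object)[diffs.argmin(axis=-1)]
```

From the resulting label map one can count non-white pixels per color to obtain the percentages and color counts described in the text.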
Line drawings
Histogram distributions of contour features were extracted from each line drawing using the Mid-level Vision (MLV) Toolbox (https://mlvtoolbox.org) (Rezanejad, Downs, Wilder, Walther, Jepson, Dickinson, & Siddiqi, 2019; Rezanejad & Siddiqi, 2013; Walther & Shen, 2014; Walther, Farzanfar, Han, & Rezanejad, 2023). First, each line drawing was transformed into a vectorized version using the traceLinedrawingFromEdgeMap function. This function extracts each separate line present in the line drawing and turns it into a vector, or series of vectors, with length and direction information. Using these vectorized line drawings, the getContourPropertiesStats function calculates the angularity, length, and orientation statistics of each line drawing (see Figure 2 for examples). In addition to the contour feature distributions, we also calculated features of the line drawing as a whole, such as the proportion of the drawing that contained non-white pixels (i.e., the density of the drawing) and the number of individual lines present in each drawing. 
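The MLV Toolbox functions named above are MATLAB code, so we do not reproduce them here. As a rough illustration of the whole-drawing features only (ink density and number of lines), one could compute something like the following, where "lines" are approximated as 8-connected ink components rather than the toolbox's vectorized strokes; this is a simplification, not the study's method.

```python
import numpy as np
from collections import deque

def drawing_stats(binary):
    """Whole-drawing features: ink density and a rough line count.

    binary: 2-D boolean array, True where the drawing has ink (non-white pixels).
    Returns (density, n_lines), where n_lines is the number of 8-connected
    ink components -- a crude stand-in for per-stroke vectorization.
    """
    density = binary.mean()  # proportion of non-white pixels
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    n_lines = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                n_lines += 1
                # Breadth-first flood fill over the 8-neighborhood
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                q.append((ny, nx))
    return density, n_lines
```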
Figure 2.
 
Two example line drawings and their extracted contour feature statistics. The top drawing represents anger and the bottom drawing represents joy.
Results
Computational emotion predictions
To determine whether emotions are depicted commonly across individuals, we ran a leave-one-subject-out, cross-validation procedure on each of the four image types (i.e., artists’ color drawings, artists’ line drawings, non-artists’ color drawings, and non-artists’ line drawings). In this procedure, we predicted the depicted emotion of each drawing by correlating it with reference drawings made by averaging all other drawings. We found that emotions are indeed predictable from abstract color and line drawings at an above-chance level, meaning that commonalities exist in how individuals depict certain emotions. A 2 × 2 ANOVA with drawing type (color vs. line drawing) and artistic training (artists vs. non-artists) as factors revealed a main effect of drawing type, F(1, 157) = 61.45, p < 0.001, η2 = 0.27, and an interaction between the two factors, F(1, 157) = 5.37, p < 0.05, η2 = 0.02. In general, it is easier to predict emotions from color drawings than line drawings. The interaction revealed that, for the color drawings, emotions were more easily predictable from non-artists’ drawings than artists’ drawings (accuracy = 50.81% vs. 39.17%, respectively; p < 0.05). However, for the line drawings, emotion predictability did not differ between artists and non-artists (accuracy = 24.58% and 23.17%, respectively). 
These findings suggest that non-artists’ color drawings are more uniform within an emotion category. In other words, a non-artist seems to draw emotions similarly to other non-artists. In contrast, artists’ drawings tend to be unique, which leads to lower prediction accuracies, as the computational prediction method relies on similarities across individuals within an emotion category. 
An additional prediction analysis was performed on the line drawings only, using contour feature distributions as predictor variables. Using this approach, we could also predict emotions from line drawings made by artists and non-artists (accuracy = 25% and 23.17%, respectively) at an above-chance level (both p < 0.01). These prediction accuracies did not significantly differ between artists and non-artists (see Figure 3 for a summary of prediction results). 
Figure 3.
 
Summary of computational and behavioral emotion prediction results. The boxes represent the means and 95% confidence intervals. The dots in the behavioral plots represent individual (N = 242) average accuracy per condition. Significant differences in prediction accuracy between artists’ and non-artists’ drawings are marked.
Behavioral results: Emotion predictions by humans
The computational results tell us that the degree of uniformity of drawings within emotion categories is lower for artists compared to non-artists (for color drawings), but they do not tell us whether artists are worse at being understood by other human viewers. Thus, we ran a behavioral experiment to determine whether emotions were more predictable by human observers from artists’ or non-artists’ drawings. Once again, we ran a 2 × 2 ANOVA with drawing type and art expertise as factors. Like the computational prediction results, we found a main effect of drawing type, F(1, 964) = 386.73, p < 0.001, η2 = 0.40, such that human observers more easily inferred emotions from the abstract color drawings compared with the line drawings. Additionally, a main effect of artistic training was found, F(1, 964) = 12.26, p < 0.001, η2 = 0.01, such that emotions were more difficult to predict from artists’ drawings than non-artists’ drawings. Finally, the interaction was also significant, F(1, 964) = 7.00, p < 0.01, η2 = 0.007, showing that participants were better able to predict emotions from non-artists’ drawings compared with artists’ drawings when the drawings contained color (accuracy = 43.06% vs. 40%, respectively; p < 0.001), but there was no difference in prediction accuracies between artists’ and non-artists’ line drawings (31.5% vs. 31.93%, respectively). These results imply that non-artists are actually better than artists at accurately depicting specific emotions in abstract drawings, at least when using color (see Figure 3). 
Computational and behavioral error correlations
The results thus far have shown that computational algorithms and human participants can accurately predict emotions from color and line drawings of abstract artworks. We do not yet know, however, whether humans and computers use the same type of information to make these predictions. To explore this question, we should not compare the accuracy scores but instead should compare the mistakes made by humans and our computational models. If the error patterns between humans and the computer are similar, then we can assume that people use features similar to those of the computational model to make their emotion predictions, which is why they would make mistakes similar to those of the computer (Walther & Shen, 2014). 
During the prediction accuracy analysis, we also collected any mistakes in categorization made by the computational model and by the human participants and placed them in confusion matrices. In these matrices, the diagonal elements represented accurate predictions (i.e., the correct answer was “anger” and the prediction was “anger”), and the off-diagonals represented the confusions (i.e., the correct answer was “joy” but the prediction was “wonder”). Then, for the error correlation analysis, we averaged the confusion matrices across participants and computed the Pearson correlation coefficient for correlating the off-diagonal elements of the confusion matrix for two types of predictions (i.e., the computational predictions and human predictions). 
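As a rough illustration, the confusion-matrix bookkeeping and off-diagonal error correlation described above could be computed as follows (a minimal sketch with hypothetical labels; the actual analysis averaged the matrices across participants before correlating, and also reported p-values):

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "wonder"]

def confusion_matrix(true_labels, predicted_labels, labels=EMOTIONS):
    """Rows = true emotion, columns = predicted emotion, as row-normalized
    proportions (the format of Figure 4)."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)))
    for t, p in zip(true_labels, predicted_labels):
        m[idx[t], idx[p]] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    return m / np.where(row_sums == 0, 1, row_sums)

def error_pattern_correlation(cm_a, cm_b):
    """Pearson r between the off-diagonal (error) cells of two matrices."""
    off_diag = ~np.eye(cm_a.shape[0], dtype=bool)
    return np.corrcoef(cm_a[off_diag], cm_b[off_diag])[0, 1]
```

The diagonal is excluded precisely because two predictors can agree on accuracy while making entirely different kinds of mistakes; only the error cells reveal whether they rely on similar information.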
See Figure 4 for average confusion matrices and a summary of the error pattern correlations. Beginning with the color drawings, we found that the error patterns between humans and computers were highly similar for both artists’ drawings (r = 0.74, p < 0.001) and non-artists’ drawings (r = 0.62, p < 0.001). Conversely, for the line drawings, the error patterns of the computational predictions were not significantly similar to those of humans (artists’ drawings: r = 0.31, p = 0.1; non-artists’ drawings: r = –0.07, p = 0.7). Recall that the computational predictions were made by comparing each drawing to a set of average reference drawings using pixel-wise correlation. Thus, the above results mean that, for the color drawings, humans are also performing a general color comparison between the current drawing and a set of (internal) references of known color–emotion associations and that these known associations must somewhat match those created by averaging all drawings within each emotion category. This approach did not work for the line drawings, however, as the average reference maps essentially became a jumble of lines and thus likely measured overall pixel density rather than any contour features. This was still somewhat effective, given that the computational predictions for the line drawings were above chance, but this is clearly not the (only) information people are using to make their predictions, as shown by the low error–pattern correlations. 
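For concreteness, the reference-correlation prediction scheme recalled above, correlate a drawing with the average image of each emotion category and pick the best match, can be sketched like this (a simplified stand-in; image loading and the leave-one-out averaging of references in the actual pipeline are omitted):

```python
import numpy as np

def predict_emotion(drawing, references):
    """Assign the emotion whose average reference image correlates best
    (pixel-wise Pearson r) with the drawing. `references` maps emotion
    label -> average image built from the other participants' drawings."""
    flat = drawing.ravel().astype(float)
    best_label, best_r = None, -np.inf
    for label, ref in references.items():
        r = np.corrcoef(flat, ref.ravel().astype(float))[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label
```

On color images this amounts to a coarse color-template match, which is why, as noted above, the averaged line-drawing references degrade into a measure of overall pixel density.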
Figure 4.
 
Average confusion matrices extracted from the behavioral and computational emotion predictions. Each cell of the confusion matrix shows the proportion of drawings with a certain emotion label (true labels along rows) that were predicted as that or another emotion (predicted labels along columns). The diagonal represents correct predictions, and the off-diagonal represents errors. The gray level of a cell corresponds to the value within the cell, from white (0 = that emotion was never predicted) to black (1 = that emotion was always predicted). The correlation values shown beside the arrows represent the correlations between off-diagonal elements (i.e., non-red cells) of behavioral and computational predictions.
Instead, when predicting emotions from line drawings using the histograms of contour features, we found that the human–computer error patterns were significantly correlated (artists’ drawings: r = 0.47, p < 0.01; non-artists’ drawings: r = 0.47, p < 0.01). This suggests that people were comparing the contour statistics of the drawing to an average set of statistics that they associate with each emotion (e.g., anger might be short and spiky, joy long and smooth), similarly to how the computational algorithm compares the contour statistics of one drawing to the set of reference histograms. 
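A histogram-based prediction of this kind might look as follows (a schematic sketch; the actual feature histograms and matching rule come from the authors' pipeline and may differ):

```python
import numpy as np

def predict_from_histograms(drawing_hist, reference_hists):
    """Pick the emotion whose reference feature histogram (e.g., pooled
    orientation, length, or angularity bins) best matches the drawing's.
    Histograms are 1-D arrays of bin values; matching uses Pearson r."""
    x = np.asarray(drawing_hist, dtype=float)
    scores = {label: np.corrcoef(x, np.asarray(h, dtype=float))[0, 1]
              for label, h in reference_hists.items()}
    return max(scores, key=scores.get)
```

Because the comparison operates on summary statistics rather than raw pixels, it is insensitive to where lines fall on the page, which is what the pixel-wise approach lacked for line drawings.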
Visual feature descriptions across emotions and artistic training
It is clear from the computational and behavioral predictions that there are similarities in color and line usage across individuals within an emotion category, as well as differences across emotion categories. These similarities and differences are explored in more detail in the following sections. Specifically, we explore the overall density (i.e., non-white pixels) of the drawings and the number of colors and lines used and then identify the exact colors and types of contour features used to depict the different emotions. 
Color drawings
Beginning with the density measure, a two-way ANOVA with emotion (anger, disgust, fear, sadness, joy, or wonder) and artistic training (artists vs. non-artists) as factors revealed only a main effect of emotion, F(5, 474) = 2.46, p < 0.05, η2 = 0.03; there was no effect of artistic training, F(1, 474) = 0.69, p = 0.41, η2 = 0.001. Post hoc tests revealed that anger was drawn more densely than wonder (proportion of colored pixels: anger = 0.51 vs. wonder = 0.39; p < 0.05). All of the other emotions fell in between these two but were not statistically different from each other (disgust = 0.47; fear = 0.43; sadness = 0.50; joy = 0.43). 
In terms of the average number of unique colors per image, the two-way ANOVA revealed a main effect of emotion, F(5, 474) = 10.66, p < 0.001, η2 = 0.10, and a main effect of artistic expertise, F(1, 474) = 31.42, p < 0.001, η2 = 0.06. Anger, fear, and sadness were represented with fewer colors per drawing on average (2.53, 2.49, and 2.39, respectively) than disgust and wonder (3.39 and 3.53, respectively; all p < 0.001). Sadness also contained significantly fewer colors than joy (3.02; p < 0.05). Additionally, artists tended to use fewer colors per drawing (2.55) than non-artists (3.24; p < 0.001), suggesting that artists are more minimalistic in their color usage. 
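The two simple image measures analyzed here, drawing density (proportion of non-white pixels) and the number of unique colors, could be computed along these lines (the near-white threshold of 250 is an assumption for illustration; any pixel with all channels at or above it counts as blank paper):

```python
import numpy as np

def drawing_density(img, white_threshold=250):
    """Proportion of non-white pixels in an H x W x 3 image array.
    A pixel counts as drawn-on if any channel falls below the threshold."""
    non_white = (img < white_threshold).any(axis=-1)
    return non_white.mean()

def count_unique_colors(img, white_threshold=250):
    """Number of distinct colors among the non-white pixels."""
    mask = (img < white_threshold).any(axis=-1)
    return len(np.unique(img[mask], axis=0))
```

A tolerance below pure white is needed in practice because anti-aliasing and compression leave near-white pixels around every stroke.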
Examining the types of colors used in each image across emotion types revealed that certain colors are indeed related to specific emotions. Generally, red was used to convey anger, greens and browns to convey disgust, black and gray to convey fear, dark blue for sadness, and yellow and pink for joy and wonder (Figure 5, Table 1). These overall patterns do not seem to differ drastically between artists and non-artists, although on an individual basis some artists used unconventional colors to represent emotions in their drawings (e.g., using pink rather than green to represent disgust, green rather than black to represent fear). This atypical color usage was sometimes found in the non-artists’ drawings, as well, but not as often as in the artists’ drawings. 
Figure 5.
 
Average color usage per drawing, averaged over all drawings within an emotion category, separately for artists’ and non-artists’ drawings. See Table 1 for the percentages of each color per emotion.
Table 1.
 
Average color usage per drawing (in percent of non-white pixels), averaged over all drawings within an emotion category separately for artists’ and non-artists’ drawings.
Line drawings
As with the color drawings, the two-way ANOVA on the density of the line drawings revealed only a main effect of emotion, F(5, 474) = 7.99, p < 0.001, η2 = 0.08, and no effect of artistic training, F(1, 474) = 3.06, p = 0.08, η2 = 0.006. Anger was more densely drawn than all other emotions (proportion of colored pixels: anger = 0.22, disgust = 0.16, fear = 0.14, sadness = 0.14, joy = 0.10, wonder = 0.12; all p < 0.05). In terms of the average number of lines per image, the two-way ANOVA also revealed only a main effect of emotion, F(5, 474) = 2.57, p < 0.05, η2 = 0.03, and no effect of artistic training, F(1, 474) = 1.45, p = 0.23, η2 = 0.003. Post hoc comparisons across emotions failed to reveal significant differences, but the overall pattern is that the drawings conveying negative emotions contained the greatest number of contours and the drawings conveying positive emotions contained the fewest (number of contours: anger = 378.12; disgust = 334.77; fear = 346.49; sadness = 377.61; joy = 141.79; wonder = 234.72). 
Figure 6 shows the histograms of contour feature distributions across emotion categories, separately for artists’ and non-artists’ drawings. With these histograms, we see that, once again, the general pattern of contour feature usage between the artists and non-artists did not differ much. Within each feature, there are subtle differences across emotions. These differences are likely what people were picking up on to be able to successfully infer emotions from the line drawings. Beginning with the angularity measure, we see that the negative emotions have pixels in the highest angularity bins, meaning that the images representing those emotion categories contained some angular (i.e., sharp) contours, but the positive emotions do not. For the length feature, we see a skew toward the shorter lines for the negative emotions and a skew toward medium to long lines for the positive emotions. For the orientation feature, drawings representing sadness contained the most vertical lines, and the other emotions contained lines of various orientations. 
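An orientation histogram of the kind shown in Figure 6 could be computed roughly as follows (a simplified stand-in for the MLV Toolbox the authors actually used; contours are reduced to straight line segments here, and the binning scheme is an assumption):

```python
import numpy as np

def orientation_histogram(segments, n_bins=8):
    """Bin line-segment orientations (0-180 degrees) into n_bins,
    weighted by segment length, then normalize to proportions.
    `segments` is a list of ((x1, y1), (x2, y2)) endpoint pairs."""
    hist = np.zeros(n_bins)
    for (x1, y1), (x2, y2) in segments:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        length = np.hypot(x2 - x1, y2 - y1)
        hist[min(int(angle / 180.0 * n_bins), n_bins - 1)] += length
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Length weighting means a single long vertical stroke (as in many sadness drawings) dominates the histogram as much as many short ones would.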
Figure 6.
 
Average distributions of contour features, separately for artists’ and non-artists’ drawings, for each of the six emotion categories.
Discussion
This study was an exploration of how certain visual features, such as color and contour features, are able to convey specific emotions through abstract visual art and whether the use of these features differs between trained artists and non-artists. To summarize, we found that there are indeed specific color and contour features that are used to convey different basic emotions (e.g., anger is generally red and more densely drawn than other emotions, whereas sadness is conveyed by blue shades and a higher proportion of vertical lines), allowing for accurate predictions of emotions from both color drawings and line drawings. Colors, however, seem to convey emotions more easily than contour features do. Additionally, although small, we did find differences in how the trained artists versus non-artists depicted emotions—namely, artists generally used fewer, and more idiosyncratic, colors (as implied by a lower computational prediction accuracy) compared with non-artists across all emotions. 
Anger is red, sadness is blue: Feature differences across basic emotions
Taking the color and line drawing results together, we found that the negative emotions were more densely drawn (especially anger) with a tendency to be drawn in darker colors such as red, blue, brown, black, and gray. The positive emotions tended to be less dense and contain brighter colors. These findings are very much in line with a study by van Paasschen, Zamboni, Bacci, and Melcher (2014), who asked participants to rate a large set of abstract artworks on valence and arousal and found that people consistently rated dark and complex pieces as negative and artworks with bright colors and clear lines as positive. We replicated these findings and extended the positive versus negative distinction to include color–emotion associations for more specific basic emotions. 
Beginning with the color drawings, we found that there are clear differences in color usage across different emotions. These findings were expected based on previous work that prompted participants with a color label and asked them to identify all emotion labels that they associated with that color (Jonauskaite et al., 2020). In such studies, red is typically associated with anger, yellow with joy, brown with disgust, etc. We found very similar results in the current study, with almost all color–emotion associations replicating those of Jonauskaite et al. (2020), except for disgust, which included an additional association with green in our study. These strong color–emotion associations make emotions easily predictable from color drawings, both computationally and by human observers, as both people and computers pick up on the similarities within an emotion category (e.g., anger is typically depicted using red) and differences across categories (e.g., emotions other than anger are not typically depicted using red). This is revealed through the strong error–pattern correlations between humans and machines. This result implies that people use the same types of features that our computational algorithm uses to predict emotions from color drawings—simple color comparisons. In other words, when trying to infer an emotion from the color drawings, people presumably compare each drawing to an internal template of what color they believe each emotion should be and then guess the emotion based on which internal template best matches the drawing in question. Of note, however, is that the computational predictions on the color drawings are in fact better (i.e., numerically higher accuracy) than the predictions made by humans, implying that people may use other information (perhaps shape/form) that could interfere with the color information. 
For inferring emotions from color drawings, then, relying solely on color comparisons and ignoring other cues would seem to be the most effective strategy. 
We must note, however, that we could not ensure that the colors were correctly displayed on each participant's computer in the behavioral task, given that the online participants used different devices and lived in different settings. Therefore, it is also possible that the human emotion predictions were numerically lower than the computational predictions simply due to participants not perceiving the colors as they were drawn. It would be useful to rule this possibility out in future studies by testing participants on the same device or calibrating each monitor to ensure that the colors are being displayed consistently across participants. 
When color is absent (e.g., with line drawings), differences in monitor display settings are less problematic. Instead of hue, one must now rely on shape or contour cues to correctly predict the displayed emotion. In the case of line drawings, our computational model (which uses the correlation approach to make predictions) did not reach a high prediction accuracy, and the error patterns were not very similar between the human and computational predictions. On the other hand, when we instead predicted emotions computationally from the set of feature histograms, the accuracy was still lower than human-level accuracy, but the error patterns were now significantly correlated with human error patterns. This tells us that people are indeed relying on comparing the contour features present in each line drawing to an internal template of the types of visual features that should be related to each emotion (e.g., vertical lines representing sadness). 
In fact, analyses of the contours in each line drawing revealed clear orientation and length differences that distinguish among the basic emotion categories. We found that vertical lines were mostly used to represent sadness, longer lines to represent joy and wonder, and angular lines to represent negative emotions. Also, the sheer number of lines helped to distinguish between anger and other emotions (i.e., the anger drawings contained the highest number of lines and were the most densely drawn). These results indicate that our computational line drawing analysis using the MLV Toolbox is an efficient way to extract and explore contour feature distributions in line drawings. These findings nicely replicate previous non-computational observations (Gayen, 2021; Takahashi, 1995). 
If we pit color and contour features against each other, we find that emotions in abstract art are more easily represented by colors than by contour features. One potential reason may be that color drawings contain extra information, as they may contain shape information in addition to color. However, as mentioned above, in this study this extra information may have actually hindered people's ability to predict emotions from color drawings, and relying solely on color would result in a higher prediction accuracy. 
A more likely reason emotions are easier to guess from color drawings compared to line drawings is that the associations between colors and emotions must be stronger or more well known than the associations between contour features and emotions. Whether these associations are innate or culturally learned is an interesting and important question that unfortunately cannot be addressed in the current study. However, the simulation theory (Johnson-Laird & Oatley, 2021) states that mimesis (i.e., imitation) plays a role in our emotion associations, meaning that our abstract feature–emotion associations reflect real-world feature–emotion associations, such as one's face turning red when angry. Associations between line features and emotions seem to not be as clear mimetically as those between colors and emotions. Instead, contour–emotion associations seem to rely more on the entropy of the image, with many jagged or unpredictable lines simulating negative events such as a broken mirror. Dissociating among several negative emotions seems to be more difficult with contours than with colors, potentially explaining why emotion predictions were higher on color drawings than line drawings. 
Artists are seemingly worse than non-artists at depicting basic emotions
Finally, the current study also taught us something about the impact of artistic expertise on one's ability to convey emotions through abstract art. The results of our computational predictions and subsequent behavioral experiment suggest that differences between artists’ and non-artists’ drawings may have had a direct impact on the ability to infer the depicted emotion from the artworks. It was more difficult for the new group of naïve viewers to understand the depicted emotion when viewing color drawings created by artists compared with those created by non-artists. This was seemingly due to artists being more minimalistic in their color usage and sometimes using unconventional colors to depict certain emotions. 
It is important to acknowledge that the drawing task we gave to the artists and non-artists was to only depict one basic emotion at a time. Although they were free to be as creative as they wanted within the confines of the drawing space, it is possible that the artists did not find this task stimulating enough. In general, artists mention items such as individualism, divergent thinking, and intrinsic motivation as being among the most important factors in creating their art (Botella et al., 2013). Therefore, the artists’ drawings in our study were more likely unique due to each artist's individual style and self-expression, resulting in drawings that were more difficult to understand than the non-artists’ drawings. Perhaps artists’ works would indeed be more easily understood than those of non-artists if they were conveying more complex emotions of their own choosing through their artworks. Of course, it is also a possibility that artists strive to explore the frontiers of visual communication and to challenge their viewers and therefore are not aiming to be understood in a conventional sense. Future work could explore these possibilities by prompting artists and non-artists with more complex emotion labels or allowing them to create artworks without prompts and asking them after the fact which emotions they were attempting to convey, and then later testing the audience's understanding of the artworks. Additionally, only psychology students participated in the behavioral task. Research has shown that artists rate abstract art as less confusing compared with non-artists (Silvia, 2013). Perhaps a group of expert participants would be better able to predict emotions from their fellow artists’ drawings, which may have been more confusing for our non-expert participants. 
Note also that we did not tell the artists and non-artists, before they drew, that another group of people would be trying to guess the emotions from their drawings. If we had done so, both groups may have tried to use more obvious color–emotion associations, and artists, being more minimalistic in their feature usage and perhaps having a better knowledge or intuition of these associations, may have outperformed non-artists in their ability to depict the desired emotions. 
One final possibility for why emotions were more difficult to predict from the artists’ drawings compared to the non-artists’ drawings could be that the artists in our sample were still students. Artists who have been making art for several decades could develop better or different ways to convey emotions that may or may not involve characteristic colors or shapes. Perhaps the artists in our study were only just beginning to develop this skill and thus displayed more individuality, but they were not yet able to convey a desired emotion in a specialized way. Follow-up studies could recruit artists at different stages of their careers to determine whether their unique styles eventually develop into superior emotion transmission or whether the uniqueness remains a detriment to the depictions of basic emotion categories. 
Conclusions
Through the manipulation of color and form, visual abstract art is able to carry emotional information. Here, we explored specifically how colors and lines were used to express basic emotions. Our findings lend support to the simulation theory, as they showed that certain visual features are related to specific emotions based on real-world associations that can be “simulated” through abstract features and used by human observers to understand the intended emotional connotation of abstract artworks. These findings could potentially be leveraged by product or app designers, as well as artists, to better convey certain information to a user or viewer. Future research should expand on basic experiments, such as ours, using a similar approach of emotion understanding based on contour and color features in real abstract (e.g., geometric, expressionist) artwork. Including artworks with more complex feature–emotion associations will lead to a better understanding of the multifaceted links among visual perception, art production and appreciation, and emotional appraisals. 
Acknowledgments
Supported by the Shastri Indo-Canadian Institute Research Fellowship 2017-18 (to P.G.), a Natural Sciences and Engineering Research Council of Canada Discovery Grant (RGPIN-2020-04097 to D.B.W.), and by long-term structural grants from the Flemish Government (METH/14/02 and METH/21/02 to J.W.). 
Commercial relationships: none. 
Corresponding author: Claudia Damiano. 
Email: claudia.damiano@kuleuven.be. 
Address: Department of Brain and Cognition, KU Leuven, Leuven, Belgium. 
References
Aubert, M., Brumm, A., Ramli, M., Sutikna, T., Saptomo, E. W., Hakim, B., … & Dosseto, A. (2014). Pleistocene cave art from Sulawesi, Indonesia. Nature, 514(7521), 223–227.
Aubert, M., Setiawan, P., Oktaviana, A. A., Brumm, A., Sulistyarto, P. H., Saptomo, E. W., … & Brand, H. E. A. (2018). Palaeolithic cave art in Borneo. Nature, 564(7735), 254–257.
Bar, M., & Neta, M. (2006). Humans prefer curved visual objects. Psychological Science, 17(8), 645–648.
Barrett, L. F., Mesquita, B., Ochsner, K. N., & Gross, J. J. (2007). The experience of emotion. Annual Review of Psychology, 58, 373.
Botella, M., Glaveanu, V., Zenasni, F., Storme, M., Myszkowski, N., Wolff, M., et al. (2013). How artists create: Creative process and multivariate factors. Learning and Individual Differences, 26, 161–170.
Brumm, A., Oktaviana, A. A., Burhan, B., Hakim, B., Lebe, R., Zhao, J. X., … Aubert, M. (2021). Oldest cave art found in Sulawesi. Science Advances, 7(3), eabd4648.
Calabrese, L., & Marucci, F. S. (2006). The influence of expertise level on the visuo-spatial ability: Differences between experts and novices in imagery and drawing abilities. Cognitive Processing, 7(1), 118–120.
Chamberlain, R., Drake, J. E., Kozbelt, A., Hickman, R., Siev, J., & Wagemans, J. (2019). Artists as experts in visual cognition: An update. Psychology of Aesthetics, Creativity, and the Arts, 13(1), 58.
Chamberlain, R., & Wagemans, J. (2015). Visual arts training is linked to flexible attention to local and global levels of visual stimuli. Acta Psychologica, 161, 185–197.
Damiano, C., Walther, D. B., & Cunningham, W. A. (2021). Contour features predict valence and threat judgements in scenes. Scientific Reports, 11(1), 19405.
Damiano, C., Wilder, J., Zhou, E. Y., Walther, D. B., & Wagemans, J. (2021). The role of local and global symmetry in pleasure, interest, and complexity judgments of natural scenes [published online ahead of print June 24, 2021]. Psychology of Aesthetics, Creativity, and the Arts, https://doi.org/10.1037/aca0000398.
de Leeuw, J. R. (2015). jsPsych: A JavaScript library for creating behavioral experiments in a web browser. Behavior Research Methods, 47(1), 1–12.
Ekman, P., Sorenson, E. R., & Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotion. Science, 164(3875), 86–88.
Fingerhut, J., & Prinz, J. J. (2018). Wonder, appreciation, and the value of art. Progress in Brain Research, 237, 107–128.
Fontaine, J. R., Scherer, K. R., Roesch, E. B., & Ellsworth, P. C. (2007). The world of emotions is not two-dimensional. Psychological Science, 18(12), 1050–1057.
Friedenberg, J., Lauria, G., Hennig, K., & Gardner, I. (2022). Beauty and the sharp fangs of the beast: Degree of angularity predicts perceived preference and threat [preprint], https://doi.org/10.13140/RG.2.2.35163.03366.
Gayen, P. (2021). Understanding the language of abstract paintings: An exploration of representation and communication of emotions through lines and colors (Doctoral dissertation). Kharagpur, India: Indian Institute of Technology.
Johnson-Laird, P. N., & Oatley, K. (2021). Emotions, simulation, and abstract art. Art & Perception, 9(3), 260–292.
Jonauskaite, D., Parraga, C. A., Quiblier, M., & Mohr, C. (2020). Feeling blue or seeing red? Similar patterns of emotion associations with colour patches and colour terms. i-Perception, 11(1), 2041669520902484.
Keltner, D., Sauter, D., Tracy, J., & Cowen, A. (2019). Emotional expression: Advances in basic emotion theory. Journal of Nonverbal Behavior, 43(2), 133–160.
Larson, C. L., Aronoff, J., & Stearns, J. J. (2007). The shape of threat: Simple geometric forms evoke rapid and sustained capture of attention. Emotion, 7(3), 526.
LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.
Mohr, C., Jonauskaite, D., Dan-Glauser, E. S., Uusküla, M., & Dael, N. (2018). Unifying research on colour and emotion. In MacDonald, L. W., Biggam, C. P., & Paramel, G. V. (Eds.), Progress in colour studies: Cognition, language and beyond (pp. 209–222). Amsterdam: John Benjamins Publishing Company.
Moors, A., Ellsworth, P. C., Scherer, K. R., & Frijda, N. H. (2013). Appraisal theories of emotion: State of the art and future development. Emotion Review, 5(2), 119–124.
Pecchinenda, A., Bertamini, M., Makin, A. D. J., & Ruta, N. (2014). The pleasantness of visual symmetry: Always, never or sometimes. PLoS One, 9(3), e92685.
Pouliou, D., Bonoti, F., & Nikonanou, N. (2018). Do preschoolers recognize the emotional expressiveness of colors in realistic and abstract art paintings? The Journal of Genetic Psychology, 179(2), 53–61.
Rezanejad, M., Downs, G., Wilder, J., Walther, D. B., Jepson, A., Dickinson, S., et al. (2019). Scene categorization from contours: Medial axis based salience measures. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4116–4124). Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Rezanejad, M., & Siddiqi, K. (2013). Flux graphs for 2D shape analysis. In Dickinson, S. J. & Pizlo, Z. (Eds.), Shape perception in human and computer vision (pp. 41–54). London: Springer.
Silvia, P. J. (2013). Interested experts, confused novices: Art expertise and the knowledge emotions. Empirical Studies of the Arts, 31(1), 107–115.
Stamatopoulou, D., & Cupchik, G. C. (2017). The feeling of the form: Style as dynamic ‘textured’ expression. Art & Perception, 5(3), 262–298.
Takahashi, S. (1995). Aesthetic properties of pictorial perception. Psychological Review, 102(4), 671.
Valladas, H., Clottes, J., Geneste, J. M., Garcia, M. A., Arnold, M., Cachier, H., et al. (2001). Evolution of prehistoric cave art. Nature, 413(6855), 479.
van Paasschen, J., Zamboni, E., Bacci, F., & Melcher, D. (2014). Consistent emotions elicited by low-level visual features in abstract art. Art & Perception, 2(1–2), 99–118.
Walther, D. B., & Shen, D. (2014). Nonaccidental properties underlie human categorization of complex natural scenes. Psychological Science, 25(4), 851–860.
Walther, D. B., Chai, B., Caddigan, E., Beck, D. M., & Fei-Fei, L. (2011). Simple line drawings suffice for functional MRI decoding of natural scene categories. Proceedings of the National Academy of Sciences, USA, 108(23), 9661–9666.
Walther, D. B., Farzanfar, D., Han, S., & Rezanejad, M. (2023). The mid-level vision toolbox for computing structural properties of real-world images. Frontiers in Psychology, 14, 1322.
Figure 1.
 
(A) Sample color and line drawings for each emotion, made by one artist (#2) and one non-artist (#1). (B) One example of a set of color reference drawings used in the computational emotion prediction procedure with color drawings (top) and line drawings (bottom).
Figure 2.
 
Two example line drawings and their extracted contour feature statistics. The top drawing represents anger and the bottom drawing represents joy.
Figure 3.
 
Summary of computational and behavioral emotion prediction results. The boxes represent the means and 95% confidence intervals. The dots in the behavioral plots represent individual (N = 242) average accuracy per condition. Significant differences in prediction accuracy between artists’ and non-artists’ drawings are marked.