Open Access
Article  |   June 2023
Statistical image properties and aesthetic judgments on abstract paintings by Robert Pepperell
Author Affiliations
  • Christoph Redies
    Jena University Hospital, Experimental Aesthetics Group, Institute of Anatomy I, Friedrich Schiller University Jena, Jena, Germany
    christoph.redies@med.uni-jena.de
  • Ralf Bartho
    Jena University Hospital, Experimental Aesthetics Group, Institute of Anatomy I, Friedrich Schiller University Jena, Jena, Germany
    ralf.bartho@med.uni-jena.de
Journal of Vision June 2023, Vol. 23(6), 1. https://doi.org/10.1167/jov.23.6.1
Abstract

In this exploratory study, we asked whether objective statistical image properties can predict subjective aesthetic ratings for a set of 48 abstract paintings created by the artist Robert Pepperell. Ruta and colleagues (2021) used the artworks previously to study the effect of curved/angular contour on liking and wanting decisions. We related a predefined set of statistical image properties to the eight different dimensions of aesthetic judgments from their study. Our results show that the statistical image properties can predict a large portion of the variance in the different aesthetic judgments by Ruta and colleagues. For example, adjusted R2 values for liking, attractiveness, visual comfort, and approachability range between 0.52 and 0.60 in multiple linear regression models with four predictors each. For wanting judgments in an (imagined) gallery context, the explained variance is even higher (adjusted R2 of 0.78). To explain these findings, we hypothesize that differences in cognitive processing of Pepperell’s abstract paintings are minimized because this set of stimuli has no apparent content and is of uniform artistic style and cultural context. Under this condition, the aesthetic ratings by Ruta and colleagues are largely based on perceptual processing that systematically varies along a relatively small set of objective image properties.

Introduction
Contemporary psychological models of aesthetic experience cover a multitude of factors, which generally embrace the triad of perceptual, cognitive, and emotional processing (Chatterjee & Vartanian, 2014; Graf & Landwehr, 2015; Jacobsen, 2006; Redies, 2015). In the present study, we focus on the perceptual processing of artworks. From a perceptual perspective, the role of statistical image properties (SIPs) remains poorly understood. SIPs represent objective image features, such as the distribution of oriented luminance gradients (Redies et al., 2012), edge orientation entropy (Redies et al., 2017), and various color statistics (Geller et al., 2022). Differences in how SIPs affect subjective aesthetic ratings have been described, for example, for abstract artworks (Lyssenko et al., 2016; Redies et al., 2015; Sidhu et al., 2018), representational paintings (Sidhu et al., 2018), and other visual stimuli (Grebenkina et al., 2018). 
In the present study, we reanalyzed ratings by Ruta (2021) and Ruta et al. (2021) on abstract paintings because, by definition, abstract images are devoid of recognizable content that may prompt cognitive processing. Abstract art is therefore particularly useful for studying the perceptual basis of aesthetic judgments (Geller et al., 2022; Menzel et al., 2018; Redies, 2015; Schwabe et al., 2018). 
Human responses to (abstract) artworks have been compared to other types of images, such as photographs of landscapes or faces (Leder et al., 2016; Vartanian & Skov, 2014; Vessel et al., 2019; Vessel et al., 2018). In general, the preference of human observers for abstract artworks has been described as more variable among observers compared to the other types of images, possibly because cultural influences have more diverse effects on individual preferences for abstract art (Hayn-Leichsenring et al., 2020; Leder et al., 2016; Sidhu et al., 2018; Vessel et al., 2018). 
The abovementioned results suggest that, in general, the preference for abstract visual stimuli is highly idiosyncratic. In agreement with this interpretation, objective image features have been considered only weakly predictive of preference for abstract artworks (Chamberlain, 2022; Leder et al., 2016; Mallon et al., 2014). Instead, the role of variations in individual expertise, cultural exposure, and context has been emphasized (Sidhu et al., 2018; Vessel et al., 2018). For example, the title of an abstract artwork plays an important role in judging its profoundness (Turpin et al., 2019). Furthermore, diverse private preferences of observers for artworks have been linked to differences in personality traits (Chamorro-Premuzic et al., 2009), to differences in the subjective interpretation of the rating terms, which also depend on personality traits (Lyssenko et al., 2016), and to differences in art expertise (Jacobsen, 2010; Mullennix & Robinet, 2018; Silvia, 2013). 
However, another line of research reveals that responses to abstract stimuli are more consistent and less idiosyncratic across observers. In general, these studies used simple stimuli that were rather homogeneous in visual appearance, such as simple graphic patterns (Jacobsen & Höfel, 2002), fractal patterns (Spehar et al., 2016), line patterns (Grebenkina et al., 2018), texture (Friedenberg, 2022), and artificial intelligence–generated artworks (Geller et al., 2022). In these studies, image properties accounted for about 50% to 80% of the variance in the aesthetic ratings. Together, these results demonstrate that the preference for particular patterns of SIPs is highly consistent across observers not only for patterns that can readily be recognized as artificial, but also for computer-generated artworks. It remains unclear, however, whether these conclusions can be generalized to humanmade abstract paintings. 
What, then, is the difference between data sets of images with a low degree versus a high degree of variance explained by the SIPs? One explanation to account for this difference may be that a low degree of explained variance is observed for data sets that represent a wide variety of different abstract art styles (i.e., there is much heterogeneity in the stimulus set). In this case, idiosyncratic preferences of individual observers for cultural content and context may possibly override universal preferences for particular SIP patterns. Conversely, much of the group-level variance in preference ratings can be explained if the paintings analyzed are restricted to a specific art style or to art by a single artist. In this case, images in the data set will differ mostly with respect to objective SIPs in a well-defined and restricted manner, leaving little room for differences in cognitive processing between individuals. Indeed, Hayn-Leichsenring et al. (2020) reported that cross-observer agreement in their study increased noticeably when observers switched from the evaluation of a large variety of paintings to the evaluation of art from a single artist. Alternatively, computer-generated stimuli that have a relatively homogeneous visual appearance may be recognized more easily as artificial and may therefore be valued less than humanmade artworks (Chamberlain et al., 2018). 
In conclusion, SIPs can predict a large portion of the variance of aesthetic responses to artificial patterns and computer-generated abstract images, if the stimuli represent a relatively homogeneous set of images. In the present study, we explore whether this notion also applies to a set of 48 humanmade abstract artworks by the contemporary artist Robert Pepperell.1 The set has been described in the previous study by Ruta et al. (2021). The artist created the paintings to study the effect of curvature on aesthetic wanting and liking decisions. It is well established that humans prefer curved over sharp-angled stimuli (Bar & Neta, 2006; Bertamini et al., 2016; Gómez-Puerto et al., 2016), for example, for common objects, design, and architecture (for a review, see Chuquichambi et al., 2022). Ruta and colleagues demonstrated that the preference for the Pepperell paintings is also affected by curvature. 
We reanalyzed the rating data by Ruta (2021) and Ruta et al. (2021) to examine whether other objective stimulus features predict Ruta's results on liking and wanting decisions as well. Specifically, we explored whether a relatively small set of eight SIPs can predict Ruta’s participants’ ratings for the 48 abstract paintings by Robert Pepperell. The SIPs have been used in previous studies as predictors of aesthetic ratings (Brachmann et al., 2017; Braun et al., 2013; Redies et al., 2017). In particular, they have been investigated in abstract artworks of different artistic styles that were generated by a computer (Geller et al., 2022). The set of SIPs therefore seemed particularly suitable for our present study of the same type of artworks (i.e., abstract paintings). 
Specifically, we asked the following questions: 
  • (1) To what extent can the space spanned by the SIPs (henceforth called SIP space) predict the ratings on the Pepperell paintings from Ruta (2021) and Ruta et al. (2021) along the eight aesthetic dimensions?
  • (2) Does the explanatory power of the SIPs differ between the various types of aesthetic rating dimensions examined by Ruta (2021) and Ruta et al. (2021)?
  • (3) What is the relation between the SIP space and curved/angular contour in predicting the aesthetic ratings from Ruta (2021) and Ruta et al. (2021)?
Methods
In the present work, we reanalyze data from the study by Ruta et al. (2021). Specifically, we use the visual stimuli and the subjective aesthetic ratings from their study. For each stimulus, we calculate the eight SIPs that Geller et al. (2022) previously investigated in a large set of abstract artworks of different artistic styles. We then ask to what extent the SIP space predicts the aesthetic ratings by Ruta (2021). In the following, we will briefly recapitulate the rating experiments by Ruta et al. (2021) and introduce the SIPs calculated in the present work. In doing so, we will focus on the issues that are relevant for the present experiment. 
Stimuli
Stimuli were high-quality digital photographs (945 × 619 pixels) of 48 abstract acrylic paintings, which were created and kindly provided for the present study by Robert Pepperell. According to Ruta and colleagues, “The paintings were designed to present ambiguous forms that suggested certain objects but were not specifically recognizable” (Ruta et al., 2021, p. 3). The 48 paintings (henceforth called Pepperell paintings) comprise 16 triplets. In each triplet, there is one version with curved and smooth contours (henceforth called curved Contour), another one with straight lines and sharp-angled contours (angular Contour), and a third one with mixed curved and sharp-angled contours (mixed Contour). The triplets differ in the shape of the depicted object and in their general coloration. Examples of the Pepperell paintings are shown in Figure 1.
Figure 1.
 
Examples of the stimuli used by Ruta et al. (2021). The four paintings with the highest ratings (A–D) and the lowest ratings (E–H) in the Wanting (gallery) context (Study 2) are shown in descending order of the ratings. Paintings are of curved Contour (A, C–E), mixed Contour (H), and angular Contour (B, F, G), respectively. Robert Pepperell created the paintings, which are reproduced with his permission (copyright Robert Pepperell, 2022).
Rating procedure (Ruta et al., 2021)
Ruta et al. (2021) carried out two rating studies. Study 1 was an online study with 41 participants who were asked to rate their agreement with the following statements that reflect four aesthetic dimensions: “I like this painting,” “I think this painting is comfortable to look at,” “I think this painting is approachable,” and “I think this painting is attractive.” We refer to these rating dimensions as Liking, Comfort, Approachability, and Attractiveness, respectively. Study 2 was a laboratory study with 50 participants and consisted of four separate tasks. The intention of the tasks was to differentiate between liking and wanting of the stimuli. In Task 1, participants had to press a space bar for as long as they wanted to see one of the digital paintings. The length of the free viewing time was taken as an implicit measure of wanting of the paintings (henceforth called Implicit wanting). In Task 2, participants had to make a dichotomous choice of whether or not they liked a painting (Explicit liking). Tasks 3 and 4 served to record explicit wanting judgments for artworks, either in a home context or in a gallery context. For the home context (Task 3), participants were asked to assess the probability that they would take each artwork home (Wanting [home]). In Task 4, participants assessed the probability of exhibiting each painting in their art gallery (Wanting [gallery]). The aesthetic ratings from the two studies were downloaded from the Open Science Framework (Ruta, 2021). As an example, Figure 1 shows the four paintings with the highest and lowest judgments, respectively, for the Wanting (gallery) context. A superficial inspection of the paintings in Figure 1 reveals that other image properties, such as coloration and spatial composition, may also play a role in their aesthetic ratings. For more information on the rating procedure, see Ruta et al. (2021).
SIPs
For each image, we calculated a set of eight SIPs that was used in a previous study on abstract artworks (see Introduction; Geller et al., 2022). A more detailed account of these measures is given in Redies et al. (2020) and in the Appendix of Geller et al. (2022). The SIPs represent the following image properties: 
Complexity
To determine image complexity, each image was converted into a so-called gradient image, which represents the luminance and color gradients in the image. Complexity was expressed as the mean gradient strength in the gradient image (see Appendix in Braun et al., 2013). 
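For illustration, the following Python sketch computes a simplified version of this measure, assuming Sobel gradients on the L*a*b* channels combined by a per-pixel maximum; the exact gradient computation of Braun et al. (2013) may differ in detail, and the file name is hypothetical.

```python
# Simplified sketch of the Complexity measure (mean gradient strength).
# Assumption: Sobel gradients on the L*a*b* channels, combined per pixel by the maximum.
import numpy as np
from skimage import io, color, filters

def complexity(path):
    rgb = io.imread(path)[..., :3] / 255.0
    lab = color.rgb2lab(rgb)
    # Gradient magnitude for each of the L, a, and b channels
    grads = np.stack([filters.sobel(lab[..., c]) for c in range(3)], axis=-1)
    gradient_image = grads.max(axis=-1)   # combined luminance/color gradient image
    return gradient_image.mean()          # Complexity = mean gradient strength

print(complexity("pepperell_01.jpg"))     # hypothetical file name
```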
Self-similarity
To calculate self-similarity, we used the Pyramid Histogram of Oriented Gradients (PHOG) method (see Appendix in Braun et al., 2013). Self-similarity was calculated on the basis of histograms of oriented gradients (HOG descriptors with 32 orientation bins) for pyramidal subsections of the gradient image. Self-similarity is higher if the HOG descriptors of the subsections are closer to those of other subsections at different levels of the pyramid or at the ground level (Amirshahi et al., 2012). In the present study, we compared HOGs of all levels up to Level 3 of the pyramid with the HOG of the ground image. 
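As a rough illustration of this approach (not the exact PHOG implementation of Amirshahi et al., 2012), the sketch below computes gradient-magnitude-weighted orientation histograms with 32 bins for the whole image and for its subsections at pyramid Levels 1 to 3, and averages their histogram intersection with the ground-level histogram; the comparison kernel and other details are simplifying assumptions.

```python
# Simplified sketch of PHOG-style self-similarity (illustrative assumptions).
import numpy as np
from skimage import io, color

def orientation_histogram(gray, bins=32):
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # orientations folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)                # normalized HOG-like descriptor

def self_similarity(path, levels=3, bins=32):
    gray = color.rgb2gray(io.imread(path)[..., :3])
    ground = orientation_histogram(gray, bins)        # ground-level histogram
    scores = []
    for level in range(1, levels + 1):
        n = 2 ** level                                # n x n subsections at this level
        h, w = gray.shape
        for i in range(n):
            for j in range(n):
                sub = gray[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                sub_hist = orientation_histogram(sub, bins)
                scores.append(np.minimum(sub_hist, ground).sum())  # histogram intersection
    return float(np.mean(scores))                     # higher = more self-similar
```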
Second-order entropy of edge orientations
To measure how independently the orientations of luminance edges are distributed across an image, we filtered each image with a set of oriented Gabor filters. To obtain second-order entropy of edge orientations, we determined the differences between the orientations for all pairs of edges in the image (Geisler et al., 2001; Redies et al., 2017). Entropy is high if all orientation differences tend to be about equally prominent (i.e., if orientations are independent of each other across an image). Entropy is lower if particular differences of edge orientations (e.g., parallel or orthogonal orientations) are more frequent in an image. 
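A simplified sketch of this measure is given below: the image is filtered with a bank of oriented Gabor filters, a fixed number of the strongest edge pixels is retained, and the Shannon entropy of the histogram of their pairwise orientation differences is computed. The filter parameters, the number of edge pixels, and the binning are illustrative assumptions; the procedure of Redies et al. (2017) differs in detail.

```python
# Simplified sketch of second-order edge-orientation entropy (illustrative parameters).
import numpy as np
from skimage import io, color, filters

def edge_orientation_entropy(path, n_orient=24, frequency=0.1, n_edges=2000):
    gray = color.rgb2gray(io.imread(path)[..., :3])
    thetas = np.arange(n_orient) * np.pi / n_orient
    # Gabor filter bank: edge strength for each orientation
    resp = []
    for t in thetas:
        real, imag = filters.gabor(gray, frequency=frequency, theta=t)
        resp.append(np.hypot(real, imag))
    resp = np.stack(resp)
    strength = resp.max(axis=0)                       # edge strength per pixel
    orient = thetas[resp.argmax(axis=0)]              # dominant orientation per pixel
    idx = np.argsort(strength.ravel())[-n_edges:]     # keep the strongest edge pixels
    ori = orient.ravel()[idx]
    # Pairwise orientation differences, folded into [0, pi/2]
    diff = np.abs(ori[:, None] - ori[None, :])
    diff = np.minimum(diff, np.pi - diff)
    hist, _ = np.histogram(diff, bins=n_orient // 2, range=(0, np.pi / 2))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())             # Shannon entropy in bits
```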
Variances of feature responses in convolutional neural networks
Convolutional neural networks (CNNs) that have been trained to recognize a multitude of objects or scenes can provide a good model for neural response properties at lower levels of the visual system (Cadena et al., 2019; Kindel et al., 2019). We study two variances, Pa(2) and Pf(30), in the present work. Variance Pa(2) can be interpreted as sparseness, which means that few CNN features tend to respond in few of the four quadrants of an image. The inverse to sparseness is richness of filter responses (low Pa(2)). Traditional artworks tend to show low sparseness (i.e., high richness; Brachmann et al., 2017). Variance Pf(30) is a measure of variability of filter responses between subsections of an image. 
Color measures
We calculated three chromatic features, as described in Geller et al. (2022). For the L*a*b* or CIELAB color space, we included the mean values for the b channel (yellow-blue dimension) in the analysis (henceforth abbreviated Lab [b]; positive values, yellowish; negative values, blueish). Moreover, we used the mean value of the S-channel (saturation) of the HSV color space (abbreviated HSV [S]). Finally, as a measure of colorfulness, we determined the Shannon entropy of the H channel of the HSV color space (abbreviated HSV [H] entropy or Colorfulness). 
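The three chromatic features can be computed directly from the corresponding color-space channels; a sketch is given below (the number of hue bins is an illustrative assumption).

```python
# Sketch of the three color measures: mean Lab (b), mean HSV (S), and HSV (H) entropy.
import numpy as np
from skimage import io, color

def color_measures(path, hue_bins=360):
    rgb = io.imread(path)[..., :3] / 255.0
    lab = color.rgb2lab(rgb)
    hsv = color.rgb2hsv(rgb)
    lab_b = lab[..., 2].mean()                        # yellow-blue dimension (Lab (b))
    hsv_s = hsv[..., 1].mean()                        # mean saturation (HSV (S))
    hist, _ = np.histogram(hsv[..., 0], bins=hue_bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    hue_entropy = float(-(p * np.log2(p)).sum())      # colorfulness (HSV (H) entropy)
    return {"Lab (b)": lab_b, "HSV (S)": hsv_s, "Colorfulness": hue_entropy}
```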
Contour
As an additional predictor, we analyzed the categorical variable Contour (curved/mixed/angular) of the objects depicted in the paintings (Ruta et al., 2021). 
Results of a cross-correlation analysis of the SIPs are given in the Supplementary Material, Section S2. 
Statistical methods
The statistical analysis was performed by using the R program (R Development Core Team, 2017) and the PRISM program for macOS (version 8.4.3; GraphPad Software, San Diego, CA, USA). 
For the multiple linear regression analysis, we were faced with the problem of potential overfitting due to too many predictors (SIPs) for the ratings of a relatively small number of Pepperell paintings (48 images). To avoid overfitting, we reduced the number of predictive variables for each data set from eight to four. In doing so, we followed a rule of thumb that there should be at least 10 data points per predictor variable in multiple linear regression (Peduzzi et al., 1996). The reduction was accomplished by calculating coefficients of determination (R2) for multiple linear regression models for each rating dimension. For this purpose, the lm function of the stats package in the R project (R Development Core Team, 2017) was used. R2 values were adjusted to account for the number of predictors in each model (R2adj). We started with the full set of eight SIPs and iteratively eliminated the four most redundant SIPs from the models while keeping predictive power as high as possible, as monitored by Akaike's entropy-based information criterion, which considers the fit of the model as well as the number of parameters. For more details on the multiple linear regression analysis, see Supplementary Material, Section S3. 
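The original analysis was carried out with the lm function in R; the following Python/statsmodels sketch merely illustrates the same logic under assumed column names: backward elimination from eight to four predictors guided by Akaike's information criterion, followed by reporting the adjusted R2 of the final model.

```python
# Sketch of the predictor reduction (8 -> 4 SIPs) guided by AIC; column names are assumptions.
import statsmodels.api as sm

def reduce_to_four(df, rating, sips):
    """Backward elimination from eight to four predictors, keeping AIC as low as possible."""
    current = list(sips)
    while len(current) > 4:
        candidates = []
        for drop in current:
            rest = [s for s in current if s != drop]
            model = sm.OLS(df[rating], sm.add_constant(df[rest])).fit()
            candidates.append((model.aic, drop))
        _, drop = min(candidates)          # removing this SIP hurts the fit the least
        current.remove(drop)
    final = sm.OLS(df[rating], sm.add_constant(df[current])).fit()
    return current, final

# Hypothetical usage: df holds one row per painting with the eight SIPs and mean ratings.
sips = ["Complexity", "SelfSimilarity", "EdgeEntropy", "Pa2", "Pf30",
        "Lab_b", "HSV_S", "Colorfulness"]
# kept, model = reduce_to_four(df, "Liking", sips)
# print(kept, model.rsquared_adj)
```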
Results
In the present study, we use SIPs to predict how participants in the study by Ruta et al. (2021) rated a set of abstract paintings along different aesthetic dimensions. For the SIPs, a comparison between the Pepperell paintings and three data sets of Western paintings is shown in Supplementary Figure S1. We developed multiple linear regression models to study how well the SIPs or their interactions predict the aesthetic ratings of the 48 paintings. 
To avoid overfitting, we reduced the number of predictors in the models from eight to four variables (SIPs) for each rating dimension, as described in the Methods section. Table 1 lists the R2adj values for the final models. All models are significant. The portion of variance explained (R2adj) ranges from a low value of 18% for Implicit wanting to as high as 78% for Wanting in the gallery context (Wanting [gallery]; Study 2). The other R2adj values range from 0.39 (Explicit liking, Study 2) to 0.60 (Approachability, Study 1) and are indicative of moderate to strong effects. Standardized β values for the SIPs of all models are listed in Supplementary Table S1
Table 1.
 
R2adj values for the aesthetic ratings from the study by Ruta et al. (2021). Notes: Results are listed for three different models (Model 1 to Model 3). The four SIPs that constitute Model 1 and Model 2 are listed in Supplementary Table S1 for each rating dimension. The models are significant at *p < 0.05, **p < 0.01, ****p < 0.0001; ns, not significant. The F statistic is given in each case. Standardized β coefficients for Model 1 are listed in Supplementary Table S1. aModel 2 differs significantly from Model 1 (ANOVA).
We also compared our results to the findings by Ruta and colleagues (2021), who studied the role of Contour (curved/mixed/angular) in mediating aesthetic ratings on the Pepperell paintings (see Introduction section). Ruta et al. (2021) obtained higher ratings for the curved versions of the paintings compared to the angular versions for all types of ratings studied, except for the Implicit wanting and the Wanting (gallery) rating (Ruta et al., 2021). In the present study, we built two additional models. For Model 2, we used the same four SIPs as for Model 1 but introduced Contour as an additional categorical factor in the regression analysis. Model 3 consists of Contour only. Results are shown in Table 1. For most rating dimensions, Model 3 is not significant. Exceptions are Explicit liking and Wanting (home), where weak effects of Contour on the ratings can be seen. An analysis of variance (ANOVA) revealed that there is a significant difference in the explained variance (R2adj) between Model 1 and Model 2 for Explicit liking (F(2, 42) = 13.67) and Wanting (home) (F(2, 42) = 4.94) as well as for ratings of Comfort (F(2, 42) = 4.21), where the addition of Contour to Model 1 increases predictive power. For the other rating dimensions, the two models do not differ significantly. In conclusion, the overall effect of Contour is less robust and consistent when compared to the SIP space used in the present study. 
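For illustration, the comparison of the three models can be reproduced along the following lines (a sketch with assumed column names, not the authors' original R code): Model 2 adds the categorical factor Contour to the four SIPs of Model 1, and a nested-model ANOVA tests whether this addition significantly increases the explained variance.

```python
# Sketch of the Model 1 vs. Model 2 vs. Model 3 comparison; column names are assumptions.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def compare_models(df, rating, four_sips):
    rhs = " + ".join(four_sips)
    m1 = smf.ols(f"{rating} ~ {rhs}", data=df).fit()                # Model 1: four SIPs
    m2 = smf.ols(f"{rating} ~ {rhs} + C(Contour)", data=df).fit()   # Model 2: SIPs + Contour
    m3 = smf.ols(f"{rating} ~ C(Contour)", data=df).fit()           # Model 3: Contour only
    print(m1.rsquared_adj, m2.rsquared_adj, m3.rsquared_adj)
    return anova_lm(m1, m2)   # F test: does adding Contour improve the fit?

# Hypothetical usage:
# print(compare_models(df, "Explicit_liking", ["Complexity", "EdgeEntropy", "Lab_b", "HSV_S"]))
```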
Discussion
SIPs are strong predictors for aesthetic ratings of the Pepperell paintings
The present results indicate that as few as four SIPs can predict a large percentage of the variance (R2adj) of the aesthetic responses to the 48 Pepperell paintings (group averages for each painting; Table 1). The magnitude of the explained variances is similar to that found previously for artificial patterns and computer-generated abstract artworks (Geller et al., 2022; Grebenkina et al., 2018; Jacobsen & Höfel, 2002; Schwabe et al., 2018; Spehar et al., 2016). We conclude that a high degree of predictability of aesthetic ratings by SIPs is not restricted to artificial stimuli or to computer-generated abstract artworks but can be found also in the set of abstract paintings created by Robert Pepperell. 
One possible reason for the high explained variance of the Pepperell paintings is their large degree of homogeneity with respect to artistic style, cultural context, and (the lack of) figurative content. We speculate that this homogeneity in the stimulus material leaves little room for the type of differential cognitive processing of contextual and cultural qualities that more diverse sets of paintings might elicit. A similar uniformity is not observed when sets of abstract artworks diverge more widely across contextual and cultural variables (Mallon et al., 2014; Sidhu et al., 2018; Vessel et al., 2018). We speculate that, in the latter case, differences in contextual (cognitive) features overrule the responses to perceptual features, and as a consequence, SIPs cannot explain the variance in the ratings to the same extent (i.e., R2adj values are relatively low). 
Differences between the rating dimensions
In the present study, we investigated a relatively large set of eight aesthetic rating dimensions. Although these dimensions shared patterns of predictive SIPs overall (Supplementary Table S1), there were also differences in which SIPs predicted which ratings best. As pointed out previously (Augustin et al., 2012; Lyssenko et al., 2016; Redies et al., 2015; Sidhu et al., 2018), a coherent choice of the rating terms is thus crucial in obtaining results that can be compared between studies. 
Of particular interest is the difference between the two Wanting ratings (home and gallery contexts) from Study 2 of Ruta et al. (2021). Notably, the explained variance for the Wanting (gallery) rating (R2adj = 0.78; Table 1) is higher than that of the Wanting (home) rating (R2adj = 0.47). Moreover, for the home situation, participants prefer less saturated paintings (lower values of HSV [S]) but not for the gallery situation (Supplementary Table S1). The opposite pattern is seen for the variable Lab (b). Here, participants prefer more bluish as opposed to yellowish paintings for the gallery situation (Figure 1) but not for the home situation (Supplementary Table S1). Why a particular combination of SIPs predicts participants’ preference better in the gallery situation than in the home situation remains unclear. One hypothetical explanation would be that shared taste plays a larger role in the gallery situation, while in the home situation, the participants’ idiosyncratic taste is more dominant. We hypothesize that the awareness of shared taste may thus allow participants to carry out normative evaluations for the Pepperell paintings in the gallery situation. Sidhu et al. (2018) advanced a similar notion for representational paintings. 
Experimental limitations
Our study has a number of limitations. First, it is based on the analysis of 48 paintings from a single Western artist and thus cannot be generalized to abstract art as a whole. It remains to be studied whether any conclusion from the present study can be generalized to abstract artworks from other cultural contexts, art styles, and artists, and to other experimental conditions. Second, our analysis is based on a relatively small set of eight SIPs. In view of the large number of other SIPs that have been employed in aesthetics research (Brachmann & Redies, 2017; Sidhu et al., 2018), we cannot exclude that other SIPs would predict the aesthetic ratings even more strongly. Third, the ratings from Study 1 were acquired in an online study (Ruta et al., 2021). Differences in the presentation mode (e.g., due to uncalibrated monitors) may increase unexplained variance in the SIPs (e.g., in the color statistics of the paintings). 
Hypothesis and outlook
In the present study, we show that the SIPs can explain the aesthetic judgments about the 48 abstract paintings by Robert Pepperell surprisingly well. The paintings are homogeneous with regard to their artistic style, as well as cultural context and content. Similar results were obtained previously for a homogeneous set of computer-generated abstract artworks (Geller et al., 2022). We speculate that for data sets of images that exhibit a uniform cultural context and similar artistic style, perceptual processing based on SIPs dominates the aesthetic evaluation while differences in cognitive processing of cultural context play a minor role only (Menzel et al., 2018; Redies, 2015; Schwabe et al., 2018). Under this condition, a large portion of the variance in mean aesthetic ratings can be explained by the SIPs. Moreover, we speculate that participants have an (intuitive) idea of this common taste and can base their aesthetic judgments on it, as might be the case in the Wanting (gallery) condition. However, these hypotheses need to be tested with more extensive data sets because the set of artworks used in the present exploratory study is exceedingly small. 
Acknowledgments
The authors are grateful to Robert Pepperell for his permission to use his data set of abstract paintings and to reproduce some of them in this article. They thank Katja Thömmes and Hannah Geller for discussion and expert comments on the manuscript. 
Supported by funds from the Institute of Anatomy I, Jena University Hospital, Germany; the German Research Foundation (Project No. 512648189); and the Open Access Publication Fund of the Thüringer Universitäts- und Landesbibliothek Jena. 
Commercial relationships: none. 
Corresponding author: Christoph Redies. 
Email: christoph.redies@med.uni-jena.de. 
Address: Institute of Anatomy I, Jena, Germany. 
Footnotes
1   https://robertpepperell.com (accessed December 8, 2022).
References
Altmann, C. S., Brachmann, A., & Redies, C. (2021). Liking of art and the perception of color. Journal of Experimental Psychology: Human Perception and Performance, 47(4), 545–564, https://doi.org/10.1037/xhp0000771. [PubMed]
Amirshahi, S. A., Hayn-Leichsenring, G. U., Denzler, J., & Redies, C. (2015). JenAesthetics subjective dataset: Analyzing paintings by subjective scores. Lecture Notes in Computer Science, 8925, 3–19, https://doi.org/10.1007/978-3-319-16178-5_1. [CrossRef]
Amirshahi, S. A., Koch, M., Denzler, J., & Redies, C. (2012). PHOG analysis of self-similarity in esthetic images. Proceedings of SPIE (Human Vision and Electronic Imaging XVII), 8291, 82911J, https://doi.org/10.1117/12.911973.
Augustin, M. D., Wagemans, J., & Carbon, C. C. (2012). All is beautiful? Generality vs. specificity of word usage in visual aesthetics. Acta Psychologica, 139(1), 187–201, https://doi.org/10.1016/j.actpsy.2011.10.004. [CrossRef] [PubMed]
Bar, M., & Neta, M. (2006). Humans prefer curved visual objects. Psychological Science, 17(8), 645–648, https://doi.org/10.1111/j.1467-9280.2006.01759.x. [CrossRef] [PubMed]
Bertamini, M., Palumbo, L., Gheorghes, T. N., & Galatsidas, M. (2016). Do observers like curvature or do they dislike angularity? British Journal of Psychology, 107(1), 154–178, https://doi.org/10.1111/bjop.12132. [CrossRef]
Brachmann, A., Barth, E., & Redies, C. (2017). Using CNN features to better understand what makes visual artworks special. Frontiers in Psychology, 8, 830, https://doi.org/10.3389/fpsyg.2017.00830. [CrossRef] [PubMed]
Brachmann, A., & Redies, C. (2017). Computational and experimental approaches to visual aesthetics. Frontiers in Computational Neuroscience, 11, 102, https://doi.org/10.3389/fncom.2017.00102. [CrossRef] [PubMed]
Braun, J., Amirshahi, S. A., Denzler, J., & Redies, C. (2013). Statistical image properties of print advertisements, visual artworks and images of architecture. Frontiers in Psychology, 4, 808, https://doi.org/10.3389/fpsyg.2013.00808. [CrossRef] [PubMed]
Cadena, S. A., Denfield, G. H., Walker, E. Y., Gatys, L. A., Tolias, A. S., Bethge, M., & Ecker, A. S. (2019). Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Computational Biology, 15(4), e1006897, https://doi.org/10.1371/journal.pcbi.1006897. [CrossRef] [PubMed]
Chamberlain, R. (2022). The interplay of objective and subjective factors in empirical aesthetics. In Ionescu, B., Bainbridge, W. A., Murray, N. (Eds.), Human perception of visual information (pp. 115–132). Cham, Switzerland: Springer.
Chamberlain, R., Mullin, C., Scheerlinck, B., & Wagemans, J. (2018). Putting the art in artificial: Aesthetic responses to computer-generated art. Psychology of Aesthetics, Creativity, and the Arts, 12(2), 177–192, https://doi.org/10.1037/aca0000136. [CrossRef]
Chamorro-Premuzic, T., Reimers, S., Hsu, A., & Ahmetoglu, G. (2009). Who art thou? Personality predictors of artistic preferences in a large UK sample: the importance of openness. British Journal of Psychology, 100(3), 501–516, https://doi.org/10.1348/000712608X366867. [CrossRef]
Chatterjee, A., & Vartanian, O. (2014). Neuroaesthetics. Trends in Cognitive Sciences, 18(7), 370–375, https://doi.org/10.1016/j.tics.2014.03.003. [CrossRef] [PubMed]
Chuquichambi, E. G., Vartanian, O., Skov, M., Corradi, G. B., Nadal, M., Silvia, P. J., & Munar, E. (2022). How universal is preference for visual curvature? A systematic review and meta-analysis. Annals of the New York Academy of Sciences, 1518(1), 151–165, https://doi.org/10.1111/nyas.14919. [CrossRef] [PubMed]
Fekete, A., Pelowski, M., Specker, E., Brieber, D., Rosenberg, R., & Leder, H. (2022). The Vienna Art Picture System (VAPS): A data set of 999 paintings and subjective ratings for art and aesthetics research. Psychology of Aesthetics, Creativity, and the Arts (advance online publication), https://doi.org/10.1037/aca0000460.
Friedenberg, J. (2022). What makes textures beautiful? Effects of shared orientation. Psychology of Aesthetics, Creativity, and the Arts, 16(2), 361–369, https://doi.org/10.1037/aca0000349. [CrossRef]
Geisler, W. S., Perry, J. S., Super, B. J., & Gallogly, D. P. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41(6), 711–724, https://doi.org/10.1016/S0042-6989(00)00277-7. [CrossRef] [PubMed]
Geller, H. A., Bartho, R., Thommes, K., & Redies, C. (2022). Statistical image properties predict aesthetic ratings in abstract paintings created by neural style transfer. Frontiers in Neuroscience, 16, 999720, https://doi.org/10.3389/fnins.2022.999720. [CrossRef] [PubMed]
Gómez-Puerto, G., Munar, E., & Nadal, M. (2016). Preference for curvature: A historical and conceptual framework. Frontiers in Human Neuroscience, 9, 712, https://doi.org/10.3389/fnhum.2015.00712. [CrossRef] [PubMed]
Graf, L. K., & Landwehr, J. R. (2015). A dual-process perspective on fluency-based aesthetics: the pleasure-interest model of aesthetic liking. Personality and Social Psychology Review, 19(4), 395–410, https://doi.org/10.1177/1088868315574978. [CrossRef]
Grebenkina, M., Brachmann, A., Bertamini, M., Kaduhm, A., & Redies, C. (2018). Edge orientation entropy predicts preference for diverse types of man-made images. Frontiers in Neuroscience, 12, 678, https://doi.org/10.3389/fnins.2018.00678. [CrossRef] [PubMed]
Hayn-Leichsenring, G. U., Kenett, Y. N., Schulz, K., & Chatterjee, A. (2020). Abstract art paintings, global image properties, and verbal descriptions: An empirical and computational investigation. Acta Psychologica, 202, 102936, https://doi.org/10.1016/j.actpsy.2019.102936. [CrossRef] [PubMed]
Hayn-Leichsenring, G. U., Lehmann, T., & Redies, C. (2017, May–June). Subjective ratings of beauty and aesthetics: Correlations with statistical image properties in Western oil paintings. i-Perception, pp. 1–21, https://doi.org/10.1177/2041669517715474.
Jacobsen, T. (2006). Bridging the arts and the sciences: A framework for the psychology of aesthetics. Leonardo 39(2), 155–162, https://doi.org/10.1162/leon.2006.39.2.155. [CrossRef]
Jacobsen, T. (2010). Beauty and the brain: Culture, history and individual differences in aesthetic appreciation. Journal of Anatomy, 216(2), 184–191, https://doi.org/10.1111/j.1469-7580.2009.01164.x. [CrossRef] [PubMed]
Jacobsen, T., & Höfel, L. (2002). Aesthetic judgments of novel graphic patterns: Analyses of individual judgments. Perceptual and Motor Skills, 95(3, Pt 1), 755–766, https://doi.org/10.2466/pms.2002.95.3.755. [CrossRef] [PubMed]
Kindel, W. F., Christensen, E. D., & Zylberberg, J. (2019). Using deep learning to probe the neural code for images in primary visual cortex. Journal of Vision, 19(4), 29, https://doi.org/10.1167/19.4.29. [CrossRef] [PubMed]
Leder, H., Goller, J., Rigotti, T., & Forster, M. (2016). Private and shared taste in art and face appreciation. Frontiers in Human Neuroscience, 10, 155, https://doi.org/10.3389/fnhum.2016.00155. [CrossRef] [PubMed]
Lyssenko, N., Redies, C., & Hayn-Leichsenring, G. U. (2016). Evaluating abstract art: Relation between term usage, subjective ratings, image properties and personality traits. Frontiers in Psychology, 7, 973, https://doi.org/10.3389/fpsyg.2016.00973. [CrossRef] [PubMed]
Mallon, B., Redies, C., & Hayn-Leichsenring, G. U. (2014). Beauty in abstract paintings: Perceptual contrast and statistical properties. Frontiers in Human Neuroscience, 8, 161, https://doi.org/10.3389/fnhum.2014.00161. [CrossRef] [PubMed]
Menzel, C., Kovacs, G., Amado, C., Hayn-Leichsenring, G. U., & Redies, C. (2018). Visual mismatch negativity indicates automatic, task-independent detection of artistic image composition in abstract artworks. Biological Psychology, 136, 76–86, https://doi.org/10.1016/j.biopsycho.2018.05.005. [CrossRef] [PubMed]
Mullennix, J. W., & Robinet, J. (2018). Art expertise and the processing of titled abstract art. Perception, 47(4), 359–378, https://doi.org/10.1177/0301006617752314. [CrossRef] [PubMed]
Myers, R. J. (1994). Classical and modern regression analysis with applications (2nd ed.). Belmont, CA: Duxbury.
Nascimento, S. M., Albers, A. M., & Gegenfurtner, K. R. (2021). Naturalness and aesthetics of colors—Preference for color compositions perceived as natural. Vision Research, 185, 98–110, https://doi.org/10.1016/j.visres.2021.03.010. [CrossRef] [PubMed]
Nascimento, S. M., Linhares, J. M., Montagner, C., Joao, C. A., Amano, K., Alfaro, C., & Bailao, A. (2017). The colors of paintings and viewers' preferences. Vision Research, 130, 76–84, https://doi.org/10.1016/j.visres.2016.11.006. [CrossRef] [PubMed]
Peduzzi, P., Concato, J., Kemper, E., Holford, T. R., & Feinstein, A. R. (1996). A simulation study of the number of events per variable in logistic regression analysis. Journal of Clinical Epidemiology, 49(12), 1373–1379, https://doi.org/10.1016/s0895-4356(96)00236-3. [CrossRef] [PubMed]
R Development Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Redies, C. (2015). Combining universal beauty and cultural context in a unifying model of visual aesthetic experience. Frontiers in Human Neuroscience, 9, 219, https://doi.org/10.3389/fnhum.2015.00218. [CrossRef] [PubMed]
Redies, C., Amirshahi, S. A., Koch, M., & Denzler, J. (2012). PHOG-derived aesthetic measures applied to color photographs of artworks, natural scenes and objects. ECCV 2012 Ws/Demos, Part I, Lecture Notes in Computer Science, 7583, 522–531, https://doi.org/10.1007/978-3-642-33863-2_54.
Redies, C., & Brachmann, A. (2017). Statistical image properties in large subsets of traditional art, bad art, and abstract art. Frontiers in Neuroscience, 11, 593, https://doi.org/10.3389/fnins.2017.00593. [PubMed]
Redies, C., Brachmann, A., & Hayn-Leichsenring, G. U. (2015). Changes of statistical properties during the creation of graphic artworks. Art & Perception, 3, 93–116, https://doi.org/10.1163/22134913-00002017.
Redies, C., Brachmann, A., & Wagemans, J. (2017). High entropy of edge orientations characterizes visual artworks from diverse cultural backgrounds. Vision Research, 133, 130–144, https://doi.org/10.1016/j.visres.2017.02.004. [PubMed]
Redies, C., Grebenkina, M., Mohseni, M., Kaduhm, A., & Dobel, C. (2020). Global image properties predict ratings of affective pictures. Frontiers in Psychology, 11, 953, https://doi.org/10.3389/fpsyg.2020.00953.
Ruta, N. (2021). Preference for paintings is also affected by curvature (dataset). Accessed April 25, 2023, from https://osf.io/yfe8p.
Ruta, N., Vano, J., Pepperell, R., Corradi, G. B., Chuquichambi, E. G., Rey, C., & Munar, E. (2021). Preference for paintings is also affected by curvature. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication, https://doi.org/10.1037/aca0000395.
Schwabe, K., Menzel, C., Mullin, C., Wagemans, J., & Redies, C. (2018). Gist perception of image composition in abstract artworks. i-Perception, 9(3), 2041669518780797, https://doi.org/10.1177/2041669518780797. [PubMed]
Sidhu, D. M., McDougall, K. H., Jalava, S. T., & Bodner, G. E. (2018). Prediction of beauty and liking ratings for abstract and representational paintings using subjective and objective measures. PLoS One, 13(7), e0200431, https://doi.org/10.1371/journal.pone.0200431. [PubMed]
Silvia, P. J. (2013). Interested experts, confused novices: Art expertise and the knowledge emotions. Empirical Studies of the Arts, 31(1), 107–115, https://doi.org/10.2190/EM.31.1.f.
Spehar, B., Walker, N., & Taylor, R. P. (2016). Taxonomy of individual variations in aesthetic responses to fractal patterns. Frontiers in Human Neuroscience, 10, 350, https://doi.org/10.3389/fnhum.2016.00350. [PubMed]
Turpin, M. H., Walker, A., Kara-Yakoubian, M., Gabert, N. N., Fugelsang, J., & Stolz, J. A. (2019). Bullshit makes the art grow profounder. Judgment and Decision Making, 14(6), 658–670, https://doi.org/10.2139/ssrn.3410674.
Vartanian, O., & Skov, M. (2014). Neural correlates of viewing paintings: Evidence from a quantitative meta-analysis of functional magnetic resonance imaging data. Brain and Cognition, 87, 52–56, https://doi.org/10.1016/j.bandc.2014.03.004. [PubMed]
Vessel, E. A., Isik, A. I., Belfi, A. M., Stahl, J. L., & Starr, G. G. (2019). The default-mode network represents aesthetic appeal that generalizes across visual domains. Proceedings of the National Academy of Sciences of the United States of America, 116(38), 19155–19164, https://doi.org/10.1073/pnas.1902650116. [PubMed]
Vessel, E. A., Maurer, N., Denker, A. H., & Starr, G. G. (2018). Stronger shared taste for natural aesthetic domains than for artifacts of human culture. Cognition, 179, 121–131, https://doi.org/10.1016/j.cognition.2018.06.009. [PubMed]