Although contemporary AI models have shown a remarkable ability to generate impressive visual artworks across a variety of styles, isn’t there something fundamental missing in these works? In the words of French post-Impressionist painter Paul Cézanne, “A work of art which does not begin in emotion is not art.” Is it possible for machines to understand and integrate human emotions in this context?
Meet ArtEmis, a new large-scale dataset of emotional reactions and explanations for visual artworks from researchers at Stanford University, Laboratoire d’Informatique de l’Ecole Polytechnique (LIX) and King Abdullah University of Science and Technology (KAUST). The team used ArtEmis to develop machine learning models that can predict the dominant emotion from images or texts and provide associated explanations.
The researchers note that whereas most natural images in machine learning tasks are labelled according to the objects or actions they depict, the visual art domain also involves understanding viewers’ affective responses, a relatively complex analysis that integrates image content with its effect on the viewer. The team says developing novel models that predict emotion from nuanced perceptual stimuli such as visual art could also yield a richer understanding of ordinary images for downstream tasks.

The researchers recruited 6,377 human annotators via Amazon Mechanical Turk to provide grounded verbal explanations for the dominant emotion communicated by different artistic images. When viewing an artwork, annotators chose from eight emotions and a “something else” option, then explained the dominant elicited emotion in their own words and identified the elements in the artwork that contributed to their decision.
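To make the protocol concrete, here is a minimal sketch of what a single annotation might look like in Python, using the eight emotion categories from the ArtEmis paper plus the “something else” option; the record’s field names and artwork ID are hypothetical, not the dataset’s actual schema.

```python
# The eight ArtEmis emotion categories plus the catch-all option.
EMOTIONS = [
    "amusement", "awe", "contentment", "excitement",  # positive
    "anger", "disgust", "fear", "sadness",            # negative
    "something else",
]

# Hypothetical annotation record; field names are illustrative only.
annotation = {
    "artwork": "wikiart/landscape-example",  # placeholder artwork ID
    "emotion": "awe",
    "explanation": "The vast, glowing sky dwarfs the tiny figures below.",
}
assert annotation["emotion"] in EMOTIONS
```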


The researchers built the ArtEmis dataset on top of the visual art encyclopedia WikiArt. ArtEmis contains 81,446 curated artworks from 1,119 artists, covering 27 art styles and 45 genres from the 15th century to the present. It also includes 439,121 explanatory utterances and emotional responses related to the artworks.
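For readers who want to explore the data, the sketch below shows one way to load and summarize a CSV export of ArtEmis with pandas. The file name and the painting/emotion/utterance column names are assumptions about the release format, so check the actual files at artemisdataset.org.

```python
import pandas as pd

# Assumed file and column names; adjust to the actual ArtEmis release.
df = pd.read_csv("artemis_dataset.csv")  # one row per emotional response + explanation

print(f"{len(df):,} annotations on {df['painting'].nunique():,} artworks")

# How often annotators chose each dominant-emotion label.
print(df["emotion"].value_counts(normalize=True).round(3))

# Average explanation length in words, a rough proxy for richness.
print(df["utterance"].str.split().str.len().mean())
```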

The researchers trained a series of “neural speaker” models on ArtEmis. While these are not exactly art critics, they can produce plausible grounded emotion explanations after viewing artworks. The team says they speak in language that is “significantly more affective, abstract, and rich with metaphors and similes” to express “moods, feelings, personal attitudes, but also abstract concepts like freedom or love.” (Maybe they are a lot like art critics.)
The neural speakers generate emotion explanations for artworks using two popular backbone architectures: Show, Attend and Tell (SAT), which pairs an image encoder with an attentive LSTM decoder; and M², a meshed Transformer model with memory for image captioning. The proposed neural speakers also incorporate a fine-tuned, ImageNet-pretrained ResNet32 encoder, trained by minimizing the KL divergence between its output and the empirical user distributions of ArtEmis.
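The KL-divergence objective is simple to express in code. The following PyTorch sketch is a reconstruction under stated assumptions: it substitutes torchvision’s ResNet-18 for the paper’s ResNet32 encoder, and the `targets` tensor is random data standing in for ArtEmis’s empirical per-artwork emotion distributions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_EMOTIONS = 9  # eight emotions plus "something else"

# ImageNet-pretrained backbone with a fresh emotion head (ResNet-18 here,
# standing in for the paper's ResNet32 encoder).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_EMOTIONS)

images = torch.randn(4, 3, 224, 224)  # placeholder batch of artwork images
targets = torch.softmax(torch.randn(4, NUM_EMOTIONS), dim=1)  # stand-in for annotators' empirical distributions

# kl_div expects log-probabilities as input and probabilities as target.
log_probs = F.log_softmax(model(images), dim=1)
loss = F.kl_div(log_probs, targets, reduction="batchmean")
loss.backward()
```

Matching the full distribution of annotator responses, rather than a single majority label, preserves the subjectivity of emotional reactions that ArtEmis is designed to capture.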
The researchers say predicting the fine-grained emotions captured in ArtEmis remains challenging because of the intrinsic difficulty of producing emotion explanations in language. Nonetheless, the team observed in experiments that their best neural speaker model was able to produce “well-grounded affective explanations, respond to abstract visual stimuli, and fare reasonably well in emotional Turing tests, even when competing with humans.”
The paper ArtEmis: Affective Language for Visual Art is available on arXiv, and the ArtEmis dataset and newly proposed neural speakers can be found at artemisdataset.org.
Reporter: Fangyu Cai | Editor: Michael Sarazen
