AI has in recent years become increasingly capable of generating impressive artworks in a variety of styles, thanks mainly to the emergence and refining of Generative Adversarial Networks (GANs). Now, Princeton undergrad student Alice Xue has designed a GAN framework for Chinese landscape painting generation that is so effective most humans can’t distinguish its works from the real thing.
The proposed framework, Sketch-And-Paint GAN (SAPGAN), is the first end-to-end model for Chinese landscape painting generation without conditional input. In a visual Turing test, 242 participants identified SAPGAN paintings as human artworks significantly more frequently than paintings from baseline GANs.
“Popular GAN-based art generation methods such as style transfer rely too heavily on conditional inputs,” Xue explains. Models dependent on conditional input have limited generative capability, since their images are built on a single, human-fed input. This means they can only produce derivative artworks that are in essence stylistic copies of the conditional input.
Xue proposes that a model not reliant on conditional input could generate an infinite set of paintings seeded from latent space, with not only the style but also the content of its outputs varied artistically through this end-to-end creation process.
To mimic the sketch-and-paint process of traditional Chinese landscape painters, SAPGAN was designed with two stages: a SketchGAN component that generates edge maps, and a PaintGAN component that performs the subsequent edge-to-painting translation. To improve SketchGAN’s training, Xue curated a new dataset of 2,192 high-quality traditional Chinese landscape paintings sourced from museum collections.
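The two-stage flow described above can be sketched roughly as follows. This is a minimal, non-learned mock-up in Python: the function names `sketch_gan` and `paint_gan`, the latent dimension, and the image sizes are all illustrative assumptions, not the paper’s actual architecture or released code.

```python
import numpy as np

def sketch_gan(z, size=64):
    """Stage 1 (mock): map a latent vector to a binary edge map.

    A real SketchGAN would be a trained generator network; here we just
    derive a deterministic pseudo-random edge map from the latent seed.
    """
    rng = np.random.default_rng(abs(int(z.sum() * 1e6)) % (2**32))
    return (rng.random((size, size)) > 0.9).astype(np.float32)

def paint_gan(edge_map):
    """Stage 2 (mock): translate an edge map into a 3-channel painting.

    A real PaintGAN would perform learned edge-to-painting translation;
    here we simply darken the canvas along the sketched edges.
    """
    h, w = edge_map.shape
    painting = np.ones((h, w, 3), dtype=np.float32)  # blank "paper"
    painting[edge_map > 0] = 0.2                     # ink along edges
    return painting

def sapgan(z):
    """End-to-end: latent vector -> sketch -> painting, no conditional input."""
    return paint_gan(sketch_gan(z))

z = np.random.standard_normal(128)  # latent seed drawn from noise
img = sapgan(z)
print(img.shape)  # (64, 64, 3)
```

The key structural point the mock-up illustrates is that the only input is a latent noise vector `z`: because no human-supplied reference image conditions the output, every new seed yields a painting with both novel content and novel composition.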
When compared to RaLSGAN and StyleGAN2, the proposed SAPGAN was judged as performing better on both realism and artistic composition. The human evaluators in the visual Turing test each looked at 18 paintings — six each from SAPGAN, human painters, and the baseline model RaLSGAN. The SAPGAN paintings were chosen as human-produced 55 percent of the time, while the baseline RaLSGAN model managed a fooling rate of just 11 percent.
Xue believes the research can help lay the groundwork for truly machine-original art generation. She says the model is not confined to Chinese paintings, and can be generalized to other artistic styles that emphasize edge definition.
The paper End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks is on arXiv.
Reporter: Yuan Yuan | Editor: Michael Sarazen