The rapid advancement of AI models such as generative adversarial networks (GANs) has enabled the generation of photorealistic images and video. Although researchers have used GANs and style transfer techniques to successfully synthesize artworks in Western painting styles such as Impressionism, their capability on traditional Chinese paintings remains relatively untested.
In the paper ChipGAN: A Generative Adversarial Network for Chinese Ink Wash Painting Style Transfer, a team of researchers from Peking University and Tsinghua University propose an end-to-end GAN-based architecture that can transfer input photos into the style of Chinese ink wash paintings.
The first problem researchers encountered was finding an appropriate dataset, as the majority of current painting datasets contain artworks from Western artists such as Van Gogh and Monet. Researchers built their “ChipPhi” dataset with 1630 pictures of horses in various colors and poses and 912 images of horse paintings by Chinese ink wash painting master Xu Beihong, who is known for his studies of horses. The dataset also includes 1976 photos of landscapes from around the world and 1542 images of landscape paintings by ink wash master Huang Binhong. The images were collected from the Internet and art studios.
Unlike Western oil-on-canvas paintings, traditional Chinese painting uses black ink brushed onto highly absorbent paper that diffuses the ink. Three unique techniques distinguish Chinese painting: voids, brush strokes, and ink wash. Researchers addressed these one by one.
The void technique describes how Chinese artists purposely leave blank areas in their pictures — sometimes referred to as “negative space” — to balance a composition. To synthesize this effect, the model must intentionally ignore some components of the input image. For example, it needs to be “blind” to the blue sky or clouds above a horse while still preserving the animal’s outline. Researchers used a combination of cycle consistency loss and adversarial loss as a constraint to achieve this effect. They also introduced a brush stroke loss to reduce brush strokes in the generated picture to the bare essentials. In the last stage, an ink wash loss was added to correct the overall ink diffusion effect of the image.
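The weighted combination of these constraints can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the L1 proxies for the brush stroke and ink wash terms and the `lambda_*` weights are assumptions for clarity, and the adversarial term uses a least-squares GAN form.

```python
import numpy as np

def adversarial_loss(d_fake):
    """Generator-side least-squares GAN loss: push D(fake) toward 1."""
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_consistency_loss(photo, reconstructed):
    """L1 distance between the input photo and its round-trip reconstruction,
    which lets the model 'ignore' void regions without losing outlines."""
    return float(np.mean(np.abs(photo - reconstructed)))

def brush_stroke_loss(fake_edges, real_edges):
    """Illustrative L1 proxy: keep edge maps of the generated painting close
    to the sparse strokes seen in real ink wash paintings."""
    return float(np.mean(np.abs(fake_edges - real_edges)))

def ink_wash_loss(fake_blurred, real_blurred):
    """Illustrative L1 proxy: compare blurred versions of generated and real
    paintings to match the overall ink-diffusion tone."""
    return float(np.mean(np.abs(fake_blurred - real_blurred)))

def total_loss(components, lambda_cyc=10.0, lambda_brush=1.0, lambda_ink=1.0):
    """Weighted sum of the four constraints (hypothetical weights)."""
    adv, cyc, brush, ink = components
    return adv + lambda_cyc * cyc + lambda_brush * brush + lambda_ink * ink

# Toy usage with tiny arrays standing in for images and edge/blur maps:
photo, recon = np.ones((2, 2)), np.zeros((2, 2))
parts = (adversarial_loss(np.zeros(4)),
         cycle_consistency_loss(photo, recon),
         brush_stroke_loss(np.zeros(4), np.zeros(4)),
         ink_wash_loss(np.zeros(4), np.zeros(4)))
print(total_loss(parts))  # 1.0 + 10.0 * 1.0 = 11.0
```

In practice the paper computes the brush stroke and ink wash terms via learned components (an edge extractor and discriminators) rather than plain L1 distances; the sketch only shows how the constraints combine into one training objective.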
These three constraints enable ChipGAN’s impressive performance in automatic style transfer from real pictures to Chinese ink wash paintings. The simulated tone and diffusion effects convey, for example, the depth and majesty of distant mountains, an effect admired in ink wash landscapes.
Although many may regard AI painting as an unwelcome intrusion into a human cultural domain, the researchers believe that by revealing subtle details when generating new paintings based on the work of old masters, ChipGAN could help novice painters better appreciate the art form’s unique characteristics, such as how a master might depict a galloping horse’s legs and mane.
The paper ChipGAN: A Generative Adversarial Network for Chinese Ink Wash Painting Style Transfer is available on the MIT website.
Journalist: Fangyu Cai | Editor: Michael Sarazen