The age-old beauty industry is getting a dynamic makeover from the thousands of bloggers sharing beauty and makeup tips, techniques, and cosmetic preferences on the Internet. But when it comes to visual questions like “Which shade of lipstick should I try?” or “Why does my makeup look so different from the makeup in the demo video?”, AI may be better equipped to provide the answers.
A multi-institute research group recently released the paper PSGAN: Pose-Robust Spatial-Aware GAN for Customizable Makeup Transfer, which proposes a novel method for transferring makeup styles from a reference picture to a user’s source image.
PSGAN comprises three networks:
- A Makeup Distillation Network (MDNet) utilizes the encoder-bottleneck architecture of GANs (Generative Adversarial Networks) but without the decoder part. MDNet disentangles makeup-related features such as lip gloss, eye shadow, etc. from intrinsic facial features (eye size and shape, etc.), representing the makeup as matrices γ and β.
- An Attentive Makeup Morphing (AMM) module specifies how each pixel in the source image should be morphed from pixels in the reference image, producing the morphed matrices γ’ and β’.
- A De-makeup & Re-makeup Network (DRNet) removes the original makeup from the source image, then re-applies the reference makeup using pixel-wise weighted multiplication and addition with the γ’ and β’ matrices.
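The re-makeup step described above can be sketched as a simple pixel-wise scale-and-shift on feature maps. This is an illustrative toy example, not the authors' implementation; the shapes, values, and the `apply_makeup` helper are assumptions for demonstration:

```python
import numpy as np

def apply_makeup(x, gamma_p, beta_p):
    """Pixel-wise weighted multiplication and addition: x * gamma' + beta'.

    x       -- de-makeup'ed feature map (toy single channel, H x W)
    gamma_p -- morphed scale matrix gamma' produced by the AMM module
    beta_p  -- morphed shift matrix beta' produced by the AMM module
    """
    assert x.shape == gamma_p.shape == beta_p.shape
    return x * gamma_p + beta_p

# Toy 4x4 feature grid with uniform (hypothetical) makeup parameters.
H, W = 4, 4
x = np.ones((H, W))             # stand-in for makeup-free features
gamma_p = np.full((H, W), 0.5)  # assumed scale values
beta_p = np.full((H, W), 0.2)   # assumed shift values

y = apply_makeup(x, gamma_p, beta_p)
print(y[0, 0])  # 1 * 0.5 + 0.2 = 0.7
```

Because the operation is applied per pixel rather than globally, different face regions (lips, eyes, skin) can receive different makeup parameters.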
PSGAN can not only transfer makeup styles from reference images whose poses and facial expressions differ from the source image, it can also produce partial and interpolated makeup styles from multiple reference images.
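Because the makeup style is represented as parameter matrices, interpolating between two references reduces to blending their matrices before applying them. The sketch below is a hedged illustration of that idea under assumed shapes and a hypothetical `interpolate_makeup` helper, not code from the paper:

```python
import numpy as np

def interpolate_makeup(x, g1, b1, g2, b2, alpha):
    """Blend two reference styles by convexly combining their
    gamma/beta matrices with weight alpha in [0, 1], then apply
    the pixel-wise scale-and-shift to the source features x."""
    g = alpha * g1 + (1 - alpha) * g2
    b = alpha * b1 + (1 - alpha) * b2
    return x * g + b

# Toy example: two hypothetical reference styles on a 2x2 grid.
x = np.ones((2, 2))
g1, b1 = np.full((2, 2), 1.0), np.full((2, 2), 0.0)  # style A
g2, b2 = np.full((2, 2), 0.0), np.full((2, 2), 1.0)  # style B

halfway = interpolate_makeup(x, g1, b1, g2, b2, alpha=0.5)
print(halfway[0, 0])  # 0.5 * 1 + 0.5 * 1 = 1.0
```

Partial transfer works analogously: a face-parsing mask can select which reference's matrices apply to which region (e.g. lips from one image, eye shadow from another).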
The researchers trained and tested PSGAN using the MT (Makeup Transfer) dataset, which contains 1,115 source images and 2,719 reference images. They also conducted a user study on Amazon Mechanical Turk (AMT) for quantitative evaluation, using 20 source images and 20 reference images randomly selected from the MT test set and MT-wild test set. The PSGAN output images were ranked dramatically higher than those produced by other makeup transfer models (BGAN, DIA and CGAN).
It’s believed virtual makeup effect generation can reduce the need for customers to visit the makeup counter at brick-and-mortar stores in order to try new cosmetics, and may encourage users to experiment with new looks and new products.
The paper’s authors are from Beihang University, the Chinese Academy of Sciences, the National University of Singapore, and promising Chinese AI unicorn YITU Tech. Shuicheng Yan, one of the authors, was Vice President of leading Chinese software company Qihoo 360 before joining YITU Tech as CTO.
The paper PSGAN: Pose-Robust Spatial-Aware GAN for Customizable Makeup Transfer is on arXiv.
Author: Victor Lu | Editor: Michael Sarazen; Tony Peng