DeepFaceDrawing Generates Photorealistic Portraits from Freehand Sketches

A team of researchers from the Chinese Academy of Sciences and the City University of Hong Kong has introduced a local-to-global approach that can generate lifelike human portraits from relatively rudimentary sketches.

Recent deep image-to-image translation techniques have enabled the rapid generation of human face images from sketches, but these methods tend to overfit to their input sketches. They therefore achieve their most realistic results only when the source drawings exhibit a high level of artistry or are accompanied by edge maps.

Most deep learning based solutions for sketch-to-image translation take input sketches as fixed, ‘hard’ constraints and then attempt to reconstruct the missing texture or shading information between strokes. The key idea behind the new approach is instead to implicitly learn a space of plausible face sketches from real face sketch images and to find the point in this space that best approximates the input sketch. Because input sketches are treated more as ‘soft’ constraints that guide image synthesis, the method can produce high-quality, plausible face images even from rough and/or incomplete inputs.
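To make the idea concrete, here is a minimal sketch of how such a projection onto a learned sketch space could work, assuming a pre-trained component encoder has already produced feature vectors for the training sketches. The function name, the choice of K, and the soft-constrained least-squares weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def project_to_sketch_space(query_feat: np.ndarray,
                            train_feats: np.ndarray,
                            k: int = 10) -> np.ndarray:
    """Replace a rough input's feature vector with a nearby point in the
    space spanned by features of real training sketches (a 'soft' constraint).

    query_feat:  (D,) feature of the input sketch component (assumed encoder).
    train_feats: (N, D) features of training sketch components.
    """
    # Find the K training feature vectors closest to the query.
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    neighbours = train_feats[np.argsort(dists)[:k]]      # (k, D)

    # Solve for interpolation weights w that best reconstruct the query
    # from its neighbours, with a soft constraint that the weights sum to 1.
    A = np.vstack([neighbours.T, np.ones((1, k))])       # (D+1, k)
    b = np.concatenate([query_feat, [1.0]])              # (D+1,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)

    # The projected feature is a weighted combination of neighbours:
    # a plausible point in the learned sketch space near the rough input.
    return w @ neighbours
```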

Illustration of the model’s deep learning framework architecture

The system consists of three main modules — CE (Component Embedding), FM (Feature Mapping), and IS (Image Synthesis). The CE module adopts an auto-encoder architecture and separately learns five feature descriptors — left-eye, right-eye, nose, mouth, and remainder — from the face sketch data. The FM and IS modules together form another deep learning sub-network for conditional image generation, and map component feature vectors to realistic images.
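As a rough illustration of how these three modules could fit together, the following is a hypothetical PyTorch sketch. Layer choices, feature dimensions, and the simple channel-wise stacking of component feature maps are assumptions made for brevity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "remainder"]

class ComponentEncoder(nn.Module):
    """CE: encodes one cropped sketch component into a feature vector."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.net(patch)              # (B, feat_dim)

class FeatureMapper(nn.Module):
    """FM: decodes a component feature vector into a spatial feature map."""
    def __init__(self, feat_dim: int = 512, map_ch: int = 32, size: int = 64):
        super().__init__()
        self.map_ch, self.size = map_ch, size
        self.fc = nn.Linear(feat_dim, map_ch * size * size)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.fc(feat).view(-1, self.map_ch, self.size, self.size)

class ImageSynthesizer(nn.Module):
    """IS: conditional generator turning assembled feature maps into a photo."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, maps: torch.Tensor) -> torch.Tensor:
        return self.net(maps)

encoders = {c: ComponentEncoder() for c in COMPONENTS}
mappers = {c: FeatureMapper() for c in COMPONENTS}
synth = ImageSynthesizer(in_ch=32 * len(COMPONENTS))

def generate(patches: dict) -> torch.Tensor:
    """patches maps each component name to a (B, 1, H, W) sketch crop."""
    # Encode each component, map it to a feature map, and stack the five
    # maps channel-wise before synthesis (a simplification of the paper's
    # spatial assembly of component features).
    maps = torch.cat([mappers[c](encoders[c](patches[c]))
                      for c in COMPONENTS], dim=1)
    return synth(maps)                      # (B, 3, 128, 128)
```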

The researchers also provide a shadow-guided interface, built on the CE module, that makes it easier for users to refine their input sketches. The system produces high-quality, realistic face images at a resolution of 512 × 512 pixels that faithfully reflect the input sketches.

Both qualitative and quantitative evaluations show that the method produces visually more pleasing face images than existing approaches, the researchers report. A user study also confirmed the system's usability and expressiveness.

The researchers say their tool is easy to use, even for non-artists, while still supporting fine-grained control of shape details. They plan to release the source code soon.

The paper DeepFaceDrawing: Deep Generation of Face Images from Sketches has been accepted to SIGGRAPH 2020 and is available on arXiv.


Journalist: Yuan Yuan | Editor: Michael Sarazen
