
Microsoft & Peking University FaceShifter: High Fidelity, Occlusion Aware Face Swapping

A new study from Peking University and Microsoft Research Asia proposes a novel two-phase framework, FaceShifter, that aims for high-fidelity and occlusion-aware face exchange.

Face swapping technologies are something of a double-edged sword in AI research. The ability to realistically switch and manipulate faces presents dangers for misuse in identity theft, fake news and other scenarios; but it also opens widespread opportunities in the billion-dollar film, television and computer game industries. Face-swapping's kinder side has made the tech a popular new tool in the visual and graphic arts communities.

One of the current challenges in SOTA face swapping is achieving both realism and high fidelity, particularly how to extract and adaptively recombine the identity of the source face with the attributes of the target image. FaceShifter was designed to tackle this challenge directly.

Compared to existing face swapping methods, FaceShifter uses much more information from the target image. The model fully and adaptively utilizes and integrates target attributes to generate exchanged faces with high fidelity in the first processing stage.

The researchers proposed a new attribute encoder that derives multi-level target face attributes, and a generator with Adaptive Attentional Denormalization (AAD) layers that adaptively integrates those attributes with the source identity features. They also added a second stage that addresses the challenging face occlusion problem with a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net), which recovers anomalous regions in a self-supervised manner without any manual annotation.
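The core idea of an AAD layer, as described in the paper, is to normalize incoming activations and then blend two conditional modulations, one driven by the target attributes and one by the source identity, through a learned per-location attention mask. The sketch below illustrates that blending mechanism in plain NumPy; the random projections and the mask are stand-ins for the learned convolutional and fully connected layers of the real model, so this is a conceptual illustration, not the authors' implementation.

```python
import numpy as np

def aad_layer(h, z_att, z_id, rng):
    """Simplified sketch of one Adaptive Attentional Denormalization (AAD) step.

    h     : feature map, shape (C, H, W)
    z_att : attribute embedding, here already shaped like h for simplicity
    z_id  : identity embedding, shape (C,)
    All "learned" projections are replaced by identity/random stand-ins.
    """
    # Instance-normalize the incoming activations per channel.
    mean = h.mean(axis=(1, 2), keepdims=True)
    std = h.std(axis=(1, 2), keepdims=True) + 1e-5
    h_norm = (h - mean) / std

    # Attribute branch: spatial modulation derived from the attribute embedding
    # (stand-in for two learned convolutions producing gamma and beta).
    a = z_att * h_norm + z_att

    # Identity branch: channel-wise modulation derived from the identity embedding
    # (stand-in for learned fully connected layers producing gamma and beta).
    gamma_id = z_id[:, None, None]
    beta_id = z_id[:, None, None]
    i = gamma_id * h_norm + beta_id

    # Learned attention mask in [0, 1] decides, per spatial location, whether
    # the identity or the attribute branch dominates (stand-in: random + sigmoid).
    mask = 1.0 / (1.0 + np.exp(-rng.standard_normal(h.shape[1:])))
    return (1.0 - mask) * a + mask * i

# Toy usage: blend random "identity" and "attribute" signals on a 4x8x8 feature map.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8, 8))
z_att = rng.standard_normal((4, 8, 8))
z_id = rng.standard_normal(4)
out = aad_layer(h, z_att, z_id, rng)
print(out.shape)  # (4, 8, 8)
```

The attention mask is what lets the generator keep identity-critical regions (eyes, nose) driven by the source while background, lighting and pose remain driven by the target attributes.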

Using natural face images from the FaceForensics++ test dataset, the researchers compared FaceShifter with other face swapping methods: FaceSwap, Nirkin et al., DeepFakes, IPGAN, and the latest FSGAN. Human evaluators were asked to choose i) the result whose identity was most similar to the source face; ii) the result whose head pose, facial expression and scene lighting most closely matched the target image; and iii) the most realistic result. FaceShifter's outputs were judged not only more perceptually appealing, but also better at preserving the source identity.

The researchers noted that all the other face swapping tools they tested first synthesize the inner facial area and then blend it with the contour of the target face, which can create inconsistent and unnatural appearances.

Moreover, the faces generated by the other methods ignore the shape of the original face and fail to account for key elements of the target image such as lighting and resolution. IPGAN, for example, uses a single-level attribute representation, which degrades resolution and fails to accurately preserve target facial expressions such as closed eyes. The team says FaceShifter solves all of these problems.

Comparison with FaceSwap, Nirkin et al., DeepFakes and IPGAN on FaceForensics++ face images.
Comparison with FSGAN.

The paper FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping is on arXiv.


Author: Herin Zhao | Editor: Michael Sarazen
