Shake Your Booty: AI Deepfakes Dance Moves From a Single Picture

Do you have two left feet? Do you avoid the dance floor out of fear of embarrassment? If you’ve ever secretly wished you could move your body like Joaquín Cortés — well, at least in a video — a new AI-powered 3D body mesh recovery module called Liquid Warping GAN can give you a leg up. The method, proposed in a new paper from ShanghaiTech University and Tencent AI Lab that’s been accepted by ICCV 2019, requires only a single photo and a video clip of the target dance.

Current human image synthesis approaches struggle with, for example, preserving clothing across different styles, colours and textures; handling the large spatial and geometric changes of the human body; and supporting multiple source inputs.

Liquid Warping GAN addresses these challenges with three components: a body mesh recovery module, a flow composition module, and a GAN module with a Liquid Warping Block (LWB). Unlike previous human image synthesis methods, Liquid Warping GAN can not only model joint locations and rotations but also characterize a personalized body shape, all from a single picture and a video clip as input.
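To make the data flow concrete, here is a minimal PyTorch sketch of that three-stage pipeline. It is illustrative only: the class names, layer sizes and single-scale warp below are stand-ins, not the authors' released code, in which each stage is a full network and the Liquid Warping Block warps source features at multiple scales inside the generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the three stages; the real networks are far larger.
class MeshRecovery(nn.Module):
    """Stands in for the 3D body mesh recovery module."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, img):
        # Returns a dummy "mesh" feature map; the real module regresses
        # 3D body shape and pose parameters and renders the mesh.
        return self.encoder(img)

class FlowComposer(nn.Module):
    """Composes a dense flow field from the source mesh to the reference mesh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(16, 2, 3, padding=1)

    def forward(self, src_mesh, ref_mesh):
        return self.net(torch.cat([src_mesh, ref_mesh], dim=1))

class Generator(nn.Module):
    """GAN generator; a Liquid Warping Block warps source features by the
    flow before decoding (here a single warp at image resolution)."""
    def __init__(self):
        super().__init__()
        self.decode = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, source_img, flow):
        # Warp source pixels into the reference pose via the flow field.
        b, _, h, w = source_img.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        warped = F.grid_sample(source_img, grid, align_corners=True)
        return self.decode(warped)

# One synthesis step: single source photo + one frame of the target dance.
mesh_net, flow_net, gen = MeshRecovery(), FlowComposer(), Generator()
source = torch.rand(1, 3, 64, 64)     # the single input photo
reference = torch.rand(1, 3, 64, 64)  # one frame of the target dance video
flow = flow_net(mesh_net(source), mesh_net(reference))
frame = gen(source, flow)
print(frame.shape)  # torch.Size([1, 3, 64, 64])
```

Running this over every frame of the reference video would yield the output clip; because the warp is driven by recovered 3D meshes rather than 2D keypoints alone, the method can preserve the source subject's body shape while imitating the reference motion.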

Liquid Warping GAN’s human motion imitation, appearance transfer and novel view synthesis each involve (left to right) a source image, a reference condition such as an image or novel camera view, and the synthesized results.

To evaluate Liquid Warping GAN’s performance, the researchers built a new dataset called Impersonator (iPER) by having 30 subjects with diverse body shapes, heights, genders and clothing perform random movements, yielding 206 video sequences and 241,564 frames. Trained on the iPER dataset, Liquid Warping GAN outperformed existing motion imitation methods such as PG2, DSC and SHUP.

In August a team from UC Berkeley published similar research in the paper Everybody Dance Now. They used a video-to-video translation approach with pose as an intermediate representation, and also released an open-source dataset of videos for training and motion transfer.

The paper Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis is on arXiv. The related PyTorch implementation can be found on GitHub.


Author: Yuqing Li | Editor: Michael Sarazen
