Just two days after a story on Motherboard went viral, developers of the “DeepNude” AI-powered software that enables users to virtually disrobe images of women have announced they are shutting down the website and the free and premium versions of the app they launched in March.
“The world is not yet ready for DeepNude,” tweeted the team, saying they “greatly underestimated the request(s)” that flooded the project website after mainstream media picked up the story: “if 500,000 people use it, the probability that people will misuse it is too high.”
So how did the app work? Users could input images of people wearing clothes, and the system would swap the clothes for naked flesh, including breasts and a vulva. Whether the input image showed a woman or a man, it got the same female body parts. The downloadable DeepNude desktop app ran on Windows 10 and Linux and worked with both CPU and GPU. The free version added a large watermark, while the US$50 premium version put a smaller “Fake” stamp in the corner of its generated images.
“DeepFake” is a broad term denoting any image synthesis technique based on machine learning — more specifically, a generative adversarial network (GAN). DeepFakes first caught public attention two years ago when a Reddit user leveraged the technique to produce highly convincing video clips that swapped celebrities’ faces with those of porn stars in the act.
The core technique used in DeepNude is pix2pix, a GAN variant first proposed in 2017 by the Berkeley AI Research lab as a general-purpose solution for image-to-image translation, Motherboard reported. Pix2pix not only uses a conditional GAN to learn the mapping from an input image to an output image, but also learns a loss function to train this mapping.
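The pix2pix generator objective combines an adversarial term (fooling a conditional discriminator) with an L1 term that keeps outputs close to the ground-truth image. Below is a minimal NumPy sketch of that objective; the function name is illustrative, and the default weight of 100 follows the setting reported in the pix2pix paper:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, gen_out, target, lam=100.0):
    """Sketch of the pix2pix generator objective:
    a non-saturating adversarial term on the conditional discriminator's
    score for the generated image, plus a lambda-weighted L1 term that
    pulls the output toward the ground-truth target image."""
    eps = 1e-12  # numerical guard for log(0)
    adv = -np.mean(np.log(d_fake + eps))    # encourage D to score fakes as real
    l1 = np.mean(np.abs(gen_out - target))  # pixel-level fidelity to the target
    return adv + lam * l1
```

In training, the adversarial term pushes outputs toward the distribution of real images while the L1 term anchors them to the specific paired target, which is why pix2pix requires aligned input/output image pairs.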
Pix2pix has spawned a wide range of downstream applications, such as photo generation, image colorization, and video translation. In the 2018 paper “Seamless Nudity Censorship: An Image-to-Image Translation Approach based on Adversarial Training,” a group of researchers from Brazil used CycleGAN, an improved GAN algorithm built on the pix2pix architecture that can learn from unpaired images, to mask sensitive regions in semi-nude images — generating virtual bikinis on specific body parts — while preserving image semantics. DeepNude basically did something like that, but the other way round.
GAN-based image generation techniques are still at a research stage. In a recent DeepMind paper, researchers identify a number of problems plaguing GANs, including mode collapse (the generator produces only a limited variety of samples) and lack of diversity (generated samples do not fully capture the diversity of the true data distribution). The quality of GAN-generated images can also be diminished if the data distribution of the target images is very different from that of the training data.
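Mode collapse can be illustrated with a crude diversity proxy: the mean pairwise distance between generated samples. The function and data below are hypothetical, not from the DeepMind paper, and serve only to show how a collapsed generator's near-identical outputs score close to zero on such a measure:

```python
import numpy as np

def sample_diversity(samples):
    """Rough diversity proxy: mean pairwise L2 distance between
    generated samples. Near-identical outputs (mode collapse)
    yield a score close to zero."""
    n = len(samples)
    dists = [np.linalg.norm(samples[i] - samples[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
diverse = rng.normal(size=(8, 16))                 # varied "generated" samples
collapsed = np.tile(rng.normal(size=16), (8, 1))   # one mode repeated 8 times
```

Here `sample_diversity(collapsed)` is zero while `sample_diversity(diverse)` is not, mirroring the gap between a collapsed generator and one that covers the data distribution.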
AI-powered techniques can also add realistically synchronized speech to such generated videos. Earlier this month, the US House Intelligence Committee held a hearing to investigate the national security risks of DeepFake technology, particularly with regard to fake news and election campaigns. The committee concluded that DeepFakes can cause profound economic, social and psychological damage and undermine the public’s ability to discern real from fake.
It’s been said that technology is a magnifier: it changes while humans remain the same. In June 1969, American inventor Harold N. Braunhut patented his “X-Ray Specs” — an “optical means for simulating an X-ray image.” The novelty glasses were advertised in comic books, with a gawking man seemingly viewing a woman’s silhouette through her gown. The spectacles’ appearance uncannily matches those sported by the DeepNude Twitter account’s avatar.
If the DeepNude creators intended their product as some sort of modern tribute to the ’60s novelty item, someone forgot to tell them that the world has changed over the last 50 years.
Author: Tony Peng & Yuqing Li | Editor: Michael Sarazen