
NSFW Dataset Removes Humans From Content Review

The proliferation of social media in our daily lives has profoundly changed the way we work and play with others. It has also created an entirely new job: thousands of people worldwide now work on Google, Facebook and Twitter “Community Operations Teams.” Whenever a user flags content as offensive, it’s sent to these teams for review.

Community Operations Teams, however, don’t get a lot of love from Internet users, who often criticize their decisions, explanations, and work speed. There are also issues with the job itself: constant exposure to NSFW content, some of it extremely disturbing, has even been linked to post-traumatic stress disorder.

Now, Montréal-based data scientist Alexander Kim (GitHub name “Alexkimxyz”) has come up with a way to take humans out of the content review loop. “NSFW Data Scrapper” is a set of scripts that enables the collection of hundreds of thousands of images which developers can use to train convolutional neural network (CNN) image classifiers. The NSFW dataset contains over 220,000 images in five “loosely defined” categories:

  • porn – pornography images
  • hentai – hentai images, but also includes pornographic drawings
  • sexy – sexually explicit images, but not pornography. Think nude photos, Playboy, bikinis, beach volleyball, etc.
  • neutral – safe for work neutral images of everyday things and people
  • drawings – safe for work drawings (including anime)

Each of the images in the dataset comes with an accessible URL, which can be easily read and downloaded across different system interfaces and toolkits (a minimal Python download sketch follows the prerequisites list below). Kim has also developed scripts for users of the Ubuntu 16.04 Linux distribution to read and download the image dataset; the prerequisites are:

  • Python3 environment: conda env create -f environment.yml
  • Java runtime environment:
    • Ubuntu Linux: sudo apt-get install default-jre
  • Linux command line tools: wget, convert (ImageMagick suite of tools), rsync, shuf
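To illustrate how such URL lists can be consumed outside Kim’s shell pipeline, here is a minimal Python sketch that downloads every image listed in a plain-text URL file. The paths raw_data/porn/urls_porn.txt and images/porn are hypothetical placeholders, not the repository’s guaranteed layout:

    import os
    import urllib.request

    def download_category(url_file, out_dir):
        # Download every URL in url_file (one per line) into out_dir.
        os.makedirs(out_dir, exist_ok=True)
        with open(url_file) as f:
            urls = [line.strip() for line in f if line.strip()]
        for i, url in enumerate(urls):
            try:
                # Name files by index; real-world use may need extension detection.
                urllib.request.urlretrieve(url, os.path.join(out_dir, f"{i}.jpg"))
            except Exception as exc:
                # Dead links are common in scraped URL lists, so just skip them.
                print(f"skipped {url}: {exc}")

    # Hypothetical paths for one category:
    download_category("raw_data/porn/urls_porn.txt", "images/porn")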

The scripts, as described in the project documentation:

  • 1_get_urls.sh – iterates through the text files under scripts/source_urls, downloading URLs of images for each of the five categories above. The Ripme application performs all the heavy lifting. The source URLs are mostly links to various subreddits, but could be any website that Ripme supports. Note from the README: “I already ran this script for you, and its outputs are located in raw_data directory. No need to rerun unless you edit files under scripts/source_urls.”
  • 2_download_from_urls.sh – downloads the actual images for the URLs found in the text files in the raw_data directory
  • 3_optional_download_drawings.sh – (optional) downloads SFW anime images from the Danbooru2018 database
  • 4_optional_download_neutral.sh – (optional) downloads SFW neutral images from the Caltech256 dataset
  • 5_create_train.sh – creates the data/train directory and copies all *.jpg and *.jpeg files into it from raw_data. Also removes corrupted images
  • 6_create_test.sh – creates the data/test directory and moves N=2000 random files per class from data/train to data/test (change this number inside the script for a different train/test split). Alternatively, you can run it multiple times; each run moves N images per class from data/train to data/test
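That last step is simple enough to mirror in a few lines of Python. The following sketch reproduces the described split logic (move N random files per class from data/train to data/test); it is an illustration of the behavior, not Kim’s actual shell script:

    import os
    import random
    import shutil

    N = 2000  # files to move per class, matching the script's default
    classes = ["porn", "hentai", "sexy", "neutral", "drawings"]

    for cls in classes:
        src = os.path.join("data/train", cls)
        dst = os.path.join("data/test", cls)
        os.makedirs(dst, exist_ok=True)
        files = os.listdir(src)
        random.shuffle(files)  # stands in for the shell tool `shuf`
        for name in files[:N]:
            shutil.move(os.path.join(src, name), os.path.join(dst, name))

Running it a second time moves another N files per class, matching the script’s repeatable behavior.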

Kim used a simple CNN to achieve a 91 percent accuracy rate on the classification task, with the following confusion matrix:
[Figure: confusion matrix for the five categories]
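The article does not specify Kim’s architecture or framework, but a baseline along these lines can be sketched briefly. This Keras example trains a small five-class CNN on the directory layout produced by the scripts above; the input size, layer sizes, and epoch count are assumptions, not Kim’s settings:

    import tensorflow as tf

    # Load the five-class folders created by 5_create_train.sh / 6_create_test.sh.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=(224, 224), batch_size=32)
    test_ds = tf.keras.utils.image_dataset_from_directory(
        "data/test", image_size=(224, 224), batch_size=32)

    # A deliberately simple CNN baseline.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),  # five categories
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=test_ds, epochs=5)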
Current applications that leverage AI to review flagged content include Chinese startup TupuTech’s algorithm, which has achieved a 99 percent success rate in identifying pornographic images. But as microblogging website Tumblr’s ongoing battle against nudity shows, content review tech remains far from perfect.

The NSFW Data Scrapper release will surely accelerate the deployment of AI-based image review algorithms in real-life applications, and the tech’s scope can also be expected to expand beyond porn. It’s very possible that all those Community Operations Team jobs will disappear just as quickly as they appeared.

More information on the NSFW Data Scrapper is available on the project’s GitHub page.


Author: Robert Tian | Editor: Michael Sarazen
