AI Computer Vision & Graphics Research

3DPeople | First Dataset to Map Clothing Geometry

A team of researchers from Institut de Robòtica i Informàtica Industrial and Harvard University recently introduced 3DPeople, a large-scale comprehensive dataset with specific geometric shapes of clothes that is suitable for many computer vision tasks involving clothed humans.

Recent progress in the field of 3D human shape estimation enables the efficient and accurate modeling of naked body shapes, but falls short when tasked with capturing the geometry of clothes. 3DPeople is designed to close this gap.

In addition to the new dataset, researchers also developed a novel shape parameterization algorithm and a multi-resolution end-to-end deep generative network for predicting dressed body shape.

3DPeople contains 2.5 million frames of photo-realistic images comprising 80 subjects (40 male and 40 female) with a variety of body shapes, wearing various clothing and performing 70 different actions. Researchers used four cameras to capture each subject's action sequence. In addition to providing textured 3D meshes for the clothing and bodies, researchers also annotated the dataset with RGB images, 3D skeletons, depth maps, optical flow, and semantic information (body-part and cloth labels).
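The annotations can be pictured as one record per frame. The sketch below is a hypothetical schema assembled from the modalities listed above; the field names, file paths, and types are assumptions for illustration, not the dataset's published file layout.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical per-frame record; names and paths are illustrative
# assumptions, not the dataset's actual schema.
@dataclass
class FrameAnnotation:
    rgb_path: str            # photo-realistic RGB image
    depth_path: str          # per-pixel depth map
    flow_path: str           # optical flow field
    segmentation_path: str   # body-part and cloth labels
    skeleton_3d: List[Tuple[float, float, float]]  # 3D joint positions
    camera_id: int           # one of the 4 capture viewpoints
    subject_id: int          # one of the 80 subjects
    action_id: int           # one of the 70 actions

frame = FrameAnnotation(
    rgb_path="rgb/000001.png",
    depth_path="depth/000001.png",
    flow_path="flow/000001.flo",
    segmentation_path="seg/000001.png",
    skeleton_3d=[(0.0, 0.0, 0.0)],
    camera_id=0,
    subject_id=12,
    action_id=3,
)
```

Grouping the five modalities per frame like this makes it easy to sample aligned inputs and targets for the tasks the dataset targets.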

The researchers used 2D geometry images to represent 3D shapes, constructed with a new area-preserving parameterization algorithm based on optimal mass transportation. Existing spherical parameterizations tend to produce incomplete geometry images by shrinking the elongated parts of the human body, such as arms, hands and legs; the proposed approach is designed to mitigate this issue.
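To see what "area-preserving" buys you, consider the per-triangle area ratio between the 3D mesh and its 2D parameterization. An area-preserving map keeps this ratio close to 1 everywhere, whereas a spherical map lets it collapse toward 0 on thin limbs, which is exactly the shrinkage described above. Below is a minimal sketch of that diagnostic on toy geometry; it is not the authors' optimal-transport solver.

```python
import math

def triangle_area(p, q, r):
    """Area of a triangle (3D points) via the cross product."""
    ux, uy, uz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    vx, vy, vz = r[0] - p[0], r[1] - p[1], r[2] - p[2]
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def area_distortion(mesh_tri, mapped_tri):
    """Mapped area over original area; an area-preserving
    parameterization keeps this near 1 for every triangle."""
    return triangle_area(*mapped_tri) / triangle_area(*mesh_tri)

# Toy example: a map that halves each edge of a limb triangle
# quarters its area (distortion 0.25), losing surface detail.
limb = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
shrunk = ((0, 0, 0), (0.5, 0, 0), (0, 0.5, 0))
```

An optimal-transport-based parameterization can be viewed as driving this per-triangle distortion toward 1 globally, so no body part is starved of pixels in the geometry image.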

Geometry image representation of the reference mesh.

Researchers have also proposed GimNet, a multi-resolution deep generative network that can estimate the shape of a human body and the clothing it is wearing from a single image in an end-to-end manner without requiring a parametric model.
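GimNet's multi-resolution idea can be caricatured as coarse-to-fine refinement of the geometry image: predict a low-resolution version, then repeatedly upsample it and add a finer correction. In the sketch below the residual grids stand in for the outputs of learned network branches; this structure is an assumption for illustration only, not the published architecture.

```python
def upsample2x(gim):
    """Nearest-neighbour 2x upsampling of a 2D grid (list of lists)."""
    out = []
    for row in gim:
        wide = [v for v in row for _ in (0, 1)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                   # duplicate rows
    return out

def coarse_to_fine(coarse, residuals):
    """Start from a coarse geometry image; at each finer scale,
    upsample and add a correction (here a plain grid, standing in
    for a learned residual)."""
    gim = coarse
    for res in residuals:
        up = upsample2x(gim)
        gim = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, res)]
    return gim

# 2x2 coarse prediction refined through 4x4 and 8x8 stages.
coarse = [[0.0, 0.0], [0.0, 0.0]]
residuals = [[[1.0] * 4 for _ in range(4)], [[1.0] * 8 for _ in range(8)]]
refined = coarse_to_fine(coarse, residuals)
```

The appeal of such a scheme is that coarse scales fix the global body pose cheaply, leaving the fine scales to spend capacity on clothing wrinkles and other high-frequency surface detail.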

GimNet overview

The researchers say that although their current results are very promising, they plan to explore new directions, such as extending the approach to video, devising new regularization schemes on the geometry images, and combining segmentation with 3D reconstruction.

The complete dataset is available for download, and project researcher Albert Pumarola says the 3DPeople team is eager to see what exciting ideas and applications the community may come up with.

For more information about 3DPeople, please check out the project page.


Author: Herin Zhao | Editor: Michael Sarazen
