AI Technology

Facebook, MIT & UW Introduce DeepSDF AI for 3D Shape Representation

Researchers from the University of Washington, MIT, and Facebook Reality Labs recently proposed “DeepSDF,” a novel deep representation method for 3D shape generation.

Shape recognition and generation have a long history in computer vision, where significant progress has been made thanks to advances in machine learning over the past decades. However, synthesizing or reconstructing high-fidelity 3D objects remains challenging due to the high complexity of their geometry.

DeepSDF is the first attempt to represent the surface of a 3D shape by a continuous volumetric field based on signed distance functions (SDFs), and the first to introduce an auto-decoder generative model that produces high-quality 3D shapes with minimal memory.

Example latent space interpolation using DeepSDF. (Images rendered through raycasting)

An SDF is a continuous function that assigns every spatial point a signed distance to a given shape’s boundary: negative if the point is inside, positive if it is outside, and zero if it lies on the surface. Consequently, the surface of a shape can be implicitly represented by extracting the isosurface SDF = 0.
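For intuition, here is a minimal sketch of this sign convention using an analytic SDF for a sphere (a toy stand-in for the learned function; the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside -- the convention described above."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # inside the unit sphere
                [1.0, 0.0, 0.0],   # exactly on the surface
                [2.0, 0.0, 0.0]])  # outside
d = sphere_sdf(pts, np.array([0.0, 0.0, 0.0]), 1.0)
# d[0] < 0, d[1] == 0, d[2] > 0
```

The surface itself is the set of points where this function returns zero, which is what “extracting the isosurface SDF = 0” refers to.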

To express complex continuous surfaces at high fidelity, the DeepSDF researchers trained deep neural networks to accurately predict the signed distance value of 3D point samples at given query positions.
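Conceptually, the network is a function from a 3D query position to a scalar signed distance. The sketch below shows an untrained fully connected network of this form in NumPy; the layer sizes and the tanh output (which bounds predictions to [-1, 1]) are illustrative assumptions, not the paper’s exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_sdf(xyz, weights):
    """Forward pass of a small MLP mapping a 3D query point to a
    scalar signed-distance prediction (weights random, untrained)."""
    h = xyz
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)  # ReLU hidden layers
    W, b = weights[-1]
    return np.tanh(h @ W + b)           # bounded scalar output

# Hypothetical layer sizes: 3 -> 64 -> 64 -> 1
sizes = [3, 64, 64, 1]
weights = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]
pred = mlp_sdf(np.array([0.1, -0.2, 0.3]), weights)
```

Training would regress `pred` against ground-truth signed distances at sampled query points, so that the learned function implicitly encodes the shape’s surface.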

The authors learned shape features in a latent space by training decoder-only networks, where a latent vector for each training shape was randomly initialized and then optimized through back-propagation. Compared with traditional auto-encoders, which pair an encoder with a decoder, such decoder-only networks make the model more compact while maintaining good performance.
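The key idea of the auto-decoder is that there is no encoder: the latent code itself is a free variable updated by gradient descent alongside (or, at inference time, instead of) the network weights. A minimal sketch with a fixed linear “decoder” makes the mechanics concrete (the linear decoder, sizes, and learning rate are illustrative assumptions, not the paper’s network):

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(8, 4))   # fixed "decoder" weights (stand-in for the network)
target = rng.normal(size=8)   # observed SDF samples for one shape
z = np.zeros(4)               # latent code: randomly/zero initialized, not encoded

def loss(z):
    return float(np.sum((W @ z - target) ** 2))

losses = [loss(z)]
for _ in range(200):
    grad = 2.0 * W.T @ (W @ z - target)  # gradient of the loss w.r.t. the code
    z -= 0.01 * grad                      # optimize the code, not an encoder
    losses.append(loss(z))
```

In DeepSDF the decoder is a deep network rather than a matrix, but the pattern is the same: back-propagation drives each shape’s latent vector toward a code that reproduces that shape’s signed distance samples.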

Models were trained on the synthetic object dataset “ShapeNet,” which provides complete 3D shape meshes. The authors normalized each mesh to a unit sphere and sampled signed distance values for 500,000 spatial points per mesh. To capture greater geometric detail for better model training, sampling was most aggressive near the surface.
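A simple way to bias sampling toward the surface is to perturb random surface points with mostly small Gaussian noise. The sketch below illustrates this on a toy unit sphere; the noise scales and the 90/10 split are illustrative assumptions, not the paper’s exact sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_near_surface(surface_pts, n, sigma_near=0.01, sigma_far=0.1):
    """Draw n spatial samples biased toward the surface: perturb random
    surface points with small Gaussian noise, plus a minority of
    larger-noise samples to cover space farther from the boundary."""
    base = surface_pts[rng.integers(0, len(surface_pts), size=n)]
    # Illustrative split: 90% tight samples, 10% looser ones.
    sigmas = np.where(rng.random(n) < 0.9, sigma_near, sigma_far)
    return base + rng.normal(size=(n, 3)) * sigmas[:, None]

# Toy "mesh": 1,000 points on a unit sphere (stand-in for ShapeNet meshes)
u = rng.normal(size=(1000, 3))
surface = u / np.linalg.norm(u, axis=1, keepdims=True)
samples = sample_near_surface(surface, 5000)
```

Because most samples land very close to the boundary, the regression target concentrates supervision where the SDF’s zero isosurface — and hence the shape’s detail — actually lives.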

DeepSDF outperformed state-of-the-art methods such as AtlasNet and OGN on both known and unseen 3D shape representation tasks in terms of generalization and detail. Both qualitative and quantitative results showed DeepSDF’s ability to produce a wide class of shapes (chair, plane, table, etc.) with high accuracy and precision, and with smooth, complete, defect-free surface detail.

The paper DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation is on arXiv. A supplementary video with shape completion and latent interpolation examples is available on YouTube.

Source: Synced China


Localization: Tingting Cao | Editor: Michael Sarazen
