Tag: 3D Reconstruction

AI Computer Vision & Graphics Machine Learning & Data Science Research

UC Berkeley’s Instruct-NeRF2NeRF Edits 3D Scenes With Text Instructions

In the new paper Instruct-NeRF2NeRF: Editing 3D Scenes With Instructions, a UC Berkeley research team presents an approach for editing 3D NeRF scenes using natural language text instructions. The proposed method can edit large-scale, real-world 3D scenes with improved ease of use and realism.


Oxford U Presents RealFusion: 360° Reconstruction of Any Object from a Single Image

In the new paper RealFusion: 360° Reconstruction of Any Object from a Single Image, an Oxford University research team leverages a diffusion model to generate 360° reconstructions of objects from a single image. Their RealFusion approach achieves state-of-the-art performance on monocular 3D reconstruction benchmarks.