RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models

CVPR 2023

¹South China University of Technology

²Technical University of Munich, ³Peng Cheng Laboratory

Abstract

We address the challenge of recovering the underlying geometry and colors of a scene from a sparse set of RGBD view observations. In this work, we present a new solution termed RGBD2 that sequentially generates novel RGBD views along a camera trajectory; the scene geometry is then simply the fusion of these views.

More specifically, we maintain an intermediate surface mesh that is used to render new RGBD views, each of which is subsequently completed by an inpainting network; the completed RGBD view is then back-projected as a partial surface and merged into the intermediate mesh.
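
To make the incremental procedure concrete, below is a minimal Python sketch of this render-inpaint-fuse loop. The helper callables (render_rgbd, inpaint_rgbd, back_project, fuse) are hypothetical placeholders standing in for the mesh renderer, the diffusion inpainter, back-projection, and mesh fusion; they are not the paper's actual implementation.

from typing import Callable, Iterable

def synthesize_scene(
    mesh,                       # intermediate surface mesh (grows over time)
    trajectory: Iterable,       # camera poses to visit in order
    render_rgbd: Callable,      # (mesh, camera) -> (rgbd, known_mask)
    inpaint_rgbd: Callable,     # (rgbd, known_mask) -> completed rgbd
    back_project: Callable,     # (rgbd, camera) -> partial surface
    fuse: Callable,             # (mesh, partial_surface) -> updated mesh
):
    """Sequentially grow the scene mesh, one camera view at a time."""
    for camera in trajectory:
        # Render the current (incomplete) mesh into an RGBD view; pixels
        # not covered by the mesh are holes (known_mask == 0 there).
        rgbd, known_mask = render_rgbd(mesh, camera)
        # Complete the holes with the RGBD diffusion inpainter, conditioned
        # on the known pixels so that views stay mutually consistent.
        rgbd = inpaint_rgbd(rgbd, known_mask)
        # Lift the completed view back to 3D and merge it into the mesh.
        mesh = fuse(mesh, back_project(rgbd, camera))
    return mesh

Because each new view is rendered from, and fused back into, the same intermediate mesh, the known pixels of every view agree with all previously generated content by construction.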

The use of an intermediate mesh and camera projection helps address the difficult problem of multi-view inconsistency. In practice, we implement the RGBD inpainting network as a versatile RGBD diffusion model, originally developed for 2D generative modeling, and modify its reverse diffusion process to enable inpainting.
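
The kind of modification involved can be sketched as follows, assuming a standard DDPM-style sampler: at every denoising step, the known (rendered) pixels are re-noised to the current timestep and written back into the sample, so the model only generates content in the missing regions. The names here (model, alphas_cumprod, ddpm_step) are illustrative assumptions, not the paper's exact implementation.

import torch

def ddpm_step(model, x, t, alphas_cumprod):
    """One standard DDPM ancestral sampling step from timestep t to t - 1."""
    a_bar_t = alphas_cumprod[t]
    a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    alpha_t = a_bar_t / a_bar_prev
    eps = model(x, t)  # network predicts the noise component of x
    mean = (x - (1 - alpha_t) / (1 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()
    if t > 0:
        sigma = ((1 - a_bar_prev) / (1 - a_bar_t) * (1 - alpha_t)).sqrt()
        mean = mean + sigma * torch.randn_like(x)
    return mean

def inpaint_reverse_diffusion(model, known_rgbd, known_mask, alphas_cumprod):
    """known_mask == 1 marks observed pixels; 0 marks holes to be generated."""
    x = torch.randn_like(known_rgbd)  # start from pure Gaussian noise
    for t in reversed(range(len(alphas_cumprod))):
        a_bar_t = alphas_cumprod[t]
        # Forward-noise the observation to the current timestep t.
        known_t = a_bar_t.sqrt() * known_rgbd \
            + (1 - a_bar_t).sqrt() * torch.randn_like(known_rgbd)
        # Keep the known pixels pinned to the observation; only the
        # holes evolve freely under the reverse process.
        x = known_mask * known_t + (1 - known_mask) * x
        x = ddpm_step(model, x, t, alphas_cumprod)
    # Final composite: observed pixels are returned exactly as given.
    return known_mask * known_rgbd + (1 - known_mask) * x

Pinning the known pixels at every step, rather than only at the end, lets the generated content in the holes adapt to the observations throughout sampling.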

We evaluate our approach on the task of 3D scene synthesis from sparse RGBD inputs; extensive experiments on the ScanNet dataset demonstrate the superiority of our approach over existing methods.

Introduction

Pipeline

Visualization

BibTeX

If you find our work useful, please consider citing our paper:
@InProceedings{Lei_2023_CVPR,
    author = {Lei, Jiabao and Tang, Jiapeng and Jia, Kui},
    title = {RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2023}
}