SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation
Alex Schwing* (UIUC), Liangyan Gui* (UIUC)
CVPR 2023
SDFusion is a diffusion-based 3D shape generator that enables a variety of applications. (top-left) SDFusion can generate 3D shapes conditioned on different input modalities, including partial shapes, images, and text. (bottom-left) SDFusion can even handle multiple conditioning modalities jointly while controlling the strength of each. (top-right) We showcase an application that leverages pretrained 2D models to texture 3D shapes generated by SDFusion. (bottom-right) We 3D-print shapes generated by SDFusion.
Abstract
In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes, and combinations of these, and further allows the user to adjust the strength of each input. At the core of our approach is an encoder-decoder that compresses 3D shapes into a compact latent representation, upon which a diffusion model is learned. To support a variety of multimodal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior work on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one swiss-army-knife tool, enabling the user to perform shape generation from incomplete shapes, images, and textual descriptions at the same time while specifying the relative weight for each input, facilitating interactivity. Although our approach is shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.
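The abstract mentions task-specific encoders with dropout, which is the standard way to train a conditional diffusion model for classifier-free guidance: each modality's embedding is occasionally replaced by a learned "null" embedding so the model also learns the unconditional case. A minimal sketch of that training-time trick, with illustrative names not taken from the SDFusion codebase:

```python
import random

def drop_conditions(cond_embs, null_embs, p_drop=0.1, rng=None):
    """Condition dropout for classifier-free guidance training (sketch).

    cond_embs: dict mapping modality name -> embedding (list of floats)
    null_embs: dict mapping modality name -> learned 'null' embedding
    p_drop:    probability of dropping each modality independently

    With some probability, each modality's embedding is swapped for its
    null embedding, so the model also learns unconditional denoising.
    """
    rng = rng or random.Random(0)
    out = {}
    for name, emb in cond_embs.items():
        if rng.random() < p_drop:
            out[name] = null_embs[name]  # train the unconditional branch
        else:
            out[name] = emb
    return out
```

At inference time, the same null embeddings let the model produce an unconditional prediction that each conditional prediction is contrasted against.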
Shape Completion
Given a partial shape, SDFusion can perform multimodal shape completion. The red cuboid indicates the missing region.
Text-guided 3D Generation
Given a text description, SDFusion can generate diverse, high-quality shapes.
Conditional Guidance for Shape Completion
Text-guided shape completion.
Given a partial shape, SDFusion uses text input to guide the shape completion process.
Image-guided shape completion.
Given a partial shape, SDFusion can also use an image as guidance for shape completion.
Single-view Reconstruction
Single-view reconstruction results and comparisons with the baselines (columns: Input, GT, Ours, AutoSDF, ResNet2Vox, ResNet2SDF, Pix2Vox).
Multi-condition: controlling the conditional strength between image and text
Given a partial shape, SDFusion can condition on multiple modalities, e.g., an image and text, and generate different results by adjusting the relative weights between them.
Text-guided Colorization
Given a shape generated by SDFusion and a text description, we adopt an off-the-shelf 2D diffusion model (inspired by DreamFusion) to texture the shape.
Unconditional Generation
We show unconditional generation results on BuildingNet.
We also demonstrate unconditional generation results on ShapeNet.
Overview of SDFusion
(left) To enable high-resolution generation, we first encode 3D shapes into a latent space, where a diffusion model is trained. Furthermore, to enable flexible conditional generation, we adopt task-specific encoders along with classifier-free guidance to support multimodal conditioning. (right) At inference time, we can control the importance of each conditioning modality.
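The inference-time control described above is commonly realized as multi-condition classifier-free guidance: the unconditional noise prediction is adjusted by each modality's conditional prediction, scaled by a per-modality weight. A minimal sketch of that combination rule, under the assumption that SDFusion follows this standard formulation (names are illustrative):

```python
def guided_noise(eps_uncond, eps_cond, weights):
    """Combine noise predictions with per-modality guidance weights (sketch).

    eps_uncond: unconditional model output (list of floats)
    eps_cond:   dict mapping modality name -> conditional model output
    weights:    dict mapping modality name -> guidance scale s_i

    Implements eps = eps_uncond + sum_i s_i * (eps_cond_i - eps_uncond),
    so raising a modality's weight increases its influence on the sample.
    """
    out = list(eps_uncond)
    for name, eps_c in eps_cond.items():
        s = weights[name]
        out = [o + s * (c - u) for o, c, u in zip(out, eps_c, eps_uncond)]
    return out
```

Setting one weight to zero removes that modality's influence entirely, which is how a single model can serve shape completion, image-conditioned, and text-conditioned generation alike.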
Citation
@inproceedings{cheng2023sdfusion,
  title     = {{SDFusion}: Multimodal 3D Shape Completion, Reconstruction, and Generation},
  author    = {Cheng, Yen-Chi and Lee, Hsin-Ying and Tulyakov, Sergey and Schwing, Alexander G. and Gui, Liang-Yan},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {4456--4465},
  year      = {2023},
}