Material Anything: Generating Materials for Any 3D Object via Diffusion
CVPR 2025 (Highlight)
Abstract
We present Material Anything, a fully automated, unified diffusion framework designed to generate physically based materials for 3D objects. Unlike existing methods that rely on complex pipelines or case-specific optimizations, Material Anything offers a robust, end-to-end solution adaptable to objects under diverse lighting conditions. Our approach leverages a pre-trained image diffusion model, enhanced with a triple-head architecture and a rendering loss to improve stability and material quality. Additionally, we introduce confidence masks as a dynamic switcher within the diffusion model, enabling it to handle both textured and texture-less objects across varying lighting conditions. By employing a progressive material generation strategy guided by these confidence masks, along with a UV-space material refiner, our method ensures consistent, UV-ready material outputs. Extensive experiments demonstrate that our approach outperforms existing methods across a wide range of object categories and lighting conditions.
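A minimal PyTorch sketch of the triple-head and rendering-loss ideas. Everything here is an illustrative assumption, not the paper's architecture: TripleHeadDenoiser stands in for the fine-tuned image diffusion backbone, shade is a toy Lambertian-plus-specular shader rather than the actual differentiable renderer, and the channel counts are arbitrary. It shows the core pattern: one shared trunk feeding three material heads, with a rendering loss that re-shades the predicted maps and compares against the input rendering.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TripleHeadDenoiser(nn.Module):
        # Toy stand-in for the diffusion backbone: one shared trunk,
        # three heads predicting albedo, roughness-metallic, and bump.
        def __init__(self, in_ch=10, feat=64):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.SiLU(),
                nn.Conv2d(feat, feat, 3, padding=1), nn.SiLU(),
            )
            self.albedo_head = nn.Conv2d(feat, 3, 3, padding=1)
            self.rm_head = nn.Conv2d(feat, 2, 3, padding=1)  # roughness, metallic
            self.bump_head = nn.Conv2d(feat, 3, 3, padding=1)

        def forward(self, x):
            h = self.trunk(x)
            return self.albedo_head(h), self.rm_head(h), self.bump_head(h)

    def shade(albedo, rm, normal, light_dir):
        # Toy differentiable shader for the rendering loss: Lambertian
        # diffuse plus a crude specular lobe (bump map omitted).
        ndl = (normal * light_dir.view(1, 3, 1, 1)).sum(1, keepdim=True).clamp(min=0)
        rough, metal = rm[:, 0:1], rm[:, 1:2]
        return albedo * ndl * (1 - metal) + (1 - rough) * ndl ** 8

    model = TripleHeadDenoiser()
    x = torch.randn(2, 10, 64, 64)      # noisy latents + image/normal/mask conditioning
    target = torch.rand(2, 3, 64, 64)   # the input rendering to match
    normal = F.normalize(torch.randn(2, 3, 64, 64), dim=1)
    light = F.normalize(torch.tensor([0.3, 0.5, 0.8]), dim=0)

    albedo, rm, bump = model(x)
    rendered = shade(torch.sigmoid(albedo), torch.sigmoid(rm), normal, light)
    render_loss = F.mse_loss(rendered, target)
    render_loss.backward()              # added to the usual diffusion objective

The rendering loss supervises all three heads jointly through the shader, which is what ties the predicted maps to a single physically consistent appearance.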
Method
Overview of Material Anything. For texture-less objects, we first generate coarse textures using image diffusion models; objects with pre-existing textures are processed directly. Next, a material estimator progressively estimates materials for each view from the rendered image, normal map, and confidence mask. The confidence mask serves as additional guidance on illuminance uncertainty, addressing lighting variations in the input image and enhancing consistency across the generated multi-view materials. These per-view materials are then unwrapped into UV space and refined by a material refiner.
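A hedged sketch of this progressive loop, with toy stand-ins throughout: estimate replaces the diffusion-based material estimator, random arrays replace rendered views, and projection is simplified to a weighted blend into a shared UV grid (the real pipeline unwraps each view through the mesh's UV mapping).

    import numpy as np

    H = W = 8  # toy UV/material resolution

    def estimate(image, normal, conf):
        # Stand-in for the material estimator; returns per-view
        # albedo/roughness/metallic maps conditioned on the rendered
        # image, normal map, and confidence mask.
        albedo = conf[..., None] * image + (1 - conf[..., None]) * 0.5
        return {"albedo": albedo,
                "roughness": np.full((H, W, 1), 0.5),
                "metallic": np.zeros((H, W, 1))}

    def project(uv, mats, weight=1.0):
        # Merge the new view into the running UV maps by weighted
        # blending (simplified; no actual mesh unwrapping here).
        for k in mats:
            uv[k] = (uv[k] * uv["w"] + weight * mats[k]) / (uv["w"] + weight)
        uv["w"] = uv["w"] + weight
        return uv

    uv = {"albedo": np.zeros((H, W, 3)), "roughness": np.zeros((H, W, 1)),
          "metallic": np.zeros((H, W, 1)), "w": np.zeros((H, W, 1))}

    for _ in range(4):                           # progress view by view
        image = np.random.rand(H, W, 3)          # rendered view (toy input)
        normal = np.random.rand(H, W, 3)
        # Confidence flags illuminance certainty: 1 where materials were
        # already generated by earlier views, 0 where lighting is still
        # ambiguous, steering the estimator toward cross-view consistency.
        conf = (uv["w"][..., 0] > 0).astype(np.float32)
        uv = project(uv, estimate(image, normal, conf))

    # A UV-space material refiner would now fix seams and occluded texels.
    print(uv["albedo"].shape)  # (8, 8, 3)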
Generate Materials for Texture-Less Objects
Generate Materials for Albedo-Only Objects
Generate Materials for Generated Objects
Generate Materials for Scanned Objects
Comparisons
We compare our method with the texture generation methods Text2Tex, SyncMVD, and Paint3D. We also evaluate it against the optimization-based material generation approaches NvDiffRec and DreamMat, and the retrieval-based method Make-it-Real. Finally, we include comparisons with the closed-source methods Rodin Gen-1 and Tripo3D.
Applications
Material Anything can edit and customize the materials of texture-less 3D objects simply by adjusting the input prompt. Moreover, because it outputs physically based material maps, our method supports relighting, enabling objects to be rendered under different lighting conditions.
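A toy illustration of why PBR maps enable relighting: the same generated maps can be re-shaded under any light. The shade function below is an assumed Lambertian-plus-specular approximation for demonstration, not the renderer used in the paper.

    import numpy as np

    def shade(albedo, roughness, metallic, normal, light_dir):
        # Toy PBR-style shading: diffuse term scaled down for metals,
        # plus a specular lobe sharpened by low roughness.
        l = np.asarray(light_dir, dtype=np.float32)
        l = l / np.linalg.norm(l)
        ndl = np.clip((normal * l).sum(-1, keepdims=True), 0, None)
        return albedo * ndl * (1 - metallic) + (1 - roughness) * ndl ** 16

    H = W = 4
    albedo = np.full((H, W, 3), [0.8, 0.6, 0.2])   # generated albedo map
    roughness = np.full((H, W, 1), 0.3)
    metallic = np.full((H, W, 1), 0.9)
    normal = np.tile([0.0, 0.0, 1.0], (H, W, 1))

    # Relighting: identical materials, two different light directions.
    day = shade(albedo, roughness, metallic, normal, [0.0, 0.5, 1.0])
    dusk = shade(albedo, roughness, metallic, normal, [1.0, 0.1, 0.3])
    print(day.mean(), dusk.mean())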
BibTeX
@article{huang2024materialanything,
  author  = {Huang, Xin and Wang, Tengfei and Liu, Ziwei and Wang, Qing},
  title   = {Material Anything: Generating Materials for Any 3D Object via Diffusion},
  journal = {arXiv preprint arXiv:2411.15138},
  year    = {2024},
}