This paper addresses the novel task of anticipating 3D human-object interactions (HOIs). Most existing research on HOI synthesis lacks comprehensive whole-body interactions with dynamic objects, e.g., it is often limited to manipulating small or static objects. Our task is significantly more challenging, as it requires modeling dynamic objects with various shapes, capturing whole-body motion, and ensuring physically valid interactions. To this end, we propose InterDiff, a framework comprising two key steps: (i) interaction diffusion, where we leverage a diffusion model to encode the distribution of future human-object interactions; (ii) interaction correction, where we introduce a physics-informed predictor to correct denoised HOIs in a diffusion step. Our key insight is to inject the prior knowledge that interactions, when expressed in a reference frame relative to contact points, follow simple patterns and are easily predictable. Experiments on multiple human-object interaction datasets demonstrate the effectiveness of our method for this task, which is capable of producing realistic, vivid, and remarkably long-term 3D HOI predictions.
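The two-step pipeline above can be sketched as a schematic sampling loop. Everything below is a hypothetical placeholder for illustration only, not the released model: the denoiser is a toy shrinkage step and the physics-informed correction is reduced to a simple projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x_t, t):
    # Placeholder for step (i), interaction diffusion: one reverse
    # diffusion step that moves the sample toward a clean HOI estimate.
    return 0.9 * x_t

def physics_correction(x_t):
    # Placeholder for step (ii), interaction correction: project the
    # denoised HOI toward a physically valid state (here, just clipping).
    return np.clip(x_t, -1.0, 1.0)

def sample_hoi(num_steps=10, dim=8):
    # Start from Gaussian noise and alternate denoising with correction,
    # mirroring the interplay of steps (i) and (ii) in the framework.
    x = rng.standard_normal(dim)
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)
        x = physics_correction(x)
    return x

hoi = sample_hoi()
```

The point of the sketch is the control flow: the correction is applied inside the diffusion loop, at each denoising step, rather than once as a post-process.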
📖 Implementation
To create the environment, install the packages listed in the requirements file requirements.txt, which is based on Python 3.7.
[2023-09-01] Our paper is available on arXiv 🎉 Code/models are coming soon. Please stay tuned! ☕️
📝 TODO List
Release more demos.
Data preparation.
Release training and evaluation (short-term) codes.
Release checkpoints.
Release evaluation (long-term) and optimization codes.
Release code for visualization.
🔍 Overview
💡 Key Insight
We present HOI sequences (left), object motions (middle), and objects relative to the contacts after coordinate transformations (right). Our key insight is to inject coordinate transformations into a diffusion model, as the relative motion shows simpler patterns that are easier to predict, e.g., being almost stationary (top), or rotating around a fixed axis (bottom).
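This insight can be illustrated with a toy rigid-grasp example (all trajectories below are synthetic, not from the datasets): an object held by a moving contact, e.g. a hand, traces a complicated world-frame path, yet its pose expressed relative to the contact frame is constant, which is exactly the simpler, easier-to-predict pattern.

```python
import numpy as np

def rot_z(theta):
    # Rotation about the z-axis by angle theta (radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic contact trajectory: the contact frame rotates and
# translates through the world over T timesteps.
T = 20
contact_R = [rot_z(0.1 * t) for t in range(T)]
contact_p = [np.array([0.05 * t, 0.0, 1.0]) for t in range(T)]

# A rigidly held object: its world pose is the contact pose composed
# with a fixed offset, so its world-frame motion looks complicated...
offset_R = rot_z(0.7)
offset_p = np.array([0.0, 0.2, 0.0])
object_R = [R @ offset_R for R in contact_R]
object_p = [R @ offset_p + p for R, p in zip(contact_R, contact_p)]

# ...but after the coordinate transformation into the contact frame,
# the relative pose is stationary at every timestep.
rel_R = [R_c.T @ R_o for R_c, R_o in zip(contact_R, object_R)]
rel_p = [R_c.T @ (p_o - p_c)
         for R_c, p_c, p_o in zip(contact_R, contact_p, object_p)]
```

Here `rel_R` and `rel_p` equal the fixed offset at every step, a much simpler target for a predictor than the raw world-frame trajectory.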
🔗 Citation
If you find our work helpful, please cite:
```bibtex
@inproceedings{xu2023interdiff,
  title={{InterDiff}: Generating 3D Human-Object Interactions with Physics-Informed Diffusion},
  author={Xu, Sirui and Li, Zhengyuan and Wang, Yu-Xiong and Gui, Liang-Yan},
  booktitle={ICCV},
  year={2023},
}
```
👏 Acknowledgements
BEHAVE: We use the BEHAVE dataset for the mesh-based interaction.
HO-GCN: We use its presented dataset for the skeleton-based interaction.
TEMOS: We adopt the rendering code for HOI visualization.
Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, and Hydra, and uses several datasets; each has its own license that must also be followed.
🌟 Star History
About
[ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion"