ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image

ANIM Reconstruction Example

ANIM-Real data
Abstract
Recent progress in human shape learning shows that neural implicit models are effective at generating 3D human surfaces from a limited number of views, and even from a single RGB image. However, existing monocular approaches still struggle to recover fine geometric details such as faces, hands, or cloth wrinkles. They are also prone to depth ambiguities that result in distorted geometries along the camera's optical axis. In this paper, we explore the benefits of incorporating depth observations in the reconstruction process by introducing ANIM, a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy. Our model learns geometric details from both multi-resolution pixel-aligned and voxel-aligned features in order to leverage depth information and capture spatial relationships, mitigating depth ambiguities. We further enhance the quality of the reconstructed shape with a depth-supervision strategy that improves the accuracy of the signed distance field estimated at points lying on the reconstructed surface. Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point clouds, or RGB-D data as input. In addition, we introduce ANIM-Real, a new multi-modal dataset comprising high-quality scans paired with consumer-grade RGB-D camera captures, together with our protocol to fine-tune ANIM, enabling high-quality reconstruction from real-world human capture.
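To make the depth-supervision idea concrete, below is a minimal sketch (not the paper's code) of how surface points obtained by back-projecting the input depth map can supervise the predicted signed distance field: every valid depth pixel lies on the observed surface, so the signed distance predicted there should be zero. The pinhole intrinsics (fx, fy, cx, cy) and the sdf_net query function are hypothetical placeholders.

import torch

def backproject_depth(depth, fx, fy, cx, cy):
    # Back-project a depth map (H, W) into camera-space 3D points (N, 3).
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = torch.stack([x, y, depth], dim=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]  # keep pixels with valid depth

def depth_supervision_loss(sdf_net, depth, fx, fy, cx, cy):
    # Back-projected depth pixels lie on the captured surface, so the
    # signed distance predicted at those points is driven toward zero.
    pts = backproject_depth(depth, fx, fy, cx, cy)
    return sdf_net(pts).abs().mean()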
ANIM Approach
Our proposed framework has three major components: i) a multi-resolution appearance feature extractor for color and normal inputs (LR-FE and HR-FE); ii) a novel SparseConvNet U-Net (Volume Feature Extractor, or VFE) that efficiently extracts geometric features from 3D voxels and low-resolution image features; iii) an MLP that estimates the implicit surface representation of full-body humans.
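As an illustration of how pixel-aligned and voxel-aligned features can feed the implicit-surface MLP, here is a simplified, self-contained sketch. It assumes an orthographic projection and single-resolution feature tensors; the actual model uses multi-resolution extractors and a SparseConvNet feature volume.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitQuery(nn.Module):
    # Toy query head: pixel-aligned + voxel-aligned features -> signed distance.
    def __init__(self, feat2d_ch=64, feat3d_ch=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat2d_ch + feat3d_ch + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance
        )

    def forward(self, pts, feat2d, feat3d):
        # pts:    (B, N, 3) query points in [-1, 1]^3 (normalized camera space)
        # feat2d: (B, C2, H, W) image feature map
        # feat3d: (B, C3, D, H, W) feature volume
        B, N, _ = pts.shape
        # Pixel-aligned feature: orthographic projection keeps (x, y).
        xy = pts[..., :2].unsqueeze(2)                        # (B, N, 1, 2)
        f2d = F.grid_sample(feat2d, xy, align_corners=True)   # (B, C2, N, 1)
        f2d = f2d.squeeze(-1).permute(0, 2, 1)                # (B, N, C2)
        # Voxel-aligned feature: trilinear interpolation at (x, y, z).
        xyz = pts.view(B, N, 1, 1, 3)                         # (B, N, 1, 1, 3)
        f3d = F.grid_sample(feat3d, xyz, align_corners=True)  # (B, C3, N, 1, 1)
        f3d = f3d.view(B, -1, N).permute(0, 2, 1)             # (B, N, C3)
        return self.mlp(torch.cat([f2d, f3d, pts], dim=-1)).squeeze(-1)  # (B, N)

In practice the 2D projection would follow the camera model of the capture device, and features sampled at several image resolutions would be concatenated before the MLP.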
Paper
Citation
M. Pesavento, Y. Xu, N. Sarafianos, R. Maier, Z. Wang, C. Yao, M. Volino, E. Boyer, A. Hilton and T. Tung, "ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image", in The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
Bibtex
@misc{pesavento2024anim,
  title={ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image},
  author={Marco Pesavento and Yuanlu Xu and Nikolaos Sarafianos and Robert Maier and Ziyan Wang and Chun-Han Yao and Marco Volino and Edmond Boyer and Adrian Hilton and Tony Tung},
  year={2024},
  eprint={2403.10357},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Results
Quantitative Comparisons
Quantitative comparisons with state-of-the-art approaches to 3D human reconstruction from a single input.
Qualitative Comparisons
Qualitative comparisons with state-of-the-art approaches to 3D human reconstruction from single-view RGB-D data.
Qualitative comparisons with state-of-the-art approaches to 3D human reconstruction from different kinds of input.
Real Data
Results obtained with real data from an Azure Kinect after fine-tuning ANIM with ANIM-Real.
References
- IF-Net: J. Chibane et al., "Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion", CVPR, 2020.
- PaMIR: Z. Zheng et al., "PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction", TPAMI, 2021.
- PIFuHD: S. Saito et al., "PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization", CVPR, 2020.
- ICON: Y. Xiu et al., "ICON: Implicit Clothed humans Obtained from Normals", CVPR, 2022.
- PHORHUM: T. Alldieck et al., "Photorealistic Monocular 3D Reconstruction of Humans Wearing Clothing", CVPR, 2022.
- ECON: Y. Xiu et al., "ECON: Explicit Clothed humans Optimized via Normal integration", CVPR, 2023.
- SuRS: M. Pesavento et al., "Super-resolution 3D Human Shape from a Single Low-Resolution Image", ECCV, 2022.
- PIFu: S. Saito et al., "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization", ICCV, 2019.
- NormalGAN: L. Wang et al., "NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image", ECCV, 2020.
- OcPlane: T. He et al., "Occupancy Planes for Single-view RGB-D Human Reconstruction", AAAI, 2023.
Acknowledgement
This research was supported by Meta and by the UKRI EPSRC and BBC Prosperity Partnership AI4ME: Future Personalised Object-Based Media Experiences Delivered at Scale Anywhere (EP/V038087).