SkillBlender performs versatile autonomous humanoid loco-manipulation tasks across different embodiments and environments, given only one or two intuitive reward terms.
Summary Video
Framework: SkillBlender
Method Pipeline
Overview of SkillBlender. We first pretrain goal-conditioned primitive expert skills that are task-agnostic, reusable, and physically interpretable, and then reuse and blend these skills to achieve complex whole-body loco-manipulation tasks given only one or two task-specific reward terms.
A Pretrain-then-Blend Paradigm
Low-Level Primitive Skills (on H1)
High-Level Loco-Manipulation Tasks (on H1)
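To make the pretrain-then-blend paradigm concrete, below is a minimal PyTorch sketch of how a high-level task policy can reuse frozen, goal-conditioned low-level skills: it proposes a goal for each skill plus per-joint blending weights, and the final joint command is the weighted combination of the skills' actions. All module names, observation sizes, and dimensions here are illustrative assumptions, not the released SkillBlender code.

```python
# Minimal sketch of a pretrain-then-blend controller.
# All names and dimensions (SkillPolicy, HighLevelBlender, PROPRIO_DIM, etc.)
# are illustrative assumptions, not the released SkillBlender API.
import torch
import torch.nn as nn

NUM_JOINTS = 19      # assumed H1-like joint count
PROPRIO_DIM = 66     # assumed proprioceptive observation size
GOAL_DIM = 3         # assumed per-skill goal size (e.g. a target position)
TASK_OBS_DIM = 32    # assumed task-specific observation size


class SkillPolicy(nn.Module):
    """A frozen, goal-conditioned low-level skill (e.g. Walking or Reaching)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PROPRIO_DIM + GOAL_DIM, 256), nn.ELU(),
            nn.Linear(256, NUM_JOINTS),
        )

    def forward(self, proprio, goal):
        return self.net(torch.cat([proprio, goal], dim=-1))


class HighLevelBlender(nn.Module):
    """Task policy: proposes a goal for each skill and per-joint blend weights."""

    def __init__(self, num_skills=2):
        super().__init__()
        self.num_skills = num_skills
        self.net = nn.Sequential(
            nn.Linear(PROPRIO_DIM + TASK_OBS_DIM, 256), nn.ELU(),
            nn.Linear(256, num_skills * (GOAL_DIM + NUM_JOINTS)),
        )

    def forward(self, proprio, task_obs):
        out = self.net(torch.cat([proprio, task_obs], dim=-1))
        goals = out[..., : self.num_skills * GOAL_DIM]
        goals = goals.view(*out.shape[:-1], self.num_skills, GOAL_DIM)
        logits = out[..., self.num_skills * GOAL_DIM:]
        weights = logits.view(*out.shape[:-1], self.num_skills, NUM_JOINTS)
        weights = torch.softmax(weights, dim=-2)  # normalize across skills, per joint
        return goals, weights


def blended_action(skills, blender, proprio, task_obs):
    """Blend frozen skill actions with per-joint weights from the task policy."""
    goals, weights = blender(proprio, task_obs)
    actions = torch.stack(
        [skill(proprio, goals[..., i, :]) for i, skill in enumerate(skills)], dim=-2
    )  # (..., num_skills, NUM_JOINTS)
    return (weights * actions).sum(dim=-2)  # (..., NUM_JOINTS)


if __name__ == "__main__":
    walking, reaching = SkillPolicy(), SkillPolicy()   # pretrained, then frozen
    for p in list(walking.parameters()) + list(reaching.parameters()):
        p.requires_grad_(False)
    blender = HighLevelBlender(num_skills=2)           # trained with 1-2 task rewards
    proprio = torch.randn(4, PROPRIO_DIM)
    task_obs = torch.randn(4, TASK_OBS_DIM)
    print(blended_action([walking, reaching], blender, proprio, task_obs).shape)
```

In this sketch only the high-level blender would be optimized against the one or two task-specific reward terms, while the pretrained primitive skills stay frozen and reusable across tasks.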
Benchmark: SkillBench
Our SkillBench is a parallel, cross-embodiment, and diverse simulated benchmark containing three embodiments, four primitive skills, and eight loco-manipulation tasks.
Parallel Simulation
Cross-Embodiment
Diverse Tasks
Experiment Results
Qualitative Comparison
Qualitative comparison between different methods. Our SkillBlender not only achieves higher task accuracy, but also avoids reward hacking and yields more natural and feasible movements.
Skill Blending Decomposition
Visualization of whole-body per-joint weights at different task stages. Bluer joints carry a higher Reaching weight, and greener joints carry a higher Walking weight. This visualization highlights the spatiotemporal decomposition of our skill blending: the two skills interleave rather than one skill dominating the overall motion.
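As a hedged reading of what the colors encode, the blended command for joint $j$ at time $t$ can be written as a convex combination of the two skills' actions, with the weight predicted per joint by the high-level policy (the notation below is our own shorthand, not taken verbatim from the paper):

$$a_t^{(j)} \;=\; w_t^{(j)}\, a_{\mathrm{reach},t}^{(j)} \;+\; \bigl(1 - w_t^{(j)}\bigr)\, a_{\mathrm{walk},t}^{(j)}, \qquad w_t^{(j)} \in [0, 1].$$

Joints with $w_t^{(j)}$ near 1 render blue (Reaching-dominated), joints with $w_t^{(j)}$ near 0 render green (Walking-dominated), and the same joint can shift between the two across task stages.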
Primitive Skill Deployment
We also provide a sim2real toolkit that deploys our simulation-trained primitive skill policies in the real world on a Unitree H1 humanoid robot.
Our Team
³Peking University, ⁴University of California, Berkeley
* Equal contributions
@article{kuang2025skillblender,
  title={SkillBlender: Towards Versatile Humanoid Whole-Body Loco-Manipulation via Skill Blending},
  author={Kuang, Yuxuan and Geng, Haoran and Elhafsi, Amine and Do, Tan-Dzung and Abbeel, Pieter and Malik, Jitendra and Pavone, Marco and Wang, Yue},
  journal={arXiv preprint arXiv:2506.09366},
  year={2025}
}
If you have any questions, please contact Yuxuan Kuang.