VIRAL: Visual Sim-to-Real at Scale for
Humanoid Loco-Manipulation
Tairan He Zi Wang Haoru Xue Qingwei Ben Zhengyi Luo Wenli Xiao Ye Yuan Xingye Da
Fernando Castañeda Shankar Sastry Changliu Liu Guanya Shi Linxi "Jim" Fan† Yuke Zhu†
Autonomous Loco-Manipulation
Time Lapse
Consecutive Successes
Visual Randomization in Simulation
All Randomization
Dome Light Randomization
Image Randomization
Material Randomization
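To make these randomizations concrete, here is a minimal Python sketch of per-episode dome-light and material sampling plus per-frame image randomization; all function names and parameter ranges are illustrative assumptions, not VIRAL's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_visual_randomization() -> dict:
    """Resample per-episode visual parameters (all ranges are assumptions)."""
    return {
        # Dome light: intensity, RGB tint, and rotation of the environment map.
        "dome_intensity": rng.uniform(200.0, 3000.0),
        "dome_color": rng.uniform(0.5, 1.0, size=3),
        "dome_rotation_deg": rng.uniform(0.0, 360.0),
        # Materials: albedo and roughness for the table, tray, and object.
        "albedo": rng.uniform(0.1, 0.9, size=3),
        "roughness": rng.uniform(0.1, 1.0),
    }

def randomize_image(img: np.ndarray) -> np.ndarray:
    """Apply image-level randomization to each rendered RGB frame."""
    out = img.astype(np.float32)
    out *= rng.uniform(0.7, 1.3)                 # brightness jitter
    out += rng.normal(0.0, 5.0, size=out.shape)  # additive sensor noise
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```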
Key Teacher Elements
Delta Action Space & Reference State Initialization (RSI)
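The sketch below illustrates both ingredients; the tanh bounding, scale factor, and array conventions are our assumptions rather than the paper's exact formulation. The delta action space lets the policy output a bounded correction around reference joint targets, while RSI starts each episode at a random phase of the reference motion so training covers every stage of the long-horizon task from the outset.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_delta_action(q_ref_t: np.ndarray, delta: np.ndarray,
                       scale: float = 0.1) -> np.ndarray:
    """Delta action space: the policy outputs a bounded offset around the
    reference joint targets instead of absolute targets (scale assumed)."""
    return q_ref_t + scale * np.tanh(delta)

def reference_state_init(q_ref: np.ndarray, dq_ref: np.ndarray):
    """RSI: initialize the episode at a random phase of the reference
    motion (q_ref, dq_ref are per-timestep joint positions/velocities)."""
    t0 = int(rng.integers(0, len(q_ref)))
    return q_ref[t0], dq_ref[t0], t0
```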
Key Sim2Real Elements
Finger SysID
FOV Alignment
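For intuition, here is a toy sketch of both alignments. The first-order actuator model, the grid search, and the pinhole FOV convention are simplifying assumptions, not the actual calibration pipeline.

```python
import math
import numpy as np

def vertical_fov_deg(fy_pixels: float, image_height: int) -> float:
    """FOV alignment: derive the simulated camera's vertical FOV from the
    real camera's calibrated focal length (pinhole model)."""
    return math.degrees(2.0 * math.atan(image_height / (2.0 * fy_pixels)))

def fit_finger_gain(q_cmd: np.ndarray, q_meas: np.ndarray,
                    dt: float = 0.02) -> float:
    """Toy finger SysID: grid-search a tracking gain k so a first-order
    simulated finger best reproduces logged real finger trajectories."""
    best_k, best_err = 1.0, float("inf")
    for k in np.linspace(1.0, 50.0, 50):
        q, err = float(q_meas[0]), 0.0
        for cmd, meas in zip(q_cmd, q_meas):
            q += dt * k * (cmd - q)          # first-order actuator model
            err += (q - meas) ** 2
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```

Under the pinhole model, for example, a 480-pixel-tall image with fy ≈ 460 px corresponds to roughly 55° of vertical FOV.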
Compute Scaling for Teacher-Student Training
Scaling Compute for Teacher
Scaling Compute for Student
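The pattern behind this scaling is standard data parallelism: one process per GPU, each simulating and rendering its own batch of environments, with gradients averaged across ranks. The sketch below assumes a torchrun launch and NCCL backend; the environment count is illustrative.

```python
import torch
import torch.distributed as dist

def setup_data_parallel(envs_per_gpu: int = 4096):
    """One process per GPU (e.g. launched with torchrun); each rank owns
    its own batch of simulated/rendered environments."""
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())
    total_envs = envs_per_gpu * dist.get_world_size()
    return rank, total_envs

def sync_gradients(model: torch.nn.Module) -> None:
    """Average gradients across ranks after backward(), making the update
    equivalent to a single large-batch step."""
    world = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world
```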
Generalization #1: Tray Position - Y Axis
Tray Position - Y Axis: The robot demonstrates precise tray manipulation across different Y-axis positions, adapting to left, middle, and right placements.
Generalization #2: Tray Position - X Axis
Tray Position - X Axis: The robot demonstrates adaptive manipulation across various X-axis positions, from 20 cm inside the table to 15 cm beyond the edge.
Generalization #3: Cylinder Position
Cylinder Position: The robot precisely manipulates cylinders at varied positions, demonstrating adaptive positioning and control.
Generalization #4: Robot Position - Y Axis
Robot Position - Y Axis: The robot performs manipulation tasks from different Y-axis positions, demonstrating adaptability to left, middle, and right positions.
Generalization #5: Robot Position - X Axis
Robot Position - X Axis: The robot demonstrates consistent manipulation performance across different X-axis distances, from near to far positions relative to the table.
Generalization #6: Table Height
Table Height: The robot demonstrates remarkable adaptability across various table heights, from 26.5 inches to 31.8 inches, showcasing robust manipulation capabilities.
Generalization #7: Lighting Conditions
Lighting Conditions: The robot maintains consistent manipulation performance across different lighting conditions, from bright to dark and flashing environments.
Generalization #8: Table Cloth Color
Table Cloth Color: The robot successfully adapts to various table cloth colors, from gray and green to bright colors like yellow, purple, cyan, blue, orange, and red.
Generalization #9: Table Type
Table Type: The robot demonstrates versatility across different table types, showcasing consistent manipulation performance regardless of table material and design.
Generalization #10: Object
Object Variety: The robot shows strong adaptability across objects of varying shapes, sizes, and materials.
Our Visual Sim2Real Journey
The First RGB-based Sim2Real for Reaching
May 30, 2025: The task is to reach for the green or red box based on visual input: a red box cues the fingers to close, and a green box cues them to open.
Failure Cases
Failure Cases: While the robot demonstrates robust performance, occasional failures occur, including unreliable deployments, the hand getting stuck, accidental drops, and difficulty with out-of-distribution objects.
Abstract
A key barrier to the real-world deployment of humanoid robots is the lack of autonomous loco-manipulation skills. We introduce VIRAL, a visual sim-to-real framework that learns humanoid loco-manipulation entirely in simulation and deploys it zero-shot to real hardware. VIRAL follows a teacher-student design: a privileged RL teacher, operating on full state, learns long-horizon loco-manipulation using a delta action space and reference state initialization. A vision-based student policy is then distilled from the teacher via large-scale simulation with tiled rendering, trained with a mixture of online DAgger and behavior cloning. We find that compute scale is critical: scaling simulation to tens of GPUs (up to 64) makes both teacher and student training reliable, while low-compute regimes often fail. To bridge the sim-to-real gap, VIRAL combines large-scale visual domain randomization (over lighting, materials, camera parameters, image quality, and sensor delays) with real-to-sim alignment of the dexterous hands and cameras. Deployed on a Unitree G1 humanoid, the resulting RGB-based policy performs continuous loco-manipulation for up to 54 cycles, generalizing to diverse spatial and appearance variations without any real-world fine-tuning, and approaching expert-level teleoperation performance. Extensive ablations dissect the key design choices required to make RGB-based humanoid loco-manipulation work in practice.
Method
There are three steps in the VIRAL framework:
- Teacher Training with Privileged Information: A privileged RL teacher with full state access learns long-horizon loco-manipulation using delta action space and reference state initialization.
- Student Distillation at Scale: A vision-based student policy is distilled from the teacher via large-scale simulation with tiled rendering, trained using a mixture of online DAgger and behavior cloning across tens of GPUs (a minimal sketch of this mixture follows the list).
- Sim-to-Real Transfer: Large-scale visual domain randomization combined with real-to-sim alignment of dexterous hand and camera parameters enables zero-shot deployment to real hardware.
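As referenced in step 2, here is a minimal sketch of the DAgger + behavior-cloning mixture used for distillation; the MSE action loss, the 0.5 weighting, and all argument names are our assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student: torch.nn.Module, teacher: torch.nn.Module,
                      obs_bc: torch.Tensor, act_bc: torch.Tensor,
                      obs_student: torch.Tensor, priv_student: torch.Tensor,
                      bc_weight: float = 0.5) -> torch.Tensor:
    """BC term on pre-collected teacher rollouts plus an online DAgger
    term where the privileged teacher relabels states the student visits."""
    loss_bc = F.mse_loss(student(obs_bc), act_bc)
    with torch.no_grad():
        act_dagger = teacher(priv_student)   # teacher sees privileged state
    loss_dagger = F.mse_loss(student(obs_student), act_dagger)
    return bc_weight * loss_bc + (1.0 - bc_weight) * loss_dagger
```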
BibTeX
@article{he2025viral,
title={VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation},
author={He, Tairan and Wang, Zi and Xue, Haoru and Ben, Qingwei and Luo, Zhengyi and Xiao, Wenli and Yuan, Ye and Da, Xingye and Castañeda, Fernando and Sastry, Shankar and Liu, Changliu and Shi, Guanya and Fan, Linxi and Zhu, Yuke},
journal={arXiv preprint arXiv:2511.15200},
year={2025}
}