Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users
Computer Graphics Forum (SCA 2022)
Yongjing Ye (1,2), Libin Liu† (3), Lei Hu (1,2), Shihong Xia† (1,2)
(1) Institute of Computing Technology, Chinese Academy of Sciences
(2) University of Chinese Academy of Sciences
(3) Peking University
Abstract
Animating an avatar that reflects a user's actions in the VR world enables natural interaction with the virtual environment. It has the potential to let remote users communicate and collaborate as if they had met in person. However, a typical VR system provides only a very sparse set of up to three positional sensors: a head-mounted display (HMD) and, optionally, two hand-held controllers. This makes estimating the user's full-body movement a difficult problem. In this work, we present a data-driven, physics-based method that predicts the user's realistic full-body movement from the transformations of these VR trackers and simulates an avatar character to mimic the user's actions in the virtual world in real time. We train our system using reinforcement learning with carefully designed pretraining processes to ensure the success of the training and the quality of the simulation. We demonstrate the effectiveness of the method with an extensive set of examples.

Paper: [PDF] Video: [Youtube | Bilibili]
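To make the setup concrete, the sketch below illustrates the general shape of the real-time loop described above: observations from the three trackers (HMD and two controllers) are fed to a learned policy, which outputs per-joint targets for a physics simulation. All names, dimensions, and the tiny MLP are hypothetical stand-ins for the paper's actual architecture; the weights are random rather than trained, and the physics step is stubbed out.

```python
import random

random.seed(0)

# Hypothetical dimensions; illustrative only, not taken from the paper.
NUM_TRACKERS = 3          # HMD + two hand-held controllers
OBS_PER_TRACKER = 7       # 3D position + unit-quaternion orientation
NUM_JOINTS = 20           # simplified humanoid skeleton
OBS_DIM = NUM_TRACKERS * OBS_PER_TRACKER
ACT_DIM = NUM_JOINTS      # one PD target angle per joint (simplified)

def make_layer(n_in, n_out):
    """Random weights standing in for one trained policy layer."""
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def forward(layer, x):
    w, b = layer
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

# Two-layer MLP policy: tracker observations -> PD joint targets.
l1 = make_layer(OBS_DIM, 64)
l2 = make_layer(64, ACT_DIM)

def policy(obs):
    return forward(l2, relu(forward(l1, obs)))

# One step of the real-time loop: read tracker poses, query the policy,
# and hand the resulting PD targets to the physics simulation (stubbed here).
obs = [0.0] * OBS_DIM     # placeholder tracker readings
pd_targets = policy(obs)
print(len(pd_targets))    # one target per simulated joint
```

In the actual system the policy is trained with reinforcement learning against a physics simulator, so the targets it emits keep the simulated character balanced while matching the tracker signals; the sketch only shows the data flow at inference time.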
From reality to simulation:
Mirror scene:
Mini games:
Physical interactions:
One-Point tracking with HMD: