Zhenyu Li's Homepage
Zhenyu Li 李震宇
PhD Student, King Abdullah University of Science and Technology
Email: zhenyu.li.9955@gmail.com; GitHub: https://github.com/zhyever
Google Scholar: Google Scholar Link; CV: CV Link
Biography
I'm a 3rd-year PhD student at King Abdullah University of Science and Technology (KAUST), advised by Prof. Peter Wonka.
I received my B.E. and M.S. degrees in computer science from Harbin Institute of Technology, China.
My research currently focuses on 3D reconstruction and scene understanding.
News
- August 2025, I started my internship at Meta Reality Labs, Zurich, Switzerland!
- June 2025, Amodal Depth Anything is accepted to ICCV 2025!
- July 2024, PatchRefiner is accepted to ECCV 2024!
- March 2024, PatchFusion is accepted to CVPR 2024!
Experience
- Mar. 2021 - Sep. 2021, Development and Research Intern, SenseTime
- Jan. 2022 - Jul. 2022, Research Intern, SenseTime
- Aug. 2022 - Apr. 2023, Research Intern (Elite Camp), DiDi Cargo
- Jan. 2025 - Jul. 2025, Research Intern (TopSeed Candidate), ByteDance Seed
- Aug. 2025 - Present, Research Intern, Meta Reality Labs
Awards
- 1st place at VCL 2023 Challenge, Multitask Learning for Robustness Track! (ICCV 2023 Workshop)
- China National Scholarship 2022.
- 3rd place at SSLAD 2022 Challenge, 3D Object Detection Track! (ECCV 2022 Workshop)
- 2nd place at Mobile AI & AIM 2022 Challenge, Monocular Depth Estimation Track! (ECCV 2022 Workshop)
Codebase
Monocular Depth Estimation Toolbox. Zhenyu Li, 2022. [Code] [Bibtex]
@misc{lidepthtoolbox2022,
  title = {Monocular Depth Estimation Toolbox},
  author = {Zhenyu Li},
  howpublished = {\url{https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox}},
  year = {2022}
}
Selected Publications
Depth Anything 3: Recovering the Visual Space from Any Views. Haotong Lin, Sili Chen, Jun Hao Liew, Donny Y. Chen, Zhenyu Li, Guang Shi, Jiashi Feng, Bingyi Kang. arXiv, 2025. [Project Page] [PDF] [Code]
Amodal Depth Anything: Amodal Depth Estimation in the Wild. Zhenyu Li, Mykola Lavreniuk, Shariq Farooq Bhat, Peter Wonka. ICCV, 2025. [Project Page] [PDF] [Code]
PatchRefiner: Leveraging Synthetic Data for Real-Domain High-Resolution Monocular Metric Depth Estimation. Zhenyu Li, Shariq Farooq Bhat, Peter Wonka. ECCV, 2024. [PDF] [Code]
PatchFusion: An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation. Zhenyu Li, Shariq Farooq Bhat, Peter Wonka. CVPR, 2024. [Project Page] [PDF] [Code]
AutoAlignV2: Deformable Feature Aggregation for Dynamic Multi-Modal 3D Object Detection. Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang, Feng Zhao. ECCV, 2022. [PDF] [Code]
BinsFormer: Revisiting Adaptive Bins for Monocular Depth Estimation. Zhenyu Li, Xuyang Wang, Xianming Liu, Junjun Jiang. IEEE Transactions on Image Processing. Ranked 1st on the KITTI depth estimation benchmark (Feb. 2022). [PDF] [Code]
Service
- Conference Reviewer: CVPR, ECCV, ICCV, NeurIPS, SIGGRAPH.