😃 Welcome to my personal page!

I am Tianqi Liu (刘天齐 in Chinese), a third-year (2023.09-) master's student in Artificial Intelligence at Huazhong University of Science and Technology (HUST), supervised by Prof. Zhiguo Cao. I was fortunate to work as a research assistant at MMLAB@NTU, advised by Prof. Ziwei Liu. Before that, I received my bachelor's degree from HUST in 2023. My current research interests include 3D/4D generation and reconstruction.

🔥 News

📝 Publications

(* denotes equal contribution.)


[arXiv 2025] Light-X: Generative 4D Video Rendering with Camera and Illumination Control
Tianqi Liu, Zhaoxi Chen, Zihao Huang, Shaocong Xu, Saining Zhang, Chongjie Ye, Bohan Li, Zhiguo Cao, Wei Li, Hao Zhao, Ziwei Liu.
[Project page] [Paper] [Code] [Video] [Article in Chinese]

Light-X is a video generation framework that jointly controls camera trajectory and illumination from monocular videos.


[arXiv 2025] Generative Photographic Control for Scene-Consistent Video Cinematic Editing
Huiqiang Sun, Liao Shen, Zhan Peng, Kun Wang, Size Wu, Yuhang Zang, Tianqi Liu, Zihao Huang, Xingyu Zeng, Zhiguo Cao, Wei Li, Chen Change Loy.
[Project page] [Paper] [Code] [Article in Chinese]

Cinectrl is the first video cinematic editing framework that provides fine control over professional camera parameters (e.g., bokeh, shutter speed).


[arXiv 2025] 4DNeX: Feed-Forward 4D Generative Modeling Made Easy
Zhaoxi Chen*, Tianqi Liu*, Long Zhuo*, Jiawei Ren, Zeng Tao, He Zhu, Fangzhou Hong, Liang Pan, Ziwei Liu.
[Project page] [Paper] [Code] [Dataset] [Video] [Article in Chinese]

4DNeX is a feed-forward framework that generates 4D (dynamic 3D) scene representations from a single image by adapting a video diffusion model. It produces high-quality dynamic point clouds and enables novel-view video synthesis.


[ICCV 2025] Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency
Tianqi Liu*, Zihao Huang*, Zhaoxi Chen, Guangcong Wang, Shoukang Hu, Liao Shen, Huiqiang Sun, Zhiguo Cao, Wei Li, Ziwei Liu.
[Project page] [Paper] [Code] [Video] [Article in Chinese]

Free4D is a tuning-free framework for 4D scene generation from a single image or text.


[ICCV 2025] MuGS: Multi-Baseline Generalizable Gaussian Splatting Reconstruction
Yaopeng Lou, Liao Shen, Tianqi Liu, Jiaqi Li, Zihao Huang, Huiqiang Sun, Zhiguo Cao.
[Paper] [Code]

MuGS is the first multi-baseline generalizable Gaussian Splatting method.


[CVPR 2025] DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting
Liao Shen, Tianqi Liu, Huiqiang Sun, Jiaqi Li, Zhiguo Cao, Wei Li, Chen Change Loy.
[Project page] [Paper] [Code]

We introduce DoF-Gaussian, a controllable depth-of-field method for 3D-GS. We develop a lens-based imaging model based on geometric optics principles to control DoF effects. Our framework is customizable and supports various interactive applications.


[CVPR 2025] WildAvatar: Learning In-the-wild 3D Avatars from the Web
Zihao Huang, Shoukang Hu, Guangcong Wang, Tianqi Liu, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu.
[Project page] [Paper] [Code] [Video] [Article in Chinese]

We present WildAvatar, a web-scale in-the-wild video dataset for 3D avatar creation.


[CVPR 2025 Highlight] CH3Depth: Efficient and Flexible Depth Foundation Model with Flow Matching
Jiaqi Li, Yiran Wang, Jinghong Zheng, Junrui Zhang, Liao Shen, Tianqi Liu, Zhiguo Cao.
[Paper] [Code]

CH₃Depth is an efficient and flexible flow-matching-based depth estimation framework that achieves state-of-the-art zero-shot performance in accuracy, efficiency, and temporal consistency.


[ECCV 2024] MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo
Tianqi Liu, Guangcong Wang, Shoukang Hu, Liao Shen, Xinyi Ye, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu.
[Project page] [Paper] [Code] [Video] [Article in Chinese]

MVSGaussian is a Gaussian-based method designed for efficient reconstruction of unseen scenes from sparse views in a single forward pass. It offers high-quality initialization for fast training and real-time rendering.


[ECCV 2024] DreamMover: Leveraging the Prior of Diffusion Models for Image Interpolation with Large Motion
Liao Shen, Tianqi Liu, Huiqiang Sun, Xinyi Ye, Baopu Li, Jianming Zhang, Zhiguo Cao.
[Project page] [Paper] [Code]

By leveraging the prior of diffusion models, DreamMover can generate intermediate images from image pairs with large motion while maintaining semantic consistency.


[CVPR 2024] Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields
Tianqi Liu, Xinyi Ye, Min Shi, Zihao Huang, Zhiyu Pan, Zhan Peng, Zhiguo Cao.
[Project page] [Paper] [Code] [Video]

We present GeFu, a generalizable NeRF method that synthesizes novel views from multi-view images in a single forward pass.


[CVPR 2024] 3D Multi-frame Fusion for Video Stabilization
Zhan Peng, Xinyi Ye, Weiyue Zhao, Tianqi Liu, Huiqiang Sun, Baopu Li, Zhiguo Cao.
[Paper] [Code]

RStab is a novel framework for video stabilization that integrates 3D multi-frame fusion through volume rendering.


[ICCV 2023] When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo
Tianqi Liu, Xinyi Ye, Weiyue Zhao, Zhiyu Pan, Min Shi, Zhiguo Cao.
[Paper] [Code]

ETMVSNet uses epipolar geometric priors to constrain feature aggregation fields, thereby efficiently inferring multi-view depths and reconstructing scenes.


[ICCV 2023] Constraining Depth Map Geometry for Multi-View Stereo: A Dual-Depth Approach with Saddle-shaped Depth Cells
Xinyi Ye, Weiyue Zhao, Tianqi Liu, Zihao Huang, Zhiguo Cao, Xin Li.
[Paper] [Code]

DMVSNet proposes a new perspective to consider the depth geometry of multi-view stereo and introduces a dual-depth approach to approximate the depth geometry with saddle-shaped cells.

🎖 Honors and Awards

  • 2025    National Scholarship (Top 0.2% Nationwide)
  • 2024    National Scholarship (Top 0.2% Nationwide)
  • 2023    Honours Degree (Top 2%)
  • 2022    National Scholarship (Top 0.2% Nationwide)
  • 2022    Merit Student (Top 2%)
  • 2021    Outstanding Undergraduate Student (Top 2%)

📎 Links