My current research focuses on 3D/4D scene reconstruction and generation.
I recently authored Splannequin (WACV 2026), which reconstructs static scenes from casual monocular videos through self-anchoring.
More broadly, I am interested in developing robust, compelling applications grounded in fundamental algorithmic analysis.
My goal is to build intelligent systems that can perceive and reconstruct the visual world as effectively as humans do.
Previously, I completed my M.S. in ECE at UCLA and my B.S. in Electrophysics at NCTU (now NYCU).
Prior to my current focus, I worked on privacy-preserving AI and IoT security, leading to publications in MobiCom and IEEE IoT-J.
Research Interests
Computer Vision Applications
3D & 4D Reconstruction
Generative AI
Physically-Informed Optimization
Dataset Curation
Privacy-Preserving AI & IoT Security
News
Nov 2025
Paper accepted at WACV 2026: "Splannequin: Freezing Monocular Mannequin-Challenge Footage". 🎉
Oct 2024
Joined the Computational Photography Lab at NYCU as a Research Assistant.
May 2024
Paper accepted at IEEE Security and Privacy Workshops 2024: "Virtual Keymysteries Unveiled". 🔒
Sep 2023
Paper accepted at MobiCom 2023: "Enc2: Privacy-Preserving Inference for Tiny IoTs". 📱
Jun 2020
Earned M.S. in Electrical and Computer Engineering from UCLA. 🎓
Publications
Splannequin: Freezing Monocular Mannequin-Challenge Footage with Dual-Detection Splatting
Hao-Jen Chien, Yi-Chuan Huang, Chung-Ho Wu, Wei-Lun Chao, Yu-Lun Liu
WACV 2026
Splannequin freezes dynamic Gaussian splats into crisp static 3D scenes from monocular videos by detecting artifact-prone Gaussians and anchoring them to more reliable temporal states.
GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction
Yi-Chuan Huang, Hao-Jen Chien, Chin-Yang Lin, Yin-Huan Chen, Yu-Lun Liu
Under Submission
GaMO reformulates sparse-view 3D reconstruction as multi-view outpainting, expanding the field of view with geometry-aware diffusion to efficiently achieve consistent, high-quality reconstructions from very few input views.
Voxify3D
Yi-Chuan Huang, Jiewen Chan, Hao-Jen Chien, Yu-Lun Liu
ArXiv 2025
Voxify3D is a differentiable two-stage method that converts 3D meshes into stylized voxel art with discrete palette control. It preserves semantic structure using multi-view pixel-art supervision and CLIP-guided optimization.