I recently graduated from the University of Toronto with a Computer Science Specialist and a Math Minor. I am broadly interested in computer vision and graphics. Currently, I am working with Prof. Amir Zamir at EPFL.
I spent one year at Deep Genomics as a machine learning researcher. Earlier in my undergraduate studies, I conducted research with Prof. Lisa Jeffrey and worked as a web developer with Prof. David Liu.
I am passionate about mentoring junior students and helping them explore research opportunities at UofT and beyond. If you ever want to chat about research (or anything else!), feel free to reach out: junru.lin@mail.utoronto.ca.
news
Jul 1, 2025
Started my internship at EPFL with Prof. Amir Zamir!
In this work, we address the challenge of deploying Neural Radiance Fields (NeRFs) in Simultaneous Localization and Mapping (SLAM) without depth information, relying solely on RGB inputs. The key to unlocking the full potential of NeRF in such a challenging context lies in the integration of real-world priors. A crucial prior we explore is the binary opacity of 3D space containing opaque objects. To effectively incorporate this prior into the NeRF framework, we introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface. This enables more accurate depth rendering, which in turn improves the performance of image-warping techniques. Building on this, we further propose a novel hybrid odometry (HO) scheme that merges bundle adjustment with warping-based localization. Our integrated TT and HO approach achieves state-of-the-art performance on synthetic and real-world datasets in terms of both speed and accuracy, underscoring the potential of NeRF-SLAM for navigating complex environments with high fidelity.
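The ternary classification above can be sketched in a few lines; the tolerance `eps` and the assumption of a known per-ray surface depth are illustrative simplifications, not the paper's exact formulation.

```python
import numpy as np

def classify_ray_samples(t_samples, t_surface, eps=0.05):
    """Label sample depths along a ray relative to an (assumed known) surface
    depth: -1 = before the surface, 0 = on the surface, +1 = behind it."""
    return np.where(t_samples < t_surface - eps, -1,
           np.where(t_samples > t_surface + eps, 1, 0))

t = np.array([0.2, 0.9, 1.0, 1.1, 1.8])
print(classify_ray_samples(t, t_surface=1.0).tolist())  # [-1, -1, 0, 1, 1]
```

Samples well in front of the surface get one label, a thin band around the surface another, and everything behind a third, which is the structure the TT opacity model imposes.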
@inproceedings{lin2023ternarytypeopacityhybridodometry,
  title     = {Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM},
  author    = {Lin, Junru and Nachkov, Asen and Peng, Songyou and Gool, Luc Van and Pani Paudel, Danda},
  booktitle = {2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2024},
  pages     = {7929-7936},
  doi       = {10.1109/IROS58592.2024.10802493},
}
ICCV
Global Motion Corresponder for 3D Point-Based Scene Interpolation
Junru Lin*, Chirag Vashist*, Mikaela Angelina Uy, and 4 more authors
Existing dynamic scene interpolation methods typically assume that the motion between consecutive timesteps is small enough for displacements to be locally approximated by linear models. In practice, even slight deviations from this small-motion assumption can cause conventional techniques to fail. In this paper, we introduce the Global Motion Corresponder (GMC), a novel approach that robustly handles large motion and achieves smooth transitions. GMC learns unary potential fields that predict SE(3) mappings into a shared canonical space, balancing correspondence, spatial and semantic smoothness, and local rigidity. We demonstrate that our method significantly outperforms existing baselines on 3D scene interpolation when the two states undergo large global motion, and that it enables extrapolation where baseline methods cannot.
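As a rough illustration of mapping points into a shared canonical space via SE(3) transforms: the transform below is a hand-picked rigid motion for demonstration only, whereas GMC predicts such mappings per point via learned unary potential fields.

```python
import numpy as np

def se3_apply(T, pts):
    """Apply a 4x4 SE(3) transform (rotation + translation) to (N, 3) points."""
    R, t = T[:3, :3], T[:3, 3]
    return pts @ R.T + t

# Hypothetical rigid transform for one scene state: 90-degree yaw plus a translation.
theta = np.pi / 2
T = np.eye(4)
T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
             [np.sin(theta),  np.cos(theta), 0.0],
             [0.0,            0.0,           1.0]]
T[:3, 3] = [1.0, 0.0, 0.0]

pts = np.array([[1.0, 0.0, 0.0]])
# (1, 0, 0) rotates to (0, 1, 0), then translates to (1, 1, 0).
print(se3_apply(T, pts).round(6).tolist())  # [[1.0, 1.0, 0.0]]
```

Once both scene states are mapped into the canonical space, correspondence and interpolation between them become much better conditioned than matching under large motion in the original frames.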
@misc{lin2025gmc,
  title  = {Global Motion Corresponder for 3D Point-Based Scene Interpolation},
  author = {Lin*, Junru and Vashist*, Chirag and Uy, Mikaela Angelina and Stearns, Colton and Luo, Xuan and Guibas, Leonidas and Li, Ke},
  year   = {2025},
}