Ph.D. in Computer Science with President Graduate Fellowship (PGF)
National University of Singapore
Hi, this is Junting Chen (陈俊廷), currently a CS Ph.D. student at the National University of Singapore (NUS), advised by Prof. Lin Shao. Before that, he was a student researcher at the Shanghai Artificial Intelligence Laboratory, working on Embodied AI and robotic systems with Yao Mu and Ping Luo. He earned his master's degree in Robotics and Artificial Intelligence at ETH Zurich, where Prof. Luc Van Gool served as his master's program tutor and he worked with Dr. Suryansh Kumar and Dr. Guohao Li at CVL. Before that, he earned his BSc in Computer Science at the University of Chinese Academy of Sciences, working with Prof. Ruiping Wang and Prof. Xilin Chen during his undergraduate studies.
My current research interests center on bridging the gap between perception and action in indoor-scene Embodied AI: 1) how to construct a unified scene representation during active exploration that supports multiple high-level embodied tasks; 2) how to make long-horizon decisions and predict immediate actions conditioned on different intentions, by leveraging the information in the scene representation together with commonsense knowledge.
My lab and I are actively recruiting intern students, either on-site (Singapore / Shenzhen, China) or remotely. Please send me your proposal and resume to apply for an internship. I am also open to any form of discussion or collaboration — feel free to drop me an email.
Object goal navigation is an important problem in Embodied AI that involves guiding an agent to an instance of a given object category in an unknown environment—typically an indoor scene. We present a modular, training-free solution, which embraces more classic approaches, to tackle the object goal navigation problem. Our method builds a structured scene representation on top of a classic visual simultaneous localization and mapping (V-SLAM) framework, and reasons about promising areas to search for the goal object by injecting semantics into geometry-based frontier exploration.
RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation
We present LISN-Bench, the first simulation benchmark for language-instructed social navigation featuring human-robot coexistence and highly dynamic environments, together with Social-Nav-Modulator, a fast-slow hierarchical system designed to tackle the introduced social navigation tasks.