Hello, I'm Jeongeun Park
I am a Ph.D. student at Korea University, where I work with Sungjoon Choi on artificial intelligence and robotics. During my Ph.D., I have also spent time at NAVER AI Lab (Korea) and Ingenuity Labs (Canada).
I’m passionate about how robots can assist people in everyday life by collaborating and interacting naturally. In particular, I aim to investigate how foundation models can enable robots to understand, anticipate, and respond to human needs, making day-to-day tasks smoother and more intuitive.
News
- [Dec 2025]: I have successfully defended my PhD dissertation, “Towards Multimodal Foundation Models for Human-Centered Robot Behavior” at Korea University.
- [Sep 2025]: Our paper about visual representation learning got accepted to NeurIPS 2025.
- [May 2025]: Our paper about interactive motion-language model got accepted to ICCV 2025.
- [February 2025]: I am presenting our work on human-robot interaction at the Korea Robotics Society Annual Conference (KRoC) as an invited speaker in the flagship conference session.
- [July 2024]: Our paper about persona-driven human-robot interaction has been accepted to Robotics and Automation Letters (RA-L)! We are looking forward to presenting our work at ICRA 2025.
- [May 2024]: Our paper about socially aware navigation has been accepted to RO-MAN 2024.
- [April 2024 - October 2024]: I am starting an internship at NAVER Cloud AI Lab, advised by Sangdoo Yun.
- [February 2024]: I am presenting our work on uncertainty estimation at the Korea Robotics Society Annual Conference (KRoC) as an invited speaker in the flagship conference session.
- [January 2024]: Our paper about semi-autonomous teleoperation with physical simulators and large language models has been accepted to ICRA 2024!
- [November 2023]: Our paper about uncertainty estimation with large language models has been accepted to Robotics and Automation Letters (RA-L)! We are looking forward to presenting our work at ICRA 2024.
- [February 2023]: I am presenting our work on vision-based navigation at NAVER Tech Talk.
- [January 2023]: Our paper about active visual search has been accepted to ICRA 2023.
- [September 2022 - December 2022]: I am heading to Queen's University, Canada, as a visiting researcher, advised by Matthew Pan. This program is funded by the Mitacs Globalink Research Award to Canada.
- [May 2022]: Our paper about an abnormal driving dataset has been accepted to IROS 2022.
- [January 2022]: Our paper about semi-autonomous teleoperation for non-prehensile manipulation has been accepted to ICRA 2022.
Publications
Hierarchical Vision Language Action Model Using Success and Failure Demonstrations
We propose VINE, a dual-system framework that injects failure-aware reasoning into VLAs. System 1 executes grounded action chunks, while System 2 builds a tree of thought states and scores candidate subgoals using both success and failure data.
Token Bottleneck: One Token to Remember Dynamics
ICLR 2025 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities
ToBo uses extreme masked autoencoding to learn compact, temporally aware visual state embeddings that boost robotic manipulation and locomotion performance in both simulation and the real world.
A Unified Framework for Motion Reasoning and Generation in Human Interaction
MoLaM (Interactive Motion-Language Model) integrates language and motion modalities to understand, generate, and control interactive motions in multi-turn conversations, addressing the scarcity of multi-turn interactive motion data with a synthetic dataset, INTER-MT2.
Towards Embedding Dynamic Personas in Interactive Robots: Masquerading Animated Social Kinematics (MASK)
International Conference on Robotics and Automation (ICRA), 2025
ICRA 2025 The 2nd Workshop on Nonverbal Cues for Human-Robot Cooperative Intelligence
Employing a persona-driven interactive framework to animate an anthropomorphic robotic system, enhancing audience engagement through non-verbal interactions.
SPOTS: Stable Placement of Objects with Reasoning in Semi-Autonomous Teleoperation Systems
Introducing a teleoperation framework that enhances the 'place' task in robotics by combining simulation-driven stability verification with semantic reasoning from large language models.
CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents
International Conference on Robotics and Automation (ICRA), 2024
Introducing a method for interactive robotic agents using large language models (LLMs) to classify user commands as clear, ambiguous, or infeasible, enhancing reliability by leveraging uncertainty estimation, situational awareness, and user interaction for disambiguation.
Text-based Human Search and Approach using a Robot Dog
A robotic system that searches for and approaches a person from a textual description, combining language models for target identification with a hybrid learning framework for generating human-friendly robot motions.
Zero-shot Active Visual Search (ZAVIS): Intelligent Object Search for Robotic Assistants
A mobile robot system that searches for target objects specified in free-form text, leveraging commonsense knowledge.
Elucidating Robust Learning with Uncertainty-Aware Corruption Pattern Estimation
Uncertainty-aware robust learning
Towards Defensive Autonomous Driving: Collecting and Probing Driving Demonstrations of Mixed Qualities
The R3 Driving Dataset, a novel collection of driving data that categorizes abnormal behaviors to improve out-of-distribution (OOD) detection for autonomous driving.
Learning Robot Structure and Motion Embeddings using Graph Neural Networks
Using graph neural networks (GNN) to find compact, low-dimensional embeddings of a robot’s kinematic structure and pose data.
Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills
A semi-autonomous teleoperation framework for non-prehensile manipulation tasks.
Reviewing activities
- I serve as a reviewer for ICRA and RA-L (2026).
- I served as a reviewer for ICRA, CHI, the NeurIPS Workshop on Video-Language Models, CVPR, IROS, and HRI (2025).
- I served as a reviewer for RA-L and IROS (2024).