Seonghyeon Ye
KAIST Graduate School of AI
[Mail] [GitHub] [Google Scholar] [X]
I am a third-year Ph.D. student at the KAIST Graduate School of AI, advised by Minjoon Seo and Kimin Lee. I am also currently a research intern on the NVIDIA GEAR team, led by Jim Fan and Yuke Zhu. I am interested in building robotic foundation models.
Publications
2025
-
DreamGen: Unlocking Generalization in Robot Learning through Neural Trajectories
Joel Jang*, Seonghyeon Ye*, Zongyu Lin*, Jiannan Xiang*, Johan Bjorck, Yu Fang, Fengyuan Hu, Spencer Huang, Kaushil Kundalia, Yen-Chen Lin, Loic Magne, Ajay Mandlekar, Avnish Narayan, You Liang Tan, Guanzhi Wang, Jing Wang, Qi Wang, Yinzhen Xu, Xiaohui Zeng, Kaiyuan Zheng, Ruijie Zheng, Ming-Yu Liu, Luke Zettlemoyer, Dieter Fox, Jan Kautz, Scott Reed*, Yuke Zhu*, Linxi "Jim" Fan*
[paper] [website]
-
GR00T N1: An Open Foundation Model for Generalist Humanoid Robots
NVIDIA, Johan Bjorck, Fernando Castañeda, Nikita Cherniadev, Xingye Da, Runyu Ding, Linxi "Jim" Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer Huang, Joel Jang, Zhenyu Jiang, Jan Kautz, Kaushil Kundalia, Lawrence Lao, Zhiqi Li, Zongyu Lin, Kevin Lin, Guilin Liu, Edith Llontop, Loic Magne, Ajay Mandlekar, Avnish Narayan, Soroush Nasiriany, Scott Reed, You Liang Tan, Guanzhi Wang, Zu Wang, Jing Wang, Qi Wang, Jiannan Xiang, Yuqi Xie, Yinzhen Xu, Zhenjia Xu, Seonghyeon Ye, Zhiding Yu, Ao Zhang, Hao Zhang, Yizhou Zhao, Ruijie Zheng, Yuke Zhu
[paper] [code] [website]
-
Magma: A Foundation Model for Multimodal AI Agents
Jianwei Yang, Reuben Tan, Qianhui Wu, Ruijie Zheng, Baolin Peng, Yongyuan Liang, Yu Gu, Mu Cai, Seonghyeon Ye, Joel Jang, Yuquan Deng, Lars Liden, Jianfeng Gao
CVPR 2025
[paper] [code] [website]
-
Latent Action Pretraining from Videos
Seonghyeon Ye*, Joel Jang*, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee*, Jianfeng Gao*, Luke Zettlemoyer*, Dieter Fox*, Minjoon Seo*
ICLR 2025
LangRob Workshop @ CoRL 2024 Best Paper
[paper] [code] [website]
2024
-
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
Hoyeon Chang, Jinho Park, Seonghyeon Ye, Sohee Yang, Youngkyung Seo, Du-Seong Chang, Minjoon Seo
NeurIPS 2024
[paper] [code]
-
Instruction Matters: A Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
EMNLP 2024
[paper] [code]
-
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo
EMNLP 2024 Findings
[paper] [code]
-
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Seonghyeon Ye*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo
ICLR 2024 Spotlight
[paper] [code]
-
Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis
Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo
TACL 2024
[paper] [code]
-
Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following
Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo
AAAI 2024
[paper] [code]
-
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sung Ju Hwang, Se-young Yun
NAACL 2024
[paper]
2023
-
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning
Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
EMNLP 2023
[paper] [code]
-
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
EMNLP 2023 Findings
[paper] [code]
-
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
ICML 2023
[paper] [code]
-
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
ICLR 2023
[paper] [code]
-
SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
Seonghyeon Ye*, Yongrae Jo*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Minjoon Seo
Blog post
[blog] [code]
2022
-
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Joel Jang*, Seonghyeon Ye*, Minjoon Seo
Transfer Learning for NLP Workshop @ NeurIPS 2022
[paper] [code]
-
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo
EMNLP 2022
[paper] [code]
-
Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
ICLR 2022
[paper] [code]
Education
-
KAIST AI
M.S. & Ph.D. in Artificial Intelligence, 2022 - Present
Advisors: Minjoon Seo, Kimin Lee
-
KAIST CS
B.S. in Computer Science, 2017 - 2021
Advisors: Alice Oh, Jong C. Park
Work Experience
-
NVIDIA GEAR
Research Intern, December 2024 - Present
Working with Jim Fan and Yuke Zhu
-
Microsoft Research
Research Intern, June 2024 - September 2024
Working with Jianfeng Gao, Jianwei Yang, and Baolin Peng
-
LG AI Research
Research Intern, July 2022 - March 2023
Working with Joongbo Shin