Yujie Lu
Email: yujielu10@gmail.com
Research
Yujie's research focuses on vision language models (VLMs) and large language models (LLMs), with an emphasis on robust and efficient post-training (text, image, video), interpretable and faithful evaluation, task planning with LLM agents, and alignment with human preferences.
News
News! 02/26/2025 VITED: Video Temporal Evidence Distillation is accepted at CVPR 2025!
News! 01/24/2025 I successfully completed my PhD defense and will join the Meta GenAI Llama Research Team!
News! 01/22/2025 MMWorld is accepted at ICLR 2025!
News! 09/26/2024 WildVision is accepted at NeurIPS 2024 D&B Track! T2IScoreScore is accepted at NeurIPS 2024 Main Track!
News! 09/20/2024 Multimodal Procedural Planning is accepted at EMNLP 2024!
News! 08/27/2024 We release the WildVision datasets: WV-Chat, WV-Battle, and WV-Bench.
News! 06/24/2024 I started working on video large language models at Meta (FAIR Embodied AI) in NYC this summer!
News! 03/01/2024 Check out our WildVision Arena demo on HuggingFace for live benchmarking of VLMs!
Publications
Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought
Vaishnavi Himakunthala, Andy Ouyang, Daniel Philip Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, William Yang Wang
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
ULN: Towards Underspecified Vision-and-Language Navigation
Weixi Feng, Tsu-Jui Fu, Yujie Lu, William Yang Wang
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Imagination-Augmented Natural Language Understanding
Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang Wang
North American Chapter of the Association for Computational Linguistics (NAACL), Oral Presentation, 2022