Workshop on Long Context Foundation Models (LCFM)
@ ICML 2025
Location: West Meeting Room 202-204, Vancouver Convention Center, Vancouver, Canada
Date: Saturday, July 19, 2025
Quick Links:
OpenReview Portal | ICML Site
Contact Us: lcfm25-workshop@googlegroups.com
Announcements
Workshop schedule is now available!
Call for Papers
Important Dates
- Submission Begins: May 10, 2025
- Submission Portal: OpenReview
- Template: Overleaf
- Submission Deadline: May 28, 2025 (extended from May 22; 11:59 am, anywhere on earth)
- Notification of Acceptance: June 9, 2025
- Workshop Date: Saturday, July 19, 2025
Guidelines
- We welcome papers of up to 4 pages, not including references and appendix.
- The paper should be anonymized and uploaded to OpenReview as a single PDF.
- You may include as many pages of references and appendix as you wish, but reviewers are not required to read the appendix.
- Posting papers on preprint servers such as arXiv is permitted.
- This is a non-archival workshop: submissions will not be indexed and will not appear in archival proceedings.
- Accepted papers will appear on the workshop website; they will also be available on OpenReview and the ICML virtual site.
- We accept submissions that are under review at other venues (e.g., NeurIPS 2025), as long as this does not violate the dual-submission / anonymity policy of the other venue.
- The review process will be double-blind.
Topics of Interest
Many challenging tasks for foundation models require synthesizing information over thousands to millions of individual pieces of data, which may take many forms, including images, text, audio, and genomes. Enabling foundation models to process long contexts introduces key challenges in computational efficiency, data quality and quantity, and evaluation. Our workshop aims to convene researchers to address these challenges, fostering discussion, development, and evaluation of long-context foundation models across various AI disciplines, including but not limited to:
- New modeling, training, and data strategies.
- Efficiency techniques for (long-context) foundation models.
- Evaluation and understanding of long-context models.
- Retrieval-augmented foundation models.
- Long-context reasoning.
- Long-context multimodal learning.
- Long-range AI for science.
- What’s next for long-context foundation models?
Speakers
Jiajun Wu
Stanford University
Tri Dao
Princeton University
Together AI
Pang Wei Koh
University of Washington
Dima Rekesh
NVIDIA
Volodymyr Kuleshov
Cornell University
Cornell Tech
Panelists
Yuandong Tian
Meta
Mohit Iyyer
University of Massachusetts Amherst
Dima Rekesh
NVIDIA
Xinyu Yang
Carnegie Mellon University
Organizers
Zexue He
MIT-IBM Watson AI Lab
MIT
Tianyu Gao
Princeton University
Amanda Bertsch
Carnegie Mellon University
Howard Yen
Princeton University
Yuandong Tian
Meta
Danqi Chen
Princeton University
Graham Neubig
Carnegie Mellon University
Rogerio Feris
MIT-IBM Watson AI Lab