This workshop explores using programmatic representations (e.g., code, symbolic programs, rules) to enhance agent learning and address key challenges in creating autonomous agents. By leveraging structured representations, we aim to improve interpretability, generalization, efficiency, and safety in agent systems, moving beyond the limitations of “black box” deep learning models. The workshop brings together researchers in sequential decision-making and program synthesis/code generation to discuss using programs as policies (e.g., LEAPS, Code as Policies, HPRL, RoboTool, Carvalho et al. 2024), reward functions (e.g., Eureka, Language2Reward, Text2Reward), skill libraries (e.g., Voyager), task generators (e.g., GenSim), or environment models (e.g., WorldCoder, Code World Models). Ultimately, we aim to drive progress toward robust, understandable, and adaptable autonomous agents across diverse applications.
Tentative Schedule
Location: West Meeting Room 301-305
Room Capacity: 710
| Time | Event |
|---|---|
| 8:20 - 8:30 | Opening Remarks |
| 8:30 - 9:00 | Invited Talk: Animesh Garg |
| 9:00 - 9:30 | Invited Talk: Amy Zhang |
| 9:30 - 10:00 | Coffee Break |
| 10:00 - 10:15 | Oral Presentation: Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces |
| 10:15 - 10:30 | Oral Presentation: Searching Latent Program Spaces |
| 10:30 - 10:45 | Oral Presentation: Lifelong Experience Abstraction and Planning |
| 10:45 - 11:00 | Sponsor Presentation - BASIS |
| 11:00 - 11:30 | Invited Talk: Dale Schuurmans |
| 11:30 - 12:00 | Invited Talk: Sheila McIlraith |
| 12:00 - 13:00 | Lunch |
| 13:00 - 14:00 | Poster Session 1 |
| 14:00 - 14:30 | Invited Talk: Jason Ma |
| 14:30 - 15:00 | Invited Talk: Wenhao Yu |
| 15:00 - 16:00 | Poster Session 2 |
| 16:00 - 16:15 | Coffee Break |
| 16:15 - 17:00 | Panel Discussion |
| 17:00 - 17:30 | Networking Session |
All times are in Pacific Time (PT).
Speakers
Organizers
Call For Papers
We invite the submission of research papers and position papers on the topic of programmatic representations for agent learning. This workshop aims to explore the use of program-like structures to represent policies, reward functions, tasks, and environment models.
Topics of interest include, but are not limited to:
- Programs as Policies: Representing decision-making logic through programmatic policies in Python or domain-specific languages (see the sketch after this list).
- Programs as Reward Functions: Synthesizing reward functions as executable code for agent learning.
- Programs as Skill Libraries: Representing acquired skills as programs, allowing skills to be reused and composed.
- Programmatically Generating Tasks: Producing code that describes diverse task variants.
- Programs as Environment Models: Inferring executable code to simulate environment dynamics.
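To make the first two topics concrete, here is a minimal, illustrative Python sketch of a policy expressed as interpretable code and a reward function written as code. The environment, observation fields, and action names are hypothetical placeholders for illustration, not APIs from any of the systems cited above.

```python
# A minimal sketch of "programs as policies" and "programs as reward
# functions" in a hypothetical gridworld. All names here are placeholders.

from dataclasses import dataclass

@dataclass
class Obs:
    agent_x: int
    agent_y: int
    goal_x: int
    goal_y: int

def policy(obs: Obs) -> str:
    """A programmatic policy: decision logic that can be read, edited,
    and unit-tested directly, unlike a black-box neural policy."""
    if obs.agent_x < obs.goal_x:
        return "move_right"
    if obs.agent_x > obs.goal_x:
        return "move_left"
    if obs.agent_y < obs.goal_y:
        return "move_up"
    if obs.agent_y > obs.goal_y:
        return "move_down"
    return "stay"

def reward(obs: Obs, action: str, next_obs: Obs) -> float:
    """A reward function as code: dense shaping toward the goal,
    positive whenever a step reduces Manhattan distance."""
    dist = abs(obs.agent_x - obs.goal_x) + abs(obs.agent_y - obs.goal_y)
    next_dist = abs(next_obs.agent_x - next_obs.goal_x) + abs(next_obs.agent_y - next_obs.goal_y)
    return float(dist - next_dist)
```

A learned or LLM-generated program of this form can be inspected, verified, and composed with other programs, which is the kind of interpretability and adaptability the workshop targets.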
Submission Types:
- Full Papers: Up to 9 pages in ICML or NeurIPS format, potentially including large-scale experiments.
- Short Papers: 2-4 pages in ICML or NeurIPS format, presenting proof-of-concept results (demos, code, blog posts).
Important Dates:
- Submission Deadline: ~~May 24, 2025, AoE~~ May 30, 2025, AoE
- Author Notification: ~~June 7, 2025, AoE~~ June 13, 2025, AoE
- Camera Ready Deadline: July 7, 2025, AoE
- Workshop Date: July 18, 2025
Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations.
All accepted papers will be made publicly available as non-archival reports, allowing for future submissions to archival conferences or journals.
Please submit your papers to the OpenReview site.
Camera Ready Instructions
Please incorporate the reviewers’ feedback and prepare your camera-ready submission, then submit it on OpenReview. Your camera-ready submission should be de-anonymized and include at most 9 pages for full papers and 2-4 pages for short papers, excluding references and appendices. The paper can be in ICML or NeurIPS format, with the footnote “ICML 2025 Workshop on Programmatic Representations for Agent Learning”.
Camera-Ready LaTeX Templates:
The camera-ready deadline is July 7, 2025, Anywhere on Earth (AoE).
Accepted Papers
Optimizing Agentic Architectures for Cybersecurity Tasks with Trace
Anish Chaudhuri, Prerit Choudhary, Max Piasevoli, Shannon Xiao, Allen Nie
Leveraging Learned Programmatic Facts for Enhanced LLM Agent Planning and World Modeling
Samuel Holt, Max Ruiz Luyten, Thomas Pouplin, Mihaela van der Schaar
FormulaCode: Evaluating Agentic Superoptimization on Large Codebases
Atharva Sehgal, James Hou, Swarat Chaudhuri, Jennifer J. Sun, Yisong Yue
PDL: Declarative Representation of Agentic Prompting Patterns
Mandana Vaziri, Louis Mandel, Martin Hirzel, Anca Sailer, Yuji Watanabe, Hirokuni Kitahara
Zero-Shot Instruction Following in RL via Structured LTL Representations
Mattia Giuri, Mathias Jackermeier, Alessandro Abate
EditLord: Learning Code Transformation Rules for Code Editing
Weichen Li, Albert Jan, Baishakhi Ray, Junfeng Yang, Chengzhi Mao, Kexin Pei
Time to Impeach LLM-as-a-Judge: Programs are the Future of Evaluation
Tzu-Heng Huang, Harit Vishwakarma, Frederic Sala
InstructFlow: Adaptive Symbolic Constraint-Guided Code Generation for Long-Horizon Planning
Haotian Chi, Zeyu Feng, Yueming Lyu, Chengqi Zheng, Linbo Luo, Yew-Soon Ong, Ivor Tsang, Hechang Chen, Yi Chang, Haiyan Yin
Sketch-Plan-Generalize: Learning and Planning with Neuro-Symbolic Programmatic Representations for Inductive Spatial Concepts
Namasivayam Kalithasan, Sachit Sachdeva, Himanshu Gaurav Singh, Vishal Bindal, Arnav Tuli, Gurarmaan Singh Panjeta, Harsh Himanshu Vora, Divyanshu Agarwal, Rohan Paul, Parag Singla
Discovering Logic-Informed Intrinsic Rewards to Explain Human Policies
Chengzhi Cao, Yinghao Fu, Chao Yang, Shuang Li
Searching Latent Program Spaces
Matthew Macfarlane, Clément Bonnet
Lifelong Experience Abstraction and Planning
Peiqi Liu, Jiayuan Mao, Leslie Pack Kaelbling, Joshua B. Tenenbaum
Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization
Mingzhe Du, Anh Tuan Luu, Yue Liu, Yuhao QING, Dong HUANG, Xinyi He, Qian Liu, Zejun MA, See-Kiong Ng
How Robust Reinforcement Learning Enables Courier-Friendly Route Planning for Last-Mile Delivery?
Ziying Jia, Zeyu Dong, Miao Yin, Sihong He
Interpretable Reward Modeling with Active Concept Bottlenecks
Sonia Laguna, Kasia Kobalczyk, Julia E Vogt, Mihaela van der Schaar
Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors
Fan Nie, Lan Feng, Haotian Ye, Weixin Liang, Pan Lu, Huaxiu Yao, Alexandre Alahi, James Zou
Leveraging LLM-based sentiment analysis for portfolio optimization with proximal policy optimization
Kemal Kirtac, Guido Germano
Learning Game-Playing Agents with Generative Code Optimization
Zhiyi Kuang, Ryan Rong, YuCheng Yuan, Allen Nie
Making LLMs Program Interpreters via Execution Trace Chain of Thought
Koshi Eguchi, Takuya Akiba
Scalable Gameplay AI through Composition of LLM-Generated Heuristics
Danrui Li, Sen Zhang, Mubbasir Kapadia
Learning to Discover Abstractions for LLM Reasoning
Yuxiao Qu, Anikait Singh, Yoonho Lee, Amrith Setlur, Ruslan Salakhutdinov, Chelsea Finn, Aviral Kumar
Learned Representations Enhance Multi Agent Path Planning
Marius Captari, Herke van Hoof
DyPO: Dynamic Policy Optimization for Multi-Turn Interactive Reasoning
Xiao Feng, Bo Han, Zhanke Zhou, Jiaqi Fan, Jiangchao Yao, Ka Ho Li, Dahai Yu, Michael Ng
Large Language Models Can Think and Act Probabilistically
Kou Misaki, Takuya Akiba
ReasonRec: A Reasoning-Augmented Multimodal Agent for Unified Recommendation
Yihua Zhang, Xi Liu, Xihuan Zeng, Mingfu Liang, Jiyan Yang, Rong Jin, Wen-Yen Chen, Yiping Han, Bo Long, Huayu Li, Buyun Zhang, Liang Luo, Sijia Liu, Tianlong Chen
Inefficiencies of Meta Agents for Agent Design
Batu El, Mert Yuksekgonul, James Zou
Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
Anjiang Wei, Allen Nie, Thiago S. F. X. Teixeira, Rohan Yadav, Wonchan Lee, Ke Wang, Alex Aiken