Fan Feng
I am a postdoctoral researcher at UCSD and the CMU/MBZUAI CLeaR group, working with Kun Zhang and Biwei Huang. Previously, I also worked closely with Sara Magliacane at the University of Amsterdam. I completed my PhD at City University of Hong Kong, working with Rosa Chan and closely collaborating with Qi She (Bytedance).
My long-term research goal is to build agents that not only imagine the world but also understand it, act in it, discover goals within it, and continually refine their internal models in an open-ended and self-improving loop. To achieve this, my current research centers on generative learning for causal world models and reinforcement learning agents. Specifically, my research focuses on the following questions:
- Can we encode passive or offline observations into meaningful representations that form minimal yet sufficient world models for control and planning, especially under domain or distribution shifts?
- How can agents compositionally reuse learned structure in new tasks and environments?
- How can agents actively explore in a purposeful, open-ended way that discovers and achieves goals while also improving the world model?
Beyond research, I am also actively engaged in community building. I co-organize the “I Can’t Believe It’s Not Better” (ICBINB) workshop series, which brings together researchers to discuss the practical challenges, failure modes, and lessons learned in deploying ML systems. The goal is to promote a culture of open, constructive discourse around what does not work and how we can collectively build more robust and reliable learning systems. :)
Feel free to contact me for research collaborations or other engagements.
ffeng1017 [at] gmail.com
Publications (* marks equal contribution.)
🌟 Selected 📚 Topic: Causal Reinforcement Learning | Structured Representation Learning | Continual Learning for Robotic Vision
Ada-Diffuser: Latent-Aware Adaptive Diffusion for Decision-Making
Fan Feng,
NeurIPS 2025 EWM
NeurIPS 2025 ARLET
Provably Learning Task-Relevant World Representation
NeurIPS 2025 ResponsibleFM
Learning Interactive World Model for Object-Centric Reinforcement Learning
Fan Feng,
NeurIPS 2025
Online Time Series Forecasting with Theoretical Guarantees
NeurIPS 2025
Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning
ICLR 2025
[OpenReview]
[Code]
Towards Empowerment Gain through Causal Structure Learning in Model-Based Reinforcement Learning
ICLR 2025
[OpenReview]
[Code]
Causal Information Prioritization for Efficient Reinforcement Learning
ICLR 2025
[OpenReview]
[Code]
Towards Generalizable Reinforcement Learning via Causality-Guided Self-Adaptive Representations
Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning
Fan Feng and
NeurIPS 2023
[ArXiv]
[Project Page]
[Code]
Factored Adaptation for Non-Stationary Reinforcement Learning
Fan Feng,
AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning
Community Channel-Net: Efficient Channel-wise Interactions via Community Graph Topology
Fan Feng*,
Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning with Preferential Attachment
Fan Feng,
IEEE Transactions on Neural Networks and Learning Systems, 2022
[PDF]
[Code]
Towards Lifelong Object Recognition: A Dataset and Benchmark
OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning
ICRA 2020
[PDF]
[Project Page]
[Code]
Service
Workshop Co-organizer: ICLR 2025 ICBINB Workshop; RLC 2024 RLBRew Workshop; NeurIPS 2022-2023 ICBINB Workshop
Event Proposal Reviewer: ICML/NeurIPS/ICLR Workshops 2023-2025
Area Chair or Senior Program Committee: KDD 2025-2026 ADS Track
Conference Reviewer or Program Committee Member: ICML, NeurIPS, ICLR, UAI, AISTATS, CLeaR, CVPR, ECCV, ICCV, WACV, LoG, AAAI, IJCAI, ICRA, ACML
Journal Reviewer: JMLR, TMLR, IEEE TNNLS, IJAR
Workshop Reviewer or Program Committee: Montreal AI Symposium (MAIS) 22, ICML/NeurIPS/ICLR/UAI Workshops
Talks
- [11/2025] Causal World Models for Generalizable and Adaptive Decision-Making, at JHU Workshop on Measurement Errors & Latent Variables.
- [08/2025] Learning Causal Representation for Efficient and Adaptive Decision-Making Agents in the Physical World, at NSF IAIFI Summer Workshop, Harvard University.
- [05/2025] Learning and Using Causal Representation for Reinforcement Learning, at JHU Workshop on Measurement Errors & Latent Variables.
- [04/2024] Structured and Causal Modeling for Adaptive RL Agents, at SCALAR Lab@UMass Amherst.
- [03/2023] Causal Reinforcement Learning, at Swarma Club.
- [05/2022] Factored Adaptation in Heterogeneous and Non-Stationary RL, at Dr. Herke van Hoof's group in AMLab, UvA. [Slides]
- [03/2022] Factored Adaptation in Non-Stationary RL, at INDELab, UvA. [Slides]