Welcome!
I am a Staff Research Scientist at Google DeepMind on the Agency team and an Honorary Fellow at the University of Edinburgh.
I am fortunate to work with wonderful students at the University of Edinburgh, where I help support the MARBLE group.
If you are interested in working together at Google DeepMind, see open roles here. Unfortunately, I do not have any current openings for direct reports or interns.
Featured Research
Plasticity as the Mirror of Empowerment
NeurIPS 2025
We propose an agent-centric measure for plasticity, and highlight a new connection to empowerment.
Joint work with Michael Bowling, André Barreto, Will Dabney, Shi Dong, Steven Hansen, Anna Harutyunyan, Khimya Khetarpal, Clare Lyle, Razvan Pascanu, Georgios Piliouras, Doina Precup, Jonathan Richens, Mark Rowland, Tom Schaul, Satinder Singh.
General Agents Contain World Models
ICML 2025
We prove that any agent that can solve a sufficiently rich set of goal-directed tasks must contain a predictive model of the environment.
Three Dogmas of Reinforcement Learning
RLC 2024
We reflect on the paradigm of RL and suggest three departures from our current thinking.
Joint work with Mark Ho and Anna Harutyunyan.
A Definition of Continual Reinforcement Learning
NeurIPS 2023
We present a precise definition of the continual reinforcement learning problem.
Settling the Reward Hypothesis
ICML 2023
We illustrate the implicit requirements on goals and purposes under which the reward hypothesis holds.
We develop a new theory describing how people simplify and represent problems when planning.
Led by Mark K. Ho, joint work with Carlos G. Correa, Jonathan D. Cohen, Michael L. Littman, and Thomas L. Griffiths.
On the Expressivity of Markov Reward
NeurIPS 2021 (Outstanding Paper Award)
We study the expressivity of Markov reward functions in finite environments by analysing what kinds of tasks such functions can express.
Joint work with Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael L. Littman, Doina Precup, Satinder Singh.
A Theory of Abstraction in Reinforcement Learning
Ph.D. Thesis, 2020
My dissertation, aimed at understanding abstraction and its role in effective reinforcement learning.
Advised by Michael L. Littman.
The Value of Abstraction
Current Opinion in Behavioral Sciences, 2019
We discuss the vital role that abstraction plays in efficient decision making.
We prove that the problem of finding options that minimize planning time is NP-Hard.
Interests
My research focuses on understanding the foundations of agency, learning, and computation.
I tend to get excited by fundamental questions, philosophical depth, and clarity. I typically work with the reinforcement learning problem, drawing on tools and perspectives from across philosophy, math, and computer science.
I am currently interested in developing the scientific bedrock of agency. Previously, I studied the limits of reward as a mechanism for capturing goals (2021, 2022, 2023). Before that, my dissertation studied how agents model the worlds they inhabit, focusing on the representational practices that underlie effective learning and planning.
News
- Dec. 2025: Attending NeurIPS in San Diego.
- Nov. 2025: Preprint, Forgetting is Everywhere, led by my student Ben, is out.
- Nov. 2025: Talk at Cambridge.
- Oct. 2025: Talk at Institute of Philosophy's AI and Humanity Project.
- Oct. 2025: Talk at Imperial.
- Oct. 2025: Talk at Bath.
- Oct. 2025: Talk at AISHED.
- Oct. 2025: Had a lovely conversation with Ram on Base Rates.
- Oct. 2025: External Examiner for Riccardo Zamboni.
- Sep. 2025: Talk at Mila.
- Sep. 2025: TalkRL episode out.
- Aug. 2025: Helping out with the Finding the Frame workshop.
- Jul. 2025: External examiner for Ted Moskovitz.
- Jun. 2025: Talk at Edinburgh CogSci.
- Jun. 2025: My student Max is presenting new work on resource-constrained RL at RLDM.
- Jun. 2025: Attending RLDM in Dublin.
- Jun. 2025: New ICML paper, General Agents Contain World Models, led by Jon Richens.
- Spring 2025: Talk at Edinburgh AI Student Society.
- May 2025: Talk at Max Planck.
- May 2025: Talk at LSE Philosophy in ASENT group.
- Apr. 2025: My student Samuel is presenting new work on representations in RL at ICLR.
- Apr. 2025: Hosting a mentoring session at ICLR.
- Apr. 2025: Attending the alignment workshop co-located with ICLR.
- Apr. 2025: Attending ICLR.
- Apr. 2025: Talk with Chevening AI Scholars.
- Mar. 2025: Talk in IPAB seminar at UoE.
- Feb. 2025: New RLDM abstract out: Agency Is Frame-Dependent.
- Winter 2025: I co-taught the RL course at Edinburgh.
- 2025: Workshop chair for RLDM 2025.
- 2025: Associate program chair for CoLLAs 2025.
- Dec. 2024: Talk at Penn State.
- Nov. 2024: Guest Lecture at U. of Alberta.
- Nov. 2024: Visit and talk at UCL's Gatsby.
- Nov. 2024: Talk at UoE.
- Oct. 2024: Talk at Purdue.
- Oct. 2024: Talk at RTX.
- Sep. 2024: I started as an Honorary Fellow at the University of Edinburgh.
- Sep. 2024: Attending the Dagstuhl Seminar on Explainable AI.
- Aug. 2024: Co-organized a workshop at RLC.
- Aug. 2024: On a panel at RLC and gave a talk at the RL safety workshop on Agency.
- Jul. 2024: Gave a talk at the RSS workshop on task specification.
- Jun. 2024: Gave a (virtual) talk at Oregon State.
- May 2024: External Examiner for Jacob Beck.
- May 2024: Visited Montreal and gave a talk at Mila.
- May 2024: Three Dogmas of Reinforcement Learning accepted to RLC.
- Mar. 2024: I moved to Edinburgh, Scotland.
Active Service
- JMLR Editorial Board (2020-)
- NeurIPS Area Chair (2023-)
- RLC Senior PC Member (2024-)
- RLDM Workshop Chair (2025)
- CoLLAs Associate Program Chair (2024-)
- Recent Reviewing: AAAI, ACM, AISTATS, CoLLAs, ICML, ICLR, JMLR, Nature, NeurIPS, OpenMind, Philosophical Transactions, TMLR, RLC, RLDM, RSS.
Selected Awards
- Principal's Medal, University of Edinburgh.
- Outstanding Paper Award, NeurIPS 2021, On the Expressivity of Markov Reward
- Presidential Award for Excellence in Teaching, Brown University
- Runner-Up, 2020 AAAI/ACM SIGAI Dissertation Award
- 8x Top Reviewer: ICML (x4), NeurIPS (x3), AISTATS (x1).
About Me
Before joining DeepMind, I completed my Ph.D. in Computer Science at Brown University, where I was fortunate to be advised by Prof. Michael Littman. I got my start in research working with Prof. Stefanie Tellex at Brown, and before that studied Philosophy and Computer Science at Carleton College.
I'm a big fan of basketball, baking, reading, lifting, games, and music; I play violin and guitar, and love listening to just about everything. I live in Edinburgh, Scotland with my wife Elizabeth and our dog Barley.
Q: What should I call you? A: I usually go by "Dave", but I take no offense to "David". If I'm teaching your class, "Dave" / "Professor Dave" / "Professor Abel" are all okay.
If you want to arrange a call with me for any reason, I have a recurring open slot in my calendar here.
If you have feedback of any kind, please feel free to fill out this anonymous feedback form.
