Rajeev Verma
I'm an ELLIS PhD student at AMLab / Delta Lab supervised by Eric Nalisnick and Christian A. Naesseth. Previously, I studied Electrical Engineering at the Indian Institute of Technology Patna (IITP) and Artificial Intelligence at the University of Amsterdam (UvA).
Research Interests
I'm generally interested in bridging the gap between prediction and decision-making, especially in the context of the institutional separation between model designers and decision-makers. I'm also interested in safe statistics, imprecise probabilities, and possibility theory.
Previously, I worked on studying the calibration properties of learning to defer (L2D) systems [ICML'22], extending L2D systems to allow for multiple experts [AISTATS'23], and studying the out-of-distribution behavior of L2D systems (in preparation). I also collaborated on a project on the test-time adaptation of L2D to new experts [AISTATS'24].
Blog
- Notes from the underground (a running blog of my most random thoughts).
- No sure loss, calibration, and insurance.
- What are good forecasts?
- A relativistic perspective of uncertainty in machine learning.
Selected Publications
(* Denotes equal contribution)
-
Rajeev Verma*, Rabanus Derr*, Christian A. Naesseth, Volker Fischer, Eric Nalisnick.
So What are Good Imprecise Forecasts?
Appearing at the Workshop on Epistemic Intelligence in Machine Learning.
-
Alexander Timans*, Rajeev Verma*, Eric Nalisnick, Christian A. Naesseth.
On Continuous Monitoring of Risk Violations under Unknown Shift.
In UAI 2025. [talk].
-
Rajeev Verma, Volker Fischer, Eric Nalisnick.
On Calibration in Multi-Distribution Learning.
In ACM FAccT 2025.
Note: Also gave an invited talk at the 2nd Workshop on Learning Under Weakly Structured Information.
-
Dharmesh Tailor, Aditya Patra, Rajeev Verma, Putra Manggala, Eric Nalisnick.
Learning to Defer to a Population: A Meta-Learning Approach.
In Twenty-seventh Conference on Artificial Intelligence and Statistics (AISTATS 2024).
[Oral, Student paper award (top 1%)].
-
Rajeev Verma*, Daniel Barrejón*, Eric Nalisnick.
Learning to Defer to Multiple Experts: Consistent Surrogate Losses, Confidence Calibration, and Conformal Ensembles.
In Twenty-sixth Conference on Artificial Intelligence and Statistics (AISTATS 2023).
Note: Also appeared at the ICML 2022 Workshop on Human-Machine Collaboration and Teaming as On the Calibration of Learning to Defer to Multiple Experts.
-
Rajeev Verma, Eric Nalisnick.
Calibrated Learning to Defer with One-vs-All Classifiers.
In International Conference on Machine Learning (ICML 2022).
-
Rajeev Verma.
On the Calibration of Learning to Defer Systems.
Master's Thesis 2022 (UvA). [talk] [UvA News]
Service
Reviewer:
ICML: 2023-2025
NeurIPS: 2023 (Top reviewer)
UAI: 2024-2025
ICLR: 2023
ACL ARR: 2024, 2025
Teaching:
Human-in-the-Loop Machine Learning (Teaching Assistant)
Deep Learning 2 (Teaching Assistant)
Machine Learning 2 (Teaching Assistant)