📚 Google Scholar
News
- 🗓 July 2025: Organizing the Reliable ML with Unreliable Data workshop at NeurIPS 2025 with Andrew Ilyas, Anay Mehrotra, and Manolis Zampetakis
- 🗓 June 2025: Our paper on causal inference received the Best Paper Award at COLT 2025
- 🗓 April 2025: Our paper on diffusion models and distribution learning received the Short Best Paper Award at the ICLR 2025 DeLTa workshop
- 🗓 April 2025: Completed my course on Stability in Machine Learning for Spring 2025. The lecture notes can be found here
I am an FDS Postdoctoral Fellow at Yale University. Before that, I was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA), working with Dimitris Fotakis and Christos Tzamos. I completed my undergraduate studies in the School of Electrical and Computer Engineering of NTUA.
I work on statistical and computational learning theory. My research focuses on the design of algorithms with rigorous guarantees for Machine Learning problems. I am particularly interested in:
- Learning from imperfect data: designing efficient algorithms that are robust to imperfect data, for problems arising in Machine Learning and Econometrics, with applications to Causal Inference.
- Generative modeling: proving rigorous guarantees for generative models, as well as designing practical methods for diffusion and language models.
- Generalization and stability: understanding the generalization properties of algorithms and their stability to changes in the training data (replicability, privacy, memorization, learning curves).
I am on the 2025/26 job market.
Feel free to contact me at: alkis.kalavasis[at]yale.edu
Recent Publications
- COLT 2025, with Yang Cai, Katerina Mamali, Anay Mehrotra and Manolis Zampetakis. Best Paper Award
- with Kulin Shah, Adam Klivans and Giannis Daras
- with Anay Mehrotra and Grigoris Velegkas
- with Ioannis Anagnostides and Tuomas Sandholm
- with Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas and Manolis Zampetakis
- with Amin Karbasi, Grigoris Velegkas and Felix Zhou
- with Andreas Galanis and Anthimos Vardis Kandiros
- with Anay Mehrotra and Manolis Zampetakis
- with Idan Attias, Steve Hanneke, Amin Karbasi and Grigoris Velegkas
- with Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas and Felix Zhou. Selected as Spotlight
- On the Complexity of Computing Sparse Equilibria and Lower Bounds for No-Regret Learning in Games, ITCS 2024, with Ioannis Anagnostides, Tuomas Sandholm and Manolis Zampetakis
- with Andreas Galanis and Anthimos Vardis Kandiros
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods, NeurIPS 2023, with Constantine Caramanis, Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos. Selected as Oral
- with Idan Attias, Steve Hanneke, Amin Karbasi and Grigoris Velegkas. Selected as Oral
- with Amin Karbasi, Shay Moran and Grigoris Velegkas
- Replicable Bandits, ICLR 2023, with Hossein Esfandiari, Amin Karbasi, Andreas Krause, Vahab Mirrokni and Grigoris Velegkas
- Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes, NeurIPS 2022, with Grigoris Velegkas and Amin Karbasi
- with Konstantinos Stavropoulos and Manolis Zampetakis. Selected as Oral
- Perfect Sampling from Pairwise Comparisons, NeurIPS 2022, with Dimitris Fotakis and Christos Tzamos
- Linear Label Ranking with Bounded Noise, NeurIPS 2022, with Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos. Selected as Oral
- with Dimitris Fotakis and Eleni Psaroudaki. Selected for Long Presentation
- with Jason Milionis, Dimitris Fotakis and Stratis Ioannidis
- with Dimitris Fotakis, Vasilis Kontonis and Christos Tzamos
- Aggregating Incomplete and Noisy Rankings, AISTATS 2021, with Dimitris Fotakis and Konstantinos Stavropoulos
- with Dimitris Fotakis and Christos Tzamos, Algorithmica 2022
Pre-prints
- with Sinho Chewi, Anay Mehrotra and Omar Montasser. Short Best Paper Award @ ICLR 2025 Workshop on Deep Generative Models in Machine Learning
- with Anay Mehrotra and Felix Zhou
- with Anay Mehrotra and Grigoris Velegkas
- with Ilias Zadik and Manolis Zampetakis
Teaching
Stability in Machine Learning: Generalization, Privacy & Replicability
Instructor: Alkis Kalavasis
This course is about generalization and stability of Machine Learning (ML) systems. There are various ways to define what it means for a learning algorithm to be stable. The most standard one is inspired by sensitivity analysis, which aims to determine how much a variation of the input can influence the output of a system. This abstract viewpoint allows one to introduce various notions of stability, such as uniform stability, differential privacy, and replicability. In this course, we investigate these notions of stability, their implications for learning theory, and their surprising connections.
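As a concrete instance of this sensitivity viewpoint, here is a minimal sketch of one notion covered in the course, uniform stability in the sense of Bousquet and Elisseeff: a learning algorithm $A$ with loss $\ell$ is $\beta$-uniformly stable if, for every pair of training sets $S, S'$ that differ in a single example,
$$\sup_{z}\ \big|\,\ell(A(S), z) - \ell(A(S'), z)\,\big| \le \beta\,.$$
Small $\beta$ (e.g., $\beta = O(1/n)$ for $n$ samples) already yields generalization bounds, one of the implications developed in Lecture 2 below.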
Lecture Notes (PDF)
Weekly Lectures – Spring 2025 (Yale CPSC 683)
- Lecture 1 (VC Theory and Uniform Convergence) (Jan 14) – [PDF]
- Lecture 2 (Generalization Bounds via Algorithmic Stability) (Jan 21) – [PDF]
- Lecture 3 (Stability of SGD and Randomization Tests) (Jan 28) – [PDF]
- Lecture 4 (Uniform Convergence Failures and Domain Adaptation) (Feb 4) – [PDF]
- Lecture 5 (Online and Private PAC Learning) (Feb 11) – [PDF]
- Lecture 6 (DP PAC Learning implies Online Learning) (Feb 18) – [PDF]
- Lecture 7 (Online Learning implies DP PAC Learning) (Feb 25) – [PDF]
- Lecture 8 (Replicable and DP PAC Learning) (Mar 4) – [PDF]
- Spring Break (Mar 11, Mar 18)
- Lecture 9 (Memorization, Learning, and Generative Models) (Mar 25) – [PDF]
- Lecture 10 (A Theory of Learning Curves) (Apr 1) – [PDF]
- Lecture 11 (Language Identification and Generation) (Apr 8) – [PDF]
- Lecture 12 (Diffusion Models) (Apr 15) – [Paper 1] [Paper 2]
- Lecture 13 (Student Presentations) (Apr 29)
Recent Talks
- Learning with Systematic Bias and Imperfect Data
  INFORMS Annual Meeting, October 2025
  Archimedes Workshop on Algorithmic Game Theory, July 2025
- DDPM Score Matching and Distribution Learning
  Aarhus Theory Seminar, September 2025
  NTUA Student Seminar, June 2025
  Yale Student Seminar, May 2025
  MIT Student Seminar, May 2025
- Transfer Learning beyond Bounded Density Ratios
  Slides (20 min)
  Archimedes Workshop on Machine Learning, July 2024
- On the Complexity of Computing Sparse Equilibria and Lower Bounds for No-Regret Learning in Games
  ITCS 2024 Talk (25 min)
  ITCS, December 2024
Service
- Organization: Reliable ML with Unreliable Data (NeurIPS 2025 Workshop), with Andrew Ilyas, Anay Mehrotra, and Manolis Zampetakis
- Reviewing: FOCS (2025, 2024), STOC (2025, 2024), COLT (2025), NeurIPS (2024, 2023, 2022, 2021), ICML (2023), AISTATS (2022, 2021), ICLR (2022), ITCS (2024)