About
Welcome to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025!
Location: IT University, Rued Langgaards Vej 7, 2300 Copenhagen, Auditorium 0. Beware: it is NOT in the Bella Conference Center.
The success of machine learning depends on access to large amounts of training data, which often contains sensitive information, raising legal, competitive, and privacy concerns when that data is exposed. Neural networks are known to be vulnerable to privacy attacks, a concern that has recently become more visible with large language models (LLMs), where attacks can be carried out directly through prompting. Differential privacy, the gold standard for privacy-preserving learning, has seen its privacy–utility trade-offs improve thanks to new trust models and algorithms. However, many open questions remain on how to bridge the gap between attacks and defenses, from developing auditing methods and more effective attacks to the growing interest in machine unlearning.
Which models best reflect real-world scenarios? How can methods scale to deep learning and foundation models? How are unlearning, auditing, and privacy-preserving machine learning connected, and how can these lines of work be brought together?
This workshop will bring together researchers from academia and industry working on differential privacy, machine unlearning, privacy auditing, privacy attacks, and related topics.
Call for Papers
We invite submissions to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025.
We welcome both novel contributions and work in progress, and we encourage diverse viewpoints.
| Important Dates | Date (AoE) |
|---|---|
| Paper submission | October 17, 2025 |
| Accept/Reject notification | October 31, 2025 |
| Workshop | December 7, 2025 |
All accepted papers can be presented as posters. Poster boards accommodate A0 portrait or A1 landscape; please use A0 portrait if possible.
Topics of Interest
- Efficient methods for privacy-preserving machine learning
- Trust models for privacy, including federated learning and data minimization
- Privacy at inference time, privacy for agent interactions, and privacy for large language models (fine-tuning, test-time training)
- Privacy-preserving data generation
- Differential privacy theory
- Threat models and privacy attacks
- Auditing methods and interpretation of privacy guarantees
- Machine unlearning, certifiable machine unlearning, and new unlearning algorithms
- Relationships between privacy and other aspects of Trustworthy Machine Learning
Submission Guidelines
- Format: up to 5 pages, excluding references
- Style: NeurIPS 2025 template
- Anonymization: required (double-blind review)
- Submission site: OpenReview (https://openreview.net/group?id=EurIPS.cc/2025/Workshop/PPML)
This workshop is non-archival and will not have official proceedings, so workshop papers may also be submitted to other venues. We welcome ongoing and unpublished work, including papers under review at the time of submission. We do not accept submissions that have already been accepted for publication at venues with archival proceedings. The titles of accepted papers will be published on the website.
We are looking for reviewers to help ensure a fair and constructive review process.
Each reviewer will be asked to review at most three papers.
Invited Speakers
- Aurélien Bellet (Inria)
- Tamalika Mukherjee (Max Planck Institute for Security and Privacy)
- Antti Honkela (University of Helsinki)
- Rasmus Pagh (University of Copenhagen)
- Catuscia Palamidessi (Inria)
- Sahra Ghalebikesabi (ex-Google DeepMind)
Program
The workshop will be at ITU, Auditorium 0, which seats 180, and there is food for everyone! The program features six invited talks and three shorter contributed talks (CT).
| Time | Program |
|---|---|
| 08:50 - 09:00 | Opening |
| 09:00 - 10:30 | Interpreting Privacy Guarantees |
| | On quantifying and communicating privacy (Antti Honkela) |
| | Three Flavors of Differential Privacy Auditing (Aurélien Bellet) |
| | CT: Privacy Amplification Persists Under Unlimited Data Release (Clément Pierquin) |
| 10:30 - 11:00 | Coffee break (please put up your posters during the break) |
| 11:00 - 12:30 | Deep Learning and Agents |
| | Operationalising Contextual Integrity in Privacy-conscious Agents (Sahra Ghalebikesabi) |
| | CT: Efficient and Scalable Implementation of Differentially Private Deep Learning without Shortcuts (Sebastian Rodriguez Beltran) |
| | First poster session |
| 12:30 - 13:30 | Lunch |
| 13:30 - 15:00 | Making DP Efficient |
| | Privacy Under Memory Constraints (Tamalika Mukherjee) |
| | How many random bits are needed for differential privacy? (Rasmus Pagh) |
| | CT: Better Rates for Private Linear Regression in the Proportional Regime via Aggressive Clipping (Inbar Seroussi) |
| 15:00 - 15:30 | Coffee break |
| 15:30 - 17:00 | Federated Learning |
| | Privacy-preserving Federated Histogram Estimation: Local Differential Privacy Strikes Back (Catuscia Palamidessi) |
| | Second poster session and awards! |
Accepted Papers
- “Privacy Leakage via Output Label Space and Differentially Private Continual Learning”, Marlon Tobaben, Talal Alrawajfeh, Marcus Klasson, Mikko A. Heikkilä, Arno Solin, Antti Honkela
- “Privacy Amplification Persists Under Unlimited Data Release”, Clément Pierquin, Aurélien Bellet, Marc Tommasi, Matthieu Boussard
- “Subgroup-Level Membership Inference Risks in Synthetic RNA-seq”, Hakime Öztürk, Oliver Stegle
- “Privacy Preserving Diffusion Models for Mixed-Type Tabular Data Generation”, Timur Sattarov, Marco Schreyer, Damian Borth
- “A New Sensitivity Bound on Sliced Wasserstein Losses for Private Machine Learning”, David Rodríguez-Vítores, Clément Lalanne, Jean-Michel Loubes
- “Just a Simple Transformation is Enough for Data Protection in Split Learning”, Andrei Semenov, Philip Zmushko, Alexander Pichugin, Aleksandr Beznosikov
- “Mitigating Disparate Impact of Differentially Private Learning through Bounded Adaptive Clipping”, Linzh Zhao, Aki Rehn, Mikko A. Heikkilä, Razane Tajeddine, Antti Honkela
- “Unified Privacy Guarantees for Decentralized Learning via Matrix Factorization”, Aurélien Bellet, Edwige Cyffers, Davide Frey, Romaric Gaudel, Dimitri Lerévérend, Francois Taiani
- “How to Train Private Clinical Language Models: A Comparative Study of Privacy-Preserving Pipelines for ICD-9 Coding”, Mathieu Dufour, Andrew B. Duncan
- “A Law of Data Reconstruction for Random Features (and Beyond)”, Simone Bombari, Leonardo Iurada, Tatiana Tommasi, Marco Mondelli
- “Balancing Fairness and Privacy in DP-SGD with Subsampling and Clipping”, Max Cairney-Leeming, Christoph H. Lampert, Amartya Sanyal
- “Communication-efficient publication of sparse vectors under DP via Poisson private representation”, Quentin Hillebrand, Vorapong Suppakitpaisarn, Tetsuo Shibuya
- “An Interactive Framework for Finding the Optimal Trade-off in Differential Privacy”, Yaohong Yang, Aki Rehn, Sammie Katt, Antti Honkela, Samuel Kaski
- “Efficient and Scalable Implementation of Differentially Private Deep Learning without Shortcuts”, Sebastian Rodriguez Beltran, Marlon Tobaben, Joonas Jälkö, Niki Andreas Loppi, Antti Honkela
- “iDP-ULDP: Achieving Tight User-Level DP with Heterogeneous Numbers of Records per User”, Johannes Kaiser, Jakob Eigenmann, Daniel Rueckert, Georgios Kaissis
- “DP-MicroAdam: Private and Frugal Algorithm for Training and Fine-tuning”, Mihaela Hudişteanu, Nikita Kalinin, Edwige Cyffers
- “Differentially Private and Federated Structure Learning in Bayesian Networks”, Ghita Fassy El Fehri, Aurélien Bellet, Philippe Bastien
- “Private Rate-Constrained Optimization with Applications to Fair Learning”, Mohammad Yaghini, Tudor Cebere, Michael Menart, Aurélien Bellet, Nicolas Papernot
- “Unifying Re-Identification, Attribute Inference, and Data Reconstruction Risks in Differential Privacy”, Bogdan Kulynych, Juan Felipe Gomez, Jamie Hayes, Borja Balle, Flavio Calmon, Georgios Kaissis, Jean Louis Raisaro
- “Better Rates for Private Linear Regression in the Proportional Regime via Aggressive Clipping”, Simone Bombari, Inbar Seroussi, Marco Mondelli
- “On Optimal Hyperparameters for Differentially Private Deep Transfer Learning”, Aki Rehn, Linzh Zhao, Mikko A. Heikkilä, Antti Honkela
- “Sequential Subspace Noise Injection Prevents Accuracy Collapse in Certified Unlearning”, Dolgova Polina, Sebastian U Stich
- “Model Agnostic Differentially Private Causal Inference”, Christian Janos Lebeda, Mathieu Even, Aurélien Bellet, Julie Josse
- “Beyond Membership: Limitations of Add/Remove Adjacency in Differential Privacy”, Gauri Pradhan, Joonas Jälkö, Santiago Zanella-Beguelin, Antti Honkela
Organizers
- Amartya Sanyal (University of Copenhagen)
- Edwige Cyffers (ISTA)
- Rachel Cummings (Columbia University)
- Nikita Kalinin (ISTA)
- Peter Kairouz (Google)
Reviewers
We thank all the reviewers for their work.
- Bogdan Kulynych
- Carolin Heinzler
- Christian Janos Lebeda
- Christoph H. Lampert
- Clément Lalanne
- Clément Pierquin
- Edwige Cyffers
- Erchi Wang
- Jan Schuchardt
- Joel Daniel Andersson
- Kostadin Cvejoski
- Luca Corbucci
- Lukas Retschmeier
- Marlon Tobaben
- Mathieu Dagréou
- Mina Basirat
- Nikita Kalinin
- Quentin Hillebrand
- Renaud Gaucher
- Romaric Gaudel
- Şeyma Selcan Mağara
- Simone Bombari
- Vasilis Siomos
Sponsor
PPML@EurIPS is sponsored by BILAI.

Contact
Questions? Email us at ppml.eurips@gmail.com.