Private Evolution (PE) is a training-free algorithm for generating differentially private (DP) synthetic data. Unlike traditional methods that require DP fine-tuning of a pre-trained generative model:
PE requires no training: it only uses the inference APIs of foundation models or non-neural-network data synthesis tools. This allows PE to take advantage of any cutting-edge API-based foundation models (e.g., GPT-4), open-source models (e.g., Stable Diffusion, Llama), or tools (e.g., computer graphics-based image synthesis tools).
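To make the training-free idea concrete, here is a minimal, illustrative sketch of one PE iteration in Python. It assumes data is represented as embedding vectors and that a hypothetical `variation_api` callable (not part of this repository) wraps a foundation model's inference API; the DP mechanism shown is simple Gaussian noise on a nearest-neighbor voting histogram.

```python
import numpy as np

def pe_step(private_embeddings, synthetic_embeddings, variation_api,
            noise_multiplier, num_selected, rng):
    """One illustrative PE iteration: DP nearest-neighbor voting, then variation.

    `variation_api` is a hypothetical stand-in for a foundation model's
    inference API that returns variants of the given samples.
    """
    # Each private sample votes for its nearest synthetic sample.
    votes = np.zeros(len(synthetic_embeddings))
    for x in private_embeddings:
        dists = np.linalg.norm(synthetic_embeddings - x, axis=1)
        votes[np.argmin(dists)] += 1

    # Add Gaussian noise to the vote histogram for differential privacy.
    noisy_votes = votes + rng.normal(0.0, noise_multiplier, size=votes.shape)

    # Resample synthetic samples proportional to the clipped noisy votes.
    probs = np.clip(noisy_votes, 0.0, None)
    if probs.sum() > 0:
        probs = probs / probs.sum()
    else:
        probs = np.full(len(probs), 1.0 / len(probs))
    selected = rng.choice(len(synthetic_embeddings), size=num_selected, p=probs)

    # Ask the (black-box) model API for variations of the selected samples.
    return variation_api(synthetic_embeddings[selected])
```

Note that only the vote histogram touches the private data, which is why the privacy analysis can be confined to that single step while the model API is used purely as a black box.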
In some cases, PE can even match or outperform state-of-the-art training-based methods in the trade-off between data quality and DP guarantees.
Since its introduction, PE has been extended by the community to various data modalities (images, text, tabular), different environments (federated and centralized), and a range of use cases (both training-free and training-based).
PE has been adopted by major technology companies such as Microsoft and Apple.
This repository collects papers, code repositories, and blogs related to PE. If you'd like to add your work to the list, feel free to submit a pull request, open an issue, or contact me (zinanlin AT microsoft.com). Please star the repo to get the latest updates!
6/17/2025: As the list has grown rapidly, it has become difficult to navigate in the README. I've moved the paper list to the GitHub Page for improved readability and additional features such as filtering and exporting. The source data remains in this CSV file within the repository, so we can continue tracking changes and managing updates via GitHub issues and pull requests.