A. Feder Cooper, Ph.D.
I research a variety of topics in reliable, scalable machine learning. I'm a postdoctoral affiliate at Stanford HAI, RegLab, and CRFM working with Percy Liang and Dan Ho. I am also a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. In 2026, I'll be appointed as an Assistant Professor of Computer Science at Yale University. I'll also be affiliated with the Information Society Project at Yale Law School, the Center for Algorithms, Data, and Market Design, and the Institute for Foundations of Data Science.
I am recruiting students for Fall 2026!
My contributions span privacy, security, and evaluation of generative-AI systems, MLSys, and uncertainty estimation. I also work in tech policy and law, and spend a lot of time finding ways to effectively communicate the capabilities and limits of AI/ML to interdisciplinary audiences and the public. My research has received spotlights, orals, and best-paper accolades at top AI/ML and computing venues, including NeurIPS, ICML, AAAI, and AIES. My law collaborations on copyright and generative AI have been called "landmark" work by technology law scholars and the popular press. My research has been covered in media outlets such as The Atlantic, The Washington Post, Bloomberg News, 404 Media, and Wired.
Selected recent work
*Denotes co-first author or equal contribution; full list
- A. Feder Cooper, Aaron Gokaslan, Ahmed Ahmed, Amy B. Cyphert, Christopher De Sa, Mark A. Lemley, Daniel E. Ho, and Percy Liang. "Extracting memorized pieces of (copyrighted) books from open-weight language models." ICML 2025 Workshop on Reliable and Responsible Foundation Models. [arxiv | ssrn] Workshop Oral
- Jamie Hayes*, Ilia Shumailov, Christopher A. Choquette-Choo, Matthew Jagielski, George Kaissis, Milad Nasr, Sahra Ghalebikesabi, Meenatchi Sundaram Mutu Selva Annamalai, Niloofar Mireshghallah, Igor Shilov, Matthieu Meeus, Yves-Alexandre de Montjoye, Katherine Lee, Franziska Boenisch, Adam Dziedzic, and A. Feder Cooper*. "Exploring the limits of strong membership inference attacks on large language models." NeurIPS 2025. [arxiv | proceedings] Proceedings
- A. Feder Cooper*, Christopher A. Choquette-Choo*, Miranda Bogen*, Kevin Klyman*, Matthew Jagielski*, Katja Filippova*, Ken Ziyu Liu*, Alexandra Chouldechova, Jamie Hayes, Yangsibo Huang, Eleni Triantafillou, Peter Kairouz, Nicole Mitchell, Niloofar Mireshghallah, Abigail Z. Jacobs, James Grimmelmann, Vitaly Shmatikov, Christopher De Sa, Ilia Shumailov, Andreas Terzis, Solon Barocas, Jennifer Wortman Vaughan, danah boyd, Yejin Choi, Sanmi Koyejo, Fernando Delgado, Percy Liang, Daniel E. Ho, Pamela Samuelson, Miles Brundage, David Bau, Seth Neel, Hanna Wallach, Amy B. Cyphert, Mark A. Lemley, Nicolas Papernot, and Katherine Lee*. "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research." NeurIPS 2025 (written in 2024). [ssrn | arxiv] Proceedings Oral
- Jamie Hayes, Marika Swanberg, Harsh Chaudhari, Itay Yona, Ilia Shumailov, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, and A. Feder Cooper. "Measuring memorization in language models via probabilistic extraction." NAACL 2025. [arxiv | proceedings] Proceedings
- A. Feder Cooper* and James Grimmelmann*. "The Files are in the Computer: On Copyright, Memorization, and Generative AI." Chicago-Kent Law Review, Vol. 100, 2025 (written in 2024). [ssrn | arxiv | journal] Journal
- A. Feder Cooper*, Katherine Lee*, and James Grimmelmann*. "Talkin’ ’Bout AI Generation: Copyright and the Generative-AI Supply Chain." Journal of the Copyright Society, Vol. 72, 2025 (written in 2023). [ssrn | arxiv | journal] Journal
- Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, and Florian Tramèr. "Stealing Part of a Production Language Model." ICML 2024. [arxiv | proceedings] Proceedings Best Paper Award
- Milad Nasr*, Nicholas Carlini*, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. "Scalable Extraction of Training Data from (Production) Language Models." 2023. [arxiv] arXiv