Matthew Jagielski
About Me
I am a member of the technical staff at Anthropic, working on Ethan Perez's team. I work on security, privacy, and memorization in machine learning systems. This includes directions like privacy auditing, memorization in generative models, data poisoning, and model stealing.

I received my PhD from Northeastern University, where I was fortunate to be advised by Alina Oprea and Cristina Nita-Rotaru, as a member of the Network and Distributed Systems Security Lab (NDS2).

In other news, I enjoy running, swimming, and biking. I'm also a retired Super Smash Brothers tournament competitor.
News
[Apr 2025] Maura Pintor, Ruoxi Jia, and I are organizing the 18th AISec workshop at CCS 2025. Please consider submitting your work!
[Oct 2024] Maura Pintor, Xinyun Chen, and I organized the 17th AISec workshop at CCS 2024. Thank you to everyone who helped make it happen, and see you next year!
[Dec 2023] Our paper Privacy Auditing in One (1) Training Run received an outstanding paper award at NeurIPS 2023!
[June - Sept 2023] I enjoyed hosting Karan Chadha as a student researcher, together with Nicolas Papernot! His paper, Auditing Private Prediction, was accepted to ICML 2024!
[Aug 2023] Our paper Tight Auditing of Differentially Private Machine Learning won a best paper award at USENIX Security 2023!
[July 2023] Our paper "Extracting Training Data from Large Language Models" won runner up for the Caspar Bowden award at PETS 2023!
[June 2023] Lishan Yang and I co-chaired the DSML 2023 workshop, co-located with DSN 2023 in Porto, Portugal! Thank you to everyone involved, especially our attendees, our keynote speakers (Paolo Rech and Andrew Paverd), and our steering committee!
Selected Publications - see Google Scholar for the full list
- Measuring Forgetting of Memorized Training Examples
  Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang
  ICLR 2023
  [Paper]

- Extracting Training Data from Large Language Models
  Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel
  USENIX Security 2021
  [Paper]

- Auditing Differentially Private Machine Learning: How Private is Private SGD?
  Matthew Jagielski, Jonathan Ullman, Alina Oprea
  NeurIPS 2020; contributed talk at TPDP 2020
  [Paper] [Code] [Poster] [Talk]

- High-Fidelity Extraction of Neural Network Models
  Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot
  USENIX Security 2020
  [Paper] [Blog] [Talk]

- Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
  Matthew Jagielski, Alina Oprea, Chang Liu, Cristina Nita-Rotaru, Bo Li
  IEEE S&P (Oakland) 2018
  [Paper] [Code] [Talk]
Sometimes I have things to say. If that happens, I'll put them here.