Dawn Xiaodong Song's Home Page
Dawn Song [Bio] | Professor, Computer Science Division, University of California, Berkeley
News
- Selected as AI2050 Senior Fellow by Schmidt Sciences, 2025
- Elected as a member of American Academy of Arts and Sciences (AAAS), 2025
- Launched 3rd edition of our Agentic AI MOOC, with 32K+ enrolled globally in the series, September 2025
- Berkeley RDI hosted Agentic AI Summit, at UC Berkeley, with 2K+ in-person attendees and 40K+ joined online, the largest agentic AI gathering we know of, August 2025
- Distinguished Paper Award at IEEE Symposium on Security and Privacy (S&P): DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks, 2025
- Keynote at ICLR Towards Building Safe and Secure AI: Lessons and Open Challenges, April 2025
- Outstanding Paper Award Runner-Up at ICLR: Data Shapley in One Training Run, 2025
Research
- Research Interests: AI safety and security, agentic AI, deep learning, decentralization technology, and security and privacy.
- Publications:
- Selected Current Research Thrusts and Projects:
- Sample Past Research Projects
I am the co-Director of the UC Berkeley Center on Responsible Decentralized Intelligence (RDI). I am also part of the Berkeley Artificial Intelligence Research (BAIR) Lab, Berkeley Deep Drive (BDD), and the Berkeley Center for Human-Compatible AI.
Awards (selected)
- Best/Outstanding Paper Awards:
- Distinguished Paper Award at 46th IEEE Symposium on Security and Privacy, 2025
- Best Scientific Cybersecurity Paper Award by National Security Agency (NSA), 2024
- Outstanding Paper Award at NeurIPS, 2023
- Best Paper Award at ACM Symposium on User Interface Software and Technology (UIST), 2023
- Distinguished Paper Award at OOPSLA, 2019
- Distinguished Paper Award at International Symposium on Software Testing and Analysis (ISSTA), 2018
- Best Paper Award at ICLR, 2017
- Best Applied Security Research Paper Award, AT&T, 2010
- Best Paper Award, USENIX Security Symposium, 2007
- Test-of-Time Paper Awards:
- 3 Test-of-Time Awards at IEEE Symposium on Security and Privacy, 2020
- Test-of-Time Award at ACM Conference on Computer and Communications Security, 2011
- Finalists/Honorable Mention:
- Outstanding Paper Award Runner-Up at ICLR, 2025
- Best Research Paper Award Finalist at VLDB, 2024
- Best Paper Award Honorable Mention at CHI, 2024
- Caspar Bowden Award Runner-Up at PET, 2023
Recognition (Selected)
- Fellows:
- American Academy of Arts and Sciences (AAAS) Fellow, 2025
- AI2050 Senior Fellow, Schmidt Sciences, 2025
- ACM Fellow, 2020
- IEEE Fellow, 2019
- MacArthur Fellow, 2010
- Guggenheim Fellow, 2010
- Sloan Research Fellow, 2007
- Other Honors and Awards:
- Outstanding Innovation Award, ACM SIGSAC, 2020
- Wired25 List of Innovators, 2019
- Female Founder 100 List, Inc., 2019
- AMiner Most Influential Scholar Award (as most cited scholar in Computer Security), 2016
- Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, 2010
- MIT Technology Review TR-35 Award (recognizing the world's top innovators under the age of 35), 2009
- Okawa Foundation Research Award, 2008
- George Tallman Ladd Research Award, Carnegie Mellon University, 2007
- NSF CAREER Award, 2005
Services (selected)
- World Economic Forum AGI Global Council (2025 - present)
- Program co-Chair: ICLR 2020
- Program Senior Area Chair: ICML (2026), NeurIPS (2025), NeurIPS (2024)
- Program Area Chair: ICML (2018), NeurIPS (2018)
- Steering committee: MLSys conference (2018-present)
- Monetary Authority of Singapore's International Technology Advisory Panel (2021-2023)
- Science Advisory Board, Santa Fe Institute (2014-2017)
Teaching
- CS194/294-196 Agentic AI, with Xinyun Chen, Fall 25 (MOOC)
- CS194/294-280 Advanced Large Language Model Agents, with Xinyun Chen and Kaiyu Yang, Spring 25 (MOOC)
- CS194/294-196 Large Language Model Agents, with Xinyun Chen, Fall 24 (MOOC)
- CS294-267/CS194-267 Understanding Large Language Models: Foundations and Safety, with Dan Hendrycks, Spring 24
- CS294-196/CS194-196 Responsible GenAI and Decentralized Intelligence, with Matei Zaharia, Fall 23
Talks (selected)
- Keynote at Microsoft Machine Learning, AI & Data Science Conference, June 2025. Towards building safe and secure AI - Lessons and Open Challenges
- Keynote at ICLR, April 2025. Towards Building Safe and Secure AI: Lessons and Open Challenges
- Invited talk at RSA, April 2025. Building Safe and Secure Agentic AI.
- Keynote at Amazon Machine Learning Conference, October 2024. Towards Building Safe and Secure AI: Lessons and Open Challenges
- Keynote at Graph the Planet, May 2024. Impact of Frontier AI on the Landscape of Cybersecurity
- Invited talk at Stanford Workshop on the Governance of Open Foundation Models, February 2024. Impact of Frontier AI on the Landscape of Cybersecurity
- Keynote at MLSys, Aug 2022. Towards Building a Responsible Data Economy
- Keynote at IEEE Big Data, Dec 2021. Towards Building a Responsible Data Economy
- Keynote at ACM CCS, Nov 2021. Towards Building a Responsible Data Economy
- Keynote at ICDE, April 2021. Towards Building a Responsible Data Economy
- Keynote at AAAI 2020, February 2020. AI and Security: Lessons, Challenges and Future Directions
- Plenary keynote at KDD Deep Learning Day, August 2019. AI and Security: Lessons, Challenges and Future Directions
- Keynote at CAV 2019, July 2019. Open Challenges in AI and Formal Verification
- Invited talk at Deep Reinforcement Learning Summit, June, 2019. Secure Deep Reinforcement Learning
- Keynote at ScaledML, March, 2019. AI and Security: Lessons and Future Directions
- Invited talk at EmTech Digitals, March, 2019. AI and Security: Lessons and Future Directions
- Distinguished lecture at Brown University Distinguished Lecture Series, October 2018. AI and Security: Lessons, Challenges and Future Directions
- Keynote at IEEE Cybersecurity Development Conference, October 2018. Building and Deploying Secure Systems in Practice: Lessons, Challenges and Future Directions
- Keynote at O'Reilly AI Conference, September, 2018. AI and Security: Lessons, Challenges, and Future Directions
- Invited talk at Microsoft Research Faculty Summit, August, 2018. Towards secure, practical confidential computing with open source secure enclave
- Keynote at ICML 2018, July, 2018. AI and Security: Lessons, Challenges and Future Directions
- Keynote at Spark Summit, June, 2018. The Future of AI and Security
- Keynote at ASPLOS workshop, March, 2018. Towards An Open-Source, Formally-Verified Secure Enclave
- Invited talk at Representation Learning Workshop, March, 2017. Resilient Representation and Provable Generalization
- Keynote at Apple Machine Learning Summit, February, 2017. Secure and Privacy-preserving data analytics and machine learning
- Keynote at FMCAD, October, 2016. Formal Verification for Computer Security: Lessons Learned and Future Directions
- Keynote at Qualcomm Research Day, June, 2015. Towards Security in a Connected World
- Distinguished lecture at UPenn, April, 2015. No More Cat and Mouse: Towards Building Systems Secure by Construction
Job Openings
- Positions are available for postdocs, research staff, staff programmers, and interns in the areas of security, privacy, and deep learning. If you are interested in applying for a position, please fill out the form here and then send an email to dawnsong.jobs@gmail.com. Please do not email me directly without filling out the form.
- For Berkeley undergrads: we enjoy having undergrads participate in our research projects to gain research experience. To be selected, we normally expect students to have a GPA of at least 3.7 in an EECS or math major, or to have extensive experience in areas related to the project. If interested, please apply through the URAP program. You are also welcome to fill in the form mentioned above at any time. Please do not email me directly without first applying through URAP and/or filling in the form above, unless instructed.