Security and Privacy of Machine Learning
Date: June 14, 2019
Location: Long Beach Convention Center, Room 104B, Long Beach, CA, USA
As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become a matter of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries wishing to manipulate models or mislead them to malicious ends.
This workshop focuses on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning strategies. We seek to reach a consensus on a rigorous framework for formulating adversarial attacks targeting machine learning models, and for characterizing the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart out important directions for future work and cross-community collaborations.
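As a point of reference only (a standard template from the adversarial-examples literature, not a framework the workshop has adopted), test-time attacks on a classifier f_\theta are often formulated as a perturbation-bounded optimization problem, with robust training as the corresponding min-max objective:

\max_{\|\delta\|_p \le \epsilon} \; \ell\bigl(f_\theta(x+\delta),\, y\bigr)
\qquad \text{and} \qquad
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Bigl[\, \max_{\|\delta\|_p \le \epsilon} \ell\bigl(f_\theta(x+\delta),\, y\bigr) \Bigr],

where \ell is a loss function and \epsilon bounds the attacker's perturbation under an \ell_p norm.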
Schedule
8:40am-9:00am
Opening Remarks (Dawn Song)
Session 1: Security Vulnerabilities of Machine Learning Systems
9:00am-9:30am
Invited Talk #1: Patrick McDaniel. A systems security perspective on adversarial machine learning
9:30am-10:00am
Invited Talk #2: Abdullah Al-Dujaili. Flipping Sign Bits is All You Need to Craft Black-Box Adversarial Examples
10:00am-10:20am
Contributed Talk #1: Enhancing Gradient-based Attacks with Symbolic Intervals
10:20am-10:30am
Spotlight Presentation #1: Adversarial Policies: Attacking Deep Reinforcement Learning
10:30am-10:45am
Coffee Break
Session 2: Secure and Private Machine Learning in Practice
10:45am-11:15am
Invited Talk #3: Le Song. Adversarial Attack on Graph Structured Data
11:15am-11:45am
Invited Talk #4: Sergey Levine. Robust Perception, Imitation, and Reinforcement Learning for Embodied Learning Machines.
Session 3: Provable Robustness and Verifiable Machine Learning Approaches
1:15pm-1:45pm
Invited Talk #5: Zico Kolter. Provable Robustness beyond Region Propagation: Randomization and Stronger Threat Models
1:45pm-2:05pm
Contributed Talk #3: Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
2:05pm-2:45pm
Poster Session #1
Session 4: Towards Trustworthy and Interpretable Machine Learning
2:45pm-3:15pm
Invited Talk #6: Alexander Madry. Robustness beyond Security
3:15pm-3:45pm
Invited Talk #7: Been Kim. Towards interpretability for everyone: Testing with Concept Activation Vectors
3:45pm-4:05pm
Contributed Talk #4: Theoretically Principled Trade-off between Robustness and Accuracy
4:05pm-4:15pm
Spotlight Presentation #2: Model Weight Theft with just Noise Inputs: The Curious Case of the Petulant Attacker
4:15pm-5:15pm
Panel discussion
5:15pm-6:00pm
Poster Session #2
Posters
Poster Session #1 (2:00pm-2:45pm)
Shiqi Wang, Yizheng Chen, Ahmed Abdou and Suman Jana. Enhancing Gradient-based Attacks with Symbolic Intervals
Bo Zhang, Boxiang Dong, Hui Wendy Wang and Hui Xiong. Integrity Verification for Federated Machine Learning in the Presence of Byzantine Faults
Xinyun Chen, Wenxiao Wang, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li and Dawn Song. Leveraging Unlabeled Data for Watermark Removal of Deep Neural Networks
Qian Lou and Lei Jiang. SHE: A Fast and Accurate Deep Neural Network for Encrypted Data
Matt Jordan, Justin Lewis and Alexandros G. Dimakis. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Aria Rezaei, Chaowei Xiao, Bo Li and Jie Gao. Protecting Sensitive Attributes via Generative Adversarial Networks
Saeed Mahloujifar, Mohammad Mahmoody and Ameer Mohammed. Universal Multi-Party Poisoning Attacks
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning and Cho-Jui Hsieh. Verifying the Robustness of Tree-based Models
Congyue Deng and Yi Tian. Towards Understanding the Trade-off Between Accuracy and Adversarial Robustness
Zhi Xu, Chengtao Li and Stefanie Jegelka. Exploring the Robustness of GANs to Internal Perturbations
Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent Ghaoui and Michael Jordan. Theoretically Principled Trade-off between Robustness and Accuracy
Pang Wei Koh, Jacob Steinhardt and Percy Liang. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Bokun Wang and Ian Davidson. Improve Fairness of Deep Clustering to Prevent Misuse in Segregation
Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine and Stuart Russell. Adversarial Policies: Attacking Deep Reinforcement Learning
Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong and Tao Wei. Attacking Multiple Object Tracking using Adversarial Examples
Joseph Szurley and Zico Kolter. Perceptual Based Adversarial Audio Attacks
Poster Session #2 (5:15pm-6:00pm)
Felix Michels, Tobias Uelwer, Eric Upschulte and Stefan Harmeling. On the Vulnerability of Capsule Networks to Adversarial Attacks
Zhaoyang Lyu, Ching-Yun Ko, Tsui-Wei Weng, Luca Daniel, Ngai Wong and Dahua Lin. POPQORN: Quantifying Robustness of Recurrent Neural Networks
Dimitrios Diochnos, Saeed Mahloujifar and Mohammad Mahmoody. Lower Bounds for Adversarially Robust PAC Learning
Nicholas Roberts and Vinay Prabhu. Model weight theft with just noise inputs: The curious case of the petulant attacker
Ryan Webster, Julien Rabin, Frederic Jurie and Loic Simon. Generating Private Data Surrogates for Vision Related Tasks
Joyce Xu, Dian Ang Yap and Vinay Prabhu. Understanding Adversarial Robustness Through Loss Landscape Geometries
Kevin Shi, Daniel Hsu and Allison Bishop. A cryptographic approach to black-box adversarial machine learning
Haizhong Zheng, Earlence Fernandes and Atul Prakash. Analyzing the Interpretability Robustness of Self-Explaining Models
Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Sicun Gao, Dean Tullsen and Hadi Esmaeilzadeh. Shredder: Learning Noise for Privacy with Partial DNN Inference on the Edge
Chaowei Xiao, Xinlei Pan, Warren He, Bo Li, Jian Peng, Mingjie Sun, Jinfeng Yi, Mingyan Liu, Dawn Song. Characterizing Attacks on Deep Reinforcement Learning
Sanjam Garg, Somesh Jha, Saeed Mahloujifar and Mohammad Mahmoody. Adversarially Robust Learning Could Leverage Computational Hardness
ICML workshop posters should be roughly 24" x 36" in portrait orientation. There will be no poster boards; you will tape your poster directly to the wall, so please use lightweight paper. We provide the tape.
Submissions to this track will introduce novel ideas or results. Submissions should follow the ICML format and not exceed 4 pages (excluding references, appendices or large figures).
The workshop will include contributed papers. Based on the PC's recommendation, each paper accepted to the workshop will be allocated either a contributed talk or a poster presentation.
Submissions need to be anonymized. The workshop allows submissions of papers that are under review or have been recently published in a conference or a journal. The workshop will not have any official proceedings.
We invite submissions on any aspect of machine learning that relates to computer security and privacy (and vice versa). This includes, but is not limited to:
Test-time (exploratory) attacks: e.g. adversarial examples (see the sketch after this list)
Training-time (causative) attacks: e.g. data poisoning attacks
Differential privacy
Privacy-preserving generative models
Game-theoretic analysis of machine learning models
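For concreteness, here is a minimal sketch of the first topic above: a test-time attack using the fast gradient sign method (FGSM). The model, input, label, and epsilon below are hypothetical placeholders, and the snippet illustrates one canonical attack rather than any method endorsed or prescribed by the workshop.

# Minimal FGSM sketch (assumes PyTorch is installed); all objects below are
# illustrative placeholders, not artifacts of the workshop.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by epsilon * sign(gradient of the loss w.r.t. x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = torch.nn.Linear(784, 10)   # placeholder classifier
    x = torch.rand(1, 784)             # placeholder input in [0, 1]
    y = torch.tensor([3])              # placeholder label
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print((x_adv - x).abs().max())     # perturbation is bounded by epsilon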