Security and Privacy of Machine Learning

Date: June 14, 2019

Location: Long Beach Convention Center, Room 104B, Long Beach, CA, USA

As machine learning is increasingly deployed in critical real-world applications, the dangers of manipulation and misuse of these models have become a matter of paramount importance to public safety and user privacy. Applications ranging from online content recognition to financial analytics to autonomous vehicles have all been shown to be vulnerable to adversaries who wish to manipulate or mislead models to their malicious ends.

This workshop will focus on recent research and future directions for security and privacy problems in real-world machine learning systems. We aim to bring together experts from the machine learning, security, and privacy communities to highlight recent work in these areas and to clarify the foundations of secure and private machine learning. We seek to reach a consensus on a rigorous framework for formulating adversarial attacks against machine learning models, and to characterize the properties that ensure the security and privacy of machine learning systems. Finally, we hope to chart important directions for future work and cross-community collaboration.

Schedule

8:40am-9:00am Opening Remarks (Dawn Song)
Session 1: Security Vulnerabilities of Machine Learning Systems
9:00am-9:30am Invited Talk #1: Patrick McDaniel. A systems security perspective on adversarial machine learning
9:30am-10:00am Invited Talk #2: Abdullah Al-Dujaili. Flipping Sign Bits is All You Need to Craft Black-Box Adversarial Examples
10:00am-10:20am Contributed Talk #1: Enhancing Gradient-based Attacks with Symbolic Intervals
10:20am-10:30am Spotlight Presentation #1: Adversarial Policies: Attacking Deep Reinforcement Learning
10:30am-10:45am Coffee Break
Session 2: Secure and Private Machine Learning in Practice
10:45am-11:15am Invited Talk #3: Le Song. Adversarial Attack on Graph Structured Data
11:15am-11:45am Invited Talk #4: Sergey Levine. Robust Perception, Imitation, and Reinforcement Learning for Embodied Learning Machines
11:45am-12:05pm Contributed Talk #2: Private vqSGD: Vector-Quantized Stochastic Gradient Descent
12:05pm-1:15pm Lunch
Session 3: Provable Robustness and Verifiable Machine Learning Approaches
1:15pm-1:45pm Invited Talk #5: Zico Kolter. Provable Robustness beyond Region Propagation: Randomization and Stronger Threat Models
1:45pm-2:05pm Contributed Talk #3: Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
2:05pm-2:45pm Poster Session #1
Session 4: Towards Trustworthy and Interpretable Machine Learning
2:45pm-3:15pm Invited Talk #6: Aleksander Madry. Robustness beyond Security
3:15pm-3:45pm Invited Talk #7: Been Kim. Towards interpretability for everyone: Testing with Concept Activation Vectors
3:45pm-4:05pm Contributed Talk #4: Theoretically Principled Trade-off between Robustness and Accuracy
4:05pm-4:15pm Spotlight Presentation #2: Model Weight Theft with Just Noise Inputs: The Curious Case of the Petulant Attacker
4:15pm-5:15pm Panel discussion
5:15pm-6:00pm Poster Session #2

Poster Sessions

Poster Session #1 (2:05pm-2:45pm)

  • Shiqi Wang, Yizheng Chen, Ahmed Abdou and Suman Jana. Enhancing Gradient-based Attacks with Symbolic Intervals
  • Bo Zhang, Boxiang Dong, Hui Wendy Wang and Hui Xiong. Integrity Verification for Federated Machine Learning in the Presence of Byzantine Faults
  • Xinyun Chen, Wenxiao Wang, Yiming Ding, Chris Bender, Ruoxi Jia, Bo Li and Dawn Song. Leveraging Unlabeled Data for Watermark Removal of Deep Neural Networks
  • Qian Lou and Lei Jiang. SHE: A Fast and Accurate Deep Neural Network for Encrypted Data
  • Matt Jordan, Justin Lewis and Alexandros G. Dimakis. Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
  • Aria Rezaei, Chaowei Xiao, Bo Li and Jie Gao. Protecting Sensitive Attributes via Generative Adversarial Networks
  • Saeed Mahloujifar, Mohammad Mahmoody and Ameer Mohammed. Universal Multi-Party Poisoning Attacks
  • Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane Boning and Cho-Jui Hsieh. Verifying the Robustness of Tree-based Models
  • Congyue Deng and Yi Tian. Towards Understanding the Trade-off Between Accuracy and Adversarial Robustness
  • Zhi Xu, Chengtao Li and Stefanie Jegelka. Exploring the Robustness of GANs to Internal Perturbations
  • Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui and Michael Jordan. Theoretically Principled Trade-off between Robustness and Accuracy
  • Pang Wei Koh, Jacob Steinhardt and Percy Liang. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
  • Bokun Wang and Ian Davidson. Improve Fairness of Deep Clustering to Prevent Misuse in Segregation
  • Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine and Stuart Russell. Adversarial Policies: Attacking Deep Reinforcement Learning
  • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong and Tao Wei. Attacking Multiple Object Tracking using Adversarial Examples
  • Joseph Szurley and Zico Kolter. Perceptual Based Adversarial Audio Attacks

Poster Session #2 (5:15pm-6:00pm)

  • Felix Michels, Tobias Uelwer, Eric Upschulte and Stefan Harmeling. On the Vulnerability of Capsule Networks to Adversarial Attacks
  • Zhaoyang Lyu, Ching-Yun Ko, Tsui-Wei Weng, Luca Daniel, Ngai Wong and Dahua Lin. POPQORN: Quantifying Robustness of Recurrent Neural Networks
  • Avishek Ghosh, Justin Hong, Dong Yin and Kannan Ramchandran. Robust Heterogeneous Federated Learning
  • Ruoxi Jia, Bo Li, Chaowei Xiao and Dawn Song. Delving into Bootstrapping for Differential Privacy
  • Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu and Bin Dong. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
  • Mark Lee and Zico Kolter. On Physical Adversarial Patches for Object Detection
  • Venkata Gandikota, Raj Kumar Maity and Arya Mazumdar. Private vqSGD: Vector-Quantized Stochastic Gradient Descent
  • Dimitrios Diochnos, Saeed Mahloujifar and Mohammad Mahmoody. Lower Bounds for Adversarially Robust PAC Learning
  • Nicholas Roberts and Vinay Prabhu. Model Weight Theft with Just Noise Inputs: The Curious Case of the Petulant Attacker
  • Ryan Webster, Julien Rabin, Frederic Jurie and Loic Simon. Generating Private Data Surrogates for Vision Related Tasks
  • Joyce Xu, Dian Ang Yap and Vinay Prabhu. Understanding Adversarial Robustness Through Loss Landscape Geometries
  • Kevin Shi, Daniel Hsu and Allison Bishop. A cryptographic approach to black-box adversarial machine learning
  • Haizhong Zheng, Earlence Fernandes and Atul Prakash. Analyzing the Interpretability Robustness of Self-Explaining Models
  • Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Sicun Gao, Dean Tullsen and Hadi Esmaeilzadeh. Shredder: Learning Noise for Privacy with Partial DNN Inference on the Edge
  • Chaowei Xiao, Xinlei Pan, Warren He, Bo Li, Jian Peng, Mingjie Sun, Jinfeng Yi, Mingyan Liu and Dawn Song. Characterizing Attacks on Deep Reinforcement Learning
  • Sanjam Garg, Somesh Jha, Saeed Mahloujifar and Mohammad Mahmoody. Adversarially Robust Learning Could Leverage Computational Hardness
  • Horace He, Aaron Lou, Qingxuan Jiang, Isay Katsman, Serge Belongie and Ser-Nam Lim. Adversarial Example Decomposition

Poster Size

ICML workshop posters should be roughly 24" x 36" in portrait orientation. There will be no poster boards; you will tape your poster directly to the wall, so please use lightweight paper. We will provide the tape.

Organizing Committee

(Listed in alphabetical order)