The Art of Robustness:
Devil and Angel in Adversarial Machine Learning
Workshop at IEEE Conference on Computer Vision and Pattern Recognition 2022
Overview
Deep learning has achieved remarkable success in many fields, including computer vision. However, studies in adversarial machine learning show that deep learning models are highly vulnerable to adversarial examples. Extensive work has demonstrated that adversarial examples act as a devil for the robustness of deep neural networks, threatening deep learning-based applications in both the digital and physical worlds.
Though harmful, adversarial attacks can also play the role of an angel for deep learning models. Discovering and harnessing adversarial examples properly can be highly beneficial across several domains, including improving model robustness, diagnosing model blind spots, protecting data privacy, evaluating safety, and deepening our understanding of vision systems in practice. Since adversarial learning plays both devil and angel roles, exploring robustness is an art of balancing and embracing both the light and dark sides of adversarial examples.
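To make the notion of an adversarial example concrete, the short sketch below crafts one with the fast gradient sign method (FGSM) in PyTorch. It is only an illustrative example under assumed placeholders (`model` is any differentiable image classifier, `(x, y)` a correctly labelled batch with pixel values in [0, 1]) and is not part of the workshop program.

```python
# Minimal FGSM sketch (after Goodfellow et al., 2015). `model` and (x, y)
# are assumed placeholders: a differentiable classifier and a labelled batch
# with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft adversarial examples with a single signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```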
In this workshop, we aim to bring together researchers from computer vision, machine learning, and security for a series of talks, papers, and discussions. We will focus on the most recent progress and future directions of both the positive and negative aspects of adversarial machine learning, especially in computer vision. Unlike previous workshops on adversarial machine learning, this workshop explores both the devil and angel characters of adversarial examples for building trustworthy deep learning models.
Overall, the workshop consists of invited talks from experts in this area, research paper submissions, and a large-scale online competition on building robust models. The competition includes two tracks: (1) building robust classifiers on ImageNet against digital attacks, and (2) building robust adversarial detectors in an open-set, physical-world setting.
The timeline has been released below.

“This year, June 19 and 20 marks Juneteenth, a US holiday commemorating the end of slavery in the US, and a holiday of special significance in the US South. We encourage attendees to learn more about Juneteenth and its historical context, and to join the city of New Orleans in celebrating the Juneteenth holiday. You can find out more information about Juneteenth here: https://cvpr2022.thecvf.com/recognizing-juneteenth”
Important Dates
Timeline
ArtofRobust Workshop Schedule
June 19, 2022, 9:00-17:30, New Orleans time zone (UTC/GMT -5)

| Event | Start time | End time |
| --- | --- | --- |
| Opening Remarks | 8:50 | 9:00 |
| Invited talk: Yang Liu | 9:00 | 9:30 |
| Invited talk: Quanshi Zhang | 9:30 | 10:00 |
| Invited talk: Baoyuan Wu | 10:00 | 10:30 |
| Invited talk: Aleksander Mądry | 10:30 | 11:00 |
| Invited talk: Bo Li | 11:00 | 11:30 |
| Poster Session | 11:30 | 12:30 |
| Lunch | 12:30 | 13:30 |
| Oral Session | 13:30 | 14:10 |
| Challenge Session | 14:10 | 14:30 |
| Invited talk: Nicholas Carlini | 14:30 | 15:00 |
| Invited talk: Judy Hoffman | 15:00 | 15:30 |
| Invited talk: Alan Yuille | 15:30 | 16:00 |
| Invited talk: Ludwig Schmidt | 16:00 | 16:30 |
| Invited talk: Cihang Xie | 16:30 | 17:00 |
Proposed Speakers
- Alan Yuille, Johns Hopkins University
- Yang Liu, Nanyang Technological University
- Aleksander Mądry, Massachusetts Institute of Technology
- Nicholas Carlini, Google
- Bo Li, UIUC
- Quanshi Zhang, Shanghai Jiao Tong University
- Ludwig Schmidt, University of Washington
- Cihang Xie, UC Santa Cruz
- Judy Hoffman, Georgia Tech
- Baoyuan Wu, Chinese University of Hong Kong (Shenzhen)
Call for Papers
- Adversarial attacks against computer vision tasks
- Improving the robustness of deep learning systems
- Interpreting and understanding model robustness
- Adversarial attacks for social good
Organizers
- Aishan Liu, Beihang University
- Florian Tramèr, ETH Zürich and Google Brain
- Francesco Croce, University of Tübingen
- Jiakai Wang, Beihang University
- Yingwei Li, Johns Hopkins University
- Xinyun Chen, UC Berkeley
- Cihang Xie, UC Santa Cruz
- Bo Li, UIUC
- Xianglong Liu, Beihang University
- Xiaochun Cao, Sun Yat-sen University
- Dawn Song, UC Berkeley
- Alan Yuille, Johns Hopkins University
- Philip Torr, Oxford University
- Dacheng Tao, JD Explore Academy
Paper Submission
Selected outstanding papers will be invited to a Special Issue of the Pattern Recognition journal for publication consideration (TBD).
Submission Site: https://cmt3.research.microsoft.com/artofrobust2022/
Submission Due (extended): March 22, 2022, Anywhere on Earth (AoE)
Accepted Long Papers
- Privacy Leakage of Adversarial Training Models in Federated Learning Systems [Paper] oral presentation
  Jingyang Zhang (Duke University)*; Yiran Chen (Duke University); Hai Li (Duke University)
- Increasing Confidence in Adversarial Robustness Evaluations [Paper] oral presentation
  Roland S. Zimmermann (University of Tübingen)*; Wieland Brendel (University of Tübingen); Florian Tramèr (Google); Nicholas Carlini (Google)
- The Risk and Opportunity of Adversarial Example in Military Field [Paper]
  Yuwei Chen (Chinese Aeronautical Establishment)*
- Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning [Paper]
  Jun Guo (Beihang University)*; Yonghong Chen (Yangzhou Collaborative Innovation Research Institute CO., LTD); Yihang Hao (Yangzhou Collaborative Innovation Research Institute CO., LTD); Zixin Yin (Beihang University); Yin Yu (No. 38 Research Institute of CETC, Hefei 230088, China); Simin Li (Beihang University)
- Robustness and Adaptation to Hidden Factors of Variation [Paper]
  William Paul (JHU/APL)*; Philippe Burlina (JHU/APL/CS/SOM)
- PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos [Paper]
  Nupur Thakur (Arizona State University)*; Baoxin Li (Arizona State University)
- Adversarial Robustness through the Lens of Convolutional Filters [Paper]
  Paul Gavrikov (Offenburg University)*; Janis Keuper (Offenburg University)
- Strengthening the Transferability of Adversarial Examples Using Advanced Looking Ahead and Self-CutMix [Paper][Supp]
  Donggon Jang (KAIST)*; Sanghyeok Son (KAIST); Daeshik Kim (KAIST)
- AugLy: Data Augmentations for Adversarial Robustness [Paper]
  Zoë Papakipos (Meta AI)*; Joanna Bitton (Facebook AI)
- RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection [Paper]
  Umar Khalid (University of Central Florida)*; Ashkan Esmaeili (University of Central Florida); Nazmul Karim (University of Central Florida); Nazanin Rahnavard (University of Central Florida)
- An Empirical Study of Data-Free Quantization's Tuning Robustness [Paper]
  Hong Chen (Beihang University); Yuxan Wen (Beihang University); Yifu Ding (Beihang University); Zhen Yang (Shanghai Aerospace Electronic Technology Institute); Yufei Guo (The Second Academy of China Aerospace Science and Industry Corporation); Haotong Qin (Beihang University)*
- Exploring Robustness Connection between Artificial and Natural Adversarial Examples [Paper]
  Akshay Agarwal (University at Buffalo)*; Nalini Ratha (SUNY Buffalo); Mayank Vatsa (IIT Jodhpur); Richa Singh (IIT Jodhpur)
- Generalizing Adversarial Explanations with Grad-CAM [Paper]
  Tanmay Chakraborty (EURECOM)*; Utkarsh Trehan (EURECOM); Khawla Mallat (EURECOM); Jean-Luc Dugelay (France)
- CorrGAN: Input Transformation Technique Against Natural Corruptions [Paper]
  Mirazul Haque (University of Texas at Dallas)*; Christof J Budnik (Siemens); Wei Yang (University of Texas at Dallas)
- Poisons that are learned faster are more effective [Paper]
  Pedro Sandoval-Segura (University of Maryland, College Park)*; Vasu Singla (University of Maryland); Liam Fowl (University of Maryland); Jonas Geiping (University of Maryland); Micah Goldblum (New York University); David Jacobs (University of Maryland); Tom Goldstein (University of Maryland, College Park)
- Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems [Paper]
  Furkan Mumcu (University of South Florida); Keval Doshi (University of South Florida)*; Yasin Yilmaz (University of South Florida)
Accepted Extended Abstracts
- Gradient Obfuscation Checklist Test Gives a False Sense of Security [Paper] oral presentation
  Nikola Popović (ETH Zürich)*; Danda Pani Paudel (ETH Zürich); Thomas Probst (ETH Zürich); Luc Van Gool (ETH Zürich)
- Test-time Adaptation of Residual Blocks against Poisoning and Backdoor Attacks [Paper][Supp] oral presentation
  Arnav Gudibande (UC Berkeley)*; Xinyun Chen (UC Berkeley); Yang Bai (Tsinghua University); Jason Xiong (University of California, Berkeley); Dawn Song (UC Berkeley)
- Understanding CLIP Robustness [Paper]
  Yuri Galindo (Federal University of Sao Paulo)*; Fabio Faria (Federal University of Sao Paulo)
- On Fragile Features and Batch Normalization in Adversarial Training [Paper]
  Nils Philipp Walter (Max Planck Institute for Informatics)*; David Stutz (Max Planck Institute for Informatics); Bernt Schiele (Max Planck Institute for Informatics)
- Sparse Visual Counterfactual Explanations in Image Space [Paper]
  Valentyn Boreiko (University of Tübingen)*; Maximilian Augustin (University of Tübingen); Francesco Croce (University of Tübingen); Philipp Berens (University of Tübingen); Matthias Hein (University of Tübingen)
- Efficient and Effective Augmentation Strategy for Adversarial Training [Paper]
  Sravanti Addepalli (Indian Institute of Science)*; Samyak Jain (Indian Institute of Technology (BHU), Varanasi); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
- Towards Data-Free Model Stealing in a Hard Label Setting [Paper]
  Sunandini Sanyal (Indian Institute of Science, Bengaluru)*; Sravanti Addepalli (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
- Transferability of ImageNet Robustness to Downstream Tasks [Paper]
  Yutaro Yamada (Yale University)*; Mayu Otani (CyberAgent)
Challenge
Track I: Classification Task Defense
Deep learning models are vulnerable to noise, e.g., adversarial attacks, which poses serious challenges to deep learning-based applications in real-world scenarios. A large number of defenses have been proposed to mitigate these threats; however, most of them can be broken, and robustness on large-scale datasets remains far from satisfactory.
To accelerate research on building robust models against such noise, we organize this challenge track to motivate novel defense algorithms. Participants are encouraged to develop defense methods that improve model robustness against diverse noises on a large-scale dataset.
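For orientation, a widely used baseline defense for this kind of task is adversarial training. The sketch below shows a minimal PGD-based adversarial-training loop in PyTorch; `model`, `loader`, and `optimizer` are assumed placeholders, the hyperparameters are illustrative, and the official rules and evaluation protocol are those published on the competition site.

```python
# Minimal PGD-based adversarial training sketch (after Madry et al., 2018).
# `model`, `loader`, and `optimizer` are assumed placeholders; hyperparameters
# are illustrative and not tied to the challenge's evaluation protocol.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: iteratively increase the loss within an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of training on adversarial examples generated on the fly."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```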
Track II: Open Set Defense
Most defense methods aim to build robust models in a closed-set setting (e.g., fixed datasets with constrained perturbation types and budgets). In real-world scenarios, however, adversaries can cause greater harm to deep learning-based applications by generating unrestricted attacks, such as large and visible perturbations or perturbed images with unseen labels.
To accelerate research on building robust models in the open set, we organize this challenge track. Participants are encouraged to develop a robust detector that can distinguish clean examples from perturbed ones under unseen noises and classes, while training only on a limited-scale dataset.
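As a rough picture of what such a detector could look like, the sketch below trains a small binary head on top of a frozen feature extractor to separate clean inputs from perturbed ones. The `backbone` and its feature dimension are assumed placeholders, and this is an illustrative baseline, not the challenge's prescribed solution.

```python
# Illustrative perturbation detector for an open-set setting: a frozen
# feature extractor plus a small binary head (0 = clean, 1 = perturbed).
# `backbone` and `feat_dim` are assumed placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationDetector(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone              # frozen feature extractor
        self.head = nn.Linear(feat_dim, 2)    # binary clean/perturbed head

    def forward(self, x):
        with torch.no_grad():                 # keep the backbone frozen
            feats = self.backbone(x)
        return self.head(feats)

def detector_loss(detector, x_clean, x_perturbed):
    """Cross-entropy on a balanced batch of clean and perturbed inputs."""
    x = torch.cat([x_clean, x_perturbed], dim=0)
    y = torch.cat([torch.zeros(len(x_clean), dtype=torch.long),
                   torch.ones(len(x_perturbed), dtype=torch.long)]).to(x.device)
    return F.cross_entropy(detector(x), y)
```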
Challenge Chairs
- Siyuan, Chinese Academy of Sciences
- Zonglei, Beihang University
- Tianlin Li, Nanyang Technological University
- Haotong Qin, Beihang University
Sponsors
Program Committee
- Akshayvarun Subramanya (UMBC)
- Alexander Robey (University of Pennsylvania)
- Ali Shahin Shamsabadi (Queen Mary University of London, UK)
- Angtian Wang (Johns Hopkins University)
- Aniruddha Saha (University of Maryland Baltimore County)
- Anshuman Suri (University of Virginia)
- Bernhard Egger (Massachusetts Institute of Technology)
- Chenglin Yang (Johns Hopkins University)
- Chirag Agarwal (Harvard University)
- Gaurang Sriramanan (Indian Institute of Science)
- Jiachen Sun (University of Michigan)
- Jieru Mei (Johns Hopkins University)
- Jun Guo (Beihang University)
- Ju He (Johns Hopkins University)
- Kibok Lee (University of Michigan)
- Lifeng Huang (Sun Yat-sen University)
- Maura Pintor (University of Cagliari)
- Muhammad Awais (Kyung-Hee University)
- Renshuai Tao (Beihang University)
- Muzammal Naseer (ANU)
- Nataniel Ruiz (Boston University)
- Qihang Yu (Johns Hopkins University)
- Qing Jin (Northeastern University)
- Rajkumar Theagarajan (University of California, Riverside)
- Ruihao Gong (SenseTime)
- Shiyu Tang (Beihang University)
- Sravanti Addepalli (Indian Institute of Science)
- Tianlin Li (NTU)
- Wenxiao Wang (Tsinghua University)
- Hang Yu (Beihang University)
- Won Park (University of Michigan)
- Xiangning Chen (University of California, Los Angeles)
- Xiaohui Zeng (University of Toronto)
- Xingjun Ma (Deakin University)
- Xinwei Zhao (Drexel University)
- Shunchang Liu (Beihang University)
- Yulong Cao (University of Michigan, Ann Arbor)
- Yutong Bai (Johns Hopkins University)
- Zihao Xiao (Johns Hopkins University)
- Zixin Yin (Beihang University)
- Zixiang Zhao (Xi'an Jiaotong University)