The 3rd Workshop of Adversarial Machine Learning
on Computer Vision: Art of Robustness
Workshop at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
Overview
Deep learning has achieved remarkable success in many fields, including computer vision. However, research in adversarial machine learning shows that deep learning models are highly vulnerable to adversarial examples: extensive work has demonstrated that such examples undermine the robustness of deep neural networks and threaten deep-learning-based applications in both the digital and physical worlds.
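To make the threat concrete, the sketch below implements the classic fast gradient sign method (FGSM) in PyTorch: a single signed-gradient step within an L∞ budget is often enough to change a classifier's prediction. The `model`, inputs `x`, labels `y`, and budget `eps` are illustrative placeholders rather than artifacts of this workshop.

```python
# Minimal FGSM sketch (illustrative only): perturb images within an
# L-infinity budget `eps` so that the classifier's loss increases.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Return adversarial versions of images `x` (values in [0, 1]) for classifier `model`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss at the current prediction
    loss.backward()                           # gradient of the loss w.r.t. the pixels
    x_adv = x_adv + eps * x_adv.grad.sign()   # one signed-gradient step
    return x_adv.clamp(0, 1).detach()         # keep pixels in the valid range
```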
Although harmful, adversarial attacks can also benefit deep learning models. Discovering and harnessing adversarial examples properly can help in several ways, including improving model robustness, diagnosing model blind spots, protecting data privacy, evaluating safety, and deepening our understanding of vision systems in practice (see the sketch after this paragraph). Because adversarial learning plays both the devil and the angel, exploring robustness is an art of balancing and embracing both the dark and light sides of adversarial examples.
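As one concrete example of this beneficial side, adversarial training feeds attack-generated examples back into the training loop so the model learns to resist them. The sketch below shows one such training step under common assumptions (a PyTorch classifier `model`, an `optimizer`, images in [0, 1], and an L∞ budget); it illustrates the general recipe rather than the method of any particular workshop paper.

```python
# Illustrative adversarial-training step: a PGD inner maximization followed by
# a standard gradient update on the perturbed batch.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              eps=8 / 255, alpha=2 / 255, steps=10):
    # Inner maximization: search for a worst-case perturbation inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()             # keep pixels valid
    # Outer minimization: update the model on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```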
In this workshop, we aim to bring together researchers from computer vision, machine learning, and security to collaborate through a series of papers, invited talks, and discussions. We will focus on the most recent progress and the future directions of both the positive and negative aspects of adversarial machine learning, especially in computer vision. Unlike previous workshops on adversarial machine learning, this workshop explores both the devil and the angel sides of adversarial examples in the service of building trustworthy deep learning models.
Overall, this workshop consists of invited talks from experts in this area, research paper submissions, and a large-scale online competition on building robust models.
Timeline
ArtofRobust Workshop Schedule

| Event | Start time | End time |
| --- | --- | --- |
| Opening Remarks | 9:00 | 9:10 |
| Invited talk: Aleksander Madry | 9:10 | 9:40 |
| Invited talk: Deqing Sun | 9:40 | 10:10 |
| Invited talk: Bo Li | 10:10 | 10:40 |
| Invited talk: Cihang Xie | 10:40 | 11:10 |
| Invited talk: Alan Yuille | 11:10 | 11:40 |
| Oral Session | 11:40 | 12:00 |
| Challenge Session | 12:00 | 12:10 |
| Poster Session | 12:10 | 12:30 |
| Lunch | 12:30 | 13:30 |
| Invited talk: Lingjuan Lyu | 13:30 | 14:00 |
| Invited talk: Judy Hoffman | 14:00 | 14:30 |
| Invited talk: Furong Huang | 14:30 | 15:00 |
| Invited talk: Ludwig Schmidt | 15:00 | 15:30 |
| Invited talk: Chaowei Xiao | 15:30 | 16:00 |
| Poster Session | 16:00 | 17:00 |
Invited Speakers
- Alan Yuille (Johns Hopkins University)
- Furong Huang (University of Maryland)
- Bo Li (University of Illinois at Urbana-Champaign)
- Cihang Xie (UC Santa Cruz)
- Deqing Sun (Google)
- Chaowei Xiao (Arizona State University)
- Aleksander Madry (Massachusetts Institute of Technology)
- Judy Hoffman (Georgia Tech)
- Ludwig Schmidt (University of Washington)
- Lingjuan Lyu (Sony AI)
Organizers
- Aishan Liu (Beihang University)
- Jiakai Wang (Zhongguancun Laboratory)
- Francesco Croce (University of Tübingen)
- Vikash Sehwag (Princeton University)
- Yingwei Li (Waymo)
- Xinyun Chen (Google Brain)
- Cihang Xie (UC Santa Cruz)
- Yuanfang Guo (Beihang University)
- Xianglong Liu (Beihang University)
- Xiaochun Cao (Sun Yat-sen University)
- Dawn Song (UC Berkeley)
- Alan Yuille (Johns Hopkins University)
- Philip Torr (University of Oxford)
- Dacheng Tao (JD Explore Academy)
Call for Papers
- Adversarial attacks against computer vision tasks
- Improving the robustness of deep learning systems
- Interpreting and understanding model robustness
- Adversarial attacks for social good
- Datasets and benchmarks that benefit model robustness
Submission Site: https://cmt3.research.microsoft.com/3rdAdvML2023
Submission Due: March 15, 2023, Anywhere on Earth (AoE)
Accepted Long Papers
- Certified Adversarial Robustness Within Multiple Perturbation Bounds [Paper] [Supplementary Material] (oral presentation)
  Soumalya Nandi (Indian Institute of Science, Bangalore)*; Sravanti Addepalli (Indian Institute of Science); Harsh Rangwani (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
- Adversarial Defense in Aerial Detection [Paper]
  Yuwei Chen (Aviation Industry Development Research Center of China)*; Shiyong Chu (Aviation Industry Development Research Center of China)
- Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective [Paper] [Supplementary Material]
  Zhengbao He (Shanghai Jiao Tong University)*; Tao Li (Shanghai Jiao Tong University); Sizhe Chen (Shanghai Jiao Tong University); Xiaolin Huang (Shanghai Jiao Tong University)
- Universal Watermark Vaccine: Universal Adversarial Perturbations for Watermark Protection [Paper] (oral presentation)
  Jianbo Chen (Hunan University)*; Xinwei Liu (Institute of Information Engineering, Chinese Academy of Sciences); Siyuan Liang (Chinese Academy of Sciences); Xiaojun Jia (Institute of Information Engineering, Chinese Academy of Sciences); Yuan Xun (Institute of Information Engineering, Chinese Academy of Sciences)
- Robustness with Query-efficient Adversarial Attack using Reinforcement Learning [Paper]
  Soumyendu Sarkar (Hewlett Packard Enterprise)*; Ashwin Ramesh Babu (Hewlett Packard Enterprise Labs); Sajad Mousavi (Hewlett Packard Enterprise); Sahand Ghorbanpour (HPE); Vineet Gundecha (Hewlett Packard Enterprise); Antonio Guillen (Hewlett Packard Enterprise); Ricardo Luna Gutierrez (Hewlett Packard Enterprise); Avisek Naug (Hewlett Packard Enterprise)
- Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs [Paper]
  Hasan Abed Al Kader Hammoud (King Abdullah University of Science and Technology)*; Adel Bibi (University of Oxford); Philip Torr (University of Oxford); Bernard Ghanem (KAUST)
- Exploring Diversified Adversarial Robustness in Neural Networks via Robust Mode Connectivity [Paper]
  Ren Wang (Illinois Institute of Technology)*; Yuxuan Li (Harbin Institute of Technology); Sijia Liu (Michigan State University)
- How many dimensions are required to find an adversarial example? [Paper] [Supplementary Material] (oral presentation)
  Charles Godfrey (Pacific Northwest National Lab)*; Henry Kvinge (Pacific Northwest National Lab); Elise Bishoff (Pacific Northwest National Lab); Myles Mckay (Pacific Northwest National Lab); Davis Brown (Pacific Northwest National Laboratory); Timothy Doster (Pacific Northwest National Laboratory); Eleanor Byler (Pacific Northwest National Laboratory)
- An Extended Study of Human-like Behavior under Adversarial Training [Paper]
  Paul Gavrikov (Offenburg University)*; Janis Keuper (Offenburg University); Margret Keuper (University of Mannheim)
- Test-time Detection and Repair of Adversarial Samples via Masked Autoencoder [Paper]
  Yun-Yun Tsai (Columbia University)*; JuChin Chao (Columbia University); Albert Wen (Columbia University); Zhaoyuan Yang (GE Research); Chengzhi Mao (Columbia University); Tapan Shah (GE); Junfeng Yang (Columbia University)
- Deep Convolutional Sparse Coding Networks for Interpretable Image Fusion [Paper]
  Zixiang Zhao (Xi'an Jiaotong University)*; Jiangshe Zhang (Xi'an Jiaotong University); Haowen Bai (Xi'an Jiaotong University); Yicheng Wang (Xi'an Jiaotong University); Yukun Cui (Xi'an Jiaotong University); Lilun Deng (Xi'an Jiaotong University); Kai Sun (Xi'an Jiaotong University); Chunxia Zhang (Xi'an Jiaotong University); Junmin Liu (Xi'an Jiaotong University); Shuang Xu (Northwestern Polytechnical University)
- Robustness Benchmarking of Image Classifiers for Physical Adversarial Attack Detection [Paper]
  Ojaswee (Indian Institute of Science Education and Research Bhopal); Akshay Agarwal (IISER Bhopal)*
- Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness [Paper]
  Timothy P Redgrave (University of Notre Dame)*; Colton R Crum (University of Notre Dame)
- A Few Adversarial Tokens Can Break Vision Transformers [Paper] [Supplementary Material]
  Ameya Joshi (New York University)*; Sai Charitha Akula (New York University); Gauri Jagatap (New York University); Chinmay Hegde (New York University)
- Dual-model Bounded Divergence Gating for Improved Clean Accuracy in Adversarial Robust Deep Neural Networks [Paper]
  Hossein Aboutalebi (University of Waterloo)*; Mohammad Javad Shafiee (University of Waterloo); Chi-en A Tai (University of Waterloo); Alexander Wong (University of Waterloo)
- A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion [Paper]
  Haomin Zhuang (South China University of Technology)*; Yihua Zhang (Michigan State University); Sijia Liu (Michigan State University)
- Implications of Solution Patterns on Adversarial Robustness [Paper]
  Hengyue Liang (University of Minnesota)*; Buyun Liang (University of Minnesota); Ying Cui (University of Minnesota); Tim Mitchell (Queens College / CUNY); Ju Sun (University of Minnesota)
Accepted Extended Abstracts
- Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection [Paper]
  Tianyuan Zhang (Beihang University)*; Yisong Xiao (Beihang University); Xiaoya Zhang (Beihang University); Li Hao (Beihang University); Lu Wang (Beihang University)
- Benchmarking the Robustness of Quantized Models [Paper]
  Yisong Xiao (Beihang University)*; Tianyuan Zhang (Beihang University); Shunchang Liu (Beihang University); Haotong Qin (Beihang University)
- Higher Model Robustness by Meta-Optimization for Monocular Depth Estimation [Paper] [Supplementary Material]
  Cho-Ying Wu (University of Southern California)*; Yiqi Zhong (University of Southern California); Junying Wang (University of Southern California); Ulrich Neumann (USC)
- Neural Architecture Design and Robustness: A Dataset [Paper] [Supplementary Material]
  Steffen Jung (MPII)*; Jovita Lukasik (University of Mannheim); Margret Keuper (University of Mannheim)
- Boosting Cross-task Transferability of Adversarial Patches with Visual Relations [Paper]
  Wentao Ma (Beihang University)*; SongZe Li (Beihang University); Yisong Xiao (Beihang University); Shunchang Liu (Beihang University)
Challenge
Timeline
| Date | Event |
| --- | --- |
| 2023-03-28 (UTC+8) | Registration opens |
| 2023-03-31 10:00 (UTC+8) | Phase I submission starts |
| 2023-04-28 20:00 (UTC+8) | Registration and Phase I submission deadline |
| 2023-05-01 10:00 (UTC+8) | Phase II submission starts |
| 2023-05-31 20:00 (UTC+8) | Phase II submission deadline |
| 2023-06 | Results announcement |
Award List
| Rank | Team | Prize |
| --- | --- | --- |
| 1 | Huge | ¥20000 |
| 2 | violet | ¥15000 |
| 3 | team_wwwwww | ¥10000 |
| 4 | team_AIHIA | ¥9000 |
| 5 | hhh | ¥6000 |
| 6 | SJTU_ICL | ¥4000 |
| 7 | team_oldman | ¥1500 |
| 7 | puff | ¥1500 |
| 9 | 为了全人类 | ¥1500 |
| 10 | superKI | ¥1500 |
Challenge Chairs
- Zonglei (Beihang University)
- Haotong Qin (Beihang University)
- Siyuan Liang (Chinese Academy of Sciences)
- Ding (SenseTime)
- Yichao (SenseTime)
- Yue (NUDT & OPENI)
- Xianglong Liu (Beihang University)
Sponsors





Program Committee
- Simin Li (Beihang University)
- Yisong Xiao (Beihang University)
- Akshayvarun Subramanya (UMBC)
- Alexander Robey (University of Pennsylvania)
- Ali Shahin Shamsabadi (Queen Mary University of London, UK)
- Angtian Wang (Johns Hopkins University)
- Aniruddha Saha (University of Maryland Baltimore County)
- Anshuman Suri (University of Virginia)
- Bernhard Egger (Massachusetts Institute of Technology)
- Chenglin Yang (Johns Hopkins University)
- Chirag Agarwal (Harvard University)
- Gaurang Sriramanan (Indian Institute of Science)
- Hang Yu (Beihang University)
- Jiachen Sun (University of Michigan)
- Jieru Mei (Johns Hopkins University)
- Jun Guo (Beihang University)
- Ju He (Johns Hopkins University)
- Kibok Lee (University of Michigan)
- Lifeng Huang (Sun Yat-sen University)
- Maura Pintor (University of Cagliari)
- Muhammad Awais (Kyung-Hee University)
- Muzammal Naseer (ANU)
- Nataniel Ruiz (Boston University)
- Qihang Yu (Johns Hopkins University)
- Qing Jin (Northeastern University)
- Rajkumar Theagarajan (University of California, Riverside)
- Ruihao Gong (SenseTime)
- Shiyu Tang (Beihang University)
- Shunchang Liu (Beihang University)
- Sravanti Addepalli (Indian Institute of Science)
- Tianlin Li (NTU)
- Wenxiao Wang (Tsinghua University)
- Won Park (University of Michigan)
- Xiangning Chen (University of California, Los Angeles)
- Xiaohui Zeng (University of Toronto)
- Xingjun Ma (Deakin University)
- Xinwei Zhao (Drexel University)
- Yulong Cao (University of Michigan, Ann Arbor)
- Yutong Bai (Johns Hopkins University)
- Zihao Xiao (Johns Hopkins University)
- Zixin Yin (Beihang University)













