The 4th Workshop of Adversarial Machine Learning on Computer Vision: Robustness of Foundation Models
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 17-21, 2024. Seattle, WA, USA
Overview
Artificial intelligence (AI) has entered a new era with the emergence of foundation models (FMs). By leveraging extensive model parameters and training data, these models demonstrate powerful generative capabilities and have become a dominant force in computer vision, revolutionizing a wide range of applications. Alongside their potential benefits, the increasing reliance on FMs has also exposed their vulnerability to adversarial attacks: malicious, imperceptible perturbations applied to input images or prompts that can cause a model to misclassify objects or generate adversary-intended outputs. Such vulnerabilities pose significant risks in safety-critical applications, such as autonomous vehicles and medical diagnosis, where incorrect predictions can have dire consequences. By studying and addressing the robustness challenges associated with FMs, we can enable practitioners to build more robust and reliable FMs across various domains.
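To make the threat model above concrete, below is a minimal sketch of the classic one-step white-box image attack, the fast gradient sign method (FGSM), in PyTorch. The `model`, `image`, `label`, and `epsilon` budget are illustrative placeholders rather than anything specific to the workshop; the sketch only shows how an imperceptibly small perturbation can be derived from the loss gradient.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One-step FGSM (untargeted, white-box): nudge each pixel by
    +/- epsilon in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is bounded by epsilon per pixel (L-infinity norm),
    # so it is typically imperceptible to humans.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

Even with a budget as small as epsilon = 8/255, perturbations of this kind routinely flip the predictions of otherwise accurate models, which is precisely the fragility this workshop aims to understand and mitigate.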
The workshop will bring together researchers and practitioners from the computer vision and machine learning communities to explore the latest advances and challenges in adversarial machine learning, with a focus on the robustness of foundation models. The program will consist of invited talks by leading experts in the field, as well as contributed talks and poster sessions featuring the latest research. In addition, the workshop will host a challenge on adversarially attacking foundation models.
We believe this workshop will provide a unique opportunity for researchers and practitioners to exchange ideas, share the latest developments, and collaborate on addressing the challenges associated with the robustness and security of foundation models. We expect the workshop to generate insights and discussions that help advance the field of adversarial machine learning and contribute to the development of more secure and robust foundation models for computer vision applications.
Workshop Schedule (Tentative)
| Event | Start time | End time |
| --- | --- | --- |
| Opening Remarks | 8:30 | 8:40 |
| Challenge Session | 8:40 | 9:00 |
| Invited Talk #1: Prof. Chaowei Xiao | 9:00 | 9:30 |
| Invited Talk #2: Prof. Bo Li | 9:30 | 10:00 |
| Invited Talk #3: Prof. Zico Kolter | 10:00 | 10:30 |
| Invited Talk #4: Prof. Neil Gong | 10:30 | 11:00 |
| Poster Session #1 | 11:00 | 12:30 |
| Lunch | 12:30 | 13:30 |
| Invited Talk #5: Prof. Ludwig Schmidt | 13:30 | 14:00 |
| Invited Talk #6: Prof. Florian Tramèr | 14:00 | 14:30 |
| Invited Talk #7: Prof. Tom Goldstein | 14:30 | 15:00 |
| Invited Talk #8: Prof. Alex Beutel | 15:00 | 15:30 |
| Poster Session #2 | 15:30 | 16:30 |
Proposed Speakers
| Speaker | Affiliation |
| --- | --- |
| Bo Li | University of Chicago |
| Tom Goldstein | University of Maryland |
| Ludwig Schmidt | University of Washington |
| Chaowei Xiao | University of Wisconsin, Madison |
| Florian Tramèr | ETH Zurich |
| Alex Beutel | OpenAI |
| Zico Kolter | Carnegie Mellon University |
| Neil Gong | Duke University |
Organizers
| Organizer | Affiliation |
| --- | --- |
| Aishan Liu | Beihang University |
| Jiakai Wang | Zhongguancun Laboratory |
| Mo Zhou | Johns Hopkins University |
| Qing Guo | A*STAR |
| Xiaoning Du | Monash University |
| Xinyun Chen | Google Brain |
| Felix Juefei-Xu | AI at Meta |
| Xianglong Liu | Beihang University |
| Vishal M. Patel | Johns Hopkins University |
| Dawn Song | UC Berkeley |
| Alan Yuille | Johns Hopkins University |
| Philip Torr | University of Oxford |
| Dacheng Tao | The University of Sydney |
Call for Papers
Topics of interest include, but are not limited to:
- Robustness of foundation models
- Adversarial attacks on computer vision tasks
- Improving the robustness of deep learning systems
- Interpreting and understanding model robustness, especially foundation models
- Adversarial attacks for social good
- Datasets and benchmarks for evaluating foundation model robustness
Submission Site: https://cmt3.research.microsoft.com/CVPRAdvML2024
Submission Due: March 15, 2024, Anywhere on Earth (AoE)
Distinguished Paper Award
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
[Paper]
Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao.
Accepted Long Papers
- Large Language Models in Wargaming: Methodology, Application, and Robustness
[Paper]
Yuwei Chen (Aviation Industry Development Research Center of China)*; Shiyong Chu (Aviation Industry Development Research Center of China)
- Enhancing Targeted Attack Transferability via Diversified Weight Pruning
[Paper]
Hung-Jui Wang (National Taiwan University)*; Yu-Yu Wu (National Taiwan University); Shang-Tse Chen (National Taiwan University)
- Enhancing the Transferability of Adversarial Attacks with Stealth Preservation
[Paper]
Xinwei Zhang (Beihang University); Tianyuan Zhang (Beihang University); Yitong Zhang (Beihang University); Shuangcheng Liu (Beihang University)*
- Adversarial Attacks on Foundational Vision Models
Nathan Inkawhich (Air Force Research Laboratory)*; Ryan S Luley (Air Force Research Laboratory); Gwendolyn N McDonald (AFRL)
- Benchmarking Robustness in Neural Radiance Fields
Chen Wang (University of Pennsylvania)*; Angtian Wang (Johns Hopkins University); Junbo Li (UC Santa Cruz); Alan Yuille (Johns Hopkins University); Cihang Xie (University of California, Santa Cruz)
- Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability
[Paper]
Muchao Ye (The Pennsylvania State University); Xiang Xu (Amazon)*; Qin Zhang (Amazon.com); Jonathan Wu (Amazon)
- Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
[Paper]
Chenguang Wang (Washington University)*; Ruoxi Jia (Virginia Tech); Xin Liu (University of California); Dawn Song (UC Berkeley)
- Red-Teaming Segment Anything Model
[Paper]
Krzysztof Jankowski (University of Warsaw)*; Bartłomiej Jan Sobieski (Warsaw University of Technology); Mateusz Kwiatkowski (MI2.AI, University of Warsaw); Jakub Szulc (University of Warsaw); Michał Janik (University of Warsaw); Hubert Baniecki (University of Warsaw); Przemyslaw Biecek (Warsaw University of Technology)
- Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities
[Paper]
SangHwa Hong (SeoulTech)*
- Multimodal Attack Detection for Action Recognition Models
[Paper]
Furkan Mumcu (University of South Florida)*; Yasin Yilmaz (University of South Florida)
Accepted Extended Abstracts
- Attack End-to-End Autonomous Driving through Module-Wise Noise
[Paper]
Lu Wang (Beihang University); Tianyuan Zhang (Beihang University)*; Yikai Han (Beijing University of Aeronautics and Astronautics); Muyang Fang (Beihang University); Ting Jin (Beihang University); Jiaqi Kang (Beihang University)
- Scaling Vision-Language Models Does Not Improve Relational Understanding: The Right Learning Objective Helps
[Paper]
Haider Al-Tahan (Meta - FAIR)*; Quentin Garrido (Meta - FAIR); Randall Balestriero (Facebook AI Research); Diane Bouchacourt (Facebook AI); Caner Hazirbas (Meta AI); Mark Ibrahim (Capital One Center for Machine Learning)
- ResampleTrack: Online Resampling for Adversarially Robust Visual Tracking
Xuhong Ren (School of Computer Science and Engineering, Tianjin University of Technology); Jianlang Chen (Kyushu University); Yue Cao (Nanyang Technological University); Wanli Xue (Tianjin University of Technology)*; Qing Guo (A*STAR); Lei Ma (The University of Tokyo / University of Alberta); Jianjun Zhao (Kyushu University); Shengyong Chen (Zhejiang University of Technology; Tianjin University of Technology)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
[Paper] ⭐ Distinguished Paper Award
Siyuan Liang (National University of Singapore)*; Kuanrong Liu (Sun Yat-sen University); Jiajun Gong (National University of Singapore); Jiawei Liang (Sun Yat-sen University); Yuan Xun (Institute of Information Engineering, Chinese Academy of Sciences); Ee-Chien Chang (National University of Singapore); Xiaochun Cao (Sun Yat-sen University)
Challenge
Rank List
| Rank | Team |
| --- | --- |
| 1 | team_aoliao |
| 2 | AdvAttacks |
| 3 | team_Aikedaer |
| 4 | daqiezi |
| 5 | team_theone |
| 6 | AttackAny |
| 7 | team_bingo |
| 8 | Ynu AI Sec |
| 9 | Tsinling |
| 10 | abandon |
Challenge Chairs
| Chair | Affiliation |
| --- | --- |
| Jiakai Wang | Zhongguancun Laboratory |
| Zonglei | Beihang University |
| Hainan | Institute |
| Zhenfei Yin | Shanghai AI Laboratory & The University of Sydney |
| Haotong Qin | ETH Zurich |
| Jing Shao | Shanghai AI Laboratory |
| Xianglong Liu | Beihang University |
| Shengshan Hu | Huazhong University of Science and Technology |
| Long | Huazhong University of Science and Technology |
| Yanjun | Zhongguancun Laboratory |
| Yue | OpenI |
Sponsors







Program Committee
- Yisong (Beihang University)
- Jin (Beihang University)
- Tianyuan Zhang (Beihang University)
- Siyang (Beihang University)
- Tianlin Li (NTU)
- Xinwei Liu (IIE, CAS)
- Kui Zhang (USTC)
- Linghui Zhu (Tsinghua University)
- Jingzhi Li (IIE, CAS)
- Bangyan He (IIE, CAS)
- Yikun Xu (IIE, CAS)
- Yulong Cao (University of Michigan)
- Zhipeng Wei (Fudan University)
- Xinwei Zhang (Beihang University)
- Jiangfan Liu (Beihang University)
- Haojie Hao (Beihang University)
- Akshayvarun Subramanya (UMBC)
- Chirag Agarwal (Harvard University)
- Jieru Mei (Johns Hopkins University)
- Jun Guo (Beihang University)
- Kibok Lee (University of Michigan)
- Lifeng Huang (Sun Yat-sen University)
- Maura Pintor (University of Cagliari)
- Ruihao Gong (SenseTime)
- Jiachen Sun (University of Michigan)
- Lu Wang (Beihang University)
- Zihao Xiao (Johns Hopkins University)
- Siqi Wang (Boston University)



















