1st International Workshop on Adversarial Learning for Multimedia
Workshop at ACM Multimedia 2021
Overview
Deep learning has achieved significant success in multimedia fields including computer vision, natural language processing, and acoustics. However, research in adversarial learning shows that deep models are highly vulnerable to adversarial examples. Extensive work has demonstrated that adversarial examples can easily fool deep neural networks into making wrong predictions, threatening practical deep learning applications in both the digital and the physical world. Though challenging, discovering and harnessing adversarial attacks is beneficial for diagnosing model blind spots and for understanding and improving multimedia systems in practice.

In this workshop, we aim to bring together researchers from the fields of adversarial machine learning, model robustness, and explainable AI to discuss recent research and future directions for the adversarial robustness of deep learning models, with a particular focus on multimedia applications such as computer vision and acoustics. To the best of our knowledge, this is the first workshop devoted to adversarial learning for multimedia deep learning systems.
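To make the attack setting above concrete, below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015), one of the simplest ways to craft an adversarial example. It is an illustration only; `model`, `x`, and `y` are assumed placeholders for any differentiable PyTorch classifier and a labeled input batch, not part of the workshop materials.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example within an L-infinity ball of radius epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, then keep the
    # result a valid image in [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A single step of this kind is often enough to flip the prediction of an undefended image classifier while leaving the perturbation imperceptible to humans.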
Important Dates
Workshop Schedule
Keynote Speakers
- Alan Yuille, Johns Hopkins University
- Xiaochun Cao, Chinese Academy of Sciences
- Bo Li, University of Illinois at Urbana-Champaign
- Tom Goldstein, University of Maryland
- Baoyuan Wu, The Chinese University of Hong Kong, Shenzhen
- Pin-Yu Chen, IBM
- Boqing Gong
- Cihang Xie, University of California, Santa Cruz
Call for Papers
Deep learning has achieved significant success in multimedia; however, research in adversarial learning shows that deep models are highly vulnerable to adversarial examples. We invite submissions on any aspect of adversarial machine learning in multimedia deep learning systems.
Topics include but are not limited to:
- Adversarial attacks on deep learning systems
- Robust architectures against adversarial attacks
- Training techniques for building robust deep learning systems (see the sketch after this list)
- Benchmarks for evaluating model robustness
- Understanding the adversarial vulnerabilities of deep learning systems
- Improving the generalization of computer vision systems to out-of-distribution samples
- Explainable AI
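As a concrete illustration of the "training techniques" topic, the sketch below pairs a standard PGD attack (Madry et al., 2018) with an adversarial training loop. This is a minimal sketch rather than a reference implementation; `model`, `loader`, and `opt` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball."""
    # Random start inside the ball, clamped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()  # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the ball
            x_adv = x_adv.clamp(0, 1)                  # stay a valid image
    return x_adv.detach()

def adversarial_training_epoch(model, loader, opt):
    """One epoch of training on PGD adversarial examples instead of clean inputs."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
```

Training on the worst-case inputs found by the inner attack, rather than on clean data, is the classic min-max recipe for empirical robustness.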
Organizers
- Dawn Song, UC Berkeley
- Dacheng Tao, JD Explore Academy
- Alan Yuille, Johns Hopkins University
- Anima Anandkumar, California Institute of Technology
- Xianglong Liu, Beihang University
- Aishan Liu, Beihang University
- Xinyun Chen, UC Berkeley
- Yingwei Li, Johns Hopkins University
- Chaowei Xiao, NVIDIA Research & Arizona State University
- Xun Yang, National University of Singapore
Paper Submission
Format: Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template). Please remember to add Concepts and Keywords.
Length: As stated in the CfP, submitted papers may be 6 to 8 pages, with up to two additional pages for references. The reference pages must contain only references. Overlength papers will be rejected without review.
Submission Site: https://cmt3.research.microsoft.com/AdvM2021
Program Committee
- Jiakai Wang, Beihang University
- Ruihao Gong, SenseTime
- Xiaohui Zeng, University of Toronto
- Renshuai Tao, Beihang University
- Zhuozhuo Tu, The University of Sydney
- Tianlin Li, Nanyang Technological University
- Yuqing Ma, Beihang University
- Huiyuan Xie, University of Cambridge
- Shiyu Tang, Beihang University
- Bo Sun, UT Austin
TBD
Related Workshops
- Security and Safety in Machine Learning Systems (Workshop at ICLR 2021)
- Adversarial Robustness in the Real World (Workshop at ICCV 2021)
- Uncertainty & Robustness in Deep Learning (Workshop at ICML 2021)
- Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (Workshop at CVPR 2021)
Sponsors
















