Overview
Computer vision systems today often achieve super-human performance on complex cognitive tasks, yet research in adversarial machine learning shows that they are far less robust than the human visual system. In this context, perturbation-based adversarial examples have received great attention.
Recent work has shown that deep neural networks are also easily challenged by real-world adversarial examples, e.g., partial occlusion, viewpoint changes, atmospheric changes, or style changes. Discovering and harnessing such adversarial examples helps us understand and improve the robustness of computer vision methods in real-world environments, which will in turn accelerate the deployment of computer vision systems in safety-critical applications. In this workshop, we aim to bring together researchers from various fields, including robust vision, adversarial machine learning, and explainable AI, to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.
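As a concrete illustration of a perturbation-based adversarial example, the sketch below implements the fast gradient sign method (FGSM) in PyTorch; the names `model`, `x`, `y`, and the budget `eps` are illustrative placeholders, not a reference implementation.

    # Minimal FGSM sketch (illustrative; `model`, `x`, `y` are placeholders).
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, eps=8 / 255):
        """Perturb images x so the classifier's loss on labels y increases."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Single signed-gradient step, clipped back to the valid pixel range.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

A single signed-gradient step of this form is often enough to flip the prediction of an undefended classifier, which is what makes such perturbations a useful probe of robustness.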
Awards
The workshop is sponsored by RealAI. The funding covers a Best Paper Award, a Best Paper Honorable Mention Award, a Female Leader in Computer Vision Award, and multiple travel grants.
Schedule (EDT)
08:45 - 09:00  Opening Remarks
09:00 - 09:30  Invited Talk 1: Alan Yuille - Adversarial Patches and Compositional Networks
09:30 - 10:00  Invited Talk 2: Tomaso Poggio - Biologically-inspired Defenses Against Adversarial Attacks
10:00 - 11:00  Poster Session 1
11:00 - 11:30  Invited Talk 3: Raquel Urtasun - Adversarial Robustness for Self-driving
11:30 - 12:00  Invited Talk 4: Aleksander Madry - Adversarial Robustness: An Update from the Trenches
12:00 - 12:30  Panel Discussion 1
12:30 - 13:30  Lunch Break
13:30 - 14:00  Invited Talk 5: Kate Saenko
14:00 - 14:30  Invited Talk 6: Cihang Xie - Not All Networks Are Born Equal for Robustness
14:30 - 15:00  Invited Talk 7: Ludwig Schmidt - Adversarial Robustness vs. the Real World
15:00 - 15:30  Invited Talk 8: Nicholas Carlini - Adversarial Attacks That Matter
15:30 - 16:00  Panel Discussion 2
16:00 - 17:00  Poster Session 2
Call For Papers
Submission deadline: August 5, 2021 (extended from July 31), Anywhere on Earth (AoE)
Notification sent to authors: August 12, 2021 (extended from August 7), Anywhere on Earth (AoE)
Camera-ready deadline: August 15, 2021 (extended from August 10), Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/AROW2021/
Submission format:
Submissions must be anonymized and follow the ICCV 2021 Author Instructions.
The workshop considers two types of submissions:
(1) Long Paper: limited to 8 pages excluding references; accepted long papers will be included in the official ICCV proceedings.
(2) Extended Abstract: limited to 4 pages excluding references; accepted extended abstracts will NOT be included in the official ICCV proceedings. Please use the ICCV template for extended abstracts.
Based on the program committee's recommendations, each accepted long paper or extended abstract will be allocated either a contributed talk or a poster presentation.
We invite submissions on any aspect of adversarial robustness in real-world computer vision. This includes, but is not limited to:
- Discovery of real-world adversarial examples
- Novel architectures with robustness to occlusion, viewpoint, and other real-world domain shifts
- Domain adaptation techniques for building robust vision systems in the real world
- Datasets for evaluating model robustness
- Adversarial machine learning for diagnosing and understanding limitations of computer vision systems
- Improving generalization performance of computer vision systems to out-of-distribution samples (see the sketch after this list)
- Structured deep models and explainable AI
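As a toy illustration of evaluating out-of-distribution generalization, the sketch below measures top-1 accuracy under a synthetic distribution shift, using Gaussian pixel noise as a stand-in corruption; `model`, `loader`, and `noise_std` are assumed placeholders, not a prescribed benchmark.

    # Illustrative sketch: top-1 accuracy under a synthetic corruption.
    # `model` and `loader` are assumed to exist; Gaussian noise stands in
    # for real-world shifts such as weather or sensor degradation.
    import torch

    @torch.no_grad()
    def accuracy_under_noise(model, loader, noise_std=0.1):
        correct, total = 0, 0
        for x, y in loader:
            x_shifted = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
            correct += (model(x_shifted).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

Comparing this number against clean accuracy gives a quick, if coarse, read on how much a model's performance degrades off-distribution.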
Accepted Long Papers (Proceedings)
- On the Effect of Pruning on Adversarial Robustness [Paper] [Poster]
  Artur Jordão L Correia (University of Campinas)*; Hélio Pedrini (University of Campinas)
- Trojan Signatures in DNN Weights [Paper] [Poster]
  Greg Fields (UC San Diego)*; Mohammad Samragh Razlighi (University of California San Diego); Mojan Javaheripi (UC San Diego); Farinaz Koushanfar (UCSD); Tara Javidi (University of California San Diego)
- Impact of Colour on Robustness of Deep Neural Networks [Paper]
  Kanjar De (Lulea University of Technology)*; Marius Pedersen (NTNU, Gjovik, Norway)
- Evasion Attack STeganography: Turning Vulnerability Of Machine Learning To Adversarial Attacks Into A Real-world Application [Paper]
  Salah Ghamizi (SnT Luxembourg)*; Maxime Cordy (University of Luxembourg); Mike Papadakis (University of Luxembourg); Yves Le Traon (University of Luxembourg)
- Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap? [Paper]
  Nathan Inkawhich (Duke University)*; Kevin J Liang (Facebook); Jingyang Zhang (Duke University); Huanrui Yang (Duke University); Hai Li (Duke University); Yiran Chen (Duke University)
- Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Single-Source Domain Generalization [Paper] [Poster]
  Thomas J Duboudin (École Centrale de Lyon / LIRIS)*; Emmanuel Dellandrea (EC Lyon); Corentin Abgrall (Thales LAS France); Gilles Henaff (Thales Optronique S.A.S.); Liming Chen (Ecole Centrale de Lyon)
- A Hierarchical Assessment of Adversarial Severity [Paper] [Poster]
  Guillaume Jeanneret (Universidad de los Andes)*; Juan C Perez (Universidad de los Andes; King Abdullah University of Science and Technology); Pablo Arbelaez (Universidad de los Andes)
- Detecting and Segmenting Adversarial Graphics Patterns from Images [Paper]
  Xiangyu Qu (Purdue University)*; Stanley Chan (Purdue University, USA)
- Enhancing Adversarial Robustness via Test-time Transformation Ensembling [Paper] [Poster]
  Juan C Perez (Universidad de los Andes; King Abdullah University of Science and Technology)*; Motasem Alfarra (KAUST); Guillaume Jeanneret (Universidad de los Andes); Laura Rueda-Gensini (Universidad de los Andes); Ali K Thabet (Facebook); Bernard Ghanem (KAUST); Pablo Arbelaez (Universidad de los Andes)
- Optical Adversarial Attack [Paper]
  Abhiram Gnanasambandam (Purdue University)*; Alex M. Sherman (Purdue University); Stanley Chan (Purdue University, USA)
- Countering Adversarial Examples: Combining Input Transformation and Noisy Training [Paper]
  Cheng Zhang (Nanjing University of Aeronautics and Astronautics)*; Pan Gao (Nanjing University of Aeronautics and Astronautics)
- Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? [Paper]
  Max Lennon (JHU/APL); Nathan Drenkow (Johns Hopkins University Applied Physics Laboratory)*; Philippe Burlina (JHU/APL/CS/SOM)
- Can Optical Trojans Assist Adversarial Perturbations? [Paper]
  Adith Boloor (Washington University in St. Louis)*; Tong Wu (Washington University in St. Louis); Patrick Naughton (Washington University in St. Louis); Ayan Chakrabarti (Washington University in St. Louis); Xuan Zhang (Washington University in St. Louis); Yevgeniy Vorobeychik (Washington University in St. Louis)
- Towards Category and Domain Alignment: Category-Invariant Feature Enhancement for Adversarial Domain Adaptation [Paper] [Poster]
  Yuan Wu (Carleton University)*; Diana Inkpen (University of Ottawa); Ahmed El-Roby (Carleton University)
- AdvFoolGen: Creating Persistent Troubles for Deep Classifiers [Paper] [Poster]
  Yuzhen Ding (Arizona State University)*; Nupur Thakur (Arizona State University); Baoxin Li (Arizona State University)
- On Adversarial Robustness: A Neural Architecture Search Perspective [Paper]
  Chaitanya Devaguptapu (Indian Institute of Technology, Hyderabad)*; Devansh Agarwal (IIT Hyderabad); Gaurav Mittal (Microsoft); Pulkit Gopalani (IIT Kanpur); Vineeth N Balasubramanian (Indian Institute of Technology, Hyderabad)
Accepted Extended Abstracts
- Mental Models of Adversarial Machine Learning [Paper]
  Lukas Bieringer (QuantPi); Kathrin Grosse (University of Cagliari)*; Michael Michael (CISPA Helmholtz Center for Information Security); Katharina Krombholz (CISPA Helmholtz Center for Information Security)
- Defending Object Detection Networks Against Adversarial Patch Attacks [Paper] [Poster]
  Thomas Gittings (University of Surrey)*; Steve Schneider (University of Surrey); John Collomosse (Adobe Research)
- An Adversarial Attack on DNN-based Adaptive Cruise Control Systems [Paper] [Poster]
  Yanan Guo (University of Pittsburgh)*; Christopher Dipalma (University of California, Irvine); Takami Sato (University of California, Irvine); Yulong Cao (University of Michigan, Ann Arbor); Qi Alfred Chen (UC Irvine); Yueqiang Cheng (NIO)
- Towards Achieving Adversarial Robustness Beyond Perceptual Limits [Paper] [Supp] [Poster] [Videos]
  Sravanti Addepalli (Indian Institute of Science)*; Samyak Jain (Indian Institute of Technology (BHU), Varanasi); Gaurang Sriramanan (Indian Institute of Science); Shivangi Khare (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
- Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions [Paper] [Poster] [Videos]
  Antonio E Cinà (Ca' Foscari University of Venice)*; Kathrin Grosse (University of Cagliari); Ambra Demontis (University of Cagliari); Sebastiano Vascon (Ca' Foscari University of Venice & European Centre for Living Technology); Battista Biggio (University of Cagliari, Italy); Fabio Roli (University of Cagliari); Marcello Pelillo (Ca' Foscari University of Venice)
- Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise [Paper]
  Anindya Sarkar (IIT Hyderabad); Anirban Sarkar (IIT Hyderabad)*; Vineeth N Balasubramanian (Indian Institute of Technology, Hyderabad)
- Are socially-aware trajectory prediction models really socially-aware? [Paper] [Poster]
  Saeed Saadatnejad (EPFL)*; Mohammadhossein Bahari (EPFL); Seyed-Mohsen Moosavi-Dezfooli (ETH Zurich); Alexandre Alahi (EPFL)
- Efficient Training Methods for Achieving Adversarial Robustness Against Sparse Attacks [Paper] [Supp] [Poster]
  Sravanti Addepalli (Indian Institute of Science)*; Dhruv Behl (Indian Institute of Science); Gaurang Sriramanan (Indian Institute of Science); Venkatesh Babu Radhakrishnan (Indian Institute of Science)
Speakers
Organizing Committee
Program Committee
- Aishan Liu (Beihang University)
- Akshayvarun Subramanya (UMBC)
- Alexander Robey (University of Pennsylvania)
- Ali Shahin Shamsabadi (Queen Mary University of London, UK)
- Angtian Wang (Johns Hopkins University)
- Aniruddha Saha (University of Maryland Baltimore County)
- Anshuman Suri (University of Virginia)
- Bernhard Egger (Massachusetts Institute of Technology)
- Chenglin Yang (Johns Hopkins University)
- Chirag Agarwal (Harvard University)
- Gaurang Sriramanan (Indian Institute of Science)
- Jamie Hayes
- Jiachen Sun (University of Michigan)
- Jieru Mei (Johns Hopkins University)
- Ju He (Johns Hopkins University)
- Kibok Lee (University of Michigan)
- Lifeng Huang (Sun Yat-sen University)
- Maura Pintor (University of Cagliari)
- Muhammad Awais (Kyung-Hee University)
- Muzammal Naseer (ANU)
- Nataniel Ruiz (Boston University)
- Qihang Yu (Johns Hopkins University)
- Qing Jin (Northeastern University)
- Rajkumar Theagarajan (University of California, Riverside)
- Ruihao Gong (SenseTime)
- Shiyu Tang (Beihang University)
- Shunchang Liu (Beihang University)
- Sravanti Addepalli (Indian Institute of Science)
- Tianlin Li (NTU)
- Wenxiao Wang (Tsinghua University)
- Won Park (University of Michigan)
- Xiangning Chen (University of California, Los Angeles)
- Xiaohui Zeng (University of Toronto)
- Xingjun Ma (Deakin University)
- Xinwei Zhao (Drexel University)
- Yulong Cao (University of Michigan, Ann Arbor)
- Yutong Bai (Johns Hopkins University)
- Zihao Xiao (Johns Hopkins University)
- Zixin Yin (Beihang University)
Related Workshops
Uncertainty & Robustness in Deep Learning (Workshop at ICML 2021)
Security and Safety in Machine Learning Systems (Workshop at ICLR 2021)
Generalization beyond the Training Distribution in Brains and Machines (Workshop at ICLR 2021)
1st International Workshop on Adversarial Learning for Multimedia (Workshop at ACM Multimedia 2021)
Please contact Yingwei Li or Adam Kortylewski if you have questions. The webpage template is courtesy of the ECCV 2020 Workshop on Adversarial Robustness in the Real World.