Overview
Computer vision systems today achieve strong performance, yet research in adversarial machine learning shows that they are far less robust than the human visual system.
Recent work has shown that real-world adversarial examples exist when objects are partially occluded or viewed in previously unseen poses and environments (such as different weather conditions).
Discovering and harnessing those adversarial examples provides opportunities for understanding and improving computer vision systems in real-world environments.
In particular, deep models with structured internal representations seem to be a promising approach to enhance robustness in the real world, while also being able to explain their predictions.
In this workshop, we aim to bring together researchers from the fields of adversarial machine learning, robust vision, and explainable AI to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.
Awards
Towards Analyzing Semantic Robustness of Deep Neural Networks
Abdullah J Hamdi (KAUST)*; Bernard Ghanem (KAUST)
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Fu Lin (Georgia Institute of Technology)*; Rohit Mittapalli (Georgia Institute of Technology); Prithvijit Chattopadhyay (Georgia Institute of Technology); Daniel Bolya (University of California, Davis); Judy Hoffman (Georgia Tech)
Prof. Judy Hoffman (Georgia Tech)
Schedule
08:45 - 09:00         Opening Remarks
09:00 - 09:30         Invited Talk 1: Andreas Geiger - Attacking Optical Flow
09:30 - 10:00         Invited Talk 2: Wieland Brendel - To Defend Against Adversarial Examples We Need to Understand Human Vision
10:00 - 12:00         Poster Session 1
12:00 - 12:30         Invited Talk 3: Alan Yuille - Adversarial Robustness
12:30 - 13:00         Invited Talk 4: Raquel Urtasun - Adversarial Attacks and Robustness for Self-Driving
13:00 - 14:30         Lunch Break
14:30 - 15:00         Invited Talk 5: Alex Robey - Model-based Robust Deep Learning
15:00 - 15:30        Invited Talk 6: Judy Hoffman - Achieving and Understanding Adversarial Robustness
15:30 - 16:00         Invited Talk 7: Honglak Lee - Generative Modeling Perspective for Synthesizing and Interpreting Adversarial Attacks
16:00 - 16:30         Invited Talk 8: Bo Li - Secure Learning in Adversarial Autonomous Driving Environments
16:30 - 17:00        Invited Talk 9: Daniel Fremont - Semantic Adversarial Analysis with Formal Scenarios
17:00 - 17:45        Panel Discussion
17:45 - 19:00        Poster Session 2
Accepted Papers
- A Deep Dive into Adversarial Robustness in Zero-shot Learning
Mehmet Kerim Yücel (Hacettepe University)*; Ramazan Gokberk Cinbis (METU); Pinar Duygulu (Hacettepe University)
- AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds
Abdullah J Hamdi (KAUST)*; Sara Rojas Martinez (KAUST); Ali K Thabet (KAUST); Bernard Ghanem (KAUST)
- Towards Analyzing Semantic Robustness of Deep Neural Networks
Abdullah J Hamdi (KAUST)*; Bernard Ghanem (KAUST)
- Deep k-NN Defense against Clean-label Data Poisoning Attacks
Neehar Peri (University of Maryland)*; Neal Gupta (UMD); W. Ronny Huang (Google Research); Chen Zhu (University of Maryland); Liam Fowl (University of Maryland); Soheil Feizi (University of Maryland); Tom Goldstein (University of Maryland, College Park); John P Dickerson (University of Maryland)
- How Well Do Bayesian Neural Networks Perform Against Out of Distribution and Adversarial Examples?
John Mitros (UCD)*; Arjun Pakrashi (University College Dublin); Brian Mac Namee (University College Dublin)
- Adversarial Shape Perturbations on 3D Point Clouds
Daniel Liu (Torrey Pines High School)*; Ronald Yu (UCSD); Hao Su (UCSD)
- Jacks of All Trades, Masters of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks
Neil Fendley (JHU/APL); Max Lennon (JHU/APL); I-Jeng Wang (Johns Hopkins University); Philippe Burlina (JHU/APL/CS/SOM); Nathan Drenkow (Johns Hopkins University Applied Physics Laboratory)*
- The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Dan Hendrycks (UC Berkeley)*
- Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison
Lukas P Brunke (Volkswagen Group of America)*; Prateek Agrawal (Volkswagen Group of America); Nikhil George (Volkswagen Group of America)
- Fooling Semantic Segmentation in One Step via Manipulating Nuisance Factors
Guangyu Shen (Purdue University)*; Chengzhi Mao (Columbia University); Junfeng Yang (Columbia University); Baishakhi Ray (Columbia University)
- Adversarial Robustness of the Open-set Recognition Systems
Xiao Gong (Nanjing University)*; Guosheng Hu (AnyVision); Timothy Hospedales (Edinburgh University); Yongxin Yang (University of Edinburgh)
- WaveTransform: Crafting Adversarial Examples via Input Decomposition
Divyam Anshumaan (IIIT Delhi); Akshay Agarwal (IIIT Delhi); Mayank Vatsa (IIT Jodhpur)*; Richa Singh (IIT Jodhpur)
- Beyond the Pixels: Exploring the Effect of Video File Corruptions on Model Robustness
Trenton Chang (Stanford University)*; Daniel Y Fu (Stanford University); Sharon Yixuan Li; Christopher Re (Stanford University)
- Multitask Learning Strengthens Adversarial Robustness
Chengzhi Mao (Columbia University)*; Amogh Gupta; Vikram Nitin; Baishakhi Ray; Shuran Song; Junfeng Yang; Carl Vondrick (Columbia University)
- Investigating Distributional Robustness: Semantic Perturbations Using Generative Models
Isaac Dunn (University of Oxford)*; Laura Hanu (Unitary); Hadrien Pouget (University of Oxford); Daniel Kroening (Oxford University); Tom Melham (University of Oxford)
- Robust Super-Resolution of Real Faces using Smooth Features
Saurabh Goswami (Indian Institute of Technology, Madras)*; Aakanksha Aakanksha (Indian Institute of Technology, Madras); Rajagopalan N Ambasamudram (Indian Institute of Technology Madras)
- Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Fu Lin (Georgia Institute of Technology)*; Rohit Mittapalli (Georgia Institute of Technology); Prithvijit Chattopadhyay (Georgia Institute of Technology); Daniel Bolya (University of California, Davis); Judy Hoffman (Georgia Tech)
- Instance Adaptive Adversarial Training: Improved Accuracy-Robustness Trade-offs in Neural Nets
Yogesh Balaji (UMD, College Park)*; Tom Goldstein (University of Maryland, College Park); Judy Hoffman (Georgia Tech)
- Improved Robustness to Open Set Inputs via Tempered Mixup
Ryne P Roady (Rochester Institute of Technology)*; Tyler Hayes (RIT); Christopher Kanan (RIT)
- Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
Xinwei Zhao (Drexel University)*; Matthew Stamm (Drexel University)
- Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks
Francesco Croce (University of Tübingen)*; Maksym Andriushchenko (EPFL); Naman D Singh (University of Tübingen); Nicolas Flammarion (EPFL); Matthias Hein (University of Tübingen)
- RayS: A Ray Searching Method for Hard-label Adversarial Attack
Jinghui Chen (UCLA)*; Quanquan Gu (University of California, Los Angeles)
- Adversarial Attack on Deepfake Detection using RL based Texture Patches
Steven Fernandes (Creighton University)*; Sumit Kumar Jha (University of Texas at San Antonio)
Call For Papers
Submission deadline: July 20, 2020 (extended from July 10), Anywhere on Earth (AoE)
Reviews due: July 27, 2020 (extended from July 26), Anywhere on Earth (AoE)
Notification sent to authors: July 29, 2020 Anywhere on Earth (AoE)
Presentation materials deadline: August 16, 2020 Anywhere on Earth (AoE)
Camera ready deadline: September 10, 2020 Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/AROW2020/
Submission format:
Submissions need to be anonymized and follow the
ECCV 2020 Author Instructions.
The workshop considers two types of submissions:
(1) Long Paper: Papers are limited to 14 pages excluding references and will be included in the official ECCV proceedings;
(2) Extended Abstract: Papers are limited to 4 pages including references and will NOT be included
in the official ECCV proceedings. Please use the CVPR template for extended abstracts.
Based on the PC recommendations, each accepted long paper or extended abstract will be allocated either a
contributed talk or a poster presentation.
We invite submissions on any aspect of adversarial robustness in real-world computer vision. This includes, but is not limited to:
- Discovery of real-world adversarial examples
- Novel architectures with robustness to occlusion, viewpoint and other real-world domain shifts
- Domain adaptation techniques for building robust vision systems in the real world
- Datasets for evaluating model robustness
- Adversarial machine learning for diagnosing and understanding limitations of computer vision systems
- Improving generalization performance of computer vision systems to out-of-distribution samples
- Structured deep models
- Explainable AI
Speakers
Organizing Committee
Program Committee
- Akshayvarun Subramanya (UMBC)
- Alexander Robey (University of Pennsylvania)
- Ali Shahin Shamsabadi (Queen Mary University of London)
- Aniruddha Saha (University of Maryland Baltimore County)
- Anshuman Suri (University of Virginia)
- Bernhard Egger (MIT)
- Chen Zhu (University of Maryland)
- Chenglin Yang (Johns Hopkins University)
- Chirag Agarwal (UIC)
- Gaurang Sriramanan (Indian Institute of Science)
- Jamie Hayes (University College London)
- Jiachen Sun (University of Michigan)
- Jieru Mei (Johns Hopkins University)
- Kibok Lee (University of Michigan)
- Lifeng Huang (Sun Yat-sen University)
- Mario Wieser (University of Basel)
- Maura Pintor (University of Cagliari)
- Muhammad Awais (Kyung-Hee University)
- Muzammal Naseer (ANU)
- Nataniel Ruiz (Boston University)
- Peng Tang (Salesforce Research)
- Qihang Yu (Johns Hopkins University)
- Rajkumar Theagarajan (University of California, Riverside)
- Sravanti Addepalli (Indian Institute of Science)
- Tianyu Pang (Tsinghua University)
- Won Park (University of Michigan)
- Xiangning Chen (University of California, Los Angeles)
- Xingjun Ma (Deakin University)
- Xinwei Zhao (Drexel University)
- Yash Sharma (University of Tübingen)
- Yulong Cao (University of Michigan, Ann Arbor)
- Yuzhe Yang (MIT)
- Ziqi Zhang (Peking University)
Sponsor

Please contact Adam Kortylewski or Cihang Xie if you have questions. The webpage template is by the courtesy of CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision.