Learning from Limited and Imperfect Data (L2ID)
A joint workshop combining Learning from Imperfect Data (LID) and Visual Learning with Limited Labels (VL3)
June 20, 2021 (Full Day, Virtual Online)
News
- YouTube channel with videos from speakers, orals, posters, and panel sessions is here
- Introduction slides are available here
- The workshop will use the CVPR Zoom link; see the June 20th list of workshops and search for Limited or Imperfect Data. ONLY the poster session at noon PDT will be on Gatherly.
- YouTube video presentations from invited speakers available here! Please ask questions in the comments section and we will ask them during the panels!
- Full list of accepted papers and oral papers available!
Introduction
Learning from limited or imperfect data (L^2ID) refers to a variety of studies that attempt to solve challenging pattern recognition tasks by learning from limited, weak, or noisy supervision. Supervised learning methods, including deep convolutional neural networks, have significantly improved performance on many computer vision problems, thanks to the rise of large-scale annotated datasets and advances in computing hardware. However, these supervised approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. The shortage of labeled data becomes even more severe for visual classes whose annotation requires expert knowledge (e.g., medical imaging), for classes that rarely occur, or for object detection and instance segmentation tasks where labeling requires more effort. To address this problem, many directions, e.g., weakly supervised learning, few-shot learning, self-/semi-supervised learning, cross-domain few-shot learning, and domain adaptation, have been explored to improve robustness in these scenarios. The goal of this workshop is to bring together researchers to discuss emerging technologies for visual learning with limited or imperfectly labeled data. Topics of special interest include (though submissions are not limited to these):
- Few-shot learning for image classification, object detection, etc.
- Cross-domain few-shot learning
- Weakly-/semi- supervised learning algorithms
- Zero-shot learning
- Learning in the "long-tail" scenario
- Self-supervised learning and unsupervised representation learning
- Learning with noisy data
- Any-shot learning: transitioning between few-shot, mid-shot, and many-shot training
- Optimal data and source selection for effective meta-training with a known or unknown set of target categories
- Data augmentation
- New datasets and metrics to evaluate the benefit of such methods
- Real world applications such as object semantic segmentation/detection/localization, scene parsing, video processing (e.g. action recognition, event detection, and object tracking)
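For context on the few-shot setting that recurs in the topics above and in the schedule below, here is a minimal, self-contained sketch of the standard N-way K-shot episodic evaluation with a nearest-prototype classifier. It is purely illustrative and not taken from any workshop paper or challenge; the function name, toy data, and hyperparameters are placeholders, and a real evaluation would use embeddings from a pretrained backbone rather than random features.

```python
# Illustrative sketch only: N-way K-shot episodic evaluation with a
# nearest-prototype classifier over precomputed feature vectors.
import numpy as np

rng = np.random.default_rng(0)

def few_shot_episode(features, labels, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode and score a nearest-prototype classifier."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_idx, query_idx = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support_idx.append(idx[:k_shot])
        query_idx.append(idx[k_shot:k_shot + n_query])
    # Class prototypes: mean embedding of the K labeled support examples per class.
    prototypes = np.stack([features[s].mean(axis=0) for s in support_idx])
    correct = 0
    for class_pos, q in enumerate(query_idx):
        # Assign each query to the nearest prototype (Euclidean distance).
        dists = np.linalg.norm(features[q][:, None, :] - prototypes[None, :, :], axis=-1)
        correct += int((dists.argmin(axis=1) == class_pos).sum())
    return correct / (n_way * n_query)

# Toy stand-in data: 20 classes, 30 examples each, 64-dim "embeddings".
labels = np.repeat(np.arange(20), 30)
features = rng.normal(size=(labels.size, 64)) + 0.5 * labels[:, None]
print(f"5-way 1-shot accuracy on toy data: {few_shot_episode(features, labels):.2f}")
```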
Challenge Information
This year we have two groups of challenges: 1) Localization and 2) Classification. The due date for challenge submissions is May 14th, 2021 (see Important Dates below).
Workshop Paper Submission Information
The contributions can have two formats:
- Extended abstracts of max 4 pages (excluding references)
- Papers of the same length as CVPR submissions
According to the CVPR rules, extended abstracts will not count as archival.
Submissions should be formatted in the CVPR 2021 style and uploaded through the L2ID CMT Site.
Please feel free to contact us at l2idcvpr@gmail.com if you have any suggestions to improve our workshop!
Schedule
(All times shown in PT / EDT)
Time (PT / EDT) | Speaker / Talk | Topic |
---|---|---|
8:00-8:05 PT / 11:00-11:05 EDT | Organizers | Introduction and opening |
8:05-8:45 PT / 11:05-11:45 EDT | Guoliang Kang - Pixel-Level Cycle Association: Domain Adaptive Semantic Seg.; Angela Dai - Learning from Imperfect RGB-D Scan Data; Colin Raffel - Explicit and Implicit Entropy Minimization in Proxy-Label-Based SSL; Sanja Fidler - Image GANs for Reducing Pixel-Wise Supervision; Oral Papers: [A, B, D, H] | Unlabeled data / Self-/Semi-Supervised, Domain Adaptation |
8:45-9:15 PT / 11:45-12:15 EDT | [A, B, D, H, Classification Challenge Participants] | Paper Spotlight Talks |
9:15-9:55 PT / 12:15-12:55 EDT | Chelsea Finn - Few Shot Learning in the Real World; Rogerio Ferris - How Transferable are Contrastive Representations?; Trevor Darrell - Recent Progress on Unsupervised Detection and Adaptation; Oral Papers: [G, J, L] | Few-Shot Learning |
9:55-10:10 PT / 12:55-13:10 EDT | Coffee Break | |
10:10-10:50 PT / 13:10-13:50 EDT | Boqing Gong - When Vision Transformers Outperform ResNets; Vahan Petrosyan - Tools to Share Datasets and Find Imperfect Data in CV; Olga Russakovsky - Mitigating Bias and Privacy Concerns in Visual Data; Dina Katabi - Making Contrastive Learning Robust to Shortcuts and Generalize it to New Modalities; Oral Papers: [E, K] | Robustness, adversarial, bias/fairness, deployment/industry |
10:50-11:20 PT / 13:50-14:20 EDT | Oral Papers: [G, J, L, E, K] | Paper Spotlight Talks |
11:20-12:00 PT / 14:20-15:00 EDT | Alexander Schwing - Not All Unlabeled Data are Equal; Humphrey Shi - Escaping the Big Data Paradigm with Compact Transformers; Anurag Arnab - Video Understanding with Imperfect Data; Oral Papers: [C, F, I] | Imperfect/Noisy/Weakly supervised |
12:00-14:00 PT / 15:00-17:00 EDT | Gatherly Poster Session / Lunch Break | |
14:00-14:40 PT / 17:00-17:40 EDT | Aarti Singh - Learning from Preferences and Labels; Philip Isola - When and Why Does Contrastive Learning Work? | Theory/Optimization |
14:40-15:10 PT / 17:40-18:10 EDT | Oral Papers: [C, F, I, Localization Challenge Participants] | Paper Spotlight Talks |
15:10-15:50 PT / 18:10-18:50 EDT | All available speakers | Future Directions (panel) |
15:50-16:00 PT / 18:50-19:00 EDT | Organizers | Wrap-up Discussion |
Oral Papers (referenced above by ID)
ID | Title |
---|---|
A | Training Deep Generative Models in Highly Incomplete Data Scenarios with Prior Regularization |
B | Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities |
C | Unlocking the Full Potential of Small Data with Diverse Supervision |
D | Distill on the Go: Online knowledge distillation in self supervised learning |
E | Learning Unbiased Representations via Mutual Information Backpropagation |
F | PLM: Partial Label Masking for Imbalanced Multi-label Classification |
G | ReMP: Rectified Metric Propagation for Few-Shot Learning |
H | A Closer Look at Self-training for Zero-Label Semantic Segmentation |
I | An Exploration into why Output Regularization Mitigates Label Noise |
J | Shot in the Dark: Few-Shot Learning with No Base-Class Labels |
K | Contrastive Learning Improves Model Robustness Under Label Noise |
L | A Simple Framework for Cross-Domain Few-Shot Recognition with Unlabeled Data |
Speakers
Invited speakers (see the schedule above for talk titles): Guoliang Kang, Angela Dai, Colin Raffel, Sanja Fidler, Chelsea Finn, Rogerio Ferris, Trevor Darrell, Boqing Gong, Vahan Petrosyan, Olga Russakovsky, Dina Katabi, Alexander Schwing, Humphrey Shi, Anurag Arnab, Aarti Singh, Philip Isola.
Important Dates
Description | Date |
---|---|
Paper submission deadline | March 25th, 2021 |
Notification to authors | April 8th, 2021 (extended to Apr 13) |
Camera-ready deadline | April 20th, 2021 |
Challenge submission deadline | May 14th, 2021 |
People
Contact: yug185 AT eng.ucsd.edu