Deep Generative Models for Highly Structured Data (ICLR 2022 Workshop)
(For any questions please email dgm4highlystructureddata@gmail.com)
Deep generative models are at the core of research in artificial intelligence, especially for unlabelled data. They have achieved remarkable performance in domains including computer vision, natural language processing, speech recognition, and audio synthesis. More recently, deep generative models have been applied to broader domains, including the natural sciences such as physics, chemistry, molecular biology, and medicine. However, deep generative models still face challenges when applied to these domains, which give rise to highly structured data. This workshop aims to bring together experts with different backgrounds and perspectives to discuss the application of deep generative models to these data modalities. The workshop will emphasize the challenges of encoding domain knowledge when learning representations, performing synthesis, or making predictions. Since evaluation is essential for benchmarking, the workshop will also serve as a platform for discussing rigorous ways to evaluate representations and synthesis.
Topics relevant to this workshop include, but are not limited to:
Important Dates
| Paper submission deadline | |
| Acceptance notification | 25 March 2022 |
| Camera-ready deadline | 15 April 2022 |
| Workshop | 29 April 2022 |
Submissions
We solicit high-quality 4-page papers (references not included in the page limit) presenting contributions that are either (1) original or (2) recently published.
Invited Speakers
| Max Welling | UvA and Microsoft Research |
| Octavian Ganea | Massachusetts Institute of Technology |
| Ellen Zhong | Massachusetts Institute of Technology |
| Pratyush Tiwary | University of Maryland |
| Geemi Wellawatte | University of Rochester |
Organization
Organizing Committee
| Yuanqi Du | George Mason University |
| Adji Bousso Dieng | Princeton University |
| Yoon Kim | Massachusetts Institute of Technology |
| Rianne van den Berg | Microsoft Research |
| Yoshua Bengio | MILA and Université de Montréal |