CtrlGen: Controllable Generative Modeling in Language and Vision
Abstract
Over the past few years, interest in language and image generation has grown considerably within the community. As text generated by models like GPT-3 sounds increasingly fluent and natural, and images and videos generated by GAN models appear increasingly realistic, researchers have begun focusing on qualitative properties of the generated content, such as the ability to control its style and structure or to incorporate information from external sources into the output. These aims are essential to making language and image generation useful for human-machine interaction and other real-world applications, including machine co-creativity, entertainment, reducing bias and toxicity, and improving conversational agents and personal assistants.
Achieving these ambitious but important goals introduces challenges not only from the NLP and Vision perspectives, but also for Machine Learning as a whole, which has seen a growing body of research in relevant areas such as interpretability, disentanglement, robustness, and representation learning. We believe that progress toward human-like language and image generation can benefit greatly from insights and advances in these and other ML areas.
In this workshop, we bring together researchers from the NLP, Vision, and ML communities to discuss the current challenges in controllable generation and to explore directions for improving its quality, correctness, and diversity. With excitement about language and image generation surging thanks to the advent and improvement of language models, Transformers, and GANs, this is an opportune time for a new workshop on the subject. We hope CtrlGen will foster discussion and interaction across communities and seed fruitful cross-domain collaborations that open the door to enhanced controllability in language and image generation.
Video
Schedule
8:35 AM
9:00 AM - Irina Higgins
1:50 PM
2:15 PM - Or Patashnik
4:20 PM - Invited Talk #7 - Controllable Text Generation with Multiple Constraints (Yulia Tsvetkov)
Seung Hyun Lee · Sang Ho Yoon · Jinkyu Kim · Sangpil Kim
Antoine Chaffin · Vincent Claveau · Ewa Kijak
Asif Khan · Amos Storkey
Alara Dirik · Hilal Dönmez · Pinar Yanardag
Tomasz Korbak · Hady Elsahar · Germán Kruszewski · Marc Dymetman
Ghazi Felhi · Joseph Roux · Djamé Seddah
Jeffrey Wen · Fabian Benitez-Quiroz · Qianli Feng · Aleix Martinez
Bryan Eikema · Germán Kruszewski · Hady Elsahar · Marc Dymetman
XCI-Sketch: Extraction of Color Information from Images for Generation of Colored Outlines and Sketches
Poster
V Manushree · Sameer Saxena · Parna Chowdhury · Manisimha Varma Manthena · Harsh Rathod · Ankita Ghosh · Sahil Khose
Alex Lambert · Sanjeel Parekh · Zoltan Szabo · Florence d'Alché-Buc
Adarsh Kappiyath · Silpa Vadakkeeveetil Sreelatha
Nan Liu · Shuang Li · Yilun Du
Gautam Singh · Fei Deng · Sungjin Ahn
Hyukgi Lee · Gi-Cheon Kang · Chang-Hoon Jeong · Hanwool Sul · Byoung-Tak Zhang
Yusong Wu · Ethan Manilow · Kyle Kastner · Tim Cooijmans · Aaron Courville · Cheng-Zhi Anna Huang · Jesse Engel
Kaylee Burns · Christopher D Manning · Li Fei-Fei
SK Mainul Islam · Abhinav Nagpal · Balaji Ganesan · Pranay Lohia
Nishtha Madaan · Srikanta Bedathur
Yizhou Zhao · Kaixiang Lin · Zhiwei Jia · Qiaozi Gao · Govindarajan Thattai · Jesse Thomason · Gaurav Sukhatme