The first workshop on
AI for 3D Generation
Room ARCH 4F - Monday, June 17th, Full Day @CVPR2024
Seattle WA, USA
Remote attendees can join via the Zoom link on the CVPR conference site!
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
NeurIPS 2023
Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models
ICCV 2023
GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
NeurIPS 2022
Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
arXiv 2023
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
ICCV 2023
4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling
arXiv 2023
InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images
ECCV 2022
Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion
CVPR 2023
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
arXiv 2023
Developing algorithms capable of generating realistic, high-quality 3D data at scale has been a long-standing problem in Computer Vision and Graphics. We anticipate that generative models that can reliably synthesize meaningful 3D content will revolutionize the workflows of artists and content creators, and will also enable new levels of creativity through "generative art". Although there has recently been considerable success in generating photorealistic images, the quality and generality of 3D generative models have lagged behind their 2D counterparts. Additionally, efficiently controlling what is generated and scaling these approaches to complex scenes with several static and dynamic objects remain open challenges.
In this workshop, we seek to bring together researchers working on generative models for 3D shapes, humans, and scenes to discuss the latest advances, existing limitations, and the next steps towards generative pipelines capable of producing fully controllable 3D environments with multiple humans interacting with each other or with objects in the scene. In the last few years, there has been significant progress in generating 3D objects, humans, and scenes independently, but only recently has the research community shifted its attention towards generating meaningful dynamics and interactions between humans, or between humans and other scene elements. To this end, the workshop will cover the following topics:
- What is the best representation for generating meaningful variations of 3D objects with texture and high quality details?
- What is the best representation to enable intuitive control over the generated objects?
- How to synthesize realistic humans performing plausible actions?
- How to generate fully controllable 3D environments, where it would be possible to manipulate both the appearance of the scene elements as well as their spatial composition?
- What is the best representation for generating plausible dynamics and interactions between humans or humans and objects?
- What are the ethical implications that arise from artificially generated 3D content, and how can we address them?
- June 15, 2024: The workshop's location has changed. It will now take place in Room ARCH 4F.
- June 15, 2024: Remote attendees can join via the zoom link on the CVPR conference site.
- June 12, 2024: Workshop's schedule released.
- June 10, 2024: Posters can only be put up during the poster session time, namely 15:30-16:30. Materials for attaching posters to the poster boards will be provided on-site.
- June 3, 2024: The poster session will take place from 15:30-16:30, in the Arch Building Exhibit Hall.
- May 31, 2024: The workshop will take place on Monday, June 17th in Room Summit Flex A. For more details, please see here.
- April 7, 2024: We have extended the paper submission deadline by a couple of days! The new paper and supplemental material deadline is April 15 (AoE)!
- January 25, 2024: Workshop website launched, with the tentative list of the invited speakers announced.
The workshop will take place on Monday, June 17th in Room ARCH 4F! Remote attendees can join via the Zoom link on the CVPR conference site. Note that all times in the schedule below are in Pacific Time.
| Time | Speaker / Event | Talk Title |
|---|---|---|
| 08:30 - 08:45 | Welcome and Opening Remarks | |
| 08:45 - 09:25 | Andrea Vedaldi | TBD |
| 09:25 - 09:50 | Ruoshi Liu | TBD |
| 09:50 - 10:15 | Duygu Ceylan | TBD |
| 10:15 - 10:45 | Coffee Break | |
| 10:45 - 11:10 | Dongsu Zhang | Scalable Scene Completion with Generative Cellular Automata |
| 11:10 - 11:50 | Varun Jampani | Adapting image and video generative models for 3D Generation |
| 11:50 - 12:30 | Jun-Yan Zhu | Controllable 3D Generation |
| 12:30 - 13:30 | Lunch Break | |
| 13:30 - 14:10 | Gordon Wetzstein | TBD |
| 14:10 - 14:35 | Jun Gao | 3D Representations for 3D Content Creation |
| 14:35 - 15:00 | Alexander Holynski | TBD |
| 15:00 - 15:25 | Alex Yu | TBD |
| 15:30 - 16:30 | Poster Session | |
| 16:30 - 17:10 | Sergey Tulyakov | Volumetric Generation of Objects, Scenes, and Videos |
| 17:10 - 17:15 | Closing Remarks | |
- Long paper: Long papers should not exceed 8 pages excluding references and should use the official CVPR template. Long papers are intended for presenting mature work; they should describe novel ideas and include extensive experimental evaluations that support them.
- Short paper: Short papers should not exceed 4 pages excluding references and should also use the official CVPR template. Short papers are intended for presenting ideas that are still at an early stage. Although comprehensive analyses and experiments are not necessary for short papers, they should include some basic experiments to support their claims. Moreover, in the short paper track, we encourage submissions with creative contributions demonstrating applications of existing technology in 3D content creation pipelines. For example, we welcome submissions showcasing how ongoing research on 3D generative AI can facilitate the workflow of experienced as well as novice users in fields such as architectural engineering, product design, education, art, and entertainment.
All submissions should be anonymized. Papers with more than 4 pages (excluding references) will be reviewed as long papers, and papers with more than 8 pages (excluding references) will be rejected without review. Supplementary material is optional; supported formats are PDF, MP4, and ZIP. All papers that were not previously presented at a major conference will be peer-reviewed by three experts in the field in a double-blind manner. If you are submitting a previously accepted conference paper, your submission does not need to be anonymized. For already accepted conference papers, please also attach a copy of the acceptance notification email in the supplementary material.
Please note that accepted papers will NOT be included in the IEEE/CVF proceedings, but they will be presented as posters on the day of the workshop.
Submission Website: https://cmt3.research.microsoft.com/AI3DG2024/
All submissions should follow the CVPR paper format: https://github.com/cvpr-org/author-kit/releases
Paper Review Timeline:
| Paper Submission and supplemental material deadline | Monday, April 15, 2024 (AoE time) |
|---|---|
| Notification to authors | Friday, May 10, 2024 |
| Camera ready deadline | Wednesday, May 15, 2024 |
- ICCV 2023: AI for 3D Content Creation
- CVPR 2023: AI for Content Creation
- CVPR 2022: AI for Content Creation
- ECCV 2022: Learning to Generate 3D Shapes and Scenes
- CVPR 2021: AI for Content Creation
- CVPR 2021: Learning to Generate 3D Shapes and Scenes
- CVPR 2020: AI for Content Creation
- CVPR 2020: Learning 3D Generative Models
- CVPR 2019: Deep Learning for Content Creation
- CVPR 2019: 3D Scene Generation