Workshop Recording
Welcome to the official site of the 2nd Workshop on Neural Fields Beyond Conventional Cameras! This workshop will be held in conjunction with CVPR 2025, June 11th-15th, 2025.
Motivation
Neural fields have been widely adopted for novel view synthesis and 3D reconstruction from RGB images by modelling the transport of light in the visible spectrum. This workshop focuses on neural fields beyond conventional cameras, including (1) learning neural fields from data captured by sensors across the electromagnetic spectrum and beyond, such as lidar, cryo-electron microscopy (cryoEM), thermal, event cameras, acoustic sensors, and more, and (2) modelling the associated physics-based differentiable forward models and/or the physics of more complex light transport (reflections, shadows, polarization, diffraction limits, optics, scattering in fog or water, etc.). Our goal is to bring together a diverse group of researchers using neural fields across sensor domains to foster learning and discussion in this growing area.
Schedule
| 1:00 - 1:05pm | Welcome & Introduction | |
| 1:05 - 1:35pm | Keynote: Splatting Beyond the Visible Spectrum: Gaussian Splatting for Radar, Sonar, and More | Speaker: Katherine Skinner |
| 1:35 - 2:05pm | Keynote: Time-of-Flight Neural Fields | Speaker: Christian Richardt |
| 2:05 - 2:35pm | Keynote: Neural Fields for All: Physics, World Models, and Beyond | Speaker: Jingyi Yu |
| 2:35 - 2:45pm | Paper Spotlight 1: Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction | Authors: Youming Deng, Wenqi Xian, Guandao Yang, Leonidas Guibas, Gordon Wetzstein, Steve Marschner, Paul Debevec |
| 2:45 - 2:55pm | Paper Spotlight 2: Neural Refraction Fields for Image Verification | Authors: Sage Simhon, Jingwei Ma, Prafull Sharma, Lucy Chai, Yen-Chen Lin, Phillip Isola |
| 2:55 - 3:05pm | Paper Spotlight 3: Hyperspectral Neural Radiance Fields | Authors: Gerry Chen, Sunil Kumar Narayanan, Thomas Gautier Ottou, Benjamin Missaoui, Harsh Muriki, Yongsheng Chen, Cédric Pradalier |
| 3:05 - 4:00pm | Poster Session & Coffee Break | |
| 4:00 - 4:30pm | Keynote: Multi-modal Neural Fields for Robot Perception and Planning | Speaker: Felix Heide |
| 4:30 - 5:00pm | Keynote: Reconstructing the Cosmos with Physics Constrained Neural Fields | Speaker: Aviad Levis |
| 5:00 - 5:30pm | Keynote: Volume Representations for Inverse Problems | Speaker: Sara Fridovich-Keil |
| 5:30 - 6:00pm | Panel Discussion | Moderator: David Lindell |
Keynote Speakers
Katherine Skinner
University of Michigan
Katherine Skinner is an Assistant Professor in the Department of Robotics at the University of Michigan, with a courtesy appointment in the Department of Naval Architecture and Marine Engineering. Before joining Michigan, she was a Postdoctoral Fellow at Georgia Institute of Technology in the Daniel Guggenheim School of Aerospace Engineering and the School of Earth and Atmospheric Sciences. She earned her M.S. and Ph.D. from the Robotics Institute at the University of Michigan, where she worked in the Deep Robot Optical Perception Laboratory. Her research focuses on robotics, computer vision, and machine learning to enable autonomy in dynamic, unstructured, or remote environments. Her dissertation advanced machine learning methods for underwater robotic perception, and she has collaborated with the Ford Center for Autonomous Vehicles to enhance urban perception.
Christian Richardt
Meta Reality Labs
Christian Richardt is a Research Scientist at Meta Reality Labs in Zurich, Switzerland, having previously worked at the Codec Avatars Lab in Pittsburgh, USA. Before that, he was a Reader (equivalent to Associate Professor) and EPSRC-UKRI Innovation Fellow in the Visual Computing Group and the CAMERA Centre at the University of Bath. His research interests span image processing, computer graphics, and computer vision. He combines insights from vision, graphics, and perception to reconstruct visual information from images and videos, and to create high-quality visual experiences with a focus on novel-view synthesis.
Jingyi Yu
ShanghaiTech University
Jingyi Yu is a professor and executive dean of the School of Information Science and Technology at ShanghaiTech University. He received his B.S. from Caltech in 2000 and his Ph.D. from MIT in 2005, and he is also affiliated with the University of Delaware. His research focuses on computer vision and computer graphics, particularly computational photography and non-conventional optics and camera designs. His research has been generously supported by the National Science Foundation (NSF), the National Institutes of Health (NIH), the Army Research Office, and the Air Force Office of Scientific Research (AFOSR). He is a recipient of the NSF CAREER Award, the AFOSR YIP Award, and the Outstanding Junior Faculty Award at the University of Delaware.
Felix Heide
Princeton University
Felix Heide is a professor of Computer Science at Princeton University, where he heads the Princeton Computational Imaging Lab. He received his Ph.D. from the University of British Columbia and completed a postdoctoral fellowship at Stanford University. He is recognized as a SIGGRAPH Significant New Researcher, Sloan Research Fellow, and Packard Fellow. Previously, he founded the autonomous driving startup Algolux, later acquired by Torc and Daimler Trucks. His research focuses on imaging and computer vision techniques that help devices capture details in challenging conditions, spanning optics, machine learning, optimization, computer graphics, and computer vision.
Aviad Levis
University of Toronto
Aviad Levis is an assistant professor in the Departments of Computer Science and Astronomy and Astrophysics at the University of Toronto, and an associated faculty member at the Dunlap Institute for Astronomy and Astrophysics. His research focuses on scientific computational imaging and AI for science. Previously, he was a postdoctoral scholar in the Department of Computing and Mathematical Sciences at Caltech, supported by the Zuckerman and Viterbi postdoctoral fellowships, working with Katie Bouman on imaging the galactic center black hole as part of the Event Horizon Telescope collaboration. He received his Ph.D. (2020) from the Technion and his B.Sc. (2013) from Ben-Gurion University. His Ph.D. thesis on the tomography of clouds paved the way for an ERC-funded space mission (CloudCT) led by his Ph.D. advisor Yoav Schechner.
Sara Fridovich-Keil
Georgia Tech
Sara Fridovich-Keil is an assistant professor at Georgia Tech's Department of Electrical and Computer Engineering. She completed her postdoctoral research at Stanford University under the guidance of Gordon Wetzstein and Mert Pilanci, after earning her Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley, where she was advised by Ben Recht. Her work focuses on machine learning, signal processing, and optimization to address inverse problems in computer vision as well as in computational, medical, and scientific imaging. Her research aims to identify optimal signal representations while balancing interpretability and computational efficiency.
Accepted Papers
- (Spotlight ⭐) Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction
  Youming Deng, Wenqi Xian, Guandao Yang, Leonidas Guibas, Gordon Wetzstein, Steve Marschner, Paul Debevec
- (Spotlight ⭐) Neural Refraction Fields for Image Verification
  Sage Simhon, Jingwei Ma, Prafull Sharma, Lucy Chai, Yen-Chen Lin, Phillip Isola
- (Spotlight ⭐) Hyperspectral Neural Radiance Fields
  Gerry Chen, Sunil Kumar Narayanan, Thomas Gautier Ottou, Benjamin Missaoui, Harsh Muriki, Yongsheng Chen, Cédric Pradalier
- Neural SDF for Shadow-aware Unsupervised Structured Light
  Kazuto Ichimaru, Diego Thomas, Takafumi Iwaguchi, Hiroshi Kawasaki
- Differentiable Inverse Rendering with Interpretable Basis BRDFs
  Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek
- Luminance-GS: Adapting 3D Gaussian Splatting to Challenging Lighting Conditions with View-Adaptive Curve Adjustment
  Ziteng Cui, Xuangeng Chu, Tatsuya Harada
- Time of the Flight of the Gaussians: Optimizing Depth Indirectly in Dynamic Radiance Fields
  Runfeng Li, Mikhail Okunev, Zixuan Guo, Anh Ha Duong, Christian Richardt, Matthew O'Toole, James Tompkin
- Gaussian Wave Splatting for Computer Generated Holography
  Suyeon Choi*, Brian Chao*, Jackie Yang, Manu Gopakumar, Gordon Wetzstein
- Gaussian Splatting for Efficient Satellite Image Photogrammetry
  Luca Savant Aira, Gabriele Facciolo, Thibaud Ehret
- Z-Splat: Z-Axis Gaussian Splatting for Camera-Sonar Fusion
  Ziyuan Qu, Omkar Vengurlekar, Mohamad Qadri, Kevin Zhang, Michael Kaess, Christopher Metzler, Suren Jayasuriya, Adithya Pediredla
- DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Post-Capture Refocusing, Defocus Rendering and Blur Removal
  Yujie Wang, Praneeth Chakravarthula, Baoquan Chen
- MultimodalStudio: A Heterogeneous Sensor Dataset and Framework for Neural Rendering across Multiple Imaging Modalities
  Federico Lincetto, Gianluca Agresti, Mattia Rossi, Pietro Zanuttigh
- HyperGS: Hyperspectral 3D Gaussian Splatting
  Christopher Thirgood, Oscar Mendez, Erin Ling, Simon Hadfield
- Neural shape reconstruction from multiple views with static pattern projection
  Ryo Furukawa, Kota Nishihara, Hiroshi Kawasaki
- PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields
  Sean Wu, Shamik Basu, Tim Broedermann, Luc Van Gool, Christos Sakaridis
- LiHi-GS: LiDAR-Supervised Gaussian Splatting for Highway Driving Scene Reconstruction
  Pou-Chun Kung, Xianling Zhang, Katherine A Skinner, Nikita Jaipuria
- Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats
  Mingyang Xie, Haoming Cai, Sachin Shah, Yiran Xu, Brandon Y Feng, Jia-Bin Huang, Christopher A Metzler
- 3D Gaussian Splatting Vulnerabilities
  Matthew Hull, Haoyang Yang, Pratham Mehta, Mansi Phute, Aeree Cho, Haoran Wang, Matthew Lau, Wenke Lee, Willian Lunardi, Martin Andreoni, Duen Horng Chau
- SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields
  Jungho Lee, Dogyoon Lee, Minhyeok Lee, Donghyeong Kim, Sangyoun Lee
- CoCoGaussian: Leveraging Circle of Confusion for Gaussian Splatting from Defocused Images
  Jungho Lee, Suhwan Cho, Taeoh Kim, Ho-Deok Jang, Minhyeok Lee, Geonho Cha, Dongyoon Wee, Dogyoon Lee, Sangyoun Lee
- Alpine - A Flexible, User-friendly, Distributed Library for Implicit Neural Representations
  Kushal Vyas, Vishwanath Saragadam, Ashok Veeraraghavan, Guha Balakrishnan
- Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion
  Tianyi Xiong, Jiayi Wu, Botao He, Cornelia Fermuller, Yiannis Aloimonos, Heng Huang, Christopher A. Metzler
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes
  Jan Held*, Renaud Vandeghen*, Abdullah Hamdi*, Adrien Deliege, Anthony Cioppa, Silvio Giancola, Andrea Vedaldi, Bernard Ghanem, Marc Van Droogenbroeck
- Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields
  Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan, Katherine L. Bouman
- DBMovi-GS: Dynamic View Synthesis from Blurry Monocular Video via Sparse-Controlled Gaussian Splatting
  Yeon-Ji Song, Jaein Kim, Byung Ju Kim, Byoung-Tak Zhang
- Reconstruction Using the Invisible: Intuition from NIR and Metadata for Enhanced 3D Gaussian Splatting
  Gyusam Chang, Tuan-Anh Vu, Vivek Alumootil, Harris Song, Deanna Pham, Sangpil Kim, M. Khalid Jawed
- Joint Attitude Estimation and 3D Neural Reconstruction of Non-cooperative Space Objects
  Clément Forray, Pauline Delporte, Nicolas Delaygue, Florence Genin, Dawa Derksen
- HessianForge: Scalable LiDAR Reconstruction with Physics-Informed Neural Representation and Smoothness Energy Constraints
  Hrishikesh Viswanath, Md Ashiqur Rahman, Chi Lin, Damon Conover, Aniket Bera
- RadarSplat: Radar Gaussian Splatting for High-Fidelity Data Synthesis and 3D Reconstruction of Autonomous Driving Scenes
  Pou-Chun Kung, Skanda Harisha, Ram Vasudevan, Aline Eid, Katherine A. Skinner
- SHaDe: Compact and Consistent Dynamic 3D Reconstruction via Tri-Plane Deformation and Latent Diffusion
  Asrar Alruwayqi
- EchoNeRF: Generalizable Neural Radiance Fields for Novel Echocardiographic View Synthesis
  Yuehao Wang, Edward Mei, Zhangyang Wang, Gregory Holste
- ToFGS: Temporally-Resolved Inverse Rendering with Gaussian Splatting for Time-of-Flight
  Omkar Shailendra Vengurlekar, Aaron Saju Augustine, Suren Jayasuriya
Related Works
Below is a collection of example works on neural fields beyond conventional cameras:
- CryoFormer: Continuous Heterogeneous Cryo-EM Reconstruction using Transformer-based Neural Representations ICLR 2024
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar SIGGRAPH 2024
- PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar CVPR 2024
- Dynamic LiDAR Re-simulation using Compositional Neural Fields CVPR 2024
- Eclipse: Disambiguating Illumination and Materials using Unintended Shadows CVPR 2024
- Spectral and Polarization Vision: Spectro-polarimetric Real-world Dataset CVPR 2024
- Neural Spectro-polarimetric Fields SIGGRAPH ASIA 2023
- E-NeRF: Neural Radiance Fields from a Moving Event Camera RA-L 2023
- Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging MIDL 2023
- Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction NeurIPS 2023
- Neural LiDAR Fields for Novel View Synthesis ICCV 2023
- Neural Fields for Structured Lighting ICCV 2023
- E2NeRF: Event Enhanced Neural Radiance Fields from Blurry Images ICCV 2023
- ORCa: Glossy Objects as Radiance Field Cameras CVPR 2023
- Humans as Light Bulbs: 3D Human Reconstruction from Thermal Reflection CVPR 2023
- SeaThru-NeRF: Neural Radiance Fields in Scattering Media CVPR 2023
- EventNeRF: Neural Radiance Fields from a Single Colour Event Camera CVPR 2023
- Neural Interferometry: Image Reconstruction from Astronomical Interferometers Using Transformer-Conditioned Neural Fields AAAI 2022
- Learning Neural Acoustic Fields NeurIPS 2022
- PANDORA: Polarization-Aided Neural Decomposition Of Radiance ECCV 2022
- Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray EMBC 2022