Hello, I'm Keunhong Park
I am a researcher in 3D computer vision and generative AI. I am a founding member at World Labs, where we are expanding the frontier of spatial intelligence. I was previously a research scientist at Google, where I built technology to generate 3D assets for products on Google Search.
I received my Ph.D. from the University of Washington in 2021, where I was advised by Ali Farhadi and Steve Seitz.
My current research interests are primarily in 3D generative AI and diffusion models.
Highlights
Publications
IllumiNeRF: 3D Relighting without Inverse Rendering
3D relighting by distilling samples from a 2D image relighting diffusion model into a latent-variable NeRF.
ReconFusion: 3D Reconstruction with Diffusion Priors
Using a multi-view image-conditioned diffusion model to regularize a NeRF enables few-view reconstruction.
CamP: Camera Preconditioning for Neural Radiance Fields
Preconditioning camera optimization during NeRF training significantly improves the ability to jointly recover the scene and camera parameters.
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
By applying ideas from level set methods, we can represent topologically changing scenes with NeRFs.
FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling
Given many images of an object category, you can train a NeRF to render them from novel views and interpolate between different instances.
Nerfies: Deformable Neural Radiance Fields
Learning deformation fields with a NeRF lets you reconstruct non-rigid scenes with high fidelity.
LatentFusion: End-to-End Differentiable Reconstruction and Rendering for Unseen Object Pose Estimation
By learning to predict geometry from images, you can do zero-shot pose estimation with a single network.
PhotoShape: Photorealistic Materials for Large-Scale Shape Collections
By pairing large collections of images, 3D models, and materials, you can create thousands of photorealistic 3D models fully automatically.