Joel Salzman
I'm a PhD student in Computer Science at Brown University, advised by James Tompkin in the Brown Visual Computing group. My research focuses on 3D geometric reconstruction, scene representation, and spatial analysis. I'm especially interested in hyperscalable methods that can be deployed in the wild to capture and understand complex environments like disaster zones.
GBake: Baking 3D Gaussian Splats into Reflection Probes
Stephen Pasch*,
Joel Salzman*,
Changxi Zheng
SIGGRAPH 2025 (Poster)
A tool to bake reflection probes from 3D Gaussian splatting scenes to relight meshes in Unity.
In building it, we found that EWA splatting is not accurate enough to render cubemaps: the approximation causes artifacts at the seams where the cube faces meet. Instead, we raytraced the Gaussian primitives. Unity can then use the baked cubemaps to relight meshes in real time by interpolating between nearby probes.
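To give a flavor of the raytracing step, here is a minimal NumPy sketch of evaluating 3D Gaussians directly along per-pixel cubemap rays rather than projecting them with EWA. This is not the GBake implementation; the compositing scheme, function names, and data layout are assumptions for illustration.

```python
import numpy as np

def gaussian_peak_along_ray(o, d, mu, cov_inv):
    """Peak response of a 3D Gaussian exp(-0.5 x^T S^-1 x) along the ray o + t*d."""
    p = mu - o
    denom = d @ cov_inv @ d
    t = max((d @ cov_inv @ p) / denom, 0.0) if denom > 0 else 0.0
    x = o + t * d - mu
    return np.exp(-0.5 * (x @ cov_inv @ x)), t

def render_cubemap_face(probe_center, face_basis, res, gaussians):
    """Render one cube face by compositing Gaussians front-to-back along each ray.

    face_basis: (right, up, forward) unit vectors for this cube face
    gaussians:  iterable of (mean, inverse covariance, rgb color, opacity)
    """
    right, up, forward = face_basis
    img = np.zeros((res, res, 3))
    for j in range(res):
        for i in range(res):
            # Ray direction through pixel (i, j) of this face.
            u = (i + 0.5) / res * 2.0 - 1.0
            v = (j + 0.5) / res * 2.0 - 1.0
            d = forward + u * right + v * up
            d /= np.linalg.norm(d)
            # Evaluate every primitive along the ray, then alpha-composite by depth.
            hits = []
            for mu, cov_inv, color, opacity in gaussians:
                g, t = gaussian_peak_along_ray(probe_center, d, mu, cov_inv)
                alpha = opacity * g
                if alpha > 1e-3:
                    hits.append((t, alpha, np.asarray(color, dtype=float)))
            transmittance = 1.0
            for t, alpha, color in sorted(hits, key=lambda h: h[0]):
                img[j, i] += transmittance * alpha * color
                transmittance *= 1.0 - alpha
    return img
```

Because each ray evaluates the full 3D Gaussian rather than a planar projection, adjacent cube faces agree at their shared edges, which is exactly what the EWA approximation breaks.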
VHard: An XR UI for Kinesthetic Rehearsal of Rock Climbing Moves
Joel Salzman,
Jace Li,
Ben Yang,
Steven Feiner
ISMAR 2024 (Demo)
An XR rock climbing app for kinesthetic rehearsal.
If you climb, chances are you've stood at the base of a hard route and mimed the moves. But what if you could actually see the holds in front of you without climbing up to them? We scanned a MoonBoard and put it in AR/VR with a custom shader that lights up where your fingers and palms touch. So, you can practice your exact hand and body positions on the ground with metric accuracy and live feedback.
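As a rough illustration of what that highlight effect computes, here is a NumPy sketch of a per-vertex touch glow based on distance to the tracked hand points. The falloff radius and smoothstep shaping are assumptions, and the real effect runs in a shader on the GPU rather than in Python.

```python
import numpy as np

def touch_highlight(vertices, touch_points, radius=0.03):
    """Highlight weight in [0, 1] for each hold-mesh vertex near a tracked hand point.

    vertices:     (N, 3) hold-mesh vertex positions in meters
    touch_points: (M, 3) tracked fingertip/palm positions in the same frame
    radius:       distance (m) at which the glow falls off to zero (assumed value)
    """
    vertices = np.asarray(vertices, dtype=float)
    touch_points = np.asarray(touch_points, dtype=float)
    if len(touch_points) == 0:
        return np.zeros(len(vertices))
    # Distance from every vertex to its nearest tracked hand point.
    dists = np.linalg.norm(vertices[:, None, :] - touch_points[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    # Smoothstep falloff: 1 at contact, 0 beyond `radius`.
    t = np.clip(1.0 - nearest / radius, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)
```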
We were awarded a grant from the Columbia-Dream Sports AI Innovation Center to significantly expand this project. Stay tuned!
Discharge promotes melt and formation of submarine ice-shelf channels at the Beardmore Glacier grounding zone and basal body formation downstream
Andrew Hoffman,
S. Isabel Cordero,
Qazi Ashikin,
Joel Salzman,
Kirsty Tinto,
Jonathan Kingslake,
David Felton Porter,
Renata Constantino,
Alexandra Boghosian,
Knut A. Christianson,
Howard Conway,
Michelle R. Koutnik
AGU Fall 2024
Visualizing Ice Sheets in Extended Reality (VISER) for the Improvement of Polar Data Analysis
S. Isabel Cordero,
Alexandra Boghosian,
Joel Salzman,
Qazi Ashikin,
Kirsty Tinto,
Steven Feiner,
Robin Elizabeth Bell
AGU Fall 2023
Augmented Reality and Virtual Reality for Ice-Sheet Data Analysis
Alexandra Boghosian,
S. Isabel Cordero,
Carmine Elvezio,
Sofia Sanchez-Zarate,
Ben Yang,
Shengyue Guo,
Qazi Ashikin,
Joel Salzman,
Kirsty Tinto,
Steven Feiner,
Robin Elizabeth Bell
IGARSS 2023
An XR glaciology app.
We visualize radar data in an immersive 3D environment and develop custom UX tools for scientists. Using a Quest or HoloLens VR/AR headset, users can manipulate radargrams and other scientific data to study glaciological processes in Antarctica and Greenland. I created the pipeline that ingests radar plots and generates 3D meshes placed at the actual locations where the signals were gathered. We believe we were the first to model the entire flight trajectory in 3D.
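As a rough sketch of what that pipeline does (not the VISER code itself), the snippet below extrudes a radargram into a 3D "curtain" mesh along the recorded flight path; the input shapes and depth convention are assumptions for illustration.

```python
import numpy as np

def radargram_curtain(trace_xyz, depth_m, n_rows):
    """Build curtain-mesh vertices and triangle faces for one radargram.

    trace_xyz: (T, 3) position of each radar trace in projected coordinates (meters)
    depth_m:   vertical extent of the radargram below each trace, in meters
    n_rows:    number of vertex rows sampled down the radargram image
    """
    T = len(trace_xyz)
    depths = np.linspace(0.0, depth_m, n_rows)             # 0 = top of the radargram
    verts = np.repeat(np.asarray(trace_xyz, dtype=float), n_rows, axis=0)
    verts[:, 2] -= np.tile(depths, T)                      # drop each row downward
    # Two triangles per quad between adjacent traces and adjacent rows; texture
    # coordinates would map image column t, row r onto vertex t * n_rows + r so
    # the radargram drapes over its true location along the flight line.
    faces = []
    for t in range(T - 1):
        for r in range(n_rows - 1):
            a = t * n_rows + r
            b = a + 1
            c = (t + 1) * n_rows + r
            d = c + 1
            faces.append((a, b, c))
            faces.append((b, d, c))
    return verts, np.array(faces)
```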
LiDAR Data Segmentation using Deep Learning for Indoor Mapping
Enbo Zhou,
Keith C. Clarke,
Joel Salzman,
Haoyu Shi,
Bryn Morgan
AutoCarto 2020
We used PointNet and PointNet++ to segment and classify LiDAR point clouds of indoor environments at various levels of detail. Since this was my first foray into research (hooray!), I manually classified the ground truth and created visualizations.
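For context, here is a compact PyTorch sketch of the PointNet segmentation idea we relied on: shared per-point MLPs plus a max-pooled global feature concatenated back to every point before classification. The layer sizes are illustrative, and the real PointNet/PointNet++ models include input and feature transforms (and, for PointNet++, hierarchical grouping) that this sketch omits.

```python
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    def __init__(self, num_classes, in_dim=3):
        super().__init__()
        self.local = nn.Sequential(            # shared per-point MLP
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(             # classify each point from
            nn.Conv1d(128 + 128, 128, 1),      # [local feature | global feature]
            nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, pts):                    # pts: (B, 3, N)
        local = self.local(pts)                # (B, 128, N)
        global_feat = local.max(dim=2, keepdim=True).values   # (B, 128, 1)
        global_feat = global_feat.expand(-1, -1, local.shape[2])
        return self.head(torch.cat([local, global_feat], dim=1))  # (B, C, N) logits

# Example: logits = TinyPointNetSeg(num_classes=8)(torch.randn(2, 3, 4096))
```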
01/25 – 07/25
I worked on 3D mapping and geospatial machine learning to help deliver better 5G signals to AT&T customers.
The image is from a paper published by the group before I joined.
I focused on frontend and geometry processing.
Since I'm under NDA, that's all I have to say.
06/20 – 06/22
For over two years, my job was to figure out where to build utility-scale renewable (primarily wind and solar) projects for Apex Clean Energy. Most of my work consisted of data engineering and automated geospatial analysis. My role supported the people who would actually talk to landowners to lease parcels. We developed and maintained a web app hosting tools for hundreds of unique analyses that could be generated on the fly to streamline the site selection process.
06/18 – 03/19
Primary Ocean Producers is a startup aiming to cultivate Macrocystis pyrifera in the deep ocean, in partnership with Catalina Sea Ranch. We were funded by a grant from ARPA-E to grow giant kelp en masse in order to produce carbon-neutral biofuel.
My role was to site the pilot facilities off the coast of California. Along with two aquatic biologists, I developed a hierarchical suitability model for giant kelp cultivation. Among other factors, we looked at chemical availability (for nutrients), geophysical phenomena (so the kelp would be safe), and legal restrictions (in order to actually build the facility).
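The general shape of such a suitability model, if not our actual layers or weights, looks something like the sketch below: hard constraints veto cells entirely, and the remaining factors are combined as a weighted overlay.

```python
import numpy as np

def suitability(factors, weights, constraints):
    """Score each raster cell for kelp cultivation.

    factors:     dict of name -> 2D array scaled to [0, 1] (e.g. nutrient availability)
    weights:     dict of name -> relative importance, summing to 1
    constraints: list of 2D boolean arrays; True means the cell is allowed
                 (e.g. outside shipping lanes or marine protected areas)
    """
    score = sum(weights[name] * factors[name] for name in factors)
    allowed = np.logical_and.reduce(constraints)   # a single failed constraint vetoes the cell
    return np.where(allowed, score, 0.0)

# Example with made-up 2x2 rasters:
# suitability({"nutrients": np.array([[0.9, 0.2], [0.5, 0.7]])},
#             {"nutrients": 1.0},
#             [np.array([[True, True], [False, True]])])
```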
A combination of Gaussian Splatting and Zip-NeRF.
This began as a project for Peter Belhumeur's Advanced Topics in Deep Learning class. I am attempting to improve on the state-of-the-art technique for novel view synthesis by using a neural network to learn a point-sampling probability field, sampling primitives from that field, and then splatting the primitives to render images. It kind of works; at minimum, I learned a ton about radiance fields by doing this. The project is fully compatible with Nerfstudio.
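A rough PyTorch sketch of the core idea, with the splatting step omitted and all names and the sampling scheme assumed for illustration: an MLP defines a sampling probability field over space, and primitive centers are drawn from it before being handed to a rasterizer.

```python
import torch
import torch.nn as nn

class SamplingField(nn.Module):
    """Maps 3D positions to unnormalized log-probabilities of placing a primitive."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                      # xyz: (N, 3)
        return self.mlp(xyz).squeeze(-1)         # (N,) logits

def sample_primitive_centers(field, candidates, k):
    """Draw k candidate positions with probability proportional to the learned field."""
    probs = torch.softmax(field(candidates), dim=0)       # (N,)
    idx = torch.multinomial(probs, k, replacement=False)
    return candidates[idx]                                # (k, 3) -> initialize Gaussians here

# centers = sample_primitive_centers(SamplingField(), torch.rand(10000, 3), 2048)
```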
3D reconstruction of objects that are partially seen through reflections.
For Shree Nayar's Computational Imaging class, we wrote a pipeline for an Intel RealSense 455 camera that creates 3D models. What makes this interesting is that part of each object is seen directly by the camera and part is only visible through a mirror, so the object can be reconstructed better if the points seen through the mirror are properly registered with the directly seen points. We wrote a self-supervised algorithm that segments and merges these point clouds.
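The geometric step at the heart of the merge can be sketched as follows (plane fitting, segmentation, and any refinement are omitted, and the mirror plane is assumed to be known as n·x = d with unit normal n):

```python
import numpy as np

def reflect_across_plane(points, n, d):
    """Reflect points (N, 3) across the mirror plane n·x = d."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    signed_dist = points @ n - d                 # distance of each point from the plane
    return points - 2.0 * signed_dist[:, None] * n

def merge_views(direct_points, mirror_points, n, d):
    """Map the mirror-view points back onto the real object and stack both sets."""
    unmirrored = reflect_across_plane(mirror_points, n, d)
    return np.vstack([direct_points, unmirrored])
```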
Camera pose estimation for an indoor video using a deep neural network. This was a group project for Peter Belhumeur's Deep Learning for Computer Vision class. Our goal was to figure out where in our classroom a random image was taken, given simple conditions (same lighting, no movement, etc.). We took a supervised deep learning approach but used COLMAP to estimate the ground-truth poses.
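Schematically, the approach looks like the PoseNet-style sketch below; the backbone, heads, and loss weighting are illustrative rather than our exact model.

```python
import torch
import torch.nn as nn
import torchvision

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # keep the 512-d feature vector
        self.backbone = backbone
        self.fc_t = nn.Linear(512, 3)            # translation
        self.fc_q = nn.Linear(512, 4)            # rotation as a unit quaternion

    def forward(self, img):                      # img: (B, 3, H, W)
        feat = self.backbone(img)
        t = self.fc_t(feat)
        q = nn.functional.normalize(self.fc_q(feat), dim=-1)
        return t, q

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=50.0):
    """L2 translation error plus weighted quaternion error, supervised by COLMAP poses."""
    q_err = torch.minimum((q_pred - q_gt).norm(dim=-1),
                          (q_pred + q_gt).norm(dim=-1))   # q and -q are the same rotation
    return (t_pred - t_gt).norm(dim=-1).mean() + beta * q_err.mean()
```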
Where do votes matter most?
A project for my last GIS class in college under Krzysztof Janowicz that ended up as an interactive web map. See the project page for (many) details. Working on this project is a big part of why I decided to go back to school for Computer Science.