The New England Computer Vision Workshop (NECV) brings together researchers in computer vision and related areas
for an informal exchange of ideas through a full day of presentations and posters.
Held conveniently after the CVPR deadline and before the NeurIPS conference, NECV offers opportunities to network and showcase research.
NECV attracts researchers from universities and industry research labs in New England.
As in previous years, the workshop will focus on graduate student presentations.
Welcome to Yale!
Academic researchers:
Participation is free for all researchers at academic institutions.
Please register here
and submit your abstract here.
Industry participants:
For our industry friends, a limited number of registrations are available for a fee.
Please register here.
Deadlines:
Early-bird registration (lunch provided) by November 7.
Please register by November 15 and submit by November 17.
Oral decisions will be released by November 19.
Submission guidelines:
Please submit a one-page PDF abstract using the CVPR 2025 rebuttal template.
Please include the title of your work and the list of authors in the abstract.
You may present work that has already been published or work that is in progress.
All relevant submissions will be granted a poster presentation,
and selected submissions from each institution will be granted 8-minute oral presentations.
Post-docs and faculty may submit for poster presentations, but oral presentations are reserved for graduate students.
There will be no publications resulting from the workshop,
so presentations will not be considered "prior peer-reviewed work" according to any definition we are aware of.
Thus, work presented at NECV can be subsequently submitted to other venues without citation.
The workshop is after the CVPR submission deadline, so come and show off your new work in a friendly environment.
It's also just before the NeurIPS conference, so feel free to come and practice your presentation.
Presentation
Oral presentation:
Each presentation is allocated a 6-minute slot, followed by 2 minutes for questions.
We kindly request that all oral presenters bring their own laptops for their presentation.
The presentation equipment supports both HDMI and USB-C for screen sharing.
Please arrive at least 5 minutes before your scheduled oral session to test your machine and confirm compatibility with the provided equipment.
As at regular conferences, we have also allocated poster boards for oral presenters; please find your poster ID.
Poster presentation:
Please locate the correct poster board to display your poster.
Easels and foam cores will be provided for mounting posters, accommodating sizes up to 36x48 inches.
The foam cores are not attached, allowing flexibility for landscape or portrait orientation. You are welcome to use any format within that size limit.
Logistics
Schedule
Time          Topic
9:00-10:00    Registration & Poster Setup
10:00-10:10   Welcome & Opening
10:15-11:15   Oral Session I
[10:15] Learning to Edit Visual Programs with Self-Supervision
[10:25] What if Eye? Computationally Emulating the Evolution of Visual Intelligence
[10:35] Orient Anything
[10:45] Score Distillation via Reparameterized DDIM
[10:55] Straightening Flow Matching Models by Learning Interpolants
[11:05] Time of the Flight of the Gaussians: Fast and Accurate Dynamic Time-of-Flight Radiance Fields
11:25-12:25   Poster Session I
[1] AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation
[2] Orient Anything
[3] What if Eye? Computationally Emulating the Evolution of Visual Intelligence
[4] Enhancing Autonomous Navigation by Imaging Hidden Objects using Single-Photon LiDAR
[5] Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality
[6] E-BARF: Bundle Adjusting Neural Radiance Fields from a Moving Event Camera
[7] CUTS: A Deep Learning and Topological Framework for Multigranular Unsupervised Medical Image Segmentation
[8] GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities
[9] Real-Time Temporally Consistent Depth Completion for VR-Teleoperated Robots
[10] OneGaze: A Unified Model for Estimating Gaze in Egocentric Videos and Still Images
[11] Audio Geolocation: An Investigation with Natural Sounds
[12] Straightening Flow Matching Models by Learning Interpolants
[13] CP-TRPCA: A Novel Approach to Robust Tensor PCA
Steering committee:
Subhransu Maji (UMass Amherst),
Erik Learned-Miller (UMass Amherst),
Kate Saenko (Boston University),
Yun (Raymond) Fu (Northeastern University),
Octavia Camps (Northeastern University),
Todd Zickler (Harvard),
James Tompkin (Brown),
Benjamin Kimia (Brown),
Phillip Isola (MIT),
Pulkit Agrawal (MIT),
SouYoung Jin (Dartmouth),
Adithya Pediredla (Dartmouth),
and Yu-Wing Tai (Dartmouth).
Acknowledgements
We thank Samson Timoner for helping us arrange NECV 2024.