Nissim Maruani
Contact | GitHub | Scholar | LinkedIn
I'm a third-year PhD student at Inria, under the supervision of Pierre Alliez (Titane team) and Mathieu Desbrun (Geomerix team). Last summer, I interned at Adobe Research with mentorship from Yifan Wang. My work lies at the intersection of geometry processing and deep learning, focusing on differentiable geometric representations that enable fast and accurate reconstruction. I'm particularly interested in data-driven approaches and generative models.
Publications
Illustrator’s Depth: Monocular Layer Index Prediction for Image Decomposition
Nissim Maruani, Peiying Zhang, Siddhartha Chaudhuri, Matthew Fisher, Nanxuan Zhao, Vladimir G. Kim, Pierre Alliez, Mathieu Desbrun, Wang Yifan
We introduce Illustrator’s Depth, a novel definition of depth that addresses a key challenge in
digital content creation: decomposing flat images into editable, ordered layers. Inspired by an
artist’s compositional process, illustrator’s depth infers a layer index for each pixel, forming an
interpretable image decomposition through a discrete, globally consistent ordering of elements
optimized for editability. We also propose and train a neural network using a curated dataset of
layered vector graphics to predict layering directly from raster inputs. Our layer index inference
unlocks a range of powerful downstream applications. In particular, it significantly outperforms
state-of-the-art baselines for image vectorization while also enabling high-fidelity
text-to-vector-graphics generation, automatic 3D relief generation from 2D images, and intuitive
depth-aware editing. By reframing depth from a physical quantity to a creative abstraction,
illustrator's depth prediction offers a new foundation for editable image decomposition.
arXiv, 2025
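To illustrate what a per-pixel layer index enables downstream, here is a minimal NumPy sketch (toy variable names, not code from the paper) that turns a predicted integer layer-index map into an ordered stack of binary masks and edits only the topmost layer:

```python
import numpy as np

# Hypothetical input: an H x W map of integer layer indices predicted for an image,
# where smaller indices are drawn first (background) and larger ones last (foreground).
layer_index = np.random.randint(0, 5, size=(256, 256))

# Decompose the image into an ordered stack of binary masks, one per layer index.
ordered_masks = [layer_index == k for k in sorted(np.unique(layer_index))]

# Editing a single layer then amounts to restricting an operation to its mask,
# e.g. recoloring only the topmost layer of an RGB image `img`.
img = np.random.rand(256, 256, 3)
img[ordered_masks[-1]] = np.array([1.0, 0.0, 0.0])
```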
MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction
Antoine Guédon, Diego Gomez, Nissim Maruani, Bingchen Gong, George Drettakis, Maks Ovsjanikov
Our method introduces a novel differentiable mesh extraction framework that operates during the
optimization of 3D Gaussian Splatting representations. At every training iteration, we
differentiably extract a mesh—including both vertex locations and connectivity—only from Gaussian
parameters. This enables gradient flow from the mesh to Gaussians, allowing us to promote
bidirectional consistency between volumetric (Gaussians) and surface (extracted mesh)
representations. This approach guides Gaussians toward configurations better suited for surface
reconstruction, resulting in higher quality meshes with significantly fewer vertices. Our framework
can be plugged into any Gaussian splatting representation, increasing performance while generating
an order of magnitude fewer mesh vertices. MILo makes the reconstructions more practical for
downstream applications like physics simulations and animation.
ACM Trans. Graph. (SIGGRAPH Asia - Journal Track), 2025
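For intuition about the "gradient flow from the mesh to Gaussians" mentioned above, here is a toy PyTorch sketch (purely illustrative; it is not the paper's differentiable mesh extraction, and the interpolation weights are a made-up stand-in): whenever mesh vertices are a differentiable function of Gaussian parameters, any loss on the mesh backpropagates to those parameters.

```python
import torch

# Hypothetical toy setup: suppose each mesh vertex is a differentiable function of the
# Gaussian parameters, here simply a fixed convex combination of Gaussian centers.
centers = torch.randn(100, 3, requires_grad=True)     # Gaussian means (optimizable)
weights = torch.softmax(torch.randn(50, 100), dim=1)  # fixed interpolation weights
target = torch.rand(50, 3)                            # e.g. surface samples to match

vertices = weights @ centers          # (50, 3): mesh vertices depend on Gaussian parameters
loss = ((vertices - target) ** 2).mean()
loss.backward()                       # gradients reach `centers` through the mesh
print(centers.grad.norm())
```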
ShapeShifter: 3D Variations Using Multiscale and Sparse Point-Voxel Diffusion
Nissim Maruani, Wang Yifan, Matthew Fisher, Pierre Alliez, Mathieu Desbrun
This paper proposes a new 3D generative model that learns to synthesize shape variations based on a
single example. While generative methods for 3D objects have recently attracted much attention, current
techniques often lack geometric details and/or require long training times and large resources. Our
approach remedies these issues by combining sparse voxel grids and multiscale point, normal, and color
sampling within an encoder-free neural architecture that can be trained efficiently and in parallel.
We show that our resulting variations better capture the fine details of their original input and can
capture more general types of surfaces than previous SDF-based methods. Moreover, we offer interactive
generation of 3D shape variants, allowing more human control in the design loop if needed.
Proc. Conference on Computer Vision and Pattern Recognition (CVPR), 2025
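As a rough sketch of the sparse, multiscale voxel ingredient (an illustrative toy under assumed resolutions, not the paper's implementation), one can quantize a sampled point cloud at several resolutions and keep only the occupied cells; per-cell normals or colors could be accumulated the same way.

```python
import numpy as np

def sparse_multiscale_voxels(points, resolutions=(16, 32, 64)):
    """Quantize points (N, 3) in [0, 1]^3 into sets of occupied voxel indices,
    one sparse set per resolution (coarse to fine)."""
    levels = {}
    for res in resolutions:
        idx = np.clip((points * res).astype(np.int64), 0, res - 1)
        levels[res] = np.unique(idx, axis=0)  # only occupied cells are stored
    return levels

# Example: a random point cloud; the number of occupied cells per level stays
# far below the dense res^3 count.
pts = np.random.rand(10_000, 3)
grids = sparse_multiscale_voxels(pts)
print({res: occ.shape[0] for res, occ in grids.items()})
```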
PoNQ: a Neural QEM-based Mesh Representation
Nissim Maruani, Maks Ovsjanikov, Pierre Alliez, Mathieu Desbrun
Although polygon meshes have been a standard representation in geometry processing, their irregular
and combinatorial nature hinders their suitability for learning-based applications. In this work, we
introduce a novel learnable mesh representation through a set of local 3D sample Points and their
associated Normals and Quadric error metrics (QEM) w.r.t. the underlying shape, which we denote
PoNQ. A global mesh is directly derived from PoNQ by efficiently leveraging the knowledge of the
local quadric errors. Besides marking the first use of QEM within a neural shape representation, our
contribution guarantees both topological and geometrical properties by ensuring that a PoNQ mesh
does not self-intersect and is always the boundary of a volume. Notably, our representation does not
rely on a regular grid, is supervised directly by the target surface alone, and also handles open
surfaces with boundaries and/or sharp features. We demonstrate the efficacy of PoNQ through a
learning-based mesh prediction from SDF grids and show that our method surpasses recent
state-of-the-art techniques in terms of both surface and edge-based metrics.
Proc. Conference on Computer Vision and Pattern Recognition (CVPR), 2024
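For background on the quadric error metric (QEM) that PoNQ builds on, here is a small NumPy sketch of the textbook computation (not the paper's learning pipeline): the QEM-optimal point for a set of local samples and normals minimizes the summed squared distances to their tangent planes.

```python
import numpy as np

def qem_optimal_point(points, normals):
    """Given local surface samples p_i with unit normals n_i, minimize the quadric
    error E(x) = sum_i (n_i . (x - p_i))^2, i.e. the squared distances to the
    tangent planes. The minimizer solves A x = b with A = sum_i n_i n_i^T and
    b = sum_i (n_i . p_i) n_i; a pseudo-inverse handles rank-deficient (flat) cases."""
    A = normals.T @ normals
    b = (normals * (normals * points).sum(axis=1, keepdims=True)).sum(axis=0)
    return np.linalg.pinv(A) @ b

# Example: noisy samples of a sharp corner at the origin (three orthogonal planes);
# the optimal point lands close to the corner.
normals = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
points = 0.01 * np.random.randn(3, 3)
print(qem_optimal_point(points, normals))
```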
VoroMesh: Learning Watertight Surface Meshes with Voronoi Diagrams
Nissim Maruani, Roman Klokov, Maks Ovsjanikov, Pierre Alliez, Mathieu Desbrun
In stark contrast to the case of images, finding a concise, learnable discrete representation of 3D
surfaces remains a challenge. In particular, while polygon meshes are arguably the most common
surface representation used in geometry processing, their irregular and combinatorial structure
often makes them unsuitable for learning-based applications. In this work, we present VoroMesh, a
novel and differentiable Voronoi-based representation of watertight 3D shape surfaces. From a set
of 3D points (called generators) and their associated occupancy, we define our boundary
representation through the Voronoi diagram of the generators as the subset of Voronoi faces whose
two associated (equidistant) generators are of opposite occupancy: the resulting polygon mesh forms
a watertight approximation of the target shape’s boundary. To learn the position of the generators,
we propose a novel loss function, dubbed VoroLoss, that minimizes the distance from ground-truth
surface samples to the closest faces of the Voronoi diagram, which does not require an explicit
construction of the entire Voronoi diagram. A direct optimization of the VoroLoss to obtain
generators on the Thingi32 dataset demonstrates the geometric efficiency of our representation
compared to axiomatic meshing algorithms and recent learning-based mesh representations. We further
use VoroMesh in a learning-based mesh prediction task from input SDF grids on the ABC dataset, and
show comparable performance to state-of-the-art methods while guaranteeing closed output surfaces
free of self-intersections.
Proc. International Conference on Computer Vision (ICCV), 2023
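To make the face-selection rule concrete, here is a toy SciPy sketch (illustrative only; it is not the paper's pipeline and does not implement the VoroLoss): given generators with a binary occupancy, keep exactly the Voronoi faces whose two adjacent generators have opposite occupancy.

```python
import numpy as np
from scipy.spatial import Voronoi

# Toy input: 3D generators and a binary inside/outside occupancy per generator.
generators = np.random.rand(200, 3)
occupancy = np.linalg.norm(generators - 0.5, axis=1) < 0.3  # "inside" a sphere

vor = Voronoi(generators)

# A Voronoi face (ridge) separates two generators; keep it iff their occupancies differ
# (finite ridges only in this toy version).
boundary_faces = [
    vor.ridge_vertices[k]
    for k, (i, j) in enumerate(vor.ridge_points)
    if occupancy[i] != occupancy[j] and -1 not in vor.ridge_vertices[k]
]

# Each kept face is a convex polygon (indices into vor.vertices); together they
# approximate the sphere's boundary (watertight in the paper's full construction).
print(len(boundary_faces), "boundary faces")
```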
Feature-Preserving Offset Mesh Generation from Topology-Adapted Octrees
Daniel Zint, Nissim Maruani, Mael Rouxel-Labbé, Pierre Alliez
We introduce a reliable method to generate offset meshes from input triangle
meshes or triangle soups. Our method proceeds in two steps. The first step performs a Dual
Contouring method on the offset surface, operating on an adaptive octree that is refined in areas
where the offset topology is complex. Our approach substantially reduces memory consumption and
runtime compared to isosurfacing methods operating on uniform grids. The second step improves the
output Dual Contouring mesh with an offset-aware remeshing algorithm to reduce the normal deviation
between the mesh facets and the exact offset. This remeshing process reconstructs concave sharp
features and approximates smooth shapes in convex areas up to a user-defined precision. We show the
effectiveness and versatility of our method by applying it to a wide range of input meshes. We also
benchmark our method on the entire Thingi10k dataset: watertight, 2-manifold offset meshes are
obtained for 100% of the cases.
Symposium on Geometry Processing (SGP), 2023
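For reference, the uniform-grid isosurfacing baseline that the paper improves upon can be sketched as follows (a naive, assumption-heavy toy that approximates the unsigned distance with a KD-tree over surface samples; the paper instead uses a topology-adapted octree with Dual Contouring and offset-aware remeshing).

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

def naive_offset_mesh(surface_points, offset, res=64):
    """Extract the offset surface {x : dist(x, S) = offset} on a dense uniform grid.
    Memory and runtime grow cubically with `res`, which is exactly what an
    adaptive octree avoids."""
    tree = cKDTree(surface_points)
    lo = surface_points.min(0) - 2 * offset
    hi = surface_points.max(0) + 2 * offset
    axes = [np.linspace(lo[d], hi[d], res) for d in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    dist = tree.query(grid)[0].reshape(res, res, res)  # unsigned distance field
    spacing = (hi - lo) / (res - 1)
    verts, faces, _, _ = marching_cubes(dist, level=offset, spacing=tuple(spacing))
    return verts + lo, faces

# Example: offset of a densely sampled unit sphere.
pts = np.random.randn(5000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
verts, faces = naive_offset_mesh(pts, offset=0.2)
print(verts.shape, faces.shape)
```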
Teaching
- ENS-PSL (2022-Today, Paris, France): Exploring new ways of teaching maths with MathAData
- Polytech Nice (November 2025): Lecture & practical session on 3D Shape Learning within the Deep Learning class
- La Rotonde (2018-2019, Saint-Étienne, France): Science outreach with La main à la pâte
Talks
- Stanford, Geometric Computation group, July 2024
- 3IA Côte d'Azur, June 2023, November 2025
Reviewer
- Computers & Graphics 2026, EUROGRAPHICS 2026, BMVC 2025, SIGGRAPH 2025, BMVC 2024, SIGGRAPH ASIA 2024, SIGGRAPH 2024, BMVC 2023
Education
- ENS Paris-Saclay (2021-2022, Gif-sur-Yvette, France): Master MVA
- École polytechnique (2018-2022, Saclay, France): Engineering Curriculum
- Lycée Louis-le-Grand (2016-2018, Paris, France): "Classe prépa" (undergraduate preparatory program)