Tensor Holography: Towards Real-time Photorealistic
3D Holography with Deep Neural Networks
Nature 2021
Liang Shi1,2,✉ Beichen Li1,2 Changil Kim1,2 Petr Kellnhofer1,2 Wojciech Matusik1,2,✉
1MIT CSAIL 2MIT EECS ✉Corresponding Author
Tensor Holography synthesizes a 3D hologram with per-pixel depth from a single RGB-D
image in real time. This video shows a live capture from a holographic near-eye
display (using HOLOEYE PLUTO SLM) with 3D holograms synthesized in real time. The camera focus is set on
the eyes of the bunny. The background trees are optically (not computationally) blurred due to
camera defocus.
Abstract
The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact
on virtual and augmented reality (AR/VR), human-computer interaction, education, and training.
Computer-generated holography (CGH) enables high spatio-angular resolution 3D projection via numerical
simulation of diffraction and interference. Yet, existing physically based methods fail to produce holograms
with both per-pixel focal control and accurate occlusion. The computationally taxing Fresnel diffraction
simulation further places an explicit trade-off between image quality and runtime, making dynamic holography
far from practical. Here, we demonstrate the first deep learning-based CGH pipeline capable of synthesizing
a photorealistic color 3D hologram from a single RGB-Depth (RGB-D) image in real time. Our convolutional
neural network (CNN) is extremely memory-efficient (below 620 KB) and runs at 60 Hz for 1920×1080 pixels
resolution on a single consumer-grade graphics processing unit (GPU). Leveraging low-power on-device
artificial intelligence (AI) acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at
1.1 Hz) and edge (Google Edge TPU at 2 Hz) devices, promising real-time performance in future generation
AR/VR mobile headsets. We enable this pipeline by introducing the first large-scale CGH dataset (MIT-CGH-4K)
with 4,000 pairs of RGB-D images and corresponding 3D holograms. Our CNN is trained with differentiable
wave-based loss functions and physically approximates Fresnel diffraction. With an anti-aliasing phase-only
encoding method, we experimentally demonstrate speckle-free, natural-looking high-resolution 3D holograms.
Our learning-based approach and the first Fresnel hologram dataset will help unlock the full potential of
holography and enable new applications in metasurface design, optical and acoustic tweezer-based microscopic
manipulation, holographic microscopy, and single-exposure volumetric 3D printing.
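The abstract contrasts the learned CNN against direct numerical simulation of Fresnel diffraction, whose cost drives the quality-versus-runtime trade-off. As a point of reference, below is a minimal sketch of one standard Fresnel-regime propagator, the angular spectrum method; this is generic textbook optics, not the paper's pipeline, and all function names and parameter values are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex wavefield by `distance` (meters) using the
    angular spectrum method: FFT to the frequency domain, multiply by
    the free-space transfer function, and inverse FFT back."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)   # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i*2*pi*d*sqrt(1/lambda^2 - fx^2 - fy^2)),
    # with evanescent components (negative argument) suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: diffract a plane wave through a circular aperture
# (8 um pixel pitch, 520 nm green light, 5 mm propagation).
n = 256
x = (np.arange(n) - n / 2) * 8e-6
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 < (0.3e-3) ** 2).astype(complex)
out = angular_spectrum_propagate(aperture, 520e-9, 8e-6, 5e-3)
intensity = np.abs(out) ** 2
```

Even this single FFT-based propagation step runs once per depth plane in a conventional layer-based CGH simulation, which is why evaluating it densely enough for per-pixel focal control is computationally taxing.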
Paper
Towards Real-time Photorealistic 3D Holography with Deep Neural Networks
Liang Shi✉, Beichen Li, Changil Kim, Petr Kellnhofer, Wojciech Matusik✉
Nature 2021
[Paper] [Code] [Dataset] [BibTeX]
BibTeX
@ARTICLE{Shi2021,
  title   = "Towards real-time photorealistic {3D} holography with deep
             neural networks",
  author  = "Shi, Liang and Li, Beichen and Kim, Changil and
             Kellnhofer, Petr and Matusik, Wojciech",
  journal = "Nature",
  volume  = 592,
  pages   = "234--239",
  month   = mar,
  year    = 2021,
}
Related Paper:
End-to-end Learning of 3D Phase-only Holograms for Holographic Display
Liang Shi✉, Beichen Li, Wojciech Matusik✉
Light: Science & Applications 2022
[Paper] [Code] [Dataset] [BibTeX]
BibTeX
@ARTICLE{Shi2022,
title = "End-to-end learning of {3D} phase-only holograms for holographic
display",
author = "Shi, Liang and Li, Beichen and Matusik, Wojciech",
journal = "Light Sci Appl",
volume = 11,
number = 1,
pages = "247",
month = aug,
year = 2022,
language = "en"
}