Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision (ICCV 2021)
Abstract
Depth estimation is a long-standing yet important task in computer vision.
Most previous works estimate depth from input images and assume the images are all-in-focus (AiF), which is less common in real-world applications.
On the other hand, a few works take defocus blur into account and treat it as another cue for depth estimation.
In this paper, we propose a method to estimate not only a depth map but also an AiF image from a set of images with different focus positions (known as a focal stack).
We design a shared architecture to exploit the relationship between depth and AiF estimation.
As a result, the proposed method can be trained either with ground-truth depth as supervision or, in an unsupervised manner, with AiF images as supervisory signals.
We show in various experiments that our method outperforms the state-of-the-art methods both quantitatively and qualitatively, and is also more efficient at inference time.
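To make the shared-architecture idea concrete, below is a minimal PyTorch sketch, not the paper's actual network: it assumes the shared representation is a per-pixel softmax attention over the focus positions of the stack, from which both outputs are read out, the depth map as attention-weighted focus distances and the AiF image as an attention-weighted blend of the stack. All names here (FocalStackNet, the toy per-slice encoder) are hypothetical.

```python
import torch
import torch.nn as nn

class FocalStackNet(nn.Module):
    """Hypothetical sketch: one attention volume drives both depth and AiF readouts."""
    def __init__(self, in_channels=3, hidden=32):
        super().__init__()
        # Toy per-slice encoder producing one attention logit per pixel per slice;
        # the paper's actual architecture differs.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, stack, focus_dists):
        # stack: (B, S, C, H, W) focal stack; focus_dists: (S,) focus positions.
        B, S, C, H, W = stack.shape
        logits = self.encoder(stack.reshape(B * S, C, H, W)).reshape(B, S, H, W)
        attn = torch.softmax(logits, dim=1)                   # (B, S, H, W)
        # Depth readout: attention-weighted average of the focus positions.
        depth = (attn * focus_dists.view(1, S, 1, 1)).sum(dim=1)
        # AiF readout: the same attention blends the (presumably sharpest) pixels.
        aif = (attn.unsqueeze(2) * stack).sum(dim=1)          # (B, C, H, W)
        return depth, aif

net = FocalStackNet()
stack = torch.randn(2, 5, 3, 64, 64)        # batch of 5-slice focal stacks
focus_dists = torch.linspace(0.1, 1.0, 5)   # assumed focus positions
depth, aif = net(stack, focus_dists)
```

Under this reading, the two training modes differ only in which readout is penalized: a supervised loss compares `depth` against ground-truth depth, while the unsupervised mode compares `aif` against an AiF image, with the shared attention tying the two tasks together.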
Video
Citation
Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, Kevin Jou, "Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision", in IEEE International Conference on Computer Vision (ICCV), 2021
Bibtex
@inproceedings{Wang-ICCV-2021,
  author    = {Wang, Ning-Hsu and Wang, Ren and Liu, Yu-Lun and Huang, Yu-Hao and Chang, Yu-Lin and Chen, Chia-Ping and Jou, Kevin},
  title     = {Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision},
  booktitle = {IEEE International Conference on Computer Vision},
  year      = {2021}
}
Download
Results
DDFF-12-Scene
4D Light Field Dataset
DefocusNet Dataset
Mobile Depth Dataset