You can download the pre-trained models using the following commands. Ensure you are in the Genfocus root directory.
# 1. Download main models to the root directory
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/bokehNet.safetensors
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/deblurNet.safetensors
# 2. Setup checkpoints directory and download auxiliary model
mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/nycu-cplab/Genfocus-Model/resolve/main/checkpoints/depth_pro.pt
cd ..
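Alternatively, the same files can be fetched from Python with the huggingface_hub library. This is only a minimal sketch; the repository id and filenames are taken from the wget URLs above, and local_dir="." is assumed to reproduce the same layout in the Genfocus root directory.

from huggingface_hub import hf_hub_download

# Repository and filenames inferred from the wget URLs above.
repo_id = "nycu-cplab/Genfocus-Model"
for filename in ["bokehNet.safetensors", "deblurNet.safetensors", "checkpoints/depth_pro.pt"]:
    # Downloads each file into the current directory, preserving the
    # checkpoints/ subfolder for depth_pro.pt.
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=".")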
3. Run Gradio Demo
Launch the interactive web interface locally:
Note: This project uses FLUX.1-dev, which is a gated model on Hugging Face. You must request access and authenticate locally before running the demo (a minimal authentication sketch follows below).
python demo.py
The demo will be accessible at http://127.0.0.1:7860 in your browser.
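Because FLUX.1-dev is gated, the demo needs valid Hugging Face credentials. A minimal sketch of authenticating from Python before launching (running huggingface-cli login once in your shell works equally well):

from huggingface_hub import login

# Prompts for a Hugging Face access token; the token's account must have
# been granted access to the FLUX.1-dev model repository.
login()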
🗺️ Roadmap & TODO
We are actively working on improving this project. Current progress:
Release Inference Code (with support for adjustable parameters/settings)
Release Training Code and Data
🔗 Citation
If you find this project useful for your research, please consider citing:
@article{Genfocus2025,
  title={Generative Refocusing: Flexible Defocus Control from a Single Image},
  author={Tuan Mu, Chun-Wei and Huang, Jia-Bin and Liu, Yu-Lun},
  journal={arXiv preprint arXiv:2512.16923},
  year={2025}
}