Paper | Project Page | Project Page (CN)
Xinqi Lin1,2, Fanghua Yu1, Jinfan Hu1,2, Zhiyuan You1,3, Wu Shi1, Jimmy S. Ren4,5, Jinjin Gu6,*, Chao Dong1,7,*
*: Corresponding author
1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2University of Chinese Academy of Sciences
3The Chinese University of Hong Kong
4SenseTime Research
5Hong Kong Metropolitan University
6INSAIT, Sofia University
7Shenzhen University of Advanced Technology
⭐If HYPIR is helpful for you, please help star this repo. Thanks!🤗
Our current open-source version is based on Stable Diffusion 2.1. Although its parameter count is small, this model was trained on our best-quality data with substantial computational resources (batch size 1024), so its performance is quite good.
We'll provide access to more advanced models based on FLUX and Stable Diffusion 3.5 through web interfaces and APIs in the future. Stay tuned!✨
Our most advanced model has been launched on suppixel.ai and suppixel.cn! We welcome you to experience it.🔥🔥🔥 This state-of-the-art model offers more stable results and more flexible capabilities while still maintaining incredibly fast speeds🔥🔥🔥.
Also, be aware of pirated HYPIR websites: the only official sites are suppixel.ai and suppixel.cn.
- 2025.07.28: ✅ Provided a Colab example. A free T4 GPU is good enough for running this model!
- 2025.07.28: ✅ Integrated into OpenXLab.
- 2025.07.19: ✅ Integrated into Replicate.
- 2025.07.19: This repo was created.
```shell
git clone https://github.com/XPixelGroup/HYPIR.git
cd HYPIR
conda create -n hypir python=3.10
conda activate hypir
pip install -r requirements.txt
```
| Model Name | Description | HuggingFace | OpenXLab |
| --- | --- | --- | --- |
| HYPIR_sd2.pth | LoRA weights of HYPIR-SD2 | download | download |
1. Download the model weight `HYPIR_sd2.pth`.

2. Fill `weight_path` in `configs/sd2_gradio.yaml`.

3. Run the following command to launch Gradio:

   ```shell
   python app.py --config configs/sd2_gradio.yaml --local --device cuda
   ```
4. (Optional) Tired of manually typing out prompts for your images? Let GPT do the work for you!

   First, create a file named `.env` in the project directory:

   ```
   GPT_API_KEY=your-awesome-api-key
   GPT_BASE_URL=openai-gpt-base-url
   GPT_MODEL=gpt-4o-mini
   ```

   Second, fill in your API base URL and API key in the `.env` file. For the model, `gpt-4o-mini` is usually sufficient. Finally, pass the `--gpt_caption` argument to the program and type "auto" in the prompt box to use a GPT-generated prompt.
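As a quick sanity check before launching, the `.env` variables can be read back with a few lines of standard-library Python. This is an illustrative sketch of simple `KEY=VALUE` parsing, not the repo's actual loading code (which may use a dotenv library):

```python
def parse_env(path):
    """Parse simple KEY=VALUE lines from a .env file into a dict."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env
```

For example, `parse_env(".env")["GPT_MODEL"]` should return the model name you configured.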
More details can be found by running `python test.py --help`.
```shell
# Join the LoRA target module names into a comma-separated list.
LORA_MODULES_LIST=(to_k to_q to_v to_out.0 conv conv1 conv2 conv_shortcut conv_out proj_in proj_out ff.net.2 ff.net.0.proj)
IFS=','
LORA_MODULES="${LORA_MODULES_LIST[*]}"
unset IFS

python test.py \
    --base_model_type sd2 \
    --base_model_path stabilityai/stable-diffusion-2-1-base \
    --model_t 200 \
    --coeff_t 200 \
    --lora_rank 256 \
    --lora_modules "$LORA_MODULES" \
    --weight_path path/to/HYPIR_sd2.pth \
    --patch_size 512 \
    --stride 256 \
    --lq_dir examples/lq \
    --scale_by factor \
    --upscale 4 \
    --txt_dir examples/prompt \
    --output_dir results/examples \
    --seed 231 \
    --device cuda
```
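The `--patch_size` and `--stride` options indicate that large inputs are processed as overlapping tiles. As an illustration of how such overlapping tile offsets are typically laid out (a sketch of the general technique, with a hypothetical `tile_coords` helper, not the repo's actual implementation):

```python
def tile_coords(length, patch, stride):
    """Start offsets of overlapping tiles covering [0, length)."""
    if length <= patch:
        # The image fits in a single tile.
        return [0]
    starts = list(range(0, length - patch + 1, stride))
    # Ensure the final tile reaches the right/bottom edge.
    if starts[-1] + patch < length:
        starts.append(length - patch)
    return starts

# With patch_size=512 and stride=256, a 1024-pixel axis is covered
# by tiles starting at 0, 256, and 512, overlapping by half.
print(tile_coords(1024, 512, 256))  # [0, 256, 512]
```

Overlapping tiles (stride smaller than patch size) let the restored patches be blended in the overlap regions, which avoids visible seams at tile boundaries.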
1. Generate a parquet file to save both image paths and prompts. For example:

   ```python
   import os

   import polars as pl

   # Recursively collect image files. For example, you can crop
   # the LSDIR dataset into 512x512 patches and place all patches
   # in one folder.
   image_dir = "/opt/data/common/data260t/LSDIR_512"
   image_exts = (".jpg", ".jpeg", ".png")
   image_paths = []
   for root, dirs, files in os.walk(image_dir):
       for file in files:
           if file.lower().endswith(image_exts):
               image_paths.append(os.path.join(root, file))

   # Create dataframe object with prompts. Here we use empty
   # prompts for simplicity.
   df = pl.from_dict({
       "image_path": image_paths,
       "prompt": [""] * len(image_paths),
   })

   # Save as parquet file, which will be used in the next step.
   df.write_parquet("path/to/save/LSDIR_512_nulltxt.parquet")
   ```
2. Fill in the values marked as TODO in `configs/sd2_train.yaml`. For example:

   ```yaml
   output_dir: /path/to/save/experiment
   data_config:
     train:
       ...
       dataset:
         target: HYPIR.dataset.realesrgan.RealESRGANDataset
         params:
           file_meta:
             file_list: path/to/LSDIR_512_nulltxt.parquet
             image_path_prefix: ""
             image_path_key: image_path
             prompt_key: prompt
       ...
   ```
3. Start training:

   ```shell
   accelerate launch train.py --config configs/sd2_train.yaml
   ```
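The path-collection loop in the parquet step above can also be written with `pathlib`. This `collect_images` helper is an equivalent stdlib sketch for illustration, not part of the repo:

```python
from pathlib import Path


def collect_images(root, exts=(".jpg", ".jpeg", ".png")):
    """Recursively gather image file paths under root,
    matching extensions case-insensitively."""
    wanted = {e.lower() for e in exts}
    return sorted(str(p) for p in Path(root).rglob("*")
                  if p.suffix.lower() in wanted)
```

The sorted output also makes the resulting parquet file deterministic across runs, which helps when comparing experiments.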
For questions about code or paper, please email xqlin0613@gmail.com.
For authorization and collaboration inquiries, please email jinjin.gu@suppixel.ai.
The HYPIR ("Software") is made available for use, reproduction, and distribution strictly for non-commercial purposes. For the purposes of this declaration, "non-commercial" is defined as not primarily intended for or directed towards commercial advantage or monetary compensation.
By using, reproducing, or distributing the Software, you agree to abide by this restriction and not to use the Software for any commercial purposes without obtaining prior written permission from Dr. Jinjin Gu.
This declaration does not in any way limit the rights under any open source license that may apply to the Software; it solely adds a condition that the Software shall not be used for commercial purposes.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
For inquiries or to obtain permission for commercial use, please contact Dr. Jinjin Gu (jinjin.gu@suppixel.ai).