- Losses: adversarial losses, spectral losses, and perceptual losses (for 2D and 3D data, using LPIPS, RadImageNet, and 3DMedicalNet pre-trained models); a usage sketch follows this list.
- Metrics: Multi-Scale Structural Similarity Index Measure (MS-SSIM) and Maximum Mean Discrepancy (MMD).
- Inferer classes for Diffusion Models and Latent Diffusion Models (compatible with the MONAI style), with methods to train the models, sample synthetic images, and compute the likelihood of input data; a usage sketch follows this list.
- MONAI-compatible trainer engine (based on Ignite) for training models with reconstruction and adversarial components.
- Tutorials, including:
  - How to train VQ-VAEs, VQ-GANs, AutoencoderKLs, Diffusion Models, and Latent Diffusion Models on 2D and 3D data.
  - How to train a diffusion model for conditional image generation with classifier-free guidance.
  - Comparison of different diffusion model schedulers.
  - Diffusion models with different parameterisations (e.g., v-prediction and epsilon parameterisation).
  - Inpainting with diffusion models (using the RePaint method).
  - Super-resolution with Latent Diffusion Models (using Noise Conditioning Augmentation).
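
The losses and metrics listed above behave like standard PyTorch modules. The sketch below is illustrative only: the module paths and constructor arguments (e.g. `generative.losses.PerceptualLoss`, `network_type="radimagenet_resnet50"`, `generative.metrics.MMDMetric`) are assumptions based on the package layout and should be checked against the installed version.

```python
# Illustrative sketch only -- module paths and argument names are assumptions
# and may differ between releases of the package.
import torch

from generative.losses import PatchAdversarialLoss, PerceptualLoss
from generative.metrics import MMDMetric, MultiScaleSSIMMetric

# Perceptual loss backed by a RadImageNet pre-trained backbone (2D images).
perceptual_loss = PerceptualLoss(spatial_dims=2, network_type="radimagenet_resnet50")

# Least-squares adversarial loss, applied to discriminator logits during GAN training.
adversarial_loss = PatchAdversarialLoss(criterion="least_squares")

# Image-quality metrics between reconstructions and reference images.
ms_ssim = MultiScaleSSIMMetric(spatial_dims=2, data_range=1.0)
mmd = MMDMetric()

reconstruction = torch.rand(2, 1, 256, 256)
reference = torch.rand(2, 1, 256, 256)

print("perceptual:", perceptual_loss(reconstruction, reference).item())
print("MS-SSIM:", ms_ssim(reconstruction, reference).mean().item())
print("MMD:", mmd(reconstruction, reference).item())
```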
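
Similarly, the following is a rough sketch of how an Inferer ties a diffusion network and a noise scheduler together for a training step and for sampling. The class and argument names (`DiffusionInferer`, `DiffusionModelUNet`, `DDPMScheduler`, `inferer.sample(...)`) are assumptions drawn from the project's tutorials and may vary between versions.

```python
# Sketch of training and sampling with a DiffusionInferer (class and argument
# names are assumptions based on the project's tutorials; verify against your version).
import torch
import torch.nn.functional as F

from generative.inferers import DiffusionInferer
from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DiffusionModelUNet(
    spatial_dims=2, in_channels=1, out_channels=1,
    num_channels=(32, 64, 64), attention_levels=(False, False, True),
    num_res_blocks=1, num_head_channels=64,
).to(device)
scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(scheduler)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(2, 1, 64, 64, device=device)  # stand-in for a real data batch
noise = torch.randn_like(images)
timesteps = torch.randint(0, scheduler.num_train_timesteps, (images.shape[0],), device=device).long()

# Training step: the inferer noises the inputs at the sampled timesteps and
# returns the model's noise prediction (epsilon parameterisation).
optimizer.zero_grad(set_to_none=True)
noise_pred = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
loss = F.mse_loss(noise_pred, noise)
loss.backward()
optimizer.step()

# Sampling: start from pure noise and iterate the scheduler's reverse process.
with torch.no_grad():
    samples = inferer.sample(
        input_noise=torch.randn(1, 1, 64, 64, device=device),
        diffusion_model=model, scheduler=scheduler,
    )
```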
Roadmap
Our short-term goals are available in the Milestones section of the repository and in this document.
In the longer term, we aim to integrate the generative models into the MONAI core library, supporting tasks such as image synthesis, anomaly detection, MRI reconstruction, and domain transfer.
Installation
To install MONAI Generative Models, we recommend cloning the codebase and installing it directly:
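
A typical clone-and-install sequence might look like the following; the repository URL and the editable install are assumptions, so please check the project's installation instructions for the exact commands.

```bash
# Assumed repository URL and install command -- verify against the project docs.
git clone https://github.com/Project-MONAI/GenerativeModels.git
cd GenerativeModels
pip install -e .
```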