JAM is a rectified-flow model for lyrics-to-song generation that addresses the lack of fine-grained, word-level controllability in existing lyrics-to-song systems. Built on a compact 530M-parameter architecture with 16 LLaMA-style Transformer layers as its Diffusion Transformer (DiT) backbone, JAM delivers the precise vocal control musicians want in their workflows. Unlike previous models, JAM provides word- and phoneme-level timing control, allowing musicians to specify the exact placement of each vocal sound for improved rhythmic flexibility and expressive timing.
📣 29/07/25: We have released JAM-0.5, the first version of the AI song generator from Project Jamify!
- Fine-grained Word and Phoneme-level Timing Control: The first model to provide word-level timing and duration control in song generation, enabling precise prosody control for musicians
- Compact 530M Parameter Architecture: Less than half the size of existing models, enabling faster inference with reduced resource requirements
- Enhanced Lyric Fidelity: Achieves over 3× reduction in Word Error Rate (WER) and Phoneme Error Rate (PER) compared to prior work through precise phoneme boundary attention
- Global Duration Control: Controllable song duration up to 3 minutes and 50 seconds
- Aesthetic Alignment through Direct Preference Optimization: Iterative refinement using synthetic preference datasets to better align with human aesthetic preferences, eliminating manual annotation requirements
Check out the example generated music in the `generated_examples/` folder to hear what JAM can produce:
- `Hybrid Minds, Brodie - Heroin.mp3`: Electronic music with synthesized beats and electronic elements
- `Jade Bird - Avalanche.mp3`: Country music with acoustic guitar and folk influences
- `Rizzle Kicks, Rachel Chinouriri - Follow Excitement!.mp3`: Rap music with rhythmic beats and hip-hop style
These samples demonstrate JAM's ability to generate high-quality music across different genres while maintaining vocal intelligibility, style consistency, and musical coherence.
- Python 3.10 or higher
- CUDA-compatible GPU with sufficient VRAM (8GB+ recommended)
```bash
git clone https://github.com/declare-lab/jamify
cd jamify
```
The project includes an automated installation script; run it inside your own virtual environment:
```bash
bash install.sh
```
This script will:
- Initialize and update git submodules (DeepPhonemizer)
- Install Python dependencies from `requirements.txt`
- Install the JAM package in editable mode
- Install the DeepPhonemizer external dependency
If you prefer manual installation:
```bash
# Initialize submodules
git submodule update --init --recursive

# Install dependencies
pip install -r requirements.txt

# Install JAM package
pip install -e .

# Install DeepPhonemizer
pip install -e externals/DeepPhonemizer
```
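After either install path, a quick sanity check can confirm that a CUDA GPU is visible and report available VRAM (a minimal sketch, assuming the requirements pulled in PyTorch):

```python
# Minimal environment check: confirm a CUDA device and report its VRAM.
import torch

assert torch.cuda.is_available(), "JAM requires a CUDA-compatible GPU"
props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GiB")
```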
The easiest way to run inference is with the provided `inference.py` script:
```bash
python inference.py
```
This script will:
- Download the pre-trained JAM-0.5 model from Hugging Face
- Run inference with default settings
- Save generated audio to the `outputs/` directory
Create an input file at `inputs/input.json` with your songs:
```json
[
  {
    "id": "my_song",
    "audio_path": "inputs/reference_audio.mp3",
    "lrc_path": "inputs/lyrics.json",
    "duration": 180.0,
    "prompt_path": "inputs/style_prompt.txt"
  }
]
```
Required files:
- Audio file: Reference audio for style extraction
- Lyrics file: JSON with timestamped lyrics
- Prompt file: Text description of the desired style/genre; ignored in the default setting, which extracts style from the reference audio instead
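Before launching a run, it can save time to validate these paths and the lyrics schema up front. A hypothetical pre-flight check, using only the formats shown in this README:

```python
# Hypothetical pre-flight check for the files referenced by inputs/input.json.
import json
from pathlib import Path

entries = json.loads(Path("inputs/input.json").read_text())
for entry in entries:
    for key in ("audio_path", "lrc_path", "prompt_path"):
        assert Path(entry[key]).exists(), f"{entry['id']}: missing {key}: {entry[key]}"
    words = json.loads(Path(entry["lrc_path"]).read_text())
    assert all({"start", "end", "word"} <= w.keys() for w in words), "bad lyrics entry"
    print(f"{entry['id']}: {len(words)} timed words over {entry['duration']}s")
```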
For more control over the generation process:
```bash
# Basic usage with custom checkpoint
python -m jam.infer evaluation.checkpoint_path=path/to/model.safetensors

# With custom output directory
python -m jam.infer evaluation.checkpoint_path=path/to/model.safetensors evaluation.output_dir=my_outputs

# With custom configuration file
python -m jam.infer config=configs/my_config.yaml evaluation.checkpoint_path=path/to/model.safetensors
```
Use Accelerate for distributed inference:
```bash
# Basic usage with a custom Accelerate config
accelerate launch --config_file path/to/accelerate/config.yaml -m jam.infer evaluation.checkpoint_path=path/to/model.safetensors

# With a custom inference configuration file
accelerate launch --config_file path/to/accelerate/config.yaml -m jam.infer config=path/to/inference/config.yaml evaluation.checkpoint_path=path/to/model.safetensors
```
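If you do not have an Accelerate config file yet, Accelerate's own helper can write a default one to the standard location, which `accelerate launch` picks up when no `--config_file` is given (a sketch; run `accelerate config` interactively for multi-GPU setups):

```python
# Write a default Accelerate config (single machine, fp16) to the default path.
from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="fp16")
```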
Key configuration options:
- `evaluation.checkpoint_path`: Path to model checkpoint (required)
- `evaluation.output_dir`: Output directory (default: "outputs")
- `evaluation.test_set_path`: Input JSON file (default: "inputs/input.json")
- `evaluation.batch_size`: Batch size for inference (default: 1)
- `evaluation.num_samples`: Generate only the first n samples in test_set_path (null = all)
- `evaluation.vae_type`: VAE model type ("diffrhythm" or "stable_audio")
- `evaluation.ignore_style`: Ignore style prompts (default: false)
- `evaluation.use_prompt_style`: Use text prompts for style (default: false)
- `evaluation.num_style_secs`: Style audio duration in seconds (default: 30)
- `evaluation.random_crop_style`: Randomly crop style audio (default: false)
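These dotted keys follow OmegaConf-style overrides, so if you maintain a custom YAML config you can inspect the merged `evaluation` block before launching. A sketch, assuming the config is plain YAML readable by OmegaConf (`configs/my_config.yaml` is your own file):

```python
# Print the evaluation section of a custom config to verify option values.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/my_config.yaml")
print(OmegaConf.to_yaml(cfg.evaluation))
```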
The lyrics file is a JSON list of timestamped words:

```json
[
  {"start": 2.2, "end": 2.5, "word": "First word of lyrics"},
  {"start": 2.5, "end": 3.7, "word": "Second word of lyrics"},
  ...
]
```
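If you only have line-level timestamps (for example, from a standard .lrc file), one illustrative way to approximate this word-level format is to split each line's time span evenly across its words. This is a hypothetical helper, not part of JAM; a real forced aligner will give better timings:

```python
# Hypothetical helper: spread a timed lyric line evenly over its words.
import json

def line_to_words(start: float, end: float, text: str) -> list[dict]:
    words = text.split()
    step = (end - start) / len(words)
    return [
        {"start": round(start + i * step, 2),
         "end": round(start + (i + 1) * step, 2),
         "word": w}
        for i, w in enumerate(words)
    ]

print(json.dumps(line_to_words(2.2, 3.7, "First words of lyrics"), indent=2))
```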
The style prompt file is a plain-text description, for example:

```text
Electronic dance music with heavy bass and synthesizers
```
The input file lists one entry per song:

```json
[
  {
    "id": "unique_song_id",
    "audio_path": "path/to/reference.mp3",
    "lrc_path": "path/to/lyrics.json",
    "duration": 180.0,
    "prompt_path": "path/to/style.txt"
  }
]
```
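For batch generation, the input list can be assembled programmatically. A sketch with placeholder file names:

```python
# Build inputs/input.json for several songs; the ref/lyrics/style file names
# below are placeholders for your own assets.
import json
from pathlib import Path

entries = [
    {
        "id": f"song_{i}",
        "audio_path": f"inputs/ref_{i}.mp3",
        "lrc_path": f"inputs/lyrics_{i}.json",
        "duration": 180.0,
        "prompt_path": f"inputs/style_{i}.txt",
    }
    for i in range(3)
]
Path("inputs/input.json").write_text(json.dumps(entries, indent=2))
```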
Generated files are saved to the output directory:
```
outputs/
├── generated/               # Final trimmed audio files
├── generated_orig/          # Original generated audio
├── cfm_latents/             # Intermediate latent representations
├── local_files/             # Process-specific metadata
└── generation_config.yaml   # Configuration used for generation
```
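After a run, the final audio sits under `outputs/generated/`. A small sketch to enumerate what was produced (the audio file extension depends on the configured writer, so the glob is deliberately broad):

```python
# List final generated audio files and show the config the run used.
from pathlib import Path

out = Path("outputs")
for f in sorted((out / "generated").glob("*.*")):
    print(f"{f.name}: {f.stat().st_size / 1e6:.1f} MB")
print((out / "generation_config.yaml").read_text())
```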
Performance tips:
- GPU Memory: Use `evaluation.batch_size=1` on limited VRAM
- Multi-GPU: Use `accelerate launch` for faster processing of multiple samples
- Mixed Precision: Add `--mixed_precision=fp16` to reduce memory usage
```bash
# Make sure to specify the checkpoint path
python -m jam.infer evaluation.checkpoint_path=path/to/your/model.safetensors

# Reduce batch size or use mixed precision
accelerate launch --mixed_precision=fp16 -m jam.infer evaluation.checkpoint_path=model.safetensors

# Create an input.json file in the inputs/ directory or specify a custom path
python -m jam.infer evaluation.test_set_path=path/to/your/input.json evaluation.checkpoint_path=model.safetensors
```
The `inference.py` script automatically downloads the JAM-0.5 model. For manual download:
```python
from huggingface_hub import snapshot_download

model_path = snapshot_download(repo_id="declare-lab/jam-0.5")
```
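To wire the manual download into the module entry point, one option is to locate the checkpoint inside the downloaded snapshot and pass it along. A sketch; that the snapshot contains a single `.safetensors` checkpoint is an assumption:

```python
# Download the JAM-0.5 snapshot, locate its checkpoint, and launch inference.
import pathlib
import subprocess

from huggingface_hub import snapshot_download

model_path = snapshot_download(repo_id="declare-lab/jam-0.5")
ckpt = next(pathlib.Path(model_path).rglob("*.safetensors"))  # assumed layout
subprocess.run(
    ["python", "-m", "jam.infer", f"evaluation.checkpoint_path={ckpt}"],
    check=True,
)
```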
If you use JAM in your research, please cite:
```bibtex
@misc{jam2025,
  title={JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment},
  author={Renhang Liu and Chia-Yu Hung and Navonil Majumder and Taylor Gautreaux and Amir Ali Bagherzadeh and Chuan Li and Dorien Herremans and Soujanya Poria},
  year={2025}
}
```
JAM is the first open-source model released under Project Jamify, developed to facilitate academic research and creative exploration in AI song generation from lyrics. The model is subject to:
- Project Jamify License: Intended solely for non-commercial, academic, and entertainment purposes
- Stability AI Community License Agreement: Required due to use of Stability AI model components
- No copyrighted material was used in a way that would intentionally infringe on intellectual property rights
- JAM is not designed to reproduce or imitate any specific artist, label, or protected work
- Outputs generated by JAM must not be used to create or disseminate content that violates copyright laws
- Commercial use of JAM or its outputs is strictly prohibited
- Attribution Required: Must retain "This Stability AI Model is licensed under the Stability AI Community License, Copyright © Stability AI Ltd. All Rights Reserved."
Responsibility for the use of the model and its outputs lies entirely with the end user, who must ensure all uses comply with applicable legal and ethical standards.
For complete license terms, see `LICENSE.md` and `STABILITY_AI_COMMUNITY_LICENSE.md`.
For questions, concerns, or collaboration inquiries, please contact the Project Jamify team via the official repository.
For issues and questions:
- Open an issue on GitHub
- Check the troubleshooting section above
- Review the configuration options for parameter tuning