Latest News
- 2025/06 — Ultravox 0.6 available
- 2025/02 — Ultravox 0.5 available
- 2024/11 — Ultravox 0.4.1 available
- 2024/08 — Ultravox 0.4 available
- 2024/08 — Ultravox 0.3 available
- 2024/08 — Preview of Ultravox APIs available, more information here
Key Links
- Ultravox Realtime — Build real-time Voice AI agents on top of the Ultravox model
- Hugging Face — Our Hugging Face page
Ultravox is a new kind of multimodal LLM that can understand text as well as human speech, without the need for a separate Automatic Speech Recognition (ASR) stage. Building on research like AudioLM, SeamlessM4T, Gazelle, SpeechGPT, and others, Ultravox extends any open-weight LLM with a multimodal projector that converts audio directly into the high-dimensional embedding space used by the LLM. We've trained versions on Llama 3, Mistral, and Gemma. This direct coupling allows Ultravox to respond much more quickly than systems that combine separate ASR and LLM components. In the future, it will also allow Ultravox to natively understand the paralinguistic cues of timing and emotion that are omnipresent in human speech.
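The projector idea can be sketched in a few lines: a learned linear map carries each audio-encoder output frame into the LLM's embedding space, so audio frames become inputs the LLM can attend to like token embeddings. The dimensions and weights below are toy values for illustration, not Ultravox's actual architecture.

```python
# Toy sketch of a multimodal projector: a linear map from the audio encoder's
# feature space (4-dim here) into the LLM's embedding space (6-dim here).
# Real dimensions are far larger; this only illustrates the shape of the idea.
def project(frame, weights):
    """Multiply one encoder frame by an (out_dim x in_dim) weight matrix."""
    return [sum(w * x for w, x in zip(row, frame)) for row in weights]

audio_frame = [0.1, -0.2, 0.3, 0.05]           # one audio-encoder output frame
weights = [[0.5, 0.0, 0.0, 0.0],
           [0.0, 0.5, 0.0, 0.0],
           [0.0, 0.0, 0.5, 0.0],
           [0.0, 0.0, 0.0, 0.5],
           [0.1, 0.1, 0.1, 0.1],
           [0.0, 0.0, 0.0, 0.0]]
embedding = project(audio_frame, weights)
print(len(embedding))  # 6: the frame now lives in the LLM's embedding space
```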
Ultravox currently takes in audio and emits streaming text. As we evolve the model, we'll train it to be able to emit a stream of speech tokens that can then be converted directly into raw audio by an appropriate unit vocoder.
Our default model is built on top of Llama 3.3 70B. We also have an 8B variant available on Hugging Face.
Ultravox can be trained against any open-weight model. See below for more details on training.
See Ultravox in action on our demo page. You can build your own voice-to-voice agents on our Realtime platform at ultravox.ai.
Join us on our Discord server here.
If you're interested in working on Ultravox full-time, we're hiring! Check out our jobs page here.
You can try out Ultravox using your own audio content (as a WAV file) by spinning up an Ultravox instance on our partner, BaseTen: https://www.baseten.co/library/ultravox/. They offer free credits to get started.
If you're interested in running Ultravox in a real-time capacity, we offer a set of managed APIs as well. You can learn more about getting access to those here.
You can download the latest weights from the Ultravox Hugging Face page.
Read on if you're interested in training your own version of Ultravox.
Install the basic tools:
- Homebrew is a package manager for macOS that also mostly works for Linux. If you're running Debian or Ubuntu Linux, you can alternatively get by with apt.
- Just simplifies our shell workflows. It frequently functions as our interface to all the other tools.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew update
brew install just
We recommend using pyenv to manage your Python version, since the project uses Poetry for its environment:
brew install xz
brew install pyenv
pyenv init
pyenv install 3.11
pyenv global 3.11
# Optional
pyenv shell 3.11
Note: Use of conda is NOT recommended with Poetry
After creating a virtual environment, install the required packages using just and poetry:
just install
If you plan to use augmentations (optional), you may also want to install the system packages they depend on. You can do that with just install-augs-system. Read more about augmentations here.
We're using Poetry to manage the Python virtual environment. You can inspect your environment with poetry env info.
Currently, we keep both the LLM and the audio encoder frozen and train only the adapter/projector. Training Ultravox v0.4 took 2-3 hours on 8xH100 GPUs for 14K training steps.
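The freezing scheme described above can be illustrated with a toy flag table: only the projector's parameters receive gradient updates, while the LLM and audio-encoder parameters stay fixed. The parameter names below are made up for illustration and are not the actual module names in the codebase.

```python
# Illustrative sketch of the freezing scheme: only projector parameters are
# trainable; encoder and LLM parameters are frozen. (Toy flags and made-up
# parameter names, not the real training code.)
model_params = {
    "audio_encoder.layer0.weight": {"requires_grad": False},  # frozen
    "projector.linear.weight":     {"requires_grad": True},   # trained
    "llm.layers.0.attn.weight":    {"requires_grad": False},  # frozen
}

trainable = [name for name, p in model_params.items() if p["requires_grad"]]
print(trainable)  # only the projector's parameters get gradient updates
```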
Why would you want to (re-) train Ultravox? Here are a few scenarios:
- You want to use a different LLM or audio encoder backbone.
  a. In this case you need to re-train the adapter. You can use example_config.yaml, which contains the config for our latest release; you should be able to simply change the base LLM or encoder by specifying --text-model <hf-model-id-for-llm> and/or --audio-model <hf-model-id-for-encoder>.
- You want to improve the knowledge of the model.
  a. We suggest either using RAG on the fly (no training needed) or fine-tuning the LLM backbone instead. Fine-tuning the LLM backbone does not require re-training Ultravox (i.e., the existing adapter will work).
- You want to use your own audio data, for example to add support for a new language.
  a. First, prepare your dataset: at a bare minimum, the samples should have an audio and a text continuation field.
  b. Take a look at ds_tool.py and continuation.jinja, as well as our variant of Common Voice that was created using ds_tool to add the continuation field.
  c. Add your dataset to the dataset mix in example_config.yaml and train.
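As a concrete illustration of the dataset-preparation step, a minimal sample might look like the following. The required field names (audio, continuation) come from the text above; the nested audio layout mirrors common Hugging Face audio datasets and is otherwise an assumption.

```python
# Hypothetical minimal training sample. Only the `audio` and `continuation`
# field names are prescribed; the waveform values and text are made up.
sample = {
    "audio": {"array": [0.0, 0.01, -0.02, 0.03], "sampling_rate": 16000},
    "continuation": "and then the speaker finished the sentence.",
}

required = {"audio", "continuation"}
missing = required - sample.keys()
print(sorted(missing))  # []: the sample has the minimum required fields
```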
There's no one-size-fits-all approach. If you need help, you can find us on our Discord server here.
We historically did most of our training on the MosaicML platform, so most of our tooling and docs are Mosaic-related. The MosaicML platform is being shut down at the end of July 2025, but we've left the configs here for reference; you can do the same training on your own GPUs without much difficulty. The instructions below assume you have the environment set up (run just install). You can also take a look at setup.sh.
To kick off a training run you can do:
poetry run python -m ultravox.training.train --config_path ultravox/training/configs/example_config.yaml
For DDP training, make sure to use torchrun. We also recommend prefetching weights in advance:
TRAIN_ARGS="--config_path ultravox/training/configs/example_config.yaml"
poetry run python -m ultravox.training.helpers.prefetch_weights $TRAIN_ARGS
poetry run torchrun --nproc_per_node=8 -m ultravox.training.train $TRAIN_ARGS
For a debug run, you can use smaller models, datasets, or batch size. Here's a config that uses TinyLlama as the LLM backbone:
poetry run python -m ultravox.training.train --config_path ultravox/training/configs/asr_tinyllama_100s.yaml --batch_size 1 --report_logs_to tensorboard
We use SimpleParsing for configs. Configs are composable (i.e. you can specify zero or many configs) and meta_config.yaml
is always used as the default.
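Composable configs generally resolve like a left-to-right merge: defaults from meta_config.yaml come first, then each config file you pass, with later values winning (and CLI flags typically overriding both). The merge below is a toy illustration of that precedence, not SimpleParsing's actual implementation, and the keys and values are made up.

```python
# Toy illustration of config precedence: later sources override earlier ones.
# (Not SimpleParsing's real code; keys and values are illustrative.)
def resolve(*sources):
    merged = {}
    for src in sources:
        merged.update(src)   # later sources win on key collisions
    return merged

meta_defaults = {"batch_size": 4, "report_logs_to": "wandb"}
experiment_cfg = {"batch_size": 2}
cli_overrides = {"report_logs_to": "tensorboard"}

final = resolve(meta_defaults, experiment_cfg, cli_overrides)
print(final)  # {'batch_size': 2, 'report_logs_to': 'tensorboard'}
```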
See configs_base.py to find the parameters you can modify, such as --text-model, --device, --exp-name, etc.
- For multi-node training, all you need to do is update the compute.gpus line in mcli_train.yaml to request more GPUs for training.
- All factors of 8 are supported.
- For more than 4 nodes, you might need to increase val_dataset_args.max_samples.
NOTE: W&B doesn't currently support multiple nodes; you'll only get info from the main node. It's possible to support this with grouped runs, so let us know if that's important for you.
For inference or evaluations, you can use:
just eval --config_path ultravox/evaluation/configs/eval_config.yaml
where eval_config.yaml
is a config file that specifies the model, datasets, and configurations to use for inference or evaluation. If your dataset is not already defined in ultravox, you need to create a config file for your dataset in ultravox/data/configs/
(with the appropriate eval_config
field to specify evaluation metrics and arguments), and register it in ultravox/data/registry.py
. Please refer to examples in ultravox/data/configs/
.
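Conceptually, registering a dataset boils down to mapping a name to its config so the evaluation code can look it up later. The sketch below shows that generic pattern; it is not the actual registry.py API, and the dataset name, path, and eval_config contents are all invented.

```python
# Generic sketch of a name -> config registry pattern. Everything here
# (names, paths, metric choice) is hypothetical, for illustration only.
DATASET_REGISTRY = {}

def register_dataset(name, config):
    """Register a dataset config under a unique name."""
    if name in DATASET_REGISTRY:
        raise ValueError(f"dataset {name!r} already registered")
    DATASET_REGISTRY[name] = config

register_dataset("my_lang_commonvoice", {
    "path": "my-org/my-dataset",        # hypothetical HF dataset id
    "eval_config": {"metric": "wer"},   # hypothetical metric choice
})
print("my_lang_commonvoice" in DATASET_REGISTRY)  # True
```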
The Justfile is a good resource for finding popular commands. Here are a few:
just update # update dependencies
just format # run formatting (black, isort, autoflake)
just test # run tests
just python # activate venv and run python