English | 简体中文 |
Efficient, easy-to-use platform for inference and serving of local LLMs, including an OpenAI compatible API server.
- OpenAI compatible API server provided for serving LLMs.
- Highly extensible trait-based system to allow rapid implementation of new module pipelines.
- Streaming support in generation.
- Efficient management of key-value cache with PagedAttention.
- Continuous batching (batched decoding for incoming requests over time).
- In-situ quantization (and in-situ Marlin format conversion)
- GPTQ/Marlin format quantization (4-bit)
- Support for Mac/Metal devices
- Multi-GPU inference (both multi-process and multi-threaded modes)
- Multi-node inference with MPI runner
Currently, candle-vllm supports chat serving for the following model structures.
Show supported model architectures
| Model ID | Model Type | Supported | Speed (A100, BF16) | Throughput (BF16, bs=16) | Quantized (A100, Q4K or Marlin) | Throughput (GPTQ/Marlin, bs=16) |
|---|---|---|---|---|---|---|
| #1 | LLAMA | ✅ | 65 tks/s (8B) | 553 tks/s (8B) | 75 tks/s (8B), 115 tks/s (8B, Marlin) | 968 tks/s (8B) |
| #2 | Mistral | ✅ | 70 tks/s (7B) | 585 tks/s (7B) | 96 tks/s (7B), 115 tks/s (7B, Marlin) | 981 tks/s (7B) |
| #3 | Phi | ✅ | 107 tks/s (3.8B) | 744 tks/s (3.8B) | 135 tks/s (3.8B) | TBD |
| #4 | QWen2/Qwen3 | ✅ | 81 tks/s (8B) | 831 tks/s (8B) | - | TBD |
| #4 | Yi | ✅ | 75 tks/s (6B) | 566 tks/s (6B) | 105 tks/s (6B) | TBD |
| #5 | StableLM | ✅ | 99 tks/s (3B) | TBD | - | TBD |
| #6 | Gemma-2/Gemma-3 | ✅ | 60 tks/s (9B) | TBD | 73 tks/s (9B, Marlin) | 587 tks/s (9B) |
| #7 | DeepSeek-R1-Distill-QWen | ✅ | 48 tks/s (14B) | TBD | 62 tks/s (14B) | TBD |
| #8 | DeepSeek-R1-Distill-LLaMa | ✅ | 65 tks/s (8B) | TBD | 108 tks/s (8B) | TBD |
| #9 | DeepSeek V2/V3/R1 | ✅ | TBD | TBD | ~20 tks/s (AWQ 671B, tp=8, offloading) | TBD |
| #10 | QwQ-32B | ✅ | 30 tks/s (32B, tp=2) | TBD | 36 tks/s (32B, Q4K, GGUF) | TBD |
| #11 | GLM4 | ✅ | 55 tks/s (9B) | TBD | 92 tks/s (9B, Q4K, GGUF) | TBD |
- Nvidia GPU and Apple Silicon
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh #install rust, 1.83.0+ required
sudo apt install libssl-dev pkg-config -y
git clone git@github.com:EricLBuehler/candle-vllm.git
cd candle-vllm
## Mac/Metal (single-node only)
cargo build --release --features metal
#Make sure the CUDA Toolkit can be found in the system PATH
export PATH=$PATH:/usr/local/cuda/bin/
#CUDA: single-node compilation (single gpu, or multi-gpus on single machine)
cargo build --release --features cuda,nccl
#CUDA: single-node compilation with flash attention (takes a few minutes for the first build; faster inference for long contexts)
cargo build --release --features cuda,nccl,flash-attn
#CUDA: multi-node compilation with MPI (multi-GPU, multiple machines)
sudo apt update
sudo apt install libopenmpi-dev openmpi-bin -y #install mpi
sudo apt install clang libclang-dev
cargo build --release --features cuda,nccl,mpi #build with mpi feature
# or
cargo build --release --features cuda,nccl,flash-attn,mpi #build with flash-attn and mpi features
- [ENV_PARAM] cargo run [BUILD_PARAM] -- [PROGRAM_PARAM] [MODEL_ID/MODEL_WEIGHT_PATH] [MODEL_TYPE] [MODEL_PARAM]
Show details
Example:
[RUST_LOG=warn] cargo run [--release --features cuda,nccl] -- [--log --dtype bf16 --p 2000 --d 0,1 --mem 8192] [--w /home/weights/Qwen3-27B-GPTQ-4Bit]
ENV_PARAM: RUST_LOG=warn
BUILD_PARAM: --release --features cuda,nccl
PROGRAM_PARAM: --log --dtype bf16 --p 2000 --d 0,1 --mem 8192
MODEL_WEIGHT_PATH: --w /home/weights/Qwen3-27B-GPTQ-4Bit (or use --m to specify a model-id)
where `--mem` (kvcache-mem-gpu) is the key parameter for controlling KV cache usage (increase it for large batches); supported model archs include ["llama", "llama3", "mistral", "phi2", "phi3", "qwen2", "qwen3", "glm4", "gemma", "gemma3", "yi", "stable-lm", "deep-seek"].
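For example, the sketch below reuses the invocation above with a larger KV cache budget for batched serving (the weight path and device ids are placeholders; adjust them to your setup):

```shell
# Placeholder path/devices; a larger --mem reserves more GPU memory for the KV cache,
# which lets more concurrent sequences be decoded before new requests have to wait.
RUST_LOG=warn cargo run --release --features cuda,nccl -- \
    --log --dtype bf16 --p 2000 --d 0,1 --mem 16384 \
    --w /home/weights/Qwen3-27B-GPTQ-4Bit
```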
- Run Uncompressed models
Show command
Local Path
target/release/candle-vllm --p 2000 --w /home/DeepSeek-R1-Distill-Llama-8B/
Model-ID (download from Huggingface)
target/release/candle-vllm --m deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- Run GGUF models
Show command
Local Path (with port, dtype, sampling parameter specified)
target/release/candle-vllm --f /home/data/DeepSeek-R1-0528-Qwen3-8B-Q2_K.gguf
Model-ID (download from Huggingface)
target/release/candle-vllm --m unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF --f DeepSeek-R1-0528-Qwen3-8B-Q2_K.gguf
- Run GGUF models on Apple Silicon
Show command
Local Path (assume model downloaded in /home)
cargo run --release --features metal -- --f /home/qwq-32b-q4_k_m.gguf
Model-ID (download from Huggingface)
cargo run --release --features metal -- --m Qwen/QwQ-32B-GGUF --f qwq-32b-q4_k_m.gguf
- Run any uncompressed model as quantized with in-situ quantization
Show command
Simply add the `isq` parameter when running an unquantized model:
target/release/candle-vllm --p 2000 --w /home/DeepSeek-R1-Distill-Llama-8B/ llama3 --isq q4k
Options for the in-situ `isq` parameter: ["q4_0", "q4_1", "q5_0", "q5_1", "q8_0", "q2k", "q3k", "q4k", "q5k", "q6k"]
- Run Marlin-compatible GPTQ models (4-bit GPTQ, 128-group, desc_act=False)
Show command
Local Path
target/release/candle-vllm --w /home/DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g
Model-ID (download from Huggingface)
target/release/candle-vllm --m thesven/Llama-3-8B-GPTQ-4bit
Convert any uncompressed model to Marlin-compatible format, then run it:
python3 examples/convert_marlin.py --src /home/DeepSeek-R1-Distill-Qwen-14B/ --dst /home/DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g
target/release/candle-vllm --w /home/DeepSeek-R1-Distill-Qwen-14B-GPTQ_4bit-128g
- Run Marlin-compatible AWQ models
Show command
Convert AWQ model to Marlin-compatible format
python3 examples/convert_awq_marlin.py --src /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4/ --dst /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-Marlin/ --bits 4 --method awq --group 128 --nk False
Run the converted AWQ model
target/release/candle-vllm --d 0 --w /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-Marlin/
- Run Marlin-format models
Show command
target/release/candle-vllm --w /home/DeepSeek-R1-Distill-Qwen-14B-GPTQ-Marlin/
- Run Large models using multi-process mode (Multi-GPU)
Show command
QwQ-32B BF16 model on two GPUs
cargo run --release --features cuda,nccl -- --d 0,1 --w /home/QwQ-32B/
QwQ-32B 4-bit AWQ model on two GPUs
- Convert AWQ model to Marlin-compatible format
python3 examples/convert_awq_marlin.py --src /home/QwQ-32B-AWQ/ --dst /home/QwQ-32B-AWQ-Marlin/ --bits 4 --method awq --group 128 --nk False
- Run the converted AWQ model
cargo run --release --features cuda,nccl -- --d 0,1 --w /home/QwQ-32B-AWQ-Marlin/
Note: the number of GPUs (`--d`) used must be a power of two (e.g., 2, 4, or 8).
- Run Large models using multi-threaded mode (Multi-GPU, for debugging)
Show command
Simply add the `--multithread` parameter.
QwQ-32B BF16 model on two GPUs
cargo run --release --features cuda,nccl -- --multithread --d 0,1 --w /home/QwQ-32B/
If you encounter problems in multi-threaded multi-GPU mode, you may:
export NCCL_P2P_DISABLE=1 # disable P2P, which can cause illegal memory access in certain environments
- Run DeepSeek-R1 (671B/685B) on lower GPU memory (CPU offloading)
Show command
1. Convert DeepSeek-R1-AWQ model to Marlin-compatible format
python3 examples/convert_awq_marlin.py --src /data/DeepSeek-R1-AWQ/ --dst /data/DeepSeek-R1-AWQ-Marlin/
2. Run DeepSeek-R1 model on 8 x A100(40GB)
cargo run --release --features cuda,nccl -- --log --d 0,1,2,3,4,5,6,7 --w /data/DeepSeek-R1-AWQ-Marlin/ --num-experts-offload-per-rank 15
Note: this setup offloads 15 experts per rank (120 of the 256 experts in total) to the CPU (around 150GB of additional host memory is required). During inference, the offloaded experts are swapped back into GPU memory as needed. If you have even less GPU memory, consider increasing the `--num-experts-offload-per-rank` parameter (up to a maximum of 32 experts per rank in this case), as shown below.
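For example, a sketch of a more aggressive offload on the same 8-GPU setup (the value 20 is illustrative only):

```shell
# Offload 20 experts per rank (160 of 256 in total) to the CPU; this needs more host memory
# and may slow decoding, since more experts are swapped between CPU and GPU on demand.
cargo run --release --features cuda,nccl -- --log --d 0,1,2,3,4,5,6,7 \
    --w /data/DeepSeek-R1-AWQ-Marlin/ --num-experts-offload-per-rank 20
```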
- Run DeepSeek-R1 (671B/685B) on Multi-node
Show command
1. Install MPI and build with MPI feature
sudo apt update
sudo apt install libopenmpi-dev openmpi-bin -y #install mpi
sudo apt install clang libclang-dev
#clone the repo into the same directory on both nodes and build
cargo build --release --features cuda,nccl,mpi #build with mpi feature
2. Convert AWQ deepseek to Marlin-compatible format
python3 examples/convert_awq_marlin.py --src /data/DeepSeek-R1-AWQ/ --dst /data/DeepSeek-R1-AWQ-Marlin/
3. Configure the multi-node environment
The MPI runner requires identical hardware and software configurations on all nodes; make sure the weights and the candle-vllm binary are located in identical folders on each node. The nodes must have passwordless SSH access to each other (port 22 in this case; as root if `--allow-run-as-root` is used). `%NET_INTERFACE%` is the active network interface, which can be obtained with the command 'ifconfig -a'. If InfiniBand is not available on the nodes, you may disable it by adding the env setting "-x NCCL_IB_DISABLE=1". The `hostfile` can be defined as follows.
Example (two nodes, each with 8 GPUs):
192.168.1.100 slots=8
192.168.1.101 slots=8
4. Run the model on two nodes with MPI runner
sudo mpirun -np 16 -x RUST_LOG=info -hostfile ./hostfile --allow-run-as-root -bind-to none -map-by slot --mca plm_rsh_args "-p 22" --mca btl_tcp_if_include %NET_INTERFACE% target/release/candle-vllm --log --d 0,1,2,3,4,5,6,7 --w /data/DeepSeek-R1-AWQ-Marlin/ deep-seek
- Run with NUMA binding
Show command
Prerequisite: ensure your machine has more than one NUMA node (i.e., more than one physical CPU), and install numactl:
sudo apt-get install numactl
Suppose your machine has 8 GPUs and 2 NUMA nodes, with each set of 4 GPUs bound to a different NUMA node. To achieve optimal performance during inference using all GPUs, use the following NUMA binding:
MAP_NUMA_NODE=0,0,0,0,1,1,1,1 numactl --cpunodebind=0 --membind=0 target/release/candle-vllm --d 0,1,2,3,4,5,6,7 --w /home/data/DeepSeek-V2-Chat-AWQ-Marlin
To use only 4 GPUs, you can apply this NUMA binding:
MAP_NUMA_NODE=0,0,0,0 numactl --cpunodebind=0 --membind=0 target/release/candle-vllm --d 0,1,2,3 --w /home/data/DeepSeek-V2-Chat-AWQ-Marlin
where `numactl --cpunodebind=0 --membind=0` above sets the NUMA binding for the master rank (master process), which should match `MAP_NUMA_NODE`.
Note: the exact NUMA binding sequence may vary depending on your hardware configuration.
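As an illustrative variant, assuming (hypothetically) that GPUs 4-7 sit on NUMA node 1, binding a run on those four GPUs might look like:

```shell
# Hypothetical topology: GPUs 4-7 attached to NUMA node 1. MAP_NUMA_NODE lists the NUMA node
# per selected GPU, and the numactl binding of the master process should match MAP_NUMA_NODE.
MAP_NUMA_NODE=1,1,1,1 numactl --cpunodebind=1 --membind=1 \
    target/release/candle-vllm --d 4,5,6,7 --w /home/data/DeepSeek-V2-Chat-AWQ-Marlin
```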
- Run Qwen3-Reranker
Show command
- Start the backend service for the Qwen3-Reranker model
target/release/candle-vllm --p 2000 --f /home/data/Qwen3-Reranker-4B-q4_k_m.gguf
- Start the chatbot with a system prompt for Qwen3-Reranker
python3 examples/chat.py --thinking True --system_prompt "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."
- Chat with the chatbot using any query/document pair, for example:
<Query>: What is the capital of China?\n\n<Document>: The capital of China is Beijing.
Observe the answer:
🙋 Please Input (Ctrl+C to start a new chat or exit): <Query>: What is the capital of China?\n\n<Document>: The capital of China is Beijing.
Candle-vLLM:
────────────────────────────────────────
<think>
Okay, the user is asking for the capital of China. The document provided is a direct answer: "The capital of China is Beijing." I need to check if this is correct. From my knowledge, Beijing is indeed the capital of China. The answer is correct and straightforward. The document meets the requirement as it provides the accurate information. So the answer is yes.
</think>
yes
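Since the backend exposes an OpenAI-compatible endpoint, the same query/document pair can also be scored with a plain HTTP request. A minimal sketch, assuming the backend service from the first step is listening on port 2000 (the model name is a placeholder; the scheme/host may differ in your deployment):

```shell
curl -X POST "https://127.0.0.1:2000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-reranker",
    "messages": [
      {"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
      {"role": "user", "content": "<Query>: What is the capital of China?\n\n<Document>: The capital of China is Beijing."}
    ],
    "max_tokens": 128
  }'
```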
Run a chat frontend after starting the backend service
Chat frontend (any frontend compatible with the OpenAI API; simple options are available below):
- Option 1: Chat with chat.py (for simple tests)
Show Option 1
Install API and chatbot dependencies (openai package is only used for local chat with candle-vllm)
python3 -m pip install openai rich click
Chat with the mini chatbot (plain text)
python3 examples/chat.py
Pass generation parameters (for reasoning models, add `--thinking True`)
python3 examples/chat.py --temperature 0.7 --top_k 64 --top_p 0.9 --thinking True --system_prompt "Thinking big!"
Chat with the mini chatbot (live update with Markdown; may cause flicker)
python3 examples/chat.py --live
- Option 2: Chat with naive ChatUI (or the popular dify frontend)
Show Option 2
Install naive ChatUI and its dependencies:
git clone git@github.com:guoqingbao/candle-vllm-demo.git
cd candle-vllm-demo
apt install npm #install npm if needed
npm install n -g #update nodejs if needed
n stable #update nodejs if needed
npm i -g pnpm #install pnpm package manager
pnpm install #install ChatUI dependencies
Launching the ChatUI:
pnpm run dev # run the ChatUI
Troubleshooting the Node.js error
ENOSPC: System limit for number of file watchers reached
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
- Option 3: Chat completion request with HTTP post
Show Option 3
curl -X POST "https://127.0.0.1:2000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama7b",
    "messages": [
      {"role": "user", "content": "Explain how to best learn Rust."}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
    "stop": {"Single": "</s>"}
  }'
Sample response:
{"id":"cmpl-53092967-c9cf-40e0-ae26-d7ac786d59e8","choices":[{"message":{"content":" Learning any programming language requires a combination of theory, practice, and dedication. Here are some steps and resources to help you learn Rust effectively:\n\n1. Start with the basics:\n\t* Understand the syntax and basic structure of Rust programs.\n\t* Learn about variables, data types, loops, and control structures.\n\t* Familiarize yourself with Rust's ownership system and borrowing mechanism.\n2. Read the Rust book:\n\t* The Rust book is an official resource that provides a comprehensive introduction to the language.\n\t* It covers topics such","role":"[INST]"},"finish_reason":"length","index":0,"logprobs":null}],"created":1718784498,"model":"llama7b","object":"chat.completion","usage":{"completion_tokens":129,"prompt_tokens":29,"total_tokens":158}}
- Option 4: Chat completion with the openai package
Show Option 4
In your terminal, install the `openai` Python package by running `pip install openai` (version 1.3.5 was used here). Then, create a new Python file and write the following code:
import openai

openai.api_key = "EMPTY"
openai.base_url = "https://localhost:2000/v1/"

completion = openai.chat.completions.create(
    model="llama",
    messages=[
        {
            "role": "user",
            "content": "Explain how to best learn Rust.",
        },
    ],
    max_tokens=64,
)
print(completion.choices[0].message.content)
After the candle-vllm service is running, run the Python script and enjoy efficient inference with an OpenAI compatible API server!
Batched requests
Install openai API first
python3 -m pip install openai
Run the benchmark test
python3 examples/benchmark.py --batch 16 --max_tokens 1024
Refer to examples/benchmark.py:
async def benchmark():
    model = "mistral7b"
    max_tokens = 1024
    # 16 requests
    prompts = ["Explain how to best learn Rust.",
               "Please talk about deep learning in 100 words.",
               "Do you know the capital city of China? Talk the details of you known.",
               "Who is the best female actor in the world? Explain why.",
               "How to dealing with depression?",
               "How to make money in short time?",
               "What is the future trend of large language model?",
               "The famous tech companies in the world.",
               "Explain how to best learn Rust.",
               "Please talk about deep learning in 100 words.",
               "Do you know the capital city of China? Talk the details of you known.",
               "Who is the best female actor in the world? Explain why.",
               "How to dealing with depression?",
               "How to make money in short time?",
               "What is the future trend of large language model?",
               "The famous tech companies in the world."]

    # send 16 chat requests at the same time
    tasks: List[asyncio.Task] = []
    for i in range(len(prompts)):
        tasks.append(asyncio.create_task(chat_completion(model, max_tokens, prompts[i])))

    # obtain the corresponding stream object for each request
    outputs: List[Stream[ChatCompletionChunk]] = await asyncio.gather(*tasks)

    # tasks for streaming chat responses
    tasks_stream: List[asyncio.Task] = []
    for i in range(len(outputs)):
        tasks_stream.append(asyncio.create_task(stream_response(i, outputs[i])))

    # gathering the response texts
    outputs: List[(int, str)] = await asyncio.gather(*tasks_stream)

    # print the results; you may find chat completion statistics in the backend server (i.e., candle-vllm)
    for idx, output in outputs:
        print("\n\n Response {}: \n\n {}".format(idx, output))


asyncio.run(benchmark())
- Loading unquantized models as GGUF quantized or Marlin format
Show quantization config
Candle-vllm supports in-situ quantization, allowing the transformation of default weights (F32/F16/BF16) into any GGML/GGUF format, or of 4-bit GPTQ/AWQ weights into Marlin format, during model loading. This feature helps conserve GPU memory and speeds up inference, making it more efficient on consumer-grade GPUs (e.g., RTX 4090). To use this feature, simply supply the `isq` parameter when running candle-vllm.
For unquantized models:
cargo run --release --features cuda -- --p 2000 --w /home/Meta-Llama-3.1-8B-Instruct/ llama3 --isq q4k
Options for the `isq` parameter: ["q4_0", "q4_1", "q5_0", "q5_1", "q8_0", "q2k", "q3k", "q4k", "q5k", "q6k"]
For quantized 4-bit GPTQ models:
cargo run --release --features cuda -- --p 2000 --w /home/mistral_7b-int4/
Please note for Marlin:
- It may take a few minutes to load F32/F16/BF16 models with in-situ quantization;
- Marlin-format in-situ conversion only supports 4-bit GPTQ (with sym=True, groupsize=128 or -1, desc_act=False) and 4-bit AWQ (after conversion using the given script; refer to "Other Usage");
- The Marlin format is only supported on the CUDA platform.
- KV Cache config, sampling parameters, etc.
Show details
The `--mem` (kvcache-mem-gpu) parameter controls the KV cache size (default 4GB of GPU memory); increase it for large-batch and long-context inference.
For chat history settings, set `record_conversation` to `true` to let candle-vllm remember the chat history. By default, candle-vllm does not record chat history; instead, the client sends both the messages and the contextual history to candle-vllm. If `record_conversation` is set to `true`, the client sends only new chat messages to candle-vllm, and candle-vllm is responsible for recording the previous chat messages. However, this approach requires per-session chat recording, which is not yet implemented, so the default `record_conversation=false` is recommended.
For chat streaming, the `stream` flag in the chat request needs to be set to `True`.
cargo run --release --features cuda -- --p 2000 --w /home/mistral_7b/
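For example, a minimal streaming sketch that reuses the curl request from Option 3 (adjust the host, port, and model name to your deployment):

```shell
# "stream": true asks the server to return the completion incrementally in chunks
curl -X POST "https://127.0.0.1:2000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama7b",
    "messages": [{"role": "user", "content": "Explain how to best learn Rust."}],
    "max_tokens": 128,
    "stream": true
  }'
```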
The `--max-gen-tokens` parameter controls the maximum number of output tokens per chat response. By default, it is set to 1/5 of max_sequence_len.
For consumer GPUs, it is suggested to run models in GGML formats (or Marlin format), e.g.,
cargo run --release --features cuda -- --p 2000 --w /home/Meta-Llama-3.1-8B-Instruct/ llama3 --isq q4k
where `isq` is one of ["q4_0", "q4_1", "q5_0", "q5_1", "q8_0", "q2k", "q3k", "q4k", "q5k", "q6k", "awq", "gptq", "marlin", "gguf", "ggml"].
- Use Marlin kernel to speed up GPTQ/AWQ models
Show details
Candle-vllm now supports the GPTQ/AWQ Marlin kernel; you can run these models directly, for example:
cargo run --release --features cuda -- --dtype f16 --w /home/Meta-Llama-3.1-8B-Instruct-GPTQ-INT4-Marlin/
or convert an existing 4-bit AWQ model to Marlin-compatible format:
python3 examples/convert_awq_marlin.py --src /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4/ --dst /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-Marlin/ --bits 4 --method awq --group 128 --nk False
cargo run --release --features cuda,nccl -- --dtype f16 --d 0 --w /home/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-Marlin/
You may also use GPTQModel to transform a model into Marlin-compatible format using the given script examples/convert_marlin.py.
Note: the Marlin fast kernel currently supports only 4-bit GPTQ quantization.
Installing candle-vllm is as simple as the steps described above. If you have any problems, please create an issue.
The following features are planned to be implemented, but contributions are especially welcome:
- Sampling methods:
- Beam search (huggingface/candle#1319)
- More pipelines (from candle-transformers)
- Python implementation: vllm-project
- vllm paper