Digital products and their users need privacy, reliability, cost control, and the option to stay independent of closed-source model providers.
Paddler is an open-source LLM load balancer and serving platform. It lets you deploy, scale, and run LLM inference on your own infrastructure, providing a great developer experience along the way.
- Works through agents that can be added dynamically, allowing integration with autoscaling tools
- Request buffering, enabling scaling from zero hosts
- Dynamic model swapping
- Built-in web admin panel for management, monitoring, and testing
- Observability metrics
## Who is Paddler for?
- Product teams that need LLM inference and embeddings in their features
- DevOps/LLMOps teams that need to run and deploy LLMs at scale
- Organizations handling sensitive data with strict compliance and privacy requirements (medical, financial, etc.)
- Organizations that want predictable LLM costs instead of exposure to per-token pricing
- Product leaders who need reliable model performance to maintain a consistent user experience in their AI-based features
## Installation and Quickstart
Paddler is self-contained in a single binary file, so all you need to do to start using it is obtain the `paddler` binary and make it available on your system.
You can obtain the binary in one of two ways:

- Downloading the latest release from our GitHub releases
- Building Paddler from source
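For example, on Linux you might install a downloaded release binary like this (which release asset to download depends on your platform, so pick it from the releases page):

```shell
# Make the downloaded binary executable and put it on your PATH.
chmod +x paddler
sudo mv paddler /usr/local/bin/paddler

# Verify the installation.
paddler --help
```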
## Using Paddler
Once the binary is available on your system, you can start using Paddler. All of its functionality is exposed through the `paddler` command (running `paddler --help` lists all available commands).
There are only two deployable components: the balancer, which distributes incoming requests, and the agent, which generates tokens and embeddings through slots.
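A minimal local setup might look like the sketch below. The subcommand names follow the two components above, but the flag names and addresses are assumptions for illustration; run `paddler --help` to see the actual options:

```shell
# Start the balancer (flag names and ports are illustrative assumptions;
# check `paddler balancer --help` for the real options).
paddler balancer \
  --inference-addr 0.0.0.0:8061 \
  --management-addr 0.0.0.0:8060

# Start an agent, typically on another host, pointing it at the
# balancer's management service (flags are assumptions as well).
paddler agent \
  --management-addr 127.0.0.1:8060
```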
For questions or community conversations, use GitHub Discussions or join our Discord server. All contributions are welcome.
## How does it work?
Paddler is built for easy setup. It comes as a self-contained binary with only two deployable components: the balancer and the agents.
The balancer exposes the following:

- Inference service, used by applications that connect to it to obtain tokens or embeddings
- Management service, which manages Paddler's setup internally
- Web admin panel that lets you view and test your Paddler setup
Agents are usually deployed on separate instances. They further distribute incoming requests to slots, which are responsible for generating tokens and embeddings.

Paddler uses a built-in llama.cpp engine for inference, but has its own implementation of llama.cpp slots, each of which keeps its own context and KV cache.
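To give a feel for the request flow, the sketch below sends a request to the balancer's inference service, which forwards it to a free agent slot (or buffers it when no slot is available). The port, endpoint path, and request body here are hypothetical placeholders rather than Paddler's documented API; consult the web admin panel or the project documentation for the actual interface:

```shell
# Hypothetical request to the balancer's inference service. The port,
# path, and JSON schema are placeholders for illustration only.
curl http://127.0.0.1:8061/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'
```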
## Web admin panel
Paddler comes with a built-in web admin panel.
You can use it to monitor your Paddler fleet, add or update your model, and customize the chat template and inference parameters.
## Why the name?

We initially wanted to use the Raft consensus algorithm (thus "Paddler", because it paddles on a Raft), but eventually dropped that idea. The name stayed, though.

Later, people started sending us the "that's a paddlin'" clip from The Simpsons, and we just embraced it.