A single platform that unifies Generative and Predictive AI inference on Kubernetes: simple enough for quick deployments, yet powerful enough to handle enterprise-scale AI workloads.
Features
Generative AI
🧠 LLM-Optimized: OpenAI-compatible inference protocol for seamless integration with large language models
🚅 GPU Acceleration: High-performance serving with GPU support and optimized memory management for large models
💾 Model Caching: Intelligent model caching to reduce loading times and improve response latency for frequently used models
🗂️ KV Cache Offloading: Advanced memory management with KV cache offloading to CPU/disk for handling longer sequences efficiently
📈 Autoscaling: Request-based autoscaling capabilities optimized for generative workload patterns
🔧 Hugging Face Ready: Native support for Hugging Face models with streamlined deployment workflows (see the sketch below)
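As a concrete illustration, the sketch below deploys a Hugging Face LLM as an InferenceService and queries it through the OpenAI-compatible endpoint. It assumes the Hugging Face serving runtime is available in your cluster; the model ID, GPU count, and hostname are illustrative rather than prescriptive.

```bash
# Sketch: deploy an LLM with the Hugging Face serving runtime
# (model ID and resource sizes are illustrative)
kubectl apply -f - <<EOF
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llama3
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      args:
        - --model_name=llama3
        - --model_id=meta-llama/Meta-Llama-3-8B-Instruct
      resources:
        limits:
          nvidia.com/gpu: "1"
EOF

# Query the OpenAI-compatible chat completions endpoint
# (replace the host with your InferenceService URL)
curl -s http://llama3.default.example.com/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```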
Predictive AI
🧮 Multi-Framework: Support for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and more
🔀 Intelligent Routing: Seamless request routing between predictor, transformer, and explainer components with automatic traffic management
🔄 Advanced Deployments: Canary rollouts, inference pipelines, and ensembles with InferenceGraph (see the sketch after this list)
⚡ Autoscaling: Request-based autoscaling with scale-to-zero for predictive workloads
🔍 Model Explainability: Built-in support for model explanations and feature attribution to understand prediction reasoning
💰 Cost Efficient: Scale-to-zero on expensive resources when not in use, reducing infrastructure costs
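For predictive models, the equivalent sketch is a small manifest pointing at a model in object storage. The storage URI below is the public scikit-learn example from the KServe docs, and the commented canaryTrafficPercent field shows where a canary rollout would split traffic on a later revision.

```bash
# Sketch: serve a scikit-learn model from object storage
kubectl apply -f - <<EOF
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    # canaryTrafficPercent: 10  # on a later revision, route 10% of traffic to the new model
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
EOF

# Send a prediction request using the v1 inference protocol
# (replace the host with your InferenceService URL)
curl -s http://sklearn-iris.default.example.com/v1/models/sklearn-iris:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4]]}'
```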
Learn More
To learn more about KServe, how to use various supported features, and how to participate in the KServe community,
please visit the KServe website documentation.
Additionally, we have compiled a list of presentations and demos that dive into the details of various features.
🛠️ Installation
Standalone Installation
Standard Kubernetes Installation: A more lightweight option than the Serverless installation. However, it does not support canary deployments or request-based autoscaling with scale-to-zero.
Knative Installation: By default, KServe installs Knative to enable serverless deployment of InferenceServices.
ModelMesh Installation: You can optionally install ModelMesh to enable high-scale, high-density, and frequently changing model serving use cases.
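Assuming a cluster with kubectl configured, one common starting point is the quick-install script from the KServe repository, which sets up KServe together with its serverless dependencies; the release branch below is illustrative, so match it to the version you want.

```bash
# Sketch: quick install of KServe and its dependencies (Knative, Istio, cert-manager)
# The release branch is illustrative; pin it to the KServe release you intend to use.
curl -s "https://raw.githubusercontent.com/kserve/kserve/release-0.15/hack/quick_install.sh" | bash
```

For the Standard Kubernetes (raw deployment) and ModelMesh options, follow the respective installation guides in the KServe documentation.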