Releases: openvinotoolkit/model_server
OpenVINO Model Server 2025.2.1
b7e0a09
The 2025.2.1 is a minor release with bug fixes and improvements, mainly in automatic model pulling and image generation.
Improvements:
- Enable passing `chat_template_kwargs` parameters in `chat/completions` requests. It can be used to turn off model reasoning; see the example below the list.
- Allow setting CORS headers in HTTP responses. It can resolve connectivity problems between OpenWebUI and the model server.
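A minimal sketch of such a request, assuming the server listens on localhost:8000, exposes the OpenAI-style `/v3/chat/completions` path, and serves a model whose chat template understands an `enable_thinking` flag (all of these are illustrative assumptions):

```python
# Minimal sketch: pass chat_template_kwargs in a chat/completions request.
# Server address, model name and the "enable_thinking" key are assumptions;
# the accepted kwargs depend on the chat template of the deployed model.
import requests

payload = {
    "model": "Qwen/Qwen3-8B",  # assumed model name
    "messages": [{"role": "user", "content": "Summarize OpenVINO Model Server in one sentence."}],
    "chat_template_kwargs": {"enable_thinking": False},  # e.g. turn off model reasoning
}

response = requests.post("https://localhost:8000/v3/chat/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["message"]["content"])
```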
Other changes:
- Changed NPU driver version from 1.17 to 1.19 in docker images
- Security related updates in dependencies
Bug fixes:
- Removed limitation for image generation - it now supports requesting several output images with the `n` parameter
- `add_to_config` and `remove_from_config` parameters accept a path to the configuration file in addition to the directory containing the `config.json` file
- Resolved connectivity issues while pulling models from HuggingFace Hub without proxy configuration
- Fixed handling of the HF_ENDPOINT environment variable with HTTP addresses, as previously the `https://` prefix was incorrectly added
- Changed `pull` feature environment variables `GIT_SERVER_CONNECT_TIMEOUT_MS` to `GIT_OPT_SET_SERVER_TIMEOUT` and `GIT_SERVER_TIMEOUT_MS` to `GIT_OPT_SET_SERVER_TIMEOUT` to unify with the underlying libgit2 implementation
- Fixed handling of relative paths on Windows with MediaPipes/LLMs for the `config_path` parameter
- Fixed agentic demo not working without proxy
- Stopped rejecting the `response_format` field in image generation. While the parameter currently accepts only the base64_json value, it allows integration with Open WebUI
- Added missing `--response_parser` parameter when using OVMS to pull an LLM model and prepare its configuration
- Blocked simultaneous use of the `--list_models` and `--pull` parameters as they are mutually exclusive
- Fixed accuracy for the Phi4-mini model response parser when using functions with lists as arguments
- export_model.py script fix for handling target_device for embeddings and reranking models
- The stateful text generation pipeline no longer includes usage content - it is not supported for this pipeline type. Previously it returned an incorrect response
Known issues and limitations
- VLM models QwenVL2, QwenVL2.5, and Phi3_VL have lower accuracy when deployed on CPU in a text generation pipeline with continuous batching. It is recommended to deploy these models in a stateful pipeline which processes the requests sequentially like in the demo
- Using NPU for image generation endpoints is unsupported in this release.
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2025.2.1` - CPU device support with image based on Ubuntu 24.04
`docker pull openvino/model_server:2025.2.1-gpu` - GPU, NPU and CPU device support with image based on Ubuntu 24.04
or use provided binary packages. Only packages with the suffix `_python_on` include Python support.
Check the instructions on how to install the binary package
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2025.2
814c4ef
The 2025.2 is a major release adding support for image generation, support for AI agents with `tool_calls` handling, and new features in model management.
Image generation (preview)
Image generation endpoint – this preview feature enables image generation based on text prompts. The endpoint is compatible with the OpenAI API, making it easy to integrate with the existing ecosystem. It supports popular models like Stable Diffusion, Stable Diffusion XL, Stable Diffusion 3 and FLUX.
Check the end-to-end demo
Image generation API reference
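A minimal sketch of an image generation request; the server address, the `/v3/images/generations` path, the model name and the response field names are assumptions made for illustration, following the OpenAI images API shape:

```python
# Minimal sketch: generate an image from a text prompt via the OpenAI-compatible API.
# Server address, endpoint path and model name are illustrative assumptions.
import base64
import requests

payload = {
    "model": "stabilityai/stable-diffusion-xl-base-1.0",  # assumed model name
    "prompt": "a watercolor painting of a lighthouse at sunset",
    "size": "512x512",
}

response = requests.post("https://localhost:8000/v3/images/generations", json=payload, timeout=600)
image_b64 = response.json()["data"][0]["b64_json"]  # base64-encoded image data

with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```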
Agentic AI (preview)
When generating text with LLM models, you can extend the context using tools. Tools can provide additional context from external sources such as Python functions. AI agents can use OpenVINO Model Server to choose the right tool and generate function parameters. The final agent response can also be created based on the tool response.
The chat/completions endpoint for text generation now accepts a specification of tools, and the messages can include tool responses (tool_calls). Such agentic use cases require specially tuned chat templates and custom response parsers, which are enabled for popular tool-enabled models.
Check the demo with AI Agent
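A minimal sketch of a `chat/completions` request with a tool specification; the server address, model name and the `get_weather` tool are illustrative assumptions, and the deployed model must be one of the tool-enabled models with a matching response parser:

```python
# Minimal sketch: let the model choose a tool and produce tool_calls via chat/completions.
# Server address, model name and the get_weather tool are illustrative assumptions.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # assumed tool-enabled model
    "messages": [{"role": "user", "content": "What is the weather in Warsaw?"}],
    "tools": tools,
}

response = requests.post("https://localhost:8000/v3/chat/completions", json=payload, timeout=120)
message = response.json()["choices"][0]["message"]
print(message.get("tool_calls"))  # tool name and generated arguments chosen by the model
```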
Model management for generative use cases
This release brings several improvements to the model management and development mechanism, especially for generative use cases.
It is now possible to pull and deploy generative models in OpenVINO format directly from Hugging Face Hub. All the runtime parameters for the generative pipeline can be set via the Model Server command line interface. The `ovms` binary can be used to pull a model to the local models repository for reuse in subsequent runs. CLI commands are also included for listing the models in the models repository and for adding or removing models from the list of models enabled in the configuration file.
More details about the CLI usage to pull models and start the server
Check the RAG demo to see how easy it is to deploy 3 models in a single server instance.
Note that the Python script `export_models.py` can still be used to prepare models from outside the OpenVINO organization in HF Hub. It has been extended to support the image generation task.
Breaking changes
Until now, the default text generation sampling parameters were static. This release changes the default sampling parameters to be based on `generation_config.json` from the model folder.
Other changes
- VLM models with the `chat/completions` endpoint now support passing images as a URL or as a path in the local file system. Model Server will download the image and use it as part of the message content; see the example below the list. Check updated API examples
- Python is no longer required to use the LLM `chat/completions` endpoint. The package version without Python applies the chat templates using the JinjaCpp library. It has, however, limitations: tools usage and system prompt are not supported.
- New version of the embeddings and rerank calculators which use a flat model structure identical to the output of `optimum-intel` export and existing OpenVINO models in Hugging Face Hub. Previous calculators supporting model versioning are still present for compatibility with previously exported models. They will be deprecated in a future release. It is recommended to re-export the models using `--task rerank_ov` or `embeddings_ov`.
- Documented use case with long context models and very long prompts
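As referenced in the first item above, a minimal sketch of passing an image by URL to a VLM through `chat/completions`; the server address, model name and image URL are illustrative assumptions:

```python
# Minimal sketch: pass an image by URL in a chat/completions request to a VLM.
# Server address, model name and image URL are illustrative assumptions.
import requests

payload = {
    "model": "OpenGVLab/InternVL2-2B",  # assumed VLM name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
}

response = requests.post("https://localhost:8000/v3/chat/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["message"]["content"])
```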
Bug fixes
- Correct error status is now reported in streaming mode.
- Fixed sporadic issue of an extra special token at the beginning of the prompt when applying the chat template.
- Security and stability related improvements.
Known issues and limitations
- VLM models QwenVL2, QwenVL2.5, and Phi3_VL have lower accuracy when deployed on CPU in a text generation pipeline with continuous batching. It is recommended to deploy these models in a stateful pipeline which processes requests sequentially, as in the demo
- Using NPU for image generation endpoints is unsupported in this release.
- OVMS on Linux OS in an environment without proxy requires setting the env variable GIT_SERVER_TIMEOUT_MS=4000 to be able to pull models from HuggingFace Hub. The default value is too short.
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2025.2` - CPU device support with image based on Ubuntu 24.04
`docker pull openvino/model_server:2025.2-gpu` - GPU, NPU and CPU device support with image based on Ubuntu 24.04
or use provided binary packages. Only packages with the suffix `_python_on` include Python support.
Check the instructions on how to install the binary package
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2025.1
c9658a3
The 2025.1 is a major release adding support for vision language models and enabling text generation on the NPU accelerator.
VLM support
The chat/completions endpoint has been extended to support vision language models. It is now possible to send images in the chat context. Vision language models can be deployed just like LLM models.
Check the end-to-end demo: Link
Updated API reference: Link
Text Generation on NPU
Now it is possible to deploy LLM and VLM models on the NPU accelerator. Text generation is exposed over the completions and chat/completions endpoints. From the client perspective it works the same way as with GPU and CPU deployment, but it does not support the continuous batching algorithm. NPU is targeted at AI PC use cases with low concurrency.
Check the NPU LLM demo and NPU VLM demo.
Model management improvements
- Option to start MediaPipe graphs and generative endpoints from the CLI without a configuration file. Simply point the `--model_path` CLI argument to a directory with a MediaPipe graph.
- Unified JSON configuration file structure for models and graphs under the `models_config_list` section.
Breaking changes
- The gRPC server is now optional. There is no default gRPC port set. The `--port` parameter is mandatory to start the gRPC server. It is possible to start only the REST API server with the `--rest_port` parameter. At least one port number needs to be defined to start OVMS from the CLI (`--port` for gRPC or `--rest_port` for REST). Starting OVMS via C-API does not require any port to be defined.
Other changes
- Updated scalability demonstration using multiple instances: Link
- Increased the allowed number of text generation stop words in the request from 4 to 16
- Enabled and tested OVMS integration with the Continue extension for Visual Studio Code. OpenVINO Model Server can be used as a backend for code completion and the built-in IDE chat assistant. Check out instructions: Link
- Performance improvements – enhancements in OpenVINO Runtime and in the text sampling generation algorithm which should increase throughput under high-concurrency load
Bug fixes
- Fixed handling of the LLM context length - now OVMS will stop generating text when the model context is exceeded. An error will be raised when the prompt is longer than the context or when `max_tokens` plus the input tokens exceed the model context.
- Security and stability improvements
- Fixed cancellation of text generation workloads - clients are allowed to stop the generation in non-streaming scenarios by simply closing the connection
Known issues and limitations
- The `chat/completions` API accepts images encoded in base64 format but does not accept URL format.
- Qwen Vision models deployed on GPU might experience an execution error when the image resolution is too high. It is recommended to edit the model's preprocessor_config.json and lower the `max_pixels` parameter; a sketch of this edit follows the list. It ensures images are resized automatically to a smaller resolution, which avoids the outage on GPU and improves performance. In some cases, accuracy might be impacted, though.
- Note that by default, NPU limits the prompt length to 1024 tokens. You can modify that limit by using the `--max_prompt_len` parameter in the model export script, or by manually modifying the `MAX_PROMPT_LEN` plugin config param in graph.pbtxt.
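A minimal sketch of the `max_pixels` workaround mentioned above; the model directory and the chosen value are assumptions, not official recommendations:

```python
# Minimal sketch: lower max_pixels in the model's preprocessor_config.json.
# The model directory path and the example value are assumptions; adjust for your deployment.
import json
from pathlib import Path

config_path = Path("models/Qwen2-VL-7B-Instruct/preprocessor_config.json")  # assumed model location
config = json.loads(config_path.read_text())

# Cap the image size so large inputs are resized by the preprocessor before inference on GPU.
config["max_pixels"] = 262144  # example cap (512x512), not an official recommendation

config_path.write_text(json.dumps(config, indent=2))
```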
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2025.1` - CPU device support
`docker pull openvino/model_server:2025.1-gpu` - GPU, NPU and CPU device support
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2025.0
8a47d52
The 2025.0 is a major release adding support for Windows native deployments and improvements to the generative use cases.
New feature - Windows native server deployment
- This release enables model server deployment on Windows operating systems as a binary application
- Full support for generative endpoints – text generation and embeddings based on OpenAI API, reranking based on Cohere API
- Functional parity with the Linux version with several minor differences: cloud storage, C-API interface, DAG pipelines - read more
- It is targeted at client machines with Windows 11 and Data Center environments with Windows Server 2022 OS
- Demos are updated to work both on Linux and Windows. Check the installation guide
Other Changes and Improvements
- Added official support for Battle Mage GPU, Arrow Lake CPU, iGPU, NPU and Lunar Lake CPU, iGPU and NPU
- Updated base docker images – added Ubuntu 24 and RedHat UBI 9, dropped Ubuntu 20 and RedHat UBI 8
- Extended the chat/completions API to support the `max_completion_tokens` parameter and messages content as an array. These changes keep the API compatible with the OpenAI API.
- Truncate option in the embeddings endpoint – it is now possible to export the embeddings model with an option to truncate the input automatically to match the embeddings context length. By default, an error is raised when too long an input is passed.
- Speculative decoding algorithm added to text generation – check the demo
- Added direct support for models without named outputs – when models don't have named outputs, generic names will be assigned during model initialization with the pattern `out_<index>`
- Added histogram metric for tracking MediaPipe graph processing duration
- Performance improvements
Breaking changes
- Discontinued support for NVIDIA plugin
Bug fixes
- Corrected behavior of cancelling text generation for disconnected clients
- Fixed detection of the model context length for the embeddings endpoint
- Security and stability improvements
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2025.0` - CPU device support
`docker pull openvino/model_server:2025.0-gpu` - GPU, NPU and CPU device support
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.5
3c284cf
The 2024.5 release comes with support for embeddings and rerank endpoints, as well as an experimental Windows version.
Changes and improvements
- The OpenAI API text embedding endpoint has been added, enabling OVMS to be used as a building block for AI applications like RAG; see the example below the list.
- The rerank endpoint has been added based on the Cohere API, enabling easy similarity detection between a query and a set of documents. It is one of the building blocks for AI applications like RAG and makes integration with frameworks such as LangChain easy.
- The `echo` sampling parameter together with `logprobs` in the `completions` endpoint is now supported.
- Performance increase on both CPU and GPU for LLM text generation.
- LLM dynamic_split_fuse for the GPU target device boosts throughput in high-concurrency scenarios.
- The procedure for LLM service deployment and model repository preparation has been simplified.
- Improvements in LLM test coverage and stability.
- Instructions on how to build an experimental version of a Windows binary package - a native model server for Windows OS - are available. This release includes a set of limitations and has limited test coverage. It is intended for testing, while the production-ready release is expected with 2025.0. All feedback is welcome.
- OpenVINO Model Server C-API now supports asynchronous inference, improves performance with the ability to set outputs, and enables using OpenCL & VA surfaces on both inputs & outputs for the GPU target device.
- The KServe REST API Model_metadata endpoint can now provide additional model_info references.
- Included support for NPU and iGPU on MTL and LNL platforms
- Security and stability improvements
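As referenced in the first two items above, a minimal sketch of calling the new embeddings and rerank endpoints; the server address, the `/v3` path prefix for these endpoints and the model names are illustrative assumptions:

```python
# Minimal sketch: call the OpenAI-style embeddings endpoint and the Cohere-style rerank endpoint.
# Server address, endpoint paths and model names are illustrative assumptions.
import requests

BASE = "https://localhost:8000/v3"  # assumed REST port and path prefix

emb = requests.post(f"{BASE}/embeddings", json={
    "model": "BAAI/bge-large-en-v1.5",  # assumed embeddings model
    "input": ["OpenVINO Model Server supports RAG."],
}, timeout=60).json()
print(len(emb["data"][0]["embedding"]))  # embedding vector length

rerank = requests.post(f"{BASE}/rerank", json={
    "model": "BAAI/bge-reranker-large",  # assumed rerank model
    "query": "What serves OpenVINO models?",
    "documents": ["OVMS is a model serving solution.", "Bananas are yellow."],
}, timeout=60).json()
print(rerank["results"])  # relevance scores per document
```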
Breaking changes
No breaking changes.
Bug fixes:
- Fixed support for URL-encoded model names in the KServe REST API
- OpenAI text generation endpoints now accept requests with both v3 & v3/v1 path prefixes
- Fixed reporting metrics in the video stream benchmark client
- Fixed sporadic INVALID_ARGUMENT error on the completions endpoint
- Fixed incorrect LLM finish reason when expecting stop but getting length
Discontinuation plans
In a future release, support for the following build options will not be maintained:
- Ubuntu 20 as the base image
- OpenVINO NVIDIA plugin
You can use the OpenVINO Model Server public Docker images based on Ubuntu 22.04 via the following commands:
`docker pull openvino/model_server:2024.5` - CPU device support
`docker pull openvino/model_server:2024.5-gpu` - GPU, NPU and CPU device support
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.4
f958bf8
The 2024.4 release brings official support for OpenAI API text generation. It is now recommended for production usage. It comes with a set of added features and improvements.
Changes and improvements
- Significant performance improvements for the multinomial sampling algorithm
- `finish_reason` in the response correctly distinguishes between reaching max_tokens (length) and completing the sequence (stop)
- Added automatic cancelling of text generation for disconnected clients
- Included prefix caching feature which speeds up text generation by caching the prompt evaluation
- Option to compress the KV cache to lower precision – it reduces memory consumption with minimal impact on accuracy
- Added support for the `stop` sampling parameter. It can define a sequence which stops text generation; see the example below the list.
- Added support for the `logprobs` sampling parameter. It returns the probabilities of generated tokens.
- Included generic metrics related to execution of MediaPipe graphs. The metric `ovms_current_graphs` can be used for autoscaling based on current load and the level of concurrency. Counters like `ovms_requests_accepted` and `ovms_responses` can track the activity of the server.
- Included demo of text generation horizontal scalability
- Configurable handling of non-UTF-8 responses from the model – the detokenizer can now automatically change them to the Unicode replacement character
- Included support for Llama3.1 models
- Text generation is supported both on CPU and GPU - check the demo
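As referenced in the `stop` and `logprobs` items above, a minimal sketch of a `completions` request using both parameters; the server address and model name are illustrative assumptions:

```python
# Minimal sketch: use the stop and logprobs sampling parameters on the completions endpoint.
# Server address and model name are illustrative assumptions.
import requests

payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model name
    "prompt": "List three colors:",
    "max_tokens": 64,
    "stop": ["\n\n"],  # generation stops when this sequence is produced
    "logprobs": 1,     # return log-probabilities of the generated tokens
}

response = requests.post("https://localhost:8000/v3/completions", json=payload, timeout=120)
choice = response.json()["choices"][0]
print(choice["text"], choice["finish_reason"])  # finish_reason is "stop" or "length"
```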
Breaking changes
No breaking changes.
Bug fixes
- Security and stability improvements
- Fixed handling of model templates without bos_token
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2024.4` - CPU device support with the image based on Ubuntu 22.04
`docker pull openvino/model_server:2024.4-gpu` - CPU, GPU and NPU device support with the image based on Ubuntu 22.04
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.3
a6ddd3f
The 2024.3 release focuses mostly on improvements in the OpenAI API text generation implementation.
Changes and improvements
A set of improvements in OpenAI API text generation:
- Significantly better performance thanks to numerous improvements in OpenVINO Runtime and sampling algorithms
- Added config parameters `best_of_limit` and `max_tokens_limit` to avoid memory overconsumption caused by invalid requests. Read more
- Added reporting of LLM metrics in the server logs. Read more
- Added extra sampling parameters `diversity_penalty`, `length_penalty`, `repetition_penalty`. Read more
Improvements in documentation and demos:
- Added RAG demo with OpenAI API
- Added K8S deployment demo for text generation scenarios
- Simplified model initialization for a set of demos with MediaPipe graphs using the pose_detection model. TFLite models don't require any conversions. Check demo
Breaking changes
No breaking changes.
Bug fixes
- Resolved issue with sporadic text generation hang via OpenAI API endpoints
- Fixed issue with chat streamer impacting incomplete utf-8 sequences
- Corrected format of the last streaming event in the `completions` endpoint
- Fixed issue with requests hanging when running out of available cache
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2024.3` - CPU device support with the image based on Ubuntu 22.04
`docker pull openvino/model_server:2024.3-gpu` - GPU and CPU device support with the image based on Ubuntu 22.04
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.2
31ad50a
The major new functionality in 2024.2 is a preview of the OpenAI-compatible API for text generation, along with state-of-the-art techniques like continuous batching and paged attention for improving the efficiency of generative workloads.
Changes and improvements
- Updated OpenVINO Runtime backend to 2024.2
- OpenVINO Model Server can now be used for text generation use cases using an OpenAI compatible API; see the example below the list.
- Added support for continuous batching and PagedAttention algorithms for fast and efficient text generation under high-concurrency load, especially on Intel Xeon processors. Learn more about it.
- Added LLM text generation OpenAI API demo.
- Added notebook showcasing the RAG algorithm with online scope changes delegated to the model server. Link
- Enabled Python 3.12 for Python clients, samples and demos.
- Updated RedHat UBI base image to 8.10
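As referenced in the list above, a minimal sketch of a text generation request against the preview OpenAI-compatible API; the server address, path prefix and model name are illustrative assumptions:

```python
# Minimal sketch: basic chat completion against the OpenAI-compatible preview API.
# Server address, path prefix and model name are illustrative assumptions.
import requests

payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model name
    "messages": [{"role": "user", "content": "What is continuous batching?"}],
    "max_tokens": 128,
}

response = requests.post("https://localhost:8000/v3/chat/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["message"]["content"])
```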
Breaking changes
No breaking changes.
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2024.2` - CPU device support with the image based on Ubuntu 22.04
`docker pull openvino/model_server:2024.2-gpu` - GPU and CPU device support with the image based on Ubuntu 22.04
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.1
95b8b78
The 2024.1 has a few improvements in the serving functionality, demo enhancements and bug fixes.
Changes and improvements
- Updated OpenVINO Runtime backend to 2024.1 Link
- Added support for OpenVINO models with string data type on output. Together with the features introduced in 2024.0, OVMS can now support models with input and output of string type. That way you can take advantage of the tokenization built into the model as the first layer. You can also rely on any post-processing embedded into the model which returns just text. Check the universal sentence encoder demo and the image classification with string output demo
- Updated MediaPipe Python calculators to support relative paths for all related configuration and Python code files. Now, the complete graph configuration folder can be deployed in an arbitrary path without any code changes. It is demonstrated in the updated text generation demo.
- Extended support for the KServe REST API for MediaPipe graph endpoints. Now you can send the data in a KServe JSON body; see the example below the list. Check how it is used in the text generation use case.
- Added demo showcasing the full RAG algorithm entirely delegated to the model server Link
- Added RedHat UBI based Dockerfile for Python demos, usage documented in python demos
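As referenced in the KServe item above, a minimal sketch of an inference request with a KServe JSON body sent to a MediaPipe graph endpoint; the server address, graph name and input tensor name are illustrative assumptions:

```python
# Minimal sketch of a KServe v2 REST request to a MediaPipe graph endpoint.
# Server address, graph name ("python_model") and input name ("text") are assumptions;
# match them to your graph configuration.
import requests

payload = {
    "inputs": [
        {
            "name": "text",        # assumed input name defined in the graph
            "shape": [1],
            "datatype": "BYTES",
            "data": ["What is OpenVINO Model Server?"],
        }
    ]
}

response = requests.post(
    "https://localhost:8000/v2/models/python_model/infer",  # assumed REST port and graph name
    json=payload,
    timeout=60,
)
print(response.json())
```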
Breaking changes
No breaking changes.
Bug fixes
- Improvements in error handling for invalid requests and incorrect configuration
- Fixes in the demos and documentation
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2024.1` - CPU device support with the image based on Ubuntu 22.04
`docker pull openvino/model_server:2024.1-gpu` - GPU and CPU device support with the image based on Ubuntu 22.04
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog
OpenVINO™ Model Server 2024.0
29e108f
The 2024.0 release includes a new version of the OpenVINO™ backend and several improvements in the serving functionality.
Changes and improvements
- Updated OpenVINO™ Runtime backend to 2024.0. Link
- Extended text generation demo to support multiple batch sizes with both streaming and unary clients. Link to demo
- Added support for REST clients for servables based on MediaPipe graphs, including Python pipeline nodes. Link to demo
- Added additional MediaPipe calculators which can be reused for multiple image analysis scenarios. Link to new calculators
- Added support for models with a `string` input data type including the tokenization extension. Link to demo
- Security related updates in versions of included dependencies.
Deprecation notices
Batch Size AUTO and Shape AUTO are deprecated and will be removed.
Use Dynamic Model Shape feature instead.
Breaking changes
No breaking changes.
Bug fixes
- Improvements in error handling for invalid requests and incorrect configuration
- Minor fixes in the demos and documentation
You can use the OpenVINO Model Server public Docker images based on Ubuntu via the following commands:
`docker pull openvino/model_server:2024.0` - CPU device support with the image based on Ubuntu 22.04
`docker pull openvino/model_server:2024.0-gpu` - GPU and CPU device support with the image based on Ubuntu 22.04
or use provided binary packages.
The prebuilt image is also available on RedHat Ecosystem Catalog