This sample code demonstrates how to implement Model Context Protocol (MCP) clients and servers with Spring AI.
MCP is an open protocol that standardizes how AI models connect to data sources and tools, enabling developers to build agents on top of LLMs. MCP follows a client-server architecture in which an AI application (the MCP host) communicates with MCP servers through embedded MCP clients to access specific resources or functionality.
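As a rough illustration of the server side, an MCP server built with Spring AI's MCP server Boot starter can expose plain Java methods as tools. The sketch below is an assumption about the pattern, not the sample's exact code; the class and method names (FridgeTools, getAvailableIngredients) are hypothetical placeholders for what a server like the fridge-server in this sample might provide.

```java
import java.util.List;

import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical tool class: exposes the fridge contents as an MCP tool.
class FridgeTools {

    @Tool(description = "Returns the ingredients that are currently available in the fridge")
    public List<String> getAvailableIngredients() {
        // In this sample the data would come from configuration, not a real fridge API.
        return List.of("Bacon", "Onions");
    }
}

@Configuration
class FridgeToolConfig {

    // The MCP server auto-configuration picks up ToolCallbackProvider beans
    // and publishes the annotated methods as MCP tools.
    @Bean
    ToolCallbackProvider fridgeToolCallbackProvider() {
        return MethodToolCallbackProvider.builder()
                .toolObjects(new FridgeTools())
                .build();
    }
}
```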
Note: A Spring AI Recipe Finder implementation without MCP is available here.
Currently Ollama, OpenAI, and Azure OpenAI are supported AI providers.
As Ollama doesn't yet provide a text-to-image model, recipe image generation is not available with this setup. Since version 3.1, Llama supports Function Calling, even though it doesn't work well with the small models.
Even though an option is provided to start and configure a Llama 3.2 instance with Docker Compose, this setup is not recommended on some systems (e.g. ARM Macs) for performance reasons.
To run a Llama 3.2 instance on your local machine without a container, download and install the latest Ollama release.
Pull the llama3.2 model (Ollama 0.2.8 or newer and Llama 3.1 or newer are required for Function Calling):
ollama pull llama3.2
Make sure the deployment names of the models match exactly what's in your application-azure.yaml configuration files.
Currently, only some regions support image generation with DALL-E. If you use a region that doesn't support it, you have to disable image generation by setting ai.azure.openai.image.enabled: false in the application-azure.yaml configuration files to avoid errors.
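How the application reacts to this flag is an implementation detail of the sample; as one plausible sketch, image generation could be gated with a standard Spring Boot conditional on that property. The service and method names below are hypothetical and only illustrate the idea.

```java
import org.springframework.ai.image.ImageModel;
import org.springframework.ai.image.ImagePrompt;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.stereotype.Service;

// Hypothetical service: only created when image generation is enabled.
@Service
@ConditionalOnProperty(name = "ai.azure.openai.image.enabled", havingValue = "true", matchIfMissing = true)
class RecipeImageService {

    private final ImageModel imageModel;

    RecipeImageService(ImageModel imageModel) {
        this.imageModel = imageModel;
    }

    String generateImageUrl(String recipeName) {
        // Ask the configured image model (e.g. DALL-E on Azure OpenAI) for a dish photo.
        return imageModel.call(new ImagePrompt("A photo of the dish: " + recipeName))
                .getResult()
                .getOutput()
                .getUrl();
    }
}
```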
On your local machine, a Redis database is automatically started and configured with Docker Compose for the favorite-recipes-server. If no Redis database is configured, a SimpleVectorStore instance is used as a fallback.
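The fallback can be pictured roughly as follows; this is a sketch based on Spring AI's VectorStore abstraction, not necessarily the sample's exact wiring.

```java
import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.vectorstore.SimpleVectorStore;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class VectorStoreFallbackConfig {

    // Only used if no other VectorStore (e.g. the Redis-backed one) has been configured.
    @Bean
    @ConditionalOnMissingBean(VectorStore.class)
    VectorStore simpleVectorStore(EmbeddingModel embeddingModel) {
        // In-memory store; embeddings are computed with the configured embedding model.
        return SimpleVectorStore.builder(embeddingModel).build();
    }
}
```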
The easiest way to run the application is via Docker Compose.
Ollama (local installation):
./run-local.sh

Ollama (in a container):
./run-local.sh ollama-container

OpenAI:
export SPRING_AI_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_PROFILES_ACTIVE=openai
./run-local.sh

Azure OpenAI:
export SPRING_AI_AZURE_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_AI_AZURE_OPENAI_ENDPOINT=https://{your-resource-name}.openai.azure.com
export SPRING_PROFILES_ACTIVE=azure
./run-local.sh
You can also run the applications in separate terminal sessions. Just run the following commands for each subdirectory (fridge-server, favorite-recipes-server, recipe-finder-client) in a separate terminal session.

Ollama (default profile):
export SPRING_DOCKER_COMPOSE_ENABLED=false # If you want to use the SimpleVectorStore instead of Redis running in a container
./gradlew bootRun

OpenAI:
export SPRING_AI_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_PROFILES_ACTIVE=openai
./gradlew bootRun

Azure OpenAI:
export SPRING_AI_AZURE_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_AI_AZURE_OPENAI_ENDPOINT=https://{your-resource-name}.openai.azure.com
export SPRING_PROFILES_ACTIVE=azure
./gradlew bootRun
Open http://localhost:8080 in your browser. Enter the ingredients (e.g. "Cheese") you want to find a recipe for in the form and press the "find" button.
Checking the "Prefer available ingredients" checkbox enables Function Calling.
As the functionality to add always available ingredients and the API call to check the available ingredients in the fridge are not yet implemented, the available ingredients can be configured via the app.available-ingredients-in-fridge property in the fridge-server's application.yaml. Bacon and onions are currently configured as available ingredients in the fridge.
With the input "Cheese", you should get a recipe with cheese and bacon.
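For the "Prefer available ingredients" path, the recipe-finder-client has to hand the tools exposed by the MCP servers to the model. A minimal sketch, assuming Spring AI 1.0's ChatClient and that the MCP client starter auto-configures a ToolCallbackProvider for the configured connections; the service name and prompt text are hypothetical.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.stereotype.Service;

@Service
class RecipeService {

    private final ChatClient chatClient;

    RecipeService(ChatClient.Builder chatClientBuilder, ToolCallbackProvider mcpTools) {
        // Register the tools exposed by the MCP servers (e.g. the fridge-server)
        // so the model can call them while generating a recipe.
        this.chatClient = chatClientBuilder
                .defaultToolCallbacks(mcpTools.getToolCallbacks())
                .build();
    }

    String findRecipe(String ingredients) {
        return chatClient.prompt()
                .user("Find a recipe that uses: " + ingredients
                        + ". Prefer ingredients that are already available in the fridge.")
                .call()
                .content();
    }
}
```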
Checking the "Prefer own recipes" checkbox enables Retrieval-Augmented Generation (RAG).
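Conceptually, this means the entered ingredients are first used to retrieve matching recipe documents from the vector store, which are then added to the prompt. The following is a simplified manual-retrieval sketch assuming Spring AI 1.0 APIs; the actual sample may instead use Spring AI's QuestionAnswerAdvisor, and the class name and prompt wording are hypothetical.

```java
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

class OwnRecipeFinder {

    private final ChatClient chatClient;
    private final VectorStore vectorStore;

    OwnRecipeFinder(ChatClient chatClient, VectorStore vectorStore) {
        this.chatClient = chatClient;
        this.vectorStore = vectorStore;
    }

    String findRecipe(String ingredients) {
        // Retrieve the recipe documents most similar to the requested ingredients.
        List<Document> similarRecipes = vectorStore.similaritySearch(ingredients);
        String context = similarRecipes.stream()
                .map(Document::getText)
                .collect(Collectors.joining("\n---\n"));

        // Add the retrieved recipes to the prompt so the model prefers them.
        return chatClient.prompt()
                .user("Find a recipe that uses: " + ingredients
                        + "\nPrefer one of these recipes if they match:\n" + context)
                .call()
                .content();
    }
}
```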
To upload your own recipe PDF documents to the vector database, a REST API endpoint is implemented.
curl -XPOST -F "file=@$PWD/german_recipes.pdf" -F "pageBottomMargin=50" http://localhost:8082/api/v1/recipes/upload
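A rough sketch of what such an upload endpoint can do with Spring AI's ETL support: read the PDF page by page (honoring the pageBottomMargin parameter), split the pages into chunks, and add them to the vector store. The controller path and parameter handling below are assumptions derived from the curl call above, not the sample's exact implementation.

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.reader.pdf.PagePdfDocumentReader;
import org.springframework.ai.reader.pdf.config.PdfDocumentReaderConfig;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
class RecipeUploadController {

    private final VectorStore vectorStore;

    RecipeUploadController(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    @PostMapping("/api/v1/recipes/upload")
    void upload(@RequestParam("file") MultipartFile file,
                @RequestParam(value = "pageBottomMargin", defaultValue = "0") int pageBottomMargin) throws Exception {
        // Read the uploaded PDF page by page, cutting off the configured bottom margin
        // (useful to drop page numbers and footers from the recipe pages).
        var config = PdfDocumentReaderConfig.builder()
                .withPageBottomMargin(pageBottomMargin)
                .build();
        var reader = new PagePdfDocumentReader(new ByteArrayResource(file.getBytes()), config);

        // Split the pages into token-sized chunks and store them with their embeddings.
        List<Document> chunks = new TokenTextSplitter().apply(reader.read());
        vectorStore.add(chunks);
    }
}
```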
Based on the sample recipes that are part of this repository, with the input "Cheese", you should get a recipe along the lines of a cheese spaetzle muffin.