A collection of example scripts demonstrating how to build intelligent agents using the Strands framework with local Ollama language models. These scripts showcase different agent capabilities, from basic calculations to real-time AWS documentation queries.
This repository contains three progressively advanced examples of Strands Agents:
- Calculator Agent - Basic mathematical computation capabilities
- Interactive Conversational Agent - General-purpose question answering
- AWS Documentation Agent - Real-time AWS documentation queries via MCP
All examples use local Ollama models for privacy and control, with comprehensive documentation and error handling.
- Install Ollama from ollama.ai
- Pull a compatible model:

  ```bash
  ollama pull llama2   # or: ollama pull mistral
  ```

- Ensure Ollama is running:

  ```bash
  ollama serve
  ```
All scripts use inline dependency management. Dependencies are automatically installed when running with `uv`:

```bash
uv run script_name.py
```
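With `uv`, each script can declare its own dependencies in a PEP 723 inline metadata block at the top of the file, which `uv run` installs before execution. A hedged illustration; the dependency names in the actual scripts may differ:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "strands-agents",   # assumed package name for the Strands framework
#     "python-dotenv",
# ]
# ///
```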
Create a `.env` file in the repository root:

```
STRANDS_OLLAMA_HOST=localhost
STRANDS_OLLAMA_MODEL=llama2
```
Environment Variables:

- `STRANDS_OLLAMA_HOST`: Hostname where Ollama is running (default: `localhost`)
- `STRANDS_OLLAMA_MODEL`: Name of the Ollama model to use
File: calculator_agent.py
A Strands Agent equipped with calculator tools for mathematical computations.
- Integration with calculator tool
- Mathematical problem solving
- Step-by-step calculation explanations
```bash
uv run calculator_agent.py
```
The script automatically asks: "What is the square root of 1764?"
Expected output: Detailed explanation showing the calculation process and result (42).
Key components (see the sketch below):

- `Agent`: Core Strands agent with calculator tool
- `OllamaModel`: Local language model integration
- `calculator`: Pre-built mathematical computation tool
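To make the component list concrete, here is a minimal sketch of how such a script might be assembled, reusing the shared pattern shown later in this README. The import paths (`strands`, `strands.models.ollama`, `strands_tools`) are assumptions about the SDK layout, not an excerpt from `calculator_agent.py`:

```python
import os

from dotenv import load_dotenv
from strands import Agent                      # assumed import path
from strands.models.ollama import OllamaModel  # assumed import path
from strands_tools import calculator           # assumed import path

load_dotenv()

ollama_model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv("STRANDS_OLLAMA_MODEL"),
)

# The calculator tool lets the model delegate arithmetic instead of guessing
agent = Agent(tools=[calculator], model=ollama_model)
agent("What is the square root of 1764?")
```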
File: interactive_agent.py
A general-purpose conversational agent for open-ended discussions and questions.
- Interactive command-line interface
- General knowledge question answering
- No specialized tools - pure conversation
- Default topic about Agentic AI
```bash
uv run interactive_agent.py
```
```
Enter a topic to query the LLM about: What is machine learning?
```

Or press Enter for the default topic: "Tell me about Agentic AI"
Key components (see the sketch below):

- `Agent`: Basic conversational agent without tools
- `OllamaModel`: Local language model for responses
- Interactive user input with sensible defaults
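A minimal sketch of the interaction loop, under the same assumed import paths as the calculator sketch above; the real script's prompt handling may differ:

```python
import os

from dotenv import load_dotenv
from strands import Agent                      # assumed import path
from strands.models.ollama import OllamaModel  # assumed import path

load_dotenv()

model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv("STRANDS_OLLAMA_MODEL"),
)

# No tools: the agent relies purely on the model for conversation
agent = Agent(model=model)

topic = input("Enter a topic to query the LLM about: ").strip()
agent(topic or "Tell me about Agentic AI")  # fall back to the default topic
```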
File: aws_mcp_agent.py
An advanced agent that provides real-time access to official AWS documentation using Model Context Protocol (MCP).
- Real-time AWS documentation queries
- Official AWS Labs MCP server integration
- Markdown-formatted responses
- Up-to-date service information
- `uvx` installed:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Internet connection for the MCP server
```bash
uv run aws_mcp_agent.py
```
```
Ask a question about aws documentation: How do I configure S3 buckets for static websites?
```

Or press Enter for the default: "Tell me about Amazon Bedrock and how to use it with Python, provide the output in Markdown format"
Key components (see the sketch below):

- `Agent`: Strands agent with AWS documentation tools
- `MCPClient`: Model Context Protocol client
- `stdio_client`: Communication with the AWS documentation server
- Real-time documentation access via `awslabs.aws-documentation-mcp-server`
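A hedged sketch of how the MCP wiring might look, based on the component list above and the MCP Python SDK's stdio transport; treat the import paths and the `list_tools_sync` call as assumptions rather than a copy of the script:

```python
import os

from dotenv import load_dotenv
from mcp import StdioServerParameters, stdio_client
from strands import Agent                      # assumed import path
from strands.models.ollama import OllamaModel  # assumed import path
from strands.tools.mcp import MCPClient        # assumed import path

load_dotenv()

model = OllamaModel(
    host=f"http://{os.getenv('STRANDS_OLLAMA_HOST', 'localhost')}:11434",
    model_id=os.getenv("STRANDS_OLLAMA_MODEL"),
)

# Launch the AWS docs MCP server as a subprocess via uvx, talking over stdio
aws_docs_client = MCPClient(
    lambda: stdio_client(
        StdioServerParameters(
            command="uvx",
            args=["awslabs.aws-documentation-mcp-server@latest"],
        )
    )
)

# The client is a context manager; its tools are only valid while it is open
with aws_docs_client:
    tools = aws_docs_client.list_tools_sync()  # assumed method name
    agent = Agent(tools=tools, model=model)
    agent("Tell me about Amazon Bedrock and how to use it with Python")
```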
All scripts share these core architectural elements:
```python
# Environment setup
load_dotenv()
ollama_host = f"http://{os.getenv('STRANDS_OLLAMA_HOST')}:11434"  # Ollama serves HTTP by default

# Model initialization
ollama_model = OllamaModel(
    host=ollama_host,
    model_id=os.getenv('STRANDS_OLLAMA_MODEL')
)

# Agent creation
agent = Agent(tools=[...], model=ollama_model)
```
- Calculator Agent: Basic tool integration
- Interactive Agent: User interaction patterns
- AWS MCP Agent: External service integration via MCP
Each script can be executed directly:
```bash
# Using uv (recommended)
uv run script_name.py

# Or with traditional Python (after installing dependencies)
python script_name.py
```
All scripts include comprehensive error handling for:
- Missing environment variables
- Ollama connection issues
- MCP server connectivity (AWS script)
- Invalid user inputs
Each script follows a consistent pattern (sketched after this list):
- Environment setup and validation
- Model/client initialization
- User interaction (where applicable)
- Query processing and response handling
- Graceful error management
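One way that pattern might look as a skeleton; this is illustrative, not an excerpt from the scripts:

```python
import os
import sys

from dotenv import load_dotenv

# 1. Environment setup and validation
load_dotenv()
host = os.getenv("STRANDS_OLLAMA_HOST")
model_id = os.getenv("STRANDS_OLLAMA_MODEL")
if not host or not model_id:
    sys.exit("Missing STRANDS_OLLAMA_HOST or STRANDS_OLLAMA_MODEL in .env")

try:
    # 2. Model/client initialization, 3. user interaction,
    # 4. query processing and response handling go here
    ...
except (ConnectionError, OSError) as exc:
    # 5. Graceful error management instead of a raw traceback
    sys.exit(f"Could not reach Ollama at {host}: {exc}")
```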
Start with the Calculator Agent to understand:
- Basic Strands Agent setup
- Tool integration concepts
- Ollama model configuration
The Interactive Agent builds upon the basics with:
- User interaction patterns
- Input validation and defaults
- Conversational AI without tools
The AWS MCP Agent demonstrates complex integrations:
- Model Context Protocol usage
- External service integration
- Real-time data access
- Context manager patterns
Ollama Connection Errors:
```bash
# Check if Ollama is running (it serves HTTP, not HTTPS, by default)
curl http://localhost:11434/api/version

# Start Ollama if needed
ollama serve
```
Missing Model Errors:
```bash
# List available models
ollama list

# Pull required model
ollama pull llama2
```
AWS MCP Server Issues (AWS script only):
```bash
# Ensure uvx is installed
uvx --version

# Test MCP server availability
uvx awslabs.aws-documentation-mcp-server@latest --help
```
Environment Variable Issues:
- Verify the `.env` file exists and contains the required variables
- Check file permissions and syntax
- Ensure no extra spaces or quotes around values
Add debugging to any script by modifying the agent call:
```python
response = agent(query, debug=True)  # If supported
```
Or add verbose logging:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
- Fork the repository
- Create a feature branch
- Add comprehensive docstrings to new functions
- Include error handling and examples
- Test with multiple Ollama models
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.