Paper | Features | Installation | Usage | CLI | Development
A Model Context Protocol (MCP) based LLM deep evaluation framework.
This project provides a framework for evaluating Large Language Models (LLMs) using the Model Context Protocol. It automates end-to-end task generation and deep evaluation of LLM agents across diverse dimensions.
🎬 Watch Full Demo Video (with audio)
Click above to download and view the complete MCPEval demonstration with audio explanation
MCPEval system architecture showing the complete evaluation pipeline from task generation to analysis
MCPEval web interface providing intuitive access to all evaluation features
- Support for GPT-5
- Model-config support, allowing any model to be used for task generation and evaluation
- A new revalidation CLI command for generating high-quality data
- 🚀 Automated End-to-End Evaluation
- 🔧 MCP Protocol Integration
- 📊 Comprehensive Analysis & Insights
- 💻 User-Friendly Web-based Interface
- ⚡ Advanced CLI Commands
- 🔬 Research & Development Support
If you find our system or paper useful, please cite
@misc{liu2025mcpevalautomaticmcpbaseddeep,
title={MCPEval: Automatic MCP-based Deep Evaluation for AI Agent Models},
author={Zhiwei Liu and Jielin Qiu and Shiyu Wang and Jianguo Zhang and Zuxin Liu and Roshan Ram and Haolin Chen and Weiran Yao and Huan Wang and Shelby Heinecke and Silvio Savarese and Caiming Xiong},
year={2025},
eprint={2507.12806},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2507.12806},
}
For complete setup including both CLI and Web UI:
# Clone the repository
git clone https://github.com/SalesforceAIResearch/MCPEval.git
cd MCPEval
# Run unified setup script (installs CLI, backend API, and frontend UI)
./setup.sh

This will set up:
- ✅ Core CLI evaluation framework
- ✅ Flask REST API backend
- ✅ React web interface
- ✅ All dependencies using uv package manager
For command-line usage only:
# Make sure uv is installed
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install the package
uv sync
# Optional: install development extras
uv sync --extra dev

# Create your environment file from the template
cp .env.template .env
Edit the .env file to add your OpenAI API key:
OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
OR export the key in your terminal:
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
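Either way, the key only needs to be visible to the process running the evaluations. As an optional sanity check (a minimal sketch, assuming python-dotenv is available in the project environment), you can confirm the key is picked up before running anything:

```python
import os

from dotenv import load_dotenv  # python-dotenv; assumed to be installed

# Load OPENAI_API_KEY from a local .env file (a no-op if the variable is already exported)
load_dotenv()
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
print("OPENAI_API_KEY is configured")
```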
After running the setup script:
1. Start the backend API:

   cd backend
   uv run app.py

   Backend will run on https://localhost:22358

2. Start the frontend (in a new terminal):

   cd frontend
   npm start

   Frontend will run on https://localhost:22359

3. Access the web application:

   - Open https://localhost:22359 in your browser
   - Use the intuitive interface to generate tasks, run evaluations, and view results
   - Real-time progress tracking for all operations
Note: The frontend automatically proxies API requests to the backend server (port 22358). No additional configuration is needed.
For advanced users and automation:
We provide an example based on a special calculator MCP application: we define an example special calculator MCP server and use an OpenAI client to interact with it.
Quick start:
# Basic example with local MCP server
uv run mcp_clients/example_openai_client/client.py --servers mcp_servers/special_calculator/server.py
# Multiple servers with environment variables (use ^ for env vars)
uv run mcp_clients/example_openai_client/client.py --servers @modelcontextprotocol/server-sequential-thinking mcp-server-nationalparks^NPS_API_KEY=your-api-key-here
# Combined example with arguments and environment variables
uv run mcp_clients/example_openai_client/client.py --servers @openbnb/mcp-server-airbnb:--ignore-robots-txt mcp-server-nationalparks^NPS_API_KEY=your-api-key-here

For more details on OpenAI client usage, see the OpenAI Client README.
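If you want to inspect the example server without going through the OpenAI client, the sketch below lists its tools directly over MCP. It is an illustration only, assuming the official mcp Python SDK is installed and that the server script path matches the example above.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the example special calculator server over stdio (path from the example above)
    server = StdioServerParameters(
        command="uv",
        args=["run", "mcp_servers/special_calculator/server.py"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # Print the tool names the server exposes
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```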
# Complete development environment
./setup.sh
# Start backend API (Terminal 1)
cd backend && uv run app.py
# Start frontend UI (Terminal 2)
cd frontend && npm start
# Access at https://localhost:22359

For each benchmark contribution, please follow these steps:
- Create a new directory in the benchmarks/your_benchmark_name folder.
- If you are developing a new MCP server, please create a new folder and add the server script in the mcp_servers folder.
- If you are developing a new MCP client, please create a new folder and add the client script in the mcp_clients folder.
- Add your benchmark scripts to the benchmarks/your_benchmark_name folder.
For web interface contributions:
- Frontend components: frontend/src/components/ and frontend/src/pages/
- Backend API endpoints: backend/app.py
See our detailed Development Roadmap for the current progress and planned features across all components.
The MCPEval CLI provides a comprehensive toolkit for managing MCP servers and evaluating LLMs. For detailed documentation, parameter descriptions, and advanced usage examples, see the CLI README.
Auto Workflow (Recommended) - Complete evaluation pipeline in one command:
# Automatically generate tasks, verify, evaluate, and analyze results
mcp-eval auto \
--servers @openbnb/mcp-server-airbnb:--ignore-robots-txt \
--working-dir evaluation_results/airbnb_eval \
--task-model gpt-4o-2024-11-20 \
--eval-model-configs benchmarks/airbnb/eval_models/gpt-4o.json \
--num-tasks 50

For more control over each step:
# 1. Generate tasks
mcp-eval generate-tasks \
--servers @openbnb/mcp-server-airbnb:--ignore-robots-txt \
--model-config benchmarks/airbnb/eval_models/gpt-4o.json \
--num-tasks 200 \
--output data/airbnb/evaluation_tasks.jsonl
# 2. Verify tasks work correctly
mcp-eval verify-tasks \
--servers @openbnb/mcp-server-airbnb:--ignore-robots-txt \
--tasks-file data/airbnb/evaluation_tasks.jsonl \
--output data/airbnb/evaluation_tasks_verified.jsonl
# 3. Revalidate task descriptions based on execution data (optional but recommended)
mcp-eval revalidate-tasks \
--verified-tasks-file data/airbnb/evaluation_tasks_verified.jsonl \
--model-config benchmarks/airbnb/eval_models/gpt-4o.json \
--output data/airbnb/evaluation_tasks_final.jsonl
# 4. Evaluate model performance
mcp-eval evaluate \
--servers @openbnb/mcp-server-airbnb:--ignore-robots-txt \
--model-config benchmarks/airbnb/eval_models/gpt-4o.json \
--tasks-file data/airbnb/evaluation_tasks_final.jsonl \
--output benchmarks/airbnb/results/gpt4o_evaluation.json \
--max-turns 30
# 5. Analyze results and generate reports
mcp-eval analyze \
--predictions benchmarks/airbnb/results/gpt4o_evaluation.json \
--ground-truth data/airbnb/evaluation_tasks_final.jsonl \
--generate-report
# 6. Optional: Run LLM judge evaluation
mcp-eval judge \
--input-file benchmarks/airbnb/results/gpt4o_evaluation.json \
--output-dir benchmarks/airbnb/results \
--model-config benchmarks/airbnb/eval_models/gpt-4o.json
# 7. Optional: Analyze LLM judgment results
mcp-eval judge-rubric \
--trajectory-file benchmarks/airbnb/results/gpt4o_evaluation_trajectory.json \
--completion-file benchmarks/airbnb/results/gpt4o_evaluation_completion.json \
--output-dir benchmarks/airbnb/report

Note: The revalidation step (step 3) analyzes the actual tool conversations from verified tasks and improves task descriptions to be more accurate and specific. This leads to higher-quality evaluation datasets and better task clarity for subsequent evaluations.
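Each intermediate artifact in this pipeline is a JSON Lines file. The exact task schema is defined by the framework and is not assumed here; the snippet below is just a schema-agnostic sanity check you can run between steps (the file path is taken from the commands above).

```python
import json

# Count the generated tasks and peek at the fields of the first record
path = "data/airbnb/evaluation_tasks_verified.jsonl"
with open(path) as f:
    tasks = [json.loads(line) for line in f if line.strip()]

print(f"{len(tasks)} tasks in {path}")
if tasks:
    print("Fields in the first task:", sorted(tasks[0].keys()))
```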
- generate-tasks - Generate evaluation tasks for MCP servers
- verify-tasks - Verify tasks can be executed successfully
- revalidate-tasks - Improve task descriptions based on actual execution data
- evaluate - Evaluate models using MCP servers and tasks
- analyze - Analyze evaluation results and generate reports
- judge - Run LLM-based evaluation of execution trajectories
- judge-rubric - Analyze LLM judgment results
- convert-data - Convert data to different formats (e.g., XLAM)
- auto - Complete automated evaluation workflow
Models are configured using JSON files. Examples:
{
"model": "gpt-4o-2024-11-20",
"temperature": 0.01,
"max_tokens": 16384
}

For custom endpoints:
{
"model": "mistral-24b",
"api_key": "default",
"temperature": 0.01,
"max_tokens": 3000,
"base_url": "https://<IP_Address>:<port>/v1"
}
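These config files mirror OpenAI-style chat-completion parameters. As an illustration only (assuming the target endpoint is OpenAI-compatible and the openai Python SDK is installed), the fields map onto a client call roughly like this:

```python
import json

from openai import OpenAI

# Load a model config such as benchmarks/airbnb/eval_models/gpt-4o.json
with open("benchmarks/airbnb/eval_models/gpt-4o.json") as f:
    cfg = json.load(f)

# api_key/base_url appear only in custom-endpoint configs; otherwise the
# OPENAI_API_KEY environment variable from the .env setup is used.
client = OpenAI(api_key=cfg.get("api_key"), base_url=cfg.get("base_url"))

response = client.chat.completions.create(
    model=cfg["model"],
    temperature=cfg.get("temperature", 0.0),
    max_tokens=cfg.get("max_tokens", 1024),
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```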
# General help
mcp-eval --help
# Command-specific help
mcp-eval generate-tasks --help
mcp-eval evaluate --help

For comprehensive documentation, examples, and advanced usage patterns, see the Complete CLI Documentation.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
For any questions or feedback, please contact Zhiwei Liu at zhiweiliu@salesforce.com.

