A Model Context Protocol (MCP) server that aggregates results from multiple search engines, providing free web search without API keys.
- Web search using multi-engine results
- bing
- baidu
- linux.do (temporarily unsupported)
- csdn
- duckduckgo
- exa
- brave
- HTTP proxy configuration support for accessing restricted resources
- No API keys or authentication required
- Returns structured results with titles, URLs, and descriptions
- Configurable number of results per search
- Customizable default search engine
- Support for fetching individual article content
- csdn
- linux.do
- Support for Bing (already supported), DuckDuckGo (already supported), Exa (already supported), Brave (already supported), Google, and other search engines
- Support for more blogs, forums, and social platforms
- Optimize article content extraction and add support for more sites
- Clone or download this repository
- Install dependencies:
npm install
- Build the server:
npm run build
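- Start the server (this sketch assumes the package defines a start script; check package.json for the actual command):
npm start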
- Add the server to your MCP configuration:
Cherry Studio:
{
"mcpServers": {
"web-search": {
"name": "Web Search MCP",
"type": "streamableHttp",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"baseUrl": "https://localhost:3000/mcp"
}
}
}
VSCode (Claude Dev Extension):
{
"mcpServers": {
"web-search": {
"transport": {
"type": "streamableHttp",
"url": "https://localhost:3000/mcp"
}
},
"web-search-sse": {
"transport": {
"type": "sse",
"url": "https://localhost:3000/sse"
}
}
}
}
Claude Desktop:
{
"mcpServers": {
"web-search": {
"transport": {
"type": "streamableHttp",
"url": "https://localhost:3000/mcp"
}
},
"web-search-sse": {
"transport": {
"type": "sse",
"url": "https://localhost:3000/sse"
}
}
}
}
Quick deployment using Docker Compose:
docker-compose up -d
Or use Docker directly:
docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/aas-ee/open-web-search:latest
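If you are not cloning the repository, a docker-compose.yml along these lines should be equivalent to the docker run command above (a sketch only; the repository ships its own compose file):
services:
  web-search:
    image: ghcr.io/aas-ee/open-web-search:latest
    container_name: web-search
    ports:
      - "3000:3000"
    environment:
      - ENABLE_CORS=true
      - CORS_ORIGIN=*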
Environment variable configuration:
# Enable CORS (default: false)
ENABLE_CORS=true
# CORS origin configuration (default: *)
CORS_ORIGIN=*
# Default search engine (options: bing, duckduckgo, exa, brave, default: bing)
DEFAULT_SEARCH_ENGINE=duckduckgo
# Enable HTTP proxy (default: false)
USE_PROXY=true
# Proxy server URL (default: http://127.0.0.1:10809)
PROXY_URL=http://your-proxy-server:port
# Server port (default: 3000)
PORT=8080
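For example, to run the published image on port 8080 with DuckDuckGo as the default engine, pass the same variables to docker run:
docker run -d --name web-search -p 8080:8080 -e PORT=8080 -e DEFAULT_SEARCH_ENGINE=duckduckgo -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/aas-ee/open-web-search:latest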
Then configure in your MCP client:
{
"mcpServers": {
"web-search": {
"name": "Web Search MCP",
"type": "streamableHttp",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"baseUrl": "https://localhost:3000/mcp"
},
"web-search-sse": {
"transport": {
"name": "Web Search MCP",
"type": "sse",
"description": "Multi-engine web search with article fetching",
"isActive": true,
"url": "https://localhost:3000/sse"
}
}
}
}
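To verify the server is reachable before wiring up a client, you can send a raw JSON-RPC initialize request to the streamable HTTP endpoint (a smoke-test sketch; the protocolVersion your build negotiates may differ):
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-test","version":"0.0.0"}}}'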
The server provides three tools: search, fetchLinuxDoArticle, and fetchCsdnArticle.
The search tool accepts the following arguments:
{
"query": string, // Search query
"limit": number, // Optional: Number of results to return (default: 10)
"engines": string[] // Optional: Engines to use (bing,baidu,linuxdo,csdn,duckduckgo,exa,brave) default bing
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "search",
arguments: {
query: "search content",
limit: 3, // Optional parameter
engines: ["bing", "csdn", "duckduckgo", "exa", "brave"] // Optional parameter, supports multi-engine combined search
}
})
Response example:
[
{
"title": "Example Search Result",
"url": "https://example.com",
"description": "Description text of the search result...",
"source": "Source",
"engine": "Engine used"
}
]
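You can also call the tools programmatically. Below is a minimal sketch assuming the official MCP TypeScript SDK (@modelcontextprotocol/sdk); any MCP-capable client will work the same way:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to the server's streamable HTTP endpoint
const transport = new StreamableHTTPClientTransport(new URL("http://localhost:3000/mcp"));
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the search tool; the arguments mirror the schema above
const result = await client.callTool({
  name: "search",
  arguments: { query: "model context protocol", limit: 3, engines: ["bing", "duckduckgo"] },
});
console.log(result.content);

await client.close();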
fetchCsdnArticle: used to fetch the complete content of CSDN blog articles.
{
"url": string // URL from CSDN search results using the search tool
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchCsdnArticle",
arguments: {
url: "https://blog.csdn.net/xxx/article/details/xxx"
}
})
Response example:
[
{
"content": "Example search result"
}
]
fetchLinuxDoArticle: used to fetch the complete content of Linux.do forum articles.
{
"url": string // URL from linuxdo search results using the search tool
}
Usage example:
use_mcp_tool({
server_name: "web-search",
tool_name: "fetchLinuxDoArticle",
arguments: {
url: "https://xxxx.json"
}
})
Response example:
[
{
"content": "Example search result"
}
]
Since this tool works by scraping multi-engine search results, please note the following important limitations:
- Rate Limiting:
  - Too many searches in a short time may cause the engines in use to temporarily block requests
  - Recommendations:
    - Maintain a reasonable search frequency
    - Use the limit parameter judiciously
    - Add delays between searches when necessary (see the throttling sketch after this list)
- Result Accuracy:
  - Parsing depends on each engine's HTML structure and may break when an engine updates its markup
  - Some results may lack metadata such as descriptions
  - Complex search operators may not work as expected
- Legal Terms:
  - This tool is for personal use only
  - Please comply with the terms of service of the engines you use
  - Implement appropriate rate limiting based on your actual use case
- Search Engine Configuration:
  - The default search engine can be set via the DEFAULT_SEARCH_ENGINE environment variable
  - Supported engines: bing, duckduckgo, exa, brave
  - The default engine is used when searching specific websites
- Proxy Configuration:
  - An HTTP proxy can be configured when certain search engines are unavailable in your region
  - Enable the proxy with the environment variable USE_PROXY=true
  - Configure the proxy server address with PROXY_URL (see the Docker example after this list)
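As noted under Rate Limiting, spacing out requests helps avoid temporary blocks. Here is a minimal client-side throttling sketch in TypeScript; runSearch stands in for whatever function invokes the search tool in your MCP client:
// Promise-based delay helper
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Run searches sequentially, pausing between requests to stay under engine rate limits
async function throttledSearches(
  queries: string[],
  runSearch: (query: string) => Promise<unknown>,
  delayMs = 2000,
): Promise<void> {
  for (const query of queries) {
    await runSearch(query);
    await sleep(delayMs);
  }
}
For the Docker example referenced under Proxy Configuration, enabling the proxy uses the same -e flags as the deployment command above:
docker run -d --name web-search -p 3000:3000 -e USE_PROXY=true -e PROXY_URL=http://your-proxy-server:port ghcr.io/aas-ee/open-web-search:latest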
Issue reports and suggestions for improvement are welcome!
If you want to fork this repository and publish your own Docker image, the following configuration is required:
To enable automatic Docker image building and publishing, please add the following secrets in your GitHub repository settings (Settings → Secrets and variables → Actions):
Required Secrets:
- GITHUB_TOKEN: Automatically provided by GitHub (no setup needed)
Optional Secrets (for Alibaba Cloud ACR):
- ACR_REGISTRY: Your Alibaba Cloud Container Registry URL (e.g., registry.cn-hangzhou.aliyuncs.com)
- ACR_USERNAME: Your Alibaba Cloud ACR username
- ACR_PASSWORD: Your Alibaba Cloud ACR password
- ACR_IMAGE_NAME: Your image name in ACR (e.g., your-namespace/open-web-search)
The repository includes a GitHub Actions workflow (.github/workflows/docker.yml) that automates the following:
- Trigger Conditions:
  - Push to the main branch
  - Push of version tags (v*)
  - Manual workflow trigger
- Build and Push Targets:
  - GitHub Container Registry (ghcr.io): always enabled
  - Alibaba Cloud Container Registry: only enabled when ACR secrets are configured
- Image Tags:
  - ghcr.io/your-username/open-web-search:latest
  - your-acr-address/your-image-name:latest (if ACR is configured)
- Fork the repository to your GitHub account
- Configure secrets (if you need ACR publishing):
- Go to Settings → Secrets and variables → Actions in your forked repository
- Add the ACR-related secrets listed above
- Push changes to the main branch or create version tags (see the git example after this list)
- GitHub Actions will automatically build and push your Docker image
- To use your image, update the Docker command:
docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/your-username/open-web-search:latest
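To create and push a version tag that triggers the workflow, standard git commands suffice, for example:
git tag v1.0.0
git push origin v1.0.0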
- If you don't configure ACR secrets, the workflow will only publish to GitHub Container Registry
- Make sure your GitHub repository has Actions enabled
- The workflow will use your GitHub username (converted to lowercase) as the GHCR image name
If you find this project helpful, please consider giving it a ⭐ Star!