Model Context Protocol (MCP) server for Lucid App integration. Enables multimodal LLMs to access and analyze Lucid diagrams through visual exports.
- Document discovery and metadata retrieval from LucidChart, LucidSpark, and LucidScale
- PNG image export from Lucid diagrams
- AI-powered diagram analysis with multimodal LLMs (supports Azure OpenAI and OpenAI)
- Environment-based API key management with automatic fallback from Azure OpenAI to OpenAI
- TypeScript implementation with full test coverage
- MCP Inspector integration for easy testing
Before you begin, ensure you have the following:
- Node.js: Version 18 or higher.
- Lucid API Key: A key from the Lucid Developer Portal is required for all features.
- AI Provider Key (Optional): For AI-powered diagram analysis, you need an API key for either Azure OpenAI or OpenAI.
Follow these steps to get the server running.
To install lucid-mcp-server for Claude Desktop automatically via Smithery:
```sh
npx -y @smithery/cli install @smartzan63/lucid-mcp-server --client claude
```
Install the package globally from npm:
```sh
npm install -g lucid-mcp-server
```
Set the following environment variables in your terminal. Only the Lucid API key is required.
```sh
# Required for all features
export LUCID_API_KEY="your_api_key_here"

# Optional: For AI analysis, configure either Azure OpenAI or OpenAI

# Option 1: Azure OpenAI (takes precedence)
export AZURE_OPENAI_API_KEY="your_azure_openai_key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-4o"

# Option 2: OpenAI (used as a fallback if Azure is not configured)
export OPENAI_API_KEY="your_openai_api_key"
export OPENAI_MODEL="gpt-4o" # Optional, defaults to gpt-4o
```
Note: The server automatically uses Azure OpenAI if `AZURE_OPENAI_API_KEY` is set. If not, it falls back to OpenAI if `OPENAI_API_KEY` is provided.
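The precedence rule above can be sketched as a small selection function. This is a hypothetical illustration of the documented behavior, not the server's actual code; the `selectProvider` name and return shape are assumptions:

```typescript
// Hypothetical sketch of the Azure-first fallback described above.
type Provider = { name: "azure" | "openai"; model: string } | null;

function selectProvider(env: Record<string, string | undefined>): Provider {
  // Azure OpenAI takes precedence whenever its key is set.
  if (env.AZURE_OPENAI_API_KEY) {
    return { name: "azure", model: env.AZURE_OPENAI_DEPLOYMENT_NAME ?? "gpt-4o" };
  }
  // Otherwise fall back to plain OpenAI if it is configured.
  if (env.OPENAI_API_KEY) {
    return { name: "openai", model: env.OPENAI_MODEL ?? "gpt-4o" };
  }
  // No AI provider: AI analysis is unavailable, but the other tools still work.
  return null;
}
```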
Test your installation using the MCP Inspector:
```sh
npx @modelcontextprotocol/inspector lucid-mcp-server
```
Once the server is running, you can interact with it using natural language or by calling its tools directly.
- Basic commands (works with just a Lucid API key):
  - "Show me all my Lucid documents"
  - "Get information about the document with ID: [document-id]"
- AI Analysis (requires Azure OpenAI or OpenAI setup):
  - "Analyze this diagram: [document-id]"
  - "What does this Lucid diagram show: [document-id]"
Lists documents in your Lucid account.
- Parameters:
  - `keywords` (string, optional): Search keywords to filter documents.
- Example: `{ "keywords": "architecture diagram" }`
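Over the wire, an MCP client invokes a tool with a JSON-RPC `tools/call` request, as defined by the MCP specification. The tool name `search-documents` below is an assumption for illustration; check the server's tool listing for the actual name:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search-documents",
    "arguments": { "keywords": "architecture diagram" }
  }
}
```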
Gets document metadata and can optionally perform AI analysis on its visual content.
- Parameters:
  - `documentId` (string): The ID of the document from the Lucid URL.
  - `analyzeImage` (boolean, optional): Set to `true` to perform AI analysis. ⚠️ Requires an Azure OpenAI or OpenAI key.
  - `pageId` (string, optional): The specific page to export (default: "0_0").
- Example: `{ "documentId": "demo-document-id-here-12345678/edit", "analyzeImage": true }`
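Since the document ID comes from the Lucid URL, a small helper can pull it out of a share link. This is a hypothetical sketch assuming URLs of the form `https://lucid.app/<product>/<documentId>/edit`; it is not part of the server's API:

```typescript
// Hypothetical helper: extract the document ID from a Lucid share URL.
// Assumes the URL looks like https://lucid.app/<product>/<documentId>/edit...
function extractDocumentId(url: string): string | null {
  const match = url.match(/lucid\.app\/[^/]+\/([^/?#]+)/);
  return match ? match[1] : null;
}
```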
You can integrate the server directly into Visual Studio Code.
- Open the Command Palette (`Ctrl+Shift+P` or `Cmd+Shift+P`).
- Run the command: "MCP: Add Server".
- Choose "npm" as the source.
- Enter the package name: `lucid-mcp-server`.
- VS Code will guide you through the rest of the setup.
- Verify the automatically generated configuration, since AI-assisted setup can make mistakes.
Click the "Install in VS Code" badge at the top of this README, then follow the on-screen prompts. You will need to configure the environment variables manually in your `settings.json`.
Click to view manual `settings.json` configuration
Add the following JSON to your VS Code `settings.json` file. This method provides the most control and is useful for custom setups.
```json
{
  "mcp": {
    "servers": {
      "lucid-mcp-server": {
        "type": "stdio",
        "command": "lucid-mcp-server",
        "env": {
          "LUCID_API_KEY": "${input:lucid_api_key}",
          "AZURE_OPENAI_API_KEY": "${input:azure_openai_api_key}",
          "AZURE_OPENAI_ENDPOINT": "${input:azure_openai_endpoint}",
          "AZURE_OPENAI_DEPLOYMENT_NAME": "${input:azure_openai_deployment_name}",
          "OPENAI_API_KEY": "${input:openai_api_key}",
          "OPENAI_MODEL": "${input:openai_model}"
        }
      }
    },
    "inputs": [
      {
        "id": "lucid_api_key",
        "type": "promptString",
        "description": "Lucid API Key (REQUIRED)"
      },
      {
        "id": "azure_openai_api_key",
        "type": "promptString",
        "description": "Azure OpenAI API Key (Optional, for AI analysis)"
      },
      {
        "id": "azure_openai_endpoint",
        "type": "promptString",
        "description": "Azure OpenAI Endpoint (Optional, for AI analysis)"
      },
      {
        "id": "azure_openai_deployment_name",
        "type": "promptString",
        "description": "Azure OpenAI Deployment Name (Optional, for AI analysis)"
      },
      {
        "id": "openai_api_key",
        "type": "promptString",
        "description": "OpenAI API Key (Optional, for AI analysis - used if Azure is not configured)"
      },
      {
        "id": "openai_model",
        "type": "promptString",
        "description": "OpenAI Model (Optional, for AI analysis, default: gpt-4o)"
      }
    ]
  }
}
```
- Fork the repository.
- Create your feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes (`git commit -m 'Add amazing feature'`).
- Push to the branch (`git push origin feature/amazing-feature`).
- Open a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.