AI-powered code analysis that helps developers understand and review changes faster.
Armchair bridges the gap between AI-assisted coding and real developer productivity by breaking down commits into logical chunks and providing intelligent code reviews—all through an interactive dashboard.
- Quick Start
- Features
- LLM Setup
- Performance Tips
- Configuration
- CLI Usage
- API Reference
- Security Model
- Troubleshooting
- Contributing
- License
- Changelog
- Advanced: Manual Docker
- Development
Get Armchair running in under 2 minutes.
- Docker Desktop (version 20.10+) installed and running
- An LLM provider configured — see LLM Setup for options
```shell
curl -fsSL https://raw.githubusercontent.com/armchr/armchr/main/scripts/armchair.sh -o armchair.sh
chmod +x armchair.sh
./armchair.sh
```

The script will:
- Prompt for your code repositories root directory
- Pull the latest Docker image
- Start the dashboard at http://localhost:8686
- Open your browser automatically for further setup
1. Click the Settings icon (⚙️) in the dashboard

2. Configure your LLM:

   | Provider | API Base URL | Model |
   |---|---|---|
   | Claude | https://api.anthropic.com/v1 | claude-sonnet-4-20250514 |
   | OpenAI | https://api.openai.com/v1 | gpt-4o |
   | Ollama | http://host.docker.internal:11434/v1 | qwen3:32b |

   See LLM Setup for detailed instructions on getting API keys or installing local models.

3. Add your repositories (paths must be under your workspace directory)

4. Start analyzing commits
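Once setup completes, a quick health check can confirm both services respond. A minimal sketch, assuming plain HTTP on the default ports from the setup script:

```shell
# Probe the dashboard and backend API on their default ports.
# 200 means the service is up; 000 means the port is not reachable.
for url in http://localhost:8686 http://localhost:8787/api/repositories; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 "$url")
  echo "$url -> ${code:-000}"
done
```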
Breaks down commits into logical, reviewable chunks:
- Identifies code structures and relationships
- Generates structured output for downstream analysis
- Supports multiple programming languages (Python, Go, JS/TS, Java, Rust, C/C++)
Standalone usage: The Splitter Agent can be used independently via CLI. See the Code Splitter Agent README for details.
When you split a large change, Armchair generates a mental model to help reviewers understand the big picture before diving into code:
The mental model includes:
- What This Change Does: High-level summary of the entire changeset
- How Patches Progress: Step-by-step guide through the logical order of patches
- Key Concepts: Important domain concepts and patterns introduced
- Review Tips: Suggestions for what to focus on during review
A single large commit gets split into multiple smaller, logically-grouped patches:
Each patch shows:
- Descriptive title explaining what it does
- Files affected with annotation counts
- Lines changed summary
Interactive web UI at http://localhost:8686:
- Browse branches, commits, and uncommitted changes
- Run splitter and reviewer analysis
- Visualize annotated code explanations
Each code change in a split patch includes AI-generated annotations that explain what the change does:
The annotated diff viewer shows:
- Inline annotations: AI explanations embedded directly in the diff
- Side-by-side diff: Before and after comparison
- Line-level context: Annotations reference specific line ranges
AI-powered code review:
- Analyzes commits and uncommitted changes
- Provides feedback on code quality and best practices
- Generates detailed suggestions
Backend API available at http://localhost:8787 for programmatic access.
Armchair requires an LLM to power its analysis. You can use proprietary cloud APIs or run open models locally.
Claude models excel at code understanding and review tasks.
| Setting | Value |
|---|---|
| API Base URL | https://api.anthropic.com/v1 |
| Recommended Model | claude-sonnet-4-20250514 |
Get an API key:
- Create an account at console.anthropic.com
- Navigate to API Keys
- Click "Create Key" and copy it to your Armchair settings
| Setting | Value |
|---|---|
| API Base URL | https://api.openai.com/v1 |
| Recommended Model | gpt-4o |
Get an API key:
- Create an account at platform.openai.com
- Navigate to API Keys
- Click "Create new secret key" and copy it to your Armchair settings
Run models locally for privacy and no API costs. Ollama makes it easy to run open-source models.
- Download from ollama.com/download
- Install and start Ollama
- Verify installation:
```shell
ollama --version
```
We recommend Qwen3 Coder for code analysis:
```shell
# Install Qwen3 Coder (recommended for code tasks)
ollama pull qwen3:32b

# Or for smaller systems
ollama pull qwen3:14b
```

| Model | Command | Best For |
|---|---|---|
| Qwen3 | ollama pull qwen3:32b | Code analysis, general reasoning |
| DeepSeek Coder V2 | ollama pull deepseek-coder-v2:16b | Code-specific tasks |
| DeepSeek R1 | ollama pull deepseek-r1:32b | Complex reasoning, detailed analysis |
| Llama 3.1 | ollama pull llama3.1:70b | General purpose, well-rounded |
| Llama 3.1 (smaller) | ollama pull llama3.1:8b | Faster, lower resource usage |
In the Armchair Settings UI:
| Setting | Value |
|---|---|
| API Base URL | http://host.docker.internal:11434/v1 |
| Model Name | qwen3:32b (or your installed model) |
| API Key | Leave empty |
Important: Use host.docker.internal instead of localhost so the Docker container can reach your local Ollama instance.
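To verify that connectivity before debugging in the UI, one option is to probe Ollama's standard `/api/tags` endpoint (which lists installed models). A sketch, assuming the default Ollama port and the default container name from the setup script:

```shell
# From the host: check Ollama is listening on its default port.
curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null \
  && echo "ollama reachable" || echo "ollama not reachable"

# From inside the container, the same check goes via host.docker.internal:
# docker exec armchair-dashboard curl -s http://host.docker.internal:11434/api/tags
```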
For large repositories, enable commitOnly mode to skip loading unstaged/untracked files:
Via Settings UI: Toggle "Commit Only" when adding a repository
Via config file (~/.armchair_output/.armchair/source.yaml):
```yaml
source:
  repositories:
    - name: "large-repo"
      path: "/path/to/large-repo"
      commitOnly: true  # Faster loading, commits still available
```

When enabled:
- All commits and branches remain available
- Dashboard loads significantly faster
- Unstaged/untracked files are hidden
All settings are managed through the dashboard Settings UI (⚙️) or config files.
| File | Purpose |
|---|---|
| ~/.armchair_output/.armchair/.armchair.json | LLM settings (API URL, model, key) |
| ~/.armchair_output/.armchair/source.yaml | Repository configuration |
```
~/.armchair_output/
├── .armchair/
│   ├── source.yaml     # Repository config
│   └── .armchair.json  # LLM settings
├── commit_*/           # Split patches
└── reviews/            # Code reviews
```
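Given that layout, the most recent split outputs can be listed directly; a small sketch assuming the default output directory:

```shell
# Newest split directories first; empty output means no splits yet.
ls -dt "$HOME"/.armchair_output/commit_* 2>/dev/null | head -5
```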
```shell
./armchair.sh [OPTIONS]

Options:
  --port-frontend PORT   Frontend port (default: 8686)
  --port-backend PORT    Backend port (default: 8787)
  --foreground, -f       Run in foreground (show logs)
  --name NAME            Container name (default: armchair-dashboard)
  --local                Use local image 'explainer:latest'
  --image IMAGE          Use custom Docker image
  --help, -h             Show help
```

Run the Splitter Agent standalone (without the dashboard) for CI/CD or automation:

```shell
./scripts/run_splitter.sh --repo REPO_NAME --api-key YOUR_API_KEY
```

| Flag | Description |
|---|---|
| --repo NAME | Repository name from config (required) |
| --api-key KEY | API key (or set OPENAI_API_KEY env var) |
| --commit HASH | Analyze specific commit |
| --patch | Analyze uncommitted changes |
| --mcp-config FILE | MCP server configuration file |
| --verbose | Enable verbose output |
| --interactive, -it | Interactive mode |
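Putting the flags together, a typical standalone invocation might look like the following. The repository name and commit hash are placeholders; the guard simply prints the command when run outside an Armchair checkout:

```shell
# Analyze one commit with verbose output; values are illustrative.
cmd="./scripts/run_splitter.sh --repo my-repo --commit abc1234 --verbose"
if [ -x ./scripts/run_splitter.sh ]; then
  $cmd --api-key "$OPENAI_API_KEY"
else
  echo "Run inside an Armchair checkout: $cmd"
fi
```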
Note: The --mcp-config flag enables integration with Model Context Protocol servers, allowing the splitter to use additional tools and context sources during analysis.
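The same analyses can also be scripted against the backend HTTP API. When doing so, building the JSON body with a helper avoids shell-quoting mistakes; a minimal POSIX-sh sketch (field names match the split endpoint's request format):

```shell
# Build the /api/split request body; printf handles the quoting.
split_body() {
  printf '{"repoName": "%s", "branch": "%s", "commitId": "%s"}' "$1" "$2" "$3"
}

body=$(split_body my-repo main abc1234)
echo "$body"
# Then POST it to a running dashboard:
# curl -s -X POST http://localhost:8787/api/split \
#   -H "Content-Type: application/json" -d "$body"
```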
```shell
# Split a specific commit
curl -X POST http://localhost:8787/api/split \
  -H "Content-Type: application/json" \
  -d '{"repoName": "my-repo", "branch": "main", "commitId": "abc1234"}'

# Split uncommitted changes
curl -X POST http://localhost:8787/api/split \
  -H "Content-Type: application/json" \
  -d '{"repoName": "my-repo", "branch": "main"}'

# List repositories and branches
curl http://localhost:8787/api/repositories

# List analyzed commits
curl http://localhost:8787/api/commits

# Get commit diff
curl http://localhost:8787/api/repositories/my-repo/commits/abc1234/diff

# Get uncommitted changes
curl http://localhost:8787/api/repositories/my-repo/branches/main/working-directory/diff
```

Armchair is designed with security in mind:
| Aspect | Implementation |
|---|---|
| File Access | Your home directory is mounted read-only (-v "$HOME:/workspace:ro") |
| Data Locality | All processing happens locally—no data sent to Armchair servers |
| LLM Communication | Only files you explicitly select are sent to your configured LLM |
| Output Isolation | Results stored in ~/.armchair_output, separate from your code |
What gets sent to the LLM:
- Code diffs you choose to analyze
- File contents within those diffs
What stays local:
- Repository metadata
- Git history
- All files not explicitly analyzed
| Problem | Solution |
|---|---|
| Container won't start | Verify Docker is running: docker info |
| Port in use | Use --port-frontend 3000 --port-backend 3001 |
| Permission denied | Add workspace to Docker Desktop → Settings → Resources → File Sharing |
```shell
# View logs
docker logs armchair-dashboard

# Follow logs
docker logs -f armchair-dashboard

# Clean restart
docker stop armchair-dashboard && docker rm armchair-dashboard
./armchair.sh
```

| Problem | Solution |
|---|---|
| API errors | Verify API key is valid |
| Model not found | Check model name matches provider exactly |
| Ollama not connecting | Use http://host.docker.internal:11434/v1 (not localhost) |
| Problem | Solution |
|---|---|
| Repository not showing | Add via Settings UI (⚙️) |
| Can't access repo | Ensure path is under your workspace directory |
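That workspace check can be done from a shell before adding the repository. A sketch, assuming the workspace root defaults to $HOME (which the setup script mounts read-only at /workspace); the path is a placeholder:

```shell
# Verify a repository path sits under the mounted workspace root.
repo_path="$HOME/projects/my-repo"   # placeholder path
case "$repo_path" in
  "$HOME"/*) echo "ok: visible to Armchair" ;;
  *)         echo "outside workspace: move it under $HOME or remount" ;;
esac
```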
We welcome contributions! Please:
- Open an issue to discuss proposed changes
- Fork the repository
- Create a feature branch
- Submit a pull request
Report bugs and request features via GitHub Issues.
MIT License. See LICENSE for details.
See CHANGELOG.md and Releases for version history.
Current Version: v0.2
For manual Docker control without the script:
```shell
docker run -d \
  --name armchair-dashboard \
  -p 8686:8686 -p 8787:8787 \
  -v "$HOME:/workspace:ro" \
  -v "$HOME/.armchair_output:/app/output" \
  --entrypoint /bin/bash \
  armchr/explainer:latest \
  -c "cd /app/backend && node server.js --output /app/output --root-map /workspace --root-dir $HOME & cd /app/frontend && serve -s dist -l 8686"
```

```shell
docker logs armchair-dashboard      # View logs
docker logs -f armchair-dashboard   # Follow logs
docker stop armchair-dashboard      # Stop
docker start armchair-dashboard     # Start
docker rm armchair-dashboard        # Remove
```

For local development, building from source, or contributing to Armchair, see the code_explainer_ui README. It contains:
- Full architecture documentation (React frontend, Express.js backend, AI agents)
- Local development setup with Make commands
- Environment variables and configuration details
- API endpoint reference
- MCP server mode documentation
- Project structure and key technologies



