See example outputs here: https://claude-code-deep-research.vercel.app/
- Why This Exists
- Repo Structure
- Quick Start
- How It Works
- Customization
- Roadmap
- Credits & Acknowledgements
- License
UPDATE: Added Claude2.md, which is tuned for deeper research and more closely mimics Graph-of-Thoughts patterns.
Large Language Models (LLMs) excel at single queries but struggle with complex, multi-step research requiring iterative querying, source verification, and citations—what OpenAI and Google call "Deep Research." Anthropic’s Claude Code can achieve the same results, provided the right instructions. This repo supplies those instructions, streamlined into an easy-to-use workflow.
| File/Folder | Purpose |
|---|---|
| CLAUDE.md | Master instructions for Claude Code. Includes Graph-of-Thoughts integration and deep-research methodology. |
| Deep Research Question Generator System Prompt.md | ChatGPT system prompt (o3/o3-pro) refining raw questions into structured prompts (OpenAI format recommended). |
| deepresearchprocess.md | Comprehensive 7-phase deep research playbook inspired by OpenAI & Google Gemini, foundational to CLAUDE.md. |
| .template_mcp.json | Optional MCP server configuration for local filesystem and browser automation with Claude (see the example configuration after this table). |
| examples/ | Sample refined questions and completed Claude reports, compared with outputs from other tools. |
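For reference, a project-scoped MCP configuration of this kind generally looks like the sketch below. This is not the verbatim contents of `.template_mcp.json`; the server choices (the reference filesystem server plus a Puppeteer browser server) and the path are placeholders to adapt before wiring the template into your own MCP setup.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/research/folder"]
    },
    "browser": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```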
## Example output and comparisons (from the examples folder): https://claude-code-deep-research.vercel.app/
- Open ChatGPT (model o3/o3-pro or other thinking models work best).
- Set the system prompt to the contents of `Deep Research Question Generator System Prompt.md`.
- Paste your raw research question into the user prompt.
- Respond to any clarifying questions from ChatGPT.
- Copy the generated OpenAI-formatted prompt.
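To make the refinement step concrete, here is a purely hypothetical before/after; the actual structure the system prompt produces will differ:

```
Raw question:
  How do small modular reactors compare economically with conventional nuclear plants?

Refined prompt (abridged):
  Objective: compare levelized cost, construction timelines, and financing risk for
  small modular reactors versus large conventional reactors.
  Scope: peer-reviewed studies, regulator filings, and vendor disclosures from 2018 onward.
  Deliverable: a cited Markdown report with an executive summary, comparison tables,
  and a full bibliography.
```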
- Launch a new Claude Code session.
- Set the model using `/model opus`.
- Type: `Please read the CLAUDE.md file and confirm when ready for my deep research question.`
- Wait for Claude's confirmation.
- Paste your refined question.
- Claude autonomously performs:
  - Research planning with Graph-of-Thoughts.
  - Spinning up multiple subagents to parallelize the work.
  - Iterative search and data scraping.
  - Fact verification and cross-referencing.
  - Markdown report generation with citations and bibliography.
- Review and refine as needed.
Tip ⚡: Include directory instructions, such as: "Save all outputs in the `/RESEARCH/[topic]` folder."
After obtaining the report, instruct Claude to convert it into a user-friendly website format for enhanced accessibility and readability.
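Put together, a session might look like the sketch below. The `claude` command and the exact message wording are illustrative; only `/model opus`, the CLAUDE.md hand-off, and the output-folder tip come from the steps above.

```
$ claude
> /model opus
> Please read the CLAUDE.md file and confirm when ready for my deep research question.
  (wait for Claude to confirm)
> Save all outputs in the /RESEARCH/[topic] folder. <paste the refined prompt from ChatGPT>
> Please convert the final report into a simple website in the same folder.
```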
[ ChatGPT (o3) ] → Question Refinement → [ Claude Code (opus) ] → Graph-of-Thoughts & Deep Research Pipeline → [ Cited Markdown Report ]
- DeepResearchProcess: Implements a 7-phase pipeline—Scope → Plan → Retrieve → Triangulate → Draft → Critique → Package.
- Graph-of-Thoughts: Allows Claude to branch and merge multiple reasoning paths rather than relying on linear chains.
- CLAUDE.md: Integrates instructions, enabling Claude to autonomously select tools, verify information, and embed citations systematically.
- Research Methodology: Derived from OpenAI and Gemini’s deep-research playbooks.
- Graph-of-Thoughts Integration: Adapted from Graph-of-Thoughts to support dynamic research pathways.
- Prompt Generation: The ChatGPT-based structured prompt ensures clarity; in tests it reduced confusion during Claude’s research by over 50%.
- Automation Hooks: `.template_mcp.json` demonstrates local automation options via MCP servers, enabling advanced Claude operations.
- Output Styles: Adjust formatting and citation preferences directly within the `CLAUDE.md` file.
- Model Flexibility: The ChatGPT system prompt generator can also produce Gemini-specific prompts if preferred.
- Tool Integration: Expand automation via MCP by updating `.template_mcp.json` and referencing additional tools within `CLAUDE.md`; see the example entry below.
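As an example of what such an extension can look like, the entry below registers an additional web-search MCP server. The package name and the `BRAVE_API_KEY` value are assumptions, not part of this repo; substitute whichever server you actually use and mention it in `CLAUDE.md` so Claude knows when to reach for it.

```json
{
  "mcpServers": {
    "web-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "<your-api-key>" }
    }
  }
}
```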
- Graph-of-Thoughts Framework: SPCL, ETH Zürich (MIT License).
- Methodologies inspired by publicly available OpenAI and Google Gemini documentation.
- Developed by Ankit at My Business Care Team (MyBCAT).
MIT License. See LICENSE file for full details.