MCP IDE Integration
Set up the Flezi AgentBox MCP server to invoke BMAD agents directly from VS Code, Cursor, or Claude Code
The Model Context Protocol (MCP) is an open standard that connects AI-powered IDEs to external tool servers. Flezi AgentBox's MCP endpoint lets you invoke any BMAD agent directly from your IDE — turning it into an AI-powered SDLC command center.
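Under the hood, MCP traffic is JSON-RPC 2.0. As a sketch (the `tools/list` method is part of the MCP standard, but the exact fields your IDE sends may vary), this is roughly the request body an IDE posts to discover available agents:

```python
import json

# Minimal JSON-RPC 2.0 envelope for the MCP "tools/list" method.
# The id value is arbitrary; the IDE picks its own.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

body = json.dumps(request)
print(body)
```

The orchestrator replies with a matching `id` and a `result` listing each agent as a callable tool.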
Automatic Setup (Recommended)
The easiest way to configure MCP is with the agentbox CLI:
```bash
agentbox init
```

This automatically detects your IDE, generates an API key, and injects the MCP configuration — no manual editing required. See the CLI Local Setup Guide for installation details.
Manual Setup
If you prefer manual configuration, follow the steps below.
Prerequisites
- Docker running with `docker compose up` (orchestrator + Supabase)
- A Flezi AgentBox API key configured in Settings > API Keys
- A supported IDE: VS Code, Cursor, or Claude Code
IDE Configuration
Claude Code
Add to your Claude Code MCP configuration:
```json
{
  "mcpServers": {
    "agentbox": {
      "url": "http://localhost:3001/api/v1/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

VS Code
Add to your .vscode/settings.json:
```json
{
  "mcp.servers": {
    "agentbox": {
      "url": "http://localhost:3001/api/v1/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Cursor
Add to ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "agentbox": {
      "url": "http://localhost:3001/api/v1/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Available Agents (21)
Core Workflow
| Agent | Purpose |
|-------|---------|
| BMAD Analyst | Business domain analysis |
| BMAD Architect | Solution architecture |
| BMAD PM | Product management |
| BMAD Dev | Implementation guidance |
| BMAD QA | Test strategy |
| BMAD Scrum Master | Sprint facilitation |
| BMAD Code Reviewer | Structured code review |
| BMAD Sprint Planner | Sprint planning |
| BMAD Story Writer | Story generation |
AMS Maintenance
| Agent | Purpose |
|-------|---------|
| BMAD Incident Responder | Incident triage and postmortem |
| BMAD Release Manager | Changelog and deployment |
| BMAD Debt Tracker | Tech debt analysis |
| BMAD Migration Planner | Migration planning |
| BMAD Performance Analyzer | Performance profiling |
Example: Code Review via MCP
You send:
Review the authentication middleware for security issues.
Agent responds:
```markdown
## Code Review Report

### Critical (1)
- Missing CSRF protection on POST /sessions

### Major (1)
- API key in query string — move to x-api-key header

**Verdict**: Request Changes
```

Example: Sprint Planning
You send:
Plan the next sprint for Epic 57. Team velocity: 25 points. 2 developers.
Example: Performance Analysis
You send:
Analyze the /api/v1/marketplace endpoint. Current p95 is 800ms, target <200ms.
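When you send one of these prompts, your IDE wraps it in an MCP `tools/call` request. A sketch of that payload for the performance example (the tool name `bmad_performance_analyzer` and the `message` argument are assumptions — check the names your instance returns from `tools/list`):

```python
import json

# Hypothetical "tools/call" payload the IDE would post to the orchestrator.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "bmad_performance_analyzer",  # assumed tool name
        "arguments": {
            "message": "Analyze the /api/v1/marketplace endpoint. "
                       "Current p95 is 800ms, target <200ms.",
        },
    },
}
print(json.dumps(call, indent=2))
```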
Architecture
```text
IDE (MCP Client)
  |
  | JSON-RPC 2.0 over HTTP
  v
Orchestrator (/api/v1/mcp)
  |
  |--> tools/list --> AgentDiscovery (with context inputSchema)
  |--> tools/call --> ContextValidator (5-layer security)
  |       |
  |       |--> Inline context from IDE (priority)
  |       |    OR ContextGatherer (local mount fallback)
  |       |
  |       |--> AgentRunner --> LLM API
  |
  <-- JSON-RPC response
```

API keys are encrypted at rest with AES-256-GCM. Inline context is validated through a 5-layer security pipeline. Project files are mounted read-only with path traversal protection.
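The orchestrator's routing step can be pictured as a small JSON-RPC dispatcher. This is a toy sketch only — handler names and return shapes are illustrative, not the actual implementation:

```python
# Toy sketch of the method routing in the diagram above.
def dispatch(request: dict) -> dict:
    handlers = {
        # tools/list -> AgentDiscovery (stub result)
        "tools/list": lambda p: {"tools": ["bmad_analyst", "bmad_dev"]},
        # tools/call -> ContextValidator -> AgentRunner (stub result)
        "tools/call": lambda p: {"content": f"ran {p['name']}"},
    }
    handler = handlers.get(request["method"])
    if handler is None:
        # Standard JSON-RPC "method not found" error
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": handler(request.get("params", {}))}

resp = dispatch({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
                 "params": {"name": "bmad_dev"}})
```

Unknown methods get the standard JSON-RPC `-32601` error; known ones return a `result` keyed to the request `id`.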
Project Context Sharing
Agents can understand your IDE workspace through inline context — the IDE's LLM automatically includes relevant files when calling agent tools.
How It Works
- `tools/list` declares an optional `context` field in each tool's input schema
- Your IDE's LLM reads the schema and populates `context.files` and `context.tree` with relevant workspace files
- The orchestrator validates the context (security pipeline) and injects it into the agent's prompt
You don't need to configure anything — it works automatically when your IDE supports MCP tool schemas.
Context Limits
| Limit | Value |
| ----- | ----- |
| Max files per request | 50 |
| Max single file size | 500 KB |
| Max tree size | 100 KB |
| Max HTTP body | 5 MB |
| Token budget (default) | 30,000 tokens |
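A client can pre-trim context against these limits before sending, as in this sketch (the function name is hypothetical, and the server-side token budget is not modeled here):

```python
# Constants mirror the limits table above.
MAX_FILES = 50
MAX_FILE_BYTES = 500 * 1024
MAX_TREE_BYTES = 100 * 1024

def trim_context(files: list[dict], tree: str) -> tuple[list[dict], str]:
    """Drop oversized files, cap the file count, and truncate the tree."""
    kept = [f for f in files if len(f["content"].encode()) <= MAX_FILE_BYTES]
    kept = kept[:MAX_FILES]
    if len(tree.encode()) > MAX_TREE_BYTES:
        tree = tree.encode()[:MAX_TREE_BYTES].decode(errors="ignore")
    return kept, tree
```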
Security
- Path validation — rejects traversal attacks, absolute paths, null bytes
- Sensitive file filter — blocks `.env`, `*.pem`, `*.key`, credentials
- Binary detection — skips images, executables, compiled files
- Token budget — prevents excessive context injection
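The first two checks can be sketched in a few lines — a simplified illustration of the rules listed above, not the orchestrator's actual validator:

```python
import posixpath

SENSITIVE_SUFFIXES = (".env", ".pem", ".key")

def is_safe_path(path: str) -> bool:
    """Reject null bytes, absolute paths, traversal, and sensitive files."""
    if "\x00" in path or path.startswith(("/", "\\")):
        return False
    # Normalize so "a/../../b" collapses and exposes the traversal
    normalized = posixpath.normpath(path)
    if normalized.startswith(".."):
        return False
    if path.endswith(SENSITIVE_SUFFIXES):
        return False
    return True
```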
Limitations
- Stateless — each tool call is independent (IDE LLM handles conversation memory)
- No dynamic context — agent cannot request additional files mid-execution
- HTTP only — future stdio transport will enable bidirectional MCP resources
Tips for Better Results
- Be specific — mention file names, endpoints, or table names
- Set scope — "focus on security" or "check performance only"
- Use the right agent — each has a specialty
- Iterate — refine input based on initial output
- Large projects — focus on specific packages, not the entire repo
Troubleshooting
| Issue | Fix |
| ------- | ----- |
| Connection refused | Run `docker compose up` |
| AUTH_INVALID | Regenerate API key in Settings |
| Generic output | Include file names in your message so IDE sends context |
| LLM_KEY_DECRYPTION_FAILED | Re-save your LLM API key in Settings > Key Vault |
| LLM_RATE_LIMITED | Wait a moment and retry |
| No agents listed | Run `supabase db reset` to reseed |
Further Reading
- Full MCP Guide — comprehensive version with JSON-RPC examples and context architecture
- API Reference
- Execution Guide