Open Source · GPLv3 / Apache 2.0

The open-source engine
that runs your skills.

The MMC MCP Server reads your SKILL.md files from GitHub, executes the business logic, and connects to your tools — all without sending your data to any MMC database.

Free & open source · Fork it, audit it, self-host it

Three layers. One standard.

The MMC architecture separates authoring (proprietary Workbench), logic storage (open SKILL.md standard), and execution (open-source MCP Server) into three independent, auditable layers.

MMC Architecture Diagram
Proprietary / SaaS (Workbench)
Standard (SKILL.md)
Open Source (MCP Server)
AI Model

Every layer is
independently auditable.

Layer 1 · Proprietary SaaS
MMC Workbench
The visual modeling environment where business teams design context models and generate SKILL.md files. Governed, versioned, human-in-the-loop.
Outcome-driven context modeling canvas
SOP / process document import
SKILL.md generation & GitHub publish
Human-in-the-loop governance controls
Layer 2 · Open Standard
SKILL.md Format
A plain-text, open specification for encoding business context, decision rules, and event flows. Stored in your own GitHub — readable by any AI agent that implements MCP.
YAML frontmatter + markdown structure
Multi-scenario decision rule encoding
Event bus publishing specification
Version-controlled in your own repo
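To make the shape concrete, a hypothetical minimal SKILL.md might look like the sketch below. The field names and scenario wording are illustrative only — they echo the examples elsewhere on this page, not the official specification.

```markdown
---
skill: slice-1-receive-request
role: claims-processor
version: 1.0.0
publishes_to: event-bus
---

## Scenarios

- Scenario A: inputs incomplete or malformed → return a validation error
- Scenario B: inputs valid → log `request-received` to the event bus
```

Because it is plain text in your own repo, every change to business logic shows up as an ordinary diff in a pull request.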
Layer 3 · Open Source
MMC MCP Server
The execution engine. Reads SKILL.md from GitHub at runtime, parses business rules, runs the event-slicing logic, and calls your tool connectors. GPLv3 / Apache 2.0.
SKILL.md parser & executor
Open-source event-slicing logic
Secure tool connectors (Xero, Slack, Jira…)
Self-hostable on your own infrastructure

What happens when
your AI calls a skill.

1
AI Agent
Requests skill execution via MCP
Claude, Gemini, or any MCP-compatible agent sends a tool call to the MMC MCP Server — either triggered by user input or by the event bus dispatching to the next skill in the flow.
# AI sends tool call
tool: slice-1-receive-request
role: claims-processor
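On the wire, an MCP tool call is a JSON-RPC 2.0 request using the standard `tools/call` method. The call above might look roughly like this (the argument names are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "slice-1-receive-request",
    "arguments": { "role": "claims-processor" }
  }
}
```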
2
MCP Server
Reads SKILL.md directly from GitHub
The server fetches the relevant SKILL.md from your repository — always the latest committed version. No caching of your business logic on MMC infrastructure. No database. Just your GitHub.
# Reads from your repo
source: your-org/mmc-skills
file: slice-1-receive-request.SKILL.md
3
Parser & Executor
Presents interface, evaluates all scenarios
The parser renders the skill's user interface, collects the required facts, then evaluates every decision scenario independently. Multiple scenarios can be true simultaneously — all valid outcomes are logged.
# Multi-scenario evaluation
Scenario A: invalid inputs → error
Scenario B: valid → log event
4
Event Bus
Publishes outcome, dispatches next skill
Each valid outcome is logged to the event bus via log-event-to-bus. The bus then dispatches to the next skill in the flow — creating a fully traceable, step-by-step audit trail of every AI decision.
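The evaluate-everything behaviour of steps 3 and 4 can be sketched in a few lines of Python. This is an illustration of the idea, not the MMC implementation — the skill name and rules are borrowed from the examples above, and the "bus" is just a list:

```python
# Hypothetical sketch: every scenario is evaluated independently,
# and each true outcome is logged to the event bus.
event_bus = []

def log_event_to_bus(skill, scenario, outcome):
    """Append a traceable record; a real bus would also dispatch the next skill."""
    event_bus.append({"skill": skill, "scenario": scenario, "outcome": outcome})

def run_skill(skill, facts, scenarios):
    """Check every scenario against the collected facts; several can be true at once."""
    for name, (condition, outcome) in scenarios.items():
        if condition(facts):
            log_event_to_bus(skill, name, outcome)
    return event_bus

scenarios = {
    "A": (lambda f: not f.get("claim_id"), "error: invalid inputs"),
    "B": (lambda f: bool(f.get("claim_id")), "request-received"),
}

run_skill("slice-1-receive-request", {"claim_id": "C-1042"}, scenarios)
# Only Scenario B holds for these facts, so exactly one event is logged.
```

The point of the pattern is the audit trail: because outcomes are appended rather than short-circuited, the bus records every decision the AI took, in order.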

Your tools. Your infrastructure.

The MCP Server connects to your existing business systems via secure connectors. All reads and writes happen from infrastructure you control — never routed through MMC servers.

📊
Xero
Finance
Log transactions, create audit entries, and write financial outcomes directly to your Xero instance.
💬
Slack
Communications
Send notifications, alerts and approval requests to the right channels and people at the right moment.
🐙
GitHub API
Repo / Users
Read SKILL.md files, manage versions, and integrate with your existing developer workflows and CI/CD.
🎯
Jira
Tickets
Raise, update and resolve tickets automatically as business events are logged through the skill flow.
🤖
Claude
AI Agent
Anthropic's Claude connects natively via MCP. Skills run within the Claude system prompt context.
Gemini
AI Agent
Google Gemini connects via MCP. The same SKILL.md runs on any compatible model without modification.
🔌
Custom MCP
Your Systems
Build your own connectors using the open MCP standard. Any system your team already uses can integrate.
More coming
Roadmap
Salesforce, HubSpot, ServiceNow and more. Or contribute your own via the open-source repository.

Run it on
your infrastructure.

The MCP Server is a standalone process you can run anywhere — your cloud, your VPC, your laptop. No dependency on MMC staying in business. No vendor lock-in. Just an open binary and your GitHub repo.

Your procurement team already approved GitHub. There is no MMC database to review. No sensitive data touches our infrastructure.

1
Fork or install the MCP Server
Clone the open-source repo or install via npm. Inspect every line — it's all public.
2
Point it at your GitHub repo
Set your repo URL and a read-only token. The server fetches SKILL.md files from it at runtime.
3
Connect your AI agent
Add the MCP server URL to Claude or Gemini. Your skills are live — fully governed, fully auditable.
bash
# Install the MMC MCP Server
$ npm install -g @mmc/mcp-server
 
# Configure your GitHub repo
$ mmc config set repo your-org/mmc-skills
$ mmc config set token ghp_xxxxxxxxxxxx
 
# Start the server
$ mmc serve --port 3000
 
Reading skills from your-org/mmc-skills…
Found 7 SKILL.md files
✓ MCP Server running on port 3000
✓ Ready for AI agent connections
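To complete step 3, your MCP client needs an entry pointing at the server. For Claude Desktop, an entry in the common `mcpServers` shape might look like this — the `mmc` command follows the install example above and is an assumption, and the exact keys depend on your client:

```json
{
  "mcpServers": {
    "mmc": {
      "command": "mmc",
      "args": ["serve"]
    }
  }
}
```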
Open Source · GPLv3 / Apache 2.0 · No vendor lock-in

Fork it. Audit it.
Own it completely.

The MCP Engine is free, open, and always will be. Start with the Workbench to generate your first SKILL.md, then connect the engine to any AI agent.

Works with Claude, Gemini & any MCP-compatible agent · Your GitHub, your data