How it works

Three steps from
rule to execution.

Model My Context turns your existing process documents into governed, auditable AI skills — without coding, without a vendor database, and without losing ownership of your logic.

Customer Onboarding Skill
● Active
Mission & context
mission: "Customer is ready to transact"
owner: "success-team"
Modeled interactions
- identity_verified: KYC complete
- account_configured: preferences set
- first_value_delivered: outcome met
SKILL.md generated · AI aligned to business outcomes · Committed to GitHub
1
Step One · Model

Model your context
in the Workbench.

A visual canvas for outcome-driven context modeling — no coding required

The Workbench is a visual context modeling tool. Instead of writing rules or mapping workflows, you define what your business actually needs to achieve — the mission, the interactions that must happen, and the measurable outcomes that prove success.

MMC focuses on what needs to be true, never how to implement it. This keeps your models technology-agnostic, aligned across business and technical stakeholders, and free from implementation assumptions that lock you in.

Model what, not how
Define missions, interactions and measurable outcomes — not implementation steps. Your model stays technology-agnostic and free of implementation bias.
Align all stakeholders
Business owners, operations and technical teams work from the same visual canvas. No translation layer, no assumptions, no guesswork.
Validate outcomes before committing
Review and confirm your context model in the Workbench before generating a SKILL.md. What you see is what your AI will understand.
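The shape of an outcome-driven context model can be pictured as a small data structure. A hypothetical sketch in Python: the class and field names are illustrative, not the Workbench's internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str       # e.g. "first_value_delivered"
    criterion: str  # the measurable condition that proves success

@dataclass
class ContextModel:
    mission: str  # what must be true, not how to make it true
    owner: str    # the accountable team
    interactions: dict[str, str] = field(default_factory=dict)
    outcomes: list[Outcome] = field(default_factory=list)

# The Customer Onboarding example above, expressed in this shape:
onboarding = ContextModel(
    mission="Customer is ready to transact",
    owner="success-team",
    interactions={
        "identity_verified": "KYC complete",
        "account_configured": "preferences set",
        "first_value_delivered": "outcome met",
    },
)
```

Note what is absent: no API endpoints, no workflow engine steps, no tool names. That is the "what, not how" discipline in miniature.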
MMC Workbench — Manual Physiotherapy Eligibility
Slice detail view
Context Node
Manual Therapy
Eligibility rules for direct access
Activity Node
Patient Triage
Collect clinical symptoms
Outcome Node
Referral Generated
Member is eligible for 6 sessions
Context
Activity
Outcome
your-org / mmc-skills
GitHub · main
a3f9c12
Update Customer Onboarding — add go-live outcome metric
2 min ago
8e21b44
Add Vendor Assessment context model — initial version
1 day ago
c17d3a8
Customer Onboarding — refine stakeholder interactions
3 days ago
f04a991
Init — Customer Onboarding skill v1.0
1 week ago
skills/customer-onboarding.SKILL.md
1  ---
2  name: slice-1-receive-request
3  compatibility: requires mmc-mcp tools: log-event-to-bus
4  use this skill when: user submits a new request
5  publishes_event: request-received
6  ---
2
Step Two · Commit

Your context model
lives in your GitHub.

Your repository, your version history — MMC never stores your data

Once you're satisfied with the context model, MMC generates a SKILL.md file — a structured, human-readable definition of your business context — and commits it directly to your own GitHub repository.

Every change is versioned, attributable and reversible. Your security team already trusts GitHub. There's no new system to approve, no vendor database to clear through procurement. Your GitHub is the single source of truth.

Full version history, forever
Every change to your business rules is tracked with author, timestamp and message. Roll back any update in seconds.
Human-readable SKILL.md format
The output file is plain YAML-structured markdown. Your team can read, review and approve it via a standard pull request.
Model-agnostic and portable
Skills aren't tied to Claude or any AI. Switch from Claude to Gemini — your SKILL.md travels with you, unchanged.
3
Step Three · Execute

Your AI understands
the context — every time.

Deterministic execution via open-source MCP — and you can prove it

Your AI agent — Claude, Gemini, or any MCP-compatible model — connects to the open-source MMC MCP Server. At runtime, it reads your SKILL.md files directly from your GitHub repository and executes within the context you've defined.

No guessing. No prompt drift. The business context you modeled in the Workbench governs every AI interaction — consistently, traceably, and independently of which AI model you're using. Every execution is logged and auditable.
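Connecting an agent follows the standard MCP client pattern: register the server in the client's configuration file. A hypothetical sketch for a Claude Desktop-style `mcpServers` block — the package name `mmc-mcp-server` and the `MMC_SKILLS_REPO` variable are placeholders, not confirmed names; check the MCP Server's README for the actual install command:

```json
{
  "mcpServers": {
    "mmc": {
      "command": "npx",
      "args": ["-y", "mmc-mcp-server"],
      "env": { "MMC_SKILLS_REPO": "your-org/mmc-skills" }
    }
  }
}
```

Once registered, the agent discovers the server's tools over MCP and the server resolves SKILL.md files from the configured GitHub repository at runtime.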

Consistent — not probabilistic
The MCP server executes from your context model, not from LLM interpretation. Results are grounded in what you modeled, every run.
Full audit trail, compliance-ready
Every skill execution is logged with a timestamp, the exact version of the context model that ran, and the outcome. Built for auditors.
Fork it, audit it, self-host it
The MCP Server is public and always free. Run it on your own infrastructure. No dependency on MMC staying in business.
MMC MCP Server — Execution Log
● Live
AI Agent
Claude / Gemini
Sends tool call to MCP server
MCP Server
open-source engine
Reads SKILL.md from GitHub
Context source
your-org / mmc-skills
customer-onboarding.SKILL.md · v1.2.0
💬 Slack · ✓ Sent
🎯 Jira · ✓ Raised
📁 GitHub · ✓ Audit
🔗 CRM · ✓ Updated
skill executed · context v1.2.0 · 4 actions · all succeeded

Human-readable context,
machine-executable alignment.

A SKILL.md file is the output of the Workbench and the input to the MCP Server. It's plain markdown — reviewable in a pull request, auditable in GitHub, and portable across any MCP-compatible AI agent. The AI reads it at runtime and follows it exactly.
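Because the metadata sits between two `---` fences at the top of the file, any tool can read it with a few lines of standard-library Python. A minimal sketch — not the MMC tooling, and deliberately simpler than a full YAML parser:

```python
def read_frontmatter(text: str) -> dict[str, str]:
    """Extract key: value pairs from the leading --- block of a SKILL.md."""
    lines = text.strip().splitlines()
    if not lines or lines[0] != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line == "---":  # closing fence ends the frontmatter
            break
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

skill = """---
name: slice-1-receive-request
publishes_event: request-received
---
## Step 1 — Collect request details
"""
print(read_frontmatter(skill)["name"])  # slice-1-receive-request
```

The same file is readable three ways: by a human in a pull request, by the MCP Server at execution time, and by any script like the one above.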

slice-1-receive-request.SKILL.md
generated
 1  ---
 2  name: slice-1-receive-request
 3  compatibility: requires mmc-mcp tools: log-event-to-bus
 4  use this skill when: user submits a new request
 5  publishes_event: request-received
 6  ---
 7
 8  ## Step 1 — Collect request details
 9  Present interface to collect: request-id, member-id, date
10
11  ## Step 2 — Apply Decision Rules
12  # Evaluate every scenario independently
13
14  Scenario A
15    - When: request-id = Null OR member-id = Null
16    - Error: Invalid query, clarify inputs
17
18  Scenario B
19    - When: request-id != Null AND member-id != Null
20    - Log event: request-received
YAML frontmatter
Lines 1–6 are machine-readable metadata: which MCP tools are required, what triggers this skill, and what event it publishes when complete.
Interaction step
Lines 8–9 define what the AI must present to the user — a structured interface to collect the required facts before any rules are applied.
Multi-scenario decision rules
Lines 11–20 define every scenario independently. More than one can be true simultaneously — the AI evaluates all of them, not just the first match.
Event bus logging
Each valid scenario outcome is logged to the event bus via log-event-to-bus. This is the audit trail — and the trigger for the next skill in the flow.
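The "evaluate every scenario independently" rule is what separates this from a first-match rule engine. A minimal sketch of the idea in Python — the predicates mirror the two scenarios above, but this is an illustration, not MMC's executor:

```python
# Each scenario is a (name, predicate, action) triple. Unlike a
# first-match engine, every predicate is tested and every true one fires.
scenarios = [
    ("Scenario A",
     lambda req: req.get("request_id") is None or req.get("member_id") is None,
     "Error: Invalid query, clarify inputs"),
    ("Scenario B",
     lambda req: req.get("request_id") is not None and req.get("member_id") is not None,
     "Log event: request-received"),
]

def evaluate(request: dict) -> list[str]:
    """Return the actions of ALL scenarios whose condition holds."""
    return [action for name, pred, action in scenarios if pred(request)]

print(evaluate({"request_id": "R-17", "member_id": "M-3"}))
# ['Log event: request-received']
```

Because every matching scenario fires, adding a new rule never silently shadows an existing one — each outcome is logged on its own.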

Questions about the process.

Still unsure how any of this works in practice? Here are the questions we get most.

Do I need to be technical to use the Workbench?
No. If you can describe what your business needs to achieve, you can use MMC.
The Workbench is a visual context modeling tool. You're defining missions, interactions and measurable outcomes using a guided canvas — not writing code or filling in configuration forms. The SKILL.md file is generated automatically and committed to your GitHub without any command-line knowledge needed.
What if I want to edit the SKILL.md file manually?
Go ahead — it's just a text file in your own repository.
SKILL.md is plain markdown with a YAML structure. Technical users can edit it directly in their code editor or via a pull request, exactly like any other config file. The Workbench will reflect those changes the next time you open the context model.
Which AI agents work with the MCP Server?
Claude and Gemini natively, plus any MCP-compatible agent.
The MMC MCP Server implements the Model Context Protocol standard. Any agent that supports MCP tool calls can connect to it. Skills are model-agnostic by design — the same SKILL.md that runs on Claude today will run on any future MCP-compatible agent without modification.
What does "open source" mean here?
The MCP Server is public, auditable and always free. The Workbench is the paid product.
You can fork the MCP Server, self-host it, and inspect every line of code. It will always be free and open — you're never dependent on MMC staying in business for your AI agents to keep working. The Workbench is the premium authoring and management IDE for teams who want the full experience.
How does this pass enterprise procurement?
There's no MMC database to review. Your rules live in GitHub — which procurement already approves.
The most common procurement blocker for AI tools is the vendor database: where is our data stored, who can see it, what happens if the vendor is breached? MMC eliminates this question entirely. Your logic lives in your own GitHub repository. The open-source MCP Server runs in your own infrastructure. There is no MMC database — and therefore nothing for procurement to review.
Free to start · Open-source engine · No database required

Ready to model
your first context?

Launch the Workbench, build your first outcome-driven context model, and have a governed AI skill committed to your GitHub within the hour.

Works with Claude, Gemini & any MCP-compatible agent · Your GitHub, your data