Your AI drifts. This doesn't.
Four structural layers sit between your AI and your codebase.
Nothing ships without passing through all four.

Intent
What you meant to build. Locked in.
You define what you're building. MUSU parses that into scope, constraints, and success criteria. The result is binding, not a suggestion.
Every file change is checked against declared scope. Out-of-scope changes are blocked.
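A minimal sketch of what a declared-scope gate can look like. The function name, path prefixes, and matching rule here are illustrative assumptions, not MUSU's actual API:

```rust
// Hypothetical scope gate: a file change passes only if its path falls
// under a declared scope prefix. Everything else is blocked.
fn in_scope(declared_scope: &[&str], changed_file: &str) -> bool {
    declared_scope
        .iter()
        .any(|prefix| changed_file.starts_with(*prefix))
}

fn main() {
    let scope = ["src/auth/", "tests/auth/"];
    assert!(in_scope(&scope, "src/auth/login.rs")); // declared: allowed
    assert!(!in_scope(&scope, "src/billing/invoice.rs")); // out of scope: blocked
    println!("scope gate ok");
}
```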
Lifecycle
Work happens in stages. Not chaos.
Each stage requires the previous stage’s artifact. No spec → no plan. No plan → no execution. No review → no finalization.
File-based. Declarative. No way to skip. Projects persist across runs.
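The gating rule above can be sketched as a small state check. Stage names and the artifact check are stand-ins, assuming each stage emits one artifact the next stage requires:

```rust
// Hypothetical artifact-gated lifecycle: a stage may start only if the
// previous stage's artifact exists.
#[derive(Clone, Copy, PartialEq)]
enum Stage { Spec, Plan, Execute, Review, Finalize }

fn may_enter(stage: Stage, artifacts: &[Stage]) -> bool {
    match stage {
        Stage::Spec => true, // first stage needs no prior artifact
        Stage::Plan => artifacts.contains(&Stage::Spec),
        Stage::Execute => artifacts.contains(&Stage::Plan),
        Stage::Review => artifacts.contains(&Stage::Execute),
        Stage::Finalize => artifacts.contains(&Stage::Review),
    }
}

fn main() {
    let artifacts = [Stage::Spec]; // only the spec exists so far
    assert!(may_enter(Stage::Plan, &artifacts)); // spec → plan: allowed
    assert!(!may_enter(Stage::Execute, &artifacts)); // no plan → no execution
}
```

Because the check is declarative over existing artifacts, there is no code path that jumps a stage.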
Validation
Mistakes don’t ship.
Every AI-generated change enters a pipeline: Draft → Diff → Scope Check → Policy Check → Verdict.
Three outcomes: GO — accepted. FIX — auto-retry. BLOCK — halt.
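The pipeline and its three verdicts can be sketched like this. The policy predicate and retry handling are illustrative assumptions, not MUSU's real checks:

```rust
// Hypothetical verdict pipeline: scope check, then policy check,
// mapping to GO / FIX / BLOCK.
#[derive(Debug, PartialEq)]
enum Verdict { Go, Fix, Block }

struct Change<'a> { path: &'a str, diff: &'a str }

fn validate(change: &Change, scope: &[&str], retries_left: u32) -> Verdict {
    // Scope check: an out-of-scope path is a hard stop.
    if !scope.iter().any(|p| change.path.starts_with(*p)) {
        return Verdict::Block;
    }
    // Policy check (stand-in rule): a failing policy triggers an
    // auto-retry while retries remain, then halts.
    let policy_ok = !change.diff.contains("unsafe ");
    match (policy_ok, retries_left) {
        (true, _) => Verdict::Go,
        (false, 0) => Verdict::Block,
        (false, _) => Verdict::Fix,
    }
}

fn main() {
    let scope = ["src/"];
    let good = Change { path: "src/lib.rs", diff: "fn add() {}" };
    let out = Change { path: "vendor/x.rs", diff: "" };
    assert_eq!(validate(&good, &scope, 3), Verdict::Go);
    assert_eq!(validate(&out, &scope, 3), Verdict::Block);
}
```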
Persistent State
Memory that outlives chat.
Chat memory is volatile. Project state is not. Content is decomposed into semantic blocks — not arbitrary token chunks.
Reopen a project in six months. The full history reconstructs automatically.
Enforcement that doesn't consume tokens.
Most AI work isn't thinking. It's checking, validating, and enforcing structure.
MUSU runs that locally.

Virtual Shell
Where the AI works
MUSU gives the AI a sandboxed local shell. Every command goes through a whitelist. The AI gets real tool access. Your system stays locked.
Simulation Room
Where results are tested
Before anything touches your actual files, MUSU runs it in an isolated copy. Only the version that passes the exit gate gets committed.
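The simulate-then-commit flow reduces to a small pattern. The function names and the gate predicate below are hypothetical stand-ins for real sandboxing:

```rust
// Hypothetical flow: run a change inside an isolated copy, and commit
// only the result that passes the exit gate. A failed gate commits nothing.
fn simulate_then_commit(
    working_copy: &str,
    run_in_copy: impl Fn(&str) -> String,
    exit_gate: impl Fn(&str) -> bool,
) -> Option<String> {
    let result = run_in_copy(working_copy); // isolated copy, not real files
    if exit_gate(&result) { Some(result) } else { None }
}

fn main() {
    let passing = simulate_then_commit("v1", |s| format!("{s}+patch"), |r| r.contains("patch"));
    assert_eq!(passing, Some("v1+patch".to_string()));
    let failing = simulate_then_commit("v1", |s| s.to_string(), |r| r.contains("patch"));
    assert_eq!(failing, None); // gate failed: nothing touches your files
}
```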
Context Chain
Semantic Chunking
Most tools chop code into 500-character blocks, cutting blindly through functions. MUSU splits at meaning boundaries. Parent-child chain links preserve context.
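A toy sketch of boundary-aware chunking, assuming a stand-in boundary rule (top-level `fn` starts) in place of real parsing, with a parent index as the chain link:

```rust
// Hypothetical chunker: split at function boundaries instead of every
// N characters, and link each chunk back to its parent file.
struct Chunk { parent: usize, text: String }

fn chunk_at_boundaries(file_id: usize, source: &str) -> Vec<Chunk> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for line in source.lines() {
        // A new top-level `fn` starts a new chunk; a fixed-size
        // splitter would cut mid-function instead.
        if line.starts_with("fn ") && !current.is_empty() {
            chunks.push(Chunk { parent: file_id, text: std::mem::take(&mut current) });
        }
        current.push_str(line);
        current.push('\n');
    }
    if !current.is_empty() {
        chunks.push(Chunk { parent: file_id, text: current });
    }
    chunks
}

fn main() {
    let src = "fn a() {\n    1;\n}\nfn b() {\n    2;\n}\n";
    let chunks = chunk_at_boundaries(7, src);
    assert_eq!(chunks.len(), 2); // one chunk per function, no blind cuts
    assert!(chunks[0].text.starts_with("fn a"));
    assert_eq!(chunks[1].parent, 7); // chain link back to the parent file
}
```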
Built-in CPU Model
BitNet 1.58-bit
No GPU required. 2B-parameter model on regular CPU. ~1.2GB memory. Handles auditing, log parsing, sandbox decisions. It’s the warden, not the architect.
Bring Your Own Model
On-premise GPU
Running Llama, Qwen, or something else on your GPU? Point MUSU to the endpoint. Your model becomes the Brain. Zero cloud dependency.
Five layers. One responsibility each.
Together, they turn AI output into stable software.

Prime
Direction
Coordinates everything. Captures intent. Decomposes goals. Does not generate code.
Engine
Execution
Runs locally on CPU. Validates, retries, simulates. 95% of routine work never touches cloud AI.
Control
Judgment
GO, FIX, or BLOCK — every action gets a verdict. Full audit trail.
Memory
Continuity
Semantic blocks. Content-addressable. State survives model updates, restarts, and long pauses.
Mesh
Distribution
Peer-to-peer. Encrypted. No central relay. Your machines become one execution surface.
If it can't prove it's allowed,
it doesn't run.
Deny-by-default
Missing permissions, missing keys, unsigned requests — blocked. No fallback.
sh -c eliminated
Shell execution path removed entirely. Commands go through a whitelist.
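A minimal sketch of a deny-by-default command gate. The whitelist entries and the matching rule are illustrative assumptions:

```rust
// Hypothetical whitelist gate: only explicitly listed executables run;
// unknown input, empty input, and shells all fail closed.
fn allowed(cmd: &str, whitelist: &[&str]) -> bool {
    // Match on the executable name; a real system also validates
    // arguments. Anything not on the list is refused.
    let exe = cmd.split_whitespace().next().unwrap_or("");
    whitelist.contains(&exe)
}

fn main() {
    let whitelist = ["cargo", "git", "ls"];
    assert!(allowed("git status", &whitelist));
    assert!(!allowed("sh -c 'rm -rf /'", &whitelist)); // no shell escape path
    assert!(!allowed("", &whitelist)); // empty input fails closed
}
```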
Mobile Warden
AI requests elevated access — your phone buzzes. Tap approve or reject.
HMAC-SHA256 Signed
Blocks packet forgery and replay attacks
Fail-Closed Policy
No permission, no execution. Period.
DLP Built-in
Auto-blocks secret and credential leaks
849 Rust core tests · 5,400+ TypeScript tests · 40,000+ lines of Rust · 6-layer security pipeline
Questions engineers ask.
If you're evaluating this, here's what matters:
- How is intent enforced at runtime?
- What prevents stages from being skipped?
- What constitutes a deterministic verdict?
- How is project state reconstructed?
- What happens when the model misbehaves?
MUSU answers structurally, not rhetorically.
Rust core · MCP interface · QUIC + TLS 1.3 · Block Store + pgvector · BitNet 1.58-bit · CRDT
Not a model.
Not a prompt tool.
Not an agent framework.
Not a cloud service.
It's the structure beneath your AI.