13 pre-built vertical templates. A clone-and-deploy pipeline. Five live paying customers proving it runs. Customer #50 ships as fast as customer #6.
Context files load bottom-up. Project specificity wins. When ambiguous, the more restrictive constraint applies. The pattern most enterprise AI programs build by year three — designed in on day one.
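The bottom-up merge can be sketched in a few lines. This is a minimal illustration, assuming a numeric restrictiveness score and a three-level scope ladder; the `ContextLayer` shape and field names are placeholders, not HoneyBun's actual schema:

```typescript
// Illustrative context resolution: more specific layers win; ties go to the
// more restrictive rule. The ContextLayer shape is an assumption.
type ContextLayer = {
  scope: "global" | "org" | "project"; // project is most specific
  rules: Record<string, { value: string; restrictiveness: number }>;
};

function resolveContext(layers: ContextLayer[]): Record<string, string> {
  const order = { global: 0, org: 1, project: 2 };
  const winners: Record<string, { value: string; spec: number; restr: number }> = {};
  for (const layer of layers) {
    const spec = order[layer.scope];
    for (const [key, rule] of Object.entries(layer.rules)) {
      const cur = winners[key];
      const moreSpecific = !cur || spec > cur.spec;        // project specificity wins
      const tieButStricter =
        cur !== undefined && spec === cur.spec && rule.restrictiveness > cur.restr; // ambiguity → stricter
      if (moreSpecific || tieButStricter) {
        winners[key] = { value: rule.value, spec, restr: rule.restrictiveness };
      }
    }
  }
  return Object.fromEntries(Object.entries(winners).map(([k, w]) => [k, w.value]));
}
```

The point of the two-part comparison: specificity is checked first, and restrictiveness only breaks ties, so a permissive project rule still overrides a strict global one.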
HoneyBun's production runtime is 59 Cloudflare Workers (4 core + 55 specialist) — distinct from the Claude Code sub-agent tooling used to build it. Every page view travels through six layers: client → WordPress theme → core workers → specialist workers → storage → output surfaces. The mu-plugin bridges WordPress and Cloudflare Workers invisibly. The diagram below is the actual production topology.
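The core-to-specialist hop can be sketched as path-prefix dispatch. This is an assumption about the routing mechanism; the prefixes and worker names below are placeholders, not the real 59-worker topology:

```typescript
// Hypothetical prefix table standing in for core→specialist routing.
type RouteResult = { worker: string; status: number };

const SPECIALISTS: Array<[prefix: string, worker: string]> = [
  ["/api/media", "media-worker"],   // placeholder names, not real workers
  ["/api/search", "search-worker"],
];

function routeRequest(pathname: string): RouteResult {
  for (const [prefix, worker] of SPECIALISTS) {
    if (pathname.startsWith(prefix)) return { worker, status: 200 };
  }
  return { worker: "core-fallback", status: 404 }; // core worker handles the rest
}
```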
This is the personal AI operating system I built to direct software work — separate from HoneyBun's production runtime. 39 specialist Claude Code sub-agents, 6 binary quality gates (Plan · Code · Security · Test · Build · Business), and 6 executive-persona reviewers route every task through a custom 270-line dispatch protocol. No gate is skippable. A RED verdict from a relevant persona blocks execution. The platform that runs on AI was itself built by AI orchestration.
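The gate-plus-persona rule reduces to a single predicate. The six gate names and the RED-blocks rule come from the text; the `Review` shape and function names are assumptions for illustration:

```typescript
// Sketch of "no gate is skippable; a relevant RED verdict blocks execution".
type Verdict = "GREEN" | "RED";
type Review = { persona: string; relevant: boolean; verdict: Verdict };

const GATES = ["Plan", "Code", "Security", "Test", "Build", "Business"] as const;

function mayExecute(
  gateResults: Record<(typeof GATES)[number], boolean>,
  personaReviews: Review[],
): boolean {
  const allGatesPass = GATES.every((g) => gateResults[g]);          // every gate is binary and mandatory
  const redBlock = personaReviews.some((r) => r.relevant && r.verdict === "RED");
  return allGatesPass && !redBlock;
}
```

Because the gate record is keyed by the full gate tuple, the type system itself makes a gate unskippable: you cannot construct a result object that omits one.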
Templates ≠ tenants. The 13 golden apps are the factory; the 5 live customers are forks of the factory line. Each carries its own git SHA and deployment timestamp.
~41,000 lines of screen code sharing 21 cross-app modules. Customer-facing PWA and internal AI ops console deploy from one source. No App Store review cycles.
Every catch block routes through reportFailure(). Auto-remedy attempted first. Triple-channel escalation if it can't self-heal. Named human owner on every alert.
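A hedged sketch of that flow: remedy first, escalate on all channels if it fails. The channel names and callback signatures are placeholders, not the real `reportFailure()` implementation:

```typescript
// Illustrative failure funnel: auto-remedy, then triple-channel escalation
// to a named owner. Channel names are assumptions.
type Alert = { error: string; owner: string; channels: string[] };

function reportFailure(
  error: string,
  owner: string,                      // every alert carries a named human owner
  tryRemedy: () => boolean,           // auto-remedy attempted first
  notify: (alert: Alert) => void,
): "self-healed" | "escalated" {
  if (tryRemedy()) return "self-healed";
  notify({ error, owner, channels: ["email", "sms", "dashboard"] }); // assumed channels
  return "escalated";
}
```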
Data isolation, deploy integrity, bot protection, and transport security are first-class design constraints — not compliance checkboxes added after the fact.
can(), isPrivilegedAdmin(), and role constants gate every sensitive operation — not ad-hoc if-checks. Operator keys can't self-elevate.

Lessons learned persist to ~/.claude/lessons/honeybun/lessons.md. Every session checks completed_at + existing code state before claiming any task. Prevents parallel-session re-do.

Every task is classified by risk the moment it hits the board. Low-risk work runs, verifies, and commits without a single human touch. High-risk work builds in an isolated branch, runs an independent verification pass, and surfaces a one-click approve/reject card with the full diff attached. After merge, a health probe watches the live endpoint — two consecutive failures within five minutes trigger an auto-revert.
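The auto-revert trigger is simple to sketch: two consecutive failed probes within the five-minute window. The `Probe` shape and millisecond timestamps are assumptions; only the two-failures-in-five-minutes rule comes from the text:

```typescript
// Post-merge health probe: revert on two consecutive failures inside the window.
// Assumes probes arrive in time order with millisecond timestamps.
type Probe = { ok: boolean; at: number };

function shouldRevert(probes: Probe[], windowMs = 5 * 60_000): boolean {
  for (let i = 1; i < probes.length; i++) {
    const prev = probes[i - 1];
    const cur = probes[i];
    if (!prev.ok && !cur.ok && cur.at - prev.at <= windowMs) return true;
  }
  return false;
}
```

Requiring the failures to be consecutive, not merely two within the window, keeps a single flaky probe between healthy checks from rolling back a good deploy.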
The four-station mini-line for multi-file structural changes.
Each job runs on its own autopilot/<taskId> branch. Never touches the live source repo. A diff guard blocks sensitive paths: wrangler*.toml, .env*, migrations/, auth/, billing/, CI workflows, deploy scripts. Builds run in isolated /tmp/hb-build/<jobId> workspaces.

The hard part of enterprise AI was never the technology. It was always going to be getting people to want to move with you. Eleven years as a Marine Corps career recruiter taught me to operate that way. Three years building HoneyBun proved the operating model holds at machine scale, too.