Introduction
Archon provides a visual and programmatic environment for constructing repeatable AI coding harnesses. Instead of relying on ad-hoc prompts, you define structured workflows in YAML that specify tasks, constraints, and validation steps so every run produces consistent results.
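A harness definition along those lines might look like the following sketch. The field names here are hypothetical, chosen only to illustrate the idea of tasks, constraints, and validation steps in one YAML document; the source does not show Archon's actual schema.

```yaml
# Illustrative only — key names are hypothetical, not Archon's actual schema
name: add-unit-tests
steps:
  - id: generate
    prompt: "Write unit tests for src/utils.ts"
    constraints:
      - "Use the project's existing test framework"
  - id: validate
    run: "bun test"        # validation step: run the suite
    assert:
      exit_code: 0         # acceptance criterion checked before finishing
```

Because the whole workflow lives in one file, re-running it yields the same structured sequence of prompts and checks rather than a fresh ad-hoc conversation.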
What Archon Does
- Provides a web-based editor for designing multi-step AI coding workflows
- Exports harness definitions as portable YAML files that can be versioned in Git
- Supports pluggable LLM backends including Claude, GPT, and local models
- Validates agent output against user-defined acceptance criteria automatically
- Enables sharing and remixing of community-contributed harness templates
Architecture Overview
Archon is a TypeScript application running on Bun with a React frontend. The backend orchestrates workflow execution by dispatching steps to configured LLM providers and collecting results. Harness definitions are stored as YAML documents and can reference external tools, shell commands, or API calls at each step. A built-in evaluation engine checks outputs against assertions before proceeding.
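The orchestration loop described above can be sketched in a few lines of TypeScript. This is a conceptual illustration, not Archon's actual API: the `Step` shape, `runHarness`, and the assertion-checking logic are all assumptions made for the example.

```typescript
// Conceptual sketch of the backend loop: execute steps in order and
// check each output against its assertions before proceeding.
// Names here (Step, runHarness) are hypothetical, not Archon's API.

type Step = {
  name: string;
  run: () => string;                            // produces the step's output
  assertions: ((output: string) => boolean)[];  // acceptance criteria
};

function runHarness(steps: Step[]): { passed: boolean; failedStep?: string } {
  for (const step of steps) {
    const output = step.run();
    // Evaluation engine: every assertion must hold before the next step runs
    if (!step.assertions.every((check) => check(output))) {
      return { passed: false, failedStep: step.name };
    }
  }
  return { passed: true };
}

const result = runHarness([
  {
    name: "generate",
    run: () => "export const add = (a: number, b: number) => a + b;",
    assertions: [(o) => o.includes("export")],
  },
  {
    name: "lint",
    run: () => "0 problems",
    assertions: [(o) => o.startsWith("0")],
  },
]);
```

The key design point is that validation is interleaved with execution: a failing assertion halts the workflow at the offending step instead of letting errors propagate to later steps.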
Self-Hosting & Configuration
- Requires the Bun runtime; install with `curl -fsSL https://bun.sh/install | bash`
- Configure LLM provider keys via environment variables (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`)
- Persistent storage uses SQLite by default; configurable via `archon.config.yaml`
- Deploy behind a reverse proxy for team access with standard HTTPS
- Docker Compose file included for one-command self-hosted setup
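A self-hosted Compose setup could look roughly like this. The service name, image tag, port, and data path are placeholders for illustration; the source does not show the contents of the bundled Compose file.

```yaml
# Hypothetical compose file — image name, port, and paths are placeholders
services:
  archon:
    image: archon:latest
    ports:
      - "3000:3000"
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - archon-data:/data   # persists the default SQLite database
volumes:
  archon-data:
```

Putting the SQLite file on a named volume keeps harness history across container restarts, and the reverse proxy mentioned above would terminate HTTPS in front of the exposed port.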
Key Features
- Deterministic AI coding through structured harness definitions
- Visual workflow editor with live preview
- Built-in output validation and assertion engine
- Community template library for common tasks
- Lightweight stack with minimal resource requirements
Comparison with Similar Tools
- Claude Code CLAUDE.md — static instruction files; Archon adds multi-step orchestration and validation
- Cursor Rules — editor-specific; Archon is editor-agnostic and version-controlled
- Dagger / Earthly — general CI engines; Archon is purpose-built for LLM workflows
- LangChain — programmatic chaining; Archon offers a no-code visual builder
FAQ
Q: Do I need a specific LLM provider? A: No. Archon supports multiple backends and can route to local models via Ollama.
Q: Can harnesses call external tools? A: Yes. Each step can invoke shell commands, HTTP APIs, or MCP servers.
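To make the tool-calling answer concrete, external calls might be declared per step along these lines. The `http` and `mcp` keys and their fields are invented for this sketch, not taken from Archon's schema.

```yaml
# Hypothetical step shapes — key names are illustrative
steps:
  - id: fetch-spec
    http:
      method: GET
      url: https://example.com/openapi.json
  - id: lookup-docs
    mcp:
      server: docs-server
      tool: search
      arguments: { query: "pagination" }
```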
Q: Is Archon suitable for production pipelines? A: It is designed for development and iteration. For production, export the finalized workflow to your CI system.
Q: How is it licensed? A: Archon is released under the MIT license.