Early Access Q3 2026 · GA Q4 2026
Kaizen

Every change remembered.
Every mistake reversible.
Every session continuous.

Terminal-first AI coding agent. Web UI coming soon.

You work at the Helm.

Single binary. SQLite-only. No cloud backend. Works with any LLM provider.

Effort levels: low · medium · high · ludicrous · auto

You Work at the Helm

Helm is Kaizen's coordination layer. It classifies every request, builds a plan, and delegates to specialist agents - but only after confirming scope for anything non-trivial. You stay in control.

  • Helm plans before acting - confirms scope for large changes
  • Delegates read tasks to Scout, write tasks to Cody, reviews to Sage, docs to Scribe
  • Helm never edits files directly - pure coordination
  • Confirmation gate scales by scope: trivial → proceed, large → plan first

See It in Action

Watch Kaizen orchestrate agents, capture checkpoints, and deliver results.

kaizen
| Message Kaizen... (@file /cmd Enter to send)
Helm | ■ projects/tensorfoundry/kaizen | main | ↓0 tokens | $0.00 / $2.00 | tf/forge-code-2.0

Your Agent Remembers Everything

Kaizen persists decisions, discoveries, and bugfixes in SQLite across every session. FTS5 full-text search with staleness detection keeps your context fresh - no re-learning, no cold starts.

  • FTS5 full-text search across all sessions
  • Staleness detection flags outdated memories
  • Four memory types: decision, bugfix, discovery, architecture
  • No vector database required
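The memory store described above can be sketched with nothing but Python's stdlib sqlite3 module. This is a hedged illustration, not Kaizen's actual schema - the table name, columns, and the 90-day staleness cutoff are all invented for the example:

```python
import sqlite3
import time

# Illustrative FTS5-backed memory store (schema names are assumptions).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE VIRTUAL TABLE memories USING fts5(
        kind, body, created_at UNINDEXED
    )
""")

def remember(kind, body):
    db.execute("INSERT INTO memories VALUES (?, ?, ?)",
               (kind, body, time.time()))

def search(query, max_age_days=90):
    # FTS5 MATCH does the full-text lookup; entries older than
    # max_age_days are flagged as stale rather than silently dropped.
    cutoff = time.time() - max_age_days * 86400
    rows = db.execute(
        "SELECT kind, body, created_at FROM memories WHERE memories MATCH ?",
        (query,)).fetchall()
    return [(kind, body, created < cutoff) for kind, body, created in rows]

remember("decision", "Use WAL mode for all SQLite connections")
remember("bugfix", "Off-by-one in ranking loop")
print(search("sqlite"))
```

Because FTS5 stores the memories as indexed text, the same query works across every past session's entries - no embeddings, no vector database.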

Every File Write, Checkpointed

Before Kaizen writes or edits any file, it snapshots the previous state into content-addressed storage (CAS). Made a mistake? kaizen undo shows a numbered list, diff preview, and y/N confirmation. No AI mistake is permanent.

  • Pre-write CAS snapshots on every file operation
  • Branching undo history, like git
  • Delta compression: bsdiff with chain depth cap
  • Named savepoints for manual checkpoints
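The checkpoint flow can be sketched as follows - an in-memory stand-in for the real SQLite-backed store, with the bsdiff delta compression omitted. All names here are illustrative, not Kaizen internals:

```python
import hashlib

# Minimal sketch of pre-write CAS snapshots: hash the old content,
# store it under that hash, then write; undo restores the last snapshot.
store = {}      # sha256 hex digest -> previous file content
history = []    # (path, digest) checkpoints, newest last
files = {"main.go": "package main\n"}  # stand-in for the filesystem

def write_file(path, new_content):
    # Snapshot the previous state BEFORE touching the file.
    old = files.get(path, "")
    digest = hashlib.sha256(old.encode()).hexdigest()
    store[digest] = old
    history.append((path, digest))
    files[path] = new_content

def undo():
    # Restore the most recent checkpoint.
    path, digest = history.pop()
    files[path] = store[digest]

write_file("main.go", "package main\n\nfunc main() {}\n")
undo()
print(files["main.go"])  # back to the pre-write content
```

Because identical content hashes to the same digest, repeated snapshots of an unchanged file cost nothing extra - the core property of content-addressed storage.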

Parallel Agents, Not Parallel Promises

Helm plans the work and spawns Scout, Cody, Sage, and Scribe as actual OS processes - not threads. Each gets its own TUI tab with independent scrollback. Real parallelism, with advisory SQLite file locking for safe coordination.

  • Helm orchestrator with create_plan / update_plan tools
  • Scout (search), Cody (write), Sage (review), Scribe (docs) specialists
  • Up to 8 agents, 3 concurrent by default (configurable)
  • Per-agent TUI tabs - click to reactivate, × to close
  • Effort levels: low · medium · high · ludicrous · auto - adjusts turn budgets and thoroughness
  • Sage returns structured PASS / FAIL / CONDITIONAL verdicts, triggering Cody retry loops
  • Plan-step checkpoints: every completed plan step creates a named savepoint automatically
  • Advisory SQLite file locking prevents two agents editing the same file simultaneously
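The advisory locking can be sketched as a claim table in SQLite - one row per locked path, with the primary-key constraint rejecting a second claim. In practice the agents would share an on-disk database rather than this in-memory one, and the schema is an assumption:

```python
import sqlite3

# Sketch of advisory file locking via a shared SQLite table:
# an agent claims a path before editing; a conflicting claim fails
# until the holder releases it. Schema and names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_locks (path TEXT PRIMARY KEY, agent TEXT)")

def try_lock(path, agent):
    try:
        db.execute("INSERT INTO file_locks VALUES (?, ?)", (path, agent))
        return True
    except sqlite3.IntegrityError:  # someone already holds the lock
        return False

def unlock(path, agent):
    db.execute("DELETE FROM file_locks WHERE path = ? AND agent = ?",
               (path, agent))

assert try_lock("main.go", "cody")        # Cody claims the file
assert not try_lock("main.go", "scout")   # Scout must wait
unlock("main.go", "cody")
assert try_lock("main.go", "scout")       # now Scout may edit
```

The lock is advisory: it only works because every agent checks the table before writing, which is exactly the discipline Helm enforces.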

Understands Your Code Structurally

Kaizen builds a tree-sitter symbol index across your entire codebase. The repo_map ranks files by cross-reference score so agents know what matters before reading a single line.

  • Tree-sitter symbol index across 18 languages
  • repo_map ranks files by cross-reference score - agents prioritise what matters
  • Symbol lookups cost ~20 tokens vs ~4,000 for a full file read
  • Incremental indexing: auto-reindex on every write, background watcher for external changes
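The cross-reference ranking reduces to counting incoming edges in the file reference graph. A minimal sketch with a hand-written graph - Kaizen derives the real one from its tree-sitter symbol index, and the file names here are invented:

```python
# Sketch of repo_map-style ranking: files referenced by many other
# files score higher, so agents read them first.
refs = {
    "auth.go": ["db.go", "util.go"],            # auth.go references these
    "api.go":  ["auth.go", "db.go", "util.go"],
    "db.go":   ["util.go"],
    "util.go": [],
}

def repo_map(refs):
    score = {path: 0 for path in refs}
    for src, targets in refs.items():
        for target in targets:
            score[target] += 1  # one point per incoming cross-reference
    return sorted(score, key=score.get, reverse=True)

print(repo_map(refs))  # util.go ranks first: every other file depends on it
```

Feeding an agent this ranked list plus symbol lookups is what keeps reads near ~20 tokens instead of the ~4,000 a full file costs.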

Context That Never Disappears

Other agents summarise old context and discard it. Kaizen replaces tool output with CAS hash references - the agent can recall any masked content on demand. CAS masking always comes first; LLM summarisation is only the fallback.

  • CAS-backed masking: tool output stored by SHA-256 hash
  • recall tool retrieves any masked content instantly
  • Triggered at 75% of the 200K context window (~150K tokens)
  • LLM summarisation only as last resort
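The masking-and-recall loop can be sketched in a few lines. The stub format and in-memory store are illustrative, not Kaizen's actual representation:

```python
import hashlib

# Sketch of CAS-backed context masking: bulky tool output is stored
# by SHA-256 and replaced in the context with a short hash stub;
# a recall step expands it on demand. Nothing is lost, unlike
# summarise-and-discard approaches.
cas = {}

def mask(output):
    digest = hashlib.sha256(output.encode()).hexdigest()
    cas[digest] = output
    return f"[masked: {digest}]"   # the stub the agent keeps in context

def recall(stub):
    digest = stub.removeprefix("[masked: ").removesuffix("]")
    return cas[digest]

big_output = "stack trace line\n" * 5000
stub = mask(big_output)
print(len(stub), "chars in context instead of", len(big_output))
```

The stub costs a fixed handful of tokens regardless of how large the original output was, which is why masking can defer summarisation until there is truly no other option.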

"Memory + checkpoints + smart routing: a focused model with full project context can outperform a frontier model with none."

Works With Your Entire Stack

Point Kaizen at any LLM provider - or keep it fully local.

Built to Run Anywhere

Language Go 1.26
Storage SQLite (WAL mode)
Cloud Required None
Distribution Single binary
Platforms Linux, macOS, Windows (amd64 + arm64)
Context Window Up to 200K tokens
Languages (tree-sitter) 18 supported
LLM Providers Anthropic, OpenAI, any compatible endpoint
Effort Levels low · medium · high · ludicrous · auto

Get Early Access to Kaizen

Join the waitlist. Shape the product with direct engineering feedback.

  • Priority early access
  • Direct engineering feedback channel
  • Roadmap influence