These Aren't Cloud APIs.
These Are Living, Learning Systems.
When you install F3L1X, you're not renting access to someone's servers. You're deploying a growing network of autonomous agents that run locally, communicate through the Herald network, and get smarter every time you use them. Each tier unlocks more realms, and you can always build your own.
Recursive Self-Improvement
Realms document their own code, generate their own tests, and learn from failures. Every error becomes a lesson. Every successful pattern gets reinforced.
Meta-Programming
realm-spawn creates new realms from templates. Agents that build agents. Say "spawn realm socialmedia-sally" and watch F3L1X scaffold a complete agent system.
Herald Network
Realms communicate through a local message broker. herald routes requests, manages state, and keeps your AI workforce coordinated without cloud dependencies.
Your First Week With F3L1X
Install and Discover What's Already Built
Install F3L1X. Say "hello f3l1x" and mr-greet loads your session context, surfaces blockers, and starts critical services automatically. Use doc-u-me to search your entire documentation library. Let worker-bee verify every realm is healthy. You're productive in minutes, not days.
Make Existing Realms Work for You
Every realm is a standard Django app, so you can extend any of them. Tell Claude Code to add a custom management command to doc-u-me, modify worker-bee's health checks for your infrastructure, or tweak herald routes for your workflow. The realms document themselves, so Claude always has context.
Spawn Your Own Realms
Use realm-spawn to generate a new agent from scratch. It scaffolds a complete Django app with Herald integration, tests, CLAUDE.md, and MCP tools. Name it, describe what it does, and start building. pipeline-go ensures every realm follows the same architecture, so they all work together out of the box.
Example: Spawn "invoicing-ivan" for freelance invoices, "content-claire" for social media scheduling, or "report-rick" for automated client reports. Build what you actually need.
Share or Sell What You've Built
Once your realm is working and tested, package it for the Herald marketplace. Set a price per use or offer it free. Other F3L1X users install it locally and pay via x402 micropayments; your realm runs on their machine, so you have zero server costs. Early creators get featured placement when the marketplace launches.
Most AI platforms take weeks to learn. Most agent frameworks need a dev team.
F3L1X gives you a working AI workforce on day one, plus the tools to grow it yourself.
The End Goal: True AI Sovereignty
Using sov-ai (local LLMs via Ollama) and transcriber (local Whisper), you can evolve realms into standalone MCP servers that don't require Claude Code anymore. Build your tool. Prove it works. Cut the cord. Own it forever.
And you don't do this alone: the ecosystem automates the journey. pipeline-go enforces the architecture that makes realms portable. explore-kid pre-filters with local AI so you stop paying for tokens you don't need. sync-man keeps your configurations consistent across every realm. worker-bee monitors the whole fleet. The realms that build realms also build your path to independence.
The Nervous System
Critical realms that keep F3L1X alive and coordinated.
herald
Port 8014. Central message broker. Routes inter-realm communication, manages JWT auth, and serves as the gateway for the x402 payment protocol.
- WebSocket-based message routing with Redis backend
- JWT authentication + device flow OAuth for CLI clients
- REST API at /api/tools/ exposing marketplace catalog
- x402 payment protocol gateway for micropayments
- Real-time realm status broadcasting
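To make the routing concrete, here is a minimal sketch of the kind of message envelope a realm might hand to herald. The field names and wire format are illustrative assumptions, not Herald's actual protocol:

```python
import json
import uuid
from datetime import datetime, timezone

def build_envelope(source: str, target: str, action: str, payload: dict) -> str:
    """Build a JSON envelope for inter-realm routing (illustrative format)."""
    return json.dumps({
        "id": str(uuid.uuid4()),        # unique message id for tracing
        "source": source,               # sending realm, e.g. "doc-u-me"
        "target": target,               # receiving realm, e.g. "sov-ai"
        "action": action,               # requested tool or endpoint
        "payload": payload,             # action-specific arguments
        "sent_at": datetime.now(timezone.utc).isoformat(),
    })

envelope = build_envelope("doc-u-me", "sov-ai", "summarize", {"text": "..."})
```

In the real system, herald would validate the sender's JWT before forwarding an envelope like this over its Redis-backed WebSocket routes.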
f3l1x-dashboard
Port 8000. Your command center. Control Claude Code terminals, monitor realm status, access the marketplace, and view system health, all in one place.
- Multi-terminal Claude Code session management
- Live realm health monitoring across all active agents
- Integrated marketplace browser with x402 payment UI
- WebSocket terminal output streaming
- One-click realm start/stop/restart controls
402-payment
Port 8086. x402 protocol implementation. Handles micropayment infrastructure for marketplace transactions. Built on Base L2 with USDC settlement.
- Coinbase Commerce integration for crypto payments
- Per-use pricing ($0.01-$1.00 range) with instant settlement
- Smart contract escrow for tool usage verification
- Automatic revenue splits (80% creator, 20% F3L1X)
- Base L2 for low gas fees (~$0.001 per transaction)
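The 80/20 revenue split is simple enough to sketch. This is an illustrative calculation only; how the actual settlement contract handles sub-cent remainders is an assumption here (we round the creator's share down to the cent):

```python
from decimal import Decimal, ROUND_DOWN

CREATOR_SHARE = Decimal("0.80")  # 80% creator / 20% F3L1X, per the split above

def split_revenue(price_usdc: str) -> tuple[Decimal, Decimal]:
    """Split a per-use price into (creator, platform) shares in USDC."""
    price = Decimal(price_usdc)
    creator = (price * CREATOR_SHARE).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    platform = price - creator  # platform takes the remainder
    return creator, platform

creator_cut, platform_cut = split_revenue("1.00")  # -> 0.80 and 0.20
```

Using `Decimal` rather than floats avoids binary rounding surprises when summing many $0.01-scale transactions.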
Get Things Done
Realms that automate your daily workflows and compound your productivity.
doc-u-me
Port 8005. Documentation search on steroids. Continuously indexes every markdown file across all realms, extracts blockers from unchecked TODOs, and links code to documentation. Context-aware semantic search via sov-ai integration.
- Full-text search across your entire realm ecosystem
- Auto-extracts unchecked Next Steps as blockers
- Links docs to code files via path references
- Semantic search using sov-ai local embeddings
- Blocker prioritization (high/medium/low)
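The blocker-extraction idea can be sketched in a few lines: scan markdown for unchecked task items and guess a priority from keywords. The regex and the priority keywords are illustrative assumptions; doc-u-me's real classifier may work differently:

```python
import re

# Matches unchecked markdown task items, e.g. "- [ ] Fix auth bug"
UNCHECKED = re.compile(r"^\s*[-*]\s*\[ \]\s*(.+)$", re.MULTILINE)

# Keyword hints for priority guessing (illustrative, not doc-u-me's rules)
PRIORITY_HINTS = {"high": ("urgent", "blocker", "broken"), "medium": ("should", "soon")}

def extract_blockers(markdown: str) -> list[dict]:
    """Pull unchecked TODO items from a markdown doc and tag a priority."""
    blockers = []
    for match in UNCHECKED.finditer(markdown):
        text = match.group(1).strip()
        lowered = text.lower()
        priority = "low"
        for level, hints in PRIORITY_HINTS.items():
            if any(hint in lowered for hint in hints):
                priority = level
                break
        blockers.append({"text": text, "priority": priority})
    return blockers

doc = """## Next Steps
- [x] Ship health endpoint
- [ ] Fix broken auth redirect
- [ ] Write changelog
"""
found = extract_blockers(doc)  # checked item is skipped; two blockers remain
```

Checked items (`[x]`) are ignored, which is what lets "unchecked Next Steps" double as a live blocker list.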
worker-bee
Port 8082. Ecosystem health verification and session logging. Automated workflows that run without you. Verifies 9 essential realms, checks endpoints, and logs activity.
- Verifies 9 core realms (herald, dashboard, doc-u-me, etc.)
- HTTP endpoint health checks with response time tracking
- Session activity logging to JSONL format
- Auto-restart failed services via systemd integration
- Dashboard at :8082 with real-time status grid
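The health-check logic reduces to classifying each probe result. A minimal sketch, where the status thresholds are illustrative defaults rather than worker-bee's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class HealthResult:
    realm: str
    status_code: int    # HTTP status from the realm's health endpoint
    response_ms: float  # measured response time

def classify(result: HealthResult, slow_ms: float = 500.0) -> str:
    """Map an endpoint probe to a health state (thresholds are assumptions)."""
    if result.status_code != 200:
        return "down"
    if result.response_ms > slow_ms:
        return "degraded"
    return "healthy"

states = [
    classify(HealthResult("herald", 200, 42.0)),
    classify(HealthResult("doc-u-me", 200, 900.0)),
    classify(HealthResult("sync-man", 502, 10.0)),
]
```

A "down" classification is what would trigger the systemd auto-restart path mentioned above.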
mr-greet
Port 8078. Session greeting and context recovery. Loads your last handoff, surfaces high-priority blockers, and auto-starts critical services. Say "hello f3l1x" and get back to work instantly.
- Loads last session handoff from prompt-bridge
- Surfaces high-priority blockers from doc-u-me
- Auto-starts herald, dashboard, worker-bee on greeting
- Shows ecosystem health summary from worker-bee
- Single command ("hello f3l1x") restores full context
prompt-bridge
Port 8061. Session handoff generation. Produces concise context summaries for continuing work across sessions. Never lose your place again.
- Analyzes conversation history to extract key decisions
- Generates 3-5 sentence context summary
- Stores handoffs in ~/.claude/handoffs/ by realm
- Auto-loaded by mr-greet on next session start
- Preserves technical context across Claude restarts
sync-man
Port 8063. CLAUDE.md sync and drift detection. Keeps project instructions consistent across realms and catches when your docs get out of sync with reality.
- Scans all realms for CLAUDE.md files
- Detects drift between project docs and pipeline-go kernel
- Auto-syncs common sections across realms
- Warns when realm-specific instructions conflict
- Prevents stale documentation from misleading Claude
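Drift detection can be sketched as a section-level diff between the pipeline-go kernel and a realm's CLAUDE.md. Diffing at `## ` heading granularity is an assumption about how sync-man works:

```python
def sections(markdown: str) -> dict[str, str]:
    """Split a CLAUDE.md-style doc into {heading: body} by '## ' headings."""
    result, current = {}, None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            result[current] = ""
        elif current is not None:
            result[current] += line + "\n"
    return result

def drifted(kernel: str, realm_doc: str) -> list[str]:
    """Return shared headings whose bodies differ between kernel and realm."""
    a, b = sections(kernel), sections(realm_doc)
    return [h for h in a if h in b and a[h].strip() != b[h].strip()]

kernel = "## Git Standards\nUse conventional commits.\n## Testing\nRun pytest.\n"
realm = "## Git Standards\nUse conventional commits.\n## Testing\nRun unittest.\n"
out_of_sync = drifted(kernel, realm)  # -> ["Testing"]
```

Realm-specific sections (headings that exist only in the realm's file) are deliberately left alone; only shared sections are compared.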
login-master
Port 8043. Supabase SSO integration. Centralized authentication for F3L1X services: Google OAuth, GitHub auth, and more.
- Supabase Auth integration for centralized SSO
- Google OAuth + GitHub OAuth providers
- JWT token generation for realm-to-realm auth
- User session management across all F3L1X services
- Magic link passwordless authentication
Build Smarter
Realms that help you write better code, generate tests, and plan features.
realm-spawn
Port 8029. The meta-agent. Generates new realms from templates. Say "spawn realm email-emma" and watch it scaffold a complete Django app with tests, docs, and Herald integration.
- Django app scaffolding with pipeline-go structure
- Auto-generates CLAUDE.md, spec.md, init.sh per realm
- Herald integration boilerplate (message routing, auth)
- MCP server template with tool definitions
- Port registry auto-assignment (8000-8999 range)
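Port auto-assignment in the 8000-8999 range can be sketched as a scan of the existing registry. A simple lowest-free-port strategy is an assumption; realm-spawn's actual allocator may differ:

```python
def next_free_port(registry: dict[str, int], low: int = 8000, high: int = 8999) -> int:
    """Pick the lowest unused port in the realm range (illustrative strategy)."""
    taken = set(registry.values())
    for port in range(low, high + 1):
        if port not in taken:
            return port
    raise RuntimeError("realm port range exhausted")

# A few known assignments from the catalog above
registry = {"f3l1x-dashboard": 8000, "doc-u-me": 8005, "herald": 8014}
port = next_free_port(registry)  # -> 8001, the first gap after the dashboard
```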
pipeline-go
Port 8023. CI/CD methodology and development framework. Django patterns, testing standards, Railway deployment, git workflows. The operating system for F3L1X realms.
- Django HackSoftware architecture (config/, apps/)
- TDD workflow with pytest and Django TestCase
- Railway deployment templates with WhiteNoise
- Git standards: commit format, pre-commit hooks
- Agent harness (CLAUDE.md + spec.md + init.sh)
test-master
Port 8009. Testing orchestrator. Generates test suites, runs coverage, enforces the TDD workflow. Django's testing framework on autopilot.
- Auto-generates test cases from model/view definitions
- Runs pytest with coverage reporting (--cov flag)
- Enforces 80%+ coverage requirement
- Parallel test execution for speed
- Integration with GitHub Actions CI pipeline
upgraydd
Port 8008. Feature research and planning agent. Investigates implementation strategies, drafts specs, and analyzes trade-offs. Your technical co-founder.
- Researches libraries/frameworks for feature requirements
- Drafts spec.md files with implementation plans
- Analyzes architecture trade-offs (monolith vs microservices)
- Estimates effort and complexity
- Proposes 2-3 implementation options with pros/cons
euclid-boy
Port 8011. Algorithmic optimization specialist. Analyzes complexity, benchmarks performance, and suggests optimizations. Big-O notation expert.
- Big-O complexity analysis for algorithms
- Benchmarking with Python timeit and profiling tools
- Database query optimization (indexes, select_related)
- Suggests algorithmic improvements (caching, memoization)
- Identifies performance bottlenecks via profiling
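The memoization suggestion above is the classic example: naive recursive Fibonacci is exponential in call count, while `functools.lru_cache` makes it linear. The call counters here are just instrumentation to make the difference visible:

```python
from functools import lru_cache

calls = {"plain": 0, "cached": 0}

def fib_plain(n: int) -> int:
    calls["plain"] += 1  # O(2^n) calls: recomputes the same subproblems
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    calls["cached"] += 1  # O(n) calls: each distinct n computed once
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_plain(20)   # tens of thousands of calls
fib_cached(20)  # 21 calls, one per distinct n in 0..20
```

This is the shape of improvement euclid-boy proposes: same result, drastically smaller call tree.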
hacker-man
Active. Security-focused development assistant. Audits for OWASP vulnerabilities, suggests hardening, and reviews auth implementations.
- OWASP Top 10 vulnerability scanning
- SQL injection, XSS, CSRF detection
- Authentication/authorization review
- Django security check (manage.py check --deploy)
- Secrets detection in code (API keys, passwords)
Sovereign Intelligence
Run AI models locally. Your data never leaves your machine. The path to AI independence.
sov-ai
Port 8017. Local AI infrastructure via Ollama. Summarize, classify, and extract entities, all on your hardware. MCP tools for Claude Code integration. Supports Qwen 2.5, Llama 3, and CodeLlama.
- Ollama integration (Qwen 2.5, Llama 3, CodeLlama models)
- MCP tools: summarize, classify, extract, inference
- Local embeddings for semantic search (doc-u-me)
- Zero API costs - runs entirely on your GPU/CPU
- Prompt enhancement for better Claude results
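A local summarize call reduces to a POST against Ollama's `/api/generate` endpoint, which listens on port 11434 by default. The prompt template and model tag below are illustrative; sov-ai's actual MCP tool wrapping is an assumption:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_summarize_request(text: str, model: str = "qwen2.5") -> dict:
    """Build a request body for Ollama's /api/generate (prompt template is ours)."""
    return {
        "model": model,
        "prompt": f"Summarize in two sentences:\n\n{text}",
        "stream": False,  # ask for a single JSON response, not a token stream
    }

def summarize(text: str) -> str:
    """Send the request to a locally running Ollama server."""
    body = json.dumps(build_summarize_request(text)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_summarize_request("Realms are autonomous Django agents.")
```

Because the endpoint is local, every call like this is one fewer metered API request.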
transcriber
Port 8019. Voice-to-text using local Whisper models. Transcribe meetings, voice notes, and phone calls. 100% local processing: no API calls, no subscriptions.
- OpenAI Whisper models (tiny, base, small, medium, large)
- Real-time transcription via microphone input
- Batch processing for audio files (MP3, WAV, M4A)
- Speaker diarization (multi-speaker detection)
- Timestamp generation for searchable transcripts
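Timestamp generation for searchable transcripts is mostly formatting. Whisper's `transcribe()` result does include a `segments` list with `start`, `end`, and `text` keys; the line format below is our own choice:

```python
def to_timestamp(seconds: float) -> str:
    """Format a segment offset as HH:MM:SS.mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def render(segments: list[dict]) -> str:
    """Render Whisper-style segments as timestamped, grep-able lines."""
    return "\n".join(
        f"[{to_timestamp(seg['start'])} -> {to_timestamp(seg['end'])}] {seg['text'].strip()}"
        for seg in segments
    )

transcript = render([
    {"start": 0.0, "end": 4.2, "text": " Welcome to the standup."},
    {"start": 4.2, "end": 9.87, "text": " First item: Herald routing."},
])
```

Lines in this shape make a transcript searchable with plain text tools, no database required.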
explore-kid
Port 8031. Token-efficient codebase exploration. Uses local AI to pre-filter files before expensive Claude queries. Saves money while you learn a new codebase.
- Pre-filters files with local LLM (Qwen/Llama)
- Reduces Claude API costs by 60-80%
- Semantic file search using local embeddings
- Generates codebase summary reports
- Identifies entry points and key dependencies
The Local AI Endgame
Start with Claude Opus for high-capability work. Use sov-ai to learn patterns. Eventually, your realms become smart enough to run on local-only infrastructure. That's when you truly own your AI. No subscriptions. No API bills. Just your computer, doing work.
An Ecosystem That Grows With You
The realms shown above are the core infrastructure. But the F3L1X ecosystem keeps expanding: PDF converters, image processors, SQL experts, UI fixers, social media managers, theme generators. Every idea becomes a realm, and every tier gives you access to more.
What is a Realm?
A realm is the programmatic conceptualization of an idea. It's a Django app, an MCP server, a memory system, a specialized tool - all wrapped into an autonomous agent. Realms communicate via Herald, document themselves, generate tests, and can spawn new realms. They're not just programs. They're living systems that get smarter as you use them.
Ready to Deploy Your AI Workforce?
Multiple specialized agents. Local execution. Progressive autonomy. Start building today.