✈
Telegram Dual-Stack
Two parallel Telegram services: Aiogram 3.x async bot (public API, polling, inline buttons) + Telethon MTProto userbot (account-level: profile, groups, channels, forwarding, reactions). 43 Telegram tools. RPC between Brain and Telethon via PostgreSQL NOTIFY — zero HTTP overhead.
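The NOTIFY-based RPC can be sketched as small JSON frames — a minimal sketch assuming `id`/`method`/`params` fields and a staging-table fallback (the field names are illustrative, not the project's actual schema); PostgreSQL caps a NOTIFY payload at just under 8000 bytes:

```python
import json
import uuid

# PostgreSQL rejects NOTIFY payloads of 8000 bytes or more (default build).
PG_NOTIFY_MAX = 7999

def build_rpc_payload(method: str, params: dict) -> str:
    """Serialize one Brain -> Telethon call for NOTIFY (illustrative schema)."""
    payload = json.dumps({
        "id": str(uuid.uuid4()),   # correlates the reply NOTIFY with the request
        "method": method,
        "params": params,
    }, separators=(",", ":"))
    if len(payload.encode()) > PG_NOTIFY_MAX:
        # Oversized calls would be staged in a table and only a key NOTIFY-ed.
        raise ValueError("payload too large for NOTIFY; stage it in a table")
    return payload

def parse_rpc_payload(raw: str) -> tuple[str, str, dict]:
    """Decode a frame on the listening side."""
    msg = json.loads(raw)
    return msg["id"], msg["method"], msg["params"]
```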
✉
Email Service (SMTP/IMAP)
Full async SMTP/IMAP service — 4th delivery channel alongside Telegram, Dashboard, and API. 6 email tools: send, reply (with threading), check inbox, read, search, download attachments. Auto-converts Markdown to HTML with anti-spam headers. Incoming emails auto-create sessions. Dashboard config tab with test connection.
⚡
Async-Native Architecture
100% async from top to bottom: asyncpg (non-blocking DB), Aiogram (async Telegram), Telethon (async MTProto), FastAPI (async HTTP). PostgreSQL LISTEN/NOTIFY as event bus — no Redis, no RabbitMQ. SKIP LOCKED queue for backpressure.
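The SKIP LOCKED pattern lets N workers pull from one table without double-claiming a job: each claiming transaction locks its rows, and competitors simply skip them. A sketch of the claim query with illustrative table and column names:

```python
def claim_jobs_sql(limit: int) -> str:
    """Atomically claim up to `limit` pending jobs for one worker.

    FOR UPDATE SKIP LOCKED makes concurrent workers skip rows another
    transaction already holds, which doubles as backpressure: a saturated
    worker pool just leaves jobs pending. Table/column names are illustrative.
    """
    return f"""
    UPDATE job_queue
       SET status = 'running', started_at = now()
     WHERE id IN (
           SELECT id FROM job_queue
            WHERE status = 'pending'
            ORDER BY priority, created_at
            FOR UPDATE SKIP LOCKED
            LIMIT {int(limit)}
     )
    RETURNING id, payload;
    """
```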
🎧
Multi-LLM Orchestra
Gemini 2.5, GPT-4o, Claude, DeepSeek, NVIDIA NIM — automatic fallback chains with per-category model selection and RPM rate limiting. Provider-side prompt caching support (Gemini 1M context cache). Token analytics: per-model input/output tracking, error counts, latency — all visible in Dashboard.
🧠
Memory & RAG
Three-tier memory with vector-free RAG: STM (50 msgs/chat, 72h TTL), LTM with hybrid scoring (FTS tsvector 35% + Jaccard tags 30% + importance 20% + user boost 15%), ISTM (inter-session broadcast with threading). Memory Manager Gate — 3-phase consciousness flow with decision logging and automatic LTM/ISTM hygiene. No embedding API costs, no vector drift.
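The hybrid LTM score is a plain weighted sum of the four components above — no embeddings involved. A sketch assuming each component is pre-normalized to [0, 1] (the weights come from the text; the normalization is an assumption):

```python
def jaccard(a: set, b: set) -> float:
    """Tag-set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def ltm_score(fts_rank: float, query_tags: set, memory_tags: set,
              importance: float, user_boost: float) -> float:
    """Weighted hybrid relevance for one LTM record."""
    return (0.35 * fts_rank                          # FTS tsvector rank
            + 0.30 * jaccard(query_tags, memory_tags)  # tag overlap
            + 0.20 * importance                      # stored importance
            + 0.15 * user_boost)                     # per-user boost
```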
🔁
Context Compression
Automatic Summarizer pipeline: when STM exceeds token threshold (4K) or age (24h), oldest messages are batched → summarized by LLM → archived to LTM with tags. Retains 15 most recent messages per chat. Sanitizer runs hourly archival + daily cleanup. Keeps context lean without losing information.
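The trigger and batch split can be sketched directly from the numbers above (function names are illustrative):

```python
TOKEN_THRESHOLD = 4_000   # STM token budget per chat
MAX_AGE_HOURS = 24        # age trigger
KEEP_RECENT = 15          # messages retained verbatim per chat

def needs_compression(total_tokens: int, oldest_age_hours: float) -> bool:
    """Fire the summarizer when either the size or the age trigger trips."""
    return total_tokens > TOKEN_THRESHOLD or oldest_age_hours > MAX_AGE_HOURS

def split_batch(messages: list) -> tuple:
    """Return (to_summarize, retained); `messages` ordered oldest -> newest.
    Everything older than the 15 newest goes to the LLM summarizer and LTM."""
    if len(messages) <= KEEP_RECENT:
        return [], messages
    return messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
```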
🤖
Multi-Swarm & Sub-Sessions
N concurrent supervisor sessions with priority queue — each session is an independent consciousness. Sessions spawn specialized sub-sessions (communicator, code, search, telethon, subconscious) with reduced tool sets. 3-level category hierarchy: queue → category → vector with prompt inheritance. ISTM enables threaded inter-session memory with broadcast and targeted thoughts.
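The priority queue over sessions can be sketched with `heapq`; the FIFO tie-break among equal priorities is an assumption:

```python
import heapq
import itertools

class SessionQueue:
    """Priority scheduler for supervisor sessions (lower number = runs first).
    A monotonically increasing counter keeps FIFO order at equal priority."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def put(self, session_id: str, priority: int) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), session_id))

    def next(self) -> str:
        """Pop the highest-priority (then oldest) waiting session."""
        return heapq.heappop(self._heap)[2]
```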
💡
7D Growth Engine
7-dimensional scoring: knowledge, communication, autonomy, creativity, reliability, rationality, world_acceptance. Asymmetric formula: gains are harder to earn near the top of the scale, losses are damped near the bottom. Auto-computed reliability (task success rate) and rationality (token efficiency). Auditor sessions write batch assessment reports to the built-in Blog system. Full audit trail in GrowthLog.
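The asymmetric update can be sketched as scaling each delta by the remaining headroom (for gains) or cushion (for losses) — one plausible shape only, not the project's exact formula:

```python
def apply_growth(score: float, delta: float,
                 lo: float = 0.0, hi: float = 100.0) -> float:
    """Asymmetric score update (illustrative formula).

    Positive deltas shrink as the score approaches the ceiling;
    negative deltas shrink as it approaches the floor.
    """
    span = hi - lo
    if delta >= 0:
        scaled = delta * (hi - score) / span   # little headroom -> small gain
    else:
        scaled = delta * (score - lo) / span   # little cushion -> small loss
    return min(hi, max(lo, score + scaled))
```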
🛡
Self-Healing & Cloud Backup
Unified watchdog monitors all services via heartbeat (30s interval). Auto-restart through Docker API. Self-update pipeline: git pull → migrate → rebuild → health gate → rollback. Cloud Backup v2.0: full state serialization (12 tables + skills) to git branch — push/pull from Dashboard. Restore to new server without data loss.
🔎
Search & Browse Providers
Multi-provider search with automatic fallback: Brave Search API (native scoring) → Serper.dev (Google results via proxy) → DuckDuckGo (zero API key fallback). Research tool spawns isolated sub-sessions for deep web analysis. API keys managed from Dashboard.
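The provider walk can be sketched as a chain that treats both an exception and an empty result set as a miss (the provider callables are stand-ins for the real API clients):

```python
def search(query: str, providers: list) -> tuple[str, list]:
    """Walk (name, fn) pairs, e.g. Brave -> Serper -> DuckDuckGo, and return
    results from the first provider that answers with a non-empty list."""
    errors = []
    for name, fn in providers:
        try:
            results = fn(query)
            if results:                    # empty answer: keep falling through
                return name, results
        except Exception as e:             # missing key, quota, outage
            errors.append(f"{name}: {e}")
    raise RuntimeError("; ".join(errors) or "no results from any provider")
```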
🔌
MCP Protocol & Integrations
Native Model Context Protocol (JSON-RPC 2.0 over stdio). Pre-configured: Playwright MCP (web automation, accessibility snapshots), Puppeteer Real Browser (stealth browsing, CAPTCHA bypass). Auth support: API keys, Bearer tokens, OAuth2 with auto-refresh. Server namespacing prevents tool collisions.
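One MCP `tools/call` request over the stdio transport is a single newline-delimited JSON-RPC 2.0 object; a sketch, with a hypothetical `server__tool` separator standing in for the namespacing scheme:

```python
import json
import itertools

_ids = itertools.count(1)   # JSON-RPC request ids must be unique per session

def namespaced(server: str, tool: str) -> str:
    """Prefix a tool with its server so two MCP servers both exposing,
    say, `screenshot` cannot collide. The separator is an assumption."""
    return f"{server}__{tool}"

def tools_call_frame(tool: str, arguments: dict) -> str:
    """One JSON-RPC 2.0 `tools/call` request, newline-delimited for stdio
    (MCP stdio messages must not contain embedded newlines)."""
    req = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(req) + "\n"
```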
💻
Dev Sandbox & Claude Code
Isolated Docker containers: init → exec → test → diff → promote. Claude Code CLI delegation — complex multi-file refactoring, feature implementation, full dev cycles via subprocess. Auth fallback chain, configurable timeout (up to 30 min), token budget ($5), and a 50-turn cap.
📚
Skills, Blog & Knowledge Base
Dynamic skill loading from Markdown files. Per-category prompts, tool whitelists, and LLM parameters — all configurable from DB at runtime. Agent can create and manage its own skills autonomously. Built-in Blog for self-reflection journaling with mood tags (reflective, curious, inspired). Dashboard UI with tag filtering.
📊
Execution Planning & Analytics
Structured execution plans with multi-step decomposition and status tracking (draft → in_progress → completed). Session todo lists with branch projections — planned next steps for both success and failure outcomes. Token analytics dashboard: per-model usage, error rates, latency — aggregated by day/week/month. Full execution trace with i18n badges.
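The status flow can be sketched as a small transition table (allowing a re-plan back to draft is an assumption):

```python
VALID_TRANSITIONS = {
    "draft": {"in_progress"},
    "in_progress": {"completed", "draft"},   # re-plan path is an assumption
    "completed": set(),                      # terminal state
}

def advance(status: str, new_status: str) -> str:
    """Move a plan to `new_status`, rejecting illegal jumps."""
    if new_status not in VALID_TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```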