A self-hosted AI workspace where a master LLM orchestrates workers, manages SSH machines, connects MCP tools, and keeps every user's data completely isolated. Your models, your rules, your server.
A master LLM plans the work and delegates subtasks to specialized workers in parallel. Cheaper, faster, better.
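The fan-out pattern can be sketched as follows. This is a minimal illustration, not the project's actual code: the worker registry, worker names, and the `asyncio`-based dispatch are all assumptions standing in for real LLM calls.

```python
import asyncio

# Hypothetical worker registry; in the real system each entry would be
# a specialized LLM, possibly a cheaper model for simpler subtasks.
WORKERS = {
    "code": lambda task: f"[code worker] {task}",
    "research": lambda task: f"[research worker] {task}",
}

async def run_worker(kind: str, task: str) -> str:
    # Stand-in for a network call to a model; subtasks run concurrently.
    await asyncio.sleep(0)
    return WORKERS[kind](task)

async def orchestrate(plan: list[tuple[str, str]]) -> list[str]:
    # The master fans out all subtasks in parallel and gathers results.
    return await asyncio.gather(*(run_worker(k, t) for k, t in plan))

results = asyncio.run(orchestrate([("code", "write tests"), ("research", "find docs")]))
```

Running subtasks concurrently rather than sequentially is what makes delegation faster as well as cheaper.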
Register your servers. The agent runs commands, streams output live, and uploads and downloads files, all from chat.
Connect any MCP-compatible tool server. Workers get scoped access to exactly the tools they need.
Each user gets an isolated file system. Upload, browse, drag into chat. The agent reads and writes files for you.
Long conversations are automatically summarized when the context window fills up. Nothing is lost; the full history stays retrievable.
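A rolling-summary buffer like the one described above might look like this sketch. The function name and the word-count stand-in for a summarizer LLM are hypothetical; only the shape of the technique (collapse the oldest overflow into one summary entry, keep the tail verbatim) comes from the description.

```python
# Hypothetical context compaction: when the live window exceeds its
# budget, older messages collapse into a single summary entry. A real
# system would have an LLM write the summary; here we just label it.

def compact(messages: list[str], budget: int) -> list[str]:
    if len(messages) <= budget:
        return messages
    overflow = messages[: len(messages) - budget + 1]
    summary = f"[summary of {len(overflow)} earlier messages]"
    # The originals remain stored elsewhere; only the live window shrinks.
    return [summary] + messages[len(overflow):]

log = [f"msg {i}" for i in range(10)]
window = compact(log, budget=4)
```

The key property is that compaction shrinks only the active context window; the full transcript stays on disk and remains retrievable.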
Your server, your data. API keys encrypted at rest, per-user isolation, SSRF protection. No third party sees your conversations.
Workspace, SSH, web search, code analysis, persistent memory, todo tracking, platform management: all built in.
Define behavioral rules as plain Markdown files. Link them per conversation. The agent follows your guidelines every time.
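One plausible way rules-as-Markdown can reach the model is by concatenating the linked files into the system prompt. The function, file contents, and prompt layout below are illustrative assumptions, not the project's actual format.

```python
# Hypothetical rules loader: plain Markdown rule texts become sections
# of the system prompt for one conversation.

def build_system_prompt(base: str, rule_texts: list[str]) -> str:
    if not rule_texts:
        return base
    sections = "\n\n".join(f"## Rule\n{r.strip()}" for r in rule_texts)
    return f"{base}\n\n# Linked rules\n{sections}"

prompt = build_system_prompt(
    "You are the workspace agent.",
    ["Always ask before deleting files.", "Prefer small, reviewable diffs."],
)
```

Because the rules are plain files, they can be versioned, diffed, and linked per conversation like any other document.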
Every message is a typed component: text, code, HTML, PDF, images, plans, traces, todos. Extensible with one file.
Ask anything: build a feature, analyze data, deploy to a server, research a topic.
Your master LLM breaks the request into subtasks, picks the best worker for each one, and shows you the plan.
Reassign workers, edit instructions, add or remove tasks. You're always in control.
Each worker gets only the context it needs โ trimmed, scoped, budget-capped. No wasted tokens.
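Scoped, budget-capped context could be implemented along these lines. The tagging scheme and the word count standing in for a token count are assumptions for illustration.

```python
# Hypothetical per-worker context scoping: a worker sees only items
# tagged for its task, trimmed to a budget (words stand in for tokens).

def scope_context(items: list[tuple[str, str]], tag: str, budget: int) -> list[str]:
    picked, spent = [], 0
    for item_tag, text in items:
        if item_tag != tag:
            continue  # out-of-scope context never reaches the worker
        cost = len(text.split())
        if spent + cost > budget:
            break  # budget cap: stop before overspending
        picked.append(text)
        spent += cost
    return picked

ctx = scope_context(
    [("code", "repo layout notes"), ("research", "long article"), ("code", "style guide")],
    tag="code",
    budget=5,
)
```

Filtering by tag first and capping second means a worker never pays tokens for another worker's context.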
Results are collected and the master produces a unified response. A trace shows exactly what happened: LLMs used, tools called, tokens spent.
Modern stack, no magic, fully open.