๐Ÿ™

Just a Better Interface

A self-hosted AI workspace where a master LLM orchestrates workers, manages SSH machines, connects MCP tools, and keeps every user's data completely isolated. Your models, your rules, your server.

Get started
๐Ÿ  Explore our MCP servers for accelerating material simulation

Everything you need, nothing you don't

🧠

Multi-LLM orchestration

A master LLM plans the work and delegates subtasks to specialized workers in parallel. Cheaper, faster, better.

🖥️

SSH machines

Register your servers. The agent runs commands, streams output live, and uploads and downloads files, all from chat.

🔌

MCP tool servers

Connect any MCP-compatible tool server. Workers get scoped access to exactly the tools they need.
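Scoping can be as simple as filtering a tool registry per worker. A minimal sketch, assuming a hypothetical registry (`ALL_TOOLS` and `scoped_tools` are illustrative names, not the project's API):

```python
# Hypothetical sketch: per-worker tool scoping over a tool registry.
# The tool names and registry shape are illustrative.

ALL_TOOLS = {
    "web_search": lambda q: f"results for {q}",
    "run_ssh": lambda cmd: f"ran {cmd}",
    "read_file": lambda path: f"contents of {path}",
}

def scoped_tools(allowed: set[str]) -> dict:
    """Return only the tools a given worker is permitted to call."""
    return {name: fn for name, fn in ALL_TOOLS.items() if name in allowed}

# A research worker sees web search and nothing else.
research_worker = scoped_tools({"web_search"})
```

A worker holding `research_worker` simply cannot reach SSH or file tools, so scoping is enforced by construction rather than by prompt instructions.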

📁

File workspace

Each user gets an isolated file system. Upload, browse, drag into chat. The agent reads and writes files for you.
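Per-user isolation of this kind usually hinges on resolving every user-supplied path inside that user's root and rejecting traversal. A sketch under assumed names (`WORKSPACE_ROOT` and `resolve_user_path` are hypothetical, not the actual implementation):

```python
from pathlib import Path

# Illustrative sketch: resolve a user-supplied relative path inside that
# user's workspace root, refusing anything that escapes it.

WORKSPACE_ROOT = Path("/data/workspaces")   # assumed location

def resolve_user_path(user_id: str, relative: str) -> Path:
    root = (WORKSPACE_ROOT / user_id).resolve()
    target = (root / relative).resolve()
    if not target.is_relative_to(root):     # Python 3.9+
        raise PermissionError("path escapes the user's workspace")
    return target
```

With this guard, a request like `../bob/secret` resolves outside `alice`'s root and is rejected before any file I/O happens.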

🗜️

Context compression

Long conversations are automatically summarized when the context window fills up. Nothing is lost; the full history stays retrievable.
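The mechanism can be pictured as folding the older part of the history into a summary once a token budget is exceeded. A toy sketch, where `estimate_tokens` and `summarize` stand in for a real tokenizer and a real LLM summarization call:

```python
# Toy sketch of context-window compression: when the history exceeds the
# budget, older messages collapse into one summary and the recent tail
# is kept verbatim. estimate_tokens/summarize are stand-ins.

def estimate_tokens(msg: str) -> int:
    return len(msg.split())

def summarize(messages: list[str]) -> str:
    return f"[summary of {len(messages)} earlier messages]"

def compress(history: list[str], budget: int, keep_tail: int = 2) -> list[str]:
    """Fold older messages into a summary once the window fills up."""
    if sum(estimate_tokens(m) for m in history) <= budget:
        return history
    head, tail = history[:-keep_tail], history[-keep_tail:]
    return [summarize(head)] + tail
```

Because the original messages are still stored, the summary only replaces them inside the model's context, not in the retrievable history.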

🔒

Self-hosted & private

Your server, your data. API keys encrypted at rest, per-user isolation, SSRF protection. No third party sees your conversations.

๐Ÿ™

50+ built-in tools

Workspace, SSH, web search, code analysis, persistent memory, todo tracking, and platform management, all built in.

📋

Rules system

Define behavioral rules as plain Markdown files. Link them per conversation. The agent follows your guidelines every time.
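Since rules are plain Markdown, a rules file might look like this (the file name and contents are a hypothetical example, not a shipped default):

```markdown
<!-- hypothetical example: rules/code-review.md -->
# Code review rules
- Always run the test suite before proposing a merge.
- Never push directly to `main`; open a branch instead.
- Summarize every change in three bullet points or fewer.
```

Linking such a file to a conversation would make the agent apply these guidelines to every turn in it.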

💬

Bubble system

Every message is a typed component: text, code, HTML, PDF, images, plans, traces, todos. Extensible: add a new type with one file.
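One common way to get "one file per type" extensibility is a renderer registry keyed by bubble type. A minimal sketch with made-up names (`Bubble`, `renderer`, `render` are illustrative, not the actual component API):

```python
from dataclasses import dataclass

# Illustrative sketch of a typed-bubble registry: each message carries a
# type tag, and rendering dispatches on that tag. All names are made up.

@dataclass
class Bubble:
    kind: str        # "text", "code", "image", ...
    payload: str

RENDERERS = {}

def renderer(kind):
    """Register a render function for one bubble type."""
    def wrap(fn):
        RENDERERS[kind] = fn
        return fn
    return wrap

@renderer("text")
def render_text(b: Bubble) -> str:
    return b.payload

@renderer("code")
def render_code(b: Bubble) -> str:
    return f"<pre>{b.payload}</pre>"

def render(b: Bubble) -> str:
    return RENDERERS[b.kind](b)
```

Adding a new bubble type then means dropping in one more file that defines and registers its renderer; nothing else changes.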

How orchestration works

1

You send a message

Ask anything: build a feature, analyze data, deploy to a server, research a topic.

2

The master plans

Your master LLM breaks the request into subtasks, picks the best worker for each one, and shows you the plan.

3

You confirm (or edit)

Reassign workers, edit instructions, add or remove tasks. You're always in control.

4

Workers execute in parallel

Each worker gets only the context it needs: trimmed, scoped, budget-capped. No wasted tokens.

5

Master synthesizes

Results are collected and the master produces a unified response. A trace shows exactly what happened: LLMs used, tools called, tokens spent.
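The five steps above can be sketched as a small async loop. This is a toy sketch only: `plan`, `worker`, and `synthesize` are made-up stand-ins for the real LLM and tool calls, and the confirmation step is elided:

```python
import asyncio

# Toy sketch of plan -> parallel execute -> synthesize.
# plan/worker/synthesize stand in for real model calls; names are invented.

def plan(request: str) -> list[str]:
    """Master breaks the request into subtasks (step 2)."""
    return [f"research: {request}", f"draft: {request}"]

async def worker(subtask: str) -> str:
    """A scoped worker handles one subtask (step 4)."""
    await asyncio.sleep(0)          # placeholder for a model/tool call
    return f"done({subtask})"

def synthesize(results: list[str]) -> str:
    """Master folds worker output into one response (step 5)."""
    return " | ".join(results)

async def orchestrate(request: str) -> str:
    subtasks = plan(request)
    results = await asyncio.gather(*(worker(t) for t in subtasks))
    return synthesize(list(results))
```

Running the workers under `asyncio.gather` is what makes step 4 parallel: each subtask proceeds independently and the master only resumes once all results are in.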

Built with

Modern stack, no magic, fully open.

FastAPI · PostgreSQL · React · TypeScript · Vite · Tailwind CSS · WebSockets · Docker Compose · Nginx · Anthropic SDK · OpenAI SDK · Any OpenAI-compatible API