๐Ÿ™

Just a Better Interface

A self-hosted AI platform where a master LLM orchestrates workers, manages SSH machines, connects MCP tools, and runs autonomously 24/7 with the heartbeat agent. Your models, your rules, your server.

Get started
๐Ÿ  Explore our MCP servers for accelerating material simulation

Everything you need, nothing you don't

🧠

Multi-LLM orchestration

A master LLM plans the work and delegates subtasks to specialized workers in parallel. Cheaper, faster, better.
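The fan-out/fan-in pattern behind this can be sketched with asyncio. This is a minimal illustration, not the platform's actual API: the `call_worker` stub and the worker names are assumptions standing in for real LLM calls.

```python
import asyncio

# Hypothetical stand-in for a real worker LLM call (illustrative only).
async def call_worker(worker: str, subtask: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return f"[{worker}] done: {subtask}"

async def orchestrate(plan: dict[str, str]) -> list[str]:
    # The master has already planned: each subtask is assigned a worker.
    # Fan out to all workers concurrently, then gather the results.
    return await asyncio.gather(
        *(call_worker(worker, task) for worker, task in plan.items())
    )

results = asyncio.run(orchestrate({
    "coder": "write the parser",
    "researcher": "summarize the RFC",
}))
print(results)
```

`asyncio.gather` returns results in submission order, so the master can match each answer back to its subtask when synthesizing.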

🖥️

SSH machines

Register your servers. The agent runs commands, streams output live, uploads and downloads files — all from chat.

🔌

MCP tool servers

Connect any MCP-compatible tool server. Workers get scoped access to exactly the tools they need.

📁

File workspace

Each user gets an isolated file system. Upload, browse, drag into chat. The agent reads and writes files for you.

🗜️

Context compression

Long conversations are automatically summarized when the context window fills up. Nothing is lost; full history is always retrievable.
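A rolling-compression scheme like this can be sketched in a few lines. Everything here is an assumption for illustration: the 4-characters-per-token estimate and the `summarize` stub stand in for the platform's real tokenizer and LLM-generated summaries.

```python
# Minimal sketch of rolling context compression (assumed scheme).
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def summarize(messages: list[str]) -> str:
    # A real system would ask an LLM for this summary.
    return f"[summary of {len(messages)} earlier messages]"

def compress(history: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    total = sum(estimate_tokens(m) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history  # still fits; nothing to do
    head, tail = history[:-keep_recent], history[-keep_recent:]
    # Old turns collapse into one summary; recent turns stay verbatim.
    return [summarize(head)] + tail

history = [f"message {i} " * 50 for i in range(10)]
compressed = compress(history, budget=200)
print(len(compressed))
```

The originals that fed each summary stay in the database, which is what makes history retrievable even after compression.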

🔒

Self-hosted & private

Your server, your data. Secrets encrypted at rest, per-user isolation, configurable data retention. You control what stays and what goes.

🛡️

Built for trust

Route sensitive data to local models only. Per-model safety tags, audit-ready memory system, and data export tools — designed for teams that care about how their data is handled.

💞

Autonomous mode

The heartbeat agent keeps your research moving while you sleep. It reads state, decides what's next, and drives the conversation forward โ€” 24/7.

โฑ๏ธ

Auto-confirm plans

Delegation plans auto-execute after a configurable countdown. Freeze to edit, or let the timer run. Autonomous loops without babysitting.
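The countdown-with-freeze behavior can be sketched as a timeout race. This is an illustrative model, not the platform's implementation; the interval is shortened for the demo, and `freeze` is an assumed signal standing in for the user's freeze action.

```python
import asyncio

# Sketch: a plan auto-executes unless the user freezes it before the
# countdown elapses. Names and intervals are illustrative assumptions.
async def auto_confirm(plan: str, countdown: float, freeze: asyncio.Event) -> str:
    try:
        await asyncio.wait_for(freeze.wait(), timeout=countdown)
        return f"frozen for editing: {plan}"   # user intervened in time
    except asyncio.TimeoutError:
        return f"auto-executing: {plan}"       # timer ran out

async def demo() -> tuple[str, str]:
    # Case 1: nobody touches the plan, so the timer fires.
    untouched = await auto_confirm("deploy", 0.01, asyncio.Event())
    # Case 2: the user hits freeze before the countdown elapses.
    frozen_flag = asyncio.Event()
    frozen_flag.set()
    frozen = await auto_confirm("migrate db", 0.01, frozen_flag)
    return untouched, frozen

ran, held = asyncio.run(demo())
print(ran)
print(held)
```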

๐Ÿ™

86+ built-in tools

Workspace editing, SSH machines, web search, code analysis, persistent memory, plan tracking, file management — all built in.

📋

Rules system

Define behavioral rules as plain Markdown files. Link them per conversation. The heartbeat agent gets its own dedicated instruction file.
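The rules-as-Markdown idea can be sketched as a loader that concatenates a conversation's linked rule files into the system prompt. The directory layout, file names, and function here are assumptions for illustration, not the platform's real schema.

```python
import tempfile
from pathlib import Path

# Sketch of a rules loader (assumed layout: one Markdown file per rule).
def build_system_prompt(rules_dir: Path, linked: list[str]) -> str:
    sections = []
    for name in linked:
        path = rules_dir / f"{name}.md"
        if path.exists():
            sections.append(path.read_text().strip())
    return "\n\n".join(sections)

# Usage: write two rule files, link both to a conversation.
rules_dir = Path(tempfile.mkdtemp())
(rules_dir / "tone.md").write_text("# Tone\nBe concise.")
(rules_dir / "safety.md").write_text("# Safety\nNever delete files without confirmation.")
prompt = build_system_prompt(rules_dir, ["tone", "safety"])
print(prompt)
```

Because the rules are plain Markdown, they can be edited, versioned, and reviewed like any other file in the workspace.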

💬

15 bubble types

Every message is a typed component — text, code, plans, traces, SSH output, heartbeat messages, todos, and more. Extensible with one file.

How orchestration works

1

You send a message

Ask anything — build a feature, analyze data, deploy to a server, research a topic.

2

The master plans

Your master LLM breaks the request into subtasks, picks the best worker for each one, and shows you the plan.

3

You confirm (or edit)

Reassign workers, edit instructions, add or remove tasks. You're always in control.

4

Workers execute in parallel

Each worker gets only the context it needs — trimmed, scoped, budget-capped. No wasted tokens.

5

Master synthesizes

Results are collected and the master produces a unified response. A trace shows exactly what happened — LLMs used, tools called, tokens spent.
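Step 4's "trimmed, scoped, budget-capped" context can be sketched as a filter plus a recency-first token budget. The tagging scheme and the 4-characters-per-token estimate are illustrative assumptions, not the platform's actual mechanism.

```python
# Sketch of per-worker context scoping: keep only messages tagged as
# relevant to the subtask, newest first, until the token budget is spent.
def scope_context(history: list[dict], topic: str, budget: int) -> list[dict]:
    relevant = [m for m in history if topic in m["tags"]]
    scoped, used = [], 0
    for msg in reversed(relevant):              # most recent first
        cost = max(1, len(msg["text"]) // 4)    # crude token estimate
        if used + cost > budget:
            break
        scoped.append(msg)
        used += cost
    return list(reversed(scoped))               # restore chronological order

history = [
    {"text": "a" * 40, "tags": {"db"}},
    {"text": "b" * 40, "tags": {"ui"}},
    {"text": "c" * 40, "tags": {"db"}},
]
scoped = scope_context(history, "db", budget=15)
print(len(scoped))
```

Dropping irrelevant and stale messages before each worker call is what keeps parallel delegation cheaper than sending the full history to every model.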

Built with

Modern stack, no magic, fully open.

FastAPI PostgreSQL React TypeScript Vite Tailwind CSS WebSockets Docker Compose Nginx Anthropic SDK OpenAI SDK Any OpenAI-compatible API