beta — early access

Orchestrate AI agents.
Fully secure.
Fully local.

The platform for developers who won't compromise on security or data sovereignty. Connect GitHub or GitLab, build your workflow, let the agent ship — in hardened containers you control.

// workflows

Build the workflow your team actually follows.

Define your own steps, control which tools the agent can use at each one, set custom prompts per step, and decide exactly when human review is required. From quick patches to multi-phase feature delivery — your call.
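A custom workflow of this kind can be pictured as a list of steps, each with its own prompt, tool allowlist, and review gate. The shape and field names below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    prompt: str                                      # custom prompt for this step
    tools: list[str] = field(default_factory=list)   # tools the agent may use here
    requires_review: bool = False                    # pause for human approval

# A hypothetical three-step delivery workflow.
workflow = [
    Step("analyze", "Read the issue and outline a plan.", ["read_file", "search"]),
    Step("implement", "Apply the plan and run the tests.",
         ["read_file", "write_file", "run_tests"]),
    Step("open_mr", "Summarize the diff for review.", ["read_file"],
         requires_review=True),
]
```

The review gate is just a flag the orchestrator checks before advancing; everything before it runs unattended.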

// integrations

GitHub and GitLab. Your issues, our agent.

Browse issues directly from the dashboard. The agent picks one up, analyzes it, executes the changes, and opens a merge or pull request — no copy-pasting, no manual handoff. Blocker relationships are respected: linked issues unblock automatically when their dependency closes.
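The unblocking rule is simple enough to state as a function: an issue becomes workable once every issue it is blocked by has closed. A minimal sketch (the data shapes are assumptions for illustration):

```python
def unblocked(blocked_by: dict[str, set[str]], closed: set[str]) -> set[str]:
    """Issues whose every blocking dependency is in the closed set."""
    return {issue for issue, deps in blocked_by.items() if deps <= closed}

# Issue 12 waits on 7 and 9; issue 15 waits only on 7.
graph = {"12": {"7", "9"}, "15": {"7"}}
```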

// security

Security powered by ysa.
Open source, auditable.

Every container runs on ysa, the open source container runtime we built specifically for AI agents (Apache 2.0). A MITM proxy sits between the agent and the internet: GET-only in strict mode, domain allowlist, rate limits. Exfiltrating your code or credentials becomes extremely difficult, even for an agent instructed to try. Keys are encrypted at rest and injected only at runtime.
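The strict-mode policy boils down to a per-request check on method and destination. This is an illustrative sketch of that kind of check, not ysa's actual proxy code; the allowlist entries are placeholders:

```python
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.anthropic.com", "github.com"}  # placeholder allowlist
ALLOWED_METHODS = {"GET"}                              # strict mode: read-only egress

def allow_request(method: str, url: str) -> bool:
    """Admit a request only if both the method and the host pass policy."""
    host = urlparse(url).hostname or ""
    return method.upper() in ALLOWED_METHODS and host in ALLOWED_DOMAINS
```

A prompt-injected agent that tries to POST your source tree to an unlisted domain fails both checks, which is the point of putting the proxy outside the agent's reach.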

// team

Organization-ready from day one.

Invite teammates, assign projects, manage roles. Tool presets and custom workflows are shared across your org so every team member runs the same guardrails.

// faq

Common questions

What does ysa stand for?

ysa stands for your secure agent, a name that reflects the two things we care about most: the agent works for you, on your machine, with your code; and security isn't optional.

How does this compare to Claude.ai, ChatGPT, or Cursor?

Those tools send your prompt to a cloud API and run the agent outside any security boundary — your filesystem, network, and host kernel are exposed with no guardrails. ysa platform runs agents in hardened containers on your machine: isolated, auditable, and under your control. Same state-of-the-art models, fundamentally different trust model.

Do you host the containers in the cloud?

No. Containers run entirely on your machine — your code, your infra, your control. If demand for cloud-hosted execution grows, that's something we'll look into. For now, local is the point.

Does my code leave my machine?

No. The agent runs locally in Podman containers on your machine or your own infra. Your codebase stays where it is. The only external calls are to the LLM API (using your own key) and to GitHub or GitLab for issue fetching and MR/PR creation, both of which you already trust with your code. And yes, fully air-gapped operation with a self-hosted LLM is coming soon.

Where are my API keys stored and how are they protected?

Keys can be set at two levels. At the organization level, they're stored encrypted in the platform database, decrypted and injected into the container only at runtime — so the whole team shares one key without each member having to configure their own. At the personal level, keys stay on your machine — never stored on our servers. No reason to create risk if you don't need to share them across the org. The platform itself never calls the LLM API; the agent running on your machine does, using your key directly.
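The resolution order described above (personal key kept local, org key decrypted only when injected at container start) can be sketched as follows; the function shape and names are assumptions, not the platform's API:

```python
from typing import Callable, Optional

def resolve_api_key(
    personal_key: Optional[str],
    org_ciphertext: Optional[bytes],
    decrypt: Callable[[bytes], str],
) -> Optional[str]:
    # A personal key never touches the platform database, so it wins.
    if personal_key:
        return personal_key
    # An org key is stored encrypted; decrypt only at injection time.
    if org_ciphertext is not None:
        return decrypt(org_ciphertext)
    return None
```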

Can the agent push to my main branch or break my repo?

No. Each issue gets an isolated git worktree on a dedicated branch. The agent works in complete isolation from your main branch. Nothing merges without you reviewing and approving the MR or PR. A git wrapper inside the container also strips dangerous git config options — hooks, SSH proxy, credential helpers — to prevent abuse.
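The wrapper idea is straightforward: filter out `-c key=value` overrides whose key falls in a blocked set before handing the arguments to the real git binary. A simplified sketch with an assumed blocklist (the config keys mirror the categories the answer above names, but this is illustrative, not ysa's actual code):

```python
# Hooks, SSH command, credential helpers, proxying: assumed blocked keys.
BLOCKED_PREFIXES = ("core.hookspath", "core.sshcommand",
                    "credential.helper", "http.proxy")

def sanitize_git_args(args: list[str]) -> list[str]:
    """Drop any `-c key=value` pair whose key matches a blocked prefix."""
    out, i = [], 0
    while i < len(args):
        if args[i] == "-c" and i + 1 < len(args):
            key = args[i + 1].split("=", 1)[0].lower()
            if key.startswith(BLOCKED_PREFIXES):
                i += 2  # skip both the flag and its value
                continue
        out.append(args[i])
        i += 1
    return out
```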

What's the difference between ysa and ysa platform?

ysa is a container runtime for AI agents, the security backbone any orchestration layer can build on: hardened Podman containers, a MITM network proxy, seccomp profiles, a CLI, and a clean API. It is and will stay focused on that one job. ysa platform is the orchestration layer built on top: multi-tenancy, GitHub and GitLab integration, customizable workflows, and team management. All container security comes from ysa.

Which LLMs are supported?

Claude (Anthropic) and Mistral today, with multiple models available for each. You configure the provider and model per project. Support for self-hosted and local models is on the roadmap; we know that for many teams, sovereignty means not depending on third-party APIs at all.

Is this stable enough to use on real work?

ysa platform is in beta — but it's already running on real production codebases. The security model is solid. The product surface is still evolving: expect rough edges and fast iteration. We're looking for developers willing to run it on real projects and give honest feedback.

Why a web application instead of a desktop one?

A web application lets you access your dashboard from any machine without installation or updates — open a browser, you're in. It also means a single codebase to maintain and evolve, rather than separate native apps per platform. That said, if there's strong demand for a native desktop experience down the road, it's something worth considering.

// open source

The security layer is auditable.
Every line of it.

ysa is a container runtime for AI agents — and nothing else. Its job is to be the security backbone that any orchestration layer can build on top of. Read the seccomp profile, the MITM proxy, the OCI hooks. Fork it. Build on it. We did.

→ github.com/ysa-ai/ysa