
How do you bring ClawdBot-, MoltBot-, and OpenClaw-like agents to your organization in a secure, production-ready way?

Quick Start

How will your users interact with the agent?

# Requires: Docker
$ docker pull archestra/platform:latest
$ docker run -p 9000:9000 -p 3000:3000 \
    -e ARCHESTRA_QUICKSTART=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v archestra-postgres-data:/var/lib/postgresql/data \
    -v archestra-app-data:/app/data \
    archestra/platform
# Full guide: Deployment Guide
Security Foundation

Deterministic Agentic Guardrails to Prevent Data Exfiltration

ClawdBot vulnerability demonstration
  1. Sending ClawdBot an email containing a prompt injection
  2. Asking ClawdBot to check e-mail
  3. Receiving the private key from the hacked machine

Agents can leak data because of the Lethal Trifecta β€” a dangerous combination of: access to private data, processing untrusted content, and external communication ability. When all three are present, prompt injection can exfiltrate sensitive data.

Archestra provides deterministic guardrails preventing agents from leaking sensitive data, corrupting systems, and following prompt injections.
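The guardrail idea can be expressed deterministically: once an agent session has both touched private data and processed untrusted content, any externally-communicating tool call is refused, regardless of what the model says. The sketch below is illustrative only, not the actual Archestra API; the names (`AgentContext`, `EXTERNAL_TOOLS`, `allow_tool_call`) are assumptions.

```python
# Illustrative sketch (not the Archestra API): a deterministic check that
# refuses a tool call when all three "Lethal Trifecta" legs are present.
from dataclasses import dataclass

@dataclass
class AgentContext:
    has_private_data: bool = False       # session has read sensitive data
    saw_untrusted_content: bool = False  # session processed external input

# Hypothetical set of tools that can communicate externally (exfiltration risk)
EXTERNAL_TOOLS = {"send_email", "http_post"}

def allow_tool_call(ctx: AgentContext, tool: str) -> bool:
    """Deterministically block external communication once the agent has
    both touched private data and processed untrusted content."""
    if tool in EXTERNAL_TOOLS and ctx.has_private_data and ctx.saw_untrusted_content:
        return False  # all three trifecta conditions hold -> block
    return True

ctx = AgentContext(has_private_data=True, saw_untrusted_content=True)
print(allow_tool_call(ctx, "send_email"))     # blocked
print(allow_tool_call(ctx, "read_calendar"))  # internal tool is fine
```

Because the check is a plain boolean over session state rather than a model judgment, a prompt injection cannot talk its way past it.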

User Interface

Internal ChatGPT-like UI

Chat Interface
ChatGPT-like Experience

Intuitive chat interface for all your users, technical and non-technical alike. Connect to any MCP server from your private registry with a single click. Includes a company-wide prompt library to share best practices across teams.

Private Prompt Registry

Share and reuse proven prompts across your organization

One-Click MCP Access

Connect to any approved MCP server instantly from the interface

Multi-Model Support

Works with Claude, GPT-4, Gemini, and open-source models

🦀 Talk to autonomous agents via Slack, MS Teams, and even e-mail

Agent conversation in Slack
Slack example
Sales assistant in Microsoft Teams chat
MS Teams example

More than a chat UI, Archestra is an open-source infrastructure component for scaling autonomous agents across the whole organization.

Centralized Governance

Private MCP Registry with Full Governance

Private MCP Registry Interface
Enterprise Ready

Add MCPs to your private registry to share them with your team: self-hosted and remote, self-built and third-party. Maintain complete control over your organization's MCP ecosystem.

Version Control

Track and manage different versions with full rollback capabilities

Access Management

Granular permissions and team-based access control

Compliance & Governance

Ensure all deployments meet security and compliance standards

Cloud-Native Architecture

Kubernetes-Native MCP Orchestrator

K8s Native

For multi-team and multi-user environments, bring order to secrets management, access control, logging, and observability. Run MCP servers in Kubernetes with enterprise-grade isolation, audit trails, and centralized governance across your entire organization.

Secure Credentials

Store secrets in HashiCorp Vault or Kubernetes Secrets with automatic rotation

Learn about secrets management β†’

Auto-Scaling

Automatic scaling based on load with health checks and monitoring

Cost Optimization

Cost Monitoring, Limits and Dynamic Optimization

Cost Monitoring Dashboard
Token usage breakdown

Per-team, per-agent, or per-organization cost monitoring and limits. The dynamic optimizer automatically reduces costs by up to 96% by intelligently switching to cheaper models for simpler tasks.
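The core idea behind dynamic optimization can be shown with a tiny routing heuristic. This is a hypothetical sketch, not Archestra's actual optimizer; the model names and the length threshold are made-up assumptions.

```python
# Hypothetical model router: send short, tool-free prompts to a cheaper model.
# Model identifiers and the 500-character threshold are illustrative only.
CHEAP_MODEL = "cheap-model"
STRONG_MODEL = "strong-model"

def pick_model(prompt: str, needs_tools: bool = False) -> str:
    """Use the cheap model for short prompts without tool use; else escalate."""
    if not needs_tools and len(prompt) < 500:
        return CHEAP_MODEL
    return STRONG_MODEL

print(pick_model("Summarize this sentence."))     # routed to the cheap model
print(pick_model("x" * 2000, needs_tools=True))   # escalated to the strong model
```

A production router would classify task difficulty more carefully, but even a crude heuristic like this is where the bulk of the savings comes from: most traffic is simple.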

Real-time Cost Tracking

Monitor spending across all LLM providers with per-token granularity

Dynamic Model Selection

Automatically switch to cost-effective models for simple tasks

Granular Budget Limits

Set spending limits per team, per agent, or organization-wide

Tool Call & Result Compression

Automatically compress tool calls and results to reduce token usage and costs
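A minimal sketch of what tool-result compression can look like, assuming a simple whitespace-collapse plus head/tail truncation strategy; the character cap and the truncation marker are illustrative assumptions, not Archestra's actual algorithm.

```python
# Illustrative tool-result compression: collapse whitespace runs and truncate
# oversized results before feeding them back to the model.
import re

MAX_CHARS = 2000  # illustrative cap on a single tool result

def compress_tool_result(text: str, max_chars: int = MAX_CHARS) -> str:
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace runs
    if len(text) > max_chars:
        # keep head and tail, mark the elision so the model knows it is partial
        head, tail = text[: max_chars // 2], text[-(max_chars // 2):]
        text = head + " ...[truncated]... " + tail
    return text
```

Since tool results are often verbose JSON or logs, trimming them before they re-enter the context window cuts token spend on every subsequent turn.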

Observability

Works with Your Observability Stack

Observability Dashboard
Grafana Dashboard

Export metrics to Prometheus, traces to OpenTelemetry, and visualize everything in Grafana. Track LLM token usage, request latency, tool blocking events, and system performance with pre-configured dashboards.

Prometheus Metrics Export

llm_tokens_total, llm_request_duration_seconds, http_request_duration_seconds

View all metrics β†’
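For illustration, a counter such as `llm_tokens_total` is served in the Prometheus exposition format and can be aggregated client-side; the label sets and sample values below are invented.

```python
# Parse a couple of Prometheus exposition-format lines for the llm_tokens_total
# metric named above. The sample values are made up for illustration.
sample = """\
llm_tokens_total{model="some-model",type="input"} 1200
llm_tokens_total{model="some-model",type="output"} 340
"""

def parse_exposition(text: str) -> dict:
    """Map each 'name{labels}' string to its float sample value."""
    out = {}
    for line in text.strip().splitlines():
        name_labels, value = line.rsplit(" ", 1)
        out[name_labels] = float(value)
    return out

metrics = parse_exposition(sample)
# Total tokens across all label combinations of llm_tokens_total
print(sum(v for k, v in metrics.items() if k.startswith("llm_tokens_total")))
```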

OpenTelemetry Distributed Tracing

Full request traces with span attributes for every LLM API call

Configure tracing β†’

LLM Performance Metrics

Time to first token, tokens per second, blocked tool calls tracking

See LLM metrics β†’
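Both headline latency metrics are simple functions of timestamps captured around a streaming response. This sketch uses synthetic timestamps; the function name and dict keys are assumptions for illustration.

```python
# Compute time-to-first-token and tokens-per-second from timestamps recorded
# around a (simulated) streaming LLM response. Values here are synthetic.
def llm_stream_stats(t_request: float, t_first_token: float,
                     t_done: float, n_tokens: int) -> dict:
    ttft = t_first_token - t_request   # time to first token (seconds)
    gen_time = t_done - t_first_token  # pure generation time (seconds)
    tps = n_tokens / gen_time if gen_time > 0 else 0.0
    return {"time_to_first_token_s": ttft, "tokens_per_second": tps}

stats = llm_stream_stats(t_request=0.0, t_first_token=0.4, t_done=2.4, n_tokens=100)
print(stats)  # ~0.4 s to first token, ~50 tokens/s
```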

Pre-configured Grafana Dashboards

Ready-to-use dashboards for monitoring your AI infrastructure

Setup Grafana β†’

Production Ready

PERFORMANCE

Lightning Fast

45ms

95th percentile latency

View Benchmarks
IaC

Terraform Provider

Automate your entire Archestra deployment with Infrastructure as Code

terraform init archestra
View Provider
KUBERNETES

Helm Chart

Production-ready Kubernetes deployment with a single command

helm install archestra
Deployment Guide
Newsletter

Short, crisp, to-the-point e-mails about Archestra

No spam, unsubscribe at any time. We respect your privacy.

Contributors

Thank you for contributing and continuously making Archestra better. You're awesome 🫶



Bi-Weekly Community Calls

Every Other Tuesday at 2:00 PM London Time

View Details