OpenClaw Deploy
OpenClaw Deploy lets you spin up a personal AI Telegram bot in under 60 seconds — powered by Claude, GPT-4, Gemini, or a locally-hosted Ollama model. Each bot runs in an isolated Docker container on dedicated Hetzner infrastructure. You bring your own API key; we handle the hosting.
BYOK (Bring Your Own Key): Your API key is passed directly to your container at deploy time and is never stored in our database or logs.
What you get
- A live Telegram bot at t.me/YourBotName
- Full OpenClaw agent capabilities — web search, memory, code execution, tools
- Isolated container — your bot doesn't share memory or context with other users
- A dashboard to monitor status, view logs, stop or redeploy
- Stripe-managed subscription — cancel any time
Requirements
- A Google account (used for authentication)
- A Telegram bot token from @BotFather
- An API key for your chosen AI provider (not required for Ollama)
- An active Basic or Pro subscription
Quick start
From zero to a live bot in four steps.
Create a Telegram bot
Open Telegram, message @BotFather and run /newbot. Follow the prompts to choose a name and username. Copy the bot token it gives you — it looks like 123456:ABC-DEF1234...
Get an AI API key
Choose your provider and grab an API key:
- Claude (Anthropic): console.anthropic.com → API Keys
- GPT-4 (OpenAI): platform.openai.com/api-keys
- Gemini (Google): aistudio.google.com
- Ollama: No key needed — uses our on-VPS Ollama instance
Subscribe and deploy
Go to openclaw.aiindigo.com, sign in with Google, pick a plan, then fill in your bot token and API key. Click Deploy Bot. Your container will be live in 15–30 seconds.
If this is your first deploy, you'll be prompted to subscribe first. After payment you'll be redirected back and can deploy immediately.
Chat with your bot
Open your bot in Telegram and send /start. Your AI agent will respond immediately. Share the link with anyone you want to give access to.
Supported models
Pick any provider at deploy time. You can redeploy with a different model at any time.
| Provider | Default model | Custom model | Key required | Notes |
|---|---|---|---|---|
| 🟣 Anthropic | claude-sonnet-4-5 | Yes | Anthropic key | Best reasoning and instruction-following |
| 🟢 OpenAI | gpt-4o-mini | Yes | OpenAI key | Fast, cost-effective, great for chat |
| 🔵 Google | gemini-2.0-flash | Yes | Google AI key | Long context, multimodal |
| ⚙️ Ollama | deepseek-r1:32b | Yes | None (free) | Runs on-VPS. Shared resource — may be slower under load |
The model name field is editable. Any model your provider supports can be entered — e.g. claude-opus-4-5, gpt-4o, or llama3.1:70b.
Available Ollama models (on-VPS)
| Model | Size | Best for |
|---|---|---|
| deepseek-r1:32b | 32B params | Reasoning, code, math |
| qwen3:32b | 32B params | Multilingual, instruction-following |
| phi4:14b | 14B params | Lightweight, fast responses |
| mistral:7b | 7B params | Fast, low latency |
| llama3.1:8b | 8B params | General-purpose chat |
How it works
Architecture
Each user gets an isolated Docker container running the openclaw-headless:latest image — a full OpenClaw agent configured to listen on Telegram.
```
User browser
      │
      ▼  POST /api/deploy (HTTPS, Nginx → Express :3200)
openclaw-deploy API ──► docker run ──► Container openclaw-user-{id}
      │                                      │
      ▼                                      ▼
  SQLite DB                           Telegram Bot API
  deployments table               ◄── container polls for messages
  (no API key stored)
```
Container lifecycle
- Deploy: Any existing container for your user is stopped first, then a fresh one is launched with your credentials injected as environment variables.
- Running: Container polls Telegram for messages and responds using your model/key.
- Stop: Container is stopped but the deployment record is retained. You can redeploy at any time.
- Redeploy: Same flow as deploy — enter credentials again (API key is never cached).
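The lifecycle above can be sketched as two host-side Docker commands. The image name, the `openclaw-user-{id}` naming pattern, and the `--restart unless-stopped` policy come from this document; the environment variable names are illustrative assumptions, not the service's actual configuration.

```shell
# Hypothetical sketch of what one deploy roughly does on the host.
# Env var names (TELEGRAM_BOT_TOKEN, PROVIDER_API_KEY, MODEL_NAME) are
# assumptions for illustration only.

# Stop and remove any existing container for this user first.
docker rm -f openclaw-user-42 2>/dev/null || true

# Launch a fresh container with credentials injected as environment
# variables — nothing is written to disk or to a database.
docker run -d \
  --name openclaw-user-42 \
  --restart unless-stopped \
  -e TELEGRAM_BOT_TOKEN="123456:ABC-DEF..." \
  -e PROVIDER_API_KEY="sk-ant-..." \
  -e MODEL_NAME="claude-sonnet-4-5" \
  openclaw-headless:latest
```

Note that no ports are published: the container only makes outbound connections to Telegram and the AI provider, which matches the networking notes below.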
Infrastructure
- VPS: Hetzner nbg1, 24 GB RAM, Ubuntu 24 — CPU-only
- Max containers: 30 simultaneous running bots per VPS
- Networking: Each container connects outbound-only (to Telegram and your AI provider). Port 18789 (OpenClaw gateway) is never exposed externally.
- SSL: TLS via Let's Encrypt, auto-renewing. All API traffic is HTTPS.
Security & privacy
What we store
| Data | Stored? | Where |
|---|---|---|
| Google profile (name, email, picture) | Yes | SQLite users table, on-VPS |
| Deployment metadata (model, provider, container name, status) | Yes | SQLite deployments table, on-VPS |
| Stripe customer/subscription IDs | Yes | SQLite subscriptions table, on-VPS |
| AI API key | Never | Passed to container env only, not logged or persisted |
| Telegram bot token | Never | Passed to container env only, not logged or persisted |
| Conversation history | Never | Stored inside your container only; destroyed on stop |
Authentication
We use Google Identity Services (OAuth 2.0). No passwords are stored. Sessions are JWT cookies (httpOnly, SameSite=lax, 30-day expiry). Sign out clears the cookie immediately.
Container isolation
Each container runs with only the environment variables it needs. Docker networking ensures containers cannot communicate with each other. The internal Ollama service is bound to 172.17.0.1 (Docker bridge) and is not reachable from the public internet.
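From inside a user container, the shared Ollama service would be reached over the Docker bridge. The bridge address (172.17.0.1) comes from the isolation notes above; the port (11434) and the `/api/tags` endpoint are Ollama's defaults and are assumed here, not stated by this service.

```shell
# Sketch: list the models available on the shared on-VPS Ollama instance,
# run from inside a deployed container. Port 11434 and /api/tags are
# Ollama's upstream defaults — an assumption, not part of this document.
curl -s http://172.17.0.1:11434/api/tags
```

Because the service is bound to the bridge address only, the same request from the public internet fails — which is the isolation property being described.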
If you stop your bot, the container is removed and its in-memory conversation history is lost. Long-term memory that OpenClaw persisted to disk inside the container is also removed. Plan to export anything important before stopping.
User dashboard
Once you sign in and have an active deployment, the deploy form is replaced by your dashboard.
Status indicator
- ● Running — bot is live and responding to messages
- ● Starting — container is booting (usually <30s)
- ● Stopped — container is not running; bot is offline
Actions
| Action | What it does |
|---|---|
| ■ Stop | Gracefully stops your container. Bot goes offline immediately. |
| 📋 Logs | Fetches the last N lines of your container's stdout/stderr. Click Refresh to update. |
| + New deploy | Shows the deploy form. Your existing bot is stopped and replaced. You'll need to re-enter your API key and bot token for security. |
| 🚀 Redeploy bot | Shown when bot is stopped. Same as New deploy. |
Subscription badge
An active Basic or Pro plan shows a ✦ pro badge next to your status. If no badge appears, your subscription may be inactive — check the pricing section to renew.
API reference
The backend runs at https://openclaw.aiindigo.com. All endpoints require a valid JWT cookie (set via POST /api/auth/google) unless marked public.
The API is primarily consumed by the web frontend. Direct API access is supported — set the Cookie: token=<jwt> header on requests.
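Direct access might look like the sketch below. Only `POST /api/auth/google` and `POST /api/deploy` are named explicitly in this document; the `/api/status` path is an assumption based on the dashboard's status view, and the JWT value is a placeholder.

```shell
# Sketch of direct API access with a session cookie.
JWT="eyJhbGciOi..."   # placeholder — obtained via POST /api/auth/google

# /api/status is an assumed path, inferred from the dashboard's status view.
curl -s "https://openclaw.aiindigo.com/api/status" \
  -H "Cookie: token=$JWT"
```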
Authentication
Exchange a Google Identity Services credential for a session cookie.
Body
Response
```json
{
  "success": true,
  "user": {
    "id": 1,
    "email": "you@gmail.com",
    "name": "Your Name",
    "picture": "https://..."
  }
}
```
Returns the authenticated user's profile.
Clears the JWT cookie. No body required.
Deployment
Deploys (or redeploys) a bot container for the authenticated user. Stops any existing running container first.
Body
Body

| Field | Description |
|---|---|
| provider | One of anthropic, openai, google, ollama |
| model | Model name; defaults to claude-sonnet-4-5 |
| API key | Not required when provider is ollama. Never stored. |
| Telegram bot token | Token from @BotFather; defines the bot's identity |

Response
```json
{
  "success": true,
  "containerId": "a1b2c3d4...",
  "botName": "@YourBot",
  "botUrl": "https://t.me/YourBot",
  "status": "starting"
}
```
Returns 402 Payment Required if no active subscription exists.
Status & monitoring
Returns the latest deployment record and live Docker status for the current user.
Response
```json
{
  "id": 1,
  "status": "running",        // "running" | "stopped" | "starting" | "not_deployed"
  "model_provider": "anthropic",
  "model_name": "claude-sonnet-4-5",
  "bot_name": "@YourBot",
  "container_name": "openclaw-user-1",
  "created_at": "2026-02-22T14:00:00.000Z",
  "container": { "running": true, "status": "running" }
}
```
Returns the last N lines of your container's stdout/stderr as plain text.
Stops the authenticated user's running container. No body required.
Response
```json
{ "success": true, "message": "Container stopped" }
```
Subscription
Returns the current user's subscription plan and status.
Response
```json
{
  "subscription": {
    "plan": "pro",
    "status": "active",
    "current_period_end": 1774000000
  }
}
```
Creates a Stripe checkout session and returns a redirect URL.
Body

| Field | Description |
|---|---|
| plan | basic or pro |

Response
```json
{ "url": "https://checkout.stripe.com/..." }
```
Public config
Returns public runtime config (Google Client ID for the frontend).
Response
```json
{ "googleClientId": "123....apps.googleusercontent.com" }
```
Pricing
You pay for infrastructure hosting. AI model costs are yours (via your own API key).
Basic
- 1 active bot
- Telegram channel
- All AI providers (BYOK)
- Ollama (free, on-VPS)
- 500 MB RAM container
- Container logs
- Community support
Pro
- 3 active bots
- Telegram + Discord (coming soon)
- All AI providers (BYOK)
- Ollama (free, on-VPS)
- 1 GB RAM per container
- Container logs + analytics
- Priority support
Subscriptions are managed via Stripe. Cancel any time from the billing portal — no lock-in, no hidden fees.
FAQ
Is my API key safe?
Yes. Your API key is passed as an environment variable directly into your Docker container at deploy time. It is never written to our database, logs, or any persistent storage. Once the container is running, even our server cannot retrieve it.
The code is open to inspection — see src/routes/deploy.js. The comment "apiKey intentionally omitted from DB" is explicit in the source.
What happens if I stop my bot?
The Docker container is stopped and removed. Your deployment record (model, provider, bot name, status) is retained in the database so the dashboard can still show it. Your conversation history and any in-container memory is lost.
To bring it back online, use Redeploy bot — you'll need to re-enter your API key and bot token.
Can I change models without getting a new bot?
Yes. Click + New deploy in the dashboard, select a different model, and re-enter your credentials. The same Telegram bot (same @username) will now run with the new model — because the bot token is what defines the Telegram identity, not the model.
How many bots can I run?
Basic: 1 active bot. Pro: 3 active bots. If you deploy a new bot while one is running, the existing one is stopped first (on Basic). Pro users get up to 3 simultaneous containers.
Why does Ollama feel slower than Claude/GPT?
Ollama models run on the VPS CPU (no GPU). Larger models like deepseek-r1:32b and qwen3:32b generate tokens at ~5–15 tok/s on CPU, which is noticeably slower than cloud API calls. For fastest responses, use mistral:7b or llama3.1:8b, or switch to a cloud provider with your own key.
Does the bot stay online 24/7?
Yes, as long as your subscription is active and you haven't manually stopped it. The container auto-restarts if it crashes (PM2 manages the host process; Docker manages the container). If the VPS is rebooted, all containers restart automatically via Docker's --restart unless-stopped policy.
Can I add my bot to group chats?
Yes — it's a standard Telegram bot. Add it to any group or channel. OpenClaw responds when directly mentioned or when a message is sent in a DM. Group chat behavior (whether it responds to every message or only mentions) depends on your OpenClaw configuration.
What is OpenClaw?
OpenClaw is the open-source AI agent framework that powers every bot deployed here. It handles message routing, tool calls (web search, memory, code), multi-channel support, and conversation history. Visit openclaw.ai or the community Discord for more.
How do I cancel my subscription?
Email contact@aiindigo.com or use the Stripe billing portal link (coming soon to the dashboard). Cancellation takes effect at the end of your current billing period — your bot stays online until then.