A RunLobster Company
v2.4 · Self-host or hosted
╦ ╦╔═╗╔═╗╔╦╗  ╔═╗╔═╗╔═╗╔╗╔╔═╗╦  ╔═╗╦ ╦
╠═╣║ ║╚═╗ ║   ║ ║╠═╝║╣ ║║║║  ║  ╠═╣║║║
╩ ╩╚═╝╚═╝ ╩   ╚═╝╩  ╚═╝╝╚╝╚═╝╩═╝╩ ╩╚╩╝

Deploy a real OpenClaw agent on your own infrastructure in 90 seconds. One command.

MIT licensed

~/projects/my-agent · zsh
$ npx hostopenclaw deploy --provider vercel

Ship to wherever you already pay.

One command picks the right buildpack for your provider. No vendor lock-in. Move between them whenever you want.

Vercel
Recommended
Railway
Supported
Fly.io
Supported
Render
Supported
AWS
Supported
Self-host
Supported

Real infrastructure.
Not a managed black box.

Self-hosted OpenClaw is the same engine that powers RunLobster — minus the hosting bill, plus full source access.

Full Linux, no sandbox

Real Ubuntu container. apt, brew, git, any package, any language. Your agent has the same machine you would give a junior developer.

ssh in. it's a real box.

Real Chromium browser

Logs into anything you can. LinkedIn, internal tools, legacy CRMs. Watch it click through pages live from your dashboard.

8000 tools, no API needed.

Persistent memory

Built-in SQLite + filesystem so your agent remembers context between conversations. Doesn't forget who you are between restarts.

memory.db survives reboots.

Built-in scheduler

Cron at the language level. Schedule jobs in plain English ("check stripe at 7am every weekday") and they run forever.

openclaw schedule --english
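OpenClaw's English parser isn't shown here, but the core idea of mapping a phrase like the one above onto a standard five-field cron expression can be sketched as follows. The regex and its supported phrases are purely illustrative:

```python
import re

# Illustrative only: handles phrases shaped like "... at 7am every weekday".
DAYS = {"every weekday": "1-5", "every day": "*"}

def english_to_cron(phrase: str) -> str:
    """Map e.g. 'check stripe at 7am every weekday' to '0 7 * * 1-5'."""
    m = re.search(r"at (\d{1,2})(am|pm) (every \w+)", phrase)
    if not m:
        raise ValueError(f"unsupported phrase: {phrase!r}")
    # convert 12-hour clock to 24-hour (12am -> 0, 12pm -> 12)
    hour = int(m.group(1)) % 12 + (12 if m.group(2) == "pm" else 0)
    return f"0 {hour} * * {DAYS[m.group(3)]}"

print(english_to_cron("check stripe at 7am every weekday"))  # 0 7 * * 1-5
```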

Bring your own model

ANTHROPIC_API_KEY, OPENAI_API_KEY, or your own LLM endpoint. Switch providers without changing code. No vendor markup.

$ export ANTHROPIC_API_KEY
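Provider selection from the environment can be as simple as a lookup. This sketch assumes three variable names; the first two appear above, while `LLM_ENDPOINT` is a hypothetical stand-in for "your own LLM endpoint":

```python
import os

# Illustrative mapping; checked in order, first match wins.
PROVIDERS = {
    "ANTHROPIC_API_KEY": "anthropic",
    "OPENAI_API_KEY": "openai",
    "LLM_ENDPOINT": "custom",   # hypothetical name for a self-hosted endpoint
}

def pick_provider(env=None) -> str:
    env = os.environ if env is None else env
    for var, name in PROVIDERS.items():
        if env.get(var):
            return name
    raise RuntimeError("set ANTHROPIC_API_KEY, OPENAI_API_KEY, or LLM_ENDPOINT")

print(pick_provider({"OPENAI_API_KEY": "sk-test"}))  # openai
```

Because the choice is read from the environment at startup, switching providers is one env var change with no rebuild, which is the claim the section above is making.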

Open source, MIT

Read every line, fork it, modify it, run it forever. No telemetry, no phone-home, no usage caps. Your infrastructure, your rules.

git clone openclaw

Six layers.
Each one swappable.

Every layer is independent. Switch your model provider without touching the runtime; move hosting providers without rewriting your agent.

layer 01

Your provider

Vercel, Railway, Fly.io, AWS, or your own VPS. The container layer.

  • 1 vCPU / 1 GB RAM baseline
  • Auto-scale on demand
  • Bring-your-own region
layer 02

Compute container

An isolated Ubuntu environment with Linux, browser, and persistent disk.

  • Ubuntu 24.04 LTS
  • Headless Chromium
  • SQLite + filesystem
layer 03

OpenClaw runtime

The agent loop, memory, scheduler, and tool router. MIT licensed.

  • Plan → act → observe loop
  • Persistent memory store
  • Cron-based scheduler
layer 04

Model provider

Bring your own key. Anthropic, OpenAI, or self-hosted Llama.

  • Anthropic Claude
  • OpenAI GPT
  • Self-hosted Llama 4
layer 05

Tools layer

Browser, file system, terminal, and 3,000+ integrations via Composio.

  • Real Chromium
  • Full Linux access
  • Composio integrations
layer 06

Your interface

Web dashboard, CLI, REST API, and webhooks. You pick the surface.

  • Web dashboard at /
  • CLI: openclaw chat
  • REST API + webhooks
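The plan → act → observe loop named in layer 03 can be sketched in a few lines. The planner and tool interfaces below are stand-ins, not OpenClaw's real ones:

```python
# Minimal plan -> act -> observe loop sketch (hypothetical interfaces).
def run_agent(goal, plan, tools, max_steps=10):
    observations = []
    for _ in range(max_steps):
        action = plan(goal, observations)      # plan: decide the next tool call
        if action is None:                     # planner signals completion
            break
        name, arg = action
        observations.append(tools[name](arg))  # act, then observe the result
    return observations

# Toy run: a planner that calls one tool, sees the result, and stops.
tools = {"echo": lambda s: f"observed:{s}"}
def plan(goal, obs):
    return None if obs else ("echo", goal)

print(run_agent("check stripe", plan, tools))  # ['observed:check stripe']
```

The real runtime adds persistent memory and scheduling around this loop, but the shape — model proposes an action, a tool runs it, the result feeds the next step — is the same.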

Three commands.
Pick your provider.

vercel-deploy.sh
# 1. Deploy the runtime
npx hostopenclaw deploy --provider vercel

# 2. Add your model key
vercel env add ANTHROPIC_API_KEY

# 3. Talk to your agent
openclaw chat

Average time to first agent run: 90 seconds

90s
Time to first agent run
From `npx` to live URL
$2-5
Monthly compute cost
On Vercel hobby tier
0
Vendor lock-in
MIT licensed source
100%
Uptime since v2.0
On Fly.io reference deploy
ACME · BLOOMSTACK · NOTEFLOW · TRAILPOINT · PIVOTWELL · BRIDGEMARK

Engineers who shipped
on their own terms.

“I ran the npx command on a Saturday morning. By the time my coffee was done I had a working OpenClaw agent at a real URL. The whole thing felt illegal.”

M
Marcus Chen
Founding Engineer · Noteflow
@marcus_dev

“We needed to keep our agent infrastructure inside our own AWS for compliance reasons. HostOpenClaw was the only thing that let us self-host without giving up the polish of a managed service.”

S
Sarah Patel
Head of Engineering · Bloomstack
@sarahp

“The architecture doc actually explains the trade-offs instead of hand-waving. I knew exactly what I was getting into before I deployed. Felt like reading a real engineering doc, not marketing.”

J
Jordan Vega
Backend Lead · Pivotwell
@jvega

“Switched our model provider from Anthropic to OpenAI in 30 seconds. One env var change, no rebuild. This is what model-agnostic actually looks like.”

E
Eli Rosen
Staff Engineer · Trailpoint
@elirosen

“Open source means I read the runtime before deploying it. Found one thing I wanted to change, opened a PR, it was merged the next day. That trust loop is why I picked OpenClaw over the closed alternatives.”

P
Priya Singh
Solo Founder · Quickflux
@priya_codes

“We're running the same agent in three regions across Vercel, Fly, and our on-prem cluster. Same code, same behavior, three providers. This is the dream and it actually works.”

T
Tom Walker
Platform Engineer · Bridgemark
@tomwalker

The questions engineers ask.

What does the one command actually do?
It detects your local environment, asks you to pick a provider (or auto-picks Vercel), reads your model API key from your env, and ships the OpenClaw runtime to that provider with all its dependencies. Total time: ~90 seconds. The output is a real URL where your agent lives.

How is this different from just cloning the repo?
It's the same codebase. The bootstrapper just handles the boring parts — choosing the right buildpack, configuring the container, wiring up the database, setting environment variables. You can absolutely git clone and run it yourself; many people do. The bootstrapper exists because not everyone wants to spend their Saturday writing a Dockerfile.

Is the source actually open?
Yes. The full source is on GitHub under the MIT license. Read it, fork it, modify it, run it forever. We upstream changes from the hosted runtime, so the open-source version stays in sync.

Which models can I use?
Anthropic Claude (Opus, Sonnet, Haiku), OpenAI GPT-4 family, and any OpenAI-compatible endpoint. We also support self-hosted Llama 4 via Ollama or vLLM. Switch providers with one env var change — no rebuild required.

What does it cost to run?
Baseline: 1 vCPU and 1 GB of RAM. Most agents idle at ~150 MB RAM and spike to 800 MB when running browser-heavy tasks. Storage: about 200 MB for the runtime + whatever your agent persists in its memory.db. On Vercel hobby tier this typically costs $2-5/month depending on usage.

What data leaves my infrastructure?
The runtime ships with no telemetry, no phone-home, and no external dependencies beyond your chosen model provider. Your agent's memory and conversation history live entirely on your infrastructure. We never see them. The model provider sees prompts, but you control which one.

Is there a hosted version?
Yes — runlobster.com is the managed version. Same engine, hosted by us, with a polished web dashboard, billing, and zero infra to maintain. Great for non-technical buyers. HostOpenClaw is for engineers who want full control of their stack.

How do you make money on the free version?
We don't, on hostopenclaw.io. The bootstrapper is free, the source is MIT, and we have no upsell here. We make money on runlobster.com (the managed version). HostOpenClaw exists so the technical audience can self-host without friction — we'd rather have you running OpenClaw on your own infra than running a competitor on theirs.

Pick your operator.

Both tiers run the same engine. The only question is who you want managing the infrastructure.

Self-hosted
$0/forever

Run on your own infra. Pay your own provider. Own everything.

  • MIT licensed source code
  • Bring your own model key
  • Bring your own provider
  • All 3,000+ integrations
  • Real browser, real Linux
  • Built-in scheduler
  • Community support
Get the source
Most popular
Managed (RunLobster)
$0 to start

We host it. You use it. Free tier for starters, paid plans for higher limits.

  • Everything in Self-hosted
  • No infra to manage
  • Polished web dashboard
  • Auto-scale + auto-update
  • Priority support
  • Pre-built agent roles
  • Single-tenant compute
Try Hosted

Your agent. Your infra.
Live in 90 seconds.

Open a terminal. Paste the command. Get back a real URL. We'll be on your side the whole time.

MIT licensed · Zero telemetry · Bring your own model