Your OpenClaw API Keys Are Leaking — Here Are 5 Levels of Fix
Most OpenClaw guides tell you to paste your API key into openclaw.json and move on. What they don't mention: that key is probably ending up in the LLM's context window on every single turn. Snyk researchers found that 7% of ClawHub skills actively expose credentials through the model's prompt history. Plaintext config is just the beginning. Here's the complete hierarchy of API key protection — from “leaking everywhere” to “genuinely hardened.”
Why this matters more than you think
An exposed API key isn't just a billing risk. In the OpenClaw ecosystem, a leaked Anthropic or OpenAI key gives an attacker unlimited token spend on your account. A leaked Telegram bot token lets someone hijack your agent's messaging channel. A leaked Gmail OAuth token gives full read/write access to your inbox.
The attack vectors are less obvious than you'd expect. Sure, someone could find your key in a public GitHub commit. But OpenClaw introduces vectors that don't exist in traditional applications:
- The model catalog leak. OpenClaw's internal model configuration serializes resolved API key values into the prompt context that gets sent to the LLM provider. Your key travels through the model's context window on every message. This is tracked as a known security issue in the OpenClaw repository (issue #11202).
- Skill-driven exfiltration. A malicious or poorly-written skill can instruct the agent to read environment variables and output them in chat. Snyk's scan of nearly 4,000 ClawHub skills found around 280 that expose credentials this way — not through malware, but through careless SKILL.md instructions that treat the agent like a local script.
- MEMORY.md injection. If an agent gets prompt-injected (via a malicious email, web page, or chat message), it can be tricked into writing its configuration context — including tokens — into memory files that persist across sessions and may get backed up or synced.
- Debug log exposure. Running the gateway at debug log level for troubleshooting can dump raw token values into log files. Those logs get shared in Discord support threads, pasted into GitHub issues, or left on disk with world-readable permissions.
The common thread: your key doesn't have to be “stolen” to be exposed. It just has to exist somewhere the agent, the model, or a careless backup process can reach it.
The 5 levels
Think of credential protection as a ladder. Each level addresses a different attack vector. Most OpenClaw users are at Level 0 or 1. You should be at Level 2 minimum, Level 3 if you're running on a VPS or shared machine.
| Level | Method | Protects Against |
|---|---|---|
| L0 | Hardcoded in `openclaw.json` | Nothing. Key is in plaintext, readable by agent, backed up, committed. |
| L1 | `env` block in `openclaw.json` | Structural separation. Key still in same file, but referenced by `$VAR`. |
| L2 | `~/.openclaw/.env` file | Git commits, config sharing, accidental backup inclusion. |
| L3 | SecretRef with `file` or `exec` provider | Agent file reads, log dumps, config serialization. |
| L4 | External vault (1Password, HashiCorp, AWS SM) | Host compromise, multi-agent isolation, rotation at scale. |
Let's walk through each one.
L0 Hardcoded in config (the default)
This is what the OpenClaw onboarding wizard produces for most users. Your API key sits as a literal string inside openclaw.json:
```json
{
  "models": {
    "providers": {
      "anthropic": {
        "apiKey": "sk-ant-api03-REAL-KEY-HERE"
      }
    }
  }
}
```
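A quick way to confirm whether you're at Level 0 is to scan the config for raw key-shaped strings. Here's a minimal sketch; the regexes cover a few common token shapes and are illustrative, not exhaustive:

```python
import re
from pathlib import Path

# Token-like patterns for a few common providers (illustrative, not exhaustive).
KEY_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),  # Anthropic-style keys
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style keys
    re.compile(r"\d{6,}:[A-Za-z0-9_-]{30,}"),  # Telegram bot tokens
]

def find_plaintext_keys(config_path: str) -> list[str]:
    """Return every raw key-like string found in the config file."""
    text = Path(config_path).read_text()
    hits: list[str] = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

If this returns anything for your `openclaw.json`, you're at Level 0.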
If you're currently at Level 0, you should move to Level 1 right now. It takes about 30 seconds.
L1 The env block (minimum viable fix)
OpenClaw supports a top-level env block in openclaw.json that acts as a built-in key-value store. You put the actual key there, and reference it with $VAR syntax everywhere else:
```json
{
  "env": {
    "ANTHROPIC_API_KEY": "sk-ant-api03-REAL-KEY-HERE"
  },
  "models": {
    "providers": {
      "anthropic": {
        "apiKey": "$ANTHROPIC_API_KEY"
      }
    }
  }
}
```

This is OpenClaw's officially recommended pattern. The gateway resolves `$ANTHROPIC_API_KEY` from the `env` block at startup. If the variable is missing, OpenClaw throws a `MissingEnvVarError` and refuses to start. That loud failure is actually a good thing: misconfigured secrets break at startup rather than silently.

The improvement over Level 0 is modest: anyone glancing at the config body now sees `$ANTHROPIC_API_KEY`, not the raw value, which makes it slightly harder to accidentally leak the key through casual config sharing. But the key is still in the same file, still on disk in plaintext, and still accessible to the agent.
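The fail-loud resolution behavior can be sketched in a few lines. `MissingEnvVarError` is OpenClaw's error name; the function itself is illustrative, not OpenClaw's actual implementation:

```python
class MissingEnvVarError(Exception):
    """Raised when a $VAR reference has no value in the env block."""

def resolve_ref(value: str, env: dict[str, str]) -> str:
    """Resolve a '$NAME' reference against the env block; pass literals through."""
    if not value.startswith("$"):
        return value  # literal value, use as-is
    name = value[1:]
    if name not in env:
        # Fail loudly at startup rather than running with a missing secret.
        raise MissingEnvVarError(f"${name} is referenced but not defined")
    return env[name]
```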
How to migrate from Level 0 to Level 1:

- Open `~/.openclaw/openclaw.json`
- Add an `"env"` block at the top level with your key(s)
- Replace every raw key in the config body with `$VAR_NAME`
- Restart the gateway: `openclaw gateway restart`
- Verify: `openclaw doctor`
Or, if you're using the BulwarkAI security dashboard (`npx openclaw-security-dashboard`), the Auto-Fix button does this migration automatically — it reads your config, moves raw keys into the env block, replaces inline values with `$VAR` references, and backs up the original file before changing anything.
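If you'd rather script the migration yourself, here's a rough sketch of the same transform, assuming the config layout shown above (the real Auto-Fix handles more edge cases):

```python
import json
import shutil

def migrate_to_env_block(config_path: str) -> dict:
    """Move raw provider API keys into a top-level env block, in place."""
    shutil.copy(config_path, config_path + ".bak")  # back up before touching anything
    with open(config_path) as f:
        config = json.load(f)
    env = config.setdefault("env", {})
    providers = config.get("models", {}).get("providers", {})
    for name, provider in providers.items():
        key = provider.get("apiKey", "")
        if key and not key.startswith("$"):   # raw key, not already a $VAR reference
            var = f"{name.upper()}_API_KEY"   # e.g. ANTHROPIC_API_KEY
            env[var] = key
            provider["apiKey"] = f"${var}"
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

The backup copy means the worst-case failure mode is "restore the `.bak` file", not a lost key.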
L2 Separate .env file
Move the actual secret values out of openclaw.json entirely and into a dedicated .env file. OpenClaw loads environment variables from multiple sources in this priority order:
- Process environment (shell, systemd, Docker)
- `./.env` in the current working directory
- `~/.openclaw/.env` (the gateway's own env file)
- `env` block in `openclaw.json`
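Assuming higher-priority sources simply override lower ones, that layering can be sketched as a dict merge (the function name is illustrative):

```python
def merge_env_sources(process_env: dict, cwd_dotenv: dict,
                      gateway_dotenv: dict, config_env: dict) -> dict:
    """Combine env sources so that earlier (higher-priority) ones win."""
    merged: dict[str, str] = {}
    # Apply lowest priority first so higher-priority sources overwrite.
    for source in (config_env, gateway_dotenv, cwd_dotenv, process_env):
        merged.update(source)
    return merged
```

A practical consequence: a stray variable in your shell session silently beats everything in your `.env` files, which is worth remembering when debugging "wrong key" issues.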
For most setups, ~/.openclaw/.env is the sweet spot:
```shell
# ~/.openclaw/.env
ANTHROPIC_API_KEY=sk-ant-api03-...
OPENAI_API_KEY=sk-...
TELEGRAM_BOT_TOKEN=123456:ABC-...
DISCORD_BOT_TOKEN=...
```
Then your openclaw.json stays clean:
```json
{
  "models": {
    "providers": {
      "anthropic": {
        "apiKey": "$ANTHROPIC_API_KEY"
      }
    }
  }
}
```

Critical hygiene steps after creating `.env`:
```shell
# Lock down permissions (owner read/write only)
chmod 600 ~/.openclaw/.env

# If anything in ~/.openclaw/ is version-controlled
echo ".env" >> ~/.openclaw/.gitignore

# For systemd deployments, use EnvironmentFile instead of shell sourcing
# [Service]
# EnvironmentFile=/secure/path/openclaw.env
```
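If you want to verify the permission step in a script, a small check like this flags any group or other access bits (the helper name is hypothetical):

```python
import os
import stat

def env_file_is_locked_down(path: str) -> bool:
    """True if the file grants no permissions to group or other (0600 or stricter)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0  # no group/other bits set
```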
The result: all your secrets live in a single, lockable `.env` file with restrictive permissions. That's a major upgrade for git safety and config portability.
L3 SecretRef providers
OpenClaw has a native secrets management system that most people never configure because the onboarding flow doesn't require it. The SecretRef system lets you define named secret providers and reference secrets by ID rather than value.
There are three built-in provider types:
env provider
Reads from environment variables with an optional allowlist to restrict which variables the gateway can access:
```json
{
  "secrets": {
    "providers": {
      "default": {
        "source": "env",
        "allowlist": ["ANTHROPIC_*", "TELEGRAM_*", "DISCORD_*"]
      }
    }
  }
}
```

file provider
Reads from a separate JSON file that you can store in a restricted location outside the OpenClaw directory entirely:
```json
{
  "secrets": {
    "providers": {
      "key_file": {
        "source": "file",
        "path": "/secure/openclaw-keys.json",
        "mode": "json"
      }
    }
  }
}
```

exec provider
Runs a command at startup and uses stdout as the secret value. This is the bridge to external secret managers:
```json
{
  "secrets": {
    "providers": {
      "onepassword": {
        "source": "exec",
        "command": "/opt/homebrew/bin/op",
        "args": ["read", "op://Personal/OpenClaw/anthropic_key"]
      }
    }
  }
}
```

Once providers are defined, you reference secrets using a `keyRef` object instead of a literal string. The gateway resolves references at startup and fails immediately if any are unresolvable — no silent failures.
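Mechanically, an exec provider amounts to "run a command, trim its stdout." A minimal sketch of that pattern (the function name and error handling are illustrative, not OpenClaw's actual implementation):

```python
import subprocess

def exec_secret(command: str, args: list[str]) -> str:
    """Run a command and return its trimmed stdout as the secret value."""
    result = subprocess.run(
        [command, *args],
        capture_output=True,
        text=True,
        check=True,  # fail loudly if the secret command errors
    )
    return result.stdout.strip()
```

The `check=True` matters: a vault CLI that fails (expired session, bad path) should stop startup, not hand the gateway an empty string as a "key."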
L4 External vault integration
For production deployments, VPS hosting, or multi-agent setups, the exec provider connects OpenClaw to enterprise secret managers:
HashiCorp Vault
```json
{
  "secrets": {
    "providers": {
      "vault": {
        "source": "exec",
        "command": "vault",
        "args": ["kv", "get", "-field=token", "secret/openclaw/anthropic"]
      }
    }
  }
}
```

AWS Secrets Manager
```json
{
  "secrets": {
    "providers": {
      "aws": {
        "source": "exec",
        "command": "aws",
        "args": ["secretsmanager", "get-secret-value",
                 "--secret-id", "openclaw/anthropic",
                 "--query", "SecretString",
                 "--output", "text"]
      }
    }
  }
}
```

1Password CLI
```json
{
  "secrets": {
    "providers": {
      "op": {
        "source": "exec",
        "command": "op",
        "args": ["read", "op://Vault/OpenClaw/anthropic_key"]
      }
    }
  }
}
```

The vector that none of this fully solves
Here's the uncomfortable truth: even at Level 4, there's a remaining exposure vector in OpenClaw's architecture.
OpenClaw's model configuration system resolves API key references and serializes the result into a model catalog that becomes part of the prompt context sent to the LLM. This means on every turn, your resolved API key may travel through the model provider's infrastructure as part of the conversation payload.
There's an open issue and proposed security roadmap in the OpenClaw repository addressing this with a three-layer fix: stripping keys from the serialized model catalog, implementing a masked secrets placeholder system, and adding output redaction filters. As of early March 2026, this hasn't been merged yet.
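Until that fix lands, you can approximate the output-redaction layer yourself. Here's a sketch of a filter that masks token-like strings before text is logged or displayed (the patterns are illustrative, not exhaustive):

```python
import re

# Token-like shapes worth masking in logs (illustrative, not exhaustive).
SECRET_RE = re.compile(
    r"sk-[A-Za-z0-9_-]{16,}"        # Anthropic/OpenAI-style keys
    r"|\d{6,}:[A-Za-z0-9_-]{30,}"   # Telegram bot tokens
)

def redact(text: str) -> str:
    """Replace anything that looks like a credential with a masked placeholder."""
    return SECRET_RE.sub("[REDACTED]", text)
```

Wiring a filter like this into your logging pipeline won't stop a determined exfiltration, but it closes the "token pasted into a Discord support thread" vector cheaply.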
What you can do today:
- Use separate API keys per agent. Don't reuse your main Anthropic key across every agent and integration. If one agent's context leaks, only that key is exposed.
- Set spending limits. Anthropic and OpenAI both support usage caps. Set them aggressively.
- Rotate regularly. Treat API keys like passwords — rotate on a schedule, not just when you suspect a leak.
- Monitor usage dashboards. Unexpected spikes in token consumption are often the first sign of a leaked key being exploited.
Check your level in 30 seconds
The BulwarkAI security dashboard automatically detects which level you're at and flags the specific exposure:
```shell
npx openclaw-security-dashboard
```
It scans your openclaw.json for raw API key patterns, checks file permissions on your .env and config files, and reports exactly what's exposed. The built-in Auto-Fix migrates you from Level 0 to Level 1 automatically — backing up your config first and using OpenClaw's native env block pattern so nothing breaks.
For Levels 2–4, the dashboard provides specific remediation steps tailored to your setup.
Find out where you stand
Run the free security dashboard. See your grade, your exposure level, and exactly what to fix — in 30 seconds, no account required.
Install the Dashboard → Or get a personalized hardening report — $297 →