I run an autonomous OpenClaw agent that manages ~8 different credentials (email APIs, GitHub PAT, TOTP secrets, Nostr keys, etc). The problem OneCLI solves is real and I hit it constantly.
The placeholder-to-real-credential swap via proxy is the right abstraction. Right now my credentials sit in config files with chmod 600, which works until you need rotation or revocation — then it's manual surgery every time.
Two things I'd push on from the practitioner side:
1. Credential lifecycle matters more than initial storage. My sky.ai email provider died permanently (domain for sale) and I had to register a new service same-day. The vault needs to handle 'this credential is dead, here's the replacement' without the agent noticing.
2. The audit trail is arguably more valuable than the vault itself. When you're running 24/7 cron loops making API calls, the question isn't 'was the secret safe' — it's 'which call at 3am triggered the rate limit, and was it the right agent?'
Curious how OneCLI handles credential expiry/rotation. Does the agent get an error and retry, or does the proxy handle refresh transparently?
captn3m0 1 day ago [-]
This problem+solution, like many others in the agentic space, has nothing agent-specific about it. Giving a "box" API keys was always considered a risk, and auth-proxying has existed as a solution forever. See tokenizer[0] by the fly.io team, which makes it a stateless service, e.g. no database or dashboard. Or the BuzzFeed SSO proxy, which lets you do the same via an OAuth2 dance at the frontend and an upstream config at the backend that injects secrets: https://github.com/buzzfeed/sso/blob/549155a64d6c5f8916ed909....
[0]: https://github.com/superfly/tokenizer
It's not the same. The core overlap is that agents shouldn't be holding raw credentials, and that part isn't new, agreed. But the problem space goes further when you're building for agents specifically:
- Requiring human approval before sensitive actions go through (as @guyb3 mentioned in the post)
- Managing short-lived JWT tokens (refresh/access) with tight TTLs.
- Scoping permissions per-session rather than per-service
Auth-proxying solves the "don't give the box your API key" part. But the approval layer and token lifecycle management are what make this agent-specific, not just "SSO proxy repackaged."
RhodesianHunter 1 day ago [-]
Just because it's been done before for a different use-case doesn't mean that building exclusively for this use-case doesn't remove friction.
arianvanp 12 hours ago [-]
But it hasn't been built exclusively for that use case. It's literally the same.
multidude 10 hours ago [-]
I think you have a point. The credential part feels like a solved problem — auth-proxying has been around for a while. What seems genuinely new to me is the approval layer, the idea that a human should confirm before a sensitive action actually executes. I'm not sure that's covered by tokenizer or SSO proxy, but I could be wrong. Is that the real differentiator here, or am I missing something?
sathish316 1 day ago [-]
This can also be done using existing vaults or secrets managers. HashiCorp Vault can do this, and agents can be instructed to get secrets, which are set without the agent's knowledge. I use these two simple scripts with OpenClaw to achieve this, along with time-scoped expiration. The call to vault_get.sh is inside the agent's skill script so that the secrets are not leaked to LLMs or in any trace logs:
vault_get.sh: https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...
vault_set.sh: https://gist.github.com/sathish316/1f4e6549a8f85ac5c5ac8a088...
Blog about the full setup for OpenClaw: https://x.com/sathish316/status/2019496552419717390
It seems that the architecture you describe still gives the key to the agent (which could email it to a red team, or perform nefarious actions with it). The advantage of OP's architecture is that the agent never sees the key, and you could inspect the request before proxying it. Is that right, or do I misunderstand something?
sathish316 1 day ago [-]
The call flow is:
agent -> select skill (ex: reddit) -> execute action in skill -> script or code associated with skill -> get api key -> call reddit -> reddit posts -> agent.
The agent sees the output of the service, it does not directly see the keys. In OpenClaw, it’s possible to create the skill in a way that the agent does not directly know about or use vault_get command.
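That pattern can be sketched in a few lines (hypothetical names; OpenClaw's real skill API differs): the secret is fetched and used entirely inside the skill, and anything that echoes it is redacted before the output reaches the agent.

```python
def run_skill(action, get_secret):
    """Run a skill action with a secret that never leaves this function.
    `get_secret` would wrap something like vault_get.sh; `action` is the
    actual service call. Only (redacted) service output goes back to the agent."""
    secret = get_secret()
    result = action(secret)
    # Belt and braces: scrub the secret if the service happens to echo it back.
    return result.replace(secret, "[REDACTED]")


# Example with stubbed dependencies standing in for Vault and the Reddit API:
output = run_skill(
    action=lambda key: f"posted to reddit (auth={key})",
    get_secret=lambda: "s3cr3t-api-key",
)
```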
vander_elst 15 hours ago [-]
It would seem that in this design all the tools have to be called through a wrapper that understands the vault and provides the credentials to the tool in the right way. How else can curl use the credentials? Curl has no idea that there's a vault somewhere.
nonameiguess 1 day ago [-]
Sort of. The point of Vault is you're supposed to actually use RBAC and least privilege and store NPE credentials that are properly scoped to the actions you're comfortable automating, which your NPE then gets a certificate to fetch on demand, rather than just giving it your own personal root credentials that can do anything.
We're going to see this reinvented thousands of times in the next few months by people whose understanding of security is far poorer than HashiCorp's, via implementations that are nowhere near as well-tested, if tested at all.
1) Not all systems respect HTTP_PROXY. Node in particular is very uncooperative in this regard.
2) AWS access keys can’t be handled by a simple credential swap; the requests need to be re-signed with the real keys. Replicating SigV4 and SigV4A exactly was a bit of a pain.
3) To be secure, this system needs to run outside of the execution sandbox so that the agent can’t just read the keys from the proxy process.
For Airut I settled on a transparent (mitm)proxy, running in a separate container, and injecting proxy cert to the cert store in the container where the agent runs. This solved 1 and 3.
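The re-signing pain in point 2 exists because a SigV4 signature is an HMAC chain over the date, region, service, and canonical request, so a proxy can't just substitute the key string in a header. A stdlib-only sketch of the documented key-derivation chain (illustrative, not Airut's code; the canonical-request construction is omitted):

```python
import hashlib
import hmac


def _h(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # SigV4 derives a per-day / per-region / per-service key from the secret,
    # which is why a proxy must re-derive and re-sign rather than string-swap.
    k_date = _h(("AWS4" + secret_key).encode(), date)   # e.g. date = "20240101"
    k_region = _h(k_date, region)                       # e.g. "us-east-1"
    k_service = _h(k_region, service)                   # e.g. "s3"
    return _h(k_service, "aws4_request")


def sigv4_signature(secret_key: str, date: str, region: str, service: str,
                    string_to_sign: str) -> str:
    key = sigv4_signing_key(secret_key, date, region, service)
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```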
arianvanp 12 hours ago [-]
Also, doesn't this mean I have to reconfigure all my tools to use HTTP, and then when I forget to enable this, it falls back to getting MITM'd by the Internet? It fails open in the most insecure way possible.
lancetipton 1 day ago [-]
I'm literally working on the exact same solution. The difference is I'm running the system in a Kubernetes cluster.
I essentially run a sidecar container that sets up iptables rules that redirect all requests through my mitm proxy. This was specifically required because Node does not respect HTTP_PROXY.
Also had to inject a self-signed cert so that SSL could be proxied and terminated by the mitm proxy, which then injects the secrets and forwards the request on.
Have you run into any issues with this setup? I'm trying to figure out if there's anything I'm missing that might come back to bite me?
hardsnow 1 day ago [-]
I’ve been running this with workloads accessing Anthropic, GitHub, Gemini, and AWS & CF R2 APIs for a while now, and have not run into issues. I’m sure there’s an API out there that won’t work out of the box, but I’m positive that support could be added.
Another thing I did was to allow configuring which hosts each credential is scoped to. Replacement/re-signing doesn’t happen unless the host matches. That way it is not possible to leak keys by making requests to malicious hosts.
lancetipton 1 day ago [-]
The last note about configuring hosts to credentials is an excellent idea, and one I did not think to do. Currently I'm just doing a replace on any that matches in the request.
This adds an extra layer of security to it. Much appreciated.
IronClaw seems to do this natively. I like the idea in general, so it's good to see this pulled out.
The model is solid. It feels like the right way to use YOLO mode.
I've been working on making the auth setup more granular with macaroons and third party caveats.
My dream is to have plugins for upstreams using OpenAPI specs and then make it really easy to stitch together grants across subsets of APIs.
I think there's a product in here somewhere...
I have a few questions:
- How can a proxy inject stuff if it's TLS encrypted? (same for IronClaw and others)
- Any adapters for existing secret stores? Like maybe my fake credential could be a 1Password entry path (like 1Password:vault-name/entry/field) and it would pull from 1P, instead of having yet another place for me to store secrets?
Jonathanfishner 11 hours ago [-]
Re IronClaw: probably the most hardened open-source implementation I've seen for this, but a sufficiently clever prompt injection against the built-in tools (especially shell) could still reach secrets.
Re TLS: OneCLI itself runs in a separate container, acting as an HTTPS proxy. The SDK auto-configures agent containers with proxy env vars + a local CA cert. When the agent hits an intercepted domain, OneCLI terminates TLS, swaps placeholder tokens for real creds, and forwards upstream. Containers never touch actual keys.
More here: https://www.onecli.sh/docs/sdks/node#how-it-works
Re 1Password adapters: not yet, but on the roadmap.
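Per request, the swap step described above (terminate TLS, rewrite the placeholder, forward) reduces to a header rewrite along these lines (a simplified sketch under stated assumptions, not OneCLI's actual matching logic):

```python
def swap_placeholders(headers: dict, mapping: dict) -> dict:
    """Rewrite header values containing placeholder tokens with the real
    credentials before the proxied request is forwarded upstream."""
    rewritten = {}
    for name, value in headers.items():
        for placeholder, real in mapping.items():
            value = value.replace(placeholder, real)
        rewritten[name] = value
    return rewritten


# The agent only ever saw FAKE_TOKEN_abc; the proxy holds the real secret.
upstream_headers = swap_placeholders(
    {"Authorization": "Bearer FAKE_TOKEN_abc", "Accept": "application/json"},
    {"FAKE_TOKEN_abc": "sk-real-secret"},
)
```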
atonse 7 hours ago [-]
Ok that's what I assumed it probably did (modify the container's certificate trust store), thanks for the explanation.
inssein 1 day ago [-]
You use a forward proxy that can MITM.
atonse 1 day ago [-]
So would I have to add that Proxy's certificate to my trust store?
arrsingh 20 hours ago [-]
It’s an approach that works, and I’ve thought of implementing the same thing, but stopped short because I feel it just pushes the underlying problem around. Now I have to share my creds with a black box that I know very little about, and it’s not a real vault.
This should be solved by the vaults (HashiCorp / AWS Secrets Manager).
The one thing that I did build was based on a service that AWS provides (AWS STS), which handles temporary, time-bound creds out of the box.
https://timebound-iam.com
I don't get the benefit. Yes, agents should not have access to API keys because they can easily be fooled into giving up those API keys. But what's to prevent a malicious agent from re-using the honest agent's fake API key that it exfiltrates via prompt injection? The gateway can't tell that the request is coming from the malicious agent. If the honest agent can read its own proxy authorization token, it can give that up as well.
It seems the only sound solution is to have a sidecar attached to the agent and have the sidecar authenticate with the gateway using mTLS. The sidecar manages its own TLS key - the agent never has access to it.
dingdingdang 1 day ago [-]
But surely the point is that the proxy key is for your local network, it ain't got no value on the interwebz.
morphology 24 hours ago [-]
Precisely. You absolutely have to ensure that random agents can't join your local network, which means you need a deterministic orchestrator or an AI orchestrator that only spins up a handful of vetted agents.
CuriouslyC 19 hours ago [-]
People are addressing that gap. I have a secure agent framework that uses a tool gateway hooked up to OPA (https://github.com/sibyllinesoft/smith-core), this solves the credential issue as the credentials live in the tools, and the authz issue, as OPA policy controls who does what.
wuweiaxin 1 day ago [-]
Secret and credential sprawl is a real problem in agent pipelines specifically -- each agent needs its own scoped access and the blast radius of a leaked credential is much larger when an agent can act autonomously. We ended up with a tiered secret model: agents get short-lived derived tokens scoped to exactly the tools they need for a given task, not broad API keys. Revocation on task completion, not on schedule. More ops overhead upfront but caught two misuse cases that would have been invisible otherwise.
ipince 22 hours ago [-]
This seems to prevent your keys from being exfiltrated through prompt injection. But if your agent could've been prompt injected into giving out keys, then it can also be prompt injected into using the services it has (fake) keys for to the attacker's benefit.
sethcronin 1 day ago [-]
Oops, I read vault and thought Obsidian vault haha - but yeah, one of the issues is that if your agent can _execute_ on the secret at all, it can potentially be convinced to use it in a way that does not benefit you, even if it doesn't have access to the secret itself.
paxys 1 day ago [-]
You don't want to give the agent a raw key, so you give it a dummy one which will automatically be converted into the real key in the proxy.
So how does that help exactly? The agent can still do exactly what it could have done if it had the real key.
brabel 1 day ago [-]
For one thing, it cannot leak secrets between services.
paxys 1 day ago [-]
Is that really a problem? All the examples on the repo page themselves show LLMs running unintended operations on the "correct" service and messing up your data. And that is very much still going to happen with this wrapper. If anything it is going to provide a false sense of security.
cataflam 23 hours ago [-]
it cannot email your secret key to an attacker because of prompt injection etc.
anthonyskipper 1 day ago [-]
The fake-key-for-real-key thing seems like a problem. A lot of enterprise scanning tools look for keys in repos and other locations, and you will get a lot of false positives.
Otherwise this is cool, we need more competition here.
guyb3 1 day ago [-]
It's a good point. I don't think the placeholders we use will trigger a secret scanner, but we can adjust if it's an issue.
https://github.com/onecli/onecli/blob/942cfc6c6fd6e184504e01...
Sorry, but am I missing something?
---
If this is of interest, I also recommend looking into https://github.com/loderunner/scrt.
To me, it's a complement to 1Password.
I use it to save every new secret/API key I get via the CLI.
It's intentionally very feature limited.
Haven't tried it with agents, but wouldn't be surprised if the CLI (as is) would be enough.
skywhopper 1 day ago [-]
This is slick, but the only thing it prevents is agents directly sharing the credentials through git or something.
But that’s not the biggest risk of giving credentials to agents. If they can still make arbitrary API calls, they can still cost money or cause security problems or delete production.
If you’re worried about creds leakage only because your credentials are static and permanent, well, time to upgrade your secrets architecture.
jpbryan 1 day ago [-]
Why not just use AWS Secrets Manager?
stevekemp 1 day ago [-]
A program making a call to github.com needs an authentication token.
What are you suggesting? That the program makes a call to retrieve the secret from AWS, and then has full access to do whatever it wants with it? That's exactly the risk, and the problem that this, and the related solutions mentioned in this thread, are trying to solve.
eikenberry 1 day ago [-]
Rust might be a solid choice, but most of it is written in TypeScript... which is not as solid a choice.
c5huracan 19 hours ago [-]
Use Vault and use a proxy. They address different problems.
Vault protects keys at rest, but the agent still gets them at runtime. The proxy keeps the key away from the agent entirely, which closes key leakage. But a prompt-injected agent can still exfiltrate data it reads through the proxy. The trust boundary shifts, it doesn't disappear.
Looks like OneCLI combines both into one tool, which is the right call.
Jonathanfishner 10 hours ago [-]
that's exactly the idea. appreciate you framing it so clearly