feat: Tokenizer proxy sidecar for secure credential injection with po…#2
NickCao wants to merge 3 commits into sallyom:main
Conversation
jwm4
left a comment
Code Review (generated by Claude Code under the supervision of Bill Murdock)
Overall Assessment
This is a well-structured, substantial feature. The crypto is correct (NaCl sealed boxes, proper ephemeral key generation, SHA-256 bearer auth digests), the security model is sound (secrets never reach the agent container), and it follows the existing sidecar patterns (LiteLLM, OTEL) consistently. The code is well-organized with tokenizer.ts for core logic and local-tokenizer.ts for container management.
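For illustration, the sealed-box flow the review describes can be sketched with Node's built-in crypto (X25519 + HKDF + AES-256-GCM) standing in for the actual NaCl primitives. The function names, wire format, and HKDF info string below are assumptions for the sketch, not the PR's implementation.

```typescript
import {
  createCipheriv, createDecipheriv, createHash, createPublicKey,
  diffieHellman, generateKeyPairSync, hkdfSync, randomBytes, KeyObject,
} from "crypto";

// Sealed-box pattern: encrypt to the recipient's public key with a
// one-off ephemeral keypair, so only the holder of the recipient's
// private key (the tokenizer sidecar) can decrypt. This sketch uses
// X25519 + AES-256-GCM in place of NaCl's crypto_box_seal.
function seal(plaintext: Buffer, recipientPublic: KeyObject): Buffer {
  const eph = generateKeyPairSync("x25519"); // fresh ephemeral keypair per message
  const shared = diffieHellman({ privateKey: eph.privateKey, publicKey: recipientPublic });
  const key = Buffer.from(hkdfSync("sha256", shared, Buffer.alloc(0), "demo-sealed-box", 32));
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const ephPub = eph.publicKey.export({ type: "spki", format: "der" }); // 44 bytes for X25519
  // Illustrative wire format: ephemeral pubkey (DER) | IV | GCM tag | ciphertext
  return Buffer.concat([ephPub, iv, cipher.getAuthTag(), ct]);
}

function open(box: Buffer, recipientPrivate: KeyObject): Buffer {
  const ephPub = createPublicKey({ key: box.subarray(0, 44), type: "spki", format: "der" });
  const iv = box.subarray(44, 56);
  const tag = box.subarray(56, 72);
  const ct = box.subarray(72);
  const shared = diffieHellman({ privateKey: recipientPrivate, publicKey: ephPub });
  const key = Buffer.from(hkdfSync("sha256", shared, Buffer.alloc(0), "demo-sealed-box", 32));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

// Bearer-auth digest idea: store only the SHA-256 of the proxy's
// bearer token, never the token itself.
function bearerDigest(token: string): string {
  return createHash("sha256").update(token).digest("hex");
}
```

The security property carried by the ephemeral keypair is that the sealing side retains no secret capable of decryption: once the credential is sealed, only the sidecar can recover it.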
Issues
1. 13 test failures on current branch (`npm test`)
Two categories:
- `local-tokenizer.test.ts` (12 failures): the mock for `../../services/container.js` doesn't export `OPENCLAW_LABELS`, which `local-tokenizer.ts` imports. The mock needs `importOriginal` to include the real `OPENCLAW_LABELS` constant.
- `k8s-manifests.test.ts` (1 failure): the test expects an `OPEN_KEY_FILE` env var on the tokenizer container, but the implementation instead uses `sh -c "export OPEN_KEY=$(cat /secrets/open-key) && exec tokenizer"` via the `command` field. The test is out of sync with the implementation.
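A sketch of the first fix, assuming Vitest's `vi.mock` factory form; the stubbed function name below is hypothetical:

```typescript
import { vi } from "vitest";

// Fix sketch: have the mock factory load the real module via
// importOriginal so genuine constants like OPENCLAW_LABELS survive,
// then override only what the test needs to stub.
vi.mock("../../services/container.js", async (importOriginal) => {
  const actual = await importOriginal<typeof import("../../services/container.js")>();
  return {
    ...actual,              // re-export OPENCLAW_LABELS and everything else
    runContainer: vi.fn(),  // hypothetical stub
  };
});
```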
2. Needs rebase onto current main — PR #11 (LiteLLM proxy routing fix) has been merged since this branch was created. This will likely produce merge conflicts in `k8s-helpers.ts`, `k8s-manifests.ts`, and `local.ts`. The `tsc -p tsconfig.provider-plugins.json` build step added in main is also missing here.
Things Done Well
- Secrets piped via stdin to avoid `/proc/*/cmdline` leakage
- Open key read from file mount, not env, to avoid `/proc/PID/environ` leakage
- Credential metadata (names/hosts) saved without raw secrets for restart recovery
- Proper sidecar cleanup on deploy failure (rollback pattern)
- Concurrency guard on credential updates (in-memory lock per instance)
- Generated SKILL.md teaches the agent how to use the proxy with examples
- `redactConfig()` properly redacts tokenizer secrets
- Comprehensive unit tests for the crypto layer (round-trip decryption verification)
Suggestions
- The `findInstance` function in `status.ts` has grown quite large, with duplicated config reconstruction logic between the running-container and stopped-volume code paths. Consider extracting the shared config mapping into a helper.
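The suggested extraction might look like the following; the field names and `buildInstanceConfig` helper are hypothetical, not the installer's actual schema:

```typescript
// Hypothetical shared helper: both the running-container path and the
// stopped-volume path in findInstance would call this instead of
// duplicating the field mapping inline.
interface TokenizerConfig {
  enabled: boolean;
  credentialNames: string[];
}

interface InstanceConfig {
  name: string;
  image: string;
  tokenizer: TokenizerConfig;
}

function buildInstanceConfig(raw: {
  name: string;
  image?: string;
  credentialNames?: string[];
}): InstanceConfig {
  const credentialNames = raw.credentialNames ?? [];
  return {
    name: raw.name,
    image: raw.image ?? "unknown",
    tokenizer: {
      enabled: credentialNames.length > 0,
      credentialNames,
    },
  };
}
```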
Huh, I think there's some problem with the current code base: here https://github.com/sallyom/openclaw-installer/tree/main/src/server/deployers both the js and ts files are present, but they don't seem to match.
Looking at that now - I pushed a large PR at 1:30 AM last night - some things may need cleanup 😆
@NickCao I'll get that issue cleaned up, then you can update this tokenizer
@NickCao I'm currently ensuring all built-in features of OpenClaw upstream are accounted for in our installer - then we'll pick up this PR to fill the gaps. I want to get the base secrets management in place first. I love this PR though.
Add a Tokenizer (github.com/NickCao/tokenizer) sidecar that injects API credentials into HTTP requests without exposing raw secrets to the agent.
Core:
- NaCl sealed-box encryption for credential storage
- Local deployer: pod-based sidecar lifecycle, nc -z readiness check, credential rotation preserving existing sealed tokens and open key
- K8s deployer: Secret-based storage with atomic rollback on patch failure, SKILL.md generation including all credentials (kept + new)
- OPEN_KEY_FILE env var for secure key delivery (no sh -c entrypoint)
- Validation for kept credentials (name/host) in PUT handler
Frontend:
- Deploy form: Credential Proxy section with dynamic credential forms
- Instance list: post-deploy credential management (add/remove with automatic container restart)
API:
- GET/PUT /api/instances/:id/tokenizer for credential CRUD
- Validation: hostname format, credential count, headerFmt placeholder
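The local deployer's readiness check (`nc -z`, i.e. a bare TCP connect) can be sketched in TypeScript with Node's `net` module; the function name and retry defaults here are assumptions, not the PR's code:

```typescript
import net from "net";

// TCP readiness probe, analogous to looping `nc -z host port`:
// retry connecting until the sidecar's port accepts, or give up.
function waitForPort(
  host: string,
  port: number,
  tries = 30,
  delayMs = 1000,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const attempt = (left: number) => {
      const sock = net.connect({ host, port }, () => {
        sock.destroy(); // connection succeeded: port is accepting
        resolve();
      });
      sock.on("error", () => {
        sock.destroy();
        if (left <= 1) reject(new Error(`port ${port} on ${host} not ready`));
        else setTimeout(() => attempt(left - 1), delayMs);
      });
    };
    attempt(tries);
  });
}
```

Like `nc -z`, this only proves the port accepts TCP connections; it does not verify the proxy is serving correct responses.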
- aria-label on all credential inputs and Remove buttons
- name attributes scoped by instance ID to prevent cross-instance autofill collisions when multiple panels are open
- spellCheck={false} on secret/token inputs
- Placeholder ellipsis and em-dash via JSX expressions (JSX string attributes do not interpret unicode escapes)
- Consistent '+ Add Credential' capitalization
Verify actual container volume and K8s Secret contents during the credential lifecycle (deploy, add, delete, delete-all):
- open-key format, sealed box format, auth token format
- credential preservation across add/delete operations
- open-key reuse, stale entry cleanup after delete-all
- local (podman .env) and k8s (Secret keys) both covered
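A minimal sketch of the kind of check these lifecycle tests perform on the local (podman .env) side. The key names and the 64-hex-character open-key shape are assumptions for illustration, not the PR's actual on-disk layout:

```typescript
// Parse a tokenizer .env volume file into key/value entries so tests
// can assert on the stored formats without touching raw secrets.
function parseEnvFile(text: string): Record<string, string> {
  const entries: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const m = /^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/.exec(line);
    if (m) entries[m[1]] = m[2]; // skips comments and blank lines
  }
  return entries;
}

// Shape check for an open key stored as hex (assumed 32 bytes).
function looksLikeHexKey(value: string): boolean {
  return /^[0-9a-f]{64}$/.test(value);
}
```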
@sallyom, what is the status of this one? It's been sitting for a while, and I think the core idea here is really valuable. Right now, agents deployed by the installer can see their own API keys via environment variables. Nick's Tokenizer approach appears to solve that by injecting credentials at the HTTP proxy level, so the agent never has access to raw secrets. That seems like an important security property, especially as agents get more autonomous.

ADR 0002 landed since this was opened, and it covers how the installer handles secret storage/delivery (SecretRefs, Vault as an exec provider, etc.), but it doesn't address the credential isolation problem this PR tackles. Those feel like complementary concerns rather than conflicting ones.

What do you think the next step should be here? Is this something you'd still want to pick up, or has the direction changed?
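The credential-injection idea being discussed can be sketched as a tiny forward proxy: the agent sends requests without real credentials, and the proxy adds the Authorization header server-side. The upstream address, env var, and placeholder token below are illustrative assumptions, not how the Tokenizer project works internally:

```typescript
import http from "http";

// The real secret lives only in the proxy's environment, never in the
// agent container (placeholder value is an assumption for the demo).
const REAL_TOKEN = process.env.REAL_TOKEN ?? "sk-example";

// Forward incoming requests to an assumed upstream at 127.0.0.1:9000,
// overwriting the Authorization header with the real credential.
const proxy = http.createServer((req, res) => {
  const upstream = http.request(
    {
      host: "127.0.0.1",
      port: 9000,
      path: req.url,
      method: req.method,
      headers: { ...req.headers, authorization: `Bearer ${REAL_TOKEN}` },
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    },
  );
  upstream.on("error", () => {
    res.writeHead(502);
    res.end();
  });
  req.pipe(upstream);
});
```

The point of the pattern is that even a fully compromised agent can only make requests the proxy is willing to forward; it can never exfiltrate the token itself.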