* feat: add gateway-mediated kubernetes authentication for PAM
Add impersonation support to the PAM Kubernetes proxy. When auth method
is gateway-kubernetes-auth, the gateway reads its own pod token, sets
Impersonate-User/Group headers, auto-discovers the K8s API from env
vars, and uses the pod CA cert for TLS.
* fix: address review feedback on gateway k8s auth
- Treat empty authMethod as service-account-token for backward compatibility
- Add empty namespace/SA name guard before constructing impersonation
headers
- Use net.JoinHostPort for IPv6-safe URL construction
- Sanitize error messages sent to kubectl client (use static strings)
- Write HTTP 500 before returning on websocket auth failure
- Log warning when KUBERNETES_SERVICE_HOST env vars are missing
* fix: address second round of review feedback
- Add system:authenticated to Impersonate-Group headers (required for
K8s API discovery endpoints)
- Fail fast when KUBERNETES_SERVICE_HOST env vars are missing instead
of logging warning and continuing with broken state
- Check AppendCertsFromPEM return value before overriding TLS config
- Use _, _ for discarded write error in websocket path
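The `AppendCertsFromPEM` check above matters because the method returns `false` without an error when no certificate parses; ignoring that would proceed with an empty pool. A rough sketch of the guarded version (the function name `poolFromPEM` is hypothetical):

```go
package main

import (
	"crypto/x509"
	"errors"
	"fmt"
)

// poolFromPEM builds a CA pool from the pod's CA bundle, refusing to
// continue if nothing parses rather than overriding the TLS config
// with an unusable (empty) pool.
func poolFromPEM(caPEM []byte) (*x509.CertPool, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no valid certificates in pod CA bundle")
	}
	return pool, nil
}

func main() {
	_, err := poolFromPEM([]byte("not a certificate"))
	fmt.Println(err != nil) // true
}
```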
* fix: address third round of review feedback
- Strip client-supplied Impersonate-* headers before injecting auth
to prevent privilege escalation
- Fail fast at session setup when namespace/SA name are empty instead
of failing per-request
- Return error instead of falling back when pod CA cert can't be read
or parsed (fallback TLS config has InsecureSkipVerify: true)
* fix: address fourth round of review feedback
- Add defer sessionLogger.Close() in HandlePAMProxy to prevent file
descriptor leak on early return paths
- Strip Impersonate-Uid header alongside User/Group/Extra-*
- Fail fast at session setup when namespace/SA name are empty
* fix(pam): prevent kubernetes session from dying after websocket exec
The local Kubernetes proxy used WaitForDisconnect which treats any
gateway-side connection close as a session-level disconnect, shutting
down the entire proxy. This works for persistent protocols (SSH, DB)
but not Kubernetes, where each kubectl command is a separate connection
and the gateway closing after handling a request is normal.
Replace with a simple per-connection wait that lets the proxy stay
alive for subsequent kubectl commands.
* fix(pam): remove stale cluster entry from kubeconfig on session end
gracefulShutdown deleted config.Contexts twice instead of also
deleting config.Clusters, leaving the http://localhost:<port>
cluster entry orphaned in kubeconfig after each session.
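The cleanup fix above amounts to deleting from both maps. A toy sketch with a hypothetical struct (the real type is client-go's clientcmd `api.Config`, whose `Clusters` and `Contexts` are also maps keyed by name):

```go
package main

import "fmt"

// kubeconfig is a hypothetical minimal stand-in for clientcmd's
// api.Config, keeping only the two maps relevant to the fix.
type kubeconfig struct {
	Clusters map[string]bool
	Contexts map[string]bool
}

// removeSessionEntry deletes both the context and the cluster entry for
// the per-session localhost endpoint. The original bug deleted from
// Contexts twice, leaving the cluster entry orphaned.
func removeSessionEntry(cfg *kubeconfig, name string) {
	delete(cfg.Contexts, name)
	delete(cfg.Clusters, name)
}

func main() {
	cfg := &kubeconfig{
		Clusters: map[string]bool{"pam-session": true},
		Contexts: map[string]bool{"pam-session": true},
	}
	removeSessionEntry(cfg, "pam-session")
	fmt.Println(len(cfg.Clusters), len(cfg.Contexts)) // 0 0
}
```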
---------
Co-authored-by: saif <11242541+saifsmailbox98@users.noreply.github.com>
return fmt.Errorf("gateway-kubernetes-auth requires KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT_HTTPS to be set; gateway must run inside a Kubernetes pod")