SRE: Secrets Management in Kubernetes
Introduction
In the previous articles we covered SLIs and SLOs, incident management, observability, chaos engineering, capacity planning, and GitOps. We have built a solid foundation for running reliable services in Kubernetes, but there is one topic we have not touched yet that can make or break your security posture: secrets management.
If you have ever committed a database password to a Git repository, hard-coded an API key in a deployment manifest, or relied on Kubernetes Secrets thinking they were “encrypted”, you know the pain. Secrets are everywhere in modern infrastructure, and managing them poorly is one of the fastest ways to end up in the news for all the wrong reasons.
In this article we will cover why Kubernetes Secrets are not enough on their own, and then walk through the tools and strategies that actually solve the problem: Sealed Secrets, External Secrets Operator, HashiCorp Vault, secret rotation, SOPS, RBAC policies, and audit logging. By the end you should have a clear picture of which approach fits your situation and how to implement it.
Let’s get into it.
The problem with Kubernetes secrets
Kubernetes has a built-in Secret resource, and at first glance it looks like it solves the problem. You create a Secret, reference it in your Pod spec, and your application gets the value as an environment variable or a mounted file. Simple enough.
But there is a catch. Kubernetes Secrets are base64 encoded, not encrypted. Base64 is a reversible encoding, not a security mechanism. Anyone with access to the manifest or the API server can decode your secrets trivially:
# Creating a "secret" in Kubernetes
apiVersion: v1
kind: Secret
metadata:
name: my-app-secrets
namespace: default
type: Opaque
data:
# This is just base64, NOT encryption
database-password: cGFzc3dvcmQxMjM=
api-key: c3VwZXItc2VjcmV0LWtleQ==
# Anyone can decode this instantly
$ echo "cGFzc3dvcmQxMjM=" | base64 -d
password123
$ echo "c3VwZXItc2VjcmV0LWtleQ==" | base64 -d
super-secret-key
The problems go deeper than encoding:
- etcd storage: By default, secrets are stored unencrypted in etcd. Anyone with access to the etcd datastore can read every secret in the cluster
- RBAC gaps: The default RBAC configuration in many clusters is too permissive. If a service account can list secrets in a namespace, it can read all of them
- Git exposure: You cannot commit Secret manifests to Git without exposing the values, which breaks GitOps workflows
- No audit trail: Kubernetes does not log who accessed a secret value by default, only who listed or watched the resource
- No rotation: There is no built-in mechanism for rotating secrets. You change the value, restart the pods, and hope nothing breaks
- No encryption at rest: Unless you explicitly configure encryption at rest for etcd, secrets sit there in plain text
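You can confirm the etcd point directly on clusters where you have node access. The certificate paths below assume a kubeadm-style layout, so adjust them for your distribution:

```shell
# Read a Secret straight out of etcd (kubeadm-style cert paths assumed)
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/my-app-secrets | hexdump -C | head
# Without encryption at rest, the secret values appear in the hex dump
```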
You can enable encryption at rest in the API server with an EncryptionConfiguration:
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: <base64-encoded-32-byte-key>
- identity: {}
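The placeholder key is nothing magical: 32 random bytes, base64 encoded. One way to generate it with standard tooling:

```shell
# Generate a random 32-byte key, base64-encoded, for the aescbc provider
head -c 32 /dev/urandom | base64
```

Treat this key like any other secret: store it outside Git and restrict access to the file on the control plane nodes.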
This helps with data at rest in etcd, but it does not solve the Git problem, the rotation problem, or the audit problem. For those, we need dedicated tools.
Sealed Secrets
Bitnami Sealed Secrets is one of the simplest solutions for the “I need to store secrets in Git” problem. The idea is elegant: you encrypt your secrets with a public key that only the cluster controller can decrypt. The encrypted version (a SealedSecret) is safe to commit to Git because only the controller running in your cluster has the private key to unseal it.
First, install the Sealed Secrets controller in your cluster:
# Install the controller
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets sealed-secrets/sealed-secrets \
--namespace kube-system \
--set-string fullnameOverride=sealed-secrets-controller
Then install the kubeseal CLI on your workstation:
# Install kubeseal
brew install kubeseal
# Or download directly
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.27.3/kubeseal-0.27.3-linux-amd64.tar.gz
tar -xvf kubeseal-0.27.3-linux-amd64.tar.gz
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
Now the workflow looks like this. You create a regular Kubernetes Secret, then seal it:
# Create the regular secret (do NOT commit this file)
kubectl create secret generic my-app-secrets \
--namespace default \
--from-literal=database-password=password123 \
--from-literal=api-key=super-secret-key \
--dry-run=client -o yaml > my-secret.yaml
# Seal it with the cluster's public key
kubeseal --format yaml < my-secret.yaml > my-sealed-secret.yaml
# Delete the unencrypted version
rm my-secret.yaml
The resulting SealedSecret is safe to commit:
# my-sealed-secret.yaml - this is safe to commit to Git
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
name: my-app-secrets
namespace: default
spec:
encryptedData:
database-password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
api-key: AgCtr8HZFBOGZ9Nk+HrKPHRf7A6WkXN0...
template:
metadata:
name: my-app-secrets
namespace: default
type: Opaque
When the Sealed Secrets controller sees this resource in the cluster, it decrypts it and creates a regular Kubernetes Secret that your pods can use as normal.
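Applying the sealed manifest and confirming the controller unsealed it looks like this (names match the example above):

```shell
kubectl apply -f my-sealed-secret.yaml
# After a moment, the controller creates the underlying Secret
kubectl get secret my-app-secrets -n default
```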
A few important things to know about Sealed Secrets:
- Scope: By default, a SealedSecret is bound to a specific name and namespace. You cannot change the name or namespace without re-sealing
- Key rotation: The controller rotates its encryption keys every 30 days by default. Old keys are kept so existing SealedSecrets can still be decrypted
- Backup the keys: If you lose the controller’s private key (for example, by deleting the namespace without backing up), you lose the ability to decrypt all your SealedSecrets. Back up the keys
- Re-encryption: After key rotation, existing SealedSecrets still work but use the old key. You should periodically re-seal them with the new key
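Re-sealing is a one-liner. kubeseal asks the controller to re-encrypt the payload with the newest key, so this requires access to the running controller:

```shell
# Re-encrypt an existing SealedSecret with the controller's latest key
kubeseal --re-encrypt < my-sealed-secret.yaml > my-sealed-secret-new.yaml
mv my-sealed-secret-new.yaml my-sealed-secret.yaml
```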
Here is how you back up and restore the controller keys:
# Back up the sealing keys
kubectl get secret -n kube-system \
-l sealedsecrets.bitnami.com/sealed-secrets-key \
-o yaml > sealed-secrets-keys-backup.yaml
# Store this backup securely (not in Git!)
# Use a password manager, cloud KMS, or a safe
# Restore keys to a new cluster
kubectl apply -f sealed-secrets-keys-backup.yaml
# Restart the controller to pick up the restored keys
kubectl rollout restart deployment/sealed-secrets-controller -n kube-system
Sealed Secrets is a great fit when you want a simple, self-contained solution that does not depend on external services. It works perfectly with GitOps because the encrypted manifests live in your repo. The main downside is that it only solves the “secrets in Git” problem. It does not help with rotation, centralized management, or dynamic secrets.
External Secrets Operator
The External Secrets Operator (ESO) takes a different approach. Instead of encrypting secrets and storing them in Git, it syncs secrets from an external secret store (like AWS Secrets Manager, HashiCorp Vault, Google Secret Manager, or Azure Key Vault) into Kubernetes Secrets. Your Git repository only contains the reference to the secret, not the value itself.
Install ESO with Helm:
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets \
--create-namespace \
--set installCRDs=true
The architecture has three main components:
- SecretStore / ClusterSecretStore: Configures the connection to your external secret provider
- ExternalSecret: Declares which secrets to fetch and how to map them to Kubernetes Secrets
- The operator: Watches for ExternalSecret resources and creates/updates Kubernetes Secrets
Here is an example using AWS Secrets Manager as the backend. First, configure the SecretStore:
# cluster-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: aws-secrets-manager
spec:
provider:
aws:
service: SecretsManager
region: us-east-1
auth:
jwt:
serviceAccountRef:
name: external-secrets-sa
namespace: external-secrets
Then create an ExternalSecret that references a secret stored in AWS:
# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: my-app-secrets
namespace: default
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets-manager
kind: ClusterSecretStore
target:
name: my-app-secrets
creationPolicy: Owner
template:
type: Opaque
data:
database-password: "{{ .database_password }}"
api-key: "{{ .api_key }}"
data:
- secretKey: database_password
remoteRef:
key: production/my-app
property: database_password
- secretKey: api_key
remoteRef:
key: production/my-app
property: api_key
This ExternalSecret manifest is perfectly safe to commit to Git because it only contains references, not values. The operator fetches the actual values from AWS Secrets Manager and creates a Kubernetes Secret.
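For completeness, the upstream secret that those remoteRef entries point at could be created with the AWS CLI along these lines (the name mirrors the example, and the values are placeholders):

```shell
# Create the JSON secret that the ExternalSecret's remoteRef keys point at
aws secretsmanager create-secret \
  --name production/my-app \
  --secret-string '{"database_password":"password123","api_key":"super-secret-key"}'
```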
You can also use ESO with HashiCorp Vault as the backend:
# vault-secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
name: vault-backend
spec:
provider:
vault:
server: "https://vault.example.com"
path: "secret"
version: "v2"
auth:
kubernetes:
mountPath: "kubernetes"
role: "external-secrets"
serviceAccountRef:
name: external-secrets-sa
namespace: external-secrets
# vault-external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: my-app-vault-secrets
namespace: default
spec:
refreshInterval: 15m
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: my-app-secrets
creationPolicy: Owner
data:
- secretKey: database-password
remoteRef:
key: secret/data/production/my-app
property: database_password
- secretKey: api-key
remoteRef:
key: secret/data/production/my-app
property: api_key
The refreshInterval is one of ESO’s killer features. The operator periodically checks the external store
and updates the Kubernetes Secret if the upstream value has changed. This is the foundation for automated
secret rotation, which we will cover later.
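If you do not want to wait for the next refresh cycle, ESO also supports triggering an immediate reconcile with a force-sync annotation (worth double-checking against your ESO version):

```shell
# Trigger an immediate refresh instead of waiting for refreshInterval
kubectl annotate externalsecret my-app-secrets \
  force-sync=$(date +%s) --overwrite -n default
```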
ESO is a great choice when you already have a centralized secret store and want to bring those secrets into Kubernetes without manual steps. It works well with GitOps because only the references live in Git, and it supports virtually every major cloud provider and secret management tool.
HashiCorp Vault integration
HashiCorp Vault is the heavyweight champion of secrets management. It provides centralized secret storage, dynamic secret generation, encryption as a service, and detailed audit logging. While ESO can sync secrets from Vault into Kubernetes, Vault also offers native Kubernetes integration through the Vault Agent Injector and the CSI provider.
Vault Agent Injector
The Vault Agent Injector uses a mutating webhook to inject a Vault Agent sidecar into your pods. The agent handles authentication, fetches secrets from Vault, and writes them to a shared volume that your application can read.
Install the Vault Helm chart with the injector enabled:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set "injector.enabled=true" \
--set "server.dev.enabled=false" \
--set "server.ha.enabled=true" \
--set "server.ha.replicas=3"
Configure Vault’s Kubernetes auth method so pods can authenticate:
# Enable Kubernetes auth in Vault
vault auth enable kubernetes
# Configure it to talk to the Kubernetes API
vault write auth/kubernetes/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Create a policy for the app
vault policy write my-app-policy - <<EOF
path "secret/data/production/my-app" {
capabilities = ["read"]
}
EOF
# Create a role that binds the policy to a Kubernetes service account
vault write auth/kubernetes/role/my-app \
bound_service_account_names=my-app-sa \
bound_service_account_namespaces=default \
policies=my-app-policy \
ttl=1h
Now annotate your deployment to use the injector:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: "my-app"
vault.hashicorp.com/agent-inject-secret-db-password: "secret/data/production/my-app"
vault.hashicorp.com/agent-inject-template-db-password: |
{{- with secret "secret/data/production/my-app" -}}
{{ .Data.data.database_password }}
{{- end -}}
vault.hashicorp.com/agent-inject-secret-api-key: "secret/data/production/my-app"
vault.hashicorp.com/agent-inject-template-api-key: |
{{- with secret "secret/data/production/my-app" -}}
{{ .Data.data.api_key }}
{{- end -}}
spec:
serviceAccountName: my-app-sa
containers:
- name: my-app
image: my-app:latest
# Secrets are available at /vault/secrets/db-password and /vault/secrets/api-key
Vault CSI Provider
The CSI (Container Storage Interface) provider mounts secrets as volumes using the Secrets Store CSI driver. This approach is lighter weight than the Agent Injector because it does not require a sidecar:
# Install the Secrets Store CSI driver
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm repo update
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system
# Install the Vault CSI provider
helm install vault hashicorp/vault \
--namespace vault \
--set "injector.enabled=false" \
--set "csi.enabled=true"
# secret-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: vault-my-app
namespace: default
spec:
provider: vault
parameters:
vaultAddress: "https://vault.vault.svc:8200"
roleName: "my-app"
objects: |
- objectName: "database-password"
secretPath: "secret/data/production/my-app"
secretKey: "database_password"
- objectName: "api-key"
secretPath: "secret/data/production/my-app"
secretKey: "api_key"
# Optionally sync to a Kubernetes Secret as well
secretObjects:
- secretName: my-app-secrets
type: Opaque
data:
- objectName: database-password
key: database-password
- objectName: api-key
key: api-key
# pod-with-csi.yaml
apiVersion: v1
kind: Pod
metadata:
name: my-app
namespace: default
spec:
serviceAccountName: my-app-sa
containers:
- name: my-app
image: my-app:latest
volumeMounts:
- name: secrets
mountPath: "/mnt/secrets"
readOnly: true
volumes:
- name: secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "vault-my-app"
Vault is the right choice when you need dynamic secrets (like database credentials that are generated on the fly and automatically expire), fine-grained access policies, comprehensive audit logging, or encryption as a service. The tradeoff is complexity. Vault is a distributed system that needs to be deployed, managed, unsealed, and backed up. For smaller teams, ESO with a cloud-managed secret store might be a better fit.
Secret rotation strategies
Static secrets are a liability. The longer a secret exists without being changed, the more time an attacker has to find and exploit it. Secret rotation is the practice of regularly replacing secrets with new values, and it is one of the most impactful things you can do for your security posture.
Why rotate secrets?
- Limit blast radius: If a secret is compromised, rotation limits how long the attacker can use it
- Compliance: Many compliance frameworks (SOC2, PCI-DSS, HIPAA) require regular secret rotation
- Reduce stale access: When people leave the team or services are decommissioned, their credentials should stop working
- Defense in depth: Even if your other controls fail, rotation limits the damage window
Automated rotation with External Secrets Operator
ESO’s refreshInterval is the simplest way to implement rotation. If you update the secret in your
external store, ESO will pick up the new value on the next refresh cycle:
# external-secret-with-rotation.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: rotating-secret
namespace: default
spec:
# Check for new values every 15 minutes
refreshInterval: 15m
secretStoreRef:
name: aws-secrets-manager
kind: ClusterSecretStore
target:
name: rotating-secret
creationPolicy: Owner
data:
- secretKey: database-password
remoteRef:
key: production/my-app/database
property: password
On the AWS side, you can set up automatic rotation with a Lambda function:
# terraform for AWS Secrets Manager rotation
resource "aws_secretsmanager_secret" "db_password" {
name = "production/my-app/database"
}
resource "aws_secretsmanager_secret_rotation" "db_password" {
secret_id = aws_secretsmanager_secret.db_password.id
rotation_lambda_arn = aws_lambda_function.secret_rotation.arn
rotation_rules {
automatically_after_days = 30
}
}
resource "aws_lambda_function" "secret_rotation" {
function_name = "secret-rotation-db"
handler = "rotation.handler"
runtime = "python3.12"
filename = "rotation-lambda.zip"
environment {
variables = {
DB_HOST = "mydb.cluster-xyz.us-east-1.rds.amazonaws.com"
}
}
}
Dynamic secrets with Vault
Vault takes rotation a step further with dynamic secrets. Instead of rotating a static credential, Vault generates a unique, short-lived credential on every request. When the lease expires, Vault automatically revokes it:
# Enable the database secrets engine
vault secrets enable database
# Configure a PostgreSQL connection
vault write database/config/my-postgres \
plugin_name=postgresql-database-plugin \
allowed_roles="my-app-role" \
connection_url="postgresql://{{username}}:{{password}}@postgres.default.svc:5432/mydb?sslmode=disable" \
username="vault_admin" \
password="admin_password"
# Create a role that generates credentials with a 1-hour TTL
vault write database/roles/my-app-role \
db_name=my-postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
# Now any request to this path generates a fresh credential
$ vault read database/creds/my-app-role
Key Value
--- -----
lease_id database/creds/my-app-role/abcd1234
lease_duration 1h
lease_renewable true
password A1B2-C3D4-E5F6-G7H8
username v-my-app-role-xyz123
With dynamic secrets, there is nothing to rotate in the traditional sense. Every pod gets its own unique credential that expires automatically. If a credential is compromised, it only works for a short window, and it only gives access to what that specific role allows.
The main challenge with rotation (both traditional and dynamic) is making sure your application handles credential changes gracefully. Your app needs to either re-read the secret file periodically, reconnect with new credentials when the old ones are revoked, or use a connection pool that handles credential rotation transparently.
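A common pattern for the first option is polling the mounted secret file's mtime and reloading when it changes, since Kubernetes and the Vault agent both update mounted files in place. A sketch, with the path and interval purely illustrative:

```python
# reload_secret.py - reload a mounted secret file when it changes on disk.
import os
import time

SECRET_PATH = "/vault/secrets/db-password"  # illustrative path

def watch_secret(path, on_change, poll_seconds=30, iterations=None):
    """Poll `path` and call `on_change(value)` whenever the file changes."""
    last_mtime = None
    count = 0
    while iterations is None or count < iterations:
        try:
            mtime = os.stat(path).st_mtime
            if mtime != last_mtime:
                with open(path) as f:
                    on_change(f.read().strip())
                last_mtime = mtime
        except FileNotFoundError:
            pass  # volume not mounted yet
        count += 1
        if iterations is None or count < iterations:
            time.sleep(poll_seconds)
```

In a real service you would run this in a background thread and have `on_change` rebuild the database connection pool.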
SOPS with age/GPG
SOPS (Secrets OPerationS), originally developed at Mozilla and now a CNCF project, takes yet another approach. Instead of using a separate controller or operator, SOPS encrypts specific values in your YAML or JSON files while leaving the structure and keys in plain text. This means you can see what secrets a file contains without being able to read the values, which is great for code review and diffing.
Install SOPS and age (a modern encryption tool that is simpler than GPG):
# Install sops
brew install sops
# Install age
brew install age
# Generate an age key pair
age-keygen -o keys.txt
# Output: public key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
Create a .sops.yaml configuration file in your repository root:
# .sops.yaml
creation_rules:
# Encrypt secrets in the production directory
- path_regex: secrets/production/.*\.yaml$
encrypted_regex: "^(data|stringData)$"
age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Encrypt secrets in the staging directory with a different key
- path_regex: secrets/staging/.*\.yaml$
encrypted_regex: "^(data|stringData)$"
age: age1wrg9q5p84t03edh09vqnqv60xfmxqxfaslfcm2yln95jwzxqntrse2x8fq
# You can also use AWS KMS, GCP KMS, or Azure Key Vault
- path_regex: secrets/production-aws/.*\.yaml$
encrypted_regex: "^(data|stringData)$"
kms: "arn:aws:kms:us-east-1:123456789:key/abcd-1234-efgh-5678"
Now create a secret file and encrypt it:
# secrets/production/my-app.yaml (before encryption)
apiVersion: v1
kind: Secret
metadata:
name: my-app-secrets
namespace: default
type: Opaque
stringData:
database-password: password123
api-key: super-secret-key
# Encrypt the file in place
sops --encrypt --in-place secrets/production/my-app.yaml
After encryption, the file looks like this:
# secrets/production/my-app.yaml (after encryption)
apiVersion: v1
kind: Secret
metadata:
name: my-app-secrets
namespace: default
type: Opaque
stringData:
database-password: ENC[AES256_GCM,data:kJH7x9mN...,iv:abc...,tag:xyz...,type:str]
api-key: ENC[AES256_GCM,data:pQR8y0oP...,iv:def...,tag:uvw...,type:str]
sops:
age:
- recipient: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+...
-----END AGE ENCRYPTED FILE-----
lastmodified: "2026-03-07T10:30:00Z"
version: 3.9.0
Notice that the keys and structure are visible, but the values are encrypted. This is perfect for code
review because you can see that someone changed the database-password without seeing the actual value.
To decrypt and apply:
# Decrypt and apply to the cluster
sops --decrypt secrets/production/my-app.yaml | kubectl apply -f -
# Or edit the encrypted file directly (decrypts in your editor, re-encrypts on save)
sops secrets/production/my-app.yaml
Integrating SOPS with ArgoCD
ArgoCD does not decrypt SOPS files out of the box, but plugins fill the gap. Two common options are
the argocd-vault-plugin and KSOPS, a Kustomize plugin for SOPS:
# argocd-repo-server with SOPS support
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-repo-server
namespace: argocd
spec:
template:
spec:
containers:
- name: argocd-repo-server
env:
# Age private key for decryption
- name: SOPS_AGE_KEY_FILE
value: /sops/age/keys.txt
volumeMounts:
- name: sops-age
mountPath: /sops/age
volumes:
- name: sops-age
secret:
secretName: sops-age-key
# Using kustomize-sops with ArgoCD
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- secret-generator.yaml
# secret-generator.yaml
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
name: my-app-secrets
files:
- secrets/production/my-app.yaml
SOPS is a great fit when you want to keep everything in Git (true GitOps), you have a small to medium number of secrets, and you do not need dynamic secrets or complex rotation. It works well for teams that are already comfortable with Git workflows and want minimal additional infrastructure.
RBAC for secrets
No matter which tool you use to manage secrets, the Kubernetes RBAC layer is your last line of defense. If your RBAC is too permissive, an attacker who compromises any service account can read every secret in the namespace or even the entire cluster.
Here are the key principles:
- Least privilege: Only grant access to the specific secrets a service needs
- Namespace isolation: Use separate namespaces for different environments and teams
- No wildcard access: Avoid resources: ["*"] in RBAC rules for secrets
- Separate read and write: Most services only need to read secrets, not create or modify them
Here is a restrictive Role that only allows reading a specific secret:
# role-secret-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: my-app-secret-reader
namespace: default
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["my-app-secrets"] # Only this specific secret
verbs: ["get"] # Only get, not list or watch
# rolebinding-secret-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-app-secret-reader
namespace: default
subjects:
- kind: ServiceAccount
name: my-app-sa
namespace: default
roleRef:
kind: Role
name: my-app-secret-reader
apiGroup: rbac.authorization.k8s.io
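You can verify the binding behaves as intended by impersonating the service account with kubectl auth can-i:

```shell
# Allowed: get on the named secret
kubectl auth can-i get secrets/my-app-secrets -n default \
  --as=system:serviceaccount:default:my-app-sa
# Denied: listing all secrets in the namespace
kubectl auth can-i list secrets -n default \
  --as=system:serviceaccount:default:my-app-sa
```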
For namespace isolation, create a NetworkPolicy that prevents pods in one namespace from communicating with pods in other namespaces, combined with RBAC that restricts service accounts to their own namespace:
# namespace-isolation.yaml
apiVersion: v1
kind: Namespace
metadata:
name: team-payments
labels:
team: payments
environment: production
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-cross-namespace
namespace: team-payments
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {} # Only allow traffic from same namespace
egress:
- to:
- podSelector: {} # Only allow traffic to same namespace
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
ports:
- port: 53
protocol: UDP # Allow DNS resolution
You should also restrict who can create or modify Roles and RoleBindings, because an attacker who can create a RoleBinding can grant themselves access to any secret:
# restrict-rbac-management.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: rbac-manager
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Only bind this to cluster administrators, not regular service accounts
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: rbac-manager-binding
subjects:
- kind: Group
name: cluster-admins
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: rbac-manager
apiGroup: rbac.authorization.k8s.io
A common mistake is giving the edit or admin ClusterRole to service accounts or developers. These
built-in roles include the ability to read all secrets in a namespace. Instead, create custom roles with
only the permissions that are actually needed.
Auditing secret access
Even with strong RBAC, you need to know who is accessing your secrets and when. Kubernetes audit logging gives you this visibility, but it needs to be configured explicitly because it is not enabled by default in most distributions.
The audit policy defines which events to log and at what level:
# audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all secret access at the RequestResponse level
- level: RequestResponse
resources:
- group: ""
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Log token requests (service account tokens)
- level: Metadata
resources:
- group: ""
resources: ["serviceaccounts/token"]
verbs: ["create"]
# Log RBAC changes
- level: RequestResponse
resources:
- group: "rbac.authorization.k8s.io"
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
verbs: ["create", "update", "patch", "delete"]
# Log everything else at the metadata level
- level: Metadata
omitStages:
- "RequestReceived"
Configure the API server to use this policy:
# kube-apiserver flags
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
# Or send audit logs to a webhook (like Elasticsearch or Loki)
--audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
An audit log entry for a secret access looks like this:
{
"kind": "Event",
"apiVersion": "audit.k8s.io/v1",
"level": "RequestResponse",
"auditID": "abc-123-def-456",
"stage": "ResponseComplete",
"requestURI": "/api/v1/namespaces/default/secrets/my-app-secrets",
"verb": "get",
"user": {
"username": "system:serviceaccount:default:my-app-sa",
"groups": ["system:serviceaccounts", "system:serviceaccounts:default"]
},
"sourceIPs": ["10.244.0.15"],
"objectRef": {
"resource": "secrets",
"namespace": "default",
"name": "my-app-secrets",
"apiVersion": "v1"
},
"responseStatus": {
"metadata": {},
"code": 200
},
"requestReceivedTimestamp": "2026-03-07T10:30:00.000000Z",
"stageTimestamp": "2026-03-07T10:30:00.005000Z"
}
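Because audit logs are JSON lines, ad-hoc queries are easy with jq. The snippet below builds a one-line sample entry so it is self-contained; in practice you would point jq at the audit log path configured on the API server:

```shell
# Sample entry (in practice, read /var/log/kubernetes/audit.log or your
# configured audit log path)
cat > /tmp/audit-sample.log <<'EOF'
{"verb":"get","user":{"username":"system:serviceaccount:default:my-app-sa"},"objectRef":{"resource":"secrets","namespace":"default","name":"my-app-secrets"}}
EOF

# Who read which secrets?
jq -r 'select(.objectRef.resource == "secrets" and .verb == "get")
       | "\(.user.username) read \(.objectRef.namespace)/\(.objectRef.name)"' \
  /tmp/audit-sample.log
```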
You can build alerts on top of audit logs to detect suspicious activity:
# Falco rule for detecting secret access from unexpected service accounts
- rule: Unexpected Secret Access
desc: Detect when a service account that is not in the allowlist accesses a secret
condition: >
ka.verb in (get, list) and
ka.target.resource = secrets and
not ka.user.name in (allowed_secret_readers)
output: >
Unexpected secret access
(user=%ka.user.name verb=%ka.verb
secret=%ka.target.name ns=%ka.target.namespace
source=%ka.sourceips)
priority: WARNING
source: k8s_audit
tags: [security, secrets]
# Prometheus alerting rule based on audit log metrics
# (requires audit log metrics exporter)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: secret-access-alerts
namespace: monitoring
spec:
groups:
- name: secret.access
rules:
- alert: UnusualSecretAccessRate
expr: |
sum(rate(apiserver_audit_event_total{
resource="secrets",
verb="get"
}[5m])) by (user) > 10
for: 5m
labels:
severity: warning
annotations:
summary: "Unusual rate of secret access by {{ $labels.user }}"
description: "Service account {{ $labels.user }} is accessing secrets at an unusually high rate"
Combining audit logging with alerting gives you the ability to detect and respond to unauthorized secret access in near real time. This is critical for compliance and for catching compromised service accounts before they can do serious damage.
Putting it all together
With all these tools and approaches, how do you decide what to use? Here is a decision matrix based on your team’s needs and maturity level:
- Just starting out, small team: Use Sealed Secrets. It is the simplest to set up, requires no external infrastructure, and solves the biggest problem (secrets in Git). Add RBAC restrictions and basic audit logging.
- Growing team, cloud-native: Use External Secrets Operator with your cloud provider’s secret store (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault). This gives you centralized management, automatic rotation through the cloud provider, and a clean GitOps workflow.
- Large organization, strict compliance: Use HashiCorp Vault with the Agent Injector or CSI provider. Vault gives you dynamic secrets, detailed audit logging, policy as code, and integrations with everything. Combine with ESO for a hybrid approach.
- GitOps purists: Use SOPS with age or KMS. Everything stays in Git, encrypted at the value level, with clear diffs in pull requests.
- Maximum security: Combine Vault for secret storage and dynamic credentials, ESO for Kubernetes integration, RBAC with least-privilege policies, audit logging with alerting, and automatic rotation with short TTLs.
Here is a maturity model to guide your journey:
- Level 0: Secrets hardcoded in code or committed to Git in plain text. Stop everything and fix this first.
- Level 1: Kubernetes Secrets with encryption at rest enabled in etcd. Better, but secrets are still in manifests and not audited.
- Level 2: Sealed Secrets or SOPS for encrypted secrets in Git. RBAC restricted to least privilege. This is a solid baseline.
- Level 3: External Secrets Operator with a centralized secret store. Automated rotation. Audit logging enabled.
- Level 4: Vault with dynamic secrets, short-lived credentials, and comprehensive audit logging. Secret access alerts. Regular rotation. Compliance controls in place.
Most teams will find that Level 2 or Level 3 covers their needs. Level 4 is for organizations with strict compliance requirements or high-value targets. The important thing is to be honest about where you are and take incremental steps to improve.
Closing notes
Secrets management is one of those topics that seems simple on the surface but gets complex fast. The good news is that the Kubernetes ecosystem has mature, battle-tested tools for every level of complexity, from Sealed Secrets for small teams to Vault for enterprise-grade dynamic secrets.
The most important takeaway is this: base64 is not encryption, and Kubernetes Secrets alone are not sufficient. Pick a tool that fits your team’s size and needs, enforce least-privilege RBAC, enable audit logging, and rotate your secrets regularly. You do not need to implement everything at once, but you should know where you are on the maturity ladder and have a plan to move up.
Hope you found this useful and enjoyed reading it, until next time!
Errata
If you spot any error or have any suggestion, please send me a message so it gets fixed.
You can also check the source code and change history here