DevOps from Zero to Hero: Secrets, Config, and Environment Management
Introduction
Welcome to article nine of the DevOps from Zero to Hero series. In the previous article we deployed our TypeScript API to ECS with Fargate, and everything is running in the cloud. But we skipped over something important: how does your application get its database URL, API keys, and other configuration values? If you hard-coded them into your source code, you have a problem.
Configuration and secrets management is one of those topics that seems simple until you get it wrong.
A leaked API key can cost you thousands of dollars. A misconfigured database URL can point your
production app at the staging database. A checked-in .env file can expose credentials to anyone who
clones your repository. These are not hypothetical scenarios; they happen all the time.
In this article we will cover the foundational practices for handling configuration and secrets: the
12-factor methodology, environment variables, .env files, secret scanning, AWS Secrets Manager,
AWS Systems Manager Parameter Store, and how to structure configuration across dev, staging, and
production environments. By the end you will have a clear, practical approach to keeping your config
clean and your secrets safe.
Let’s get into it.
The 12-factor app: config belongs in the environment
The Twelve-Factor App is a methodology for building modern applications that was published by the team at Heroku back in 2012. It describes twelve principles for building software that is easy to deploy, scale, and maintain. Factor number three is about configuration, and it says something very clear: store config in the environment.
What does “config” mean here? It is anything that is likely to change between environments (dev, staging, production). Database URLs, API keys, feature flags, external service endpoints, log levels. These values should not live in your source code. They should not be baked into your Docker image. They should come from the environment where your application is running.
The reasoning is simple:
- Security: Secrets in source code end up in version control, in CI logs, in Docker layers, and in the hands of anyone who has access to your repository.
- Portability: If your database URL is hard-coded, you cannot run the same code against a staging database without changing the code. If it comes from the environment, you just change the environment variable.
- Simplicity: One build artifact (your Docker image) works in every environment. The only thing that changes is the configuration injected at runtime.
Here is the anti-pattern versus the correct approach:
// BAD: hard-coded config
const dbUrl = "postgresql://admin:supersecret@prod-db.example.com:5432/myapp";
// GOOD: read from the environment
const dbUrl = process.env.DATABASE_URL;
if (!dbUrl) {
throw new Error("DATABASE_URL environment variable is required");
}
That second example follows the 12-factor principle. The application does not know or care which environment it is running in. It just reads the value from the environment and uses it.
Environment variables: how they work
Environment variables are key-value pairs that exist in the operating system’s process environment. Every process inherits the environment of its parent process, and you can set additional variables when launching a process.
Setting and reading environment variables in the shell:
# Set a variable for the current shell session
export DATABASE_URL="postgresql://localhost:5432/myapp"
# Read it
echo $DATABASE_URL
# Set a variable only for a single command
DATABASE_URL="postgresql://localhost:5432/myapp" node app.js
# List all environment variables
env
# Unset a variable
unset DATABASE_URL
In Node.js/TypeScript, you access them through process.env:
// Read an environment variable
const port = process.env.PORT || "3000";
const dbUrl = process.env.DATABASE_URL;
const logLevel = process.env.LOG_LEVEL || "info";
// Check for required variables at startup
const required = ["DATABASE_URL", "API_KEY", "JWT_SECRET"];
for (const key of required) {
if (!process.env[key]) {
console.error(`Missing required environment variable: ${key}`);
process.exit(1);
}
}
This pattern of checking for required variables at startup is important. You want your application to fail fast and loud if it is missing configuration, not silently break at some random point later.
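The startup check can be wrapped in a small helper so that each required variable is read and validated in one place. This is a sketch; the helper names `requireEnv` and `envOr` are my own, not from any library:

```typescript
// Read a required environment variable, or throw with a clear message.
function requireEnv(key: string): string {
  const value = process.env[key];
  if (!value) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
  return value;
}

// Variant with a fallback, for non-critical values.
function envOr(key: string, fallback: string): string {
  return process.env[key] ?? fallback;
}
```

Usage is then a one-liner per variable, and the failure happens at startup, exactly where you want it: `const dbUrl = requireEnv("DATABASE_URL"); const logLevel = envOr("LOG_LEVEL", "info");`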
Dotenv files: local development convenience
Typing export DATABASE_URL=... every time you open a terminal gets old fast. That is where .env
files come in. A .env file is a simple text file that lists environment variables, one per line:
# .env
DATABASE_URL=postgresql://localhost:5432/myapp_dev
API_KEY=dev-api-key-not-real
JWT_SECRET=local-dev-secret
LOG_LEVEL=debug
PORT=3000
Libraries like dotenv for Node.js automatically read this file
and load the variables into process.env when your application starts:
// Load .env file at the very top of your entry point
import "dotenv/config";
// Now process.env.DATABASE_URL is available
console.log(process.env.DATABASE_URL);
The critical rule with .env files is: never commit them to Git. They contain secrets, and your
Git repository is not a secure place to store secrets. Add .env to your .gitignore immediately:
# .gitignore
# Environment files with secrets
.env
.env.local
.env.*.local
# Keep the example file (no real secrets)
!.env.example
Instead of committing your actual .env file, commit a .env.example file with placeholder values.
This tells your teammates what variables they need without exposing real secrets:
# .env.example
DATABASE_URL=postgresql://localhost:5432/myapp_dev
API_KEY=your-api-key-here
JWT_SECRET=generate-a-random-string
LOG_LEVEL=debug
PORT=3000
When a new developer joins the team, they copy .env.example to .env and fill in their own values.
Simple, safe, effective.
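One thing that can go wrong with this workflow: someone adds a new variable to .env.example and existing developers never notice until the app breaks. A small check script can catch this. Here is a hedged sketch (the function names are my own); the parsing is pure over file contents, so in a real setup script you would feed it the results of `fs.readFileSync`:

```typescript
// Extract the variable names from the contents of a dotenv-style file,
// skipping blank lines and comments.
function parseEnvKeys(contents: string): Set<string> {
  const keys = new Set<string>();
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq > 0) keys.add(trimmed.slice(0, eq).trim());
  }
  return keys;
}

// Report keys present in .env.example but missing from the local .env.
function missingKeys(example: string, actual: string): string[] {
  const have = parseEnvKeys(actual);
  return [...parseEnvKeys(example)].filter((k) => !have.has(k));
}
```

Running this on every `npm run dev` turns "mysterious undefined config" into an explicit "your .env is missing API_KEY" message.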
Why you should never commit secrets to Git
This deserves its own section because it is that important. When you commit a secret to Git, it does not just exist in the current version of the file. It exists in the Git history forever. Even if you delete the file or overwrite the value in a later commit, anyone who clones the repository can find it by looking at the commit history.
# Oops, I committed my .env file
git log --all --full-history -- .env
# Anyone can see the contents of that file at that commit
git show abc123:.env
If this happens, the secret is compromised. You need to rotate it immediately, meaning generate a new
key and revoke the old one. Rewriting Git history with git filter-branch or BFG Repo-Cleaner is
possible but painful, especially in a shared repository.
The better approach is prevention. Use tools that scan your repository for secrets before they ever get committed:
- git-secrets: An AWS tool that installs Git hooks to prevent committing secrets. It scans for AWS access keys, secret keys, and custom patterns you define.
- gitleaks: A faster, more comprehensive scanner that detects a wide range of secret patterns (API keys, tokens, passwords) across your entire repository history.
- pre-commit: A framework for managing Git pre-commit hooks. You can add gitleaks or git-secrets as a hook that runs automatically on every commit.
Here is how to set up gitleaks as a pre-commit hook:
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
# Install pre-commit and set up the hooks
pip install pre-commit
pre-commit install
# Now every commit will be scanned for secrets automatically
git commit -m "add new feature"
# gitleaks runs and blocks the commit if it finds a secret
You should also run gitleaks in your CI pipeline as a safety net. We covered CI pipelines in article five, so adding a gitleaks step is straightforward:
# In your GitHub Actions workflow
- name: Scan for secrets
  uses: gitleaks/gitleaks-action@v2
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Config hierarchy: how values get resolved
In a real application, configuration can come from multiple sources. When the same key is defined in more than one place, you need a clear precedence order. The standard hierarchy, from lowest to highest priority, looks like this:
1. Application defaults (hard-coded fallbacks in your code)
2. Config files (JSON, YAML, TOML files loaded at startup)
3. Environment variables (set by the OS, container runtime, or .env file)
4. CLI flags (passed when starting the application)
5. Remote config (fetched from Secrets Manager, Parameter Store, etc.)
Each level overrides the one below it. So if your code has a default LOG_LEVEL=info, your config
file sets it to warn, and your environment variable sets it to debug, the environment variable
wins. If you also pass --log-level=error as a CLI flag, that wins over everything.
Here is a practical example showing this hierarchy in TypeScript:
import { readFileSync, existsSync } from "fs";
interface AppConfig {
port: number;
logLevel: string;
dbUrl: string;
}
function loadConfig(): AppConfig {
// Level 1: Application defaults
let config: AppConfig = {
port: 3000,
logLevel: "info",
dbUrl: "postgresql://localhost:5432/myapp",
};
// Level 2: Config file (if it exists)
const configPath = "./config.json";
if (existsSync(configPath)) {
const fileConfig = JSON.parse(readFileSync(configPath, "utf-8"));
config = { ...config, ...fileConfig };
}
// Level 3: Environment variables (override file config)
if (process.env.PORT) config.port = parseInt(process.env.PORT, 10);
if (process.env.LOG_LEVEL) config.logLevel = process.env.LOG_LEVEL;
if (process.env.DATABASE_URL) config.dbUrl = process.env.DATABASE_URL;
return config;
}
const config = loadConfig();
console.log("Config loaded:", config);
This pattern gives you flexibility. Developers can use a config file locally, the CI environment can set environment variables, and production can pull secrets from AWS Secrets Manager (which we will cover next).
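The example above stops at level 3. Level 4, CLI flags, can be layered on top with a few lines. This is a sketch with flag names I chose for illustration (`--port`, `--log-level`), using simple `--key=value` parsing rather than a full argument parser:

```typescript
// Level 4 sketch: CLI flags override everything loaded so far.
// Returns a new object; the input config is not mutated.
function applyCliFlags(
  config: { port: number; logLevel: string },
  argv: string[]
): { port: number; logLevel: string } {
  const result = { ...config };
  for (const arg of argv) {
    const [flag, value] = arg.split("=");
    if (flag === "--port" && value) result.port = parseInt(value, 10);
    if (flag === "--log-level" && value) result.logLevel = value;
  }
  return result;
}
```

You would call it as the final step: `const finalConfig = applyCliFlags(loadConfig(), process.argv.slice(2));`. For anything beyond a couple of flags, a library like commander or yargs is the more maintainable choice.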
AWS Secrets Manager: storing and retrieving secrets
AWS Secrets Manager is a managed service for storing, retrieving, and rotating secrets. Unlike environment variables, which are visible in ECS task definitions, CloudFormation templates, and potentially in logs, Secrets Manager stores values encrypted at rest and provides fine-grained access control through IAM policies.
When should you use Secrets Manager instead of plain environment variables?
- Database credentials: Secrets Manager can automatically rotate database passwords on a schedule, updating both the secret value and the database itself.
- API keys for third-party services: Stripe, Twilio, SendGrid, anything where a leaked key means real money.
- TLS certificates and private keys: Anything cryptographic that should never appear in plain text.
- Shared secrets across services: When multiple services need the same credentials, Secrets Manager is a single source of truth.
Creating a secret with the AWS CLI:
# Create a simple string secret
aws secretsmanager create-secret \
--name "prod/task-api/database-url" \
--description "Production database connection string" \
--secret-string "postgresql://admin:s3cur3Pass@prod-db.example.com:5432/myapp"
# Create a JSON secret (multiple key-value pairs in one secret)
aws secretsmanager create-secret \
--name "prod/task-api/credentials" \
--description "Production API credentials" \
--secret-string '{
"DB_URL": "postgresql://admin:s3cur3P@[email protected]:5432/myapp",
"API_KEY": "sk_live_abc123",
"JWT_SECRET": "a-very-long-random-string"
}'
Notice the naming convention: environment/service/secret-name. This hierarchical naming makes it
easy to organize secrets and write IAM policies that restrict access by environment or service.
Retrieving a secret:
# Get the secret value
aws secretsmanager get-secret-value \
--secret-id "prod/task-api/database-url" \
--query SecretString \
--output text
Secrets Manager: rotation basics
One of the most powerful features of Secrets Manager is automatic rotation. Instead of using the same database password forever (and hoping nobody leaks it), you can configure Secrets Manager to rotate the password on a schedule, for example every 30 days.
For Amazon RDS databases, AWS provides built-in rotation Lambda functions. The rotation process works like this:
1. Secrets Manager invokes a Lambda function on a schedule
2. The Lambda generates a new password
3. It updates the password in the RDS database
4. It stores the new password in Secrets Manager
5. Your application fetches the new value next time it reads the secret
Setting up rotation with the CLI:
# Enable rotation for an RDS secret
aws secretsmanager rotate-secret \
--secret-id "prod/task-api/database-url" \
--rotation-lambda-arn "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRDSRotation" \
--rotation-rules '{"AutomaticallyAfterDays": 30}'
The important thing to understand about rotation is that your application needs to handle it gracefully. If your app caches the database connection string at startup and never re-reads it, a rotated password will break your connection. The solution is to either re-fetch the secret periodically or use a connection library that can handle credential refresh.
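The "re-fetch periodically" approach can be sketched as a small TTL cache around whatever function fetches the secret. The wrapper below is my own illustration, not an AWS SDK feature; the fetcher is injected, so in production it would call Secrets Manager while tests can pass a stub:

```typescript
// Wrap a secret fetcher in a time-to-live cache, so a rotated secret is
// picked up within `ttlMs` without restarting the process.
function cachedSecret(
  fetcher: () => Promise<string>,
  ttlMs: number
): () => Promise<string> {
  let value: string | undefined;
  let fetchedAt = 0;
  return async (): Promise<string> => {
    const now = Date.now();
    if (value === undefined || now - fetchedAt > ttlMs) {
      value = await fetcher(); // only hit the backend when the cache is stale
      fetchedAt = now;
    }
    return value;
  };
}
```

Wired up to the `getSecret` helper we build later in this article, usage might look like `const getDbUrl = cachedSecret(() => getSecret("prod/task-api/database-url"), 5 * 60 * 1000);`. Note that a cached connection string only helps for new connections; long-lived database connections still need reconnect logic when the old password is revoked.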
Secrets Manager: IAM access policies
You control who and what can access your secrets through IAM policies. Here is a policy that allows an ECS task role to read only the secrets for a specific environment and service:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/task-api/*"
}
]
}
This policy follows the principle of least privilege. The ECS task can only read secrets under the
prod/task-api/ prefix. It cannot list all secrets in the account, it cannot read secrets from other
services, and it cannot modify or delete any secrets. If someone compromises your task-api service, they
still cannot access the secrets belonging to your user-service or payment-service.
You attach this policy to the ECS task role we set up in the previous article (the role the running container assumes, not the task execution role):
# Create the policy
aws iam create-policy \
--policy-name task-api-secrets-read \
--policy-document file://secrets-policy.json
# Attach it to the ECS task role
aws iam attach-role-policy \
--role-name task-api-task-role \
--policy-arn "arn:aws:iam::123456789012:policy/task-api-secrets-read"
AWS Systems Manager Parameter Store
Parameter Store is another AWS service for storing configuration, and it serves a different purpose than Secrets Manager. Think of it this way:
- Secrets Manager: For sensitive values that need encryption, rotation, and fine-grained access control. It costs $0.40 per secret per month.
- Parameter Store: For non-sensitive or less-sensitive configuration values. The standard tier is free for up to 10,000 parameters.
Parameter Store supports three types of parameters:
- String: A plain text value. Good for configuration like log levels, feature flags, or endpoint URLs.
- StringList: A comma-separated list of values.
- SecureString: An encrypted value using AWS KMS. This provides similar encryption to Secrets Manager but without the rotation features.
Creating parameters with the CLI:
# Plain string parameter
aws ssm put-parameter \
--name "/prod/task-api/log-level" \
--type "String" \
--value "info"
# Encrypted parameter
aws ssm put-parameter \
--name "/prod/task-api/api-key" \
--type "SecureString" \
--value "sk_live_abc123"
# Get a parameter
aws ssm get-parameter \
--name "/prod/task-api/log-level" \
--query "Parameter.Value" \
--output text
# Get an encrypted parameter (decrypt it)
aws ssm get-parameter \
--name "/prod/task-api/api-key" \
--with-decryption \
--query "Parameter.Value" \
--output text
# Get all parameters under a path
aws ssm get-parameters-by-path \
--path "/prod/task-api/" \
--with-decryption
The hierarchical path naming (/environment/service/parameter) is the same convention we used with
Secrets Manager, and it makes IAM policies straightforward:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:GetParametersByPath"
],
"Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/task-api/*"
}
]
}
A common pattern is to use Parameter Store for non-sensitive config (log level, feature flags, service URLs) and Secrets Manager for truly sensitive values (database passwords, API keys). This keeps costs down and gives you the best of both services.
Practical example: loading config from env vars and Secrets Manager
Let’s bring everything together. Here is a realistic example of loading configuration in a TypeScript application that reads from environment variables first, then falls back to AWS Secrets Manager for sensitive values.
First, install the AWS SDK:
npm install @aws-sdk/client-secrets-manager @aws-sdk/client-ssm
Now the configuration loader:
// src/config.ts
import {
SecretsManagerClient,
GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";
const smClient = new SecretsManagerClient({ region: "us-east-1" });
const ssmClient = new SSMClient({ region: "us-east-1" });
interface AppConfig {
port: number;
logLevel: string;
dbUrl: string;
apiKey: string;
jwtSecret: string;
}
async function getSecret(secretId: string): Promise<string> {
const command = new GetSecretValueCommand({ SecretId: secretId });
const response = await smClient.send(command);
if (!response.SecretString) {
throw new Error(`Secret ${secretId} has no string value`);
}
return response.SecretString;
}
async function getParameter(name: string): Promise<string> {
const command = new GetParameterCommand({
Name: name,
WithDecryption: true,
});
const response = await ssmClient.send(command);
if (!response.Parameter?.Value) {
throw new Error(`Parameter ${name} not found`);
}
return response.Parameter.Value;
}
export async function loadConfig(): Promise<AppConfig> {
const env = process.env.APP_ENV || "dev";
// Non-sensitive config: prefer env vars, fall back to Parameter Store
const port = process.env.PORT
? parseInt(process.env.PORT, 10)
: 3000;
const logLevel = process.env.LOG_LEVEL
|| await getParameter(`/${env}/task-api/log-level`).catch(() => "info");
// Sensitive config: prefer env vars (for local dev), fall back to Secrets Manager
let dbUrl = process.env.DATABASE_URL;
let apiKey = process.env.API_KEY;
let jwtSecret = process.env.JWT_SECRET;
if (!dbUrl || !apiKey || !jwtSecret) {
console.log(`Fetching secrets from AWS Secrets Manager for env: ${env}`);
const secretString = await getSecret(`${env}/task-api/credentials`);
const secrets = JSON.parse(secretString);
dbUrl = dbUrl || secrets.DB_URL;
apiKey = apiKey || secrets.API_KEY;
jwtSecret = jwtSecret || secrets.JWT_SECRET;
}
if (!dbUrl || !apiKey || !jwtSecret) {
throw new Error("Missing required configuration. Check env vars or Secrets Manager.");
}
return { port, logLevel, dbUrl, apiKey, jwtSecret };
}
And here is how you use it in your application entry point:
// src/index.ts
import "dotenv/config";
import { loadConfig } from "./config";
import { createApp } from "./app";
async function main() {
const config = await loadConfig();
console.log(`Starting server on port ${config.port} (log level: ${config.logLevel})`);
const app = createApp(config);
app.listen(config.port, () => {
console.log(`Server running at http://localhost:${config.port}`);
});
}
main().catch((err) => {
console.error("Failed to start:", err);
process.exit(1);
});
This setup works for both local development and production:
- Local development: Developers set values in their .env file. The app reads from process.env and never hits AWS.
- Production: The .env file does not exist. The app detects the missing env vars and fetches from Secrets Manager. The ECS task role provides the necessary IAM permissions.
Environment promotion: dev, staging, and production
When you have multiple environments, you need a clear strategy for what changes between them and what stays the same. The general principle is: your code and Docker image should be identical across all environments. Only the configuration should differ.
Things that should differ between environments:
- Database connection strings: Each environment has its own database.
- API keys and secrets: Separate keys for each environment, so a compromised dev key does not affect production.
- Log levels: Usually debug in dev, info in staging, warn or error in production.
- Feature flags: Test new features in staging before enabling them in production.
- Scaling parameters: Dev runs one instance, production runs three or more.
- External service endpoints: Dev might point to sandbox APIs, production to live ones.
Things that should NOT differ between environments:
- Application code: The same Docker image runs everywhere. No environment-specific code paths.
- Business logic: If your app behaves differently in staging and production, you are going to have a bad time.
- Configuration structure: The same keys exist in all environments, just with different values.
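That last point, identical configuration structure, is easy to enforce with a quick check. Here is a sketch (function name is my own) that takes per-environment config maps and reports any key that is not defined everywhere; you could run it in CI against the output of `get-parameters-by-path` for each environment:

```typescript
// Sanity check: every environment defines the same configuration keys,
// only the values differ. Returns the keys missing from at least one env.
function inconsistentKeys(
  envs: Record<string, Record<string, string>>
): string[] {
  const allKeys = new Set<string>();
  for (const cfg of Object.values(envs)) {
    for (const key of Object.keys(cfg)) allKeys.add(key);
  }
  return [...allKeys].filter((key) =>
    Object.values(envs).some((cfg) => !(key in cfg))
  );
}
```

A non-empty result usually means someone added a parameter to dev and forgot staging or production, which is exactly the kind of drift that causes a deploy to fail at 2 a.m.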
Here is a practical structure using Parameter Store and Secrets Manager:
Parameter Store:
/dev/task-api/log-level = "debug"
/staging/task-api/log-level = "info"
/prod/task-api/log-level = "warn"
/dev/task-api/feature-new-ui = "true"
/staging/task-api/feature-new-ui = "true"
/prod/task-api/feature-new-ui = "false"
Secrets Manager:
dev/task-api/credentials = { DB_URL: "...", API_KEY: "...", JWT_SECRET: "..." }
staging/task-api/credentials = { DB_URL: "...", API_KEY: "...", JWT_SECRET: "..." }
prod/task-api/credentials = { DB_URL: "...", API_KEY: "...", JWT_SECRET: "..." }
Your ECS task definition sets a single environment variable, APP_ENV, to tell the application which
environment it is running in. The config loader (like the one we built above) uses that value to
fetch the right secrets:
{
"containerDefinitions": [
{
"name": "task-api",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/task-api:v1.2.3",
"environment": [
{ "name": "APP_ENV", "value": "prod" },
{ "name": "PORT", "value": "3000" }
]
}
]
}
Notice that the only values in the task definition are non-sensitive. The database URL and API keys are fetched from Secrets Manager at runtime, so they never appear in your Terraform code, CloudFormation templates, or ECS console.
ECS integration with Secrets Manager
ECS also has native integration with Secrets Manager, where it can inject secret values directly as environment variables when starting a container. This means your application does not need to call the Secrets Manager API at all:
{
"containerDefinitions": [
{
"name": "task-api",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/task-api:v1.2.3",
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/task-api/credentials:DB_URL::"
},
{
"name": "API_KEY",
"valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/task-api/credentials:API_KEY::"
}
],
"environment": [
{ "name": "APP_ENV", "value": "prod" },
{ "name": "PORT", "value": "3000" }
]
}
]
}
The valueFrom field uses the format secret-arn:json-key:version-stage:version-id. The double
colon at the end means “use the latest version”. This approach is simpler because your application
just reads process.env.DATABASE_URL like normal, and ECS handles the Secrets Manager integration.
The trade-off is that the secret values are only fetched when the container starts. If a secret rotates, you need to restart the container to pick up the new value. The SDK-based approach from the previous section lets you re-fetch secrets without restarting.
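Because the secret ARN itself contains colons, reading a valueFrom string by eye can be confusing. A small parser makes the format concrete. This sketch is for illustration only and assumes the three trailing fields (json-key, version-stage, version-id) are present, even if empty, as in the task definition above:

```typescript
// Split a valueFrom string of the form
//   <secret-arn>:<json-key>:<version-stage>:<version-id>
// from the right, since the ARN portion contains colons of its own.
function parseValueFrom(valueFrom: string) {
  const parts = valueFrom.split(":");
  const versionId = parts.pop();
  const versionStage = parts.pop();
  const jsonKey = parts.pop();
  return {
    secretArn: parts.join(":"),
    jsonKey: jsonKey || undefined,       // empty string means "whole secret"
    versionStage: versionStage || undefined, // empty string means AWSCURRENT
    versionId: versionId || undefined,
  };
}
```

Running it on the DATABASE_URL entry from the task definition yields the bare secret ARN, jsonKey DB_URL, and undefined stage and version, which is why ECS resolves the latest value of that JSON key.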
Advanced tools: Vault, SOPS, and Sealed Secrets
Everything we have covered so far handles the most common scenarios well. But as your infrastructure grows, you might need more specialized tools. I covered these in depth in the SRE: Secrets Management in Kubernetes article, so here is a quick overview with links:
- HashiCorp Vault: A full-featured secrets management platform. It supports dynamic secrets (generate a fresh database credential for each request), encryption as a service, and audit logging. Ideal for large organizations with complex compliance requirements.
- SOPS: Mozilla’s tool for encrypting secrets in files. You can store encrypted YAML, JSON, or .env files directly in Git. SOPS encrypts only the values, not the keys, so diffs are still readable. Great for GitOps workflows.
- Sealed Secrets: A Kubernetes-specific solution. You encrypt secrets locally with a public key, commit the encrypted version to Git, and the Sealed Secrets controller in your cluster decrypts them. Perfect for GitOps with Kubernetes.
For the scope of this series, AWS Secrets Manager and Parameter Store will cover everything you need. If you are working with Kubernetes and want the deep dive into these tools, check out the SRE article linked above.
Quick reference: choosing the right approach
Here is a simple decision guide:
Is it sensitive (password, API key, token)?
YES --> Use AWS Secrets Manager
- Needs rotation? --> Enable Secrets Manager rotation
- Multiple services need it? --> Use resource-based policy
NO --> Is it environment-specific config?
YES --> Use Parameter Store (free tier)
NO --> Hard-code it as an application default
And here is a comparison table:
Feature | Env Vars | Parameter Store | Secrets Manager
-------------------------|---------------|-----------------|----------------
Cost | Free | Free (std tier) | $0.40/secret/mo
Encryption | No | Optional (KMS) | Always (KMS)
Rotation | Manual | Manual | Automatic
Audit logging | No | CloudTrail | CloudTrail
Version history | No | Yes | Yes
Cross-account access | No | Yes | Yes
Best for | Local dev | Non-sensitive | Sensitive data
Closing notes
You now have a solid understanding of how to manage configuration and secrets in a real application.
The key takeaways are: follow the 12-factor methodology and keep config out of your code, use .env
files for local development but never commit them, scan your repositories for leaked secrets with
tools like gitleaks, use AWS Secrets Manager for sensitive values and Parameter Store for everything
else, and structure your configuration so that the same Docker image works in every environment.
These are the fundamentals that will serve you well regardless of which cloud provider or orchestration platform you end up using. In the next article, we will tackle DNS, TLS, and making your application reachable from the internet with a proper domain name and HTTPS. See you there.
Hope you found this useful and enjoyed reading it, until next time!
Errata
If you spot any error or have any suggestion, please send me a message so it gets fixed.
Also, you can check the source code and changes in the sources here