DevOps from Zero to Hero: Your First TypeScript API with Express and Docker
Introduction
Welcome to article two of the DevOps from Zero to Hero series. In the first article we set up our development environment and got familiar with the basic tools. Now it is time to build something real: a REST API that we can deploy, test, and iterate on throughout the rest of the series.
We are going to build a simple task tracker API using TypeScript and Express. Nothing fancy, just CRUD operations on an in-memory array. The goal is not to build a production-grade app right now, but to have a working API that we can containerize, deploy, and improve in future articles.
After the API is working, we will write a Dockerfile using multi-stage builds, set up a
.dockerignore, run the container as a non-root user, add a health check endpoint, and wire
everything up with Docker Compose for local development with hot reload.
Let’s get into it.
Why TypeScript and Express?
You might wonder why we are not using Python, Go, or something else. TypeScript with Express is one of the most common stacks you will encounter in the wild. It has a massive ecosystem, the tooling is mature, and the concepts translate directly to other languages and frameworks.
For DevOps, the language itself matters less than understanding how to build, test, package, and deploy applications. We picked TypeScript because it gives us type safety without too much ceremony, and Express because it is minimal enough that we can focus on the DevOps side of things.
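As a quick illustration of what "type safety without too much ceremony" buys us, here is a minimal sketch (not part of the task API itself) showing the compiler catching shape mistakes before the code ever runs:

```typescript
// Minimal illustration of compile-time safety; this Task shape is a
// simplified stand-in, not the full model we define later.
interface Task {
  id: number;
  title: string;
}

const task: Task = { id: 1, title: "Learn Docker" };

// Both of these are rejected at compile time, before any deploy:
// task.titel = "typo";            // error: Property 'titel' does not exist on type 'Task'
// const bad: Task = { id: 2 };    // error: Property 'title' is missing

// Because the compiler guarantees title is a string, this cannot crash:
console.log(task.title.toUpperCase()); // "LEARN DOCKER"
```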
Project setup
First, create a new directory and initialize the project:
mkdir task-api && cd task-api
npm init -y
Install the dependencies we need:
npm install express
npm install -D typescript @types/express @types/node ts-node nodemon
Now create the TypeScript configuration. This tells the compiler how to process our code:
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "resolveJsonModule": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
Update your package.json scripts section:
{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "nodemon --watch src --ext ts --exec ts-node src/index.ts"
  }
}
Create the source directory:
mkdir src
Defining the task model
Let’s start with a simple type definition for our tasks. Create src/types.ts:
// src/types.ts
export interface Task {
  id: number;
  title: string;
  description: string;
  completed: boolean;
  createdAt: string;
  updatedAt: string;
}

export interface CreateTaskRequest {
  title: string;
  description?: string;
}

export interface UpdateTaskRequest {
  title?: string;
  description?: string;
  completed?: boolean;
}
This gives us a clear contract for what a task looks like and what data we expect when creating or updating one.
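Keep in mind that these interfaces only exist at compile time; nothing stops a client from sending malformed JSON at runtime. As an aside (this helper is hypothetical and not used by the API we build below), a small type guard sketches how the same contract could be enforced at runtime:

```typescript
// Hypothetical runtime counterpart to the CreateTaskRequest interface.
// The API below does a simpler inline check; this shows the full pattern.
interface CreateTaskRequest {
  title: string;
  description?: string;
}

function isCreateTaskRequest(body: unknown): body is CreateTaskRequest {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  // title must be a non-empty string
  if (typeof b.title !== "string" || b.title.trim() === "") return false;
  // description is optional, but must be a string when present
  if (b.description !== undefined && typeof b.description !== "string") return false;
  return true;
}

console.log(isCreateTaskRequest({ title: "Learn Docker" })); // true
console.log(isCreateTaskRequest({ title: "" }));             // false
console.log(isCreateTaskRequest({ description: "oops" }));   // false
```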
Building the API
Now let’s build the actual API. Create src/index.ts:
// src/index.ts
import express, { Request, Response } from "express";
import { Task, CreateTaskRequest, UpdateTaskRequest } from "./types";

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(express.json());

// In-memory storage
let tasks: Task[] = [];
let nextId = 1;

// Health check endpoint
app.get("/health", (_req: Request, res: Response) => {
  res.json({
    status: "healthy",
    uptime: process.uptime(),
    timestamp: new Date().toISOString(),
  });
});

// GET /tasks - List all tasks
app.get("/tasks", (_req: Request, res: Response) => {
  res.json({
    data: tasks,
    count: tasks.length,
  });
});

// GET /tasks/:id - Get a single task
app.get("/tasks/:id", (req: Request, res: Response) => {
  const task = tasks.find((t) => t.id === parseInt(req.params.id));
  if (!task) {
    res.status(404).json({ error: "Task not found" });
    return;
  }
  res.json({ data: task });
});

// POST /tasks - Create a new task
app.post("/tasks", (req: Request, res: Response) => {
  const body: CreateTaskRequest = req.body;
  if (!body.title || body.title.trim() === "") {
    res.status(400).json({ error: "Title is required" });
    return;
  }
  const now = new Date().toISOString();
  const task: Task = {
    id: nextId++,
    title: body.title.trim(),
    description: body.description?.trim() || "",
    completed: false,
    createdAt: now,
    updatedAt: now,
  };
  tasks.push(task);
  res.status(201).json({ data: task });
});

// PUT /tasks/:id - Update a task
app.put("/tasks/:id", (req: Request, res: Response) => {
  const taskIndex = tasks.findIndex((t) => t.id === parseInt(req.params.id));
  if (taskIndex === -1) {
    res.status(404).json({ error: "Task not found" });
    return;
  }
  const body: UpdateTaskRequest = req.body;
  const existing = tasks[taskIndex];
  const updated: Task = {
    ...existing,
    title: body.title?.trim() ?? existing.title,
    description: body.description?.trim() ?? existing.description,
    completed: body.completed ?? existing.completed,
    updatedAt: new Date().toISOString(),
  };
  tasks[taskIndex] = updated;
  res.json({ data: updated });
});

// DELETE /tasks/:id - Delete a task
app.delete("/tasks/:id", (req: Request, res: Response) => {
  const taskIndex = tasks.findIndex((t) => t.id === parseInt(req.params.id));
  if (taskIndex === -1) {
    res.status(404).json({ error: "Task not found" });
    return;
  }
  const deleted = tasks.splice(taskIndex, 1)[0];
  res.json({ data: deleted, message: "Task deleted" });
});

// Start the server
app.listen(PORT, () => {
  console.log(`Task API running on port ${PORT}`);
});

export default app;
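One subtle design choice in the PUT handler is the nullish coalescing operator (??) instead of ||. With ||, a client sending completed: false would have its value treated as falsy and silently discarded, making it impossible to un-complete a task. A standalone sketch of the same merge pattern shows the difference:

```typescript
// Standalone sketch of the PUT handler's merge pattern.
// ?? falls back only on null/undefined; || also falls back on false
// and "", which would silently drop a legitimate completed: false.
interface TaskFields {
  title: string;
  completed: boolean;
}

function mergeWithNullish(existing: TaskFields, patch: Partial<TaskFields>): TaskFields {
  return {
    title: patch.title ?? existing.title,
    completed: patch.completed ?? existing.completed,
  };
}

function mergeWithOr(existing: TaskFields, patch: Partial<TaskFields>): TaskFields {
  return {
    title: patch.title || existing.title,
    completed: patch.completed || existing.completed,
  };
}

const current: TaskFields = { title: "Learn Docker", completed: true };

// With ??, completed: false is respected.
console.log(mergeWithNullish(current, { completed: false })); // { title: 'Learn Docker', completed: false }

// With ||, completed: false is lost and the task stays completed.
console.log(mergeWithOr(current, { completed: false })); // { title: 'Learn Docker', completed: true }
```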
Testing the API locally
Start the development server:
npm run dev
You should see Task API running on port 3000. Now let’s test each endpoint with curl:
# Health check
curl http://localhost:3000/health | jq

# Create a task
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Learn Docker", "description": "Build and run containers"}' | jq

# Create another task
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Write Dockerfile", "description": "Multi-stage build"}' | jq

# List all tasks
curl http://localhost:3000/tasks | jq

# Get a single task
curl http://localhost:3000/tasks/1 | jq

# Update a task
curl -X PUT http://localhost:3000/tasks/1 \
  -H "Content-Type: application/json" \
  -d '{"completed": true}' | jq

# Delete a task
curl -X DELETE http://localhost:3000/tasks/2 | jq
You should see proper JSON responses for each request. The health check returns the server uptime, the POST returns the created task with an auto-incremented ID, and so on.
The Dockerfile
Now we get to the fun part. We are going to containerize this API using Docker best practices.
First, let’s talk about why multi-stage builds matter. A typical TypeScript project has development dependencies (the compiler, type definitions, nodemon) that we do not need at runtime. With multi-stage builds, we compile in one stage and copy only the output to a smaller final image. This means smaller images, faster pulls, and a smaller attack surface.
Create the Dockerfile:
# Stage 1: Build
FROM node:20-alpine AS builder

WORKDIR /app

# Copy package files first for better layer caching
COPY package*.json ./

# Install all dependencies (including devDependencies for building)
RUN npm ci

# Copy source code
COPY tsconfig.json ./
COPY src ./src

# Compile TypeScript
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production

# Add a non-root user
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser

WORKDIR /app

# Copy package files and install production-only dependencies
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Copy compiled output from builder stage
COPY --from=builder /app/dist ./dist

# Switch to non-root user
USER appuser

# Expose the port
EXPOSE 3000

# Set environment variable
ENV NODE_ENV=production

# Health check using the /health endpoint
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

# Start the application
CMD ["node", "dist/index.js"]
Let’s break down what each part does:
- Multi-stage build: We use two stages. The first installs all dependencies and compiles TypeScript. The second only has production dependencies and the compiled JavaScript. This keeps the final image small.
- Alpine base: We use node:20-alpine instead of node:20. Alpine is a minimal Linux distribution that produces much smaller images.
- Layer caching: We copy package*.json before the source code. This means Docker can cache the npm ci layer and only reinstall dependencies when package.json changes.
- Non-root user: Running as root inside a container is a security risk. We create a dedicated user and switch to it before starting the app.
- Health check: Docker monitors container health by hitting our /health endpoint. Kubernetes ignores the Docker HEALTHCHECK instruction, but its liveness and readiness probes point at the same kind of endpoint to decide when to restart unhealthy containers.
The .dockerignore file
Just like .gitignore keeps files out of your repository, .dockerignore keeps files out of your
Docker build context. This makes builds faster and prevents sensitive files from leaking into images.
Create .dockerignore:
node_modules
dist
npm-debug.log
.git
.gitignore
.env
.env.*
*.md
.vscode
.idea
coverage
.nyc_output
The most important entry is node_modules. Without this, Docker would copy your entire local
node_modules directory into the build context, which is slow and unnecessary since we run
npm ci inside the container anyway.
Building and running the container
Build the image:
docker build -t task-api:latest .
You should see Docker executing both stages. The first time takes a bit longer because it downloads the base image and installs dependencies. Subsequent builds are faster thanks to layer caching.
Check the image size:
docker images task-api
The Alpine-based multi-stage image should be around 130-150 MB. Compare that to a full node:20
image which starts at over 900 MB before you even add your code.
Run the container:
docker run -d --name task-api -p 3000:3000 task-api:latest
Test it:
# Health check
curl http://localhost:3000/health | jq

# Create a task
curl -X POST http://localhost:3000/tasks \
  -H "Content-Type: application/json" \
  -d '{"title": "Running in Docker!"}' | jq
Check the container health status:
docker inspect --format='{{.State.Health.Status}}' task-api
After about 30 seconds, it should show healthy.
Stop and remove the container when you are done:
docker stop task-api && docker rm task-api
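One thing worth knowing here: docker stop sends the process SIGTERM, then SIGKILL after a grace period (10 seconds by default). Our API does not handle SIGTERM, so it relies on Node's default behavior of exiting. As a sketch of how a graceful shutdown handler could look (this wrapper is hypothetical, not part of the article's code), using a plain Node http server to stay self-contained:

```typescript
import * as http from "http";

// Hypothetical wrapper: listen, and close the server cleanly when Docker
// sends SIGTERM on `docker stop`, instead of waiting for the kill timeout.
function listenWithGracefulShutdown(server: http.Server, port: number): void {
  server.listen(port);
  process.on("SIGTERM", () => {
    console.log("SIGTERM received, closing server");
    // Stop accepting new connections, then exit once in-flight requests finish.
    server.close(() => process.exit(0));
  });
}
```

An Express app could be passed to this pattern the same way, since app.listen returns an http.Server.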
Docker Compose for local development
Running docker build and docker run every time you change code gets old fast. Docker Compose
gives us a better workflow. We can define services, mount our source code as a volume, and get hot
reload inside the container.
Create docker-compose.yml:
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
      - ./package.json:/app/package.json
    environment:
      - NODE_ENV=development
      - PORT=3000
    restart: unless-stopped
We need a separate Dockerfile for development since we want ts-node and nodemon available.
Create Dockerfile.dev:
FROM node:20-alpine
WORKDIR /app
# Copy package files and install all dependencies
COPY package*.json ./
RUN npm ci
# Copy TypeScript config
COPY tsconfig.json ./
# Copy source code (will be overridden by volume mount)
COPY src ./src
# Expose the port
EXPOSE 3000
# Run with nodemon for hot reload
CMD ["npx", "nodemon", "--watch", "src", "--ext", "ts", "--exec", "ts-node", "src/index.ts"]
Start the development environment:
docker compose up
Now edit src/index.ts, save the file, and watch nodemon restart automatically inside the
container. Your changes appear without rebuilding the image. This is the development workflow you
want: fast feedback loops while still running inside a container.
To run it in the background:
docker compose up -d
Check the logs:
docker compose logs -f api
Stop everything:
docker compose down
Why containers matter for DevOps
We just went from “code on my machine” to “code in a container.” This might seem like extra work for a simple API, but containers solve real problems that show up in every team:
- Reproducibility: The container runs the same way on your laptop, in CI, and in production. No more "it works on my machine" conversations.
- Consistency: Everyone on the team uses the same Node.js version, the same OS, the same dependencies. The Dockerfile is the single source of truth.
- Isolation: Your app runs in its own filesystem and network namespace. It does not conflict with other services on the same machine.
- Portability: The image runs anywhere Docker runs: local machines, cloud VMs, Kubernetes clusters. You build once and deploy anywhere.
- Immutability: Once built, the image does not change. You do not SSH into production and tweak files. You build a new image and deploy it.
These properties are the foundation of modern DevOps. Every tool and practice we cover in this series builds on top of containers. CI/CD pipelines build container images. Kubernetes orchestrates them. GitOps tracks which image version runs where. Without containers, none of that works as smoothly.
Project structure recap
At this point, your project should look like this:
task-api/
├── src/
│ ├── index.ts
│ └── types.ts
├── .dockerignore
├── docker-compose.yml
├── Dockerfile
├── Dockerfile.dev
├── package.json
├── package-lock.json
└── tsconfig.json
Closing notes
In this article we built a complete REST API with TypeScript and Express, then containerized it
using Docker best practices. We covered multi-stage builds, non-root users, health checks,
.dockerignore, and Docker Compose for local development.
The API itself is intentionally simple. It stores tasks in memory, which means all data disappears when the container restarts. That is fine for now. In a future article we will add a real database and learn how to manage data persistence with containers.
In the next article, we will set up a CI/CD pipeline that automatically builds our Docker image, runs tests, and pushes the image to a container registry. That is where the DevOps workflow really starts to come together.
Hope you found this useful and enjoyed reading it, until next time!
Errata
If you spot any error or have any suggestion, please send me a message so it gets fixed.
Also, you can check the source code and changes in the sources here