The Simplest GitOps Implementation That Actually Works
Introduction
In this article we will strip GitOps down to its bare essentials and build the simplest implementation that actually works. No fancy operators, minimal tooling - just Git, GitHub Actions, and a sprinkle of automation magic.
After exploring different GitOps approaches in my previous article, I realized that sometimes we overthink things. Sometimes, all you need is a simple, reliable pipeline that gets the job done. Let’s build exactly that, using a real example from my tools repository.
The goal here is simple:
- One repository for your application code
- One repository for your Kubernetes manifests
- One workflow that connects them together
- Zero additional infrastructure beyond what you already have
Just pure, simple GitOps that you can understand, debug, and maintain without a PhD in cloud-native technologies.
What we’re building
We’re going to implement a push-based GitOps workflow that:
- Builds and tests your Go application
- Creates a container image with proper versioning
- Pushes it to GitHub Container Registry (free with GitHub!)
- Updates your Kubernetes manifests automatically
- Adds security scanning because, well, we’re not cowboys
The entire setup requires just two repositories and one GitHub Actions workflow:
- Application repository: github.com/kainlite/tools - where your Go code lives
- Manifests repository: github.com/kainlite/tools-manifests - where your Kubernetes manifests live
For the deployment part, I’m using ArgoCD to watch the manifests repository and sync changes to the cluster, but you could just as easily apply the manifests manually or use a simple CronJob. The beauty is in the simplicity of the pipeline itself.
The Application Repository
First, let’s talk about the application repository. This is where your code lives, and where developers spend most of their time. The only GitOps-specific thing here is the CI/CD workflow.
Here’s what happens when you push code:
name: CI/CD Pipeline
on:
  push:
    branches: [ master ]
    tags: [ 'v*' ]
Simple trigger - push to master or create a tag, and the magic begins. No webhooks to configure, no external services to integrate. GitHub handles everything.
Step 1: Testing (Because We’re Professionals)
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Set up Go
      uses: actions/setup-go@v4
      with:
        go-version: '1.24'
    - name: Run tests
      run: go test -v ./...
    - name: Run golangci-lint
      uses: golangci/golangci-lint-action@v8
      with:
        version: latest
Nothing fancy here. Check out the code, set up Go, run the tests, and lint the code. If tests fail or the linter complains, nothing else happens. This is your first quality gate, and it’s non-negotiable.
Step 2: Build and Push the Container
Here’s where things get interesting. We’re using GitHub Container Registry (ghcr.io) because it’s free, integrated, and just works:
build-and-push:
  needs: test
  runs-on: ubuntu-latest
  permissions:
    contents: read
    packages: write
  steps:
    - name: Log in to Container Registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
Notice something beautiful here? We’re using GITHUB_TOKEN for the registry, so there’s no need to create and manage registry credentials. GitHub provides this token automatically with just the right permissions. One less secret to rotate, one less thing to worry about.
The image tagging strategy is where the magic happens:
- name: Extract metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/${{ github.repository }}
    tags: |
      type=sha,prefix=,suffix=,format=short
      type=ref,event=branch
      type=raw,value=latest,enable={{is_default_branch}}
We tag images with the commit SHA. Why? Because SHAs are immutable, unique, and tell you exactly what code is running in production. No more “latest” nightmares, no more version conflicts.
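The metadata step only computes tags and labels; a build step has to consume them. That step isn’t shown above, but a minimal sketch looks like this, assuming the Dockerfile sits at the repository root (the exact step in my workflow may differ slightly):

- uses: actions/checkout@v4
- name: Build and push image
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}

One thing to double-check in your own setup: type=sha,format=short produces a short SHA tag, while later steps in this article reference the full ${{ github.sha }}. Make sure the tag you push and the tag you reference downstream agree, otherwise the scan and the manifest update will point at a tag that was never published.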
Step 3: Security Scanning
Before we deploy anything, let’s make sure we’re not shipping known vulnerabilities:
security-scan:
  needs: build-and-push
  runs-on: ubuntu-latest
  steps:
    - name: Run Trivy vulnerability scanner
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: ghcr.io/${{ github.repository }}:${{ github.sha }}
        format: 'sarif'
        output: 'trivy-results.sarif'
Trivy scans our image for known vulnerabilities and reports them directly to GitHub’s Security tab. If critical vulnerabilities are found, you’ll know immediately. No external dashboards, no additional logins - everything stays in GitHub.
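One detail worth noting: the step above only writes trivy-results.sarif. For the findings to actually show up in the Security tab, the SARIF file has to be uploaded, and the job needs the security-events: write permission. A minimal sketch of that upload step, assuming the standard github/codeql-action uploader:

- name: Upload Trivy results
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: 'trivy-results.sarif'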
Step 4: The GitOps Magic - Updating Manifests
This is where GitOps actually happens. After our image is built and scanned, we update the manifest repository:
update-manifests:
  needs: [build-and-push, security-scan]
  runs-on: ubuntu-latest
  steps:
    - name: Checkout manifest repository
      uses: actions/checkout@v4
      with:
        repository: kainlite/tools-manifests
        token: ${{ secrets.MANIFEST_REPO_TOKEN }}
        path: manifests
    - name: Update deployment image
      working-directory: manifests
      run: |
        yq eval '.spec.template.spec.containers[0].image = "ghcr.io/kainlite/tools:${{ github.sha }}"' \
          -i 02-deployment.yaml
    - name: Commit and push changes
      working-directory: manifests
      run: |
        git config --local user.email "[email protected]"
        git config --local user.name "GitHub Action"
        git add 02-deployment.yaml
        git commit -m "Update image to sha-${{ github.sha }} from ${{ github.repository }}"
        git push
This is beautiful in its simplicity. We check out the manifest repo, update the image tag using yq, commit the change, and push. That’s it. Your manifests now reflect the exact version that was just built. Note that I’m using Kustomize in my setup, but the principle remains the same - update the image reference, commit, push.
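For reference, in a Kustomize-based layout the image reference usually lives in kustomization.yaml rather than in the Deployment itself, so the CI step edits that file instead (with yq, or with kustomize edit set image). A rough sketch of what such a file might look like, using the image from the example below:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 02-deployment.yaml
images:
  - name: ghcr.io/kainlite/tools
    newTag: 968faeda187b88f51dd07635301839cee38754f3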
The Manifest Repository
The manifest repository is even simpler. It contains your Kubernetes YAML files and… that’s it. No scripts, no pipelines, just declarative configuration:
# 02-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
  namespace: tools
spec:
  template:
    spec:
      containers:
        - name: tools
          image: ghcr.io/kainlite/tools:968faeda187b88f51dd07635301839cee38754f3
          # This SHA gets updated automatically by CI
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
If you’re using ArgoCD like I am, you’d also have an Application spec:
# 01-appspec.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tools
  namespace: tools
spec:
  source:
    repoURL: https://github.com/kainlite/tools-manifests
    targetRevision: master
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: tools
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Every change to this repository is tracked in Git. You can see who deployed what, when, and why. Need to rollback? Just revert the commit. Need to see what’s running in production? Look at the main branch. The SHA in the image tag tells you exactly which commit is deployed.
Setting It Up
Ready to implement this? Here’s your checklist:
1. Create a Personal Access Token
# Go to GitHub Settings > Developer settings > Personal access tokens
# Create a token with 'repo' scope for the manifest repository
# Save it as MANIFEST_REPO_TOKEN in your app repo's secrets
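# Or, with the GitHub CLI installed, the same thing from your terminal:
#   gh secret set MANIFEST_REPO_TOKEN --repo <your-user>/tools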
2. Create Your Manifest Repository
mkdir k8s-manifests
cd k8s-manifests
git init
# Add your Kubernetes YAML files
cp /path/to/your/*.yaml .
git add .
git commit -m "Initial manifests"
# Create the repository on GitHub first, then point origin at it
git remote add origin [email protected]:<your-user>/k8s-manifests.git
git push -u origin HEAD
3. Add the Workflow
Copy the workflow to .github/workflows/ci.yaml in your application repository. Update the repository names and you’re done.
4. Deploy to Your Cluster
Now, you have a few options:
Option A: Using ArgoCD (what I use)
If you have ArgoCD installed, just apply the Application spec and it will handle everything:
kubectl apply -f https://raw.githubusercontent.com/kainlite/tools-manifests/main/01-appspec.yaml
ArgoCD will then watch the repository and automatically sync changes. Done.
Option B: Manual sync (simplest)
kubectl apply -f https://raw.githubusercontent.com/kainlite/tools-manifests/main/02-deployment.yaml
Option C: Automated sync without ArgoCD
Set up a simple CronJob in your cluster that pulls and applies changes:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: gitops-sync
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: sync
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  # raw.githubusercontent.com has no directory index, so apply the files explicitly
                  kubectl apply -f https://raw.githubusercontent.com/kainlite/tools-manifests/main/02-deployment.yaml
          restartPolicy: OnFailure
That’s it. Every 5 minutes, your cluster checks for changes and applies them. No operators needed if you don’t want them.
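One caveat with this option: the pod runs under the namespace’s default service account, which is not allowed to modify Deployments out of the box. In practice you would create a dedicated ServiceAccount, grant it just enough RBAC, and reference it via serviceAccountName in the CronJob’s pod spec. A minimal sketch, with illustrative names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitops-sync
  namespace: tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-sync
  namespace: tools
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitops-sync
  namespace: tools
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitops-sync
subjects:
  - kind: ServiceAccount
    name: gitops-sync
    namespace: tools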
Why This Works
This approach might seem too simple, but that’s exactly why it works:
- No learning curve: If you know Git and basic CI/CD, you’re ready
- Debuggable: When something breaks, you can see exactly where and why
- Portable: Works with any Kubernetes cluster, anywhere
- Auditable: Every change is in Git with full history
- Free: Uses only GitHub’s free tier features
- Secure: Minimal attack surface, standard GitHub security
When to Use This
This setup is perfect for:
- Small to medium teams getting started with GitOps
- Projects where simplicity trumps features
- Teams that want to understand their entire pipeline
- Situations where you can’t install additional tools in the cluster
It’s probably not ideal if you need:
- Multi-cluster deployments
- Complex rollout strategies (canary, blue-green)
- Automatic rollback on metrics
- Multi-tenancy with strict RBAC
But you know what? You can always add those features later. Start simple, understand the basics, then add complexity only when you actually need it.
Common Pitfalls and Solutions
Image not updating? Check that your image pull policy isn’t set to IfNotPresent with a tag that doesn’t change. Using SHAs solves this automatically.
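In other words, once every release gets a unique tag, a combination like the following is safe (values illustrative, taken from the deployment above):

image: ghcr.io/kainlite/tools:968faeda187b88f51dd07635301839cee38754f3
imagePullPolicy: IfNotPresent   # fine, because the tag changes on every build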
Manifest repo token expired? Use GitHub’s fine-grained personal access tokens with longer expiration dates, or better yet, use a GitHub App for production.
Need to rollback quickly?
git revert HEAD
git push
# Wait for sync, or manually apply
Conclusion
GitOps doesn’t have to be complicated. This simple setup gives you 90% of the benefits with 10% of the complexity. You get version control, automated deployments, security scanning, and full auditability with just one GitHub Actions workflow.
Start here, get comfortable with the concepts, and then explore more advanced tools like ArgoCD or Flux when you actually need their features. Remember, the best GitOps implementation is the one your team can understand and maintain.
Sometimes, the simplest solution is the best solution. And in this case, simple doesn’t mean amateur - it means focused, maintainable, and production-ready.
Hope you found this useful and enjoyed reading it, until next time!