Testing the Operator SDK and creating a prefetch mechanism for Kubernetes
Introduction
In this article we will explore how to create an operator that can prefetch our images (from our deployments, on every node) using the Operator SDK. You might wonder why you would want to do this: the main idea is to fetch the images ahead of time so they don't have to be downloaded when a pod actually needs to start running on a given node. This can speed things up a bit, and it's also an interesting exercise.
If you have read the article Cloud native applications with kubebuilder and kind, aka kubernetes operators, you will notice that the commands are very similar to each other, since operator-sdk now uses kubebuilder. You can read more about it here.
The source code for this article is here
Prerequisites
Creating our local cluster
Kind configuration for multi-node
This is the kind configuration needed to get a multi-node setup locally: kind create cluster --config kind.yaml
# kind create cluster --config kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Creating the cluster
We will need a cluster to run and test our operator. Kind is fairly simple and lightweight enough to run anywhere.
❯ kind create cluster --config kind.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
Creating our operator or controller
Let's start by creating the project
$ operator-sdk init --domain=techsquad.rocks --repo=github.com/kainlite/kubernetes-prefetch-operator
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/[email protected]
go: downloading sigs.k8s.io/controller-runtime v0.6.2
go: downloading k8s.io/apimachinery v0.18.6
go: downloading k8s.io/client-go v0.18.6
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading k8s.io/apiextensions-apiserver v0.18.6
go: downloading github.com/gogo/protobuf v1.3.1
go: downloading golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd
go: downloading github.com/google/gofuzz v1.1.0
go: downloading k8s.io/api v0.18.6
go: downloading github.com/golang/protobuf v1.4.2
go: downloading sigs.k8s.io/structured-merge-diff/v3 v3.0.0
go: downloading github.com/fsnotify/fsnotify v1.4.9
go: downloading k8s.io/utils v0.0.0-20200603063816-c1c6865ac451
go: downloading github.com/imdario/mergo v0.3.9
go: downloading github.com/hashicorp/golang-lru v0.5.4
go: downloading github.com/json-iterator/go v1.1.10
go: downloading github.com/google/go-cmp v0.4.0
go: downloading golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975
go: downloading google.golang.org/protobuf v1.23.0
go: downloading gopkg.in/yaml.v2 v2.3.0
go: downloading sigs.k8s.io/yaml v1.2.0
go: downloading k8s.io/kube-openapi v0.0.0-20200410145947-61e04a5be9a6
go: downloading github.com/prometheus/procfs v0.0.11
go: downloading golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7
go: downloading k8s.io/klog/v2 v2.0.0
go: downloading golang.org/x/text v0.3.3
Update go.mod:
$ go mod tidy
go: downloading github.com/onsi/gomega v1.10.1
go: downloading github.com/onsi/ginkgo v1.12.1
go: downloading go.uber.org/atomic v1.4.0
go: downloading golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
go: downloading github.com/nxadm/tail v1.4.4
Running make:
$ make
/home/kainlite/Webs/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ operator-sdk create api
Creating our API
This will be the object that holds all the important information for a given image. The files we need to modify first are controllers/*_controller.go and api/v1/*_types.go.
$ operator-sdk create api --group=cache --version=v1 --kind=Prefetch
Building and publishing (Docker image)
Basic build and push of the operator image, with the help of the project's Makefile.
$ make docker-build docker-push IMG=kainlite/kubernetes-prefetch-operator:latest
/home/kainlite/Webs/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/home/kainlite/Webs/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
mkdir -p /home/kainlite/Webs/kubernetes-prefetch-operator/testbin
test -f /home/kainlite/Webs/kubernetes-prefetch-operator/testbin/setup-envtest.sh || curl -sSLo /home/kainlite/Webs/kubernetes-prefetch-operator/testbin/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.6.3/hack/setup-envtest.sh
source /home/kainlite/Webs/kubernetes-prefetch-operator/testbin/setup-envtest.sh; fetch_envtest_tools /home/kainlite/Webs/kubernetes-prefetch-operator/testbin; setup_envtest_env /home/kainlite/Webs/kubernetes-prefetch-operator/testbin; go test ./... -coverprofile cover.out
Using cached envtest tools from /home/kainlite/Webs/kubernetes-prefetch-operator/testbin
setting up env vars
? github.com/kainlite/kubernetes-prefetch-operator [no test files]
? github.com/kainlite/kubernetes-prefetch-operator/api/v1 [no test files]
ok github.com/kainlite/kubernetes-prefetch-operator/controllers 7.643s coverage: 0.0% of statements
docker build . -t kainlite/kubernetes-prefetch-operator:latest
Sending build context to Docker daemon 283.5MB
Step 1/14 : FROM golang:1.13 as builder
---> d6f3656320fe
Step 2/14 : WORKDIR /workspace
---> Using cache
---> daa8163e90d8
Step 3/14 : COPY go.mod go.mod
---> Using cache
---> 915e48e7d848
Step 4/14 : COPY go.sum go.sum
---> Using cache
---> aaafab83a12c
Step 5/14 : RUN go mod download
---> Using cache
---> 4f9b0dc66b6e
Step 6/14 : COPY main.go main.go
---> Using cache
---> 6650d207bf3d
Step 7/14 : COPY api/ api/
---> Using cache
---> 02f5deba19a4
Step 8/14 : COPY controllers/ controllers/
---> Using cache
---> c115b1d97125
Step 9/14 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
---> Using cache
---> 93d496caf9b3
Step 10/14 : FROM gcr.io/distroless/static:nonroot
---> 0b9eb5cc7e55
Step 11/14 : WORKDIR /
---> Using cache
---> 6cbde711827b
Step 12/14 : COPY --from=builder /workspace/manager .
---> Using cache
---> e5b22a5aba41
Step 13/14 : USER nonroot:nonroot
---> Using cache
---> a77bd02bcecd
Step 14/14 : ENTRYPOINT ["/manager"]
---> Using cache
---> 582cb3195193
Successfully built 582cb3195193
Successfully tagged kainlite/kubernetes-prefetch-operator:latest
docker push kainlite/kubernetes-prefetch-operator:latest
The push refers to repository [docker.io/kainlite/kubernetes-prefetch-operator]
b667daa3236e: Pushed
fd6fa224ea91: Pushed
latest: digest: sha256:f0519419c8c4bfdcd4a9b2d3f0e7d0086f3654659058de62447f373fd0489ddc size: 739
Deploying
Now that we have the project built into a Docker image and stored on DockerHub, we can install our CRD and then deploy the operator.
$ make install
/home/kainlite/Webs/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/usr/bin/kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/prefetches.cache.techsquad.rocks created
Deploying the operator
$ make deploy IMG=kainlite/kubernetes-prefetch-operator:latest
/home/kainlite/Webs/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /usr/bin/kustomize edit set image controller=kainlite/kubernetes-prefetch-operator:latest
/usr/bin/kustomize build config/default | kubectl apply -f -
namespace/kubernetes-prefetch-operator-system created
customresourcedefinition.apiextensions.k8s.io/prefetches.cache.techsquad.rocks configured
role.rbac.authorization.k8s.io/kubernetes-prefetch-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/kubernetes-prefetch-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/kubernetes-prefetch-operator-proxy-role created
clusterrole.rbac.authorization.k8s.io/kubernetes-prefetch-operator-metrics-reader created
rolebinding.rbac.authorization.k8s.io/kubernetes-prefetch-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-prefetch-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-prefetch-operator-proxy-rolebinding created
service/kubernetes-prefetch-operator-controller-manager-metrics-service created
deployment.apps/kubernetes-prefetch-operator-controller-manager created
Validating
We verify that the pod is running.
$ kubectl get pods -n kubernetes-prefetch-operator-system
NAME READY STATUS RESTARTS AGE
kubernetes-prefetch-operator-controller-manager-59d8bc86-2z2sq 2/2 Running 0 66s
So far everything is fine, but our operator is a bit useless at this point, so let's add some code to make it do what we want…
Our code
Much of what we use is generated; however, we need to give our operator specific permissions and behavior so it does what we want when we create an object in Kubernetes.
Our manifest
This is the manifest we will use to tell our operator which deployments we want to prefetch images for.
apiVersion: cache.techsquad.rocks/v1
kind: Prefetch
metadata:
  name: prefetch-sample
  namespace: default
spec:
  # We will use labels to fetch the deployments that we want to fetch images for, but we
  # don't want to prefetch everything in the cluster, that would be too much bandwidth
  # for no reason; for this deployment we want to have it everywhere, ready to be used.
  filter_by_labels:
    app: nginx
  retry_after: 60
  # this is a strings.Contains
  node_filter: worker
Example nginx deployment
This nginx deployment will be used to validate that the images are prefetched on all nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  # One of our nodes won't have that label so we can validate
  # that our operator prefetches images even if the deployment
  # has not created a pod in that node (this isn't really necessary
  # because we can have 3 nodes and request 2 replicas, but just in case)
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nginx-schedulable
                operator: In
                values:
                - "true"
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
We don't really need to do this, but this way it's easy to make sure no pod gets scheduled if the label isn't present: kubectl label nodes kind-worker3 nginx-schedulable="true"
Our actual logic (this made me laugh: so much bootstrapping just to get here, but imagine having to do all of that yourself)
This is where everything actually happens. First we update our Spec:
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Phases
const (
	PhasePending = "PENDING"
	PhaseRunning = "RUNNING"
	PhaseFailed  = "FAILED"
)

// PrefetchSpec defines the desired state of Prefetch
type PrefetchSpec struct {
	// Labels are the labels to use to filter the deployments
	// +kubebuilder:default={}
	FilterByLabels map[string]string `json:"filter_by_labels,omitempty"`

	// Simple matcher of the hostname of the nodes
	NodeFilter string `json:"node_filter,omitempty"`

	// The time to wait between fetch and fetch;
	// if not specified it will default to 300 seconds
	// +optional
	// +kubebuilder:validation:Minimum=0
	RetryAfter int `json:"retry_after,omitempty"`
}

// PrefetchStatus defines the observed state of Prefetch
type PrefetchStatus struct {
	Phase string `json:"phase,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Prefetch is the Schema for the prefetches API
type Prefetch struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PrefetchSpec   `json:"spec,omitempty"`
	Status PrefetchStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// PrefetchList contains a list of Prefetch
type PrefetchList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Prefetch `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Prefetch{}, &PrefetchList{})
}
This file can be found here
Now on to the behavior:
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/go-logr/logr"
	"github.com/prometheus/common/log"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	"github.com/google/uuid"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	set "k8s.io/apimachinery/pkg/labels"

	cachev1 "github.com/kainlite/kubernetes-prefetch-operator/api/v1"
)

// PrefetchReconciler reconciles a Prefetch object
type PrefetchReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

// I have been rather permissive than restrictive here, so be aware of that when using this
// +kubebuilder:rbac:groups=cache.techsquad.rocks,resources=prefetches,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.techsquad.rocks,resources=cache,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.techsquad.rocks,resources=prefetches/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cache.techsquad.rocks,resources=pods/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;delete
// +kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch
// +kubebuilder:rbac:groups="",resources=nodes,verbs=get;list;watch

func getClientSet() (*kubernetes.Clientset, error) {
	// creates the in-cluster config
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}

	// creates the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	return clientset, err
}
func fetchImagesWithTags(clientset *kubernetes.Clientset, labels map[string]string) []string {
	list := []string{}
	labelsAsString := set.FormatLabels(labels)
	fmt.Printf("labelsAsString: %+v\n", labelsAsString)

	// List deployments across all namespaces matching the labels
	deploymentsClient := clientset.AppsV1().Deployments("")
	deploymentList, err := deploymentsClient.List(context.TODO(), metav1.ListOptions{LabelSelector: labelsAsString})
	if err != nil {
		fmt.Printf("Error fetching deployments, check your labels: %+v\n", err)
		return list
	}

	for _, d := range deploymentList.Items {
		for _, f := range d.Spec.Template.Spec.InitContainers {
			fmt.Printf("Adding init container %s to the list\n", f.Image)
			list = append(list, f.Image)
		}
		for _, f := range d.Spec.Template.Spec.Containers {
			fmt.Printf("Adding container %s to the list\n", f.Image)
			list = append(list, f.Image)
		}
	}

	return list
}
func fetchNodeNames(clientset *kubernetes.Clientset, prefetch *cachev1.Prefetch) []string {
	list := []string{}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Printf("Error fetching nodes, check your permissions: %+v\n", err)
		return list
	}

	for _, node := range nodes.Items {
		if strings.Contains(node.Name, prefetch.Spec.NodeFilter) {
			list = append(list, node.Name)
		}
	}

	fmt.Printf("Node list: %+v\n", list)
	return list
}
func PrefetchImages(r *PrefetchReconciler, prefetch *cachev1.Prefetch) {
	id := uuid.New()
	prefix := "prefetch-pod"
	name := prefix + "-" + id.String()
	labels := map[string]string{
		"app": prefix,
	}

	clientset, _ := getClientSet()
	imagesWithTags := fetchImagesWithTags(clientset, prefetch.Spec.FilterByLabels)
	nodeList := fetchNodeNames(clientset, prefetch)

	for _, node := range nodeList {
		for _, image := range imagesWithTags {
			// command := fmt.Sprintf("docker pull %s")
			command := "/bin/sh -c exit"
			pod := &corev1.Pod{
				ObjectMeta: metav1.ObjectMeta{
					Name:      name + "-" + node,
					Namespace: prefetch.Namespace,
					Labels:    labels,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Name:    "prefetch",
							Command: strings.Split(command, " "),
							Image:   image,
							// Initially I was going to use a privileged container
							// to talk to the docker daemon, but I then realized
							// it's easier to call the image with exit 0
							// Image: "docker/dind",
							// SecurityContext: &v1.SecurityContext{Privileged: &privileged},
						},
					},
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Affinity: &corev1.Affinity{
						NodeAffinity: &corev1.NodeAffinity{
							RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
								NodeSelectorTerms: []corev1.NodeSelectorTerm{
									{
										MatchExpressions: []corev1.NodeSelectorRequirement{
											{
												Key:      "kubernetes.io/hostname",
												Operator: "In",
												Values:   []string{node},
											},
										},
									},
								},
							},
						},
					},
				},
			}

			if prefetch.Status.Phase == "" || prefetch.Status.Phase == cachev1.PhasePending {
				prefetch.Status.Phase = cachev1.PhaseRunning
			}

			switch prefetch.Status.Phase {
			case cachev1.PhasePending:
				prefetch.Status.Phase = cachev1.PhaseRunning
			case cachev1.PhaseRunning:
				if err := controllerutil.SetControllerReference(prefetch, pod, r.Scheme); err != nil {
					fmt.Printf("Failed to set the owner reference: %+v\n", err)
				}

				found := &corev1.Pod{}
				nsName := types.NamespacedName{Name: pod.Name, Namespace: pod.Namespace}
				err := r.Get(context.TODO(), nsName, found)
				if err != nil && errors.IsNotFound(err) {
					if err := r.Create(context.TODO(), pod); err != nil {
						fmt.Printf("There was an error invoking the pod: %+v\n", err)
					} else {
						fmt.Printf("Pod launched with name: %+v\n", pod.Name)
					}
				} else if found.Status.Phase == corev1.PodFailed ||
					found.Status.Phase == corev1.PodSucceeded {
					fmt.Printf("Container terminated with reason: %+v, and message: %+v\n",
						found.Status.Reason, found.Status.Message)
					prefetch.Status.Phase = cachev1.PhaseFailed
				}
			}

			// Update the prefetch object, setting the status to the respective phase:
			if err := r.Status().Update(context.TODO(), prefetch); err != nil {
				fmt.Printf("Failed to update the status: %+v\n", err)
			}
		}
	}
}
func DeleteCompletedPods(prefetch *cachev1.Prefetch) {
	fieldSelectorFilter := "status.phase=Succeeded"

	clientset, _ := getClientSet()
	pods, err := clientset.CoreV1().Pods(prefetch.Namespace).List(context.TODO(), metav1.ListOptions{FieldSelector: fieldSelectorFilter})
	if err != nil {
		fmt.Printf("failed to retrieve Pods: %+v\n", err)
		return
	}

	for _, pod := range pods.Items {
		fmt.Printf("Deleting pod: %+v\n", pod.Name)
		if err := clientset.CoreV1().Pods(prefetch.Namespace).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
			fmt.Printf("Failed to delete Pod: %+v\n", err)
		}
	}
}
func (r *PrefetchReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	_ = context.Background()
	r.Log.WithValues("prefetch", req.NamespacedName)

	prefetch := &cachev1.Prefetch{}
	err := r.Client.Get(context.TODO(), req.NamespacedName, prefetch)
	if err != nil {
		log.Error(err, "failed to get Prefetch resource")
		if errors.IsNotFound(err) {
			// Request object not found, it could have been deleted after
			// the reconcile request: return and don't requeue.
			return reconcile.Result{}, client.IgnoreNotFound(err)
		}
		// Error reading the object: requeue the request.
		return reconcile.Result{}, err
	}

	fmt.Printf("Filter by labels %+v\n", prefetch.Spec.FilterByLabels)
	fmt.Printf("RetryAfter %+v\n", prefetch.Spec.RetryAfter)

	var retryAfter int
	if prefetch.Spec.RetryAfter != 0 {
		retryAfter = prefetch.Spec.RetryAfter
	} else {
		retryAfter = 300
	}

	if len(prefetch.Spec.FilterByLabels) > 0 {
		PrefetchImages(r, prefetch)
	} else {
		fmt.Printf("Skipping empty labels\n")
	}

	DeleteCompletedPods(prefetch)

	// Requeue so the whole prefetch cycle runs again after the configured interval:
	return ctrl.Result{RequeueAfter: time.Second * time.Duration(retryAfter)}, nil
}
func (r *PrefetchReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Prefetch{}).
		Complete(r)
}
Basically, what we do is set a timer to create a pod on each node in order to force the image that the deployments (filtered by labels) need or will use to be pulled. If the node already has the image, nothing happens and the pod will be cleaned up on the next run; if the image is not present, it gets downloaded, so if something happens and a pod needs to be scheduled, it won't have to download everything first, which should make the process relatively faster. You can find this file here
What we should be seeing in our cluster
$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-deployment-697c4998bb-2qm6h 1/1 Running 3 6d4h 10.244.2.3 kind-worker3 <none> <none>
default nginx-deployment-697c4998bb-tnzpx 1/1 Running 3 6d4h 10.244.2.4 kind-worker3 <none> <none>
default nginx-deployment-798984b768-ndsrk 0/1 Pending 0 27m <none> <none> <none> <none>
default prefetch-pod-b1ba3b2f-6667-4cb0-99c8-30d5cafa572a-kind-worker 0/1 Completed 0 51s 10.244.3.72 kind-worker <none> <none>
default prefetch-pod-b1ba3b2f-6667-4cb0-99c8-30d5cafa572a-kind-worker2 0/1 Completed 0 51s 10.244.1.84 kind-worker2 <none> <none>
default prefetch-pod-b1ba3b2f-6667-4cb0-99c8-30d5cafa572a-kind-worker3 0/1 Completed 0 51s 10.244.2.37 kind-worker3 <none> <none>
kube-system coredns-66bff467f8-4vnd8 1/1 Running 6 6d7h 10.244.3.2 kind-worker <none> <none>
kube-system coredns-66bff467f8-tsrtp 1/1 Running 6 6d7h 10.244.3.3 kind-worker <none> <none>
Cleanup
To remove the operator from the cluster you can do the following; also remember to clean up your clusters, or whatever you are using if it's in the cloud, to avoid unexpected bills:
$ kubectl delete -f config/samples/cache_v1_prefetch.yaml
prefetch.cache.techsquad.rocks "prefetch-sample" deleted
$ kustomize build config/default | kubectl delete -f -
namespace "kubernetes-prefetch-operator-system" deleted
customresourcedefinition.apiextensions.k8s.io "prefetches.cache.techsquad.rocks" deleted
role.rbac.authorization.k8s.io "kubernetes-prefetch-operator-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-prefetch-operator-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-prefetch-operator-proxy-role" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-prefetch-operator-metrics-reader" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-prefetch-operator-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-prefetch-operator-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-prefetch-operator-proxy-rolebinding" deleted
service "kubernetes-prefetch-operator-controller-manager-metrics-service" deleted
deployment.apps "kubernetes-prefetch-operator-controller-manager" deleted
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-697c4998bb-2qm6h 1/1 Running 3 6d4h
default nginx-deployment-697c4998bb-tnzpx 1/1 Running 3 6d4h
default nginx-deployment-798984b768-ndsrk 0/1 Pending 0 38m
kube-system coredns-66bff467f8-4vnd8 1/1 Running 6 6d7h
kube-system coredns-66bff467f8-tsrtp 1/1 Running 6 6d7h
kube-system etcd-kind-control-plane 1/1 Running 2 5d7h
kube-system kindnet-6g7fc 1/1 Running 7 6d7h
kube-system kindnet-jxjdd 1/1 Running 6 6d7h
kube-system kindnet-rw28j 1/1 Running 5 6d7h
kube-system kindnet-w4wqg 1/1 Running 5 6d7h
kube-system kube-apiserver-kind-control-plane 1/1 Running 2 5d7h
kube-system kube-controller-manager-kind-control-plane 1/1 Running 9 6d7h
kube-system kube-proxy-b9js2 1/1 Running 4 6d7h
kube-system kube-proxy-cc89w 1/1 Running 4 6d7h
kube-system kube-proxy-fwk7n 1/1 Running 4 6d7h
kube-system kube-proxy-prbds 1/1 Running 4 6d7h
kube-system kube-scheduler-kind-control-plane 1/1 Running 10 6d7h
local-path-storage local-path-provisioner-bd4bb6b75-6mnrg 1/1 Running 13 6d7h
$ kind delete cluster
Deleting cluster "kind" ...
Closing notes
Make sure to check the links if you want to learn more about the project. I hope you enjoyed it! See you on twitter or github!
- https://sdk.operatorframework.io/docs/building-operators/golang/tutorial/
- https://sdk.operatorframework.io/docs/building-operators/golang/operator-scope/
- https://opensource.com/article/20/3/kubernetes-operator-sdk
The source code for this article is here
Errata
If you find any errors or have a suggestion, send me a message so it can be fixed.