Learning Kubernetes with some kindness
I really enjoy getting familiar with something by installing it locally and monkeying with it a bit. Things that run in Kubernetes (k8s), with all the complications that come along, are no different. Having a convenient way to spin up and tear down a cluster makes learning about Cloud Native Computing Foundation (CNCF) projects much more fun. Here’s a short post that demonstrates how to create a four-node k8s cluster.
Until recently, I had used Vagrant to handle cluster creation (see https://github.com/galexrt/k8s-vagrant-multi-node for an example of how that works). Although that works, it’s pretty resource intensive, and as I built out how the infrastructure pieces would play together it just became too much for my laptop to handle.
I’ve tried minikube a bit, but it just wasn’t the same experience (and in some cases not possible at all) when trying to segregate workloads.
A coworker did a demo with kind (Kubernetes IN Docker), https://kind.sigs.k8s.io/, and I was instantly sold on it. It uses Docker containers as its k8s nodes, so it’s much easier on system resources. Since it was made to test Kubernetes itself, the clusters it creates are multi-node and compatible with everything I’ve tried to do with them so far. And it’s fast!
Requirements for running kind are minimal: you need a reasonably recent version of Go installed (if you’re installing it that way) and, of course, Docker. Installation is pretty straightforward and described well here: https://kind.sigs.k8s.io/docs/user/quick-start#installation
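For reference, here’s a rough sketch of the two install paths from that quick-start. I’m assuming Linux on amd64 and kind v0.11.1 (roughly the release that pairs with the kindest/node:v1.21.1 image you’ll see below); check the installation page for the current version and your platform.

❯ GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1

Or, skipping Go entirely, grab a release binary:

❯ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
❯ chmod +x ./kind
❯ sudo mv ./kind /usr/local/bin/kind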
Here’s an example of getting up and running with a 4-node (1 control-plane / 3 worker) cluster.
❯ kind create cluster --config cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind!

❯ k get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
kind-control-plane   Ready    control-plane,master   7m5s    v1.21.1
kind-worker          Ready    <none>                 6m32s   v1.21.1
kind-worker2         Ready    <none>                 6m32s   v1.21.1
kind-worker3         Ready    <none>                 6m32s   v1.21.1
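(The k above is just my shell alias for kubectl.) If you want a quick sanity check that the cluster is actually usable, something along these lines should do it; the hello deployment name is arbitrary and can be deleted right after:

❯ kubectl --context kind-kind create deployment hello --image=nginx
❯ kubectl --context kind-kind get pods -o wide
❯ kubectl --context kind-kind delete deployment hello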
This is the config I’ll be using in the next few posts. Here are the guts of the cluster.yaml file:
# a cluster with a control-plane node and 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
- role: worker
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://kind-registry:5000"]
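A couple of notes on that config: the ingress-ready node label plus the extraPortMappings for ports 80 and 443 are there so an ingress controller can be reached on localhost in a later post, and the containerdConfigPatches block redirects image pulls for localhost:5000 to a registry container named kind-registry. kind doesn’t create that registry for you; a minimal sketch of standing one up with plain Docker (the name and port just need to match whatever you put in the patch) looks roughly like this, with the network connect step done after the cluster exists:

❯ docker run -d --restart=always -p "127.0.0.1:5000:5000" --name kind-registry registry:2
❯ docker network connect "kind" kind-registry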
When you’re done with it, or you goof it up so badly that it needs to be rebuilt, getting rid of it is as easy as this:
❯ kind delete cluster
Deleting cluster "kind" ...
Now you know how to create a local k8s cluster the kind way.