From time to time we need small container hosts - very small, single-node hosts. To work with these and still get the benefits of GitOps deployments, we use microk8s, which has a reasonable footprint.
On our installations we strictly keep persistent data on dedicated disks and do not store it on the operating system disks (for various reasons). So our layout looks like this:
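For illustration, a layout along these lines - the device names, sizes, and mount points here are hypothetical examples, not our exact setup:

```
# OS disk: operating system only, no persistent workload data
/dev/sda1   /

# dedicated data disk: all persistent data lives here,
# including the microk8s PV root at /mnt/data/microk8s
/dev/sdb1   /mnt/data
```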
As these hosts are by design not meant to be scaled out, we want to avoid replicated storage like Ceph or Mayastor, as it adds memory overhead and reduces throughput. We want a very simple setup: hostPath storage.
Long story short: we enable hostPath storage as described in the documentation, with microk8s enable hostpath-storage.
But then all PVs default to /var/snap/microk8s/common/default-storage/, and we don't want that. We want the location to be /mnt/data/microk8s.
To adjust this, the documentation suggests creating a new storageClass - which didn't work in our case. We needed to adjust the hostpath-provisioner deployment (namespace: kube-system), as the hostPath root used for provisioning is mapped there. Additionally (to make things clearer), we provided a custom default storageClass without a pvDir parameter.
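A sketch of the relevant parts of the adjusted deployment - the fields shown here reflect the microk8s hostpath-storage addon as we encountered it, but the exact container spec may differ between releases, so check your deployment before patching:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-provisioner
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: hostpath-provisioner
          env:
            # was: /var/snap/microk8s/common/default-storage
            - name: PV_DIR
              value: /mnt/data/microk8s
          volumeMounts:
            - name: pv-volume
              mountPath: /mnt/data/microk8s
      volumes:
        - name: pv-volume
          hostPath:
            # host directory on the dedicated data disk
            path: /mnt/data/microk8s
```

And a custom default storageClass without a pvDir parameter, roughly like this (the class name is an example):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: data-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```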
As you can see, the only things we adjusted are the mountPath and the PV_DIR environment variable within the deployment.
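To verify the change, you can create a small PVC against the default class and check where the bound PV's hostPath points - the claim name here is just an example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Once the claim is bound, kubectl describe pv on the resulting volume should show a path under /mnt/data/microk8s instead of the snap default.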
Newly created PVs will now be placed in the new location - it would be nice if the documentation were updated to reflect this requirement. Enjoy!
On a recent project I stumbled upon a case where Kerberos tickets were inadvertently shared across containers on a node - which obviously caught my attention, as I'm not keen on sharing such secrets across workloads. This post describes why this happens and what to do to prevent it.
If you run Kubernetes on your own, you need to provide a storage solution with it. We are using Ceph (operated through Rook). This article gives a short overview of its benefits and some of its pros and cons.