k8s resource limits and their edge cases
In a Kubernetes Pod spec, resource limits can be quite surprising...
To ensure that each application in a Kubernetes cluster has sufficient resources to fulfill its task, the Pod spec allows you to define the required resources. This can be done in several ways - one of them is to directly specify the minimum requested and maximum allowed resources in the Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-load
spec:
  containers:
  - name: cpu-stress
    image: polinux/stress
    resources:
      requests:
        cpu: "0.5"
        memory: 128Mi
      limits:
        cpu: "1"
        memory: 256Mi
    command: ["stress"]
    args: ["--cpu", "30", "--timeout", "300s", "--verbose"]
  restartPolicy: Never
Requests
In the example above, half a CPU and 128Mi of memory are defined as the minimum required. When scheduling this pod, the kube-scheduler only places it on a node that can guarantee these resources to the pod at any time. The pod is not forced to use these resources, however; it can always use less.
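The scheduling guarantee also works in reverse: if no node can reserve the requested resources, the pod stays Pending. A minimal sketch (the pod name and the 100-CPU request are made up for illustration; the request is deliberately unsatisfiable on typical nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unschedulable-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: polinux/stress
    resources:
      requests:
        cpu: "100"           # more CPU than a typical node can offer
        memory: 128Mi
```

On most clusters this pod should remain in the Pending state, and `kubectl describe pod unschedulable-demo` would show a FailedScheduling event along the lines of "Insufficient cpu".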
Limits
The limits of a pod are a boundary the pod cannot overstep. How this is enforced depends on the resource type: if the pod tries to use more CPU than its limit, it is throttled; if it exceeds its memory limit, it is killed ("OOMKilled").
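The memory side is easy to observe. The following sketch (assuming the same polinux/stress image as above; the pod name is made up) asks the stress tool to allocate more memory than the limit allows:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: oom-demo             # hypothetical name
spec:
  containers:
  - name: mem-stress
    image: polinux/stress
    resources:
      limits:
        memory: 256Mi
    command: ["stress"]
    # --vm-bytes 300M deliberately exceeds the 256Mi limit
    args: ["--vm", "1", "--vm-bytes", "300M", "--vm-hang", "1"]
```

The container should be killed shortly after it starts allocating, and `kubectl get pod oom-demo` would then report the termination reason OOMKilled.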
Edge Cases
Most of the time, a pod has some kind of requests and limits specified; this is considered best practice. But what happens if someone sets the limits to 0? First of all, yes, this is possible - as long as the requests are also set to 0. The configuration in this case looks like this:
resources:
  requests:
    cpu: "0"
    memory: "0"
  limits:
    cpu: "0"
    memory: "0"
This configuration disables all requests and limits for this pod - the behaviour is the same as if no requests or limits were defined at all.
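One way to see this equivalence - assuming, as far as I can tell from the QoS rules, that zero-valued requests are ignored when Kubernetes computes the QoS class - is that both variants should land the pod in the BestEffort class:

```yaml
# Sketch: a container with the zero-valued resources block above and a
# container with no resources at all should both end up with
#   status.qosClass: BestEffort
# which you could verify with: kubectl get pod <name> -o jsonpath='{.status.qosClass}'
resources: {}
```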
Implications
In most cases, this will not break anything. Although it goes against all recommendations, it is a valid configuration. However, when using policy enforcement tools such as OPA or Kyverno, this configuration can and will cause problems. If a policy is in place to enforce requests and limits on all pods, it may not work as intended: when inspecting the Pod spec, the configuration looks valid because requests and limits are present, even though they effectively do nothing.
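As a sketch of how this slips through, consider a Kyverno-style validation rule (modelled on Kyverno's sample require-requests-limits policy; treat the exact schema as an assumption). The "?*" pattern only requires a non-empty value, so the string "0" satisfies it:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
spec:
  validationFailureAction: Enforce
  rules:
  - name: validate-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "CPU and memory requests and limits are required."
      pattern:
        spec:
          containers:
          - resources:
              requests:
                cpu: "?*"     # "0" matches: any non-empty value passes
                memory: "?*"
              limits:
                cpu: "?*"
                memory: "?*"
```

A stricter policy would have to reject zero values explicitly rather than only checking that the fields are present.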