Running postgres in kubernetes with hugepages

It might happen that you need to run postgres on a node with hugepages enabled. This is normally not an issue, except when you're working with containers, for example within kubernetes/openshift.

In our case we are using mayastor at the storage layer, which requires us to enable hugepages. But postgres needs to run on the same system as well. Both support huge pages, so it should just work, right?
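For reference, huge pages have to be reserved on the node itself before kubernetes can hand them out. A minimal sketch via sysctl (the count of 1024 pages is just an assumption here; check the mayastor docs for what your setup actually needs):

```ini
# /etc/sysctl.d/hugepages.conf
# reserve 1024 x 2MiB huge pages (= 2GiB) on the node; adjust to your workload
vm.nr_hugepages = 1024
```

Apply it with `sysctl --system` (or reboot); the kubelet then advertises the pages as the `hugepages-2Mi` resource.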

Unfortunately, even though everything should work, postgres has issues starting: it detects huge pages but fails to allocate them unless you explicitly assign them to the container (https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/).
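On the postgres side, huge page usage is controlled by the huge_pages parameter in postgresql.conf. A sketch of the available values (the default is try, which is exactly what makes this failure mode surprising):

```ini
# postgresql.conf
huge_pages = try   # default: use huge pages if available, otherwise fall back
# huge_pages = on  # refuse to start if huge pages cannot be allocated
# huge_pages = off # never use huge pages
```

With try you would expect a clean fallback, but inside a container the hugetlb cgroup limit can still get in the way unless the pod is granted its own huge pages.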

Add hugepages to resources

The fix is to add a hugepages limit to the container's resources:

apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-hugepages
spec:
  containers:
  - name: postgres
    image: postgres
    resources:
      limits:
        hugepages-2Mi: 512Mi
        memory: 512Mi
      requests:
        memory: 128Mi

In this example you can see that we only add the hugepages-2Mi directive (for 2Mi-sized huge pages). With this in place, postgres will start right away without any issues (another potential issue is shared memory, but that's not part of this post).

Omit the request

As request and limit need to match for hugepages, you can just omit the value under requests; kubernetes/openshift will fill it in for you.
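After applying the manifest above, the API server defaults the missing hugepages request to the limit, so the stored resources section looks roughly like this (a sketch, using the values from the example):

```yaml
resources:
  limits:
    hugepages-2Mi: 512Mi
    memory: 512Mi
  requests:
    hugepages-2Mi: 512Mi   # defaulted from the limit
    memory: 128Mi
```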

Quite an easy fix. If you're more interested in huge pages, check out the following links:

https://wiki.debian.org/Hugepages

https://docs.openshift.com/container-platform/4.7/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.html