
Running postgres in kubernetes with hugepages

To run postgres in a container on nodes with huge pages enabled requires you to configure the container accordingly. This post shows how to do this on kubernetes/openshift.

Daniel Nachtrub

It might happen that you need to run postgres on a node with hugepages enabled - this is normally not an issue, except when you're working with containers, for example within kubernetes/openshift.

In our case we are using mayastor at the storage layer, which requires us to enable hugepages. But postgres also needs to run on the same system. Both support huge pages, so everything should be fine, right?
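Before any pod can consume huge pages, the node itself has to reserve them. A minimal sketch of checking and sizing the 2Mi pool on the node - the page count used here is purely illustrative, size it for your actual workloads:

```shell
# Check how many 2Mi huge pages the node currently reserves
grep HugePages_Total /proc/meminfo

# Reserve 1024 x 2Mi pages (2Gi) at runtime (requires root)
sysctl -w vm.nr_hugepages=1024

# Persist the setting across reboots (file name is illustrative)
echo "vm.nr_hugepages=1024" > /etc/sysctl.d/90-hugepages.conf
```

The kubelet picks up the reserved pages and advertises them as an allocatable `hugepages-2Mi` resource on the node.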

Unfortunately, although everything should work, postgres has issues starting - it detects huge pages but fails to allocate them unless you explicitly assign them to the container.

Add hugepages to resources

The fix is to add a hugepages-limit to the resources:

apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-hugepages
spec:
  containers:
  - name: postgres
    image: postgres
    resources:
      limits:
        hugepages-2Mi: 512Mi
        memory: 512Mi
      requests:
        memory: 128Mi
huge page allocation
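The limit must be a whole multiple of the page size, because the kubelet accounts huge pages in whole pages - 512Mi of 2Mi pages works out to:

```shell
# hugepages-2Mi: 512Mi corresponds to this many 2Mi pages
echo $(( 512 / 2 ))   # -> 256
```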

In this example you can see that we only add the hugepages-2Mi directive (for 2Mi sized hugepages). With this in place, postgres will start right away without any issues (another potential issue is shared memory, but that's not part of this post).
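Whether postgres uses huge pages at all is governed by its `huge_pages` setting, which defaults to `try` (fall back to normal pages on failure). A hedged sketch for confirming the setting once the pod is up - the pod name matches the manifest above, everything else assumes the stock postgres image:

```shell
# Show the effective huge_pages setting inside the container
kubectl exec postgres-with-hugepages -- \
  psql -U postgres -c "SHOW huge_pages;"

# Optionally make allocation failures fatal instead of silent;
# takes effect after a postgres restart
kubectl exec postgres-with-hugepages -- \
  psql -U postgres -c "ALTER SYSTEM SET huge_pages = 'on';"
```

Setting `huge_pages = 'on'` is useful while debugging, since a silent fallback to normal pages is easy to miss.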

Omit request: As request and limit need to match for hugepages, you can simply omit the value at request - kubernetes/openshift will fill it in for you.
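You can verify the defaulting behaviour after the pod is created - kubernetes copies the hugepages limit into the requests (pod name as in the manifest above):

```shell
# Inspect the effective resource requests of the postgres container
kubectl get pod postgres-with-hugepages \
  -o jsonpath='{.spec.containers[0].resources.requests}'
```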

Quite an easy fix. If you'd like to dig deeper into huge pages, the Linux kernel documentation covers them in detail.


Daniel Nachtrub

Just some guy working with computers.