
upgrade postgres with timescaledb

Upgrading postgres can be a challenging task because databases are usually in active use, and extensions like timescaledb add even more complexity. This guide shows a fast way to upgrade a postgres database that runs timescaledb.

Daniel Nachtrub

timescaledb runs as an extension on postgres. While upgrading the timescaledb extension itself is quite simple, upgrading the underlying postgres is a little more challenging.
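
For comparison, upgrading only the extension usually boils down to a single SQL statement per database (psql -X skips psqlrc files, as recommended by timescale; the database name below is just a placeholder):

psql -X -d mydb -c "ALTER EXTENSION timescaledb UPDATE;"
upgrade the extension only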

This guide will show how to upgrade postgres (in this example from postgres 12 to postgres 13) using pg_upgrade. As we might be upgrading large databases, we'll use hardlinks to avoid copying the data files.

The environment uses a container to run the migration.

The filesystem layout looks like the following

/path/to/postgres/
└── data
    ├── pg12
    │   ├── base
    │   ├── ...
    │   └── pg_xact
    └── pg13
filesystem

The only important point is that the OLD and NEW version data directories share a common parent folder, so that we can mount it into the container.

Prepare the migration container

Let's start a container we'll use for the migration.

# run a container that just stays alive so we can exec into it
docker run -d --name migrate \
  -v /path/to/postgres/data:/data \
  ubuntu bash -c "while true; do echo Sleeping; sleep 60; done"
spawn container
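
The following commands are all executed inside this container, so attach a shell to it first:

docker exec -it migrate bash
enter the container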

Configure the container.

# install requirements
apt update
apt install -y gnupg postgresql-common apt-transport-https lsb-release wget curl nano

# add repos
/usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
curl -L https://packagecloud.io/timescale/timescaledb/gpgkey | apt-key add -
sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/ubuntu/ $(lsb_release -c -s) main' > /etc/apt/sources.list.d/timescaledb.list"

# install timescaledb for both postgres majors
# the same extension version must be present on the old and the new cluster,
# otherwise pg_upgrade cannot load the extension library in the new version
apt update
apt install -y timescaledb-2-2.3.1-postgresql-12 timescaledb-2-2.3.1-postgresql-13

# generate locales required by postgres
# this might need to be adjusted if pg_upgrade fails
localedef -i en_US -f UTF-8 en_US.UTF-8

# map uid and gid
# these need to match the uid/gid owning the existing data (70 is used in the docker images from timescale)
usermod -u 70 postgres
groupmod -g 70 postgres
install applications and configure environment

Setting up the environment takes a while.

Check upgrade

Having set up our environment, it's time to validate the migration. This can be done using pg_upgrade with the --check flag.
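
pg_upgrade expects the new cluster to exist already and refuses to run as root. Assuming the data directories are owned by the postgres user as prepared above, a minimal preparation looks like this (pg_upgrade writes its logs to the current directory, hence the cd):

su - postgres
cd /tmp
/usr/lib/postgresql/13/bin/initdb -D /data/pg13
initialize the new cluster as the postgres user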

/usr/lib/postgresql/13/bin/pg_upgrade -b /usr/lib/postgresql/12/bin -B /usr/lib/postgresql/13/bin -d /data/pg12 -D /data/pg13 --link --check
validate upgrade

If everything is fine, the output will state that the pg clusters are compatible.
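
With a compatible setup, the check finishes with output along these lines (shortened):

Performing Consistency Checks
-----------------------------
...
*Clusters are compatible*
check output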

Run upgrade

Now it's time to take a snapshot of the environment and perform the upgrade. The snapshot (or backup) matters here: once the new cluster has been started after a --link upgrade, the old cluster can no longer be used.

/usr/lib/postgresql/13/bin/pg_upgrade -b /usr/lib/postgresql/12/bin -B /usr/lib/postgresql/13/bin -d /data/pg12 -D /data/pg13 --link

This will only take a few moments - even on huge databases - because we're using hardlinks to avoid copying actual data.

After the upgrade has completed, the container can be stopped and you can start your production database server using postgres 13 with timescaledb.
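
If the database itself runs as a container, this could for example be the official timescale image matching the new versions (the image tag here is an assumption, adjust it to your setup):

docker run -d --name postgres \
  -p 5432:5432 \
  -v /path/to/postgres/data/pg13:/var/lib/postgresql/data \
  timescale/timescaledb:2.3.1-pg13
start the upgraded server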

Perform optimization

It's important to note that while we've linked the data between both postgres versions, optimizer statistics are not carried over by pg_upgrade. Depending on the size of your database you might be very unhappy with query performance at first: the query optimizer has no information about the data until ANALYZE has been run to collect fresh statistics.

This can be done at runtime with the following command:

vacuumdb --all --analyze-in-stages
generate optimizer statistics
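
vacuumdb accepts the usual libpq connection options in case the server doesn't run locally with default credentials (host and user below are assumptions):

vacuumdb -h localhost -U postgres --all --analyze-in-stages
generate statistics with explicit connection options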

Running this might take several hours to complete, depending on the size of your database. As it runs in multiple stages (starting with a minimal statistics target), the first stage completes quite quickly, so the database is usable long before the full run has finished.

That's it - once everything is confirmed to be working, you can simply remove the old data directory.
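
pg_upgrade also leaves a helper script for this in the directory it was run from:

./delete_old_cluster.sh
remove the old cluster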

Linux, Database, Docker

Daniel Nachtrub

Kind of likes computers. Linux foundation certified: LFCS / CKA / CKAD / CKS. Microsoft certified: Cybersecurity Architect Expert & Azure Solutions Architect Expert.