Migrating Zabbix from MySQL to TimescaleDB - setting up the environment

In order to migrate data from MySQL to TimescaleDB, we need to set up the environments properly.

Overview

Our goal is to migrate data from one database to another, which is also based on a different technology. As mentioned in the introduction (https://blog.nuvotex.de/migrating-zabbix-from-mysql-to-timescaledb/), we're migrating from MySQL to TimescaleDB and using pgloader as the utility for this.

Prepare the MySQL server (source)

In our case, MySQL has been running directly on the operating system, which is a result of the age of the setup: when this system was set up, containers had not yet reached the mainstream.

Important: Our MySQL setup is using MySQL partitioning for the existing data. During the migration we're making heavy use of these partitions. There are ways around this, but the migration might take much longer. You might consider setting up partitioning on the source before migrating larger datasets.
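
For reference, you can inspect the partitioning of the source upfront. A quick sketch using information_schema, assuming the schema is named zabbix:

SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'zabbix' AND TABLE_NAME = 'history'
ORDER BY PARTITION_ORDINAL_POSITION;
list partitions of the history table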

Zabbix Server and Zabbix Frontend are still running on the same system, so we're using MySQL via its UNIX socket. Migrating the data now means that we need to access the MySQL server from a remote endpoint (as we don't want to install a container engine on this node anymore).

This means the following steps are required (a sketch follows after the list):

  • Enable the TCP socket on mysqld (requires a daemon restart)
  • Open the firewall to allow remote connections from the host running pgloader
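
What this might look like on the source host - a sketch assuming a Debian-like config layout and iptables; the config path, the user name and the address of the pgloader host (192.0.2.10) are placeholders:

# make mysqld listen on the network (path depends on the distribution)
cat > /etc/mysql/mysql.conf.d/zz-migration.cnf <<'EOF'
[mysqld]
bind-address = 0.0.0.0
EOF
systemctl restart mysql

# allow the host running pgloader to connect
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 3306 -j ACCEPT

# mysql accounts are bound to a source host, so a user that may connect remotely is needed too
mysql -e "CREATE USER 'migration'@'192.0.2.10' IDENTIFIED BY 'changeme'; GRANT SELECT ON zabbix.* TO 'migration'@'192.0.2.10';"
enable remote access on the source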

Set up TimescaleDB (destination)

The destination will be a TimescaleDB instance. We'll be running the whole database within a container managed by Docker.

The compose file contains the following configuration:

version: '3.3'

services:
  db:
    image: timescale/timescaledb:latest-pg12
    network_mode: host
    restart: on-failure 
    environment:
      - POSTGRES_PASSWORD=changeme12!
    volumes:
      - /mnt/data0/docker/N-MON-ZXDB0-PGSQL/data:/var/lib/postgresql/data
    shm_size: 2g
docker-compose.yml

The setup is quite simple here:

  • Container is running in network host mode
    This should bypass unnecessary processing through iptables / nftables
  • An initial password will be set on first start
  • The postgres data path is mapped using a bind mount
  • Shared memory size is increased (Docker's default of 64 MB is too small for postgres running in a container)
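
With the compose file in place, the container can be brought up the usual way:

docker-compose up -d
start the container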

When the container is running, use docker exec to enter the container and connect to postgres using the command-line client. The following steps need to be performed (see the sketch after the list):

  • Change the password of the postgres user
  • Create a user for the zabbix server (name: zabbix)
  • Create a database for the zabbix server (name: zabbix)
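
A minimal sketch of these steps - the container name (zabbix-db) and the passwords are placeholders:

docker exec -it zabbix-db psql -U postgres

ALTER USER postgres WITH PASSWORD 'new-secret';
CREATE USER zabbix WITH PASSWORD 'zabbix-secret';
CREATE DATABASE zabbix OWNER zabbix;
initial user and database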

Now we'll enable the timescaledb extension on our newly created database:

psql -U postgres zabbix
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
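
To verify the extension is actually available, list the installed extensions:

SELECT extname, extversion FROM pg_extension;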

Create SQL schema

The next step is to import the schema into the database, which contains all tables (relations). This step is well documented in the Zabbix documentation: https://www.zabbix.com/documentation/current/manual/appendix/install/timescaledb
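
For reference, the import might look like the following - the path of the schema file is an assumption and depends on the zabbix version and packages in use:

zcat /usr/share/doc/zabbix-sql-scripts/postgresql/create.sql.gz | psql -U zabbix zabbix
import the sql schema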

Having imported the SQL schema, it's time to create some hypertables.

SELECT create_hypertable('history_log', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT create_hypertable('history_text', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT create_hypertable('history_str', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT create_hypertable('trends', 'clock', chunk_time_interval => 2592000, migrate_data => true);
SELECT create_hypertable('trends_uint', 'clock', chunk_time_interval => 2592000, migrate_data => true);
SELECT create_hypertable('history_uint', 'clock', chunk_time_interval => 86400, migrate_data => true);
SELECT create_hypertable('history', 'clock', chunk_time_interval => 86400, migrate_data => true);
create hypertables

This will create the hypertables for all relevant tables in our zabbix database, using a chunk interval of one day (86400 seconds) for history data and 30 days (2592000 seconds) for trends.
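
You can verify the result by listing the chunks that now back a hypertable, for example:

SELECT show_chunks('history');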

Having completed all those steps, we're ready to start our first migration: the configuration. This will be shown in the next post.