Adding storage to a ceph cluster
Ceph is meant to store data. A natural requirement therefore is to add data to the cluster in order to provide the cluster the ability to actually store data. This article describes how to add storage to your existing or newly created cluster. If you've not yet created a cluster, check out this guide https://blog.nuvotex.de/setting-up-a-ceph-cluster/
The first step when adding storage is to add OSDs, which are used to map drives into the ceph RADOS.
What are OSDs?
An OSD (object storage daemon) is a component of the ceph architecture that abstracts a disk device and provides the interface through which it is added to the ceph RADOS.
One OSD daemon is created per disk device.
If you've already added storage to your cluster, you can list the existing OSDs.
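A quick way to do that is `ceph osd status` - the output below is illustrative, hostnames and sizes will differ in your cluster:

```shell
ceph osd status
# ID  HOST   USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
#  0  node1  1026M  99.0G       0        0       0        0  exists,up
#  1  node2  1026M  99.0G       0        0       0        0  exists,up
#  2  node3  1026M  99.0G       0        0       0        0  exists,up
```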
In this case we have three OSDs that are quite idle, as there is no I/O right now.
If you want to dig deeper, check out ceph osd metadata - this will show detailed information for each OSD.
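The command can be run for a single OSD or for all of them (osd id 0 below is just an example):

```shell
# dump metadata (device paths, hostname, versions, ...) for osd.0
ceph osd metadata 0

# without an id, metadata for all OSDs is printed
ceph osd metadata
```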
Adding new OSDs
As stated above, adding storage means adding OSDs. Adding a new OSD is quite simple, as shown in the next snippet.
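On a cephadm-managed cluster this is done via the orchestrator - hostname and device path below are placeholders for your environment:

```shell
# create an OSD backed by /dev/sdb on host node1
ceph orch daemon add osd node1:/dev/sdb

# alternatively, let the orchestrator consume every eligible unused device
ceph orch apply osd --all-available-devices
```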
This will add new storage to your cluster and listing osds (ceph osd status) will include the newly added daemons.
Setting device class
If you're using SSD devices, you might need to manually specify the device class for the new OSDs.
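A sketch of how that looks using the CRUSH device class commands (osd.3 is a placeholder id):

```shell
# an auto-detected class has to be removed before it can be changed
ceph osd crush rm-device-class osd.3

# then assign the desired class, e.g. ssd
ceph osd crush set-device-class ssd osd.3
```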
Reuse existing drives
If you're adding a disk that has been used (or at least partitioned) before, ceph will require that you clear (zap, in ceph jargon) the device first.
This can be done using ceph orch.
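Hostname and device path are again placeholders - make sure they point at the disk you actually want to wipe:

```shell
# destroys all data on /dev/sdb on host node1 - double-check the device!
ceph orch device zap node1 /dev/sdb --force
```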
It's probably quite clear that this command is destructive and will destroy any data on the device. Recovering data after zapping the wrong device can be a time-consuming (if feasible at all) task.
Having cleared your devices, you can simply proceed with the steps above and add them as storage by creating OSDs.
If you're adding OSDs to an existing setup and you want to rebalance existing data, you can just use the ceph balancer.
Check whether the balancer is already enabled and has a mode assigned (ceph balancer status).
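On a fresh cluster the status typically looks like this (output abbreviated):

```shell
ceph balancer status
# {
#     "active": false,
#     "mode": "none",
#     ...
# }
```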
This shows that the balancer is currently off and its mode is none (= no mode at all). Let's enable the balancer.
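A mode has to be chosen first - upmap is a common choice on recent clusters (it requires luminous or newer clients):

```shell
# pick a balancing mode, then turn the balancer on
ceph balancer mode upmap
ceph balancer on
```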
Let's check the status again.
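With the balancer enabled, the status should now report it as active (output abbreviated, assuming the upmap mode from above):

```shell
ceph balancer status
# {
#     "active": true,
#     "mode": "upmap",
#     ...
# }
```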
Fine - the balancer is up and running! It will now rebalance our data automatically.
Next step is to create a pool which will actually provide access to your data. I'll cover this in one of the next articles.