
rook ceph - unmap stuck rbd

You might find yourself in a situation where you need to push rook ceph a little to unmap RBDs in order to unblock PV operations.

Daniel Nachtrub

When working with rook ceph, there can be situations in which RBDs are not detached correctly. We had such an issue after terminating two VMs (scheduled via kubevirt): the rbds remained attached on the nodes while not having any associated workload anymore.

As a result we knew that there was no consumer left, and we had already cleaned up the VolumeAttachments - still the driver could not unmap the rbd.
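For reference, the VolumeAttachment check and cleanup can be sketched like this - the PV name below is a placeholder for illustration, not the actual name from our cluster:

```shell
#!/bin/sh
# Sketch: find (and, once verified, delete) the VolumeAttachment that still
# references the stuck PV. The PV name is a placeholder.
PV="pvc-0000-example"

if command -v kubectl >/dev/null 2>&1; then
  # Which attachment (if any) still references the PV, and on which node?
  kubectl get volumeattachments \
    -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,PV:.spec.source.persistentVolumeName \
    | grep "$PV" || true
  # Once you are sure no workload consumes the PV anymore:
  # kubectl delete volumeattachment <name>
fi
echo "checked VolumeAttachments for $PV"
```

Deleting the VolumeAttachment tells Kubernetes the volume is detached, but - as described below - the kernel mapping on the node can still linger.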

Forcefully unmap from nodes

To unblock the system, we carefully verified that:

  • no workload is using the volume anymore
  • no mountpoints for it remain on the nodes
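Assuming kubectl access, the two pre-checks can be sketched as follows - the PVC name is a placeholder, while the volume handle is the one from this incident:

```shell
#!/bin/sh
# Sketch of the pre-checks before forcing anything. PVC name is a placeholder.
PVC="my-stuck-pvc"
VOL="csi-vol-5722e410-4b62-4b16-aca8-8a30d1ae4491"

if command -v kubectl >/dev/null 2>&1; then
  # 1) No pod references the PVC anymore (expect no output):
  kubectl get pods -A -o json | grep "\"claimName\": \"$PVC\"" || true
fi
# 2) On each node, no mountpoint for the volume remains (expect no output):
#    findmnt | grep "$VOL"
echo "pre-checks sketched for $VOL"
```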

Then we used the rook ceph pod on each node to check if there were any remaining rbd mappings for the PV in question:

[root@n2 /]# rbd showmapped | grep csi-vol-5722e410-4b62-4b16-aca8-8a30d1ae4491
27  ceph-blockpool-replicated             csi-vol-5722e410-4b62-4b16-aca8-8a30d1ae4491  -     /dev/rbd27

check for active mounts
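When many volumes are mapped, the device path can be picked out of the showmapped output with a bit of awk. The sample line below mirrors the session above; on a real node you would pipe `rbd showmapped` in directly:

```shell
#!/bin/sh
# Extract the device path for a given csi volume from `rbd showmapped` output.
# SAMPLE mirrors the output shown above.
VOL="csi-vol-5722e410-4b62-4b16-aca8-8a30d1ae4491"
SAMPLE='27  ceph-blockpool-replicated             csi-vol-5722e410-4b62-4b16-aca8-8a30d1ae4491  -     /dev/rbd27'

# Columns: id, pool, image, snap, device - the image name is field 3.
DEV=$(printf '%s\n' "$SAMPLE" | awk -v vol="$VOL" '$3 == vol { print $NF }')
echo "$DEV"   # -> /dev/rbd27
```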

All clear - let's unmap the device:

rbd unmap /dev/rbd27 -o force

unmap the rbd from the node

This will unmap the rbd from the node you checked. Make sure to repeat this for all nodes that still have the volume mapped (with ReadWriteMany, which is possible for specific block devices, the same rbd can be mapped on multiple nodes).
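To cover every node in one go, one can loop over the per-node csi-rbdplugin pods instead of opening a shell on each node. The namespace, label selector and container name below are assumptions based on a default rook deployment - adjust them to your installation:

```shell
#!/bin/sh
# Sketch: force-unmap the device on every node via the per-node csi-rbdplugin
# pods. Namespace, label selector and container name are assumptions.
DEV="/dev/rbd27"

if command -v kubectl >/dev/null 2>&1; then
  for pod in $(kubectl -n rook-ceph get pods -l app=csi-rbdplugin -o name); do
    # `-o force` matches the manual unmap shown above; nodes without the
    # mapping will simply report an error, which we ignore.
    kubectl -n rook-ceph exec "$pod" -c csi-rbdplugin -- \
      rbd unmap "$DEV" -o force || true
  done
fi
echo "force unmap attempted for $DEV"
```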

Completing this on every affected node unblocks the PV operations.

Cloud, Container, Kubernetes, OpenShift

Daniel Nachtrub

Kind of likes computers. Linux foundation certified: LFCS / CKA / CKAD / CKS. Microsoft certified: Cybersecurity Architect Expert & Azure Solutions Architect Expert.