Kubernetes, block devices & denied permissions

We're currently diving deep into KubeVirt and came across an issue with block volumes that were cloned using a DataVolume.

The DataVolume looks like this:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: debian-vm-clone1-os-disk
spec:
  pvc:
    volumeMode: Block
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 32Gi
  source:
    pvc:
      name: debian-vm-image
      namespace: vm-library
DataVolume spec
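
For completeness, here's roughly how such a cloned DataVolume ends up attached to a VM. This is a minimal sketch, not our full manifest; the VM name and disk bus are assumptions, only the dataVolume name matches the spec above.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: debian-vm-clone1
  namespace: vms
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: os-disk          # matches the volume entry below
            disk:
              bus: virtio
      volumes:
      - name: os-disk
        dataVolume:
          name: debian-vm-clone1-os-disk   # the cloned DataVolume from above
VM referencing the cloned DataVolume (sketch)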

This creates a new PVC that will be attached to the VM. Since we're running a recent version of KubeVirt, the virt-launcher pod runs as the qemu user (i.e. without root permissions).

The issue now is that the block device is owned by root with GID 6 (typically the disk group), which means qemu cannot open it and therefore cannot attach the block device to the VM.
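
For illustration, the securityContext that a non-root virt-launcher's compute container runs with looks roughly like this. A simplified sketch; 107 is the UID/GID of the qemu user in the KubeVirt images.

securityContext:
  runAsNonRoot: true
  runAsUser: 107    # qemu
  runAsGroup: 107   # qemu
simplified securityContext of the virt-launcher compute container (sketch)

qemu (UID 107) is neither root nor a member of group 6, so opening the device node fails with a permission error.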

Why blockdev?

We're using block volumes because they leverage the benefits of Ceph (block-mode PVCs map directly to RBD images, with no filesystem layer in between) and simplify many processes around the VM lifecycle.

As a result, the issue shows up in the logs as follows:

{"component":"virt-launcher","kind":"","level":"error","msg":"Failed to sync vmi","name":"debian-vm-clone1","namespace":"vms","pos":"server.go:202","reason":"virError(Code=1, Domain=10, Message='internal error: process exited w
hile connecting to monitor: 2024-04-10T10:08:22.675233Z qemu-kvm: -blockdev {\"driver\":\"host_device\",\"filename\":\"/dev/os-disk\",\"node-name\":\"libvirt-1-storage\",\"cache\":{\"direct\":false,\"no-flush\":false},\"auto-read-only\
":true,\"discard\":\"unmap\"}: Could not open '/dev/os-disk': Permission denied')","timestamp":"2024-04-10T10:08:22.912726Z","uid":"c70ff394-aee1-438e-8eaf-f18bc27a7753"}
error message in virt-launcher

In short: the permission mismatch means KubeVirt can't map the block device into the VM.

The solution is to adjust the containerd settings so that block devices are created inside the container with the pod's securityContext (runAsUser/runAsGroup) as owner.

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = true
adjust ownership of block devices / containerd config.toml

or, if you're using CRI-O:

[crio.runtime]
device_ownership_from_security_context = true
adjustment for the CRI-O runtime

(sources: https://github.com/kubevirt/containerized-data-importer/issues/2378 & https://kubernetes.io/blog/2021/11/09/non-root-containers-and-devices/)

Just apply the change (for containerd typically in /etc/containerd/config.toml), restart the runtime, and new pods will be able to attach the disks as expected.
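
If you want to verify the behavior independently of KubeVirt, you can attach the cloned PVC to a throwaway pod as a raw block device and check the ownership of the device node. This is a hypothetical test pod, not part of our setup; only the claim name comes from the DataVolume above.

apiVersion: v1
kind: Pod
metadata:
  name: blockdev-ownership-test
  namespace: vms
spec:
  securityContext:
    runAsUser: 107
    runAsGroup: 107
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ln /dev/os-disk && sleep 3600"]
    volumeDevices:            # volumeMode: Block -> device node, not a mount
    - name: os-disk
      devicePath: /dev/os-disk
  volumes:
  - name: os-disk
    persistentVolumeClaim:
      claimName: debian-vm-clone1-os-disk
test pod to check block device ownership (sketch)

With device_ownership_from_security_context enabled, ls should report 107 107 as owner of the device node; without it, 0 6 (root and the disk group).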