Create RBD Volume Snapshot Class

A Volume Snapshot Class for the RBD provisioner can be created via Helm overrides to support PVC snapshots.

About this task

A Volume Snapshot Class enables the creation of snapshots for PVCs, allowing efficient backups and data restoration. This functionality ensures data protection, facilitating point-in-time recovery and minimizing the risk of data loss in Kubernetes clusters.

The procedure below demonstrates how to create a Volume Snapshot Class and Volume Snapshot for the RBD provisioner.

Note

The snapshot CRDs and a running snapshot-controller pod must be present on the system before the Volume Snapshot Class can be created.

The CRDs and snapshot-controller are created by default during installation when running the bootstrap playbook.
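As a quick sanity check before starting the procedure, you can confirm that the snapshot CRDs and the snapshot-controller pod exist. The commands below are a sketch that assumes kubectl access to the cluster; the snapshot-controller pod name may vary slightly on your system:

```shell
# Confirm that the external snapshot CRDs are installed
kubectl get crd | grep snapshot.storage.k8s.io

# Confirm that a snapshot-controller pod is running
kubectl get pods --all-namespaces | grep snapshot-controller
```

If either check returns nothing, re-verify the installation before continuing.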

Procedure

  1. List the installed Helm chart overrides for the platform-integ-apps application.

    ~(keystone_admin)$ system helm-override-list platform-integ-apps
    +--------------------+----------------------+
    | chart name         | overrides namespaces |
    +--------------------+----------------------+
    | ceph-pools-audit   | ['kube-system']      |
    | cephfs-provisioner | ['kube-system']      |
    | rbd-provisioner    | ['kube-system']      |
    +--------------------+----------------------+
    
  2. Review existing overrides for the rbd-provisioner chart.

    ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
    
  3. Check whether provisioner.snapshotter.enabled is set to true in the output.

    ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
    +--------------------+------------------------------------------------------+
    | Property           | Value                                                |
    +--------------------+------------------------------------------------------+
    | attributes         | enabled: true                                        |
    |                    |                                                      |
    | combined_overrides | ...                                                  |
    |                    | provisioner:                                         |
    |                    |   replicaCount: 1                                    |
    |                    |   snapshotter:                                       |
    |                    |     enabled: true                                    |
    +--------------------+------------------------------------------------------+
    

    A value of true means that the csi-snapshotter container is created inside the RBD provisioner pod, and that the CRDs and snapshot-controller matching the Kubernetes version on your system are created.

    If the value is false, and the CRDs and snapshot-controller present on the system are at a later version than the one recommended for Kubernetes on your system, you can set the value to true via Helm overrides and create the container as follows:

    1. Set provisioner.snapshotter.enabled to true via Helm overrides.

      ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set provisioner.snapshotter.enabled=true
      
    2. Apply the application to create the container.

      ~(keystone_admin)$ system application-apply platform-integ-apps
      

    Important

    The csi-snapshotter container must exist before you can create the snapshot class and volume snapshot.
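    You can verify that the container exists by listing the containers of the provisioner pod. The label selector below is an assumption; adjust it to match the labels used by the rbd-provisioner pods on your system:

    ```shell
    # List container names in the RBD provisioner pod;
    # csi-snapshotter should appear in the output
    kubectl -n kube-system get pods -l app=csi-rbdplugin-provisioner \
      -o jsonpath='{.items[*].spec.containers[*].name}'
    ```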

  4. Update snapshotClass.create to true via Helm.

    ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=true
    
  5. Confirm that the new overrides have been applied to the chart.

    ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
    +--------------------+------------------------------------------------------+
    | Property           | Value                                                |
    +--------------------+------------------------------------------------------+
    | attributes         | enabled: true                                        |
    |                    |                                                      |
    | combined_overrides | classdefaults:                                       |
    |                    |   adminId: admin                                     |
    |                    |   adminSecretName: ceph-admin                        |
    |                    |   monitors:                                          |
    |                    |   - 192.168.204.2:6789                               |
    |                    |   storageClass: general                              |
    |                    | csiConfig:                                           |
    |                    | - clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
    |                    |   monitors:                                          |
    |                    |   - 192.168.204.2:6789                               |
    |                    | provisioner:                                         |
    |                    |   replicaCount: 1                                    |
    |                    |   snapshotter:                                       |
    |                    |     enabled: true                                    |
    |                    | snapshotClass:                                       |
    |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
    |                    |   create: true                                       |
    |                    |   provisionerSecret: ceph-pool-kube-rbd              |
    |                    | storageClasses:                                      |
    |                    | - additionalNamespaces:                              |
    |                    |   - default                                          |
    |                    |   - kube-public                                      |
    |                    |   chunk_size: 64                                     |
    |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
    |                    |   controllerExpandSecret: ceph-pool-kube-rbd         |
    |                    |   crush_rule_name: storage_tier_ruleset              |
    |                    |   name: general                                      |
    |                    |   nodeStageSecret: ceph-pool-kube-rbd                |
    |                    |   pool_name: kube-rbd                                |
    |                    |   provisionerSecret: ceph-pool-kube-rbd              |
    |                    |   replication: 1                                     |
    |                    |   userId: ceph-pool-kube-rbd                         |
    |                    |   userSecretName: ceph-pool-kube-rbd                 |
    |                    |                                                      |
    | name               | rbd-provisioner                                      |
    | namespace          | kube-system                                          |
    | system_overrides   | ...                                                  |
    |                    |                                                      |
    | user_overrides     | snapshotClass:                                       |
    |                    |   create: true                                       |
    |                    |                                                      |
    +--------------------+------------------------------------------------------+
    
  6. Apply the overrides.

    1. Run the application-apply command.

      ~(keystone_admin)$ system application-apply platform-integ-apps
      +---------------+--------------------------------------+
      | Property      | Value                                |
      +---------------+--------------------------------------+
      | active        | True                                 |
      | app_version   | 1.0-65                               |
      | created_at    | 2024-01-08T18:15:07.178753+00:00     |
      | manifest_file | fluxcd-manifests                     |
      | manifest_name | platform-integ-apps-fluxcd-manifests |
      | name          | platform-integ-apps                  |
      | progress      | None                                 |
      | status        | applying                             |
      | updated_at    | 2024-01-08T18:39:10.251660+00:00     |
      +---------------+--------------------------------------+
      
    2. Monitor progress using the application-list command.

      ~(keystone_admin)$ system application-list
      +--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
      | application              | version | manifest name                             | manifest file    | status   | progress  |
      +--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
      | platform-integ-apps      | 1.0-65  | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | applied  | completed |
      +--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
      
  7. Confirm the creation of the Volume Snapshot Class after a few seconds.

    ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
    NAME              DRIVER                DELETIONPOLICY   AGE
    rbd-snapshot      rbd.csi.ceph.com      Delete           5s
    
  8. With the RBD Volume Snapshot Class created, you can now create RBD PVC snapshots.

    1. Create an RBD Volume Snapshot YAML file, for example:

      ~(keystone_admin)$ cat << EOF > ~/rbd-volume-snapshot.yaml
      ---
      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: <rbd-pvc-snapshot-name>
      spec:
        volumeSnapshotClassName: rbd-snapshot
        source:
          persistentVolumeClaimName: <rbd-pvc-name>
      EOF
      
    2. Replace the values in the persistentVolumeClaimName and name fields.

    3. Create the Volume Snapshot.

      ~(keystone_admin)$ kubectl create -f rbd-volume-snapshot.yaml
      
    4. Confirm that it was created successfully.

      ~(keystone_admin)$ kubectl get volumesnapshots.snapshot.storage.k8s.io
      NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
      rbd-pvc-snapshot   true         rbd-pvc                             1Gi           rbd-snapshot    snapcontent-1bb7e2cb-9123-47c4-9e56-7d16f24f973e   13s            17s
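With the snapshot created, a new PVC can be restored from it by referencing the snapshot in the PVC's dataSource field. The manifest below is a minimal sketch: the names in angle brackets and the 1Gi size are placeholders you must replace, and general is the RBD storage class shown in the overrides above. The requested storage must be at least the snapshot's restore size.

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <restored-rbd-pvc-name>
spec:
  storageClassName: general
  # Restore the volume contents from the snapshot
  dataSource:
    name: <rbd-pvc-snapshot-name>
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply the manifest with kubectl create -f; the new PVC is then provisioned with the data captured in the snapshot.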