Install Rook Ceph

About this task

Rook Ceph is an orchestrator that provides a containerized solution for Ceph Storage, with a specialized Kubernetes Operator that automates management of the cluster. It is an alternative to the bare metal Ceph storage backend. For more details, see https://rook.io/docs/rook/latest-release/Getting-Started/intro/.

Prerequisites

Complete the following steps before configuring the deployment model and services (a consolidated example follows this list):

  • Ensure that there is no ceph-store storage backend configured on the system.

    ~(keystone_admin)$ system storage-backend-list
    
  • Create a storage backend for Rook Ceph, choosing your deployment model (controller, dedicated, or open) and the desired services (block or ecblock, filesystem, object).

    ~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
    
  • Create a ceph host-fs for each host that will run a Rook Ceph monitor (preferably an odd number of hosts).

    ~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
    
  • On a Duplex system without workers, it is recommended to add a floating Ceph monitor. To do so, first lock the inactive controller (controller-1 in this example, with controller-0 as the active controller), then create a controllerfs for the monitor.

    ~(keystone_admin)$ system host-lock controller-1
    ~(keystone_admin)$ system controllerfs-add ceph-float=<size>
    

    Note

    The recommended minimum size for the ceph host-fs and controllerfs is 20 GB.

  • Configure OSDs.

    • List the disks of the desired host and note the UUIDs of the disks that will back the OSDs.

      ~(keystone_admin)$ system host-disk-list <hostname>
      

      Note

      OSD placement must follow the placement rules of the chosen deployment model.

    • Add the desired disks to the system as OSDs (preferably an even number of OSDs).

      ~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
      

For more details on deployment models and services, see Deployment Models and Services for Rook Ceph.
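For example, on a Simplex system using the controller deployment model and the recommended minimum filesystem size, the prerequisite sequence might look like the following (controller-0 and the 20 GB size are illustrative; substitute the disk UUID reported by system host-disk-list):

~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
~(keystone_admin)$ system host-fs-add controller-0 ceph=20
~(keystone_admin)$ system host-disk-list controller-0
~(keystone_admin)$ system host-stor-add controller-0 osd <disk_uuid>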

Procedure

After the environment is configured according to the selected deployment model, Rook Ceph is installed automatically.

A few minutes after the application is applied, check the health of the cluster using any ceph command, for example ceph status.

~(keystone_admin)$ ceph -s

Example output (Standard configuration with 3 monitors and 12 OSDs):
        ~(keystone_admin)$ ceph -s
        cluster:
            id:     5c8eb4ff-ba21-40f4-91ed-68effc47a08b
            health: HEALTH_OK

        services:
            mon: 3 daemons, quorum a,b,c (age 2d)
            mgr: c(active, since 5d), standbys: a, b
            mds: 1/1 daemons up, 1 hot standby
            osd: 12 osds: 12 up (since 5d), 12 in (since 5d)

        data:
            volumes: 1/1 healthy
            pools:   4 pools, 81 pgs
            objects: 133 objects, 353 MiB
            usage:   3.8 GiB used, 5.7 TiB / 5.7 TiB avail
            pgs:     81 active+clean
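
If the cluster does not report HEALTH_OK, the ceph health detail command provides a per-check breakdown that helps identify the failing component:

~(keystone_admin)$ ceph health detail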

Check that the cluster contains all the required elements. For the cluster to be considered healthy, all pods must be in the Running or Completed state. Use the following command to check the Rook Ceph pods on the cluster.

~(keystone_admin)$ kubectl get pod -n rook-ceph

Example output (Simplex configuration with 1 monitor and 2 OSDs):
        ~(keystone_admin)$ kubectl get pod -n rook-ceph
        NAME                                                    READY   STATUS      RESTARTS   AGE
        ceph-mgr-provision-2g9pz                                0/1     Completed   0          11m
        csi-cephfsplugin-4j7l6                                  2/2     Running     0          11m
        csi-cephfsplugin-provisioner-67bd9fcc8d-jckzq           5/5     Running     0          11m
        csi-rbdplugin-dzdb8                                     2/2     Running     0          11m
        csi-rbdplugin-provisioner-5698784bb8-4t7xw              5/5     Running     0          11m
        rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m   1/1     Running     0          11m
        rook-ceph-exporter-controller-0-857698d7cc-9dqn4        1/1     Running     0          10m
        rook-ceph-mds-kube-cephfs-a-76847477bf-2snzp            2/2     Running     0          11m
        rook-ceph-mds-kube-cephfs-b-6984b58b79-fzhk6            2/2     Running     0          11m
        rook-ceph-mgr-a-5b86cb5c74-bhp59                        2/2     Running     0          11m
        rook-ceph-mon-a-6976b847f4-5vmg9                        2/2     Running     0          11m
        rook-ceph-operator-c66b98d94-87t8s                      1/1     Running     0          12m
        rook-ceph-osd-0-f56c65f6-kccfn                          2/2     Running     0          11m
        rook-ceph-osd-1-7ff8bc8bc7-7tqhz                        2/2     Running     0          11m
        rook-ceph-osd-prepare-controller-0-s4bzz                0/1     Completed   0          11m
        rook-ceph-provision-zp4d5                               0/1     Completed   0          5m23s
        rook-ceph-tools-785644c966-6zxzs                        1/1     Running     0          11m
        stx-ceph-manager-64d8db7fc4-tgll8                       1/1     Running     0          11m
        stx-ceph-osd-audit-28553058-ms92w                       0/1     Completed   0          2m5s
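
To list only the pods that are not Running or Completed, a standard kubectl field selector can be used (this is generic kubectl usage, not specific to Rook Ceph):

~(keystone_admin)$ kubectl get pod -n rook-ceph --field-selector=status.phase!=Running,status.phase!=Succeeded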

Additional Enhancements

Add New OSDs on a Running Cluster

To add a new OSD to the cluster, add the OSD to the platform and reapply the application.

~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
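
For example, assuming the new disk is on worker-0 (an illustrative hostname), list the host's disks to find the UUID of the new disk, then add it and reapply:

~(keystone_admin)$ system host-disk-list worker-0
~(keystone_admin)$ system host-stor-add worker-0 osd <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph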

Add a New Monitor on a Running Cluster

To add a new monitor to the cluster, add a ceph host-fs to the desired host and reapply the application.

~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
~(keystone_admin)$ system application-apply rook-ceph
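
For example, to place a new monitor on worker-1 (an illustrative hostname) with the recommended 20 GB minimum:

~(keystone_admin)$ system host-fs-add worker-1 ceph=20
~(keystone_admin)$ system application-apply rook-ceph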

Enable the Ceph Dashboard

To enable the Ceph dashboard, a Helm override must be provided to the application, including a base64-encoded password.

Procedure

  1. Create the override file.

    $ openssl base64 -e <<< "my_dashboard_passwd"
    bXlfZGFzaGJvYXJkX3Bhc3N3ZAo=
    
    $ cat << EOF > dashboard-override.yaml
    cephClusterSpec:
      dashboard:
        enabled: true
        password: "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
    EOF
    
  2. Update the Helm chart with the created user-override.

    ~(keystone_admin)$ system helm-override-update --values dashboard-override.yaml rook-ceph rook-ceph-cluster rook-ceph
    +----------------+-------------------+
    | Property       | Value             |
    +----------------+-------------------+
    | name           | rook-ceph-cluster |
    | namespace      | rook-ceph         |
    | user_overrides | cephClusterSpec:  |
    |                |   dashboard:      |
    |                |     enabled: true |
    |                |                   |
    +----------------+-------------------+
    
  3. Apply/reapply the Rook Ceph application.

    ~(keystone_admin)$ system application-apply rook-ceph
    

You can access the dashboard using the following address: https://<floating_ip>:30443.
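
To verify that the override was stored and the dashboard is exposed, you can inspect the Helm override and the dashboard service (rook-ceph-mgr-dashboard is the default service name created by Rook):

~(keystone_admin)$ system helm-override-show rook-ceph rook-ceph-cluster rook-ceph
~(keystone_admin)$ kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard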

Check Rook Ceph Pods

You can check the pods of the storage cluster using the following command:

~(keystone_admin)$ kubectl get pod -n rook-ceph
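
If a pod is unhealthy, its logs are the first place to look; for example, the Rook operator log often explains reconciliation issues (a standard kubectl pattern, not specific to this platform):

~(keystone_admin)$ kubectl logs -n rook-ceph deploy/rook-ceph-operator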