DigitalOcean Docker-compose volumes

Introduction

For DigitalOcean, Bunnyshell supports only the ReadWriteMany (network) volume type, and not ReadWriteOnce (disk), for the following reasons:

  • DigitalOcean has a limit of seven volume mounts per Droplet, and this limit cannot be changed.
  • DigitalOcean's managed Kubernetes clusters support Persistent Volume Claims (PVCs) only with the ReadWriteOnce access mode.

As a solution, Bunnyshell creates all PVCs using the bns-network-sc storage class. However, you need to create the bns-network-sc StorageClass yourself, following the instructions below.
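
For illustration only, a PVC provisioned through this StorageClass would look roughly like the following (the claim name and size are hypothetical; Bunnyshell generates the actual claims for you):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-component-data # <- hypothetical name, for illustration only
spec:
  storageClassName: bns-network-sc
  accessModes:
    - ReadWriteMany # <- network volumes can be mounted from multiple nodes
  resources:
    requests:
      storage: 5Gi # <- hypothetical size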

📘

Note

The Container Storage Interface (CSI) in Kubernetes is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Read more at the Kubernetes CSI Documentation page.

📘

Bunnyshell Volumes Add-on

Bunnyshell can help you create these StorageClasses through the Bunnyshell Volumes Add-on, which uses a universal recipe that works on any cluster.

If you need extra configuration on the StorageClasses, see the manual setup below. Don't forget to disable the StorageClass in the Add-on, so Bunnyshell won't update your manually configured class.

 

Prerequisites

  • Make sure you're connected to the cluster and that the cluster is the current config context (a quick check is shown after this list).
    Creating a cluster in DigitalOcean is so straightforward that we did not write a guide for it.
  • Install Helm. For detailed instructions, visit the Helm documentation.
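
As a quick sanity check, the following commands (standard kubectl and Helm invocations, nothing Bunnyshell-specific) confirm which context is active and that Helm is available:

# Show the context that kubectl currently targets; it should be your DigitalOcean cluster
kubectl config current-context

# List all contexts known to your kubeconfig
kubectl config get-contexts

# Confirm Helm is installed and on your PATH
helm version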

🚧

Note

If you downloaded the Configuration file from DigitalOcean, but did not add it to the ~/.kube directory:

  • Set the KUBECONFIG env variable: export KUBECONFIG=<path to k8s config file>
  • Make sure the variable is set correctly: stat $KUBECONFIG

🚧

Setting the proper context

Starting here, you will work in the terminal. Make sure you're connected to the cluster and that the cluster is the current context, as shown in the example below. Use the command kubectl config --help to see all the relevant subcommands.
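
For example (the context name below is hypothetical; replace it with your own):

# List contexts and see which one is currently selected
kubectl config get-contexts

# Switch to your DigitalOcean cluster's context
kubectl config use-context do-fra1-my-cluster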

 

Steps to create Disk Volumes

For DigitalOcean, Bunnyshell supports only the ReadWriteMany (network) volume type, and not ReadWriteOnce (disk), for the following reasons:

  • DigitalOcean has a limit of 7 volume mounts per Droplet, and this limit cannot be changed.
  • DigitalOcean's managed Kubernetes clusters support Persistent Volume Claims (PVCs) only with the ReadWriteOnce access mode.

On Kubernetes it is very easy to reach a situation where a node would need more than 7 volumes mounted: imagine 7 Pods scheduled on the same node, each with its own PVC. Bunnyshell therefore uses only network volumes, to avoid Pods failing to schedule because of the max volumes per node error.
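
If you want to see how close a node already is to this limit, you can list the current volume attachments; this is a standard Kubernetes resource, not something specific to Bunnyshell or DigitalOcean:

# Each row is one block volume attached to a node; the NODE column shows where it is attached
kubectl get volumeattachments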

So there is no bns-disk-sc StorageClass for DigitalOcean.

 

Steps to create Network Volumes

For network volumes, you will create the bns-network-sc StorageClass, which provisions PVCs with the help of the nfs-subdir-external-provisioner, which in turn uses an NFS server to actually store the data. You will configure the StorageClass with reclaimPolicy=Delete, so that when PVCs are deleted and their PVs are no longer bound, the PVs are automatically deleted too.

 

Creating an NFS server

The NFS server consists of a PVC, where all the provisioned PVCs will be stored as folders, a Deployment running the actual nfs-server, and a Service exposing the nfs-server inside the cluster. You will create all of these in the bns-nfs-server namespace. As a measure of protection for the PVC, you will also create a StorageClass with reclaimPolicy=Retain.

Start by creating the namespace:

kubectl create ns bns-nfs-server

Then save the following snippet in a file named nfs-server.yaml:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-nfs-sc
provisioner: dobs.csi.digitalocean.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-bns-pvc
spec:
  storageClassName: bns-nfs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi # <- set a size that suits your needs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nfs-server
  template:
    metadata:
      labels:
        io.kompose.service: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: itsthenetwork/nfs-server-alpine:latest
          volumeMounts:
            - name: nfs-storage
              mountPath: /nfsshare
          env:
            - name: SHARED_DIRECTORY
              value: "/nfsshare"
          ports:
            - name: nfs
              containerPort: 2049
          securityContext:
            privileged: true # <- privileged mode is mandatory.
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-server-bns-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  labels:
    io.kompose.service: nfs-server
spec:
  type: ClusterIP
  ports:
    - name: nfs-server-2049
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    io.kompose.service: nfs-server

Apply the manifests to create the NFS server:

kubectl apply -f nfs-server.yaml -n bns-nfs-server

Check that the Pod is Running, the Deployment is Ready, the Service has a CLUSTER-IP, and the PVC is Bound:

kubectl get all,pvc -n bns-nfs-server
NAME                                READY   STATUS    RESTARTS   AGE
pod/nfs-server-59b5d596c8-28xmh     1/1     Running   0          16m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/nfs-server   ClusterIP   10.254.2.218   <none>        2049/TCP   16m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-server    1/1     1            1           16m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-server-59b5d596c8    1         1         1       16m

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-server-bns-pvc  Bound    pvc-b9712e48-48da-4dd3-b6e0-99979848cabc   100Gi      RWO            bns-nfs-sc     16m

Get the NFS Service IP, and store it in a variable.

NFS_SERVICE_IP=$(kubectl get service nfs-server -n bns-nfs-server -o=jsonpath='{.spec.clusterIP}')
echo $NFS_SERVICE_IP
10.254.2.218

(Yes, it's the Service CLUSTER-IP you saw earlier)

 

Use the Helm chart to create the Storage Class

Add the following Helm Chart repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

Install the Helm Chart to create the nfs-subdir-external-provisioner and the bns-network-sc Storage Class. See above for how to obtain the $NFS_SERVICE_IP variable.

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n bns-nfs-server \
  --set nfs.server="$NFS_SERVICE_IP" \
  --set nfs.path="/" \
  --set storageClass.name=bns-network-sc \
  --set storageClass.reclaimPolicy=Delete \
  --set "nfs.mountOptions={nfsvers=4.1,proto=tcp}"

Wait until the Storage Class is created. You can check its status using the following command:

kubectl get sc bns-network-sc
NAME             PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
bns-network-sc   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   29m
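
You can also confirm that the provisioner Pod is running alongside the NFS server (Pod names will differ in your cluster):

kubectl get pods -n bns-nfs-server

You should see the nfs-server Pod and the nfs-subdir-external-provisioner Pod, both in the Running state.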

Testing the network Storage Class

  1. Create the test-network-sc.yaml file with the contents below. Applying this file will create the test PVC and Pod:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-network
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-network
  labels:
    name: test-network
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
    volumeMounts:
      - name: persistent-storage-network
        mountPath: /data
    resources:
      limits:
        memory: "50Mi"
        cpu: "50m"
  volumes:
    - name: persistent-storage-network
      persistentVolumeClaim:
        claimName: test-pvc-network
  2. Create the test namespace and apply the test-network-sc.yaml file:
kubectl create ns test-network
kubectl apply -f test-network-sc.yaml -n test-network
// OUTPUT
persistentvolumeclaim/test-pvc-network created
pod/test-app-network created
  3. Wait until the test-app-network Pod reaches the status Running. To check the Pod's status, run the following command:
kubectl get pods -w -n test-network
// OUTPUT
NAME                                             READY   STATUS    RESTARTS      AGE
test-app-network                                 1/1     Running   0             2s
  4. Check for the presence of a persistent volume that has the following properties:
  • STORAGECLASS set to bns-network-sc for the created PV
  • CLAIM for the PV set to test-network/test-pvc-network and ACCESS MODES set to RWX
kubectl get pv
// OUTPUT
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                         STORAGECLASS       REASON   AGE
pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r   1Gi        RWX            Delete           Bound    test-network/test-pvc-network                                 bns-network-sc              84s
  5. Verify that the test-app-network Pod is writing data to the volume:
kubectl exec test-app-network -n test-network -- sh -c "cat /data/out"
// OUTPUT
Thu Jul 21 16:28:58 UTC 2022
Thu Jul 21 16:29:03 UTC 2022
Thu Jul 21 16:29:08 UTC 2022
  6. If your results are similar to the output displayed above, you've completed the process successfully and you can delete the test resources. Delete the PVC and the Pod; this will also cause the PV to be deleted:
kubectl delete -f test-network-sc.yaml -n test-network
// OUTPUT
persistentvolumeclaim "test-pvc-network" deleted
pod "test-app-network" deleted
  7. Check that the PV displayed at step 4 is no longer present:
kubectl get pv pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r
// OUTPUT
Error from server (NotFound): persistentvolumes "pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r" not found
  8. Delete the test namespace:
kubectl delete ns test-network