Azure Docker-compose volumes

Introduction

Azure volumes can be:

  • Azure Disks
  • Azure Files
  • Azure NetApp Files
  • Azure Blobs

To provide a PersistentVolume, you can use only:

  • Azure Disks
  • Azure Files

As noted in the Volumes section, the choice of Disks or Files is often determined by the need for concurrent access to the data or the performance tier.
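
For example, a workload that needs the same volume mounted read-write by several Pods at once rules out Azure Disks. Below is a minimal, illustrative PVC fragment (the claim name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim
spec:
  accessModes:
    - ReadWriteMany # concurrent access points to Azure Files; Azure Disks are typically ReadWriteOnce
  resources:
    requests:
      storage: 1Gi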

In this article we will show you how to create Disk and Network Volumes.

📘

Bunnyshell Volumes Add-on

Bunnyshell can create these StorageClasses for you through the Bunnyshell Volumes Add-on, using a universal recipe that works on any cluster.

If you need extra configuration on the StorageClasses or you want to use custom Azure solutions, follow the manual setup below. Don't forget to disable the StorageClass in the Add-on, so Bunnyshell won't overwrite your manually configured class.


Prerequisites

To follow the steps in this guide, you need:

  • kubectl configured for your Azure Kubernetes (AKS) cluster, with permissions to create namespaces, StorageClasses, Deployments, and Services
  • the Azure Disk CSI driver available in the cluster (enabled by default on current AKS versions)
  • Helm v3, used to install the NFS provisioner in the network volumes section

Steps to create Disk Volumes

The main focus of the next steps is preparing your cluster to support Disk Volumes.

  1. Create a file named csi_rwo.yaml.

  2. Copy and paste the content below into the csi_rwo.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-disk-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: <insert a valid SKU type>
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

📘

Note

The complete list of valid SKU types is available on the Microsoft documentation website.
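
For example, assuming your node pools support standard SSDs, the parameters block could look like this (StandardSSD_LRS is one valid value):

parameters:
  skuName: StandardSSD_LRS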

  3. Save the file and exit edit mode.

  4. Apply the file to Kubernetes using the following command:

kubectl apply -f csi_rwo.yaml
// OUTPUT
storageclass.storage.k8s.io/bns-disk-sc created
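
You can also confirm that the new StorageClass was registered. The output below is indicative; the exact columns vary by Kubernetes version:

kubectl get sc bns-disk-sc
NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
bns-disk-sc   disk.csi.azure.com   Delete          WaitForFirstConsumer   true                   5s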


Testing the Storage Class

  1. Create the test-disk-sc.yaml file with the contents below. It defines the test PVC and Pod:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-disk
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: bns-disk-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-disk
  labels:
    name: test-disk
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
    volumeMounts:
      - name: persistent-storage-disk
        mountPath: /data
    resources:
      limits:
        memory: "50Mi"
        cpu: "50m"
  volumes:
    - name: persistent-storage-disk
      persistentVolumeClaim:
        claimName: test-pvc-disk

  2. Apply the test-disk-sc.yaml file:
kubectl create ns test-disk-sc
kubectl apply -f test-disk-sc.yaml -n test-disk-sc
  3. Wait until the test-app-disk Pod reaches the Running status:
kubectl wait --for=condition=Ready pod/test-app-disk -n test-disk-sc --timeout=120s
  4. Check the Pod, the PVC, and the associated PV:
  • PVC test-pvc-disk is Bound
  • PVC test-pvc-disk uses STORAGECLASS bns-disk-sc
  • a PV was also created, and its CLAIM is the PVC above
  • the PV has RECLAIM POLICY Delete
kubectl get all,pv,pvc -n test-disk-sc
NAME                READY   STATUS    RESTARTS   AGE
pod/test-app-disk   1/1     Running   0          39s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                STORAGECLASS     REASON   AGE
persistentvolume/pvc-40c1936f-3b1b-4175-a10e-906a4cf8b91c   1Gi        RWO            Delete           Bound      test-disk-sc/test-pvc-disk                           bns-disk-sc               39s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-pvc-disk   Bound    pvc-40c1936f-3b1b-4175-a10e-906a4cf8b91c   1Gi        RWO            bns-disk-sc    40s
  5. (Optional) Verify that the test-app-disk Pod is writing data to the volume:
kubectl exec test-app-disk -n test-disk-sc -- sh -c "cat /data/out"
Fri Nov 17 14:14:08 UTC 2023
Fri Nov 17 14:14:13 UTC 2023
Fri Nov 17 14:14:18 UTC 2023
  6. If your results are similar to the output displayed above, you've completed the process successfully and can delete the test resources. Deleting the namespace removes the PVC and the Pod, which in turn causes the PV to be deleted:
kubectl delete ns test-disk-sc
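
(Optional) Because bns-disk-sc uses reclaimPolicy: Delete, the dynamically provisioned PV, and the Azure Disk behind it, should be removed shortly after the namespace is deleted. You can confirm with:

kubectl get pv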

Steps to create Network Volumes

For network volumes, you will create the bns-network-sc StorageClass, which provisions PVCs with the help of the nfs-subdir-external-provisioner; the provisioner, in turn, uses an NFS server to actually store the data. You will configure the StorageClass with reclaimPolicy=Delete, so that when PVCs are deleted and their PVs are no longer bound, the PVs are automatically deleted too.

Creating the NFS server

The NFS server consists of a PVC, where all the provisioned PVCs will be stored as folders, a Deployment running the actual nfs-server, and a Service exposing the nfs-server inside the cluster. You will create all of these in the bns-nfs-server namespace. As a measure of protection for the PVC, you will also create a StorageClass with reclaimPolicy=Retain.

Start by creating the namespace:

kubectl create ns bns-nfs-server

Then save the following snippet in a file named nfs-server.yaml. Update the StorageClass parameters.skuName with an appropriate value.

📘

Note

The complete list of valid SKU types is available on the Microsoft documentation website.

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-nfs-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: <insert a valid SKU type>
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-bns-pvc
spec:
  storageClassName: bns-nfs-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi # <- set a size that suits your needs
---
apiVersion: apps/v1
kind: Deployment
metadata:
    name: nfs-server
spec:
    replicas: 1
    selector:
        matchLabels:
            io.kompose.service: nfs-server
    template:
        metadata:
            labels:
                io.kompose.service: nfs-server
        spec:
            containers:
                - name: nfs-server
                  image: itsthenetwork/nfs-server-alpine:latest
                  volumeMounts:
                      - name: nfs-storage
                        mountPath: /nfsshare
                  env:
                      - name: SHARED_DIRECTORY
                        value: "/nfsshare"
                  ports:
                      - name: nfs
                        containerPort: 2049
                  securityContext:
                      privileged: true  # <- privileged mode is mandatory.
            volumes:
                - name: nfs-storage
                  persistentVolumeClaim:
                      claimName: nfs-server-bns-pvc
---
apiVersion: v1
kind: Service
metadata:
    name: nfs-server
    labels:
        io.kompose.service: nfs-server
spec:
    type: ClusterIP
    ports:
        - name: nfs-server-2049
          port: 2049
          protocol: TCP
          targetPort: 2049
    selector:
        io.kompose.service: nfs-server

Apply the manifests to create the NFS server:

kubectl apply -f nfs-server.yaml -n bns-nfs-server

Check that the Pod is Running, the Deployment is Ready, the Service has a CLUSTER-IP, and the PVC is Bound:

kubectl get all,pvc -n bns-nfs-server
NAME                              READY   STATUS    RESTARTS   AGE
pod/nfs-server-59b5d596c8-28xmh     1/1     Running   0          16m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/nfs-server   ClusterIP   10.254.2.218   <none>        2049/TCP   16m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-server    1/1     1            1           16m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-server-59b5d596c8    1         1         1       16m

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nfs-server-bns-pvc  Bound    pvc-b9712e48-48da-4dd3-b6e0-99979848cabc   100Gi      RWO            bns-nfs-sc     16m

Get the NFS Service IP and store it in a variable:

NFS_SERVICE_IP=$(kubectl get service nfs-server -n bns-nfs-server -o=jsonpath='{.spec.clusterIP}')
echo $NFS_SERVICE_IP
10.254.2.218

(Yes, it's the Service CLUSTER-IP you saw earlier.) The IP is used instead of the Service DNS name because the NFS mount is performed by the kubelet on the node, where cluster-internal DNS names may not resolve.


Use a Helm chart to create the NFS provisioner and the Storage Class

Add the following Helm Chart repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
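
If you have added this repository before, refresh the local chart index (it is harmless to run on a fresh add):

helm repo update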

Install the Helm Chart to create the nfs-subdir-external-provisioner and the bns-network-sc Storage Class. See above for how to obtain the $NFS_SERVICE_IP variable.

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n bns-nfs-server \
  --set nfs.server="$NFS_SERVICE_IP" \
  --set nfs.path="/" \
  --set storageClass.name=bns-network-sc \
  --set storageClass.reclaimPolicy=Delete \
  --set "nfs.mountOptions={nfsvers=4.1,proto=tcp}"

Wait until the Storage Class is created; you can check its status with the following command:

kubectl get sc bns-network-sc
NAME             PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
bns-network-sc   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   29m


Testing the network Storage Class

  1. Create the test-network-sc.yaml file with the contents below. It defines the test PVC and Pod:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-network
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-network
  labels:
    name: test-network
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
    volumeMounts:
      - name: persistent-storage-network
        mountPath: /data
    resources:
      limits:
        memory: "50Mi"
        cpu: "50m"
  volumes:
    - name: persistent-storage-network
      persistentVolumeClaim:
        claimName: test-pvc-network

  2. Apply the test-network-sc.yaml file:
kubectl create ns test-network-sc
kubectl apply -f test-network-sc.yaml -n test-network-sc
  3. Wait until the test-app-network Pod reaches the Running status:
kubectl wait --for=condition=Ready pod/test-app-network -n test-network-sc --timeout=120s
  4. Check the Pod, the PVC, and the associated PV:
  • PVC test-pvc-network is Bound
  • PVC test-pvc-network uses STORAGECLASS bns-network-sc
  • a PV was also created, and its CLAIM is the PVC above
  • the PV has RECLAIM POLICY Delete
kubectl get all,pv,pvc -n test-network-sc
NAME                   READY   STATUS    RESTARTS   AGE
pod/test-app-network   1/1     Running   0          11m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                                STORAGECLASS     REASON   AGE
persistentvolume/pv-nfs-subdir-external-provisioner         10Mi       RWO            Retain           Bound      bns-nfs-server/pvc-nfs-subdir-external-provisioner                             55m
persistentvolume/pvc-b9712e48-48da-4dd3-b6e0-99979848cabc   100Gi      RWO            Retain           Bound      bns-nfs-server/nfs-server-bns-pvc                    bns-nfs-sc                45m
persistentvolume/pvc-bd3bbc3c-040c-4d20-a3d6-007eee507a5e   1Gi        RWX            Delete           Bound      test-network-sc/test-pvc-network                     bns-network-sc            11m

NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/test-pvc-network   Bound    pvc-bd3bbc3c-040c-4d20-a3d6-007eee507a5e   1Gi        RWX            bns-network-sc   11m
  5. (Optional) Verify that the test-app-network Pod is writing data to the volume:
kubectl exec test-app-network -n test-network-sc -- sh -c "cat /data/out"
Fri Nov 17 14:14:08 UTC 2023
Fri Nov 17 14:14:13 UTC 2023
Fri Nov 17 14:14:18 UTC 2023
  6. If your results are similar to the output displayed above, you've completed the process successfully and can delete the test resources. Deleting the namespace removes the PVC and the Pod, which in turn causes the PV to be deleted:
kubectl delete ns test-network-sc
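
If you no longer need network volumes at all, you can also remove the provisioner and the NFS server. Keep in mind that the NFS server's PV was created with reclaimPolicy=Retain, so it must be deleted manually; find its name with kubectl get pv:

helm uninstall nfs-subdir-external-provisioner -n bns-nfs-server
kubectl delete ns bns-nfs-server
kubectl delete pv <retained-pv-name>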