Azure Kubernetes Volumes
Introduction
Azure volumes can be backed by one of the following services:
- Azure Disks
- Azure Files
- Azure NetApp Files
- Azure Blobs

Any of these services can provide a PersistentVolume. As noted in the Volumes section, the choice between Disks and Files is often determined by the need for concurrent access to the data or by the required performance tier.
In this article we will show you how to create Disk and Network Volumes.
Prerequisites
- Make sure you have an existing AKS cluster (see Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI).
- Make sure you're connected to the cluster.
Steps to create Disk Volumes
The main focus of the next steps is preparing your cluster to support Disk Volumes.
- Create a file named csi_rwo.yaml.
- Copy and paste the content below into the csi_rwo.yaml file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-disk-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: <insert a valid SKU type>
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
Note
The complete list of valid SKU types is available on the Microsoft documentation website.
- Save the file and exit edit mode.
- Apply the file to Kubernetes using the following command:
kubectl apply -f csi_rwo.yaml
// OUTPUT
storageclass.storage.k8s.io/bns-disk-sc created
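The placeholder must be replaced with a concrete value before the manifest will apply cleanly. As a sketch, here is the same Storage Class written out with Premium_LRS filled in (one valid SKU among others such as Standard_LRS and StandardSSD_LRS), generated via a heredoc:

```shell
# Sketch: write csi_rwo.yaml with a concrete SKU filled in. Premium_LRS is just
# one example value; pick the SKU that matches your node pool and workload.
cat > csi_rwo.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-disk-sc
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
grep 'skuName' csi_rwo.yaml
```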
Testing the Storage Class
- Create a file named claim_rwo.yaml and paste the content below inside it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: bns-disk-sc
  resources:
    requests:
      storage: 5Gi
- Apply the claim:
kubectl apply -f claim_rwo.yaml
- Now it's time to create a pod that uses the claim. Create a new file named pod.yaml and paste the content below inside it:
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
    - name: myfrontend
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      volumeMounts:
        - mountPath: "/mnt/azure"
          name: test-claim-volume
  volumes:
    - name: test-claim-volume
      persistentVolumeClaim:
        claimName: test-claim-volume
- Apply the file:
kubectl apply -f pod.yaml
- Make sure everything worked out:
kubectl get pvc
// OUTPUT
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim-volume Bound pvc-8ad14dec-2c8b-41b5-aec0-d953f0c03764 5Gi RWO bns-disk-sc 3m7s
- Now let's check if the pod is running:
kubectl get pods
- Clean up the resources:
kubectl delete -f claim_rwo.yaml
kubectl delete -f pod.yaml
Note
The disk volume is provisioned automatically by Azure using the defined Storage Class.
Steps to create Network Volumes
A Kubernetes Deployment will run an NFS server that mounts the volume and exports it to the cluster.
Creating an NFS server
Start by creating a deployment file named nfs-deployment.yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-bns-pvc
spec:
  storageClassName: managed-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server-bns
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nfs-server-bns
  template:
    metadata:
      labels:
        io.kompose.service: nfs-server-bns
    spec:
      containers:
        - name: nfs-server-bns
          image: itsthenetwork/nfs-server-alpine:latest
          volumeMounts:
            - name: nfs-storage
              mountPath: /nfsshare
          env:
            - name: SHARED_DIRECTORY
              value: "/nfsshare"
          ports:
            - name: nfs
              containerPort: 2049 # <- export port
          securityContext:
            privileged: true # <- privileged mode is mandatory
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-server-bns-pvc
Apply the deployment file:
kubectl apply -f nfs-deployment.yaml
This creates the NFS server container, which exports the mounted volume.
Creating a Kubernetes service for nfs-deployment.yaml
Create a file named service.yaml containing the text below:
apiVersion: v1
kind: Service
metadata:
  name: srv-nfs-server-bns
  labels:
    io.kompose.service: nfs-server-bns
spec:
  ports:
    - name: nfs-server-bns-2049k
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    io.kompose.service: nfs-server-bns
Apply the file, then verify that the pods and the service are present:
kubectl apply -f service.yaml
kubectl get pods
kubectl get svc
Use Helm charts to create the Storage Class
Retrieve the Service Endpoint IP using the command below:
kubectl get endpoints
// OUTPUT
NAME                 ENDPOINTS          AGE
kubernetes           100.65.7.129:443   34d
srv-nfs-server-bns   10.244.0.15:2049   15h
Note
In this case, 10.244.0.15 is the NFS server IP that will be used in the Helm chart command, and 2049 is the port used for NFS mounts.
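To avoid copying the IP by hand, it can be cut out of the ENDPOINTS column. A minimal sketch, run here against the captured line above; on a live cluster you would pipe `kubectl get endpoints srv-nfs-server-bns --no-headers` into the same filter instead:

```shell
# Sketch: extract the IP (dropping the :2049 port) from an endpoints line.
# The sample line is the output captured above.
endpoint_line='srv-nfs-server-bns   10.244.0.15:2049   15h'
NFS_SERVICE_ENDPOINT_IP=$(echo "$endpoint_line" | awk '{print $2}' | cut -d: -f1)
echo "$NFS_SERVICE_ENDPOINT_IP"   # -> 10.244.0.15
```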
Add the following Helm Chart repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
Use the Helm Chart to create the Storage Class:
Note
Replace NFS_SERVICE_ENDPOINT_IP with the IP of your NFS service endpoint. In our case, the IP is 10.244.0.15 (shown in the output of a previous step).
helm install nfs-subdir-external-provisioner-vv3 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=NFS_SERVICE_ENDPOINT_IP --set nfs.path="/" --set storageClass.name=bns-network-sc \
--set "nfs.mountOptions={nfsvers=4.1,proto=tcp}"
Verifying the presence of the bns-network-sc storage class
Wait until the Storage Class is created; check its status with the following command:
kubectl get sc
// OUTPUT
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
bns-network-sc cluster.local/nfs-subdir-external-provisioner-vv3 Delete Immediate true 39m
Testing the Storage Class
- Create the test.yaml file with the contents below:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-network
spec:
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-network
  labels:
    name: test-network
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage-network
          mountPath: /data
      resources:
        limits:
          memory: "50Mi"
          cpu: "50m"
  volumes:
    - name: persistent-storage-network
      persistentVolumeClaim:
        claimName: test-pvc-network
- Apply test.yaml:
kubectl apply -f test.yaml
// OUTPUT
persistentvolumeclaim/test-pvc-network created
pod/test-app-network created
- Make sure that test-pvc-network has the status Bound by running the following command:
kubectl get pvc
// OUTPUT
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-backup Bound pvc-1ead6f96-be65-46b6-81b9-1bb0ab69f773 5Gi RWO default 14d
pvc-nfs-subdir-external-provisioner-vv3 Bound pv-nfs-subdir-external-provisioner-vv3 10Mi RWO 93s
test-pvc-network Bound pvc-60600a30-74a5-4035-9052-d4a35c1617b5 5Gi RWX bns-network-sc 25s
- Check that the pods reached the Running status by running the following command:
kubectl get pods
// OUTPUT
NAME READY STATUS RESTARTS AGE
nfs-server-bns-5876f5997-9gv5j 1/1 Running 0 3m42s
nfs-subdir-external-provisioner-vv3-65b74898c9-g87rp 1/1 Running 0 98s
test-app-network 1/1 Running 0 30s
- Verify that the test-app-network pod is writing data to the volume:
kubectl exec test-app-network -- bash -c "cat /data/out"
// OUTPUT
Tue Jul 26 12:24:22 UTC 2022
Tue Jul 26 12:24:28 UTC 2022
Tue Jul 26 12:24:33 UTC 2022
Tue Jul 26 12:24:38 UTC 2022
Tue Jul 26 12:24:43 UTC 2022
Tue Jul 26 12:24:48 UTC 2022
Tue Jul 26 12:24:53 UTC 2022
Tue Jul 26 12:24:58 UTC 2022
Tue Jul 26 12:25:03 UTC 2022
Tue Jul 26 12:25:08 UTC 2022
Tue Jul 26 12:25:13 UTC 2022
- If your results are similar to the output displayed above, you've completed the process successfully and can delete the test resources.
- Delete the PVC and the Pod. This will also cause the PV to be deleted:
kubectl delete -f test.yaml
// OUTPUT
persistentvolumeclaim "test-pvc-network" deleted
pod "test-app-network" deleted
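The spacing between the entries in /data/out tracks the pod's `sleep 5` loop (plus a little command overhead). A quick local sketch of that arithmetic, using two consecutive sample lines from the output above (GNU date assumed):

```shell
# Sketch: compute the gap between two consecutive entries from the sample
# output above; expect roughly 5 seconds (sleep 5 plus loop overhead).
t1=$(date -u -d 'Tue Jul 26 12:24:28 UTC 2022' +%s)
t2=$(date -u -d 'Tue Jul 26 12:24:33 UTC 2022' +%s)
echo "$((t2 - t1))s between entries"   # -> 5s between entries
```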