DigitalOcean Docker-compose volumes
Introduction
For DigitalOcean, Bunnyshell supports only the ReadWriteMany (network) volume type, and not ReadWriteOnce, for the following reasons:
- DigitalOcean has a limit of seven volume mounts per Droplet, and this limit cannot be changed.
- DigitalOcean's managed Kubernetes cluster natively supports only Persistent Volume Claims (PVCs) with the ReadWriteOnce access mode.
As a solution, Bunnyshell will create all PVCs using the bns-network-sc storage class. However, you need to create the bns-network-sc CSI yourself, following the instructions below.
Note
The Container Storage Interface (CSI) in Kubernetes is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Read more at the Kubernetes CSI Documentation page.
Prerequisites
- Make sure you're connected to the cluster and that the cluster is the current config context. Creating a cluster in DigitalOcean is straightforward enough that we did not write a guide for it.
- Install Helm. For detailed instructions, visit the Helm docs platform.
Note
If you downloaded the Configuration file from DigitalOcean but did not add it to the ~/.kube directory:
- Set the KUBECONFIG env variable:
export KUBECONFIG=<path to k8s config file>
- Make sure the variable is set correctly:
stat $KUBECONFIG
Setting the proper context
Starting here, you will work in the terminal. Make sure you're connected to the cluster and that the cluster is the current context. Use the command kubectl config --help to obtain the necessary information.
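For example, you can list the configured contexts and switch to your cluster's context if it is not the active one (replace <your cluster context> with the name shown for your cluster):
kubectl config get-contexts
kubectl config current-context
kubectl config use-context <your cluster context>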
Steps to create Disk and Network Volumes
The NFS server that exports the mounted volume will be created using a Kubernetes Deployment.
Creating a NFS server
Start by creating a deployment file named nfs-deployment.yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-bns-pvc
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server-bns
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nfs-server-bns
  template:
    metadata:
      labels:
        io.kompose.service: nfs-server-bns
    spec:
      containers:
        - name: nfs-server-bns
          image: itsthenetwork/nfs-server-alpine:latest
          volumeMounts:
            - name: nfs-storage
              mountPath: /nfsshare
          env:
            - name: SHARED_DIRECTORY
              value: "/nfsshare"
          ports:
            - name: nfs
              containerPort: 2049 # <- export port
          securityContext:
            privileged: true # <- privileged mode is mandatory
      volumes:
        - name: nfs-storage
          persistentVolumeClaim:
            claimName: nfs-server-bns-pvc
Apply the deployment file:
kubectl apply -f nfs-deployment.yaml
This will create the container that exports the volume over NFS.
// OUTPUT
deployment.apps/nfs-server-bns created
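Optionally, you can wait for the Deployment to finish rolling out before moving on (a standard kubectl check; the output is not shown here):
kubectl rollout status deployment/nfs-server-bns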
Creating a Kubernetes service for nfs-deployment.yaml
Create a file named service.yaml containing the text below:
apiVersion: v1
kind: Service
metadata:
  name: srv-nfs-server-bns
  labels:
    io.kompose.service: nfs-server-bns
spec:
  ports:
    - name: nfs-server-bns-2049k
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    io.kompose.service: nfs-server-bns
Apply the file:
kubectl apply -f service.yaml
// OUTPUT
service/srv-nfs-server-bns created
Wait for the service and pod to be created, then check the Status for each one.
Start by checking the pod status.
kubectl get pod
// OUTPUT
NAME READY STATUS RESTARTS AGE
nfs-server-bns-5876f5997-nlj4n 1/1 Running 0 57s
Continue by checking the service status.
kubectl get svc
// OUTPUT
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 98d
srv-nfs-server-bns ClusterIP 10.245.18.152 <none> 2049/TCP 15s
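You can also inspect the service's Endpoint directly; the next section reads its IP programmatically:
kubectl get endpoints srv-nfs-server-bns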
Use the Helm charts to create the Storage Class
Retrieve the Service Endpoint IP and assign it to the environment variable NFS_SERVICE_ENDPOINT_IP using the commands below:
NFS_SERVICE_ENDPOINT_IP=$(kubectl get endpoints srv-nfs-server-bns -o=jsonpath='{.subsets[0].addresses[0].ip}')
echo "Defined the environment variable NFS_SERVICE_ENDPOINT_IP with the value: $NFS_SERVICE_ENDPOINT_IP"
// OUTPUT
Defined the environment variable NFS_SERVICE_ENDPOINT_IP with the value: 10.244.0.89
Note
In this case, 10.244.0.89 is the NFS Server IP that was assigned to the environment variable NFS_SERVICE_ENDPOINT_IP. It will be used later in the Helm Chart command.
Add the following Helm Chart repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
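Then refresh the local chart index so Helm sees the repository's latest charts:
helm repo update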
Use the Helm Chart to create the Storage Class.
- The command below passes the IP of your NFS service Endpoint through the NFS_SERVICE_ENDPOINT_IP environment variable set earlier. In our case, the IP is 10.244.0.89 (visible in the Output of a previous step).
helm install nfs-subdir-external-provisioner-vv3 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=$NFS_SERVICE_ENDPOINT_IP --set nfs.path="/" --set storageClass.name=bns-network-sc \
--set "nfs.mountOptions={nfsvers=4.1,proto=tcp}"
Wait until the Storage Class is created; check its status using the command:
kubectl get sc
// OUTPUT
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
bns-network-sc cluster.local/nfs-subdir-external-provisioner-vv3 Delete Immediate true 11m
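Optionally, inspect the Storage Class to confirm its provisioner and mount options:
kubectl describe sc bns-network-sc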
Testing the Storage Class
- Create the test.yaml file with the contents below. The file will create the test PVCs and Pods:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-network
spec:
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-network
  labels:
    name: test-network
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage-network
          mountPath: /data
      resources:
        limits:
          memory: "50Mi"
          cpu: "50m"
  volumes:
    - name: persistent-storage-network
      persistentVolumeClaim:
        claimName: test-pvc-network
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-disk
spec:
  resources:
    requests:
      storage: 4Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: bns-network-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: test-app-disk
  labels:
    name: test-disk
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage-disk
          mountPath: /data
      resources:
        limits:
          memory: "50Mi"
          cpu: "50m"
  volumes:
    - name: persistent-storage-disk
      persistentVolumeClaim:
        claimName: test-pvc-disk
- Apply the test.yaml file:
kubectl apply -f test.yaml
// OUTPUT
persistentvolumeclaim/test-pvc-network created
pod/test-app-network created
persistentvolumeclaim/test-pvc-disk created
pod/test-app-disk created
- Wait until the test-app-network and test-app-disk pods reach the Running status. To check this, run the following command:
kubectl get pods -w
// OUTPUT
NAME READY STATUS RESTARTS AGE
bns-network-cs-server-nfs-server-provisioner-0 1/1 Running 1 (24h ago) 2d1h
test-app-network 0/1 Pending 0 0s
test-app-network 0/1 Pending 0 0s
test-app-network 0/1 ContainerCreating 0 0s
test-app-disk 0/1 Pending 0 0s
test-app-disk 0/1 Pending 0 0s
test-app-disk 0/1 ContainerCreating 0 0s
test-app-network 1/1 Running 0 2s
test-app-disk 1/1 Running 0 18s
- Check for the presence of a persistent volume that has the following properties:
- STORAGECLASS set to bns-network-sc for the 2 PVs created
- CLAIM for one of the PVs is set to default/test-pvc-disk and the ACCESS MODES is set to RWO
- CLAIM for the other PV is set to default/test-pvc-network and the ACCESS MODES is set to RWX
kubectl get pv
// OUTPUT
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-9765fe63-2222-g4h5-j6k7-vwrf23r2f32f 4Gi RWO Delete Bound default/test-pvc-disk bns-network-sc 84s
pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r 5Gi RWX Delete Bound default/test-pvc-network bns-network-sc 84s
- Verify that the test-app-network pod is writing data to the volume:
kubectl exec test-app-network -- bash -c "cat data/out"
// OUTPUT
Thu Jul 21 16:28:58 UTC 2022
Thu Jul 21 16:29:03 UTC 2022
Thu Jul 21 16:29:08 UTC 2022
- Verify that the test-app-disk pod is writing data to the volume:
kubectl exec test-app-disk -- bash -c "cat data/out"
// OUTPUT
Thu Jul 21 16:29:13 UTC 2022
Thu Jul 21 16:29:18 UTC 2022
Thu Jul 21 16:29:23 UTC 2022
- If your results are similar to the output displayed above, you've completed the process successfully and can delete the test resources. Delete the PVCs and the Pods; this will also cause the PVs to be deleted:
kubectl delete -f test.yaml
// OUTPUT
persistentvolumeclaim "test-pvc-network" deleted
pod "test-app-network" deleted
persistentvolumeclaim "test-pvc-disk" deleted
pod "test-app-disk" deleted
- Check that the PVs displayed earlier are no longer present.
- Checking the first PV:
kubectl get pv pvc-9765fe63-2222-g4h5-j6k7-vwrf23r2f32f
// OUTPUT
Error from server (NotFound): persistentvolumes "pvc-9765fe63-2222-g4h5-j6k7-vwrf23r2f32f" not found
- Checking the second PV:
kubectl get pv pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r
// OUTPUT
Error from server (NotFound): persistentvolumes "pvc-h445g434-3333-c2c3-v3v3-c2er2d23r21r" not found
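As a final check, confirm that no test PVCs remain; the NFS server's own claim, nfs-server-bns-pvc, should still be present:
kubectl get pvc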