AWS Docker-compose volumes

Cluster prerequisites for creating volumes for Docker Compose components

Introduction

You can set up persistent storage in Amazon EKS using either of the following options: the Amazon EBS CSI driver (Disk Volumes) or Amazon EFS (Network Volumes).

🚧

Note

It's a best practice to make sure you install the latest version of the drivers. For more information, see the GitHub repositories for the Amazon EBS CSI driver and the Amazon EFS CSI driver.

If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure you're using the most recent version of the AWS CLI.

Be mindful that some CLIs use the default AWS Region you set when installing and configuring them. If this locally configured default Region differs from the cluster's Region, commands may target the wrong Region and fail.
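One way to compare the two Regions is to read the locally configured default and extract the cluster's Region from its ARN. A minimal sketch; the ARN below is hypothetical (in practice, retrieve it with `aws eks describe-cluster`):

```shell
# Locally configured default Region (requires the AWS CLI):
#   aws configure get region
# Cluster ARN (requires an existing EKS cluster):
#   aws eks describe-cluster --name my-cluster --query cluster.arn --output text
# Hypothetical ARN used here for illustration:
ARN="arn:aws:eks:eu-west-1:123456789012:cluster/my-cluster"
# The Region is the fourth colon-separated field of the ARN:
CLUSTER_REGION="${ARN#arn:aws:eks:}"
CLUSTER_REGION="${CLUSTER_REGION%%:*}"
echo "$CLUSTER_REGION"
```

If the echoed Region differs from `aws configure get region`, either pass `--region` explicitly or update your CLI configuration.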

In this article we will show you how to create Disk and Network Volumes.

📘

Bunnyshell Volumes Add-on

Bunnyshell can create these StorageClasses for you through the Bunnyshell Volumes Add-on, using a universal recipe that works on any cluster.

Even if you use the add-on, you still need the Amazon EBS CSI add-on installed in your cluster.

If you need extra configuration on the StorageClasses, or you want to use custom AWS solutions, follow the manual setup below. Don't forget to disable the StorageClass in the Add-on, so Bunnyshell won't overwrite your manually configured class.

 

Prerequisites

Before you complete the steps in either section, you must:

  • Install eksctl
  • Install the AWS CLI
  • Install kubectl
  • Make sure you're logged in to the AWS Management Console
  • Set AWS Identity and Access Management (IAM) permissions for creating and attaching a policy to the Amazon EKS worker node role (the CSI driver role).
  • Create your Amazon EKS cluster and join your worker nodes to the cluster.
  • For Amazon EBS CSI add-on, make sure you have an existing cluster that's version 1.18 or later. To see the required platform version, run the following command.
    aws eks describe-addon-versions --addon-name aws-ebs-csi-driver
  • Have an existing IAM OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see Create an IAM OIDC provider for your cluster.
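The checks above can be sketched as follows. The issuer URL below is a hypothetical example of what `aws eks describe-cluster` returns; the last path segment is the OIDC provider ID you look for in IAM:

```shell
# The issuer URL is returned by (requires an existing EKS cluster):
#   aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
# Hypothetical issuer URL for illustration:
ISSUER="https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
# The provider ID is the last path segment of the issuer URL:
OIDC_ID="${ISSUER##*/}"
echo "$OIDC_ID"
# An IAM OIDC provider already exists for the cluster if this finds a match:
#   aws iam list-open-id-connect-providers | grep "$OIDC_ID"
```

If the grep finds no match, create the provider with eksctl or the IAM console before continuing.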

🚧

Note

To verify that your worker nodes are attached to your cluster, run the kubectl get nodes command.

 

Steps to create Disk Volumes

The next steps focus on deploying and testing the Amazon EBS CSI driver.

 

Adding the Amazon EBS CSI add-on

For this step, you need to follow the Amazon instructions presented in the article linked here.

 

Setting the proper context

Starting here, you will work in the terminal. Make sure you're connected to the cluster and that the cluster is the current context. Use the command kubectl config --help to obtain the necessary information.
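The usual flow looks like this sketch. The context name below is hypothetical; `aws eks update-kubeconfig --name my-cluster` typically creates a context named after the cluster ARN:

```shell
# Hypothetical context name; `aws eks update-kubeconfig` adds it for you:
CONTEXT="arn:aws:eks:eu-west-1:123456789012:cluster/my-cluster"
# List contexts and show the active one (requires a configured kubeconfig):
#   kubectl config get-contexts
#   kubectl config current-context
# Make the cluster the current context:
#   kubectl config use-context "$CONTEXT"
echo "$CONTEXT"
```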

 

Creating the storage class spec.yaml file

Create a spec.yaml file with the following contents:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-disk-sc
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
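If you need more control over the provisioned volumes, the EBS CSI driver accepts StorageClass parameters. A hedged sketch; the gp3 type and encryption are assumptions about your requirements, not part of the recipe above, and the plain spec.yaml is enough for the steps that follow:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bns-disk-sc
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3          # EBS volume type; gp3 is an assumption here
  encrypted: "true"  # encrypt the provisioned volumes at rest
```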

 

Applying the spec.yaml file

Apply the file using the command below. The expected output is shown after the // OUTPUT marker.

kubectl apply -f spec.yaml
// OUTPUT
storageclass.storage.k8s.io/bns-disk-sc created

 

Verifying the presence of the bns-disk-sc storage class

Use the command below to verify the presence of the storage class:

kubectl get sc
// OUTPUT
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
bns-disk-sc     ebs.csi.aws.com         Delete          WaitForFirstConsumer   false                  2m5s
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  50m

 

Testing the EBS CSI driver

It's time to create two files to test the CSI driver. Start by creating a test namespace:

kubectl create ns test-bns-disk

Then create the test-bns-disk.yaml file with the contents below. This will create a PVC and a Pod.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: bns-disk-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim

Apply test-bns-disk.yaml:

kubectl apply -f test-bns-disk.yaml -n test-bns-disk

Wait until the app pod reaches the Running status. To watch its status, run the following command:

kubectl get pods -w -n test-bns-disk
// OUTPUT
NAME   READY   STATUS              RESTARTS   AGE
app    0/1     Pending             0          0s
app    0/1     Pending             0          4s
app    0/1     ContainerCreating   0          5s
app    1/1     Running             0          17s

Check for the presence of a persistent volume that has the following properties:

  • STORAGECLASS set to bns-disk-sc
  • CLAIM set to test-bns-disk/ebs-claim
kubectl get pv -o wide
// OUTPUT
NAME                                     ...  CLAIM                     STORAGECLASS   REASON   AGE    VOLUMEMODE
pvc-e87327ce-1111-2222-3333-9651b92bce57 ...  test-bns-disk/ebs-claim   bns-disk-sc             2m2s   Filesystem

Verify that the app Pod is writing data to the volume:

kubectl exec -it app -n test-bns-disk -- cat /data/out.txt
// OUTPUT
Wed Jul 20 08:28:52 UTC 2022
Wed Jul 20 08:28:57 UTC 2022
Wed Jul 20 08:29:02 UTC 2022

If your results are similar to the output displayed above, you've completed the process successfully and you can delete the test resources. Start by deleting the Pod:

kubectl delete pod/app -n test-bns-disk
// OUTPUT
pod "app" deleted

You can now delete the PVC too. This will also cause the PV to be deleted (that's why we use reclaimPolicy=Delete on the StorageClass):

kubectl delete pvc/ebs-claim -n test-bns-disk
// OUTPUT
persistentvolumeclaim "ebs-claim" deleted

Check if the PV is no longer present:

kubectl get pv pvc-e87327ce-1111-2222-3333-9651b92bce57
// OUTPUT
Error from server (NotFound): persistentvolumes "pvc-e87327ce-1111-2222-3333-9651b92bce57" not found

Delete the test namespace:

kubectl delete ns test-bns-disk

Steps to create Network Volumes

The next steps focus on creating the Amazon EFS file system, installing an NFS provisioner on top of it, and testing the result.

 

Setting the proper context

Starting here, you will work in the terminal. Make sure you're connected to the cluster and that the cluster is the current context. Use the command kubectl config --help to obtain the necessary information.

 

Creating an EFS instance

When creating the EFS instance, make sure you select the cluster VPC. Visit the AWS documentation platform for detailed instructions on how to create an EFS instance.

Go to the EFS instance Details page and access the Network tab. Wait until the platform displays the ID of the Security Group, then copy and save it for later use.

 

Adding the necessary Inbound Rule

Navigate to the AWS VPC (Virtual Private Cloud) listing.

Retrieve the IPv4 CIDR corresponding to the VPC where you created the EFS. You will need it at the next step.

Navigate to EC2 / Network & Security / Security Groups and find the Security Group associated with the EFS instance. To create an inbound rule, select Edit inbound rules and add a rule using the details listed below.

📘

Note

Replace the VPC IPv4 CIDR with the IPv4 CIDR you retrieved at the previous step.

Type   Protocol   Port Range   Source
NFS    TCP        2049         VPC IPv4 CIDR
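If you prefer the AWS CLI over the console, the same rule can be added with `aws ec2 authorize-security-group-ingress`. A sketch with hypothetical IDs; substitute your Security Group ID and VPC IPv4 CIDR:

```shell
# Hypothetical values -- replace with your own:
SG_ID="sg-0123456789abcdef0"
VPC_CIDR="172.31.0.0/16"
# The equivalent CLI call (built as a string here; run it against your account):
CMD="aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 2049 --cidr $VPC_CIDR"
echo "$CMD"
```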

 

Creating the storage class using a Helm chart

Now it's time to create a storage class. Start with the command below:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
// OUTPUT
"nfs-subdir-external-provisioner" has been added to your repositories

Proceed with the following command.

You can retrieve the EFS DNS NAME from the Details page of your EFS instance.

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    -n bns-network-sc \
    --create-namespace \
    --set nfs.server=EFS_DNS_NAME \
    --set nfs.path=/ \
    --set storageClass.name=bns-network-sc
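The EFS_DNS_NAME placeholder follows a fixed pattern: the file system ID, then `.efs.`, the Region, and `.amazonaws.com`. A sketch with hypothetical values; use your own file system ID and Region:

```shell
# Hypothetical values -- replace with your own:
FS_ID="fs-0123456789abcdef0"
REGION="eu-west-1"
# EFS DNS names follow the pattern <file-system-id>.efs.<region>.amazonaws.com
EFS_DNS_NAME="${FS_ID}.efs.${REGION}.amazonaws.com"
echo "$EFS_DNS_NAME"
```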

 

Verifying the presence of the bns-network-sc storage class

Use the command below to verify the presence of the storage class:

kubectl get sc
// OUTPUT
NAME             PROVISIONER                                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
bns-network-sc   cluster.local/nfs-subdir-external-provisioner-bns  Delete          Immediate              true                   52m
gp2 (default)    kubernetes.io/aws-ebs                              Delete          WaitForFirstConsumer   false                  6h40m

 

Testing the bns-network-sc storage class

Start by creating a test namespace:

kubectl create ns test-bns-network

Create the test-bns-network.yaml file with the contents below. This will generate the test PVC and Pod.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: bns-network-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim

Apply test-bns-network.yaml:

kubectl apply -f test-bns-network.yaml -n test-bns-network
// OUTPUT
persistentvolumeclaim/efs-claim created
pod/app created

Wait until the app pod reaches the Running status. To watch its status, run the following command:

kubectl get pods -w -n test-bns-network
// OUTPUT
NAME       READY   STATUS              RESTARTS   AGE
app        0/1     Pending             0          0s
app        0/1     Pending             0          4s
app        0/1     ContainerCreating   0          5s
app        1/1     Running             0          17s

Verify that the app Pod is writing data to the volume:

kubectl exec app -n test-bns-network -- cat /data/out
// OUTPUT
Wed Jul 20 14:33:04 UTC 2022
Wed Jul 20 14:33:09 UTC 2022
Wed Jul 20 14:33:14 UTC 2022

If your results are similar to the output displayed above, you've completed the process successfully and you can delete the test resources.

Delete the PVC and the Pod. This will also cause the PV to be deleted (that's why we use reclaimPolicy=Delete on the StorageClass):

kubectl delete -f test-bns-network.yaml -n test-bns-network
// OUTPUT
persistentvolumeclaim "efs-claim" deleted
pod "app" deleted

Delete the test namespace:

kubectl delete ns test-bns-network