Using Terraform Modules in Environments

Users can attach Terraform Modules to Environments in Bunnyshell by configuring input variables and mapping Terraform outputs to Environment or Component Variables.

👍

Note

A Terraform Module can be attached multiple times to the same Environment.

 

Attaching the Terraform Module

  1. Let's create another Environment with the same repository used before, but this time we'll select the terraform branch. The Chart for the backend also contains the needed AWS Environment Variables exposed in values.yaml, so we can pass them to the application and give it access to create buckets and write to S3 (see the sketch after the note below).

🚧

Do not forget to change the gitRepo to your own.
Search for gitRepo: 'git://github.com/bunnyshell/demo-books.git' and replace it.
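
As a rough illustration only, the AWS-related values exposed by the backend Chart could look like the excerpt below; the key names mirror the s3.* values passed in the Helm deploy step later in this guide, but check the repository's values.yaml for the real structure.

# Hypothetical excerpt of the backend Chart's values.yaml (names assumed for illustration)
s3:
    accessKeyId: ''
    secretAccessKey: ''
    bucketName: ''
    bucketRegion: ''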

 

  2. Next, let's add the Terraform Module we created earlier.

 

  3. You will be able to select the previously added one and choose:
  • Whether it will update itself every time new commits that change the module are added in Git;
  • Whether the module will also be automatically attached to any Ephemeral environments created from this Primary.

In this case, let's switch both options to ON.


 

  4. Next, if any variables need to be defined at the Environment level, you are prompted to do so. When we added the Terraform Module, we enforced the bucket name, so anyone using it cannot override its value. However, if you wanted this module to be attached more than once to an Environment, you would make this value configurable.

 

  5. Next, you are able to define mappings. Essentially, you can map Terraform Outputs to Environment Variables or Component Variables, so Components can use their values.

🚧

Important

This is how you connect Terraform with the Applications and Services in your Environment.

You will need to map the bucket name and region to the backend Component, as shown in the sketch after the list below. Meaning:

  • Output Value s3_bucket_name at Component level to Component backend (or api), as AWS_S3_BUCKET_NAME
  • Output Value s3_bucket_region at Component level to Component backend (or api), as AWS_S3_BUCKET_REGION
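Once saved, these mappings surface as Component Variables on the backend (api) Component. Later in this guide they reappear inside bunnyshell.yaml through generated TF_OUT_* identifiers (the identifier shown here is taken from that example; yours will differ), roughly like this:

# Fragment of the api Component's environment property after mapping the outputs
environment:
    # mapped from the s3_bucket_name Terraform Output
    AWS_S3_BUCKET_NAME: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_name
    # mapped from the s3_bucket_region Terraform Output
    AWS_S3_BUCKET_REGION: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_region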

 

  6. The Terraform plan is tested once more, with all the new configurations you provided, and the resulting plan is displayed.

 

  7. Lastly, click Attach Terraform Module.
    You will be able to see it in the Components listing of the Environment.

The Terraform Module was successfully attached to the Environment. However, we also need to grant the backend application access to the bucket, so it can upload images there.

 

Pass AWS credentials in the Application

In order to pass the credentials into the application's container, you will need to adjust the Environment Definition a bit. This differs based on how the Environment is created: from Docker Compose or from Helm Charts and Kubernetes manifests.

 

Helm Charts based Environment

You will need to edit the bunnyshell.yaml configuration and:

  • First, pass the desired variables into the Component, meaning into the deploy runner;
  • Then, use these variables inside the runner to pass the values to the Helm Chart, so they end up in the application's container(s).

So, you would first add the AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY Environment Variables, with their actual values, to the environment property of the api Component.

The bucket name and region were already injected into the environment from the attached Terraform Module.

Alternatively, you could add them via the UI as well.


Then, you would interpolate these values in the deploy section, in the my_values.yaml values file used for the helm upgrade --install command.
The s3.* variables are the ones we added (line 26).

components:
    ...
    -
        kind: Helm
        name: api
        runnerImage: 'dtzar/helm-kubectl:3.8.2'
        environment:
            AWS_S3_BUCKET_NAME: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_name
            AWS_S3_BUCKET_REGION: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_region
            AWS_S3_ACCESS_KEY_ID: AKIA************RNE4
            AWS_S3_SECRET_ACCESS_KEY: Vhsb*******************************I9Uvl
        deploy:
            - |
                cat << EOF > my_values.yaml
                    serviceImage: {{ components.backend-image.image }}
                    replicas: 1
                    ingress:
                        className: bns-nginx
                        host: api-{{ env.base_domain }}
                    postgres:
                        host: '{{ components.postgres.exported.POSTGRES_HOST }}'
                        db: '{{ env.vars.POSTGRES_DB }}'
                        user: '{{ env.vars.POSTGRES_USER }}'
                        password: '{{ env.vars.POSTGRES_PASSWORD }}'
                    frontendUrl: '{{ env.vars.FRONTEND_URL }}'
                    s3:
                        accessKeyId: {{ components.api.vars.AWS_S3_ACCESS_KEY_ID }}
                        secretAccessKey: {{ components.api.vars.AWS_S3_SECRET_ACCESS_KEY }}
                        bucketName: {{ components.api.vars.AWS_S3_BUCKET_NAME }}
                        bucketRegion: {{ components.api.vars.AWS_S3_BUCKET_REGION }}
                EOF
            - 'helm upgrade --install --namespace {{ env.k8s.namespace }} --dependency-update --post-renderer /bns/helpers/helm/add_labels/kustomize -f my_values.yaml api-{{ env.unique }} ./helm/backend'
        ...

👍

The complete bunnyshell.yaml definition can also be found in the Git repo.

 

Docker Compose based Environment

You need to replace the Access Key ID and Secret Access Key in the Application variables of the backend app, meaning AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY.

Select the backend Component, then open the Actions menu and click Variables.


The bucket name and region are already injected from the attached Terraform Module.


You can also achieve the same result by editing the bunnyshell.yaml and setting the dockerCompose.environment property of the backend Component, roughly as in the sketch below.
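
As a rough sketch, assuming a backend Component definition similar to the api Component shown in the Helm example above (the surrounding properties are omitted and may differ in your file), the relevant fragment could look like this:

components:
    ...
    -
        # backend Component; other properties omitted for brevity
        name: backend
        dockerCompose:
            ...
            environment:
                # bucket name and region are already injected by the attached Terraform Module
                AWS_S3_BUCKET_NAME: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_name
                AWS_S3_BUCKET_REGION: TF_OUT_AAAABAY201wQAAAAAQ_s3_bucket_region
                # replace the masked placeholders with your real AWS credentials
                AWS_S3_ACCESS_KEY_ID: AKIA************RNE4
                AWS_S3_SECRET_ACCESS_KEY: Vhsb*******************************I9Uvl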

 


📘

For more details on how to use Terraform Modules in Environments, see the dedicated Terraform Modules in Environments documentation page.