Container database

Introduction

When running a database in a container, you can leverage the import feature provided by the container image itself.

Database container images commonly support importing dump files from a pre-determined folder.

For example, Postgres and MySQL use the folder /docker-entrypoint-initdb.d, in which you can place either dumps (*.sql or *.sql.gz) or scripts (*.sh). The container imports the dumps and runs the scripts on startup, but only when the data directory is empty, i.e. on the database's first initialization.
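
As a minimal sketch of this mechanism outside Bunnyshell, the following docker run command mounts a local dump into the init folder of the official Postgres image, which then imports it on the first start with an empty data directory (this assumes you have the books.sql.gz dump available locally):

docker run -d \
    --name seeded-postgres \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_PASSWORD=need-to-replace \
    -e POSTGRES_DB=bunny_books \
    -v "$(pwd)/books.sql.gz:/docker-entrypoint-initdb.d/books.sql.gz:ro" \
    postgres:15.2-alpine3.17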


Example

Here's how database seeding might look for a Postgres database; the setup is very similar for MySQL.

You specify an InitContainer that downloads the dump(s) from S3 into a volume, which is also mounted in the database container at the init folder's location. The database container then handles the import out-of-the-box. To free up space afterwards, we also empty the contents of the dump files, but we do not delete the files themselves: the InitContainer has SKIP_IF_EXISTS set to 1, so as long as the (now empty) files are still present, the dumps are not re-downloaded from S3 every time the database starts.
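
To make the SKIP_IF_EXISTS behaviour concrete, here is a rough sketch of the download logic; it is only an illustration of the idea, not the actual implementation of the bunnyshell/s3-download image:

#!/bin/sh
# Download each dump from S3, unless it is already present in the target volume
for s3_file in $S3_FILES; do
    target="$DOWNLOAD_PATH/$(basename "$s3_file")"
    if [ "$SKIP_IF_EXISTS" = "1" ] && [ -e "$target" ]; then
        echo "Skipping $s3_file: $target already exists"
        continue
    fi
    aws s3 cp "$s3_file" "$target"
done

The full Environment definition looks like this: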

kind: Environment
name: 'postgres database seeding'
type: primary
components:
    -
        kind: InitContainer
        name: db-seed
        dockerCompose:
            image: 'bunnyshell/s3-download:0.1.0'
            environment:
                AWS_ACCESS_KEY_ID: AKIAZALYJ6P2G224TA4I
                AWS_REGION: us-west-1
                AWS_SECRET_ACCESS_KEY: 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
                DOWNLOAD_PATH: /tmp/dumpfile/
                S3_FILES: 's3://demo-books-sql-dump/books.sql.gz'
                SKIP_IF_EXISTS: '1'
        volumes:
            -
                name: db-data-seed
                mount: /tmp/dumpfile
                subPath: ''
    -
        kind: Database
        name: db
        dockerCompose:
            environment:
                POSTGRES_DB: bunny_books
                POSTGRES_PASSWORD: need-to-replace
                POSTGRES_USER: postgres
            image: 'postgres:15.2-alpine3.17'
            restart: always
            user: postgres
            ports:
                - '5432:5432'
        pod:
            init_containers:
                -
                    from: db-seed
                    name: seed
        volumes:
            -
                name: db-data
                mount: /var/lib/postgresql/data
                subPath: ''
            -
                name: db-data-seed
                mount: /docker-entrypoint-initdb.d
                subPath: ''
        files:
            /docker-entrypoint-initdb.d/zzz_cleanup_seed_files.sh: |
                #!/bin/sh

                # Empty (but keep) the imported dump files to free up disk space
                for sqlfile in /docker-entrypoint-initdb.d/*.sql /docker-entrypoint-initdb.d/*.sql.gz
                do
                    if [ -f "$sqlfile" ]; then
                        echo "" > "$sqlfile"
                        echo "Emptied contents of file $sqlfile"
                    fi
                done
volumes:
    -
        name: db-data
        size: 1Gi
        type: disk
    -
        name: db-data-seed
        size: 1Gi
        type: disk
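
Once the environment is up, you can verify that the seeding worked. Assuming you can open a shell inside the db container, the following illustrative commands list the imported tables and show that the seed files were emptied by the cleanup script:

psql -U postgres -d bunny_books -c '\dt'
ls -l /docker-entrypoint-initdb.d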

📘 Demo Credentials

To let you try this example out with our Demo Books application, we've included a set of credentials that can only read the files in an isolated S3 bucket.

When implementing this in your own environments, you should keep the AWS_SECRET_ACCESS_KEY in a secret, either by wrapping the key in the bns_secret() function or by enabling the Secret toggle in the UI.
E.g. AWS_SECRET_ACCESS_KEY: bns_secret(ECBGd93au87p0ZtnTK/BZssI3DZRJ23TqWp5JFX7)