Bunnyshell seamlessly integrates with Kubernetes clusters and Image Registries from major cloud providers, while also leveraging a number of proven tools, such as Helm and Terraform, to create an efficient workspace for developers.
To create environments on demand, Bunnyshell requires only the following:
Access to your Kubernetes cluster
Access to your Git repository (if you want Bunnyshell to perform the builds)
Access to your Image Registry (optional)
Bunnyshell connects to your Kubernetes cluster via its API, so the cluster must either be publicly exposed or have Bunnyshell's IP addresses whitelisted.
The parameters needed to connect to the Kubernetes cluster depend on the cloud provider, but in most cases the following are required:
The endpoint of the cluster
An admin token, or a username and password (or similar) pair of credentials
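For reference, this kind of connection is usually described by kubeconfig-style parameters. The snippet below is purely illustrative (placeholder endpoint and token), not a Bunnyshell-specific format:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://203.0.113.10:6443   # cluster endpoint (placeholder)
users:
- name: bunnyshell
  user:
    token: <admin-token>                # or a username/password pair
contexts:
- name: bunnyshell@my-cluster
  context:
    cluster: my-cluster
    user: bunnyshell
```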
Creating Primary Environments
Bunnyshell uses your existing docker-compose.yaml file to generate the environment definition file called env.yaml.
All subsequent changes to an Environment's definition can be made either through the UI or by editing env.yaml. You can also build up the env.yaml by adding multiple docker-compose.yaml files to the same Environment.
Currently, the env.yaml file is stored in Bunnyshell.
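As an illustration, a minimal docker-compose.yaml of the kind Bunnyshell could use as a starting point (the service names here are hypothetical):

```yaml
version: "3.8"
services:
  app:
    build: .          # built by Bunnyshell if Git access is granted
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://db:5432/app
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```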
Creating Ephemeral Environments
Ephemeral Environments can be created automatically, triggered by Git webhooks.
When a Git account is connected, Bunnyshell installs webhooks for each of its repositories in your Git provider.
When created automatically, an Ephemeral Environment is always based on a Primary Environment, which it uses as a template.
When you perform an action in Git, Bunnyshell receives a webhook from your Git provider, and decides if it needs to act upon it or not.
An Ephemeral Environment will be created when a Pull Request is created, if:
The environment contains one or more Applications belonging to the PR's repository;
The target branch of the PR is deployed;
The Create ephemeral environments on pull request option is set to ON on the Primary Environment.
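The three conditions above can be sketched as a small predicate. The type and function names below are illustrative, not Bunnyshell's actual API:

```python
from dataclasses import dataclass


@dataclass
class PrimaryEnvironment:
    """Illustrative model of a Primary Environment (not Bunnyshell's real schema)."""
    repositories: set[str]               # repos of the Applications it contains
    deployed_branches: set[str]          # branches currently deployed
    create_ephemeral_on_pr: bool = True  # the "Create ephemeral environments on pull request" toggle


def should_create_ephemeral(env: PrimaryEnvironment, pr_repo: str, pr_target_branch: str) -> bool:
    """True only when all three conditions from the docs hold for a new PR."""
    return (
        pr_repo in env.repositories
        and pr_target_branch in env.deployed_branches
        and env.create_ephemeral_on_pr
    )
```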
Image building and Terraform applying
When deploying an environment, the first steps include building the container images and applying the Terraform modules.
The build is performed in a Kubernetes cluster, using Kaniko. The cluster can be either managed by you or by Bunnyshell, depending on what you chose in the Build Settings of the Primary Environment.
Images are then pushed into the Image Registry chosen for the respective Environment. This registry can be either the one managed by Bunnyshell, or your own (previously connected) registry.
Terraform modules will be applied in parallel with the images being built.
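Conceptually, the two steps run concurrently, along the lines of this sketch (placeholder functions, not Bunnyshell's internals):

```python
from concurrent.futures import ThreadPoolExecutor


def build_image(component: str) -> str:
    # In Bunnyshell this is a Kaniko build inside a Kubernetes cluster;
    # here we just return a fake image tag.
    return f"registry.example.com/{component}:pr-123"


def apply_terraform(module: str) -> str:
    # Placeholder standing in for `terraform apply` on one module.
    return f"{module}: applied"


def deploy_prep(components: list[str], modules: list[str]) -> tuple[list[str], list[str]]:
    """Run image builds and Terraform applies in parallel."""
    with ThreadPoolExecutor() as pool:
        images = pool.map(build_image, components)
        applies = pool.map(apply_terraform, modules)
        return list(images), list(applies)
```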
Helm cleanup (uninstall)
At this stage, any deleted Helm charts are uninstalled.
Helm install
Any newly-defined Helm charts detected by Bunnyshell are now installed.
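The chart reconciliation amounts to a set difference between the previous and current environment definitions; a minimal sketch:

```python
def helm_reconcile(previous_charts: set[str], current_charts: set[str]) -> tuple[set[str], set[str]]:
    """Charts removed from the definition are uninstalled; newly added ones are installed."""
    to_uninstall = previous_charts - current_charts
    to_install = current_charts - previous_charts
    return to_uninstall, to_install
```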
Generating and applying the Kubernetes manifest
With the image building process complete, Terraform modules applied and Helm charts installed, the Kubernetes manifests are generated and applied. Bunnyshell creates the needed Kubernetes resources:
Persistent Volume Claims
Environments will run isolated from one another, each Environment being deployed in a separate Kubernetes Namespace.
Finally, the DNS records are created for the respective Environment's Kubernetes Ingress endpoints.
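For instance, per-Environment isolation typically looks like a dedicated Namespace that the Environment's resources reference, with an Ingress host for which a DNS record is created. All names below are illustrative placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: env-pr-123          # one namespace per Environment (placeholder name)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: env-pr-123
spec:
  rules:
  - host: app-pr-123.example.com   # DNS record created for this endpoint
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080
```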
Notifications are sent to users, and after that the Environment enters the Running state.
All environments can be started and stopped manually. They can also be subject to a defined schedule, at a Project or Environment level.
Kubernetes Deployments and Helm-installed resources are scaled to 0 (zero) replicas.
Terraform-created resources are not affected by stopping the Environment they're part of.
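A conceptual model of stopping an Environment (illustrative only, not Bunnyshell's implementation): Kubernetes and Helm workloads go to zero replicas, while Terraform-managed resources keep their state.

```python
def stop_environment(workloads: dict[str, int], terraform_resources: dict[str, str]) -> None:
    """Scale every Deployment/Helm workload to 0; leave Terraform resources untouched."""
    for name in workloads:
        workloads[name] = 0
    # terraform_resources is intentionally not modified
```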
Ephemeral Environments can be destroyed automatically by Bunnyshell when a Pull Request is either merged or closed, depending on the Environment settings.
All resources associated with an Environment are destroyed once an Environment is deleted:
Kubernetes resources: the whole namespace is deleted
Terraform resources: destroyed
Helm Charts: uninstalled
Currently, most development work happens in local environments, with programmers having to run their own Docker containers, services, databases, and any other required elements. We all know this can become a hardware-resource guzzler in no time.
Bunnyshell's Remote Development feature aims to streamline this process, requiring only that the code be stored locally.
You can write code (and debug it as well) in your favorite IDE, and it will be synchronised in real time into a Pod in the Kubernetes cluster. To be clear, the Pod is what actually runs the code.
Instead of having all the containers running locally, you will work directly in the cloud, on a Kubernetes cluster.
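As a toy illustration of the general idea, one-way, mtime-based file synchronisation can be sketched as below. Bunnyshell's actual sync mechanism is not specified here; this is only a conceptual sketch:

```python
import os
import shutil


def sync_dir(src: str, dst: str) -> list[str]:
    """Copy files from src to dst when missing or newer in src; return copied paths."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_root = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target_root, exist_ok=True)
        for name in files:
            src_path = os.path.join(root, name)
            dst_path = os.path.join(target_root, name)
            if (not os.path.exists(dst_path)
                    or os.path.getmtime(src_path) > os.path.getmtime(dst_path)):
                shutil.copy2(src_path, dst_path)  # copy2 preserves mtime
                copied.append(dst_path)
    return copied
```

A real-time tool would watch for filesystem events instead of walking the tree, but the skip-unchanged-files logic is the same.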