Containers are the future (and some of us are already there). Container technology, spearheaded by Docker, is revolutionary: it lets developers write an application once and run it (almost) anywhere.
Containers let developers fully test a complete application, including the frontend, middle tier, and databases, on their workstations and expect the same result in the production environment.
Most applications can be migrated to containers if the runtime environment and all application features are supported by the container architecture. Because containers are essentially namespaces on the host operating system, Linux containers can natively run only on Linux hosts, and Windows containers can natively run only on Windows hosts. Virtual machines make it possible to run containers on a different host operating system, but they add a layer of complexity.
The Twelve-Factor methodology offers guidance on many aspects of application design, development, and deployment. During the migration process we will rely heavily on the third factor, “Config”. It recommends reading configuration values from environment variables, so the same code can be deployed to any environment without changes. This guarantees code parity between test and production environments and reduces the risk of failures when promoting the application to a higher environment.
With environment variables, the application reads its configuration the same way regardless of where it runs.
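As a minimal sketch, a script can read a hypothetical configuration value, `DB_HOST`, from the environment and fall back to a default when it is not set; the same code works on a workstation, in CI, and in a container:

```shell
#!/bin/sh
# DB_HOST is a hypothetical configuration value; default to localhost when unset.
DB_HOST="${DB_HOST:-localhost}"
echo "Connecting to database at ${DB_HOST}"
```

Running it with `DB_HOST=prod-db ./app.sh` prints the production host, while running it with no variable set falls back to `localhost`.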
Store configuration values in environment variables
On our workstation we can set environment variables:
- manually in the terminal (not recommended),
- in the ~/.bashrc file (on Linux),
- in the “Environment Variables” section of the computer properties (on Windows),
- in a file sourced from the ~/.bashrc file (on Linux),
- in an automatically executed batch file (on Windows, see How to run a batch file each time the computer loads Windows),
- with an automatically executed script that reads values from Vault or another secure secret store and saves them in environment variables.
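The file-based options above can look like this sketch. The file name, variable values, and the Vault path are assumptions for illustration; the file is sourced from ~/.bashrc so every shell exports the same values:

```shell
#!/bin/sh
# Hypothetical ~/.myapp-env file, sourced from ~/.bashrc with: . ~/.myapp-env
# Export the values so every process started from this shell inherits them.
export AWS_PROFILE="my-profile"      # assumed profile name
export APP_LOG_LEVEL="info"          # hypothetical application setting
# Secrets are better fetched from a secure store than hard-coded, e.g.:
# export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
```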
Automation allows us to build, test, and deploy the application quickly and frequently without manual intervention. This avoids human error during the CI/CD process and produces the same result every time the pipeline runs.
These are language-agnostic recommendations; on the language-specific pages listed below we will revisit them in more detail.
Create the init.sh file
Create the init.sh file in the source directory and make it executable with the command:
chmod +x init.sh
This file contains the terminal initialization steps for the application. This example
- sets the AWS_PROFILE environment variable value.
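A minimal init.sh could look like this (the profile name is an assumption, replace it with your own):

```shell
#!/bin/sh
# init.sh -- terminal initialization steps for the application.
# Run it with ". ./init.sh" so the exported variable lands in the current shell.
export AWS_PROFILE="my-profile"   # assumed profile name
echo "AWS_PROFILE is set to ${AWS_PROFILE}"
```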
Create the env.list file
The env.list text file lists the environment variables (names only, no values) used by the application. When we run the Docker container, the --env-file option copies their current values from the host environment into the environment of the container.
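A sketch of an env.list file with hypothetical variable names; when a line contains only a name and no `=value`, Docker's --env-file option takes the value from the calling environment:

```
AWS_PROFILE
DB_HOST
DB_PASSWORD
```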
Create a Makefile
In the source directory create the Makefile to automate frequently executed steps. This Makefile provides the following functionality:
- make init (the first and default target, so it is enough to type “make”)
- prints instructions on how to execute the init.sh file the first time you open a terminal in this directory. (Make recipes run in subprocesses, so they cannot update the environment of the parent shell you call them from. The leading dot tells the shell to run the init.sh script in the current process; it is the same as “source” but more portable.)
- make docker-build
- builds the application into a Docker image using the Dockerfile
- make docker-run
- runs the Docker container using the list of environment variables from the env.list file
```
# Docker image names must be lowercase, so the tag is my_app.
init:
	# TO INITIALIZE THE ENVIRONMENT EXECUTE
	# . ./init.sh

docker-build:
	docker build -t my_app .

docker-run:
	docker run -it --env-file env.list my_app
```
For language-specific examples see