Migrate a Node.js application into a Docker container

To avoid repetition, see Migrate your application into a Docker container for an overview and language-agnostic examples of migrating applications to containers.

Update your application

Exclude downloadable libraries

Add a Node.js-specific .gitignore file to the root of the Git repository. This is an example of a basic file.

.DS_Store
node_modules

/.cache
**/build
/public/build
.env

Read configuration values from environment variables

Node.js natively supports the reading of environment variables.

const user_name = process.env['USER_NAME']
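
For a quick local test, the variable can be set for a single run directly on the command line (the value below is only a placeholder):

USER_NAME=alice node index.js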

Automate the application build

Create the init.sh file

Create the init.sh file in the source directory and make it executable with the command:

chmod +x init.sh

This file will contain the terminal initialization steps for the application. This example

  • sets the AWS_PROFILE environment variable value
export AWS_PROFILE=aws1

Create the env.list file

The env.list text file contains the list of environment variables (without values) used by the application. When we run the Docker container, the --env-file option will copy those into the environment of the container.

MY_VAR1
MY_VAR2

Create the Dockerfile

The Dockerfile contains the list of instructions to build our application into a Docker container. This generic Dockerfile can build any Node.js application, as the referenced packages are listed in the “package.json” and “package-lock.json” files.

FROM node:16

WORKDIR /app

COPY package.json ./
COPY package-lock.json ./
RUN npm install

COPY . .

ENTRYPOINT [ "node", "index.js" ]

Exclude the “node_modules” and “build” directories from the COPY . . operation

The “COPY . .” command copies all files and directories from the “context” directory into the image. As we already placed “node_modules” into the .gitignore file, it will not be available during the build in the CI/CD pipeline, so the “npm install” command will recreate it. To be able to test the build process on our workstation, we need to ignore it during the docker build. Create the .dockerignore file in the context directory (usually the parent of the “node_modules” directory).

WARNING: This setting will force the Docker build process to ignore the “node_modules” and “build” directories in all COPY . . commands, so if the Dockerfile uses a two-step build process, explicitly copy the “node_modules” and “build” directories with the
COPY --from=BUILD_IMAGE /app/node_modules ./node_modules
COPY --from=BUILD_IMAGE /app/build ./build
commands.

**/node_modules
**/build

Create the Makefile

In the source directory create the Makefile to automate frequently executed steps. This Makefile provides the following functionality:

  • make init (the first and default option, so it is enough to type “make”)
    • provides instructions on how to execute the init.sh file the first time you open a terminal in this directory. (“Make” recipes execute their instructions in subprocesses, so those lines cannot update the parent environment you are calling them from. The extra dot instructs the terminal to run the init.sh script in the same process. It is the same as “source” but more portable.)
  • make install
    • installs the referenced Node.js packages and saves the list in the package.json and package-lock.json files
  • make run
    • starts the application
  • make unittest
    • runs the unittest.js script
  • make docker-build
    • using the Dockerfile, builds the application into a Docker container
  • make docker-run
    • runs the Docker container using the list of environment variables from the env.list file
    • maps a host port to the container port the application listens on (the first value is the host port, the second value is the container port)
init:
	# TO INITIALIZE THE ENVIRONMENT EXECUTE
	# . ./init.sh

install:
	npm install aws-sdk
	npm install @aws-sdk/credential-providers

run:
	node index.js

unittest:
	node unittest.js

docker-build:
	docker build -t aws-listener .

docker-run:
	docker run -it --env-file env.list -p 3000:3000 aws-listener

Migrate a Python application into a Docker container

To avoid repetition, see Migrate your application into a Docker container for an overview and language-agnostic examples of migrating applications to containers.

Update your application

Exclude downloadable libraries

Add a Python-specific .gitignore file to the root of the Git repository.

Read configuration values from environment variables

Python natively supports reading environment variables with the os.environ.get() function. This small helper function also reports missing variables.

import os
import sys

def get_config_value(variable_name):

  value = os.environ.get(variable_name, None)
  if value is not None:
    return value

  sys.exit(f"'{variable_name}' environment variable not found")

Call it with

region_name = get_config_value("AWS_REGION")

Automate the application build

Create the init.sh file

Create the init.sh file in the source directory and make it executable with the command:

chmod +x init.sh

This file will contain the terminal initialization steps for the application. This example

  • activates the Anaconda Python environment and
  • sets the AWS_PROFILE environment variable value
conda activate aws-worker
export AWS_PROFILE=aws1

Create the env.list file

The env.list text file contains the list of environment variables (without values) used by the application. When we run the Docker container, the --env-file option will copy those into the environment of the container.

MY_VAR1
MY_VAR2

Save the list of referenced Python packages in the requirements.txt file

When you install Python packages, they become available to your application. As Python is an interpreted language, those packages also have to be available at runtime in the container. The pip freeze command saves the list of installed packages in a text file, so the container build process can install them too.

pip freeze > requirements.txt
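
The saved list can then be used to install the same packages anywhere, which is exactly what the container build does with the requirements.txt file:

pip install -r requirements.txt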

Create the Dockerfile

The Dockerfile contains the list of instructions to build our application into a Docker container. This generic Dockerfile can build any Python application, as the referenced Python packages are listed in the requirements.txt file. In this example, our main application file is aws_worker.py.

FROM python:bullseye

WORKDIR /usr/src/app

COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# -u option turns off print buffering in Python
CMD [ "python", "-u", "./aws_worker.py" ]

Create the Makefile

In the source directory create the Makefile to automate frequently executed steps. This Makefile provides the following functionality:

  • make init (the first and default option, so it is enough to type “make”)
    • provides instructions on how to execute the init.sh file the first time you open a terminal in this directory. (“Make” recipes execute their instructions in subprocesses, so those lines cannot update the parent environment you are calling them from. The extra dot instructs the terminal to run the init.sh script in the same process. It is the same as “source” but more portable.)
  • make run
    • starts the Python application
  • make install
    • installs the referenced Python packages and saves the list in the requirements.txt file
  • make docker-build
    • using the Dockerfile, builds the application into a Docker container
  • make docker-run
    • runs the Docker container using the list of environment variables from the env.list file
init:
	# TO INITIALIZE THE ENVIRONMENT EXECUTE
	# . ./init.sh

install:
	pip install boto3
	pip install requests
	# Save the package list in the requirements file
	pip freeze > requirements.txt

run:
	python aws_worker.py

docker-build:
	docker build -t aws-worker .

docker-run:
	docker run -it --env-file env.list aws-worker

Migrate your application into a Docker container

Containers are the future (and some of us are already there). Container technology, spearheaded by Docker, is revolutionary: it allows developers to write applications once and run them (almost) anywhere.

Containers help developers fully test a complete application, including the frontend, middle tier, and databases, on their workstations and expect the same result in the production environment.

Most applications can be migrated to containers if the runtime environment and all application features are supported by the container architecture. Because containers are really nothing more than namespaces on the host operating system, Linux containers can natively run only on Linux hosts, and Windows containers can natively run only on Windows hosts. Using virtual machines, it is possible to run containers on a different host operating system, but that requires an additional layer of complexity.

The Twelve-Factor methodology offers guidance on multiple aspects of application design, development, and deployment. During our migration process we will extensively use the third factor, “Config”. It recommends reading configuration values from environment variables, so the same code can be deployed to any environment without changes. This guarantees code parity between test and production environments and reduces the risk of failures during promotion to a higher environment.

Using environment variables, our application can read its configuration the same way regardless of where it runs.
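
As a minimal sketch, assuming the MY_VAR1 variable from the env.list example below and the start commands from the language-specific pages, the application reads the value the same way in both cases:

# on the workstation: export the variable, then start the application
export MY_VAR1=value1
node index.js          # or: python aws_worker.py

# in the container: docker run copies the same variable from the host via env.list
docker run -it --env-file env.list my_app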

Store configuration values in environment variables

On our workstation we can set environment variables

  • manually in the terminal (not recommended),
  • in the ~/.bashrc file (on Linux)
  • in the “Environment Variables” section of the computer properties (on Windows)
  • in a file called from the ~/.bashrc file (on Linux, see the sketch after this list)
  • in an automatically executed batch file (on Windows, see How to run a batch file each time the computer loads Windows)
  • with an auto executed script reading values from Vault or any other secure secret storage, and saving them in environment variables.
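
A minimal sketch of the sourced-file approach (the file name is hypothetical, the variable names come from the env.list example below):

# file: ~/my-app-env.sh
export MY_VAR1=value1
export MY_VAR2=value2

Add this line to the ~/.bashrc file so every new terminal loads the values:

. ~/my-app-env.sh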

Automate everything

Automation allows us to build, test and deploy our application quickly and frequently without manual intervention. This avoids human errors during the CI/CD process and produces the same result every time we run the CI/CD pipeline.

These are language agnostic recommendations, on the language specific pages listed below we will revisit them in more detail.

Create the init.sh file

Create the init.sh file in the source directory and make it executable with the command:

chmod +x init.sh

This file will contain the terminal initialization steps for the application. This example

  • sets the AWS_PROFILE environment variable value.
export AWS_PROFILE=aws1

Create the env.list file

The env.list text file contains the list of environment variables (without values) used by the application. When we run the Docker container, the --env-file option will copy those into the environment of the container.

MY_VAR1
MY_VAR2

Create a Makefile

In the source directory create the Makefile to automate frequently executed steps. This Makefile provides the following functionality:

  • make init (the first and default option, so it is enough to type “make”)
    • provides instructions on how to execute the init.sh file the first time you open a terminal in this directory. (“Make” recipes execute their instructions in subprocesses, so those lines cannot update the parent environment you are calling them from. The extra dot instructs the terminal to run the init.sh script in the same process. It is the same as “source” but more portable.)
  • make docker-build
    • using the Dockerfile, builds the application into a Docker container
  • make docker-run
    • runs the Docker container using the list of environment variables from the env.list file
init:
	# TO INITIALIZE THE ENVIRONMENT EXECUTE
	# . ./init.sh

docker-build:
	docker build -t my_app .

docker-run:
	docker run -it --env-file env.list my_app

For language-specific examples, see the Node.js and Python migration pages above.

Sending an e-book to the Amazon Kindle by email

This method works if the book is in a “.mobi” format file.

  • Start your email program
  • Start writing a new email
  • Enter the Kindle's email address as the recipient and the book's title as the subject
  • Attach the “.mobi” format book to the email as an attachment and select the book file
  • Send the email
  • After a few minutes the book arrives on the Kindle; check that it is there.

Sending an e-book to the Amazon Kindle with the Calibre program

  • Start the Calibre program.
  • In the library, select the book, click the Connect/share button, and choose the Email to: …@kindle.com menu item.
  • Check that the program converts the book to the correct format. If the format is correct, click the Yes button. If you answer No, Calibre still sends the email, it just does not convert the book.

If the book does not arrive on the Kindle

You have to wait…

If the book still has not arrived on the Kindle after a few minutes,

  • Click the bottom right corner of the screen to view the progress of the send jobs.
  • Double-click the title of the book.
  • The program waits 5 minutes between sending two books, so the Hotmail and Kindle email systems do not think we are sending bulk mail.

Template variables are not supported in alert queries – Grafana error

When you try to create the first alert on a Grafana dashboard, you may get the error message:

Template variables are not supported in alert queries

Cause

All queries of the panel use at least one template variable with the format ${VARIABLE_NAME} or $VARIABLE_NAME

Template variables usually represent the value of selected items in dropdown lists, like the data source, region, queue name.

Alerts run their queries outside of the dashboard, so they have no information about the values of the selected items. Alert queries have to be able to collect data without user interaction.

In the AWS SQS Queue dashboard template #584 there are three template variables that we need to replace with selected values.

Solution

Click each variable and select the appropriate values from the lists.

To create a new query without template variables:

  • Click the + Query button at the bottom of the Query tab

Roast meat with sour cream, mustard, and potatoes

Ingredients

  • A few slices of meat
  • Onion
  • Potatoes
  • Apple (green apple is best)
  • 2 dl (200 ml) sour cream
  • White wine
  • Mustard
  • Pepper

Preparation

  1. Salt and pepper both sides of the meat slices
  2. Grease an ovenproof dish with a little oil
  3. Pour a small amount of white wine into it
  4. Place the meat slices on the bottom
  5. Put the onion, cut into rings, on top
  6. Add the apple, cut into thicker rings
  7. Add the potatoes, cut into thicker slices
  8. Mix the sour cream with the mustard and add a little salt
  9. Spread the mixture on top

Baking

  • Cover with aluminum foil and bake the meat until tender for 30-35 minutes at 180 degrees Celsius with top and bottom heat.
  • Remove the foil, move the meat slices to the top, and brown both sides slightly at 180 degrees. If there is a lot of liquid, use the fan (convection) setting so some of it evaporates.
  • When you turn the meat, baste it with the juices.

Encountered exception: The request identified this synchronization job…

During Azure AD user on-demand provisioning you may encounter the error message:

Encountered exception: The request identified this synchronization job: …. The request also identified this synchronization rule: . However, the job does not include the rule.

To fix this issue, click the Retry button at the bottom of the page.