To view the final configuration values of the Docker Compose file after the environment variables are read from the .env file and all variable substitutions are done
Make sure the config file in the current directory is named docker-compose.yml
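A minimal example, assuming Docker Compose v2 ( the docker compose plugin ); the older standalone binary uses docker-compose config instead
# Print the resolved configuration after .env substitution
docker compose config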
With Portainer we can monitor multiple Docker Swarms from one Portainer Server. To connect an existing Portainer Server to an agent:
Configure the Agent
For security reasons, by default, the Portainer Agent only accepts connections from the first Portainer Server it encounters. To enable the Portainer Agent to connect to multiple Portainer Servers, add the AGENT_SECRET environment variable to the docker-compose.yml file of the Agent. This is necessary if you launch a Portainer Server on the Docker host and connect to the local Agent to test it: without the AGENT_SECRET, no other Portainer Server can connect to the same Agent.
Publish the Agent port on the host network
environment:
  # REQUIRED: Should be equal to the service name prefixed by "tasks." when
  # deployed inside an overlay network
  AGENT_CLUSTER_ADDR: tasks.agent
  # AGENT_PORT: 9001
  # LOG_LEVEL: debug
  AGENT_SECRET: my_secret_token
ports:
  - target: 9001
    published: 9001
    protocol: tcp
    mode: host
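To deploy the Agent with this file, a sketch assuming the stack is named portainer_agent and the command runs on a Swarm manager node
# Deploy the Agent as a stack
docker stack deploy --compose-file docker-compose.yml portainer_agent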
Configure the Server
Add the AGENT_SECRET environment variable to the docker-compose.yml file of the Server
environment:
  AGENT_SECRET: my_secret_token
Add the endpoint to the Portainer Server
Log into the Portainer Server
Navigate to the Endpoints page
Click the Add Endpoint button
Select the Agent endpoint type
Enter the IP address and the port number ( by default 9001 ) of the Portainer Agent
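Before adding the endpoint, you can check that the Agent port is reachable from the Portainer Server host; a sketch, assuming the Agent runs on 192.168.1.10
# Verify that the Agent is listening on the published port
nc -zv 192.168.1.10 9001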
The Portainer server restarts every 5 minutes before the admin account is created
When the Portainer server starts, it waits 5 minutes for a user to create the admin account. If no account is created in the first 5 minutes, the server stops with exit code 1 and the message:
No administrator account was created after 5 min. Shutting down the Portainer instance for security reasons.
To keep the Portainer server running, navigate with your web browser to the web UI on port 9000 and enter a password for the admin account.
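If the server already shut down, start it again and create the account within 5 minutes; a sketch, assuming the server container is named portainer
# Start the stopped Portainer server container
docker start portainer
# Then open http://DOCKER_HOST_IP:9000 in a browser and set the admin password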
The first step of the Chef Test Kitchen converge operation is to transfer the cookbooks to the instance. If any of the cookbooks contain large files, the operation can take minutes while the terminal displays the file transfer message.
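The converge is started from the directory of the kitchen configuration file; a sketch, the debug log level only adds more detail to the output
# Converge the instance with verbose logging
kitchen converge --log-level debug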
Stopped Docker containers are still available for troubleshooting. You can create an image of them and run it as a new container to inspect the log files and execute commands in it.
View the standard output of the failed container
docker logs MY_CONTAINER_ID
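To find the ID of the failed container, list the stopped containers
# List all containers, including the stopped ones
docker ps -a
# Only the exited containers
docker ps -a --filter status=exited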
Run a failing container with a Bash terminal
If a container exits with an error within a few seconds, it can be beneficial to start a terminal in it to view the log files and execute commands. We will override the entry point of the container to start a Bash terminal.
Create an image of the stopped container
docker commit MY_STOPPED_CONTAINER_ID MY_NEW_IMAGE_NAME
Run the saved image as a new container and start a Bash terminal instead of the original entry point
docker run -it --entrypoint bash MY_NEW_IMAGE_NAME
service ( one or multiple instances of the same task, like multiple copies of the same web API )
stack ( one or multiple services that belong together, like a front end web application, middle tier, and database server launch scripted in a .yml file )
The difference between a service and a stack is like the difference between docker run and docker compose, but in a Docker Swarm cluster.
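For comparison, a sketch of the two commands, assuming an nginx image and a stack file named my-stack.yml
# Single service, similar to docker run
docker service create --name my_api nginx
# Stack of related services, similar to docker compose up
docker stack deploy --compose-file my-stack.yml my_stack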
Docker Swarm Services
Global service
Global services run one task on every available node.
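A sketch of a global service, using prom/node-exporter only as an example image
# Run one task of the service on every available node
docker service create --mode global --name node_monitor prom/node-exporter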
Replicated service
The manager distributes the given number of tasks ( containers and the commands to run in them ) of a replicated service across the nodes, based on the desired scale, which can be one. Once a task is assigned to a node it cannot be moved; it runs on that node until it stops or fails.
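A sketch of a replicated service and a later scale change, assuming an nginx image
# Start the service with three replicas
docker service create --name web --replicas 3 nginx
# Change the desired scale later
docker service scale web=5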
Docker Swarm Networking
Host network
Uses the host’s network stack without any namespace separation, sharing all of the host’s interfaces.
Bridge network
Docker-managed Linux bridge on the Docker host. By default, all containers created on the same bridge can talk to each other.
Overlay network
An overlay network may span multiple Docker hosts. It uses the gossip protocol to communicate between hosts.
None
The container’s own network stack and namespace, without any interfaces. It stays isolated from every other network, including its own host’s network.
MACVLAN
Establishes connections between container interfaces and a parent host interface. It can be used to assign IP addresses to containers that are routable on the physical network.
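Sketches of creating these networks; the subnet, gateway and parent interface ( eth0 ) are placeholder values
# Attachable overlay network for Swarm services and standalone containers
docker network create --driver overlay --attachable my_overlay
# User-defined bridge network on a single host
docker network create --driver bridge my_bridge
# MACVLAN network bound to a physical parent interface
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 my_macvlan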
Docker Swarm Load Balancing
Internal load balancing
Internal load balancing is enabled by default. When a container contacts another container in the same Docker Swarm, the internal load balancer routes the request.
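For example, tasks on the same overlay network reach another service by its name, and the internal load balancer sends the request to one of the tasks of that service. A sketch, assuming the my_overlay network above and nginx as a placeholder image
# Two services on the same overlay network
docker service create --name backend --network my_overlay nginx
docker service create --name frontend --network my_overlay nginx
# Inside a frontend task the name "backend" resolves to the virtual IP of the backend service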
External ingress load balancing
To enable external ingress load balancing, publish the port of the service with the --publish flag. Every node in the cluster starts to listen on the published port to answer incoming requests. If the service does not run a container on the node that received the request, the Routing Mesh routes the request over the Ingress Network to a node that runs the container.
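A sketch of publishing a service port, assuming an nginx image; ANY_NODE_IP stands for the address of any cluster member
# Route port 8080 of every node to port 80 of the service tasks
docker service create --name webapp --publish published=8080,target=80 nginx
# Any node answers, even one that runs no task of the service
curl http://ANY_NODE_IP:8080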
Create a service with an image in a private registry
These instructions pass the registry login token from your local client to the Docker Swarm nodes, so they are able to log into the registry and pull the image.
# Save the Docker Registry password in the PASSWORD environment variable
# Log into the Docker Registry
echo $PASSWORD | docker login -u [user] registry.my_registry.com --password-stdin
# Create the service
docker service create \
--with-registry-auth \
--name my_service \
registry.my_registry.com/my_namespace/my_image:latest
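To verify that the nodes could pull the image and the tasks started, check the service
# Show where the tasks of the service run and their current state
docker service ps my_service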