Helm takes values that we pass into a Helm release and transforms a template (also known as a chart) into Kubernetes resources. It is a wrapper around Kubernetes resources.
It talks to the Kubernetes API server in the background to create the resources on the fly.
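As a minimal sketch (the chart directory, release name, and value names are placeholders), a template in the chart references values from values.yaml, and Helm renders them at install time:
# values.yaml
replicaCount: 3

# templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount }}
To render and create the resources (Helm 3 syntax)
helm install MY_RELEASE_NAME ./MY_CHART_DIRECTORY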
For security reasons it is not a good practice to create individual load balancers for each service.
The safer way is to create one application load balancer outside of the cluster and launch NGINX ingress controller containers to proxy the traffic to the individual services.
“Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.”
To get the list of ingresses
kubectl get ingress
To show the details of an ingress
kubectl describe ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app1-frontend-service
          servicePort: 80
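To create the ingress specified in the file
kubectl apply -f ./MY_INGRESS_FILE.yml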
Kubernetes provides liveness and readiness probes to monitor the health of the pods.
We can specify an HTTP endpoint in the pod that Kubernetes calls periodically; if it returns a 200 HTTP response code, the pod is considered healthy.
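A minimal sketch of a liveness probe in a container spec, assuming the application serves a health check endpoint at /healthz on port 80 (both are placeholders); a readiness probe uses the same structure under readinessProbe:
livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10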
A pod can contain one or multiple containers, usually one.
Kubernetes recommends launching multiple containers in a pod only when those containers need to share a volume. For example, a syslog-ng container saves log files to a volume, while a Splunk Heavy Forwarder container monitors them and sends the log entries to the Splunk indexer.
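A minimal sketch of such a pod, where both containers mount the same emptyDir volume (the pod name and image references are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app1-logging-pod
spec:
  volumes:
  - name: log-volume
    emptyDir: {}
  containers:
  - name: syslog-ng
    image: MY_SYSLOG_NG_IMAGE
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app1      # syslog-ng writes the log files here
  - name: splunk-forwarder
    image: MY_SPLUNK_FORWARDER_IMAGE
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app1      # the forwarder monitors the same directory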
Create a deployment to specify the image that will run in the pod as a container. To list the pods
kubectl get pods
To display the standard out (stdout) messages of a container
kubectl logs MY_POD_NAME
Execute a command in a container of a pod. As usually only one container runs in the pod, we don’t have to specify the container name.
kubectl exec -it MY_POD_NAME -- /bin/bash
If the pod has multiple containers, add the --container option
kubectl exec -it MY_POD_NAME --container MY_CONTAINER_NAME -- /bin/bash
You only need a deployment to launch a container in Kubernetes. Deployments tell Kubernetes which image to run, how many replicas to keep running, and which labels to apply to the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
      - name: app1-frontend-container-label
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To list all deployments
kubectl get deployments
To launch the container specified in the deployment file
kubectl apply -f ./MY_DEPLOYMENT_FILE.yml
Display information about the deployment
kubectl describe deployment MY_DEPLOYMENT
The deployment creates pods. A pod can contain one or multiple containers, usually one. To list the pods
kubectl get pods
First, we want to separate the non-production and production environments. We also want to separate each non-production and production-like environment from the others. Kubernetes offers namespaces to create segregated areas; resources in separate namespaces cannot see each other. Create a namespace for each environment:
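For example (the environment names are placeholders)
kubectl create namespace dev
kubectl create namespace qa
kubectl create namespace prod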
To list all resources in a namespace use the -n option in the commands
kubectl get all -n MY_NAMESPACE
Load balancers are external to the cluster and point to the nodes to expose the applications outside of the cluster.
For security reasons large organizations don’t allow the creation of multiple load balancers. During the cluster creation they temporarily lift the restriction and one ingress load balancer is created. All inbound communication to the cluster passes through that load balancer.
Do not use the :latest image tag, as it makes it hard to control the actual version of the launched image and to roll back to an earlier (unidentified) version.
Use imagePullPolicy: Always. The Docker caching semantics make it very efficient, as the layers are cached in the cluster to avoid unnecessary downloads.
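A container spec fragment illustrating both recommendations (the image name and tag are placeholders):
containers:
- name: app1-frontend-container
  image: MY_IMAGE:1.2.3        # pinned tag instead of :latest
  imagePullPolicy: Always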
Install the CoreDNS add-on to provide in-cluster DNS service for pods, so all pods can find all services by name within the cluster.
Create the service before the deployment it refers to, because the service passes environment variables into the deployment when the containers start.
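Kubernetes injects variables following the SERVICE_NAME_SERVICE_HOST and SERVICE_NAME_SERVICE_PORT pattern, for example APP1_FRONTEND_SERVICE_SERVICE_HOST for a service named app1-frontend-service. To inspect them in a running pod
kubectl exec MY_POD_NAME -- printenv | grep SERVICE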
Create the target service and deployment before the caller deployment, so the target is available when the request is generated.
Companies launch multiple Kubernetes clusters, and the DevOps team needs access to all of them. The kubectl command-line utility can only work with one cluster at a time. To work with multiple Kubernetes clusters you need to switch between Kubernetes configurations on your workstation.
To connect to a Kubernetes cluster, add the cluster information to the ~/.kube/config file. If you use AWS EKS, the simplest way is to use the AWS CLI to update the file.
aws eks --region MY_REGION update-kubeconfig --name MY_CLUSTER_NAME
To see the configuration values execute
kubectl config view
To test the connectivity execute
kubectl get svc
If you are not the creator of the cluster you will get the error message
error: You must be logged in to the server (Unauthorized)
To access the cluster, in the [default] profile of the ~/.aws/credentials file use the access keys of the account that created the cluster. For more information see How do I resolve an unauthorized server error when I connect to the Amazon EKS API server?
Get the list of configured Kubernetes clusters. The asterisk in the first column of the output shows the currently selected cluster.
kubectl config get-contexts
Switch to another cluster
kubectl config use-context THE_VALUE_OF_THE_CONTEXT_NAME # (the name:attribute of the context)
Display the config
kubectl config view
Delete the user
kubectl config unset users.THE_NAME_VALUE_OF_THE_USER
Delete the cluster
kubectl config unset clusters.THE_NAME_VALUE_OF_THE_CLUSTER
Delete the context
kubectl config unset contexts.THE_NAME_VALUE_OF_THE_CONTEXT
Get all deployments
kubectl get deployments
Get all pods
kubectl get pods
Scale the deployment
kubectl scale deployment --replicas=4 MY_DEPLOYMENT_NAME
Check the result of the scaling with
kubectl get deployments
kubectl get pods -o wide
Get the deployment events at the end of the output of
kubectl describe deployments/MY_DEPLOYMENT_NAME
To scale down the replicas, execute the scale command again
kubectl scale deployments/MY_DEPLOYMENT_NAME --replicas=2
Kubernetes Services route traffic across a set of pods. The service specifies how deployments (applications) are exposed to each other or the outside world.
The service type specifies how the deployment will be exposed
The ClusterIP service is only visible within the cluster. To expose the pod to other services in the cluster
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
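With the CoreDNS add-on providing in-cluster DNS, other pods in the same namespace can reach this service by its name on the service port, for example from inside another container
curl http://app1-frontend-service:8080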
The LoadBalancer service type creates a load balancer external to the cluster and points it to the nodes to expose the application outside of the cluster.
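A minimal sketch of a LoadBalancer type service for the same deployment (the service name is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-lb-service
spec:
  type: LoadBalancer
  selector:
    app: app1-frontend-template-label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80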
Don’t specify the hostPort for a Pod unless it is really necessary, as it limits the flexibility of the resource creation, because each hostIP, hostPort, protocol combination has to be unique within the cluster.
Avoid using the hostNetwork as it also limits the networking flexibility.
Use the IPVS proxy mode, as the other proxy modes, userspace and iptables, are based on iptables operations that slow down dramatically in large-scale clusters, e.g. with 10,000 Services. IPVS-based kube-proxy also has more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
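To select the IPVS mode, set the mode field in the kube-proxy configuration. A fragment of the KubeProxyConfiguration (the scheduler value is just one of the available algorithms):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"   # least connection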
List all pods
kubectl get pods
List all deployments
kubectl get deployments
List all services of the cluster.
kubectl get services
Create a new service and expose a port of the pod via a node port (the same random port on every node)
kubectl expose deployment/MY_DEPLOYMENT_NAME --type="NodePort" --port 8080
To find the IP and port of the endpoint where the service is exposed, see the value of the ‘Endpoints:’ in the output of the describe command
kubectl describe services/MY_SERVICE_NAME
The endpoint is the pod IP and port. If the service is a web site or API you can test it with
curl ENDPOINT_IP:ENDPOINT_PORT
To test the pod via the service, get the IP address of one of the nodes and use the ‘NodePort:’ value
curl NODE_IP:NODE_PORT
Get the ‘Labels:’ of the service from the output of the describe command above. List the pods of the service
kubectl get pods -l run=LABEL_IN_SERVICE
List the service of the pod by label
kubectl get services -l run=LABEL_IN_SERVICE
Add a new label to the pod
kubectl label pod MY_POD_NAME app=v1
Display the pod information
kubectl describe pods MY_POD_NAME
Delete the service by label
kubectl delete service -l run=LABEL_IN_SERVICE
Containers are ephemeral. A container’s writable layer is discarded when the container is removed, so even if the container is set up with automatic restart, the new container will not have access to the data created inside of the old container. To save persistent data of Docker containers we need to create volumes that live outside of the ephemeral containers. Don’t map volumes that store important information to the host. Use platform storage, like Amazon EBS, GCE Persistent Disk or Azure Disk Volume to store all your important data. Set up an automatic backup process to create volume snapshots regularly, so even if the host goes away, your data is safe.
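In Kubernetes, a minimal sketch of requesting platform storage is a PersistentVolumeClaim, assuming the cluster has a default StorageClass backed by EBS, GCE Persistent Disk, or Azure Disk (the claim name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app1-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi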
Docker Swarm uses volume drivers to connect to external storage resources. One of the most versatile is REX-Ray.
We will add a volume to MongoDB running in a Docker Swarm.
Install the REX-Ray plugin on the Docker host
docker plugin install rexray/ebs EBS_ACCESSKEY=aws_access_key EBS_SECRETKEY=aws_secret_key
Create a 1 GB Docker volume with Rex-Ray and gp2 EBS backing.
docker volume create --driver rexray/ebs --opt size=1 --opt volumetype=gp2 --name ebs_volume
Launch a new MongoDB service and map the volume to the MongoDB data directory
docker service create --network my_overlay_network --replicas 1 --mount type=volume,src=ebs_volume,target=/data/db --name mongodb mongo
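To verify the volume and the service
docker volume ls
docker service ps mongodb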