Kubernetes Deployment Scaling

Get all deployments

kubectl get deployments

Get all pods

kubectl get pods

Scale the deployment

kubectl scale deployment --replicas=4 MY_DEPLOYMENT_NAME

Check the result of the scaling with

kubectl get deployments
kubectl get pods -o wide

Get the deployment events at the end of the output of

kubectl describe deployments/MY_DEPLOYMENT_NAME

To scale down the replicas, execute the scale command again

kubectl scale deployments/MY_DEPLOYMENT_NAME --replicas=2
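
The replica count can also be set declaratively in the Deployment manifest and applied with kubectl apply -f. A minimal sketch, assuming a simple nginx deployment; the name, labels and image are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: MY_DEPLOYMENT_NAME
spec:
  replicas: 2          # desired number of pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx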

Kubernetes Services

Kubernetes Services route traffic across a set of pods. The service specifies how deployments (applications) are exposed to each other or the outside world.

Service types

The service type specifies how the deployment will be exposed

ClusterIP

The ClusterIP service is only visible within the cluster. To expose the pod to other services in the cluster

  • set the published port with spec: port:
  • set the port inside the container with spec: targetPort:
  • other services can find this service by its name, specified by metadata: name: even if the IP address of the pod changes
  • spec: selector: specifies the label of the template within the deployment. All pods started by the template will back the service.
  • set the service type to ClusterIP with spec: type: to only expose it within the cluster. Use an Ingress to expose your Service outside of the cluster with consolidated proxy rules via a single IP address (see the sketch after the Service example below).
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
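
To expose a ClusterIP Service like the one above outside of the cluster, an Ingress can route HTTP traffic to it. A minimal sketch, assuming an ingress controller is installed in the cluster; the host name is illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-frontend-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-frontend-service   # the ClusterIP service defined above
            port:
              number: 8080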

LoadBalancer

Creates a load balancer external to the cluster and points it at the nodes to expose the application outside of the cluster.

For security reasons, many large organizations don’t allow the creation of multiple load balancers. During the cluster creation they temporarily lift the restriction and one ingress load balancer is created. All inbound communication to the cluster passes through that load balancer.
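
A minimal LoadBalancer Service sketch; the selector reuses the illustrative label from the ClusterIP example, and on a cloud provider this provisions an external load balancer that forwards port 80 to the pods:

apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80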

Best practices

Don’t specify the hostPort for a Pod unless it is really necessary, as it limits where the Pod can be scheduled, because each hostIP, hostPort, protocol combination has to be unique within the cluster.

Avoid using the hostNetwork as it also limits the networking flexibility.

Use the IPVS proxy mode, as the other proxy modes, userspace and iptables, are based on iptables operations that slow down dramatically in large-scale clusters, e.g. 10,000 Services. IPVS-based kube-proxy also has more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
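
The proxy mode is set in the kube-proxy configuration. A minimal sketch of the relevant fields of a KubeProxyConfiguration, assuming kube-proxy reads its settings from a config file:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"   # least connections; the default IPVS scheduler is round robin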

Commands

List all pods

kubectl get pods

List all deployments

kubectl get deployments

List all services of the cluster.

kubectl get services

Create a new service and expose a port of the pod via a node port (the same random port on every node)

kubectl expose deployment/MY_DEPLOYMENT_NAME --type="NodePort" --port 8080
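
The same type of service can also be written declaratively. A minimal sketch, assuming the deployment’s pods carry the label app: my-app; the names and label are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 8080         # service port inside the cluster
      targetPort: 8080   # container port
      # nodePort: 30080  # optional; without it Kubernetes picks a random port from the node port range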

To find the IP and port of the endpoint where the service is exposed, see the ‘Endpoints:’ value in the output of the describe command

kubectl describe services/MY_SERVICE_NAME

The endpoint is the pod IP and port. If the service is a web site or API you can test it with

curl ENDPOINT_IP:ENDPOINT_PORT

To test the pod via the service, get the IP address of a cluster node and use the ‘NodePort:’ value

curl NODE_IP:NODE_PORT

Get the ‘Labels:’ of the service from the output of the describe command above. List the pods of the service

kubectl get pods -l run=LABEL_IN_SERVICE

List the service of the pod by label

kubectl get services -l run=LABEL_IN_SERVICE

Add a new label to the pod

kubectl label pod MY_POD_NAME app=v1

Display the pod information

kubectl describe pods MY_POD_NAME

Delete the service by label

kubectl delete service -l run=LABEL_IN_SERVICE

Docker Swarm volumes

Containers are ephemeral. A container’s writable layer is discarded when the container is removed, so even if the container is set up with automatic restart, the new container will not have access to the data created inside of the old container. To save persistent data of Docker containers we need to create volumes that live outside of the ephemeral containers. Don’t map volumes that store important information to the host. Use platform storage, like Amazon EBS, GCE Persistent Disk or Azure Disk Volume to store all your important data. Set up an automatic backup process to create volume snapshots regularly, so even if the host goes away, your data is safe.

Docker Swarm uses volume drivers to connect to external storage. One of the most versatile is REX-Ray.

We will add volume to MongoDB running in a Docker Swarm.

Install the REX-Ray plugin on the Docker host

docker plugin install rexray/ebs EBS_ACCESSKEY=aws_access_key EBS_SECRETKEY=aws_secret_key

Create a 1 GB Docker volume with Rex-Ray and gp2 EBS backing.

docker volume create --driver rexray/ebs --opt size=1 --opt volumetype=gp2 --name ebs_volume

Launch a new MongoDB service and map the volume to the MongoDB data directory

docker service create --network my_overlay_network --replicas 1 --mount type=volume,src=ebs_volume,target=/data/db --name mongodb mongo
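
The same service and volume can also be described in a stack file and deployed with docker stack deploy. A minimal sketch, assuming the overlay network already exists; the file and network names are illustrative:

version: "3.7"
services:
  mongodb:
    image: mongo
    networks:
      - my_overlay_network
    volumes:
      - ebs_volume:/data/db
    deploy:
      replicas: 1
networks:
  my_overlay_network:
    external: true          # created beforehand, as in the command above
volumes:
  ebs_volume:
    driver: rexray/ebs
    driver_opts:
      size: "1"
      volumetype: gp2

Deploy it with

docker stack deploy -c mongodb-stack.yml mongodb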

Kubernetes volumes

To understand the types of available volumes read the official Kubernetes documentation on Volumes

The official documentation on Kubernetes Persistent Volume and Persistent Volume Claim is at Persistent Volumes

Migrating to CSI drivers from in-tree plugins

Kubernetes is moving away from in-tree plugins, which had to be checked into the Kubernetes code repository, to out-of-tree volume plugins like CSI ( Container Storage Interface ) drivers and Flexvolume. These drivers can be developed by third parties independently of Kubernetes, but have to be installed and configured on the node; see Migrating to CSI drivers from in-tree plugins

Mount propagation

Mount propagation allows containers to share volumes with other containers within the pod or with containers in other pods on the same host. For more information see Mount propagation
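
Mount propagation is set per volumeMount. A minimal sketch of a Pod whose container sees mounts that appear later under the host path; the names and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-data
      mountPath: /data
      mountPropagation: HostToContainer   # Bidirectional also propagates back to the host but requires a privileged container
  volumes:
  - name: host-data
    hostPath:
      path: /mnt/data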

Access Mode

Access mode specifies the way volumes are accessed from one or multiple pods.

  • ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node only
  • ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
  • ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes

For the latest access mode limitations for each volume type see Access Modes

Volume Types

persistentVolumeClaim

persistentVolumeClaim volumes mount a PersistentVolume into a Pod without knowing the underlying storage backing, like a GCE PersistentDisk or an iSCSI volume. CSI ( Container Storage Interface ) volume types do not support direct reference from Pods and can only be referenced via a PersistentVolumeClaim object.
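
A minimal sketch of a claim and a Pod that mounts it; the names and the requested size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-on-pvc
spec:
  containers:
  - image: mongo
    name: mongo
    volumeMounts:
    - mountPath: /data/db
      name: mongo-volume
  volumes:
  - name: mongo-volume
    persistentVolumeClaim:
      claimName: mongo-pvc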

nfs

nfs (Network File System) volumes can be mounted simultaneously by multiple Pods for writing.
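
A minimal sketch of a Pod mounting an nfs volume, assuming an existing NFS export; the server and path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod
spec:
  containers:
  - image: busybox
    name: nfs-client
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /shared
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com
      path: /exports/shared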

awsElasticBlockStore

awsElasticBlockStore (AWS EBS) volumes have limitations

  • the Pods have to run on AWS EC2 instance nodes
  • the EC2 instances have to be in the same region and availability-zone as the EBS volume
  • the EBS volume can only be mounted by one EC2 instance at a time
  • the access mode can only be ReadWriteOnce

awsElasticBlockStore volume examples

Kubernetes can mount a volume directly to a pod. In this example we mount an AWS EBS volume to the Mongo database pod.

Create a 1 GB volume from the command line

aws ec2 create-volume --availability-zone=us-east-1a --size=1 --volume-type=gp2

Create the mongodb.yml file

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-on-ebs
spec:
  containers:
  - image: mongo
    name: mongo-pod
    volumeMounts:
    - mountPath: /data/db
      name: mongo-volume
  volumes:
  - name: mongo-volume
    awsElasticBlockStore:
      volumeID: <THE_VOLUME_ID>
      fsType: ext4

Create the Mongo DB pod

kubectl create -f mongodb.yml

gcePersistentDisk

gcePersistentDisk can be read simultaneously by multiple Pods but written only by one at a time, so it is a good choice as a common configuration source.
Starting in Kubernetes version 1.10 a beta feature allows the creation of Regional Persistent Disks that are available from multiple zones of the same region.

The gcePersistentDisk limitations are

  • the Pods have to run on GCE VM nodes
  • the nodes have to be in the same GCE project and zone as the Persistent Disk
  • can be mounted to multiple Pods for reading, but only one Pod can write it. If the Pod is controlled by a ReplicationController, the access mode has to be read-only, or the replica count has to be 0 or 1

gcePersistentDisk volume example

Create the Persistent Disk (accessible from one zone only)

gcloud compute disks create --size=200GB --zone=us-central1-a my-gce-disk

To create a Regional Persistent Disk (beta in Kubernetes 1.10)

gcloud beta compute disks create --size=200GB my-gce-disk \
    --region us-central1 \
    --replica-zones us-central1-a,us-central1-b

Create the Regional Persistent Volume (beta in Kubernetes 1.10)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-gce-disk
    fsType: ext4

The Pod definition

apiVersion: v1
kind: Pod
metadata:
  name: gce-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: gce-test-container
    volumeMounts:
    - mountPath: /gce-pd
      name: gce-test-volume
  volumes:
  - name: gce-test-volume
    # The GCE Persistent Disk has to exist
    gcePersistentDisk:
      pdName: my-gce-disk
      fsType: ext4

Working with volumes

List all Persistent Volume Claims

kubectl get pvc

Resources to learn Kubernetes

This is a great five-part blog series on Kubernetes and a post on volumes by Sebastian Caceres. I recommend reading it even before the official Kubernetes tutorial to get a great overview of how Kubernetes really works.

Kubernetes Tutorials

Resources to learn Docker Swarm

This is a five-part blog series, and the post on volumes by Sebastian Caceres is a great explanation of how Docker Swarm really works. I recommend it even before doing the official Docker Swarm tutorial, to get a peek under the hood.

Kubernetes overview

Kubernetes Hierarchy

  • image
  • container
  • pod ( one or more containers that would be deployed together on the same host to share volumes )
  • deployment
  • service

Kubelet

Kubelets run on every host to start and stop pods and communicate with the Docker engine on the host level.

Kube-proxy

Kube-proxies also run on every host to redirect the traffic to specific services and pods.

Container Linux

Container Linux by CoreOS (formerly known as CoreOS Linux, or just CoreOS) is an OS specifically designed to run containers: a lightweight Linux distribution that uses containers to run applications. It does not even have a package manager, but contains the basic GNU Core Utilities for administration. It also includes the Kubelet, Docker, etcd and flannel.

Kubernetes Networking

Flannel

Flannel gives each host a separate IP subnet range to prevent IP address collisions, providing a unique IP address to each container. Flannel is the standard SDN ( software-defined network ) tool for CoreOS (Container Linux), it is shipped with the distribution.

Calico

Calico provides security in the Kubernetes cluster. By default, any pod in the Kubernetes cluster can communicate with any other pod on any host. Calico restricts inter-pod communication using namespaces and selectors. It allows communication from the host to the pods to enable health checks. Calico has tight integration with Flannel.
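
Calico enforces the standard Kubernetes NetworkPolicy resource. A minimal sketch that only allows pods labeled app: frontend to reach pods labeled app: backend; the labels are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend         # the policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend    # only frontend pods may connect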

Canal

As Calico and Flannel nicely fit together, Canal is the combination of the two to provide a comprehensive inter-pod networking solution in the Kubernetes cluster.

Kubernetes commands

  • kubectl get – list resources
  • kubectl describe – show detailed information about a resource
  • kubectl logs – print the logs from a container in a pod
  • kubectl exec – execute a command on a container in a pod

List existing pods

kubectl get pods

Get detailed information on the pods

kubectl describe pods

Start a proxy to access the containers within the pod

kubectl proxy

Get the pod name and store it in the POD_NAME environment variable

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

Access an API running in the pod. The name of the pod is in the POD_NAME environment variable

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/

View the STDOUT of the only container of the pod

kubectl logs POD_NAME

View the STDOUT of a specific container of the pod

kubectl logs POD_NAME -c CONTAINER_NAME

View the STDOUT of all containers of the pod

kubectl logs POD_NAME --all-containers=true

Execute a command in the only container of the pod

kubectl exec POD_NAME -- MY_COMMAND

Execute a command in a container of the pod

kubectl exec POD_NAME -c CONTAINER_NAME -- MY_COMMAND

Start a Bash session in the container (container name is optional if the pod has only one container)

kubectl exec -ti POD_NAME -c CONTAINER_NAME -- bash

To check an API from the Bash console within the container (use localhost to address it within the container)

curl localhost:8080

Force the deletion of all pods of a deployment

Make sure the grep for MY_DEPLOYMENT_NAME only returns the pods you really want to delete

# IMPORTANT!!!
# First make sure the query only returns the list of pods you really want to delete

kubectl get pods | grep MY_DEPLOYMENT_NAME | awk '{print $1}'

Execute the command to delete the pods

while IFS= read -r result
do
    kubectl delete pod $result --grace-period=0 --force
done < <(kubectl get pods | grep MY_DEPLOYMENT_NAME | awk '{print $1}')