Kubernetes Ingress Controllers

For security reasons it is not good practice to create an individual load balancer for each service.

The safer approach is to create one application load balancer outside of the cluster and launch NGINX ingress controller containers to proxy the traffic to the individual services.

Ingress

“Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.”

To get the list of ingresses

kubectl get ingress

To show the details of an ingress

kubectl describe ingress

Creating an ingress

  • specify the URL of the application in spec: rules: - host:
    • if no host is set, the rule handles requests to any URL
  • specify the path this rule applies to at spec: rules: - host: http: paths: - path:
  • set the service name at spec: rules: … backend: serviceName:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app1-frontend-service
          servicePort: 80
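
A rule can also be restricted to a specific host name. A minimal sketch, using the placeholder host name app1.example.com:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-with-host
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-frontend-service
          servicePort: 80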

Kubernetes Pods

A pod can contain one or multiple containers, usually one.

Kubernetes recommends launching multiple containers in a pod only when those containers need to share a volume. For example, a syslog-ng container writes log files to a shared volume, while a Splunk Heavy Forwarder container monitors them and sends the log entries to the Splunk indexer.
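
A sketch of such a pod with two containers sharing an emptyDir volume; the image names and the mount path are placeholders, not the actual syslog-ng or Splunk images:

apiVersion: v1
kind: Pod
metadata:
  name: logging-pod
spec:
  volumes:
  - name: log-volume        # shared volume, lives as long as the pod
    emptyDir: {}
  containers:
  - name: log-writer        # stands in for the syslog-ng container
    image: placeholder/syslog-ng:1.0
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app
  - name: log-forwarder     # stands in for the Splunk Heavy Forwarder container
    image: placeholder/splunk-forwarder:1.0
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app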

Create a deployment to specify the image that will run in the pod as a container. To list the pods

kubectl get pods

To display the standard out (stdout) messages of a container

kubectl logs MY_POD_NAME

Execute a command in a container of a pod. As usually only one container runs in the pod, we don’t have to specify the container name.

kubectl exec -it MY_POD_NAME -- /bin/bash

If the pod has multiple containers add the --container option

kubectl exec -it MY_POD_NAME --container MY_CONTAINER_NAME -- /bin/bash

Kubernetes Deployments

You only need a deployment to launch a container in Kubernetes. Deployments tell Kubernetes

  • what container to run by specifying the Docker image name and tag
    • spec: template: spec: containers: – image:
  • when to pull the image from the registry
    • spec: template: spec: containers: imagePullPolicy:
      • If the image is always rebuilt with the same tag, like “latest”, set the policy to Always so the node checks the registry on every pod start instead of reusing a possibly stale cached image
  • what to do when the container crashes
    • spec: template: spec: restartPolicy:
      • usually set to Always (the only value a deployment allows)
  • how many replicas to launch simultaneously
    • spec: replicas
  • how to map this deployment to actual running containers
    • The label in spec: selector: matchLabels: connects the deployment to the pod specified in the deployment template via the same deployment’s spec: template: metadata: labels:
  • the way Kubernetes should replace containers when we update the deployment
    • spec: strategy: type:
      • RollingUpdate (default)
  • the namespace
    • metadata: namespace:
      • if not specified, the “default” namespace is used
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          ports:
          - containerPort: 80
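
The fields from the list above that the minimal example does not show (imagePullPolicy, restartPolicy, update strategy, namespace) could be added like this. A sketch only; the namespace name my-namespace is a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
  namespace: my-namespace          # placeholder; omit to use the "default" namespace
spec:
  replicas: 3
  strategy:
    type: RollingUpdate            # the default update strategy
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      restartPolicy: Always        # restart the container when it crashes
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          imagePullPolicy: Always  # check the registry on every pod start
          ports:
          - containerPort: 80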

To list all deployments

kubectl get deployments

To create the deployment and launch the containers it specifies

kubectl apply -f ./MY_DEPLOYMENT_FILE.yml

Display information about the deployment

kubectl describe deployment MY_DEPLOYMENT

The deployment creates pods. A pod can contain one or multiple containers, usually one. To list the pods

kubectl get pods

Working with Kubernetes in enterprise settings

How many Kubernetes clusters do I need?

Clusters

First, we want to separate the non-production and production environments:

  • Create two Kubernetes clusters for every application or application suite: one for non-production and one for production.

Namespaces

We also want to separate the individual environments within each cluster. Kubernetes offers namespaces to create segregated areas; resources in one namespace do not see the resources in other namespaces by name. Create a namespace for each environment (the creation commands are shown after the list):

  • In the non-production cluster
    • Dev namespace
    • QA namespace
    • UAT namespace
  • In the production cluster
    • Demo namespace
    • Stage namespace
    • Production namespace
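
The namespaces can be created with kubectl; for example, in the non-production cluster:

kubectl create namespace dev
kubectl create namespace qa
kubectl create namespace uat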

To list all resources in a namespace use the -n option in the commands

kubectl get all -n MY_NAMESPACE

Security

Load Balancers

Load balancers are external to the cluster and point to the nodes to expose the applications outside of the cluster.

For security reasons large organizations don’t allow the creation of multiple load balancers. During the cluster creation they temporarily lift the restriction and one ingress load balancer is created. All inbound communication to the cluster passes through that load balancer.

Container images

Do not use the :latest tag, as it makes it hard to control the actual version of the launched image and to roll back to an earlier (unidentified) version.

Use imagePullPolicy: Always. Docker’s caching semantics keep it efficient, as the image layers are cached on the nodes to avoid unnecessary downloads.
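
A container spec following both recommendations might look like this; a sketch only, where the 1.7.9 tag just stands in for a pinned, immutable version:

      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9         # pinned tag instead of :latest
          imagePullPolicy: Always    # check the registry on every pod start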

Order of resource creation

Install the CoreDNS add-on to provide in-cluster DNS service for pods, so all pods can find all services by name within the cluster.

Create the service before the deployment it refers to, because Kubernetes injects the service’s environment variables into the pods that start after the service exists.

Create the target service and deployment before the caller deployment, so the target is available when the request is generated.
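
A sketch of the order, assuming the manifests from the earlier sections are saved in hypothetical files named after the resources:

# 1. the target service, so its environment variables exist for pods started later
kubectl apply -f ./app1-frontend-service.yml
# 2. the target deployment, so the backend is running
kubectl apply -f ./app1-frontend-deployment.yml
# 3. the caller deployment, which can now resolve and reach the target service
kubectl apply -f ./app2-caller-deployment.yml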

Switch between Kubernetes clusters

Companies launch multiple Kubernetes clusters, and the DevOps team needs access to all of them. The kubectl command-line utility can only work with one cluster at a time. To work with multiple Kubernetes clusters you need to switch between Kubernetes configurations on your workstation.

To connect to a Kubernetes cluster, add the cluster information to the ~/.kube/config file. If you use AWS EKS, the simplest way is to use the AWS CLI to update the file.

aws eks --region MY_REGION update-kubeconfig --name MY_CLUSTER_NAME

To see the configuration values execute

kubectl config view

To test the connectivity execute

kubectl get svc

If you are not the creator of the cluster you will get the error message

error: You must be logged in to the server (Unauthorized)

To access the cluster, use the access keys of the account that created the cluster in the [default] profile of the ~/.aws/credentials file. For more information see How do I resolve an unauthorized server error when I connect to the Amazon EKS API server?
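
The relevant part of the ~/.aws/credentials file; the key values below are placeholders:

[default]
aws_access_key_id = PLACEHOLDER_ACCESS_KEY_ID
aws_secret_access_key = PLACEHOLDER_SECRET_ACCESS_KEY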

Get the list of configured Kubernetes contexts. The asterisk in the first column of the output shows the currently selected context.

kubectl config get-contexts
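
The output looks similar to this (illustrative values only):

CURRENT   NAME              CLUSTER           AUTHINFO          NAMESPACE
*         my-eks-cluster    my-eks-cluster    my-eks-user
          docker-desktop    docker-desktop    docker-desktop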

Switch to another cluster

kubectl config use-context THE_VALUE_OF_THE_CONTEXT_NAME # the name: attribute of the context

To remove a cluster from the kube config

Display the config

kubectl config view

Delete the user

kubectl config unset users.THE_NAME_VALUE_OF_THE_USER

Delete the cluster

kubectl config unset clusters.THE_NAME_VALUE_OF_THE_CLUSTER

Delete the context

kubectl config unset contexts.THE_NAME_VALUE_OF_THE_CONTEXT

Kubernetes Deployment Scaling

Get all deployments

kubectl get deployments

Get all pods

kubectl get pods

Scale the deployment

kubectl scale deployment --replicas=4 MY_DEPLOYMENT_NAME

Check the result of the scaling with

kubectl get deployments
kubectl get pods -o wide

Get the deployment events at the end of the output of

kubectl describe deployments/MY_DEPLOYMENT_NAME

To scale down the replicas, execute the scale command again

kubectl scale deployments/MY_DEPLOYMENT_NAME --replicas=2

Kubernetes Services

Kubernetes Services route traffic across a set of pods. The service specifies how deployments (applications) are exposed to each other or the outside world.

Service types

The service type specifies how the deployment will be exposed

ClusterIP

The ClusterIP service is only visible within the cluster. To expose the pod to other services in the cluster

  • set the published port with spec: ports: - port:
  • set the port inside the container with spec: ports: - targetPort:
  • other services can find this service by its name, specified by metadata: name: even if the IP address of the pod changes
  • spec: selector: specifies the label of the template within the deployment. All pods started by the template will back the service.
  • set the service type to ClusterIP with spec: type: to only expose it within the cluster. Use Ingress to expose your Service outside of the cluster with consolidated proxy rules via a single IP address.
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80

LoadBalancer

Creates a load balancer external to the cluster and points it to the nodes to expose the application outside of the cluster.

As noted in the Security section above, large organizations usually allow only one ingress load balancer, created during cluster creation; all inbound communication to the cluster passes through it.

Best practices

Don’t specify hostPort for a pod unless it is really necessary; it limits scheduling flexibility, because each hostIP, hostPort, and protocol combination has to be unique within the cluster.

Avoid using hostNetwork, as it also limits networking flexibility.

Use the IPVS proxy mode, as the other proxy modes, userspace and iptables, are based on iptables operations that slow down dramatically in large clusters, e.g. with 10,000 services. IPVS-based kube-proxy also has more sophisticated load balancing algorithms (least connections, locality, weighted, persistence).
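
The proxy mode is set in the kube-proxy configuration file. A minimal sketch of the relevant fields; the lc scheduler stands for least connections, rr (round robin) is the default:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"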

Commands

List all pods

kubectl get pods

List all deployments

kubectl get deployments

List all services of the cluster.

kubectl get services

Create a new service and expose a port of the pod via a node port (the same random port on every node)

kubectl expose deployment/MY_DEPLOYMENT_NAME --type="NodePort" --port 8080

To find the IP and port of the endpoint where the service is exposed, see the value of the ‘Endpoints:’ in the output of the describe command

kubectl describe services/MY_SERVICE_NAME

The endpoint is the pod IP and port. If the service is a web site or API you can test it with

curl ENDPOINT_IP:ENDPOINT_PORT

To test the pod via the service get the Kubernetes cluster IP and use the ‘NodePort:’ value

curl CLUSTER_IP:NODE_PORT

Get the ‘Labels:’ of the service from the output of the describe command above. List the pods of the service

kubectl get pods -l run=LABEL_IN_SERVICE

List the service of the pod by label

kubectl get services -l run=LABEL_IN_SERVICE

Add a new label to the pod

kubectl label pod MY_POD_NAME app=v1

Display the pod information

kubectl describe pods MY_POD_NAME

Delete the service by label

kubectl delete service -l run=LABEL_IN_SERVICE

Docker Swarm volumes

Containers are ephemeral. The container’s writable layer is discarded when the container is removed, so even if the container is set up with automatic restart, the new container will not have access to the data created inside the old container. To save persistent data of Docker containers we need to create volumes that live outside of the ephemeral containers. Don’t map volumes that store important information to the host. Use platform storage, like Amazon EBS, GCE Persistent Disk or Azure Disk Volume, to store all your important data. Set up an automatic backup process to create volume snapshots regularly, so even if the host goes away, your data is safe.

Docker Swarm uses volume drivers to connect to external storage resources. One of the most versatile is REX-Ray.

We will add a volume to a MongoDB service running in a Docker Swarm.

Install the REX-Ray plugin on the Docker host

docker plugin install rexray/ebs EBS_ACCESSKEY=aws_access_key EBS_SECRETKEY=aws_secret_key

Create a 1 GB Docker volume with Rex-Ray and gp2 EBS backing.

docker volume create --driver rexray/ebs --opt size=1 --opt volumetype=gp2 --name ebs_volume

Launch a new MongoDB service and map the volume to the MongoDB data directory

docker service create --network my_overlay_network --replicas 1 --mount type=volume,src=ebs_volume,target=/data/db --name mongodb mongo
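
To verify the volume and the service:

docker volume ls
docker service ps mongodb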