This is the official Kubernetes dashboard. For security reasons it is currently not recommended for production use, but we can still use it in our local cluster to inspect the resources.
This is a tutorial to script a simple web application deployment in an enterprise-grade Kubernetes cluster that you can follow on your Mac. You only need to install Docker Desktop and enable its built-in Kubernetes cluster.
The frontend of the web application is an NGINX container that listens on port 80 and returns the NGINX default page. The application is exposed outside of the cluster via the kubernetes/ingress-nginx NGINX Ingress Controller at http://localhost.
Save all files in the same directory. During the development process, open a terminal in the directory of the files and periodically test the configuration with kubectl apply -f . (don’t forget the period at the end of the command). This way Kubernetes builds the system step by step, giving you continuous feedback.
I have used unique label values to demonstrate which labels make the connection between the resources via matching label and selector values. Most of the time the application name is used as the label for easy maintenance, but as you learn Kubernetes, it is important to understand the relationships between the resources.
Script the deployment
The deployment configures the containers running in the pods and contains the label that has to match the selector of the service.
Connect the deployment to the pods
The label in spec: selector: matchLabels: connects the deployment to the pods specified in the deployment template via the same deployment’s spec: template: metadata: labels:
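As a sketch, the linkage looks like this in the deployment manifest (the app: example-frontend label is an illustrative name, not a required value):

```yaml
# Deployment fragment -- the two labels below must match
spec:
  selector:
    matchLabels:
      app: example-frontend   # selects the pods created from the template
  template:
    metadata:
      labels:
        app: example-frontend # the label the selector above matches
```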
The service exposes the pods backing it to the rest of the Kubernetes cluster or to the outside world, and Kubernetes injects the service’s connection details into the pods as environment variables when the containers start.
Connect the service to the pods
The label in the service’s spec: selector: has to match the label in spec: template: metadata: labels: of the deployment.
We will expose the service outside of the cluster with type: LoadBalancer
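A service sketch under these assumptions (the name app1-frontend-service matches the kubectl get service output in this tutorial; the app: example-frontend selector is an illustrative label that has to match the deployment’s pod template labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  type: LoadBalancer        # expose the service outside of the cluster
  selector:
    app: example-frontend   # has to match spec.template.metadata.labels of the deployment
  ports:
    - port: 8080            # the port the service listens on
      targetPort: 80        # the containerPort of the NGINX container
```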
To launch the application and configure the resources to expose it outside of the Kubernetes cluster, open a terminal in the directory where you saved the files and execute
kubectl apply -f .
Accessing the application
To access the application get the address of the service
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app1-frontend-service LoadBalancer 10.99.210.235 localhost 8080:32569/TCP 9s
Open a web browser and navigate to the address indicated by the EXTERNAL-IP and PORT: http://localhost:8080
You should see the NGINX default page
Delete the resources
If you want to delete these resources from the Kubernetes cluster, execute
kubectl delete -f .
Helm takes values that we pass into the Helm deployments and transforms a template (also known as a chart) to Kubernetes resources. It is a wrapper around Kubernetes resources.
Under the hood it talks to the Kubernetes API server to create the resources on the fly.
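As a sketch of how values transform a template (a hypothetical chart fragment; replicaCount and the image keys are common conventions, not mandated names):

```yaml
# values.yaml (hypothetical example values)
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
---
# templates/deployment.yaml fragment -- Helm substitutes the values above
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: frontend
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```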
For security reasons it is not a good practice to create individual load balancers for each service.
The safer way is to create one application load balancer outside of the cluster and launch NGINX ingress controller containers to proxy the traffic to the individual services.
Ingress
“Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.”
To get the list of ingresses
kubectl get ingress
To show the details of an ingress
kubectl describe ingress
Creating an ingress
specify the host name of the application in spec: rules: - host:
if no host is set, this rule handles requests to any URL
specify the path this rule applies to at spec: rules: - host: http: paths: - backend: path:
set the service name at spec: rules: … backend: serviceName:
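Putting those fields together, a minimal ingress sketch (the host and names are illustrative; this uses the older serviceName field the text refers to — in the newer networking.k8s.io/v1 API the backend is written as service: name: and port:):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-frontend-ingress
spec:
  rules:
    - host: app1.example.com  # omit host to match requests to any URL
      http:
        paths:
          - path: /
            backend:
              serviceName: app1-frontend-service  # the service to route to
              servicePort: 8080
```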
A pod can contain one or multiple containers, usually one.
Kubernetes recommends launching multiple containers in a pod only when those containers need to share a volume. For example, a syslog-ng container saves log files to a volume, while a Splunk Heavy Forwarder container monitors them and sends the log entries to the Splunk indexer.
Create a deployment to specify the image that will run in the pod as a container.
To list the pods
kubectl get pods
To display the standard out (stdout) messages of a container
kubectl logs MY_POD_NAME
Execute a command in a container of a pod. As there is usually only one container running in the pod, we don’t have to specify the container name.
kubectl exec -it MY_POD_NAME -- /bin/bash
If the pod has multiple containers, add the --container option
kubectl exec -it MY_POD_NAME --container MY_CONTAINER_NAME -- /bin/bash
If the image is always rebuilt with the same tag, like “latest”, set the policy to Always to disable image caching
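In the deployment’s pod template this looks like (a sketch; the container name and image tag are illustrative):

```yaml
# pod template fragment of a deployment
spec:
  template:
    spec:
      containers:
        - name: app1-frontend
          image: nginx:latest
          imagePullPolicy: Always  # always check the registry for a newer image
```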
what to do when the container crashes
spec: template: spec: restartPolicy:
usually set to Always (the only value a deployment allows)
how many replicas to launch simultaneously
spec: replicas
how to map this deployment to actual running containers
The label in spec: selector: matchLabels: connects the deployment to the pod specified in the deployment template via the same deployment’s spec: template: metadata: labels:
the way Kubernetes should replace containers when we update the deployment
spec: strategy:
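The fields above in context (a sketch; the names, label, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3                 # how many replicas to launch simultaneously
  strategy:
    type: RollingUpdate       # replace pods gradually when the deployment is updated
  selector:
    matchLabels:
      app: example-frontend   # maps the deployment to its pods
  template:
    metadata:
      labels:
        app: example-frontend # the label the selector above matches
    spec:
      restartPolicy: Always   # restart the container when it crashes
      containers:
        - name: app1-frontend
          image: nginx:1.25
```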
First, we want to separate the non-production and production environments:
Create two Kubernetes clusters for every application or application suite. One for pre-production and one for production.
Namespaces
We also want to separate each non-production and production-like environment. Kubernetes offers namespaces to create segregated areas: by default, resources in separate namespaces do not see each other. Create a namespace for each environment:
In the non-production cluster
Dev namespace
QA namespace
UAT namespace
In the production cluster
Demo namespace
Stage namespace
Production namespace
To list all resources in a namespace use the -n option in the commands
kubectl get all -n MY_NAMESPACE
Security
Load Balancers
Load balancers are external to the cluster and point to the nodes to expose the applications outside of the cluster.
For security reasons large organizations don’t allow the creation of multiple load balancers. During the cluster creation they temporarily lift the restriction and one ingress load balancer is created. All inbound communication to the cluster passes through that load balancer.
Container images
Do not use the :latest version, as it is hard to control the actual version of the launched image and to roll back to an earlier (unidentified) version.
Use imagePullPolicy: Always. The Docker caching semantics make it very efficient, as the layers are cached in the cluster to avoid unnecessary downloads.
Order of resource creation
Install the CoreDNS add-on to provide in-cluster DNS service for pods, so all pods can find all services by name within the cluster.
Create the service before the deployment it refers to, because the service passes environment variables into the deployment when the containers start.
Create the target service and deployment before the caller deployment, so the target is available when the request is generated.
Switch between Kubernetes clusters
Companies launch multiple Kubernetes clusters, and the DevOps team needs access to all of them. The kubectl command-line utility can only work with one cluster at a time. To work with multiple Kubernetes clusters you need to switch between Kubernetes configurations on your workstation.
To connect to a Kubernetes cluster, add the cluster configuration to the ~/.kube/config file. If you use AWS EKS, the simplest way is to use the AWS CLI to update the file.
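For example (the cluster, region, and context names are placeholders; aws eks update-kubeconfig and the kubectl config subcommands are the actual CLI entry points):

```shell
# Add or refresh an AWS EKS cluster entry in ~/.kube/config
aws eks update-kubeconfig --name MY_CLUSTER --region us-east-1

# List the contexts (clusters) kubectl knows about
kubectl config get-contexts

# Switch kubectl to another cluster
kubectl config use-context MY_OTHER_CONTEXT

# Show which cluster kubectl currently talks to
kubectl config current-context
```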