When we test a Chef cookbook in Test Kitchen and get the error message
TypeError: no implicit conversion of String into Integer
check if all attribute references have the “node” keyword in front of them:
node['key']
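As an illustration, here is a minimal sketch of how the error typically appears in a recipe; the “myapp” cookbook and its “port” attribute below are made-up examples.
# Hypothetical recipe fragment; the 'myapp' cookbook and its 'port' attribute are made up.
# Wrong: ['myapp'] is a literal Ruby array, so ['myapp']['port'] tries to index the
# array with a String and raises "TypeError: no implicit conversion of String into Integer".
# port = ['myapp']['port']
# Correct: reference the attribute through the node object.
port = node['myapp']['port']
file '/tmp/myapp-port' do
  content "myapp listens on port #{port}\n"
end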
In the previous post, Learn Kubernetes part 2 – NGINX Ingress Controller, we deployed a web application and exposed it via the kubernetes/ingress-nginx ingress controller.
In this exercise we will use Traefik, another popular proxy server. Traefik is widely used for multiple reasons: it is easier to configure, and it can automatically generate and renew SSL certificates using Let’s Encrypt. We will also reuse most of the scripts from the prior exercise.
Create the Traefik Role Based Access Control with a ClusterRoleBinding. This allows Traefik to serve all namespaces of the cluster. To restrict Traefik to a single namespace, use namespace-specific RoleBindings instead; a sketch of that variant follows the file below.
traefik-rbac.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
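As mentioned above, Traefik can instead be restricted to a single namespace with namespace-specific RoleBindings. The following is a minimal sketch of that variant; the namespace my-namespace is an assumption, and the rules mirror the ClusterRole above.
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: my-namespace
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
With this variant Traefik also has to be configured to watch only that namespace; see the Traefik Kubernetes user guide linked above for the matching settings.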
Deploy Traefik with a deployment. This introduces a few more network hops, but follows Kubernetes best practices and provides more flexibility.
traefik-deployment.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
Launch the scripted resources
kubectl apply -f .
Check the Traefik pods. List the pods in the kube-system namespace, and make sure the “traefik-ingress-controller” is running.
kubectl --namespace=kube-system get pods
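If the pod is not in the Running state, the controller logs usually show why. A quick way to check them, assuming the k8s-app: traefik-ingress-lb label from the deployment above:
kubectl logs -n kube-system -l k8s-app=traefik-ingress-lb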
To be able to access the Traefik Web UI we will create a service and an ingress to expose it outside of the cluster.
ui.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: web
      port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-web-ui
              servicePort: web
Launch the scripted resources
kubectl apply -f .
List the cluster ingresses
kubectl get ingress --all-namespaces
NAMESPACE     NAME             HOSTS                 ADDRESS   PORTS   AGE
kube-system   traefik-web-ui   traefik-ui.minikube             80      53s
Access the Kubernetes cluster via Traefik
To be able to access the traefik-ui.minikube host we need to modify the /etc/hosts file. Add the host to the file with the localhost IP address.
echo "127.0.0.1 traefik-ui.minikube" | sudo tee -a /etc/hosts
Large organizations need to control the incoming traffic to the Kubernetes cluster. The most secure way is to use an ingress controller and create an ingress to channel all incoming traffic to the cluster.
In Learn Kubernetes part 1 – Web application in a Kubernetes cluster we created a simple web application pod and exposed it to the outside world with a service using a load balancer. We will use the files we created in that exercise with one change. The deployment is the same:
app1-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          ports:
            - containerPort: 80
In this exercise we will expose the service via an NGINX ingress controller. Delete type: LoadBalancer in the app1-frontend-service.yaml file, so Kubernetes will use type: ClusterIP, the default value.
app1-frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
In this example we will use the kubernetes/ingress-nginx ingress controller maintained by the Kubernetes community. See kubernetes/ingress-nginx NGINX Ingress Controller Installation Guide to configure the NGINX Ingress Controller in your environment.
To start the kubernetes/ingress-nginx ingress controller on any operating system, execute this command to create the ‘nginx-ingress-controller’ deployment and its containers
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Depending on the operating system, also execute the command that creates the ‘ingress-nginx’ service
On Macintosh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
Verify the ingress controller installation to make sure it has successfully started
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-86449c74bb-rlx6h   1/1     Running   0          2d4h
Set the name of the service in the spec: … backend: serviceName:
ingress_nginx.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-frontend-service
              servicePort: 8080
To launch the application and configure the resources to expose it outside of the Kubernetes cluster, open a terminal in the directory where you saved the files and execute
kubectl apply -f .
List the ingresses
kubectl get ingress
NAME      HOSTS   ADDRESS     PORTS   AGE
ingress   *       localhost   80      59s
To verify the ingress execute
kubectl describe ingress MY_INGRESS_NAME
See Troubleshooting Kubernetes Ingress-Nginx
See Kubernetes Ingress Controllers for more info.
To access the application through the ingress, open a web browser and navigate to the ADDRESS and PORTS values: http://localhost:80
The browser will display a warning; click the Advanced button
Click the Proceed to localhost (unsafe) link
You should see the NGINX default page
If you want to delete these resources from the Kubernetes cluster, execute
kubectl delete -f .
Delete the ingress controller service
kubectl delete service ingress-nginx -n ingress-nginx
Delete the deployment
kubectl delete deployment nginx-ingress-controller -n ingress-nginx
Delete the ingress-nginx service
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
Delete the ‘nginx-ingress-controller’ deployment
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Check if all ingress pods are deleted
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
This is the official Kubernetes Dashboard. For security reasons it is currently not recommended for production use, but we can still use it in our local cluster to inspect the resources.
The latest installation information is at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
To launch the Kubernetes Dashboard in your cluster execute the command
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
Create a proxy to access the cluster. The proxy keeps running in the terminal; press CTRL-C to stop it when it is no longer needed.
kubectl proxy
To open the Kubernetes Dashboard web UI in a web browser navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
If you get the error message Internal error (500): Not enough data to create auth info structure, set a token for the kubernetes-admin user:
TOKEN=$(kubectl -n kube-system describe secret default| awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
Use the ingress-nginx Kubernetes plugin to troubleshoot the Kubernetes Ingress-Nginx.
Verify the cluster you are connected to
kubectl config current-context
To switch to another cluster
kubectl config use-context docker-for-desktop
Get the address of the cluster
kubectl cluster-info
Get the list of pods
kubectl get pods --all-namespaces
Forward a port to the pod you want to test. The port forwarder keeps running in the terminal; press CTRL-C to stop it.
kubectl port-forward MY_POD_NAME MY_HOST_PORT:MY_POD_PORT
To send a request to the pod, keep the port-forwarding running in the terminal, open a browser and navigate to
http://127.0.0.1:MY_HOST_PORT
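For example, to test the nginx ingress controller pod shown earlier in this knowledge base (your pod name and namespace may differ):
kubectl port-forward -n ingress-nginx nginx-ingress-controller-86449c74bb-rlx6h 8080:80
then navigate to http://127.0.0.1:8080 to send a request directly to the ingress controller.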
Verify if the ingress controller pod is running
kubectl get pods --all-namespaces | grep ingress
Check the cluster ingresses in the default namespace
kubectl ingress-nginx ingresses -n default
Get the Nginx ingress configuration
kubectl ingress-nginx conf -n ingress-nginx
Check if the service is accessible from the ingress pod
kubectl ingress-nginx exec -n ingress-nginx -- curl http://127.0.0.1
This is a tutorial to script a simple web application deployment in an enterprise-grade Kubernetes cluster that you can follow on your Macintosh. You only need to install Docker and enable Kubernetes.
The frontend of the web application is represented by an NGINX container that listens on port 80 and returns the NGINX default page. The application is exposed outside of the cluster via a kubernetes/ingress-nginx NGINX Ingress Controller, at the address http://localhost
Save all files in the same directory. During the development process open a terminal in the directory of the files, and periodically test the configuration with kubectl apply -f . (don’t forget the period at the end of the command). This way Kubernetes builds the system step by step, giving you continuous feedback.
I have used unique label values to demonstrate which labels make the connection between the resources via the label and selector values. Most of the time the application name is used as the label for easy maintenance, but as you learn Kubernetes it is important to understand the relationships between the resources.
The deployment configures the containers running in the pods and contains the label that has to match the selector of the service.
The label in spec: selector: matchLabels: connects the deployment to the pods specified in the deployment template via the same deployment’s spec: template: metadata: labels:
app1-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Launch the pod from the terminal. Don’t forget the period at the end of the line.
kubectl apply -f .
To make sure the container in the pod is running, we can test the pod. Get the list of pods
kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
MY_POD_NAME   1/1     Running   0          10m
Temporarily set up port forwarding to access the pod from outside of the cluster. We only use this to test the pod.
kubectl port-forward MY_POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Connect to the pod with a web browser. Navigate to http://127.0.0.1:8080/
You should see the NGINX default page.
To stop the port forwarding, press CTRL-C in the terminal.
See Kubernetes Deployments for more info
The service gives the pods backing it a stable address and exposes them to the rest of the Kubernetes cluster or to the outside world.
The label in the service’s spec: selector: has to match the label in spec: template: metadata: labels: of the deployment.
We will expose the service outside of the cluster with type: LoadBalancer
app1-frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
See Kubernetes Services for more info.
To launch the application and configure the resources to expose it outside of the Kubernetes cluster, open a terminal in the directory where you saved the files and execute
kubectl apply -f .
To access the application get the address of the service
kubectl get service
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
app1-frontend-service   LoadBalancer   10.99.210.235   localhost     8080:32569/TCP   9s
Open a web browser and navigate to the address indicated by the EXTERNAL-IP and PORT: http://localhost:8080
You should see the NGINX default page
If you want to delete these resources from the Kubernetes cluster, execute
kubectl delete -f .
Helm takes the values we pass into a Helm deployment and renders a template (packaged as a chart) into Kubernetes resources. It is a wrapper around Kubernetes resources.
In the background it creates the resources through the Kubernetes API on the fly.
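A minimal sketch of the idea, following the usual Helm chart layout; the replicaCount value and the chart contents below are hypothetical examples, not an existing chart.
values.yaml
# The values we pass into the deployment; replicaCount is a made-up example.
replicaCount: 3
templates/deployment.yaml
# A chart template that Helm renders with the values above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-frontend
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-frontend
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-frontend
    spec:
      containers:
        - name: frontend
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Running helm template against the chart directory prints the rendered manifests without installing anything, which is an easy way to see how the values are substituted.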
For security reasons it is not a good practice to create individual load balancers for each service.
The safer way is to create one application load balancer outside of the cluster and launch ingress controller NGINX containers to proxy the traffic to the individual services.
“Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.”
To get the list of ingresses
kubectl get ingress
To show the details of an ingress
kubectl describe ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-frontend-service
              servicePort: 8080
Kubernetes provides ways to check the health of pods. Liveness and Readiness Probes can monitor the health of the pods.
We can specify an address in the pod that Kubernetes can query, and if an HTTP 200 response code is returned, the pod is considered healthy.
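A minimal sketch of such probes on the NGINX container used in these exercises; the path / and the timing values below are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app1-frontend-probe-demo
spec:
  containers:
    - name: app1-frontend-container-label
      image: nginx:1.7.9
      ports:
        - containerPort: 80
      livenessProbe:          # restart the container when this check fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:         # keep the pod out of the service endpoints until this check passes
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10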
A pod can contain one or multiple containers, usually one.
Kubernetes recommends launching multiple containers in a pod only when those containers need to share a volume. For example, a syslog-ng container saves log files in a volume, and a Splunk Heavy Forwarder container monitors them and sends the log entries to the Splunk Indexer.
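A minimal sketch of that pattern: two containers sharing an emptyDir volume. The busybox image and the shell commands below are placeholders standing in for the real syslog-ng and Splunk Heavy Forwarder containers.
apiVersion: v1
kind: Pod
metadata:
  name: shared-log-volume-demo
spec:
  volumes:
    - name: log-volume
      emptyDir: {}
  containers:
    - name: log-writer        # stands in for the syslog-ng style container writing the log files
      image: busybox
      command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
    - name: log-forwarder     # stands in for the forwarder container reading the same files
      image: busybox
      command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
      volumeMounts:
        - name: log-volume
          mountPath: /var/log/app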
Create a deployment to specify the image that will run in the pod as a container.
To list the pods
kubectl get pods
To display the standard out (stdout) messages of a container
kubectl logs MY_POD_NAME
Execute a command in a container of a pod. As usually only one container runs in the pod, we don’t have to specify the container name.
kubectl exec -it MY_POD_NAME /bin/bash
If the pod has multiple containers, add the --container option
kubectl exec -it MY_POD_NAME --container MY_CONTAINER_NAME /bin/bash