How many Kubernetes clusters do I need?
First, we want to separate the non-production and production environments:
- Create two Kubernetes clusters for every application or application suite: one for non-production and one for production.
We also want to separate the individual environments within the non-production and production clusters. Kubernetes offers namespaces to create segregated areas; resources in separate namespaces cannot see each other. Create a namespace for each environment:
- In the non-production cluster
- Dev namespace
- QA namespace
- UAT namespace
- In the production cluster
- Demo namespace
- Stage namespace
- Production namespace
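The namespaces above can be created with kubectl; a minimal sketch using the environment names listed (run each set against the appropriate cluster):

```shell
# In the non-production cluster
kubectl create namespace dev
kubectl create namespace qa
kubectl create namespace uat

# In the production cluster
kubectl create namespace demo
kubectl create namespace stage
kubectl create namespace production
```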
To list all resources in a namespace, use the -n option:
kubectl get all -n MY_NAMESPACE
Load balancers are external to the cluster and point to the nodes to expose the applications outside of the cluster.
For security reasons, large organizations often don't allow the creation of multiple load balancers. During cluster creation they temporarily lift the restriction, and one ingress load balancer is created. All inbound communication to the cluster passes through that load balancer.
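With a single shared ingress load balancer, individual applications are exposed through Ingress resources instead of their own load balancers. A minimal sketch, assuming a hypothetical my-app service listening on port 80 and an nginx ingress controller in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # hypothetical application name
  namespace: dev
spec:
  ingressClassName: nginx      # assumes the nginx ingress controller
  rules:
    - host: my-app.example.com # hypothetical host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

The shared load balancer routes the request to the ingress controller, which forwards it to the service matching the host and path rules.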
Do not use the :latest image tag, as it makes it hard to control the actual version of the launched image and to roll back to an earlier (unidentified) version.
Use imagePullPolicy: Always. Docker's caching semantics make this efficient, as the image layers are cached on the nodes to avoid unnecessary downloads.
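The two recommendations above appear in the container spec of the deployment; a sketch with a hypothetical image name and version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.4.2  # pinned version tag, not :latest
          imagePullPolicy: Always                   # always check the registry for the tag
```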
Order of resource creation
Install the CoreDNS add-on to provide in-cluster DNS service for pods, so all pods can find all services by name within the cluster.
Create the service before the deployment it refers to, because Kubernetes injects the service's connection details as environment variables into containers that start after the service exists.
Create the target service and deployment before the caller deployment, so the target is available when the request is generated.
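The ordering rules above can be sketched as a sequence of kubectl apply calls; the file names are hypothetical:

```shell
# 1. In-cluster DNS (CoreDNS is installed by default on most managed clusters)
# 2. Target service first, so its environment variables reach the pods
kubectl apply -f backend-service.yaml
# 3. Target deployment next, so the backend is ready to answer
kubectl apply -f backend-deployment.yaml
# 4. Caller deployment last, so its first requests find a live target
kubectl apply -f frontend-deployment.yaml
```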
Switch between Kubernetes clusters
Companies launch multiple Kubernetes clusters, and the DevOps team needs access to all of them. The kubectl command-line utility can only work with one cluster at a time. To work with multiple Kubernetes clusters you need to switch between Kubernetes configurations on your workstation.
To connect to a Kubernetes cluster, add the cluster's connection information to the ~/.kube/config file. If you use AWS EKS, the simplest way is to use the AWS CLI to update the file:
aws eks --region MY_REGION update-kubeconfig --name MY_CLUSTER_NAME
To see the configuration values execute
kubectl config view
To test the connectivity execute
kubectl get svc
If you are not the creator of the cluster, you will get the error message:
error: You must be logged in to the server (Unauthorized)
To access the cluster, use the access keys of the account that created the cluster in the [default] profile of the ~/.aws/credentials file. For more information, see How do I resolve an unauthorized server error when I connect to the Amazon EKS API server?
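The credentials file follows the standard AWS INI layout; a sketch with placeholder values in the style of the commands above:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = MY_ACCESS_KEY_ID          # access keys of the account
aws_secret_access_key = MY_SECRET_ACCESS_KEY  # that created the cluster
```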
Get the list of configured Kubernetes contexts. The asterisk in the first column of the output shows the currently selected context.
kubectl config get-contexts
Switch to another cluster
kubectl config use-context THE_VALUE_OF_THE_CONTEXT_NAME # (the name:attribute of the context)
To remove a cluster from the kube config
Display the config
kubectl config view
Delete the user
kubectl config unset users.THE_NAME_VALUE_OF_THE_USER
Delete the cluster
kubectl config unset clusters.THE_NAME_VALUE_OF_THE_CLUSTER
Delete the context
kubectl config unset contexts.THE_NAME_VALUE_OF_THE_CONTEXT