ERROR: column “…” does not exist

PostgreSQL folds all unquoted column and table names to lowercase when it executes the SQL query.

If a column or table name contains uppercase letters, we have to put it in double quotes to be able to reference it.

When we get the error message

ERROR: column "username" does not exist

select * from public."AspNetUsers" where UserName = '… ^

HINT: Perhaps you meant to reference the column "AspNetUsers.UserName". SQL state: 42703

The first line of the error message shows that PostgreSQL internally converted UserName to username. To reference the column, use double quotes. If you also reference the table name, don't copy the hint verbatim: the table and column names have to be separate double-quoted strings with a period between them.

select *  from public."AspNetUsers" where "AspNetUsers"."UserName" = 'MY_VALUE';
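
To run the corrected query from a shell script, the whole statement can be passed to psql. This is only a sketch: the connection values are placeholders, and dollar-quoting ($$…$$) is used so the double-quoted identifiers and the string literal survive the shell's single quotes.

# Placeholder connection values; dollar-quoting keeps the string literal intact inside the shell's single quotes
PGPASSWORD=MY_PASSWORD psql -h MY_SERVER_URL -U MY_USERNAME -d MY_DATABASE_NAME \
  -c 'select * from public."AspNetUsers" where "AspNetUsers"."UserName" = $$MY_VALUE$$;'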

PostgreSQL management tools

Install postgresql

The postgresql package contains the PostgreSQL command line utilities, such as psql and pg_dump.

On macOS

 brew install postgresql
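
To verify the installation, check the versions of the installed utilities:

psql --version
pg_dump --version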

Install pgAdmin

pgAdmin is a browser-based database management tool for PostgreSQL databases.

Download

Install

On macOS

  • Double click the downloaded file
  • Accept the license agreement
  • Drag the application into the Applications folder

Start pgAdmin

On macOS

  • Start the pgAdmin application from the Launchpad. The application runs in a browser window.

Install the pgcli command line utility

On macOS

brew install pgcli

Using pgcli

pgcli is a command line client for PostgreSQL with a limited command set, which enables testers and support staff to monitor PostgreSQL databases.

Start pgcli with

PGPASSWORD=MY_ADMIN_PASSWORD pgcli -h MY_SERVER_URL -U MY_USERNAME -d MY_DATABASE_NAME

To see the help on the pgcli commands

\?
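
A few frequently used commands; pgcli accepts the common psql backslash commands:

# List the databases
\l

# List the tables in the current database
\dt

# Describe a table
\d MY_TABLE_NAME

# Quit pgcli
\q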

syntax error, unexpected tLABEL in Chef Test Kitchen

When we test a Chef recipe that contains a template, we may encounter the error message

syntax error, unexpected tLABEL
MY_KEY: MY_VALUE

Make sure there is no white space between the variables keyword and the opening parenthesis in the template definition:

template "MY_FILE" do
  source 'MY_SOURCE'
  variables(
    MY_KEY: MY_VALUE
  )
end

How to remove all Docker containers and images from the host

Docker images are stored on the host so containers can launch as fast as possible. The drawback is that Docker can fill the hard drive with unused images. A possible bug in a recent Docker version caused unused, unreferenced images to accumulate on the host. To delete all Docker containers and images, use these commands in the terminal.

Get the base line

# Get the free disk space
df -h

# List the running containers
docker ps

# List all containers
docker ps -a

# List the Docker images
docker images
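
Docker can also report how much disk space the images, containers, and the build cache use:

# Show Docker disk usage
docker system df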

Gentle clean up

The docker system prune command removes

  • all stopped containers
  • all networks not used by at least one container
  • all dangling images
  • all dangling build cache

docker system prune

All unused images and stopped containers

docker rm $(docker ps -qa); docker rmi -f $(docker images -q); yes | docker system prune
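
An alternative sketch that relies on Docker's own prune flags instead of piping yes; note that -a removes all unused images, not only the dangling ones:

# Remove stopped containers, unused networks, unused images and the build cache without prompting
docker system prune -a -f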

Deep cleaning

This will stop and delete ALL containers and images on the computer!!!

# Stop all running containers
docker stop $(docker ps -q)

# Remove all stopped containers
docker rm $(docker ps -qa)

# Remove all unused Docker images
docker rmi -f $(docker images -q)

# A final clean
docker system prune

Error: Error creating LB Listener: ValidationError: 'arn:aws:elasticloadbalancing:…' must be in ARN format

We create a new AWS Application Load Balancer and get this error message when the second listener is created

Error: Error creating LB Listener: ValidationError: 'arn:aws:elasticloadbalancing:….' must be in ARN format

Make sure the region and the AWS profile are correctly set for the new listener configuration.
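
A minimal sketch, assuming the load balancer is defined in Terraform and the profile and region come from the environment; the profile and region values are placeholders:

# Placeholder profile and region; set them before planning or applying the listener change
export AWS_PROFILE=MY_PROFILE
export AWS_DEFAULT_REGION=us-east-1
terraform plan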

Learn Kubernetes part 3 – Traefik Ingress Controller

In the previous post, Learn Kubernetes part 2 – NGINX Ingress Controller, we deployed a web application and exposed it via the kubernetes/ingress-nginx ingress controller.

In this exercise we will use Traefik, another popular proxy server. Traefik is widely used for several reasons: it is easier to configure and it can automatically generate and renew SSL certificates using Let's Encrypt. We will also reuse most of the scripts from the prior exercise.

Create the Traefik Role Based Access Control configuration with a ClusterRoleBinding. This allows Traefik to serve all namespaces of the cluster. To restrict Traefik to a single namespace, use namespace-specific RoleBindings instead.

traefik-rbac.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses/status
    verbs:
    - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
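
The RBAC file can also be applied and verified on its own before deploying Traefik:

kubectl apply -f traefik-rbac.yaml

# Verify the cluster role and the binding
kubectl get clusterrole traefik-ingress-controller
kubectl get clusterrolebinding traefik-ingress-controller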

Deploy Traefik with a deployment. This introduces a few more network hops, but follows the Kubernetes best practices guidelines and provides more flexibility.

traefik-deployment.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort

Create the resources

Launch the scripted resources

kubectl apply -f .

Check the Traefik pods. List the pods in the kube-system namespace, and make sure the “traefik-ingress-controller” is running.

kubectl --namespace=kube-system get pods
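
The NodePort assigned to the Traefik service can be checked the same way:

kubectl --namespace=kube-system get service traefik-ingress-service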

Expose the Traefik Web UI with a service

To be able to access the Traefik Web UI we will create a service and an ingress to expose it outside of the cluster.

ui.yaml (Based on https://docs.traefik.io/user-guide/kubernetes/)
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web

Create the resources

Launch the scripted resources

kubectl apply -f .

Check the ingress

List the cluster ingresses

kubectl get ingress --all-namespaces
NAMESPACE     NAME             HOSTS                 ADDRESS   PORTS   AGE
kube-system   traefik-web-ui   traefik-ui.minikube             80      53s

Access the Kubernetes cluster via Traefik

To be able to access the traefik-ui.minikube host we need to modify the /etc/hosts file. Add the host to the file with the localhost IP address.

echo "127.0.0.1 traefik-ui.minikube" | sudo tee -a /etc/hosts

The application deployment and service files from the previous exercise are reused without changes:

app1-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          ports:
          - containerPort: 80
app1-frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
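
After saving the reused files next to the Traefik scripts, create them the same way and check the application pods and the service:

kubectl apply -f .

# List the application pods and the service
kubectl get pods,services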

Learn Kubernetes part 2 – NGINX Ingress Controller

Large organizations need to control the incoming traffic to the Kubernetes cluster. The most secure way is to use an ingress controller and create an ingress to channel all incoming traffic to the cluster.

In Learn Kubernetes part 1 – Web application in a Kubernetes cluster we created a simple web application pod and exposed it to the outside world with a service using a load balancer. We will use the files created in that exercise with one change. The deployment stays the same:

app1-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1-frontend-template-label
  template:
    metadata:
      labels:
        app: app1-frontend-template-label
    spec:
      containers:
        - name: app1-frontend-container-label
          image: nginx:1.7.9
          ports:
          - containerPort: 80

In this exercise we will expose the service via an NGINX ingress controller. Delete type: LoadBalancer in the app1-frontend-service.yaml file, so Kubernetes will use type: ClusterIP, the default value.

app1-frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-frontend-service
spec:
  selector:
    app: app1-frontend-template-label
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80

Create an ingress controller

In this example we will use the kubernetes/ingress-nginx ingress controller maintained by the Kubernetes community. See kubernetes/ingress-nginx NGINX Ingress Controller Installation Guide to configure the NGINX Ingress Controller in your environment.

To start the kubernetes/ingress-nginx ingress controller on any operating system, execute the command below to create the ‘nginx-ingress-controller’ deployment with the containers

  • k8s_nginx-ingress-controller_nginx-ingress-controller
  • k8s_POD_nginx-ingress-controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Depending on the operating system, also execute this command to create the ‘ingress-nginx’ service

On macOS

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

Verify the ingress controller installation to make sure it has successfully started

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-86449c74bb-rlx6h   1/1     Running   0          2d4h

Script the ingress

Connect the ingress to the service

Set the name of the service in the spec: … backend: serviceName:

ingress_nginx.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app1-frontend-service
          servicePort: 8080

Create the resources

To launch the application and configure the resources to expose it outside of the Kubernetes cluster, open a terminal in the directory where you saved the files and execute

 kubectl apply -f .

Verify the ingress

List the ingresses

kubectl get ingress
NAME      HOSTS   ADDRESS     PORTS   AGE
ingress   *       localhost   80      59s

To verify the ingress execute

kubectl describe ingress MY_INGRESS_NAME
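
With the ingress created above, the concrete command is:

kubectl describe ingress ingress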

Troubleshooting Kubernetes Ingress-Nginx

See Troubleshooting Kubernetes Ingress-Nginx

See Kubernetes Ingress Controllers for more info.

Accessing the application

To access the application through the ingress, open a web browser and access the application via the ADDRESS and PORTS values: http://localhost:80

The browser may display a certificate warning; click the Advanced button, then click the Proceed to localhost (unsafe) link.

You should see the NGINX default page
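
The same check can be run from the command line; -L follows a possible redirect to HTTPS and -k accepts the controller's self-signed certificate (the redirect is an assumption based on the browser warning above):

# The response body is the NGINX default page
curl -L -k http://localhost/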

Delete the resources

If you want to delete these resources from the Kubernetes cluster, execute

kubectl delete -f .

Delete the ingress controller service

kubectl delete service ingress-nginx -n ingress-nginx

Delete the deployment

kubectl delete deployment nginx-ingress-controller -n ingress-nginx

Delete the ingress-nginx service

kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml

Delete the ‘nginx-ingress-controller’ deployment

kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

Check if all ingress pods are deleted

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Next: Learn Kubernetes part 3 – Traefik Ingress Controller

Kubernetes Dashboard

This is the official Kubernetes dashboard. For security reasons it is currently not recommended to use it, but we can still use it in our local cluster to inspect the resources.

The latest installation information is at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Launch the Kubernetes Dashboard

To launch the Kubernetes Dashboard in your cluster execute the command

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml

Create a proxy to access the cluster. The proxy keeps running in the terminal; press CTRL-C to stop it when it is no longer needed.

kubectl proxy

To open the Kubernetes Dashboard web UI in a web browser navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Start the Kubernetes Dashboard

If you get the error message: Internal error (500): Not enough data to create auth info structure.

  • Execute the commands to add the token to the ~/.kube/config file
TOKEN=$(kubectl -n kube-system describe secret default | awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
  • Copy the token from the ~/.kube/config file to the login screen

Troubleshooting Kubernetes Ingress-Nginx

Use the ingress-nginx kubectl plugin to troubleshoot the Kubernetes Ingress-Nginx.
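
The plugin can be installed with krew, the kubectl plugin manager (assuming krew itself is already installed):

kubectl krew install ingress-nginx

# Verify the plugin is available
kubectl ingress-nginx --help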

Verify the cluster you are connected to

kubectl config current-context

To switch to another cluster

kubectl config use-context docker-for-desktop

Get the address of the cluster

kubectl cluster-info

Get the list of pods

kubectl get pods --all-namespaces

Forward a port to the pod you want to test. The port forwarder keeps running in the terminal; press CTRL-C to stop it.

kubectl port-forward MY_POD_NAME MY_HOST_PORT:MY_POD_PORT

To send a request to the pod, keep the port-forwarding running in the terminal, open a browser and navigate to

http://127.0.0.1:MY_HOST_PORT
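
A concrete sketch, using the ingress controller pod name from the earlier listing as an illustration; substitute your own pod name and namespace:

# Forward local port 8080 to port 80 of the ingress controller pod
kubectl port-forward -n ingress-nginx nginx-ingress-controller-86449c74bb-rlx6h 8080:80

# In another terminal, send a request through the forwarded port
curl http://127.0.0.1:8080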

Verify if the ingress controller pod is running

kubectl get pods --all-namespaces | grep ingress

Check the cluster ingresses in the default namespace

kubectl ingress-nginx ingresses -n default

Get the Nginx ingress configuration

kubectl ingress-nginx conf -n ingress-nginx

Check if the service is accessible from the ingress pod

kubectl ingress-nginx exec -n ingress-nginx -- curl http://127.0.0.1