Kubernetes volumes

To understand the available volume types, read the official Kubernetes documentation on Volumes.

The official documentation on Kubernetes Persistent Volumes and Persistent Volume Claims is at Persistent Volumes.

Migrating to CSI drivers from in-tree plugins

Kubernetes is moving away from in-tree plugins, which had to be checked into the Kubernetes code repository, toward out-of-tree volume plugins like CSI (Container Storage Interface) drivers and FlexVolume. These drivers can be developed by third parties independently of Kubernetes, but they have to be installed and configured on the node. See Migrating to CSI drivers from in-tree plugins.
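As a sketch of how a CSI-backed volume is wired up, a PersistentVolume can reference the installed CSI driver by name. The resource name, driver name, and volume handle below are placeholders, not values from this article:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv            # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com       # name of the CSI driver installed on the nodes (example)
    volumeHandle: vol-0123456789abcdef0   # hypothetical volume ID understood by the driver
    fsType: ext4
```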

Mount propagation

Mount propagation allows a container to share the volumes it mounts with other containers in the same pod, or with containers in other pods on the same host. For more information, see Mount propagation.
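Mount propagation is set per volumeMount. The sketch below (pod and path names are made up) uses Bidirectional propagation, which requires a privileged container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo          # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true            # Bidirectional propagation requires a privileged container
    volumeMounts:
    - mountPath: /shared
      name: host-volume
      # Bidirectional: mounts created by this container propagate back to the host
      # and to all other containers using the same volume
      mountPropagation: Bidirectional
  volumes:
  - name: host-volume
    hostPath:
      path: /mnt/shared           # hypothetical host directory
```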

Access Mode

The access mode specifies how a volume can be mounted by one or more nodes.

  • ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node only
  • ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
  • ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes

For the latest access mode limitations for each volume type, see Access Modes.
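The access mode is requested in the claim. A minimal PersistentVolumeClaim asking for a volume writable from many nodes might look like this (the claim name and size are assumptions for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim              # hypothetical name
spec:
  accessModes:
  - ReadWriteMany                 # request a volume writable from many nodes
  resources:
    requests:
      storage: 5Gi
```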

Volume Types

persistentVolumeClaim

persistentVolumeClaim volumes mount a PersistentVolume into a Pod without exposing the underlying storage backend, such as a GCE PersistentDisk or an iSCSI volume. CSI (Container Storage Interface) volume types do not support direct reference from Pods and can only be referenced via a PersistentVolumeClaim object.
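A Pod references the claim by name; it never names the backing storage itself. A sketch, with hypothetical pod and claim names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-claim            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim         # an existing PersistentVolumeClaim in the same namespace
```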

nfs

nfs (Network File System) volumes can be mounted simultaneously by multiple Pods for writing.
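An nfs volume names the NFS server and the exported path. The server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client                # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com     # hypothetical NFS server
      path: /exports/data         # hypothetical exported directory
```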

awsElasticBlockStore

awsElasticBlockStore mounts an AWS EBS volume into the Pod. EBS volumes have limitations:

  • the Pods have to run on AWS EC2 instance nodes
  • the EC2 instances have to be in the same region and availability-zone as the EBS volume
  • the EBS volume can be attached to only one EC2 instance at a time
  • the access mode can only be ReadWriteOnce

awsElasticBlockStore volume examples

Kubernetes can mount a volume directly into a pod. In this example we mount an AWS EBS volume into the MongoDB pod.

Create a 1 GB volume from the command line

aws ec2 create-volume --availability-zone=us-east-1a --size=1 --volume-type=gp2

Create the mongodb.yml file

apiVersion: v1
kind: Pod
metadata:
  name: mongodb-on-ebs
spec:
  containers:
  - image: mongo
    name: mongo-pod
    volumeMounts:
    - mountPath: /data/db
      name: mongo-volume
  volumes:
  - name: mongo-volume
    awsElasticBlockStore:
      volumeID: <THE_VOLUME_ID>
      fsType: ext4

Create the MongoDB pod

kubectl create -f mongodb.yml

gcePersistentDisk

gcePersistentDisk can be read simultaneously by multiple Pods but written only by one at a time, so it is a good choice as a common configuration source.
Starting in Kubernetes version 1.10, a beta feature allows the creation of Regional Persistent Disks that are available from multiple zones of the same region.

The gcePersistentDisk limitations are:

  • the Pods have to run on GCE VM nodes
  • the nodes have to be in the same GCE project and zone as the Persistent Disk
  • the volume can be mounted to multiple Pods for reading, but only one Pod can write to it. If the Pod is controlled by a ReplicationController, the access mode has to be read-only, or the replica count has to be 0 or 1

gcePersistentDisk volume example

Create the Persistent Disk (accessible from one zone only)

gcloud compute disks create --size=200GB --zone=us-central1-a my-gce-disk

To create a Regional Persistent Disk (beta in Kubernetes 1.10)

gcloud beta compute disks create --size=200GB my-gce-disk \
    --region us-central1 \
    --replica-zones us-central1-a,us-central1-b

Create the Regional Persistent Volume (beta in Kubernetes 1.10)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-gce-disk
    fsType: ext4
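Instead of referencing the disk directly from the Pod, the PersistentVolume above could also be consumed through a claim. A matching claim might look like this (the claim name is an assumption; the size and access mode match the PersistentVolume above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-test-claim            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi              # matches the capacity of gce-test-volume
```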

Then the Pod definition

apiVersion: v1
kind: Pod
metadata:
  name: gce-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: gce-test-container
    volumeMounts:
    - mountPath: /gce-pd
      name: gce-test-volume
  volumes:
  - name: gce-test-volume
    # The GCE Persistent Disk has to exist
    gcePersistentDisk:
      pdName: my-gce-disk
      fsType: ext4

Working with volumes

List all Persistent Volume Claims

kubectl get pvc
