Kubernetes has a reputation for being great for stateless application deployment. If you don’t require any kind of local storage inside your containers, the barrier to entry for you to deploy on Kubernetes is probably very, very low. However, it’s a fact of life that some applications require some kind of local storage.
Kubernetes supports this using Volumes, and out of the box there is support for more than enough volume types for the average Kubernetes user. For example, if your Kubernetes cluster is deployed to AWS, you’re probably going to make use of the awsElasticBlockStore volume type, and think very little of it.
There are situations however, where you might be deploying your cluster to a different platform, like physical datacenters or perhaps another “cloud” provider like DigitalOcean. In these situations, you might think you’re a little bit screwed, and up until recently you kind of were. The only way to get a new storage provider supported in Kubernetes was to write one, and then run the gauntlet of getting a pull request accepted into the main Kubernetes repo.
However, a new volume type has opened up the door to custom volume providers, and they are exceptionally simple to write and use. FlexVolumes are a relatively new addition to the Kubernetes volume list, and they allow you to run an arbitrary script or volume provisioner on the Kubernetes host to create a volume.
Before we dive too deep into FlexVolumes, it’s worth refreshing exactly how volumes work on Kubernetes and how they are mapped into the container.
Volumes Crash Course
If you’ve been using Volumes in Kubernetes in a cloud provider, you might not be fully aware of exactly how they work. If you are aware, I suggest you skip ahead. For those that aren’t, let’s have a quick overview of how EBS volumes work in Kubernetes.
Create an EBS Volume.
The first thing you have to do is create an EBS volume. If you’re using the AWS CLI this is as easy as:
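Something like the following (the availability zone, size and volume type here are illustrative placeholders — pick whatever suits your cluster; this obviously needs valid AWS credentials):

```shell
aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
```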
Which will return something like..
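The exact output varies, but the field you care about is the VolumeId (all values below are illustrative):

```json
{
    "AvailabilityZone": "eu-west-1a",
    "Encrypted": false,
    "VolumeType": "gp2",
    "VolumeId": "vol-0123456789abcdef0",
    "State": "creating",
    "Size": 10
}
```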
Your EBS volume is now ready to go.
Once you have the volume, you’ll probably want to attach it to a Kubernetes pod! In order to do this, you’ll need to take the volume ID and use it in your kubernetes manifest. The awsElasticBlockStore documentation has an example, like so:
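A minimal manifest looks something like this (image and names are illustrative; the volumeID is the one returned when you created the volume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    - name: test-volume
      mountPath: /test-ebs
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: "vol-0123456789abcdef0"
      fsType: ext4
```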
Now, if you look in the pod, you’ll see a mount at
/test-ebs, but how has it got there? The answer is actually surprisingly simple.
If you examine the EBS volume that was created, you’ll see it’s been attached to an instance!
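You can check this with describe-volumes — the Attachments section will show the instance ID the volume is attached to (volume ID illustrative):

```shell
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
    --query 'Volumes[0].Attachments'
```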
So let’s log into this host, and find the device:
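On the instance itself, the device shows up in the mount table. The pod UID, device name and volume name below are illustrative placeholders, but the general shape is:

```shell
$ mount | grep kubelet
/dev/xvdf on /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~aws-ebs/test-volume type ext4 (rw,relatime)
```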
As you can see here, it’s mounted on the host under the
/var/lib/kubelet directory. This gives us a clue as to how this happened, but to confirm, you can examine the kubelet logs, where you’ll see it attaching and mounting the volume.
The main point here is that when we provide a pod with a volume mount, it’s the kubelet that takes care of the process. All it does is mount the external volume (in this case the EBS volume) onto a directory on the host (under the
/var/lib/kubelet dir) and then from there, it can map that volume into the container. There isn’t any fancy magic on the container side, it’s essentially just a normal docker volume to the container.
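As a conceptual sketch (the paths, device name and pod UID here are purely illustrative placeholders):

```shell
# 1. The kubelet mounts the external volume onto a host directory it manages:
mount /dev/xvdf /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~aws-ebs/test-volume

# 2. The container runtime then bind-mounts that directory into the
#    container, roughly equivalent to:
docker run -v /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~aws-ebs/test-volume:/test-ebs ...
```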
Okay, so now that we know how volumes work in Kubernetes, we can start to examine how FlexVolumes work.
FlexVolumes are essentially very simple scripts executed by the kubelet on the host. The script should implement five functions:
- init - to initialize the volume driver. This could be just an empty function if needed
- attach - to attach the volume to the host. In many cases, this might be empty, but in some cases, like for EBS, you might have to make an API call to attach it to the host
- mount - mount the volume on the host. This is the important part, and is what makes the volume available to the host so it can be mapped into the container
- unmount - hopefully self explanatory - unmount the volume
- detach - again, hopefully self explanatory - detach the volume from the external host.
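Putting those five together, a skeleton driver looks something like this. It’s purely illustrative — every operation just reports success, where a real driver would do actual work in each step:

```shell
#!/bin/bash
# Minimal FlexVolume driver skeleton (illustrative only).

init() {
    # One-time setup for the driver; often a no-op.
    echo '{"status": "Success"}'
}

attach() {
    # $1: JSON options from the pod spec. For cloud volumes this is where
    # you'd call the provider's API to attach the device to this host.
    echo '{"status": "Success"}'
}

domount() {
    # $1: mount path, $2: device, $3: JSON options.
    # A real driver would format (if needed) and mount the device here.
    echo '{"status": "Success"}'
}

unmount() {
    # $1: mount path to unmount.
    echo '{"status": "Success"}'
}

detach() {
    # $1: device to detach from the host.
    echo '{"status": "Success"}'
}

# The kubelet invokes the script as: <driver> <operation> [args...]
if [ $# -gt 0 ]; then
    op="$1"
    shift
else
    op=""
fi

case "$op" in
    init)    init "$@" ;;
    attach)  attach "$@" ;;
    mount)   domount "$@" ;;
    unmount) unmount "$@" ;;
    detach)  detach "$@" ;;
    *)       echo '{"status": "Not supported"}' ;;
esac
```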
For each of these functions, some parameters are passed as script arguments. The last argument is interesting, because it’s actually a JSON string of options for the driver (more on this later). These parameters specify things that are important to the function, and as we examine a real world example they should become clearer.
The kubernetes repo has a helpful LVM example in the form of a bash script, which makes it nice and readable and easy to understand. Let’s look at some of the functions..
The init function is very simple, as LVM doesn’t require any initialization:
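In script form, that’s little more than returning a success status (a sketch, not the repo’s exact code):

```shell
init() {
    # LVM needs no setup, so just report success to the kubelet.
    echo '{"status": "Success"}'
}
```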
Notice how we’re returning JSON here, which isn’t much fun in bash!
The attach function for the LVM example simply determines if the device exists. Because we don’t have to do any API calls to a cloud provider, this makes it quite simple:
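A sketch of that attach logic (jq is assumed to be installed; the volumeID and volumegroup option names come from the options block in the pod manifest):

```shell
attach() {
    # $1 is the JSON options string passed by the kubelet.
    VOLUMEID=$(echo "$1" | jq -r '.volumeID')
    VG=$(echo "$1" | jq -r '.volumegroup')
    DMDEV="/dev/${VG}/${VOLUMEID}"

    # No cloud API to call - just check the LVM device node exists.
    if [ ! -b "${DMDEV}" ]; then
        echo "{\"status\": \"Failure\", \"message\": \"Volume ${DMDEV} does not exist\"}"
        return 1
    fi
    echo "{\"status\": \"Success\", \"device\": \"${DMDEV}\"}"
}
```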
As you saw earlier, the LVM device needs to exist before we can mount it (in the EBS example earlier, we had to create the device) and so during the attach phase, we ensure the device is available.
The final stage is the mount section.
This is a little bit more involved, but still relatively simple. Essentially, what happens here is:
- The passed device is formatted to a filesystem provided in the parameters
- A directory is created to mount the volume to
- It’s then mounted at the mount path provided by the kubelet
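The steps above can be sketched like this (illustrative, not the repo’s exact script; jq is assumed to be installed):

```shell
domount() {
    MNTPATH="$1"   # where the kubelet wants the volume mounted
    DMDEV="$2"     # the device returned by the attach step
    FSTYPE=$(echo "$3" | jq -r '."kubernetes.io/fsType" // "ext4"')

    if [ ! -b "${DMDEV}" ]; then
        echo "{\"status\": \"Failure\", \"message\": \"${DMDEV} does not exist\"}"
        return 1
    fi

    # Format the device only if it doesn't already have a filesystem.
    if ! blkid "${DMDEV}" > /dev/null 2>&1; then
        mkfs -t "${FSTYPE}" "${DMDEV}" > /dev/null 2>&1
    fi

    mkdir -p "${MNTPATH}"
    mount "${DMDEV}" "${MNTPATH}"
    echo '{"status": "Success"}'
}
```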
You may be wondering, where do these parameters I keep talking about come from? The answer is from the pod manifest sent to the kubelet. Here’s an example that uses the above LVM FlexVolume:
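Something like this (the pod name, image, and option values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lvm-test
spec:
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
  volumes:
  - name: test
    flexVolume:
      driver: "leebriggs.co.uk/lvm"
      fsType: "ext4"
      options:
        volumeID: "vol1"
        size: "1000m"
        volumegroup: "kube_vg"
```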
The key section here is the “options” section. The volume ID, size and volume group are all passed by the kubelet to the driver script as
$3, a JSON string, which is why there’s a bunch of jq munging happening in the above scripts.
Now that you understand how FlexVolumes work, you need to make the kubelet aware of them. Currently, the only way to do this is to install them on the host under a specific directory.
FlexVolumes need a “namespace” (for want of a better word) and a name. So for example, my personally built lvm FlexVolume might be
leebriggs.co.uk/lvm. When we install our script, it needs to be installed like so on the host that runs the kubelet:
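With the default kubelet configuration, that means placing the executable like so (the plugin directory can be changed with the kubelet’s --volume-plugin-dir flag; note the ~ separating the namespace and the name):

```shell
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/leebriggs.co.uk~lvm
cp lvm /usr/libexec/kubernetes/kubelet-plugins/volume/exec/leebriggs.co.uk~lvm/lvm
chmod +x /usr/libexec/kubernetes/kubelet-plugins/volume/exec/leebriggs.co.uk~lvm/lvm
```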
Once you’ve done this, restart the kubelet, and you should be able to use your FlexVolume as you need.
The manifest above gives you an example of how to use FlexVolumes. It’s worth noting that not all FlexVolumes will be in the same format though. Make sure the driver name matches the directory under the
exec folder (in our case,
leebriggs.co.uk~lvm) and that you pass your required options around.
This was a fairly quick crash course in FlexVolumes for Kubernetes. There are a couple of problems with it:
- The example is written in bash, which isn’t great at manipulating JSON
- It uses LVM, which isn’t exactly multi-host compatible
The first point is easily solved by writing a driver in a language with JSON parsing built in. There are a few FlexVolume drivers popping up in Go - I wrote one for ploop in Go using a library which was written to ease the process, but there are others:
- Github user TonyZuo has written a couple of interesting ones: one for DigitalOcean and one for Packet. Check them out here
- There’s a Rancher Flexvolume for those using Rancher
- Finally, one very interesting one is this Vault Flexvolume which can map Vault to directories inside containers.
All of this deals with mapping single, static volumes into containers, but there is more. Currently, you have to manually provision the volumes you use before spinning up a pod, and as you start to create more and more volumes, you may want to deal with Persistent Volumes to have a process that automatically creates the volumes for you. My next post will detail how you can use these FlexVolumes in a custom provisioner which resembles the persistent volumes in AWS and GCE!