At $work, we have several Kubernetes clusters across different geographical and AWS regions. The reasons range from customer requirements to our own desire to reduce the operational “blast radius” of issues that might come up. Our team has experienced large outages before, and we try to build the smallest unit of deployment we possibly can for our platform.
Unfortunately, this brings with it new challenges, especially when it comes to running Kubernetes clusters. I’ve spoken extensively about this on this blog before, particularly regarding configuration management needs and the overhead that scaling out to multiple clusters brings.
As these clusters have become more utilized by application teams, a new consideration has arisen. Deploying regional applications brings the same configuration complexity problems I’ve spoken about with infrastructure management, and essentially the needs boil down to the same words we’re familiar with: configuration management.
I set out to make the task of deploying applications to regional clusters as easy as possible for our teams, guided by the same philosophy (and frustrations) I had before. The requirements were a bit like this:
- no templating languages (no helm!)
- continuous deployment made easy
- easy for developers to grasp - low barrier to entry
- abstract as much of configuration complexity away as possible
What I came up with works very nicely, and uses largely off the shelf tooling that you can replicate very easily.
This post is the first in what I hope will be a two-part series covering the following topics:
- Generating regional/parameterised manifests using jkcfg
- Using Gitlab, Gitlab-CI, ArgoCD and GitOps to deploy to multiple clusters
Part 1: Generate your config
It’s the first step, but it’s also the hardest. How do you generate your YAML configuration for the different clusters?
Ksonnet uses jsonnet, which for us infrastructure people didn’t seem so bad (we used it in kr8), but there was very little desire among developers to actually learn this new and strange language for their application development needs. Luckily for me, it was deprecated just as I was trying to convince my developers otherwise, which meant I kept searching for other solutions.
Helm is the de facto standard for this kind of thing, but again, there were some confused questions when it came to the templating of YAML. I could sympathise with this, and at the time, Helm had some serious security problems with Tiller. Helm 3 has largely addressed these, but I still can’t bring myself to template YAML.
Let’s generate some manifests
The jkcfg repo has some excellent examples you can use, and getting started was generally pretty straightforward.
Download the jk binary
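As a rough sketch (the version and platform below are examples; grab the latest release for your OS from the jk releases page):

```shell
# Download a jk release binary (version/platform here are examples)
curl -Lo jk https://github.com/jkcfg/jk/releases/download/0.4.0/jk-linux-amd64
chmod +x jk
sudo mv jk /usr/local/bin/
```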
Okay, so we’re ready to generate some manifests. A simple deployment might look like this:
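Here’s a minimal sketch, assuming a hypothetical `myapp` service; `jk generate` writes each value in the default export to the named file:

```javascript
// index.js -- a minimal jkcfg script; 'myapp' and the image name are placeholders
const deployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'myapp', labels: { app: 'myapp' } },
  spec: {
    replicas: 3,
    selector: { matchLabels: { app: 'myapp' } },
    template: {
      metadata: { labels: { app: 'myapp' } },
      spec: {
        containers: [
          { name: 'myapp', image: 'myorg/myapp:latest', ports: [{ containerPort: 8080 }] },
        ],
      },
    },
  },
};

// jk generate writes this value out to manifests/myapp.yaml
export default [{ value: deployment, file: 'manifests/myapp.yaml' }];
```

Run `jk generate ./index.js` and you’ll get plain YAML out the other side.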
Okay, we have a nice deployment manifest now, but how does this help me with different regions?
jkcfg supports “parameters” which can be passed either via a command line argument, or a file. This is similar to Helm’s values files which are evaluated at compile time. Using values in jkcfg is very straightforward.
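A minimal sketch of reading a parameter with jkcfg’s `@jkcfg/std/param` module (the parameter name and default here are examples):

```javascript
import * as param from '@jkcfg/std/param';

// Read the 'replicas' parameter, falling back to 3 when it isn't set
const replicas = param.Number('replicas', 3);
```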
You can then use this value inside your deployment manifest. Here’s the end result:
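Something like this, keeping the hypothetical `myapp` names from before:

```javascript
// index.js -- the same deployment, now parameterised
import * as param from '@jkcfg/std/param';

// default to 3 replicas when no parameter is provided
const replicas = param.Number('replicas', 3);

const deployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'myapp', labels: { app: 'myapp' } },
  spec: {
    replicas,
    selector: { matchLabels: { app: 'myapp' } },
    template: {
      metadata: { labels: { app: 'myapp' } },
      spec: {
        containers: [
          { name: 'myapp', image: 'myorg/myapp:latest', ports: [{ containerPort: 8080 }] },
        ],
      },
    },
  },
};

export default [{ value: deployment, file: 'manifests/myapp.yaml' }];
```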
Once you’ve started using parameters, you probably don’t want the same number of replicas everywhere. You can set the parameters in two ways. The first, and easiest, is on the command line:
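For example, overriding the replica count at generate time (the script name is a placeholder):

```shell
# -p sets a single parameter for this run
jk generate -p replicas=5 ./index.js
```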
Check the manifest now in `manifests/myapp.yaml`: you’ll see we’ve set the replicas to 5!
The other way of overriding the parameters is with a parameters file, which can be YAML or JSON. Create a file called `params/myapp.yaml` and populate it like so:
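A sketch of the parameters file:

```yaml
# params/myapp.yaml
replicas: 5
```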
Then use it with jk like so:
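Something like the following, assuming the `-f` flag for parameter files (check `jk generate --help` on your version):

```shell
# Load parameters from a file instead of passing them individually
jk generate -f params/myapp.yaml ./index.js
```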
Abstract, abstract, abstract
As I went through this journey, it became apparent there was a lot of code reuse across different services. Most of the services we build are using the same frameworks, and need a lot of similar configuration.
For example, every service we deploy to Kubernetes needs 4 basic things:
- A deployment spec
  - With KIAM annotations
  - With security contexts
  - etc etc
- A service spec
- An ingress
- A configmap
Here’s the repo contents:
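A hypothetical layout for such a package (the file names here match the sketches that follow; your structure may differ):

```
.
├── package.json
├── index.js
├── kube.js
└── labels.js
```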
I’ll break down the `.js` files so we can get an idea of what this entails.
The main meat of the package is in `kube.js`. Let’s take a look at it:
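Here’s a sketch of what such a module can look like. The shape of the `service` object (`name`, `team`, `environment`, `image`, `version`, `port`, `replicas`, `iamRole`, `host`) is this example’s own convention, not anything jkcfg requires:

```javascript
// kube.js -- reusable manifest builders, driven entirely by a `service` object
import labels from './labels';

// Deployment with a KIAM annotation and a restrictive security context baked in
function deployment(svc) {
  return {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: svc.name, labels: labels(svc) },
    spec: {
      replicas: svc.replicas || 2,
      selector: { matchLabels: labels(svc) },
      template: {
        metadata: {
          labels: labels(svc),
          // KIAM reads this annotation to grant the pod an AWS IAM role
          annotations: { 'iam.amazonaws.com/role': svc.iamRole },
        },
        spec: {
          securityContext: { runAsNonRoot: true, runAsUser: 1000 },
          containers: [
            {
              name: svc.name,
              image: `${svc.image}:${svc.version}`,
              ports: [{ containerPort: svc.port }],
              // Runtime configuration comes from a ConfigMap this module
              // deliberately does not manage
              envFrom: [{ configMapRef: { name: svc.name } }],
            },
          ],
        },
      },
    },
  };
}

// ClusterIP service in front of the deployment
function service(svc) {
  return {
    apiVersion: 'v1',
    kind: 'Service',
    metadata: { name: svc.name, labels: labels(svc) },
    spec: {
      selector: labels(svc),
      ports: [{ port: 80, targetPort: svc.port }],
    },
  };
}

// Ingress routing the service's hostname to the service
function ingress(svc) {
  return {
    apiVersion: 'networking.k8s.io/v1beta1',
    kind: 'Ingress',
    metadata: { name: svc.name, labels: labels(svc) },
    spec: {
      rules: [
        {
          host: svc.host,
          http: {
            paths: [{ backend: { serviceName: svc.name, servicePort: 80 } }],
          },
        },
      ],
    },
  };
}

export { deployment, service, ingress };
```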
Obviously this is a lot more involved than the very simple deployment from earlier. What’s worth noting, though, is that it’s completely configurable via a `service` object in our jkcfg parameters. I’ve also gone for an approach where we load the runtime configuration from a ConfigMap, which is not managed by this module. That way, we can use a basic boilerplate module for most of the stuff we want to deploy, and we can have a pretty high degree of confidence that the deployments meet our standards.
This labels file is simply a way of ensuring we have the correct labels defined for all our resources. Here’s what it looks like:
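A minimal sketch (the label keys are this example’s own convention):

```javascript
// labels.js -- the standard label set for a service, exported as a function
// so every resource derives its labels from the same service object
export default function labels(svc) {
  return {
    app: svc.name,
    team: svc.team,
    environment: svc.environment,
  };
}
```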
Notice, we’re exporting this as a function. The “why” will become apparent later…
Finally, there’s `index.js`, where we export all of this to be used:
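A sketch, re-exporting the builders from the files above:

```javascript
// index.js -- re-export everything so consumers import a single module
export { deployment, service, ingress } from './kube';
export { default as labels } from './labels';
```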
We now have a very basic NPM package. We can push it to git or to the NPM registry and let people use it. So how do we actually use it?
Using the package
Using this is pretty straightforward. First, add it as a dependency:
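For example (the package name is a placeholder; substitute your registry scope or a git URL):

```shell
# Pull the shared manifest package into your service repo
npm install --save @myorg/kube
```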
Then import it to be used in your jkcfg script. Once we’ve imported it, the fun stuff starts. For your average service, you can generate the deployment, service and ingress with two files. First, pad out your `index.js` like so:
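A sketch of the consuming script (the `@myorg/kube` module name is a placeholder):

```javascript
// index.js in your service repo -- everything is driven by the `service` parameter
import * as param from '@jkcfg/std/param';
import { deployment, service, ingress } from '@myorg/kube';

const svc = param.Object('service', {});

export default [
  { value: deployment(svc), file: 'manifests/deployment.yaml' },
  { value: service(svc), file: 'manifests/service.yaml' },
  { value: ingress(svc), file: 'manifests/ingress.yaml' },
];
```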
Before we get too excited, we need to populate our params file:
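Something like this, with field names following the sketch above (all values are examples):

```yaml
# params/myapp.yaml
service:
  name: myapp
  team: platform
  environment: production
  image: myorg/myapp
  port: 8080
  replicas: 3
  iamRole: myapp-role
  host: myapp.example.com
```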
As you can see, the service object does most of the work for us, and we can tailor it per region or per environment as needed.
At this stage, we’re ready to generate our manifests again. Let’s use the command from before:
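Something like the following, assuming jk accepts dotted parameter paths on `-p` to merge a value into the `service` object (the version string is an example):

```shell
# Parameters from the file, plus a dynamic version set at generate time
jk generate -f params/myapp.yaml -p service.version=v1.2.3 ./index.js
```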
Notice how we specify the version as a parameter outside the params file, simply because we expect this to be a dynamic value. I’ll talk more about this in my next post.
This should have generated manifests for you in `complex/manifests`, and now you’re ready to do some deployments!
Add a ConfigMap
The final part of this is the configuration. We’ve so far tried to build a jkcfg package that is as agnostic as possible, and can be reused across many different services and deployments.
The reality is though, you’re still going to need to add configuration data for the service. We can utilise what we’ve got here and add a configmap to the equation which can be customised per-deployment very easily.
To wire this in, add a `config` parameter and a ConfigMap resource to your `index.js`. Your end result should look like this:
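A sketch of the finished script (module and field names are placeholders, as before):

```javascript
// index.js -- now with a per-deployment ConfigMap alongside the other resources
import * as param from '@jkcfg/std/param';
import { deployment, service, ingress } from '@myorg/kube';

const svc = param.Object('service', {});
const config = param.Object('config', {});

// Render the per-deployment configuration as a ConfigMap named after the service
const configMap = {
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: { name: svc.name },
  data: config,
};

export default [
  { value: deployment(svc), file: 'manifests/deployment.yaml' },
  { value: service(svc), file: 'manifests/service.yaml' },
  { value: ingress(svc), file: 'manifests/ingress.yaml' },
  { value: configMap, file: 'manifests/configmap.yaml' },
];
```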
This will now generate a ConfigMap, but it’ll be empty. You need to add a `config` object to your params file. It’ll end up looking like this:
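For example (the `config` keys and values here are placeholders):

```yaml
# params/myapp.yaml
service:
  name: myapp
  team: platform
  environment: production
  image: myorg/myapp
  port: 8080
  replicas: 3
  iamRole: myapp-role
  host: myapp.example.com
config:
  LOG_LEVEL: info
  FEATURE_FLAG: "true"
```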
And now we have a working deployment, with all the resources we might need to run a service.
You might be wondering at this point: this is a lot of work! Why not just use a Helm chart? My personal thoughts on why I prefer this way:
- Using Helm charts means developers have to learn a new pattern, specifically Go templates. With this method, they can use a programming language which they will undoubtedly feel more familiar with
This concludes part 1 of my series. In part 2, I’ll talk a little bit about using the CI pipeline to generate these configs and push them to a deploy repo (specifically with Gitlab-CI), and then talk a little bit about using ArgoCD to do deployments.
All the code from this post can be found in the GitHub repo if you want to take a look!
Stay tuned for the next post!