I was at cfgmgmtcamp 2019 in Ghent, where I gave a talk (which I think was well received) about the need for Kubernetes configuration management, as well as the solution we built for it at $work: kr8.

I made a statement during the talk which ignited some fairly fierce discussion, both online and at the conference:

To put this into my own words:

At some point, we decided it was okay for us to template yaml. When did this happen? How is this acceptable?

After some conversation, I figured it was probably best to back up my claims in some way. This blog post is going to try to do that.

The configuration problem

Once the applications and infrastructure you’re managing grow past a certain size, you inevitably end up in some form of configuration complexity hell. If you’re only deploying one or maybe two things, you can write a yaml configuration file and be done with it. However, once you grow beyond that, you need to figure out how to manage the complexity. It’s incredibly likely that the reason you have multiple configuration files is that the $thing that uses each config is slightly different from its companions. Examples of this include:

  • Applications deployed in different environments, like dev, stg and prod
  • Applications deployed in different regions, like Europe or North America

Obviously, not all the configuration is different here, but it’s likely the configuration differs enough that you want to be able to differentiate between the two.

This configuration complexity has been well known to Operators (System Administrators, DevOps engineers, whatever you want to call them) for some years now. An entire discipline, Configuration Management, grew up around it, and each tool solved the problem in its own way - but ultimately, they used YAML to get the job done.

My favourite method has always been hiera, which comes bundled with Puppet. Being able to look up configuration values hierarchically, from most specific to least specific, is incredibly powerful and flexible, and has generally meant you don’t need to template yaml at all, except perhaps to embed Puppet facts into it.
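
To make the idea concrete, here’s a rough sketch of what a hierarchical lookup does, written as plain Python rather than hiera itself (the hierarchy levels and keys are invented for illustration):

# Toy illustration of hierarchical lookup (the idea behind hiera), not hiera itself.
# The hierarchy levels and keys are made up for this example.
hierarchy = [
    {'ntp_server': 'ntp.prod.example.com'},                  # most specific: node/environment data
    {'ntp_server': 'ntp.example.com', 'log_level': 'info'},  # region defaults
    {'log_level': 'warn', 'timezone': 'UTC'},                # least specific: common defaults
]

def lookup(key, levels):
    # return the value from the most specific level that defines the key
    for level in levels:
        if key in level:
            return level[key]
    raise KeyError(key)

print(lookup('ntp_server', hierarchy))  # -> ntp.prod.example.com
print(lookup('timezone', hierarchy))    # -> UTC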

Did we go backwards?

Then, as our industry’s needs moved above the operating system and into cloud computing, we had a whole new data plane to configure. The tooling changed, and tools like CloudFormation and Helm appeared. These are excellent configuration tools, but I firmly believe we (as an industry) got something really, really wrong when we designed them. To examine that, let’s take a look at an example of a Helm chart taking a custom parameter.

Helm Charts

Helm charts can take external parameters defined in a values.yaml file, which you specify when rendering the chart.

Let’s say my external parameter is simple - it’s just a string. In the chart template, it’d look a bit like this:

image: "{{ .Values.image }}"

That’s not so bad, right? You just specify a value for image in your values.yaml and you’re on your way.

The real problem shows up when you want to do more complicated things. In this particular example you’re doing okay, because you know you have to specify an image for a Kubernetes deployment. However, what if you’re working with something like an optional field? Well, then it gets a little more unwieldy:

{{- with .resourceGroup  }}
    resourceGroup: {{ .  }}
{{- end }}

Optional values are just awkward in templating languages: you can’t simply leave the value blank, so you have to resort to ugly loops and conditionals that are probably going to bite you later.

Let’s say you need to go a step further, and you need to push an array or map into the config. With Helm, you’d do something like this:

{{- with .Values.podAnnotations  }}
      annotations:
{{ toYaml . | indent 8  }}
{{- end  }}

Firstly, let’s ignore the madness of having a templating function toYaml to convert yaml to yaml and focus more on the whitespace issue here.

YAML has strict indentation and whitespace rules. The following, for example, is not valid yaml:

something: nothing
  hello: goodbye
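
You don’t have to take my word for it; any YAML parser will reject that snippet. A quick sketch with PyYAML (assuming it’s installed):

import yaml

broken = """\
something: nothing
  hello: goodbye
"""

try:
    yaml.safe_load(broken)
except yaml.YAMLError as err:
    # PyYAML rejects the stray indentation with something like
    # "mapping values are not allowed here" at line 2
    print('not valid YAML:', err)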

Generally, if you’re handwriting something, this isn’t necessarily a problem because you just hit backspace twice and it’s fixed. However, if you’re generating YAML using a templating system, you can’t do that - and if you’re operating above 5 or 10 configuration files, you probably want to be generating your config rather than writing it.

In the above example, you want to embed the values of .Values.podAnnotations under the annotations field, which is already indented. So you not only have to indent your values, you have to indent them to exactly the right depth.

What makes this even more confusing is that the Go template parser doesn’t actually know anything about YAML at all, so if you try to keep the syntax clean and indent the templates like this:

{{- with .Values.podAnnotations }}
      annotations:
      {{ toYaml . | indent 6 }}
{{- end  }}

You can’t actually do that, because the templating system gets confused. This is a single example of the complexity and difficulty you end up facing when generating config data in YAML, but once you start doing more complex work, it becomes obvious that this isn’t the way to go.

Needless to say, this isn’t what I want to spend my time doing. If fiddling around with whitespace requirements in a templating system doing something it’s not really designed for is what suits you, then I’m not going to stop you. I also don’t want to spend my time writing configuration in JSON without comments and accidentally missing commas all over the shop. We (as an industry) decided a long time ago that shit wasn’t going to work and that’s why YAML exists.

So what should we do instead? That’s where jsonnet comes in.

JSON, Jsonnet & YAML

Before we actually talk about Jsonnet, it’s worth reminding people of a very important (but oft-forgotten) point: YAML is a superset of JSON, and converting between the two is trivial. Many applications and programming languages will parse JSON and YAML natively, and many can convert between the two very simply. For example, in Python:

python -c 'import json, sys, yaml ; y=yaml.safe_load(sys.stdin.read()) ; print(json.dumps(y))'
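
The same trick works in the other direction too; a small sketch of a full round trip (again assuming PyYAML is installed):

import json
import yaml

doc = yaml.safe_load('image: nginx\nreplicas: 3\n')  # YAML in
as_json = json.dumps(doc)                            # JSON out
print(as_json)
print(yaml.safe_dump(json.loads(as_json)))           # and back to YAML again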

So with that in mind, let’s talk about Jsonnet.

Welcome to the church of Jsonnet

Jsonnet is a relatively new, little-known (outside the Kubernetes community?) language that calls itself a data templating language. It’s definitely a good exercise to read the Jsonnet design rationale page to get an idea of why it exists, but if I had to define its purpose in a nutshell: it generates JSON config.

So, how does it help, exactly?

Well, let’s take our earlier example - we want to generate some JSON config specifying a parameter (ie, the image string). We can do that very very easily with Jsonnet using external variables.

Firstly, let’s define some Jsonnet:

{

  image: std.extVar('image'),

}

Then, we can generate it using the Jsonnet command line tool, passing in the external variable as we need to:

jsonnet image.jsonnet -V image="my-image"
{
   "image": "my-image"
}

Easy!

Optional fields

Earlier, I noted that with YAML templating, defining an optional field means wrapping things in conditionals. With Jsonnet, you’re just writing code!

// define a variable - yes, jsonnet also has comments
local rg = null;
{


  image: std.extVar('image'),
  // if the variable is null, this will be blank
  [if rg != null then 'resourceGroup']: rg,

}

Because our variable is null, resourceGroup never actually makes it into the output (if you set rg to a value, the field would appear):

jsonnet image.jsonnet -V image="my-image" 
{
   "image": "my-image"
}

Maps and parameters

Okay, now let’s look at our previous annotation example. We want to define some pod annotations, which take a YAML map as input. You want this map to be configurable via external data, and doing that on the command line sucks (you’d be very unlikely to specify this with Helm on the command line, for example), so generally you’d use Jsonnet imports to do this. I’m going to specify this config as a variable and then load that variable into the annotations:

local annotations = {
  'nginx.ingress.kubernetes.io/app-root': '/',
  'nginx.ingress.kubernetes.io/enable-cors': true,
};

{
  metadata: { // annotations are nested under the metadata of a pod
    annotations: annotations,
  },

}

This might just be my bias towards Jsonnet talking, but this is so dramatically easier than faffing about with indentation that I can’t even begin to describe it.

Additional goodies

The final thing I wanted to quickly explore, which is something that I feel can’t really be done with Helm and other yaml templating tools, is the concept of manipulating existing objects in config.

Let’s take our example above with the annotations, and look at the result file:

{
   "metadata": {
      "annotations": {
         "nginx.ingress.kubernetes.io/app-root": "/",
         "nginx.ingress.kubernetes.io/enable-cors": true
      }
   }
}

Now, let’s say for example I wanted to append a set of annotations to this annotations map. In any templating system, I’d probably have to rewrite the whole map.

Jsonnet makes this trivial. I can simply use the + operator to add something to this. Here’s a (poor) example:

local annotations = {
  'nginx.ingress.kubernetes.io/app-root': '/',
  'nginx.ingress.kubernetes.io/enable-cors': true,
};

{
  metadata: {
    annotations: annotations,
  },
} + { // this adds another JSON object
  metadata+: { // I'm using the + operator, so we'll append to the existing metadata
    annotations+: { // same as above
      something: 'nothing',
    },
  },
}

The end result is this:

{
   "metadata": {
      "annotations": {
         "nginx.ingress.kubernetes.io/app-root": "/",
         "nginx.ingress.kubernetes.io/enable-cors": true,
         "something": "nothing"
      }
   }
}

Obviously, in this case it’s more code, but as your examples get more complex, it becomes extremely useful to be able to manipulate objects this way.
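
If it helps to see the idea outside Jsonnet, the +: behaviour is essentially a recursive merge. Here’s a rough Python sketch of the concept (an analogy only, not how Jsonnet is implemented):

def deep_merge(base, patch):
    # recursively merge patch into base - roughly what Jsonnet's +: does for objects
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {'metadata': {'annotations': {'nginx.ingress.kubernetes.io/app-root': '/'}}}
patch = {'metadata': {'annotations': {'something': 'nothing'}}}
print(deep_merge(base, patch))
# {'metadata': {'annotations': {'nginx.ingress.kubernetes.io/app-root': '/', 'something': 'nothing'}}}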

Kr8

We use all of these methods in kr8 to make creating and manipulating configuration for multiple Kubernetes clusters simple. I highly recommend you check it out if any of the concepts here have you nodding your head.


TL;DR: go here

I often spend time in my day job wishing I could implement $newtech. I’m lucky enough to be working on projects right now that many people would find exciting, interesting and challenging; however, it’s often the case that I see something I’d like to try, but deploying it at $dayjob requires designing for large scale, with security and compliance in mind.

When this happens, I generally try it out in my “homelab”. This might mean trying it in a cloud account (I’m particularly fond of DigitalOcean for this), but I also recently reinvested (I moved to another country last year, and had to sell my previous homelab equipment) in a very small homelab consisting of 3 mini PCs and a Dell T30 server, along with some UniFi networking gear.

My original intention was to blog about the journey, but I realised this might end up being more time consuming than I’d like, so with that in mind I decided that perhaps the best way to contribute knowledge back to the community was via Github.

I’ve created a new Github Org, lbrlabs to hold all this configuration. Alongside this, I’ve created project boards which will detail my journey as I build out the software in my homelab.

Currently, the Org consists of 3 repos:

  • tf-kubernetes-clusters - a repo containing simple terraform code for Kubernetes clusters for a wide variety of cloud providers. The intention here is to make launching a cluster easy and straightforward for testing purposes
  • puppet-homelab - a Puppet control repo containing roles and profiles for my homelab. This could be used as a starting point for anyone wishing to build out a homelab; I’d encourage forking it and tailoring it to your needs
  • kr8-cluster-config - a repo containing configuration for kr8, which allows me to quickly and easily install components inside the Kubernetes clusters I build. As an example, I have components like metallb, which gives me Kubernetes LoadBalancers.

Some of the other tooling I’ve implemented includes:

In the near future, I plan on implementing other tech like:

  • Vault for secret management
  • Prometheus
  • eyaml encryption in Puppet

My hope is that doing this in the open can help other homelabbers learn about enterprise software, specifically DevOps related projects.

I encourage people to open issues in the repos, asking questions about how to implement things. Hopefully this can be my way to give back to the community.


Previous visitors to this blog will remember I wrote about configuration management for Kubernetes clusters, and how the space was lacking. For those not familiar, the problem statement is this: it’s really hard to maintain and manage configuration for components of multiple Kubernetes clusters. As the number of clusters grows, it becomes difficult to keep the things you need to run in them (such as ingress controllers) configured and in sync, and to manage the subtle differences that are needed across accounts and regions.

With that in mind, it’s my pleasure to announce that at my employer, Apptio, we have tried to solve this problem with kr8. kr8 is an opinionated Kubernetes cluster configuration management tool, designed to be simple and flexible, and to use off-the-shelf tools where possible. This blog post details some of the design goals of kr8, as well as some of its benefits and a few examples.

Design Goals

The intention when making kr8 was to allow us to generate manifests for a variety of Kubernetes clusters, and give us the ability to template and override yaml parameters where possible. We took inspiration from a variety of different tools such as Kustomize, Kasane, Ksonnet and many others on our journey to creating a configuration management framework that is relatively simple to use, and follows some of the practices we’re used to as Puppet administrators.

Other design goals included:

  • No templating engine
  • Flexibility
  • Compatibility with existing deployment tools, like Helm
  • Small binaries, with off the shelf tools used where needed

The end goal was to be able to take existing helm charts, or yaml installation manifests, then manipulate them to our needs. We chose to use jsonnet as the language of kr8 due to its data templating capabilities and ability to work with both JSON and YAML.

Terminology & Tools

kr8

kr8 itself is the only component of the kr8 framework that we wrote at Apptio. Its purposes are:

You can see the result of these purposes using a few of the tools in the kr8 binary, for example, listing clusters:

kr8 cluster list
+--------------------+--------------------------------------------------------------------+
|        NAME        |                                PATH                                |
+--------------------+--------------------------------------------------------------------+
| do                 | /Users/Lee/github/cluster_config/clusters/do                       |
| gcloud             | /Users/Lee/github/cluster_config/clusters/gcloud                   |
| kops               | /Users/Lee/github/cluster_config/clusters/kops                     |
| docker-for-desktop | /Users/Lee/github/cluster_config/clusters/local/docker-for-desktop |
| minikube           | /Users/Lee/github/cluster_config/clusters/local/minikube           |
+--------------------+--------------------------------------------------------------------+

However, using the kr8 binary alone is probably not what you want to do. We bundle and use a variety of other tools with kr8 to achieve the ability to generate manifests for multiple clusters and deploy them.

Task

Task does a lot of the heavy lifting for kr8. It is a task runner, much like Make, but with a more flexible DSL (yep, it’s yaml/json!) and the ability to run tasks in parallel. We use Taskfiles for each component to build the component config, which gives us the flexibility to use whichever rendering option makes sense for each component, whether that’s pulling in a Helm chart or plain yaml. We can then feed that yaml into kr8 and manipulate it with jsonnet code to add to or modify the resulting Kubernetes manifest. Alongside this, we use a Taskfile to generate deployment tasks and to generate all components in a single Task, which lets us execute lots of manifest-generation jobs in a relatively short period of time.

Kubecfg

We use Kubecfg to do the actual deployment of these manifests. Kubecfg gives us the ability to validate, diff and iteratively deploy Kubernetes manifests, which kubectl does not. The kubecfg jobs generally live inside Taskfiles at the cluster level.

Components

Components are very similar to helm charts. They are installable resource collections for something you’d like to deploy to your Kubernetes clusters. A component has 3 minimal requirements:

  • params.jsonnet: contains configurable params for the component
  • Taskfile.yml: Instructions to render the component
  • An installation source: This can be anything from a pure jsonnet file to a helm input values file. Ultimately, this needs to be able to generate some yaml

I intend to write many more kr8 blog posts and docs, detailing how kr8 can work, examples of different components and tutorials. Until then, take a look at these resources:

Thanks

I want to thank Colin Spargo, who came up with the original concept of kr8 and how it might work, as well as contributing large amounts of the kr8 code. I also want to thank Sanyu Melwani, who had valuable input into the concept, as well as writing many kr8 components.

Finally, a thank you to our employer, Apptio, who has allowed us to spend time creating this tool to ease our Kubernetes deployment frustrations. If you’re interested in working on fun projects like this, we are hiring for remote team members.


Serverless computing is all the rage at the moment, and why wouldn’t it be? The idea of deploying code without having to worry about anything like servers, or that pesky infrastructure everyone complains about, seems pretty appealing. If you’ve ever used AWS Lambda or one of its related cousins, you’ll be able to see the freedom that triggering functions on events brings you.

The increase in excitement around serverless frameworks means that, naturally, there’s been an increase in providers in the Kubernetes world. A quick look at the CNCF Landscape page shows just how many options are available to Kubernetes cluster operators.

In this post I wanted to look at Kubeless, a serverless framework written by the awesome people at Bitnami.

Kubeless appealed to me specifically for a few reasons:

  • Native Kubernetes resources (CRDs) for functions, meaning that standard Kubernetes deployment constructs can be used
  • No external dependencies to get started
  • Support for PubSub functions without having to manually bind to message queues, etc.
  • Lots of language support with the runtimes

Before you get started with this, you’ll probably want to follow the Quick Start as well as get an understanding of how the PubSub functions work.

To follow along here you’ll need:

  • A working kubeless deployment, including the kubeless cli
  • A working NATS cluster, perhaps using the NATS Operator.
  • You’ll also need the Kubeless NATS trigger installed in your cluster.

In this walkthrough, I wanted to show you how easy it is to get Kubernetes events (in this case, pod creations) and then use kubeless to perform actions on them (like post to a slack channel).

I’m aware there are tools out there that already fulfill this function (ie, events to Slack), but I figured it was a good showcase of what can be done!

Publishing Kubernetes Events

Before you can trigger kubeless functions, you first need to have events from Kubernetes published to your NATS cluster.

To do this, I used the excellent Kubernetes Python client library.

An easy way to get started is to connect to the API and watch all the pods, like so (the script below uses your local kubeconfig; once deployed, it would use the in_cluster capabilities instead):

from kubernetes import client, config, watch

def main():
  # use your machine's kubeconfig
  config.load_kube_config()

  v1 = client.CoreV1Api()

  w = watch.Watch()
  for event in w.stream(v1.list_pod_for_all_namespaces):
    print("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))


main()

This simple script will log all the information for pods in all namespaces to stdout. It can be run on your local machine, give it a try!

The problem with this is that it just spits information to stdout for local testing, so we need to publish these events to NATS. To do this, we’ll use the Python asyncio-nats library.

Now, your script has gotten much more complicated:

import asyncio
import json
import os

from kubernetes import client, config, watch

from nats.aio.client import Client as NATS
from nats.aio.errors import ErrConnectionClosed, ErrTimeout, ErrNoServers

# connect to the cluster defined in $KUBECONFIG
config.load_kube_config()

# set up the kubernetes core API
v1 = client.CoreV1Api()

# define a method that runs async
async def run(loop):
    # set up NATS connection
    nc = NATS()
    try:
        # connect to a NATS cluster on localhost
        await nc.connect('localhost:4222', loop=loop)
    except Exception as e:
        exit(e)

    # method to get pod events async
    async def get_pod_events():
        w = watch.Watch()
        for event in w.stream(v1.list_pod_for_all_namespaces):
            print("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
            msg = {'type':event['type'],'object':event['raw_object']}

            # publish the events to a NATS topic k8s_events
            await nc.publish("k8s_events", json.dumps(msg).encode('utf-8'))
            await asyncio.sleep(0.1)

    # run the method
    await get_pod_events()
    await nc.close()

if __name__ == '__main__':

    # set up the async loop
    loop = asyncio.get_event_loop()
    loop.create_task(run(loop))
    try:
        # run until killed
        loop.run_forever()
        print("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))    
    # if killed by keyboard shutdown, clean up nicely
    except KeyboardInterrupt:
        print('keyboard shutdown')
        tasks = asyncio.gather(*asyncio.Task.all_tasks(loop=loop), loop=loop, return_exceptions=True)
        tasks.add_done_callback(lambda t: loop.stop())
        tasks.cancel()

        # Keep the event loop running until it is either destroyed or all
        # tasks have really terminated
        while not tasks.done() and not loop.is_closed():
            loop.run_forever()
    finally:
        print('closing event loop')
        loop.close()

Okay, so now we have events being pushed to NATS. We need to fancy this up a bit, to allow for running both in and out of the cluster, and to build a Docker image. The final script can be found here. The changes include a logger module and argparse, so the script can run in or out of cluster and some options become configurable.
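
The in-or-out-of-cluster switch is the interesting part of that change; here’s a minimal sketch of the idea (the --in-cluster flag name is made up for illustration - check the repo for the real script):

import argparse

from kubernetes import config

parser = argparse.ArgumentParser()
# hypothetical flag for illustration; the real script's options may differ
parser.add_argument('--in-cluster', action='store_true',
                    help='use the pod service account instead of a local kubeconfig')
args = parser.parse_args()

if args.in_cluster:
    config.load_incluster_config()  # running inside the cluster
else:
    config.load_kube_config()       # running from your local machine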

You should now deploy this to your cluster using the provided deployment manifests, which also include the (rather permissive!) RBAC configuration needed for the deployment to be able to read pod information from the API.

kubectl apply -f https://raw.githubusercontent.com/jaxxstorm/kubeless-events-example/master/manifests/events.yaml

This will install the built docker container to publish events to the NATS cluster configured earlier. If you need to, modify the environment variable NATS_CLUSTER if you deployed your NATS cluster to another address.
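
If you want to sanity-check that events are actually landing in NATS before involving Kubeless, here’s a quick subscriber sketch using the same asyncio-nats client as the publisher (it assumes the cluster is reachable on localhost:4222, for example via a port-forward):

import asyncio

from nats.aio.client import Client as NATS

async def run(loop):
    nc = NATS()
    # mirror the connection settings used by the publisher script
    await nc.connect('localhost:4222', loop=loop)

    async def handler(msg):
        # print every message published on the k8s_events topic
        print(msg.subject, msg.data.decode())

    await nc.subscribe('k8s_events', cb=handler)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(run(loop))
    loop.run_forever()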

Consuming Events with Kubeless functions

Now that the events are being published, we need to actually do something with them. Let’s first make sure the events are coming in.

You should have the kubeless cli downloaded by now, so let’s create a quick example function to make sure the events are being posted.

def dump(event, context):
    print(event)
    return(event)

As you can probably tell, this function just dumps any event sent to it and returns. So let’s try it out. With kubeless, let’s deploy it:

kubeless function deploy test --runtime python3.6 --handler test.dump --from-file functions/test.py -n kubeless

What’s happening here, exactly?

  • runtime: specify a runtime for the function, in this case, python 3.6
  • from-file: path to your file containing your function
  • handler: this is the important part. A handler is the kubeless function to call when an event is received. It’s in the format <filename>.<functionname>. So in our case, our file was called test.py and our function was called dump, so we specify test.dump.
  • namespace: make sure you specify the namespace you deployed kubeless to!

So you should now have a function deployed:

kubeless function ls -n kubeless
NAME	NAMESPACE	HANDLER  	RUNTIME  	DEPENDENCIES	STATUS
test	kubeless 	test.dump	python3.6	            	1/1 READY

So now, we need to have this function be triggered by the NATS messages. To do that, we add a trigger:

kubeless trigger nats create test --function-selector created-by=kubeless,function=test --trigger-topic k8s_events -n kubeless

What’s going on here?

  • We create a trigger with name test
  • function-selector: use labels created by kubeless function to select the function to run
  • trigger-topic: specify a trigger topic of k8s_events (which is specified in the event publisher from earlier)
  • Same namespace!

Okay, so now, let’s cycle the event publisher and test things out!

# delete the current nats event pods
# the deployment will recreate them, and the restarted watcher will republish events for all pods
kubectl delete po -l app=kubeless-nats-events -n kubeless

# read the pod logs from the kubeless function we created
k logs -l function=test -n kubeless

You should see something like this as an output log:

{'data': {'type': 'MODIFIED', 'object': {'kind': 'Pod', 'apiVersion': 'v1', 'metadata': {'name': 'nats-trigger-controller-66df75894b-mxppc', 'generateName': 'nats-trigger-controller-66df75894b-', 'namespace': 'kubeless', 'selfLink': '/api/v1/namespaces/kubeless/pods/nats-trigger-controller-66df75894b-mxppc', 'uid': '01ead80a-d0ef-11e8-8e5f-42010aa80fc2', 'resourceVersion': '16046597', 'creationTimestamp': '2018-10-16T02:55:58Z', 'labels': {'kubeless': 'nats-trigger-controller', 'pod-template-hash': '2289314506'}, 'ownerReferences': [{'apiVersion': 'extensions/v1beta1', 'kind': 'ReplicaSet', 'name': 'nats-trigger-controller-66df75894b', 'uid': '01e65ec2-d0ef-11e8-8e5f-42010aa80fc2', 'controller': True, 'blockOwnerDeletion': True}]}, 'spec': {'volumes': [{'name': 'controller-acct-token-dwfj5', 'secret': {'secretName': 'controller-acct-token-dwfj5', 'defaultMode': 420}}], 'containers': [{'name': 'nats-trigger-controller', 'image': 'bitnami/nats-trigger-controller:v1.0.0-alpha.9', 'env': [{'name': 'KUBELESS_NAMESPACE', 'valueFrom': {'fieldRef': {'apiVersion': 'v1', 'fieldPath': 'metadata.namespace'}}}, {'name': 'KUBELESS_CONFIG', 'value': 'kubeless-config'}, {'name': 'NATS_URL', 'value': 'nats://nats-cluster.nats-io.svc.cluster.local:4222'}], 'resources': {}, 'volumeMounts': [{'name': 'controller-acct-token-dwfj5', 'readOnly': True, 'mountPath': '/var/run/secrets/kubernetes.io/serviceaccount'}], 'terminationMessagePath': '/dev/termination-log', 'terminationMessagePolicy': 'File', 'imagePullPolicy': 'IfNotPresent'}], 'restartPolicy': 'Always', 'terminationGracePeriodSeconds': 30, 'dnsPolicy': 'ClusterFirst', 'serviceAccountName': 'controller-acct', 'serviceAccount': 'controller-acct', 'nodeName': 'gke-lbr-default-pool-3071fe55-kfm6', 'securityContext': {}, 'schedulerName': 'default-scheduler', 'tolerations': [{'key': 'node.kubernetes.io/not-ready', 'operator': 'Exists', 'effect': 'NoExecute', 'tolerationSeconds': 300}, {'key': 'node.kubernetes.io/unreachable', 'operator': 'Exists', 'effect': 'NoExecute', 'tolerationSeconds': 300}]}, 'status': {'phase': 'Running', 'conditions': [{'type': 'Initialized', 'status': 'True', 'lastProbeTime': None, 'lastTransitionTime': '2018-10-16T02:55:58Z'}, {'type': 'Ready', 'status': 'True', 'lastProbeTime': None, 'lastTransitionTime': '2018-10-16T02:56:00Z'}, {'type': 'PodScheduled', 'status': 'True', 'lastProbeTime': None, 'lastTransitionTime': '2018-10-16T02:55:58Z'}], 'hostIP': '10.168.0.5', 'podIP': '10.36.14.29', 'startTime': '2018-10-16T02:55:58Z', 'containerStatuses': [{'name': 'nats-trigger-controller', 'state': {'running': {'startedAt': '2018-10-16T02:55:59Z'}}, 'lastState': {}, 'ready': True, 'restartCount': 0, 'image': 'bitnami/nats-trigger-controller:v1.0.0-alpha.9', 'imageID': 'docker-pullable://bitnami/[email protected]:d04c8141d12838732bdb33b2890eb00e5639c243e33d4ebdad34d2e065c54357', 'containerID': 'docker://d9ff52fa7e5d821c12fd910e85776e2a422098cde37a7f2c8054e7457f41aa1e'}], 'qosClass': 'BestEffort'}}}, 'event-id': 'YMAok3se6ZQyMTM', 'event-type': 'application/json', 'event-time': '2018-10-16 02:56:00.738120124 +0000 UTC', 'event-namespace': 'natstriggers.kubeless.io', 'extensions': {'request': <LocalRequest: POST http://test.kubeless.svc.cluster.local:8080/>}}

This is the modification event for the pod you just cycled. Awesome!

Publish the events to Slack

Okay, so now you’ve got some events being shipped, it’s time to get a little more creative. Let’s publish some of these events to Slack.

You can create a slack.py with your function in, like so:

#!/usr/bin/env python
from slackclient import SlackClient
import os

def slack_message(event, context):
    try:
        # Grab the slack token and channel from the environment vars
        slack_token = os.environ.get('SLACK_TOKEN', None)
        slack_channel = os.environ.get('SLACK_CHANNEL', None)
        if not slack_token:
            msg = 'No slack token set, can\'t do anything'
            print(msg)
            return msg
        else:
            # post a slack message!
            sc = SlackClient(slack_token)
            sc.api_call(
                "chat.postMessage",
                channel=slack_channel,
                attachments=[ { "title": event['data']['type']+" Pod event", "text": str(event)  } ]
            )

            print(event)

    except Exception as inst:
        print(type(inst))
        print(inst)
        print(event.keys())
        print(type(event))
        print(str(event))

You’ll need to deploy your function using the kubeless binary:

export SLACK_TOKEN=<my_slack_token>
export SLACK_CHANNEL=<my_slack_channel>

kubeless function deploy slack --runtime python3.6 --handler slack.slack_message --from-file slack.py -n kubeless --env SLACK_TOKEN=$SLACK_TOKEN --env SLACK_CHANNEL=$SLACK_CHANNEL --dependencies requirements.txt

The only thing you might be confused about here is the --dependencies file. Kubeless uses this to determine which dependencies to install for the function runtime; in the Python case, it’s a requirements.txt. You can find a working one in the related github repo linked to this post. That version also formats the events into nicer Slack output, so it’s worth taking a look at.

You’ll obviously need a Slack org to try this out, and you’ll need to generate a Slack token for API access. Now, once you cycle the events pod again (or run another pod, of course!) you’ll see these events pushed to Slack!

(Screenshot: Kubernetes pod events posted to a Slack channel)

Wrap up

Obviously this is a trivial example of using these functions, but the power of the event pipeline with kubeless is there to be seen. Anything you might need to happen when certain events occur in your Kubernetes cluster can be automated using this Kubeless event pipeline.

You can check out all the code, deployment and manifests for this post in the github repo that accompanies this post. Pull requests and feedback on my awful Python code are also welcome!


A few months back, I wrote an article, which got a bit of interest, about the issues of configuring and maintaining multiple clusters and keeping the components required to make them useful in sync. Essentially, the missing piece of the puzzle was that there was no cluster-aware configuration management tool.

Internally, we created an excellent tool at $work to solve this using jsonnet, which has been very nice because it means we get to use actual code to solve the problem. The only issue is that the language we have to write in is relatively niche!

When Pulumi was announced back in June, I got very excited. After seeing that Pulumi now supports Kubernetes natively, I wanted to dig in and see if it would help with the configuration management problem.

Introductory Concepts

Before I speak about the Kubernetes-specific components, I want to give a brief introduction to what Pulumi actually is.

Pulumi is a tool which allows you to create cloud resources using real programming languages. In my personal opinion, it is essentially terraform with full programming languages on top of it, instead of HCL.

Having real programming languages means you can make the resources you configure as flexible or complex as you like. Each provider (AWS, GCE, Azure and Kubernetes) has support for different languages. Currently, the AWS provider has support for Go, Python and Javascript/Typescript whereas the Kubernetes provider has support only for Javascript/Typescript at the moment.

Stacks

Pulumi uses the concept of a stack to segregate things like environments or feature branches. For example, you may have a stack for your development environment and one for your production environment.

These stacks store their state in a similar fashion to terraform. You can either store the state in:

  • The public pulumi web backend
  • locally on the filesystem

There is also an enterprise hosted stack storage provider. Hopefully it’ll be possible to have an open source stack storage some time soon.

Components

Components allow you to share reusable modules, in a similar manner to terraform modules. This means you can write reusable code and create boilerplate for regularly reused resources, like S3 buckets.

Configuration

Finally, there’s configuration ability within Pulumi stacks and components. This allows you to differentiate configuration in your code depending on the stack you’re using. You specify these configuration values like so:

pulumi config set name my_name

And then you can reference that within your code!

Kubernetes Configuration Management

If you remember back to my previous post, the issue I was trying to solve was installing components that every cluster needs (for example, an ingress controller) to all the clusters, but often with differing configuration values (for example, a certificate ARN in AWS ACM). Helm helps with this in that it allows you to specify values when installing, but then managing, maintaining and storing those values for each cluster becomes difficult, and applying them does too.

Pulumi

There are two main ways Pulumi helps here. Firstly, it allows you to differentiate Kubernetes clusters using stacks. As an example, let’s say I have two clusters - one in GKE and one in Digital Ocean. Here you can see them in my kubernetes contexts:

kubectx
digitalocean
[email protected]

I can initiate a stack for each of these clusters with Pulumi, like so:

# create a Pulumi.yaml first!
cat <<EOF > Pulumi.yaml
name: nginx
runtime: nodejs
description: An example stack for Pulumi
EOF
pulumi stack init gcloud
pulumi config set kubernetes:context [email protected]

Obviously, you’d repeat the stack and config steps for each cluster!

Now, if I want to actually deploy something to the stack, I need to write some code. If you’re not familiar with typescript (I wasn’t, until I wrote this), you’ll need to generate a package.json and a tsconfig.json.

Fortunately, Pulumi automates this for us!

pulumi new --generate-only --force

If you’re doing a Kubernetes thing, you’ll need to select kubernetes-typescript from the template prompt.

Now we’re ready to finally write some code!

Deploying a standard Kubernetes resource

When you ran pulumi new, you got an index.ts file. In it, the Pulumi Kubernetes package is imported:

import * as k8s from "@pulumi/kubernetes";

You can now write standard typescript and generate Kubernetes resources. Here’s an example nginx pod as a deployment:

import * as k8s from "@pulumi/kubernetes";

// set some defaults
const defaults = {
  name: "nginx",
  namespace: "default",
  labels: {app: "nginx"},
  serviceSelector: {app: "nginx"},
};

// create the deployment
const nginxDeployment = new k8s.apps.v1.Deployment(
  defaults.name,
  {
    metadata: {
      namespace: defaults.namespace,
      name: defaults.name,
      labels: defaults.labels
    },
    spec: {
      replicas: 1,
      selector: {
        matchLabels: defaults.labels,
      },
      template: {
        metadata: {
          labels: defaults.labels
        },
        spec: {
          containers: [
            {
              name: defaults.name,
              image: `nginx:1.7.9`,
              ports: [
                {
                  name: "http",
                  containerPort: 80,
                },
                {
                  name: "https",
                  containerPort: 443,
                }
              ],
            }
          ],
        },
      },
    }
  });

Here you can already see the power that writing true code has - defining a constant for defaults and allowing us to use those values in the declaration of the resource means less duplicated code and less copy/pasting.

The real power comes when using the config options we set earlier. Assuming we have two Pulumi stacks, gcloud and digitalocean:

pulumi stack ls
NAME                                             LAST UPDATE              RESOURCE COUNT
digitalocean*                                    n/a                      n/a
gcloud                                           n/a                      n/a

and these stacks are mapped to different contexts, like so:

pulumi config -s gcloud
KEY                                              VALUE
kubernetes:context                               [email protected]
pulumi config -s digitalocean
KEY                                              VALUE
kubernetes:context                               digitalocean

you can now also set different configuration options and keys which can be used in the code.

pulumi config set imageTag 1.14-alpine -s gcloud
pulumi config set imageTag 1.15-alpine -s digitalocean

This will write out these values into a Pulumi.<stackname>.yaml in the project directory:

cat Pulumi.digitalocean.yaml
config:
  kubernetes:context: digitalocean
  pulumi-example:imageTag: 1.15-alpine

and we can now use this in the code very easily:

import * as pulumi from "@pulumi/pulumi"; // needed for pulumi.Config, alongside the k8s import from earlier

let config = new pulumi.Config("pulumi-example"); // use the name field in the Pulumi.yaml here
let imageTag = config.require("imageTag");

// this is part of the pod spec, you'll need the rest of the code too!
image: `nginx:${imageTag}`,

Now, use Pulumi from your project’s root to see what would happen:

pulumi up -s gcloud
Previewing update of stack 'gcloud'
Previewing changes:

     Type                           Name                   Plan       Info
 +   pulumi:pulumi:Stack            pulumi-example-gcloud  create
 +   └─ kubernetes:apps:Deployment  nginx                  create

info: 2 changes previewed:
    + 2 resources to create

Do you want to perform this update? yes
Updating stack 'gcloud'
Performing changes:

     Type                           Name                   Status      Info
 +   pulumi:pulumi:Stack            pulumi-example-gcloud  created
 +   └─ kubernetes:apps:Deployment  nginx                  created

info: 2 changes performed:
    + 2 resources created
Update duration: 17.704161467s

Permalink: file:///Users/Lee/.pulumi/stacks/gcloud.json

Obviously you can specify whichever stack you need as required!

Let’s verify what happened…

kubectx digitalocean # switch to DO context
kubectl get deployment -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES              SELECTOR
nginx     1         1         1            1           1m        nginx        nginx:1.15-alpine   app=nginx

kubectx [email protected] # let's look now at gcloud context
Switched to context "[email protected]".
k get deployment -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES              SELECTOR
nginx     1         1         1            1           3m        nginx        nginx:1.14-alpine   app=nginx

Okay, so as you can see, I’ve deployed an nginx deployment to two different clusters, with two different images, with very little effort and energy. Awesome!

Going further - Helm

What makes this really really awesome is that Pulumi already supports Helm charts!

In my previous post, I made the comment that Helm has lots of community-supported charts which have done a whole load of configuration for you. However, Helm suffers from 2 main problems (in my humble opinion):

  • Templating yaml with Go templates is extremely painful when doing complex tasks
  • The helm chart community can be slow to merge pull requests. This means if you find an issue with a chart, you unfortunately have to fork it, and host it yourself.

Pulumi really helps and solves this problem. Let me show you how!

First, let’s create a Pulumi component using a helm chart:

import * as k8s from "@pulumi/kubernetes";

const redis = new k8s.helm.v2.Chart("redis", {
    repo: "stable",
    chart: "redis",
    version: "3.10.0",
    values: {
        usePassword: true,
        rbac: { create: true },
    }
});

And now preview what would happen on this stack:

pulumi preview -s digitalocean
Previewing update of stack 'digitalocean'
Previewing changes:

     Type                                                    Name                              Plan       Info
 +   pulumi:pulumi:Stack                                     pulumi-helm-example-digitalocean  create
 +   └─ kubernetes:helm.sh:Chart                             redis                             create
 +      ├─ kubernetes:core:Secret                            redis                             create
 +      ├─ kubernetes:rbac.authorization.k8s.io:RoleBinding  redis                             create
 +      ├─ kubernetes:core:Service                           redis-slave                       create
 +      ├─ kubernetes:core:Service                           redis-master                      create
 +      ├─ kubernetes:core:ConfigMap                         redis-health                      create
 +      ├─ kubernetes:apps:StatefulSet                       redis-master                      create
 +      └─ kubernetes:extensions:Deployment                  redis-slave                       create

info: 9 changes previewed:
    + 9 resources to create

As you can see, we’re generating the resources automatically because Pulumi renders the helm chart for us, and then creates the resources, which really is very awesome.

However, it gets more awesome when you see there’s a callback called transformations. This allows you to patch and manipulate the generated resources! For example:

import * as k8s from "@pulumi/kubernetes";

const redis = new k8s.helm.v2.Chart("redis", {
    repo: "stable",
    chart: "redis",
    version: "3.10.0",
    values: {
        usePassword: true,
        rbac: { create: true },
    },
    transformations: [
        // Make every service private to the cluster, i.e., turn all services into ClusterIP instead of
        // LoadBalancer.
        (obj: any) => {
            if (obj.kind == "Service" && obj.apiVersion == "v1") {
                if (obj.spec && obj.spec.type && obj.spec.type == "LoadBalancer") {
                    obj.spec.type = "ClusterIP";
                }
            }
        }
    ]
});

Of course, you can combine this with the configuration options from before as well, so you can override these as needed.

I think it’s worth emphasising that this dramatically reduces the implementation time for any Kubernetes-deployed resource. Before, if you wanted to use something other than helm to deploy something, you had to write it from scratch. Pulumi’s ability to import and then manipulate rendered helm charts is a massive massive win for the operator and the community!

Wrap up

I think Pulumi is going to change the way we deploy things to Kubernetes, but it’s also definitely in the running to solve the configuration management problem. Writing resources in code is much much better than writing yaml or even jsonnet, and having the ability to be flexible with your deployments and manifests using the Pulumi concepts is really exciting.

I’ve put the code examples from the blog post in a github repo for people to look at and improve. I really hope people try out Pulumi!