Keep your ship together with Kapitan

Alessandro De Maria · Kapitan Blog · Feb 16, 2020 · 8 min read


NEW: Katacoda scenario!

UPDATE: Generators have been ported from jsonnet to kadet (python)

Manage complexity with Kapitan

We open sourced Kapitan back in October 2017.

In the 12 months before that date, Kapitan helped us revolutionise the way we were running things at DeepMind Health, and allowed us to keep control over many heterogeneous systems: Kubernetes, Terraform, documentation, scripts, playbooks, Grafana dashboards, Prometheus rules. Everything was managed from a single source of truth.

I am not afraid to say it out loud: there is nothing out there as versatile and powerful as Kapitan for managing the configuration of complex systems. There... I said it. Prove me wrong :)

Having a product so radically different from anything else out there obviously also meant that we had to learn and discover how to use it: patterns, best practices... We had to recognise them as they surfaced while refactoring the mess we made with our initial rollout. Fortunately, one of Kapitan’s strengths is making refactoring a joy, so we refactored, and refactored again, until we came up with a nice set of best practices.

What we didn’t do was make them available to others… until now. Spoiler!

Joining Synthace last year as Head of SRE also meant I had a chance to apply those best practices and approaches to a fresh new environment. The much faster iterations there let me test and improve them far more quickly than I could have before. The results were spectacular, allowing me to bring order to a place where manifests were generated by unmaintainable Go code, secrets were managed manually, and each Kubernetes cluster was a snowflake.

I successfully introduced Kapitan, but I was still working far too much with jsonnet, and needed a jsonnet file for each of the (almost identical) 20 microservices we had.

So I had a thought: what if I could replicate the full setup without touching any code at all? Introducing Kapitan generators.

Kapitan Generator Libraries

Diesel Ship Marine Generator, for Power, 230v

Today I will give you a preview of how we use Kapitan internally at Synthace.

I am also pleased to announce that we have released some of the internal kadet libraries (ported from jsonnet) we developed at Synthace as open source!

See: https://github.com/kapicorp/kapitan-reference

In particular, we will release:

  • [RELEASED] A kadet manifest generator library to quickly create Kubernetes “workload” manifests by simply defining them in the inventory. Get started with something as simple as:

    parameters:
      components:
        api-server:
          image: gcr.io/your-company/api:latest
  • A kadet pipeline generator library to quickly create Spinnaker pipelines for the workloads defined above.
  • A kadet terraform generator library to create Terraform configurations.
  • A set of helper scripts which will make it easy to get up and running with Kapitan.

To set expectations: these libraries will be released in a form that will probably require some refinement, but they should let you get started, and hopefully inspire you to contribute your own libraries or apply the same approach to manage your system of choice.

These generators are a huge step forward from our previous approach, where you would need to create a jsonnet/kadet component file for each service you wanted to manage with Kapitan. The ambition is to let you get started quickly and generate configuration for 80% of cases, while enforcing some sane best practices along the way. Of course, you can still extend them, or write your own generator using jsonnet/kadet if you need something fancier or want full control over a specific component.
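To give you a flavour of what “write your own” means in practice, here is a deliberately simplified, hypothetical kadet sketch of the core pattern (the real library is more sophisticated; see the kapitan-reference repository for the actual code). It loops over the components hash in the inventory and emits one Deployment per entry:

from kapitan.inputs import kadet

inventory = kadet.inventory()  # the resolved inventory for the current target

def main(input_params=None):
    output = kadet.BaseObj()
    for name, component in inventory.parameters.components.items():
        deployment = kadet.BaseObj()
        deployment.root.apiVersion = "apps/v1"
        deployment.root.kind = "Deployment"
        deployment.root.metadata.name = name
        deployment.root.metadata.labels.app = name
        deployment.root.spec.template.spec.containers = [
            {"name": name, "image": component.image}
        ]
        # one output document per component, named after it
        output.root["{}-deployment".format(name)] = deployment
    return output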

A sneak peek — generating manifests

So let’s say that you want to get started with Kapitan. Until now the steps to get you started were quite a few, often cryptic and not well documented.

We have released a “kapitan reference” repository (https://github.com/kapicorp/kapitan-reference) with all the batteries included. I will run this session assuming you have it already.
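If you don’t have a local copy yet, getting one is quick (assuming you have git installed; adjust the checkout location to taste):

git clone https://github.com/kapicorp/kapitan-reference.git
cd kapitan-reference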

Pre-requisites:

  • docker
  • gcloud (the example is on GCP)
  • kapitan
  • yq
  • kapitan generators (released)

Your first target file

Note: now that we have released the Manifest Generator, you can follow these steps yourself.

From your kapitan-reference repository, go ahead and create a first dev target file: inventory/targets/dev.yml. For simplicity, we won’t be creating inventory classes right now, so we will edit the target file directly.

Make sure it has the following content:

classes:
- common
parameters:
  target_name: dev
  components:
    echo-server:
      image: inanimate/echo-server

Now run: kapitan compile --fetch

The --fetch flag makes kapitan download the latest libraries and support scripts that we package. Some of the third-party libraries we use are kube.libsonnet and spinnaker/sponnet.

If you now check your git repository, you will find that Kapitan has generated some files for you:

compiled/dev/
├── docs
├── manifests
│ └── echo-server-bundle.yml
└── scripts

Let’s look at what we have. Open echo-server-bundle.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - echo-server
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - image: inanimate/echo-server
        imagePullPolicy: IfNotPresent
        name: echo-server
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

Admit it. That was quick!

The manifest generator loops through the keys of the components inventory hash, and generates a new set of config for each one it finds; in this case, echo-server (from https://hub.docker.com/r/inanimate/echo-server).
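This also means that scaling to more services is just more inventory. For example (reusing the hypothetical api-server image from the beginning of this post), adding a second key under components would generate a second, independently named bundle on the next compile:

parameters:
  target_name: dev
  components:
    echo-server:
      image: inanimate/echo-server
    api-server:
      image: gcr.io/your-company/api:latest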

As anticipated, the generator library tries to be smart and adds some best practices to all your services, which you may or may not like.

Why would you want a Deployment without podAntiAffinity? I’m sure there are valid reasons, but let’s make it a default, shall we?

Exposing the service

The deployment looks good, but it is missing an essential part. Ahem... we need a service! Right, let’s do that and recompile with kapitan compile:

classes:
- common
parameters:
  target_name: dev
  namespace: ${target_name}
  components:
    echo-server:
      image: inanimate/echo-server
      service:
        type: ClusterIP
      ports:
        http:
          service_port: 80

Adding the port definition produces two changes. First, the port is added to the container:

...
      containers:
      - image: inanimate/echo-server
        imagePullPolicy: IfNotPresent
        name: echo-server
        ports:
        - containerPort: 80
          name: http
          protocol: TCP

Second, a new service definition is created:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: echo-server
  sessionAffinity: None
  type: ClusterIP

Are we there yet? Not quite:

  • The service assumes that echo-server runs on port 80. From the documentation, it looks as if the server actually listens on port 8080 instead.
  • We want the service exposed through a LoadBalancer, so let’s change that.
  • We would like a readiness probe. The updated target below adds all three.

classes:
- common
parameters:
  target_name: dev
  namespace: ${target_name}
  components:
    echo-server:
      image: inanimate/echo-server
      service:
        type: LoadBalancer
      ports:
        http:
          service_port: 80
          container_port: 8080
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3

Have a look at the bundle again:

...
      containers:
      - image: inanimate/echo-server
        imagePullPolicy: IfNotPresent
        name: echo-server
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-server
  name: echo-server
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: echo-server
  sessionAffinity: None
  type: LoadBalancer

Attaboy!

Adding Environment Variables

What else could we do? Well, from the echo-server Docker page it looks as if we can play with a few parameters to change its configuration. Let’s add some environment variables.

classes:
- common
parameters:
  target_name: dev
  namespace: ${target_name}
  echo_server:
    port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server:port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
      service:
        type: LoadBalancer
      ports:
        http:
          service_port: 80
          container_port: ${echo_server:port}

As expected, the changes are reflected in the manifest:

      containers:
      - env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: PORT
          value: '8081'
        image: inanimate/echo-server
        imagePullPolicy: IfNotPresent
        name: echo-server
        ports:
        - containerPort: 8081
          name: http
          protocol: TCP
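Incidentally, this is a good moment to use yq from the prerequisites to spot-check compiled output without opening the files (the syntax below assumes mikefarah’s yq v4; older versions use yq r instead):

yq eval 'select(.kind == "Deployment") | .spec.template.spec.containers[0].env' compiled/dev/manifests/echo-server-bundle.yml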

Adding secrets

Just for the sake of testing, let’s also add a secret to the setup, even if the component won’t be using it.

classes:
- common
parameters:
  target_name: dev
  namespace: ${target_name}
  echo_server:
    port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server:port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
        SECRET_PASSWORD:
          secretKeyRef:
            key: echo_server_password
      service:
        type: LoadBalancer
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3
      ports:
        http:
          service_port: 80
          container_port: ${echo_server:port}
      secret:
        items: ['echo_server_password']
        data:
          echo_server_password:
            value: ?{plain:targets/${target_name}/echo_server_password||randomstr}

Let’s break this down:

  • ?{plain:targets/${target_name}/echo_server_password||randomstr} will create a random string and store it in git. Because we have used the plain backend, it will be stored in cleartext. Use gkms or another secrets backend if you care about your secrets.
  • The SECRET_PASSWORD env variable will contain the generated password. Because we have decided to test the plain backend, you will see it in cleartext in the manifest. Otherwise it would be encrypted and you would only see a secure tag.
  • The items instruction will also mount the secret as a volume, exposing only the selected items. This means you will also be able to access the content of the secret at /opt/secrets/echo_server_password.
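If you want to double-check the generated value, Kapitan can reveal the refs embedded in a compiled file (harmless here, since the plain backend stores it in cleartext anyway):

kapitan refs --reveal -f compiled/dev/manifests/echo-server-secret.yml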

The result of the compilation adds a new file:

compiled/dev/
├── docs
├── manifests
│ ├── echo-server-bundle.yml
│ └── echo-server-secret.yml
└── scripts

Also notice that the files are all nicely and consistently named after the service.

Lastly, we can move the component definition into its own class file: inventory/classes/components/echo-server.yml

parameters:
  echo_server:
    port: 8081
  components:
    echo-server:
      image: inanimate/echo-server
      env:
        PORT: ${echo_server:port}
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name
        POD_NAMESPACE:
          fieldRef:
            fieldPath: metadata.namespace
        POD_IP:
          fieldRef:
            fieldPath: status.podIP
        SECRET_PASSWORD:
          secretKeyRef:
            key: echo_server_password
      service:
        type: LoadBalancer
      healthcheck:
        type: http
        port: http
        probes: ['readiness']
        path: /
        timeout_seconds: 3
      ports:
        http:
          service_port: 80
          container_port: ${echo_server:port}
      secret:
        items: ['echo_server_password']
        data:
          echo_server_password:
            value: ?{plain:targets/${target_name}/echo_server_password||randomstr}

And then we can simplify the target to reference the component:

classes:
- common
- components.echo-server
parameters:
  target_name: dev
  namespace: ${target_name}

This way we can reuse the component across other targets, for instance inventory/targets/production.yml:

classes:
- common
- components.echo-server
parameters:
  target_name: prod
  namespace: ${target_name}

Running kapitan compile again will effortlessly generate the new files for the new target prod:

./kapitan compile
Compiled dev (0.29s)
Compiled prod (0.29s)

which produced:

compiled/prod/
├── docs
├── manifests
│ ├── echo-server-bundle.yml
│ └── echo-server-secret.yml
└── scripts
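From here, getting a target onto a cluster is plain kubectl (a minimal sketch: it assumes your current context points at the right cluster and that the target namespace exists):

# apply everything compiled for the prod target
kubectl apply -f compiled/prod/manifests/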

Final words

We are aiming to release the kapitan reference repository with all the goodies within the next few weeks. Please contact us directly on the #kapitan channel of the Kubernetes Slack if you want to help us work on releasing it, or if you want to take part in an initial alpha release.
