Custom Configurations with Prometheus Operator

(Last Updated On: 2018-11-26)

If you are using Prometheus Operator, you have probably encountered an occasion where you wished you could declare your own custom configuration.  Currently, there are a couple of use cases that require custom configurations because Prometheus Operator does not cover them.  For example, if your use case is relabeling metrics or blackbox probing, you will need to declare your own custom configuration to accomplish these tasks.

When this article was published, custom configuration relied on a hack rather than a dedicated mechanism for including custom configurations.  However, since v0.19.0, this hack is no longer needed.  Prometheus Operator now allows you to include additional configs that will be merged with the configs that Prometheus Operator automatically generates.  If you are using that version or later, use the additional scrape configs feature rather than the method described here.

Before going down this rabbit hole, be warned that I am presenting a temporary workaround to use until Prometheus Operator has better support for use cases that require custom configuration.  Also be aware that this workaround relies on a hack that isn’t explicitly supported by Prometheus Operator, although the maintainers of Prometheus Operator acknowledge that the workaround exists and provide some sparse documentation on how to use it.  Additionally, I have not had an opportunity to figure out how to mount the alerting rules in Kubernetes 1.8+.  This issue was reported by Vinayak Saokar (@saokar).  If you can contribute to the solution, please post to the issue.

To understand how this workaround works, it helps to know how Prometheus Operator manages the configuration of each Prometheus instance deployed to your Kubernetes cluster.  When a value is assigned to the serviceMonitorSelector, Prometheus Operator creates a Kubernetes Secret object which contains the base64 encoded prometheus.yaml file used to configure Prometheus.  This secret can be inspected just like any other Kubernetes Secret.  Assuming you have an instance of prometheus deployed to the monitoring namespace via Prometheus Operator, you can inspect the secret with the command below.

kubectl -n monitoring get secret -o yaml

You should see a file like the one below:
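Here is a sketch of what that secret looks like (the resource name is whatever the operator generated for your instance, and the base64 payloads are truncated placeholders, not real data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The operator names this prometheus-<name-of-your-Prometheus-resource>;
  # "blackbox" is an assumed instance name here.
  name: prometheus-blackbox
  namespace: monitoring
type: Opaque
data:
  configmaps.json: eyJwcm9tZXRoZXVzLWJsYWNrYm94LXJ1bGVzIjoi...  # truncated
  prometheus.yaml: Z2xvYmFsOgogIHNjcmFwZV9pbnRlcnZhbDogMzBz...  # truncated
```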

Notice that there are two base64 encoded files that are passed as data in the secret.  The first is a configmaps.json file that contains a checksum for the version of prometheus.yaml that is passed in the data object of the secret.  If you don’t pass Prometheus Operator a serviceMonitorSelector, you can pass your own secret in place of this one.

Let’s take advantage of this to set up blackbox probing of my website.  Blackbox probing simply means probing a machine that you can’t instrument internally.  For example, I can’t instrument the applications that make Google’s search engine work, but I can probe it to see if it is up and how long it takes to respond.  The company Pingdom actually sells premium services to do just that for your own website if you are concerned with things like uptime.

To do the blackbox probing, the first thing that we need is an application to handle the probes.  I am going to use blackbox_exporter.  You can get your own copies of the configuration that I am using for blackbox_exporter here.
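If you don’t want to fetch my copy, a minimal blackbox_exporter config looks something like this (a sketch with a single HTTP module; the module name http_2xx is the exporter’s conventional default):

```yaml
# blackbox.yml -- one HTTP prober module
modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      method: GET
      valid_status_codes: []  # empty list defaults to 2xx
```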

To run blackbox_exporter in my prometheus cluster, I am using the deployment.yml and services.yml files below:
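They look roughly like this (a sketch: the names, labels, and image tag are assumptions, and the stock prom/blackbox-exporter image ships with a default http_2xx module, so no config volume is mounted here):

```yaml
# deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blackbox-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      containers:
      - name: blackbox-exporter
        image: prom/blackbox-exporter:v0.12.0
        ports:
        - containerPort: 9115   # blackbox_exporter's default port
---
# services.yml
apiVersion: v1
kind: Service
metadata:
  name: blackbox-exporter
  namespace: monitoring
spec:
  selector:
    app: blackbox-exporter
  ports:
  - name: http
    port: 9115
    targetPort: 9115
```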

If you copy these files to a directory, you can run kubectl apply -f . in that directory to create the resources in your own cluster or minikube instance.  This particular configuration is very simple, and only checks to see if the probe returns 200.

After setting up our exporter, it’s time to create a prometheus instance to record the probe results.  You can get a copy of the files that I will use to create these resources here.  Let’s take care of some of the more straightforward stuff first.  We can create our configMaps, storage, and services before we need to worry about how to manage the secret.  Go ahead and kubectl create -f these files if you are following along.

The storage.yml file tells kubernetes to provision a persistent disk.  I am using GCE.  Refer to the kubernetes docs on StorageClass if you are using another provisioner.
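As a sketch, a GCE-backed StorageClass looks like this (the class name is an assumption; swap the provisioner and parameters for your cloud):

```yaml
# storage.yml -- GCE persistent disk provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```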

There isn’t much to say about the services.yml file, but keep the port number 9090 in mind as we will use it later to access the prometheus UI.
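Something like the following (a sketch; the operator labels the pods it manages with the name of the Prometheus resource, which I am assuming is blackbox):

```yaml
# services.yml -- expose the prometheus UI on 9090
apiVersion: v1
kind: Service
metadata:
  name: prometheus-blackbox
  namespace: monitoring
spec:
  selector:
    prometheus: blackbox
  ports:
  - name: web
    port: 9090
    targetPort: 9090
```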

I’m not doing much with my configMap.yml yet, but later on it will house any alerting or recording rules that I’d like to have.  It is mainly important because the secret we will create for our custom configuration relies on this file existing.
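For now it can be an empty shell (the name is an assumption, but the secret-generating script below the fold checksums this file, so it must exist):

```yaml
# configMap.yml -- placeholder for future alerting/recording rules
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-blackbox-rules
  namespace: monitoring
data: {}
```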

Finally, we can write our prometheusConfig.yml file.  This file won’t be consumed by kubernetes directly.  Instead, we will pass this file as a secret to kubernetes so that prometheus can consume it for its configuration.  Normally, I would say don’t share your kubernetes secrets, but in this case, there isn’t much that is secret about this file.
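The heart of the file is the standard blackbox relabeling dance: point the scrape at the target you want probed, then rewrite __address__ so the request actually goes to the exporter.  A sketch (the probe target and the exporter’s service DNS name are assumptions based on the resources above):

```yaml
# prometheusConfig.yml -- becomes prometheus.yaml inside the secret
global:
  scrape_interval: 30s
scrape_configs:
- job_name: blackbox
  metrics_path: /probe
  params:
    module: [http_2xx]        # module defined in blackbox_exporter's config
  static_configs:
  - targets:
    - https://example.com     # the site to probe (stand-in for my website)
  relabel_configs:
  # Pass the original target as the ?target= query parameter ...
  - source_labels: [__address__]
    target_label: __param_target
  # ... keep it visible as the instance label ...
  - source_labels: [__param_target]
    target_label: instance
  # ... and send the actual scrape to blackbox_exporter.
  - target_label: __address__
    replacement: blackbox-exporter.monitoring.svc:9115
```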

If we wanted to, we could stop here and make a secrets.yml file that has a base64 encoded version of this file, plus a base64 encoded version of a configmaps.json file.  The first time I set blackbox_exporter up it was for work, and this is how I did it because I was in a time crunch.  But manually editing and applying the secrets was a huge pain in the neck, so I wised up and wrote a little bash script to take care of generating the secret for me.  Bash may not be able to do floating point multiplication, but it’s great for templating.  I recommend that you use this script as well.  If you are on a Mac, you may need to run brew update && brew install shasum.  If you are on Linux, you should replace the shasum command with the Linux equivalent in this script.

As you can see, I am generating the configmaps.json file and the secret dynamically, and then applying the secret.

Finally, we can create the prometheus instance by running kubectl apply -f on the file below.  But remember, this only works if you have prometheus-operator running.  If you haven’t set up prometheus-operator, it only takes one minute.
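The Prometheus resource is deliberately spare (a sketch; I am assuming the instance name blackbox so the operator looks for our prometheus-blackbox secret, and storage field names vary between operator versions, so check yours):

```yaml
# prometheus.yml -- the Prometheus custom resource
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: blackbox
  namespace: monitoring
spec:
  replicas: 1
  # No serviceMonitorSelector: the operator will not generate a config,
  # leaving our hand-rolled prometheus-blackbox secret in place.
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: prometheus-ssd
        resources:
          requests:
            storage: 10Gi
```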

We could also make a public facing interface for our prometheus instance using nginx+basic auth.  But I’m not going to go through the trouble, because we can also use port forwarding to access the prometheus UI.  After applying the prometheus.yml file, run kubectl port-forward prometheus-blackbox-0 9090:9090 to forward the application port to your local machine.  Then, we can visit localhost:9090 to view the prometheus UI.

Prometheus user interface

The image above shows the probe_http_duration_seconds metrics for my site.  The response times are pretty good.  Eventually, I’ll set up alerting so that I can be notified if my site is unavailable for any reason.  In addition to monitoring my personal website, I use the same strategy to probe critical endpoints at work so that our team can be alerted if any endpoints are unavailable.  Hopefully this information saves you a lot of time as you set up your own blackbox probes.