
Azure Monitor – Monitoring Kubernetes (AKS) Sample Application Using Prometheus Scraping

image

Kubernetes is a proven and booming technology on Azure, so it is no surprise that we need to monitor the Kubernetes infrastructure layer as well as the applications running on top of it. A while ago Microsoft released Azure Monitor for containers, which gives you a good health and performance overview of your Azure Kubernetes Service (AKS) cluster. Like node status…

image

…and most important performance counters and much more…

image

If you haven’t seen this solution, I highly recommend playing with it. When deploying an AKS cluster, you just need to make sure you enable the monitoring switch. That’s it!

Many companies use Prometheus to monitor their Kubernetes infrastructure and applications, in conjunction with Grafana as a dashboard solution. Azure Monitor has a feature in public preview which lets us collect Prometheus metrics and send this data to Log Analytics. There is documentation on Microsoft Docs on how to enable this feature. I was very interested to get hands-on and understand how we can configure it. The documentation contains an overview picture that explains what kinds of options and endpoints there are to collect metrics data…

Container monitoring architecture for Prometheus

We either specify a node, a pod or a service endpoint in Kubernetes to collect metrics from. In most cases, I think, pod annotation is the way to go and will be used in most situations. Typically, the metrics data is exposed by the pod on port 80 (HTTP) under a path like http://myservice/metrics. If you have a lot of pods running and all applications expose their metrics the same way, it is easy to configure Prometheus scraping for the AKS cluster.

There are basically two parts we need to configure. First we need an application that has the Prometheus client library implemented, and second we need an AKS cluster with monitoring enabled. Then we apply a ConfigMap to tell the “omsagent” which endpoints it should consider for metrics collection. We do not need to worry about any agent installation, because if you enabled monitoring, the omsagent is already running on the AKS cluster in Azure.

Let’s get started with step 1 and deploy an Azure Kubernetes Service (aka AKS) cluster… image

…next leave the defaults…

image

…defaults again…

image

…leave defaults…

image

…make sure “Enable container monitoring” is enabled…

image

…everything should be fine and the deployment is ready to start…

image

While the AKS cluster is provisioning, we are going to build our sample application container image. I searched the internet and found a simple Go application that fits our needs. I installed Docker on my Windows machine and cloned the https://github.com/Azure/kubernetes-hackfest repository to my local machine…

git clone https://github.com/Azure/kubernetes-hackfest

…next change to the ..\sample-go directory, where we can find the “Dockerfile”. Then run the “docker build .” command. This will build the Docker image and download all necessary binaries…
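The build and the tagging described below can also be sketched as a single command sequence; the repository name stefanroth/gosampleapp is from my Docker Hub account, so substitute your own:

```shell
# Build the image and assign the repository / tag in one step,
# instead of tagging the image ID afterwards
docker build -t stefanroth/gosampleapp:latest .

# Verify the image exists locally
docker images stefanroth/gosampleapp
```

This requires a running Docker daemon, of course.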

image

…after a while, the build is finished and we can see the ID of the image…

image

Run “docker images” to see all of your local images. Because the image only has an ID and does not have a repository / tag yet, we need to assign one. In my case I type…

docker tag 0f80b7dbb730 stefanroth/gosampleapp:latest

…this adds the following tag / repository to my image…

image

Before we can do the next step, make sure you have created an account at https://hub.docker.com/, the public Docker registry for the image. We need this registry so we can pull the image from the AKS cluster in Azure. Of course we could also use an Azure Container Registry, but I wanted to publish my build so you can easily reproduce the next steps. To push the image to the Docker registry I type…

docker push stefanroth/gosampleapp

…and it looks like this…

image

…because I had uploaded the image before, the output says that the layers have not changed.

If you want to use my image for the next steps, you can find it here: https://hub.docker.com/r/stefanroth/gosampleapp

image

Next switch to Azure Cloud Shell (shell.azure.com) and set the subscription where you have the AKS cluster installed to your current context…

az account set -s [subscriptionID]

…and pull the credentials from the AKS cluster…

az aks get-credentials --resource-group aks2 --name aksnumbertwo

…it looks like this…

image

…then we see if all nodes are up and running…

image

…as a next step we will use the kubectl run command to quickly launch the container / pod for simple testing, without the need to write any manifest…
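The command in the screenshot should look roughly like this (the pod name gosampleapp is my assumption, and note that on older Kubernetes versions kubectl run creates a deployment, while on newer ones it creates a bare pod):

```shell
# Launch the sample app for quick testing, without writing a manifest
kubectl run gosampleapp --image=stefanroth/gosampleapp:latest --port=8080

# Check that the pod is up and running
kubectl get pods
```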

image

…as a next step I want to publish the application externally on Kubernetes, so I can see in a web browser which metrics are exposed. In Kubernetes we simply write a service definition, which defines on which ports the application is accessible, and specify a load balancer to publish it externally with a public IP address.

Create a file in Azure Cloud Shell, like…

cd $HOME
code ./myService.yaml

…and type the following definition…

image
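The definition in the screenshot should look roughly like the following sketch; the name and ports follow the surrounding text, while the selector label is an assumption (kubectl run used to label its pods with run=<name> on older Kubernetes versions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gosampleapp
spec:
  type: LoadBalancer   # provisions an Azure load balancer with a public IP
  ports:
  - port: 8080         # port exposed on the public IP
    targetPort: 8080   # port the application listens on inside the pod
  selector:
    run: gosampleapp   # must match the labels on the pod
```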

…save the file and apply it to Kubernetes…

kubectl apply -f ./myService.yaml

…you should get a confirmation; then check if the pod and the service are running…

image

…if your service has received a public IP (EXTERNAL-IP), you can reach it at that public IP address on port 8080…

image

…the application exposes tons of metrics. The next step is to tell Azure Monitor where to pull this information from. For our example we will use pod annotations, which basically tell Azure Monitor to use the annotated pods to find the /metrics information. Here I am just going to follow the documented steps. I downloaded the ConfigMap and modified the setting below…

image
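A trimmed sketch of the relevant part of that ConfigMap (container-azm-ms-agentconfig in the kube-system namespace, per the preview documentation); the setting I changed is monitor_kubernetes_pods:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  prometheus-data-collection-settings: |-
    [prometheus_data_collection_settings.cluster]
      # how often the omsagent scrapes the configured endpoints
      interval = "1m"
      # scrape pods that carry the prometheus.io/scrape annotation
      monitor_kubernetes_pods = true
```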

…then I apply the ConfigMap so the agent picks up the new settings…

image

…then I’ll check the logs of the omsagent to see if there are any errors…

image

…there are no errors and everything seems to be fine. Just for testing purposes, I am going to assign the annotations to my Kubernetes pod using…

kubectl annotate pods [podID] prometheus.io/scrape="true"
kubectl annotate pods [podID] prometheus.io/port="8080"

…this adds temporary metadata to the pod; if the pod gets deleted, the configuration is gone. In this example I am telling Azure Monitor that I want the data from this pod (scrape="true") and that /metrics (the default path) is available on port 8080 (port="8080"). The documentation lists more settings, but they are not necessary in my case because they match the defaults…

image

If you want to see if the annotation has been set, run…

kubectl describe pod [podID]

…this will output the pod configuration, like here…

image

…after a couple of minutes, we should see some results in Log Analytics…

image

…all the metrics from the “gosampleapp” quickly show up in Log Analytics. If we want to see a specific counter exposed by the app, we can check the code…

image

…there you see that we should find a “requests_counter_total” and here we go…

image
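The query behind the screenshots can be sketched as follows; according to the preview documentation, the scraped Prometheus metrics land in the InsightsMetrics table with the namespace “prometheus”:

```kusto
InsightsMetrics
| where Namespace == "prometheus"
| where Name == "requests_counter_total"
| project TimeGenerated, Name, Val, Tags
```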

As I mentioned before, the annotations are only set temporarily. If we want to set them persistently for the pod(s), we need to define a service (again) and a deployment. A deployment defines how many replicas of the pod should be created, as well as metadata like annotations. Within the deployment we can use the template/metadata section to define the annotations, like in this example, so all pods receive this information…

service_deployment
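A sketch of such a deployment; the image, port, annotations and the replica count of two come from this post, while the name and labels are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gosampleapp
spec:
  replicas: 2                        # two pod replicas
  selector:
    matchLabels:
      app: gosampleapp
  template:
    metadata:
      labels:
        app: gosampleapp
      annotations:
        prometheus.io/scrape: "true" # opt the pods in for scraping
        prometheus.io/port: "8080"   # /metrics is served on this port
    spec:
      containers:
      - name: gosampleapp
        image: stefanroth/gosampleapp:latest
        ports:
        - containerPort: 8080
```

Because the annotations live in the pod template, every replica the deployment creates carries them automatically.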

…save the file as e.g. “service_deployment.yaml” and apply it to Kubernetes…

image

…then check if the pods are running (we defined two replicas in the deployment configuration!), and output the pod definition, like in this example…

image

…and we can run a query again to see if the data is flowing in…

image

Awesome, everything works now as expected!

In addition, you could also pull data from a Kubernetes service endpoint URL, like here. Notice that “default” refers to the namespace name where your application is running, in my case the “default” namespace…

image
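In the ConfigMap this is the kubernetes_services setting; a sketch using the service name assumed throughout this post:

```yaml
prometheus-data-collection-settings: |-
  [prometheus_data_collection_settings.cluster]
    interval = "1m"
    # scrape a service endpoint URL: http://<service>.<namespace>:<port>/<path>
    kubernetes_services = ["http://gosampleapp.default:8080/metrics"]
```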

If you need to configure more settings, just read Microsoft’s online documentation thoroughly. I hope this post gives you a head start playing with Azure Monitor and Prometheus metrics. Remember, this feature is in public preview!
