Prometheus is an open-source monitoring and alerting toolkit that can ingest massive amounts of metric data every second. This property makes Prometheus well-suited for monitoring complex workloads.
Use Prometheus to monitor your servers, VMs, and databases, and draw on that data to analyze the performance of your applications and infrastructure.
This article explains how to install and set up Prometheus monitoring in a Kubernetes cluster.
Prerequisites
- A Kubernetes cluster
- A fully configured kubectl command-line interface on your local machine
Install Prometheus Monitoring on Kubernetes
Prometheus monitoring can be installed on a Kubernetes cluster by using a set of YAML (YAML Ain't Markup Language) files. These files contain configurations, permissions, and services that allow Prometheus to access resources and pull information by scraping the elements of your cluster.
YAML files are easy to track and edit, and they can be reused indefinitely.
Note: The .yaml files below, in their current form, are not meant to be used in a production environment. Instead, you should adjust these files to fit your system requirements.
Step 1: Create Monitoring Namespace
All the resources in Kubernetes are started in a namespace. Unless one is specified, the system uses the default namespace. To have better control over the cluster monitoring process, specify a monitoring namespace.
The namespace name needs to be a DNS-compatible label. For easy reference, we are going to name the namespace: monitoring.
There are two ways to create a monitoring namespace for retrieving metrics from the Kubernetes API.
Option 1: Enter this command in your command-line interface to create the monitoring namespace on your host:
kubectl create namespace monitoring
The output confirms the namespace creation.
Option 2:
1. Create a .yml file (for example, monitoring.yml) with the following namespace definition:
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
This method is convenient as you can deploy the same file in future instances.
2. Apply the file to your cluster by entering the following command in your command terminal:
kubectl apply -f monitoring.yml
3. List existing namespaces by using this command:
kubectl get ns
Step 2: Create Persistent Volume and Persistent Volume Claim
A Prometheus deployment needs dedicated storage space to store scraping data. A practical way to fulfill this requirement is to connect the Prometheus deployment to an NFS volume. The following is a procedure for creating an NFS volume for Prometheus and including it in the deployment via persistent volumes.
1. Install the NFS server on your main system.
sudo apt install nfs-kernel-server
2. Create a directory to use with Prometheus.
sudo mkdir -p /mnt/nfs/promdata
3. Change the ownership of the directory.
sudo chown nobody:nogroup /mnt/nfs/promdata
4. Change the permissions for the directory.
sudo chmod 777 /mnt/nfs/promdata
5. Create a .yaml file (for example, pv-pvc.yaml) using a text editor, such as nano:
6. Paste the following configuration into the file and adjust the parameters to fit your system. The spec.nfs.server field should correspond to the IP address of the system you installed the NFS server on.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
  namespace: monitoring
  labels:
    type: nfs
    app: prometheus-deployment
spec:
  storageClassName: managed-nfs
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.49.1
    path: "/mnt/nfs/promdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
  namespace: monitoring
  labels:
    app: prometheus-deployment
spec:
  storageClassName: managed-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
Save the file and exit.
7. Apply the configuration with kubectl.
kubectl apply -f pv-pvc.yaml
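For the pods to mount the volume later, the directory also needs to be exported by the NFS server and reachable from the cluster nodes. The commands below are a minimal sketch; the 192.168.49.0/24 client subnet is an assumption based on the server address used above, so adjust it to your network. The verification commands simply confirm that the claim binds to the volume.
# Export the Prometheus data directory (the subnet is an assumption; use your cluster's subnet).
echo "/mnt/nfs/promdata 192.168.49.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra
# Confirm that the PersistentVolumeClaim is bound to the PersistentVolume.
kubectl get pv pv-nfs-data
kubectl get pvc pvc-nfs-data -n monitoring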
Step 3: Create Cluster Role, Service Account and Cluster Role Binding
Namespaces are designed to limit the permissions of default roles. Hence, if we want to retrieve cluster-wide data, we need to give Prometheus access to all cluster resources.
The steps below explain how to create and apply a basic set of .yaml files that provide Prometheus with cluster-wide access.
1. Create a file (for example, cluster-role.yaml) for the cluster role definition.
2. Copy the following configuration and adjust it according to your needs. The verbs on each rule define the actions the role can take on the apiGroups.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
Save the file and exit.
3. Apply the file.
kubectl apply -f cluster-role.yaml
4. Create a service account file (for example, service-account.yaml).
5. Copy the configuration below to define a service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
Save the file and exit.
6. Apply the file.
kubectl apply -f service-account.yaml
7. Create another file (for example, cluster-role-binding.yaml) in a text editor:
8. Define the ClusterRoleBinding. This binds the service account to the previously created cluster role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
Save the file and exit.
9. Finally, apply the binding with kubectl.
kubectl apply -f cluster-role-binding.yaml
By applying these resources, we have granted Prometheus cluster-wide access from the monitoring namespace.
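Optionally, verify that the binding works as intended by impersonating the service account and checking its permissions:
# Check whether the prometheus service account can list pods cluster-wide.
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus
# The expected answer is "yes".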
Step 4: Create Prometheus ConfigMap
The ConfigMap provides the instructions for the scraping process. The scraping instructions for each element of the Kubernetes cluster should be customized to match your individual monitoring requirements and cluster setup.
The example in this article uses a simple ConfigMap that defines the scrape and evaluation intervals, jobs, and targets.
1. Create the ConfigMap file (for example, configmap.yaml) in a text editor.
2. Copy the configuration below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
    rule_files:
      # - "example-file.yml"
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']
Save the file and exit.
3. Apply the ConfigMap with kubectl.
kubectl apply -f configmap.yaml
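Optionally, confirm that the ConfigMap was created and that it contains the prometheus.yml key:
kubectl get configmap prometheus-config -n monitoring
kubectl describe configmap prometheus-config -n monitoring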
While the configuration presented above is sufficient to create a test Prometheus deployment, ConfigMaps usually provide further configuration details. The following sections discuss some additional options that you can include in the file.
Scrape Nodes (kubelet)
This service discovery exposes the nodes that make up your Kubernetes cluster. The kubelet runs on every node and is a source of valuable information.
scrape_configs:
- job_name: 'kubelet'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required with Minikube.
Scrape cAdvisor (container level information)
The kubelet only provides information about itself and not the containers. To receive information from the container level, we need to use an exporter. cAdvisor is already embedded in the kubelet and only needs the metrics_path /metrics/cadvisor for Prometheus to collect container data:
- job_name: 'cadvisor'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required with Minikube.
  metrics_path: /metrics/cadvisor
Scrape Kubernetes API Servers
Use the endpoints role to target each application instance. The following section of the configuration allows you to scrape the API servers in your Kubernetes cluster.
- job_name: 'k8apiserver'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true  # Required if using Minikube.
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
Scrape Pods for Kubernetes Services (excluding API Servers)
Scrape the pods backing all Kubernetes services and disregard the API server metrics.
- job_name: 'k8services'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels:
    - __meta_kubernetes_namespace
    - __meta_kubernetes_service_name
    action: drop
    regex: default;kubernetes
  - source_labels:
    - __meta_kubernetes_namespace
    regex: default
    action: keep
  - source_labels: [__meta_kubernetes_service_name]
    target_label: job
Scrape Pods by Port Name
Discover all pod ports with the name metrics by using the container name as the job label.
- job_name: 'k8pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    regex: metrics
    action: keep
  - source_labels: [__meta_kubernetes_pod_container_name]
    target_label: job
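For the k8pods job above to pick up an application, one of its container ports must be named metrics. The manifest below is a minimal, hypothetical example of such a pod; the name example-app and the image are placeholders, not part of this tutorial's deployment.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: example-app          # The container name becomes the job label.
    image: nginx               # Placeholder image; use an application that exposes Prometheus metrics.
    ports:
    - name: metrics            # The port name "metrics" is what the k8pods job keeps.
      containerPort: 8080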
Step 5: Create Prometheus Deployment File
The deployment .yaml defines the number of replicas and the template applied to the defined set of pods. The file also connects the elements defined in the previous files, such as the PVC and the ConfigMap.
1. Create a file (for example, deployment.yaml) to store the deployment configuration.
2. Copy the following example configuration and adjust it according to your needs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        args:
        - '--storage.tsdb.retention=6h'
        - '--storage.tsdb.path=/prometheus'
        - '--config.file=/etc/prometheus/prometheus.yml'
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus
        - name: prometheus-storage-volume
          mountPath: /prometheus
      restartPolicy: Always
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-config
      - name: prometheus-storage-volume
        persistentVolumeClaim:
          claimName: pvc-nfs-data
Save and exit.
3. Deploy Prometheus with the following command.
kubectl apply -f deployment.yaml
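Optionally, check that the deployment rolls out successfully and that the Prometheus pod reaches the Running state:
kubectl rollout status deployment/prometheus -n monitoring
kubectl get pods -n monitoring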
Step 6: Create Prometheus Service
Prometheus is currently running in the cluster. Follow the procedure to create a service and obtain access to the data Prometheus has collected:
1. Create a .yaml file (for example, service.yaml) to store the service definition.
2. Define the service in the file.
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30909
Save the file and exit.
3. Create the service with kubectl:
kubectl apply -f service.yaml
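If you do not want to expose a NodePort, or simply want to test the deployment quickly, you can instead forward the service port to your local machine. This is an optional alternative; the local port 9090 is an arbitrary choice:
kubectl port-forward -n monitoring svc/prometheus-service 9090:8080
# Prometheus is then reachable at http://localhost:9090 for as long as the command runs.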
Monitoring Kubernetes Cluster with Prometheus
Prometheus is a pull-based system. It sends an HTTP request, a so-called scrape, based on the configuration defined in the deployment file. The response to this scrape request is parsed and stored along with the metrics for the scrape itself.
The storage is a custom database on the Prometheus server and can handle a massive influx of data. It is possible to monitor thousands of machines simultaneously with a single Prometheus server.
Note: With so much data coming in, disk space can quickly become an issue. The collected data has great short-term value. If you are planning on keeping extensive long-term records, it might be a good idea to provision additional persistent storage volumes.
The data needs to be appropriately exposed and formatted so that Prometheus can collect it. Prometheus can access data directly from the app’s client libraries or by using exporters.
Exporters are used for data that you do not have full control over (for example, kernel metrics). An exporter is a piece of software placed next to the application. Its purpose is to accept HTTP requests from Prometheus, make sure the data is in a supported format, and then provide the requested data to the Prometheus server.
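The data an exporter (or an instrumented application) serves is plain text in the Prometheus exposition format. A minimal illustration of what a scraped /metrics endpoint returns looks like this; the metric name and label values are made up for the example:
# HELP http_requests_total The total number of handled HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="400"} 3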
Once you equip your applications to provide data to Prometheus, you need to inform Prometheus where to look for that data. Prometheus discovers the targets to scrape by using service discovery.
A Kubernetes cluster already has labels and annotations and an excellent mechanism for keeping track of changes and the status of its elements. Hence, Prometheus uses the Kubernetes API to discover targets.
The Kubernetes service discovery roles that you can expose to Prometheus are: node, service, pod, endpoints, and ingress.
Prometheus retrieves machine-level metrics separately from the application information. The only way to expose memory, disk space, CPU usage, and bandwidth metrics is to use a node exporter. Additionally, metrics about cgroups need to be exposed as well.
Fortunately, the cAdvisor exporter is already embedded on the Kubernetes node level and can be readily exposed.
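If you want to see the raw cAdvisor data before Prometheus scrapes it, the metrics endpoint can be queried through the Kubernetes API proxy. The node name below is a placeholder; list your nodes with kubectl get nodes first:
kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/cadvisor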
Once the system collects the data, you can access it by using the PromQL query language, export it to graphical interfaces like Grafana, or use it to send alerts with the Alertmanager.
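Besides the web UI, the collected data can also be queried programmatically through the Prometheus HTTP API. The command below is a hedged sketch that runs the PromQL query up against the NodePort exposed in the previous step; replace <node-IP> with the address of one of your nodes:
curl 'http://<node-IP>:30909/api/v1/query?query=up'
# Returns a JSON document listing every scrape target and whether it is up (1) or down (0).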
Access Prometheus Monitoring
Ensure that all the relevant elements run properly in the monitoring namespace:
kubectl get all -n monitoring
Use the individual node IP or URL and the nodePort defined in the service.yaml file to access Prometheus from your browser. For example (replace the placeholder with the address of one of your nodes):
http://<node-IP>:30909
By entering the URL or IP of your node and specifying the port from the .yaml file, you have successfully gained access to Prometheus Monitoring.
Note: If you need a comprehensive dashboard system to graph the metrics gathered by Prometheus, one of the available options is Grafana. It uses data sources to retrieve the information used to create graphs.
How to Monitor kube-state-metrics? (Optional)
You are now able to fully monitor your Kubernetes infrastructure, as well as your application instances. However, this does not include metrics on the information Kubernetes has about the resources in your cluster.
kube-state-metrics is an exporter that allows Prometheus to scrape that information as well.
1. Create a YAML file (for example, kube-state-metrics.yml) for the kube-state-metrics deployment and service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
spec:
  selector:
    matchLabels:
      app: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: prometheus
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.2.0
        ports:
        - containerPort: 8080
          name: monitoring
---
kind: Service
apiVersion: v1
metadata:
  name: kube-state-metrics
spec:
  selector:
    app: kube-state-metrics
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
2. Apply the file by entering the following command:
kubectl apply -f kube-state-metrics.yml
Once you apply the file, access Prometheus by entering the node IP or URL and the nodePort, as described in the previous section.
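Optionally, confirm that the kube-state-metrics deployment, service, and pod are up before looking for the new time series in Prometheus:
kubectl get deployment kube-state-metrics
kubectl get service kube-state-metrics
kubectl get pods -l app=kube-state-metrics
# New series such as kube_pod_status_phase should then become queryable in Prometheus.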
Now that you have successfully installed Prometheus Monitoring on a Kubernetes cluster, you can track the overall health, performance, and behavior of your system.
No matter how large and complex your operations are, a metrics-based monitoring system such as Prometheus is a vital DevOps tool for maintaining a distributed microservices-based architecture.