I am starting a new learning series to kickstart Prometheus. I am in the process of learning and implementing metrics and alerting with Prometheus, which has quickly become one of the most popular tools for monitoring infrastructure and applications.
Let’s begin with a short introduction to the different components of Prometheus.
Introduction to Prometheus
What is Prometheus? Prometheus is an open source systems monitoring and alerting toolkit. It is written in Go, as are most of its components, so it ships as standalone binaries that are easy to run. Many companies have started adopting it and migrating their existing monitoring setups to it. Prometheus is part of the Cloud Native Computing Foundation; it was the second project hosted there, after Kubernetes.
In very simple terms, Prometheus collects time series data, much like Graphite does, and you define rules that raise alerts based on patterns in the metrics.
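To give a taste of what such a rule looks like, here is a minimal sketch of an alerting rule file. The rule name `InstanceDown` and the 5-minute threshold are my own illustration; such a file would be referenced from prometheus.yml under rule_files:

```yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        # 'up' is a built-in metric: 1 when a scrape succeeds, 0 when it fails
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

When the expression stays true for the whole `for` duration, the alert fires and is handed off to Alertmanager.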
Components of Prometheus
- Prometheus server that scrapes and stores time series data
- Different client libraries to instrument application code
- A Pushgateway for supporting short-lived jobs
- Special purpose exporters for services like HAProxy, StatsD, etc.
- Alertmanager for alerting
- An expression browser to visualize metrics
Now, let’s get started by setting up a Prometheus server and scraping some data.
Get Started with Prometheus on Ubuntu
Here are the specs of the system on which I am installing everything.
OS: Ubuntu 18.04 Bionic
- Download the latest release of Prometheus. Select the Linux tarball.
- Extract the tarball. Inside, prometheus is the binary to run and prometheus.yml is the main configuration file.
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool
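The download and extract steps above can be sketched as follows. The version number 2.45.0 is only an example, not necessarily current; check the Prometheus releases page for the latest one:

```shell
# Hypothetical version for illustration; substitute the latest release.
PROM_VERSION="2.45.0"
TARBALL="prometheus-${PROM_VERSION}.linux-amd64.tar.gz"

# Fetch and unpack the Linux tarball, then enter the extracted directory.
wget "https://github.com/prometheus/prometheus/releases/download/v${PROM_VERSION}/${TARBALL}"
tar xvf "${TARBALL}"
cd "prometheus-${PROM_VERSION}.linux-amd64"
```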
- Update prometheus.yml with the following configuration
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    # Override global scrape_interval
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
- Now, run prometheus server
./prometheus --config.file=prometheus.yml
- The above starts the Prometheus server and begins scraping metrics from localhost. The scrape interval is set to 5s, which means it collects data every 5 seconds. To visualize the metrics, open http://localhost:9090/graph. This is the expression browser, and it lists the metrics collected. By default Prometheus collects Go runtime metrics and process information about itself.
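To try the expression browser, you can type queries against these self-monitoring metrics. A couple of illustrative examples (exact metric names can vary slightly between Prometheus versions, so treat these as a sketch):

```
# Is the target up? 1 means the last scrape succeeded.
up

# Per-second rate of HTTP requests served by Prometheus itself,
# averaged over the last minute.
rate(prometheus_http_requests_total[1m])
```

Hit Execute and switch to the Graph tab to see the values plotted over time.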
This post is getting long now, and you will get bored reading more :). I’ll write a new post explaining how to collect OS metrics like memory, CPU usage, etc.