Prometheus Configuration
This post will collect information on how I have configured Prometheus in case there is a need to rebuild or make changes in the future.
Prometheus Server
At the moment I have the Prometheus server running on a Raspberry Pi 3 Model B. There is plenty of good information on how to quickly set up both a Prometheus server and the various node_ and other exporters on the Prometheus Documentation Website and other locations, which can be used for reference. Here I specifically want to focus on modifications to the default installation configuration which make it easier for me to quickly add additional collection targets without repeatedly editing the core /etc/prometheus/prometheus.yml for each new target.
$ neofetch
ceed-chuck.local
----------------
OS: Arch Linux ARM aarch64
Host: Raspberry Pi 3 Model B
Kernel: 5.15.5-1-ARCH
Uptime: 2 days, 26 mins
Packages: 211 (pacman)
Shell: bash 5.1.12
Resolution: 720x480
Terminal: /dev/pts/1
CPU: (4) @ 1.200GHz
Memory: 210MiB / 895MiB
Sticking with the defaults, the Prometheus service is running on host ceed-chuck and is available on port 9090. An instance of node_exporter is also installed and running on the same host, available on port 9100.
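One quick sanity check is to hit both endpoints with curl from another machine; the host name below is specific to my network, so substitute your own:
curl -s http://ceed-chuck.local:9090/-/healthy
curl -s http://ceed-chuck.local:9100/metrics | head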
Enhancements to the Default Prometheus Configuration
- Create a targets directory, /etc/prometheus/targets.d/.
sudo mkdir -p /etc/prometheus/targets.d
- Add a node.yml config file to the /etc/prometheus/targets.d/ directory listing target hosts and ports. In the following sample I am just adding a single target for the node_exporter on host ceed-chuck.
sudo vim /etc/prometheus/targets.d/node.yml
- targets:
  - 'ceed-chuck.local:9100'
- Modify the /etc/prometheus/prometheus.yml file so that it picks up targets from the YAML file in the new directory. Specifically, add the following:
  - job_name: 'node'
    file_sd_configs:
      - files:
        - '/etc/prometheus/targets.d/node.yml'
To the end of the existing scrape_configs: section, which by default looks like this:
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
So that the /etc/prometheus/prometheus.yml file ends up with a scrape_configs: section that now looks like this:
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: 'node'
    file_sd_configs:
      - files:
        - '/etc/prometheus/targets.d/node.yml'
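Before restarting anything, the merged file can be sanity-checked with promtool, which ships alongside the Prometheus server in most packages:
promtool check config /etc/prometheus/prometheus.yml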
After doing the above, adding a new target for the Prometheus server to scrape is a matter of:
- Adding the new host and port to the node.yml file in the /etc/prometheus/targets.d/ directory.
- Restarting the Prometheus service with sudo systemctl restart prometheus.
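Once the service is back up, the targets endpoint of the Prometheus HTTP API is a quick way to confirm the new entry was picked up (jq is optional here, it just makes the output readable):
curl -s http://ceed-chuck.local:9090/api/v1/targets | jq '.data.activeTargets[].labels.instance'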
The next section is a snapshot of the state file I currently use to install the Prometheus server.
Salt State for Prometheus Needful
app_prometheus file system layout
app_prometheus
├── files
│ └── node.yml
└── init.sls
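The app_prometheus directory sits under the Salt master's file_roots (typically /srv/salt/ by default). Not shown above is the top file entry that assigns the state to the Prometheus host; a minimal example, with the minion ID glob being an assumption, would be:
# /srv/salt/top.sls
base:
  'ceed-chuck*':
    - app_prometheus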
init.sls
# Installs Prometheus for monitoring network compute devices
# Created: 2021-10-22
# Modified 2021-12-18 - create /etc/prometheus/targets.d, manage node.yml,
# adjust prometheus.yml to make use of prior
# Make sure prometheus is installed
prometheus_inst:
  pkg.installed:
    - name: {{ pillar['prometheus'] }}
    - refresh: False

# Create /etc/prometheus/targets.d/ subdirectory
prom_targets:
  file.directory:
    - user: root
    - name: /etc/prometheus/targets.d
    - group: prometheus
    - mode: 755

# Put a node.yml file in /etc/prometheus/targets.d only if it does not exist.
node.yml:
  file.managed:
    - name: /etc/prometheus/targets.d/node.yml
    - source: salt://app_prometheus/files/node.yml
    - replace: False
    - mode: 644

# Make sure /etc/prometheus/prometheus.yml will read node.yml from /etc/prometheus/targets.d/
prometheus.yml:
  file.append:
    - name: /etc/prometheus/prometheus.yml
    - text: |
        - job_name: 'node'
          file_sd_configs:
            - files:
              - '/etc/prometheus/targets.d/node.yml'

# Make sure service is enabled and started.
prom_service:
  service.running:
    - name: prometheus.service
    - enable: True
    - watch:
      - file: '/etc/prometheus/targets.d/node.yml'
      - file: '/etc/prometheus/prometheus.yml'
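The pkg.installed state pulls the package name from pillar, so a matching pillar key is assumed to exist; a minimal entry (the pillar file path here is hypothetical, and on Arch Linux ARM the package is simply prometheus) would be:
# /srv/pillar/prometheus.sls
prometheus: prometheus
along with a corresponding entry in the pillar top file so the minion can see it.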
node.yml
- targets:
  - 'bob-c2.local:9100'
  - 'ceed-carmen.local:9100'
  - 'ceed-chuck.local:9100'
  - 'dylan-c2.local:9100'
  - 'mc1_0.local:9100'
  - 'mc1_2.local:9100'
  - 'mc1_3.local:9100'
  - 'mc1_5.local:9100'
  - 'parallella.local:9100'
  - 'pc0.local:9100'
  - 'pc1.local:9100'
  - 'pc3.local:9100'
  - 'pc5.local:9100'
  - 'richard.local:9100'
  - 'rock64.local:9100'
  - 'setback.local:9100'
  - 'sparky.local:9100'
  - 'steve.local:9100'
  - 'symbolics.local:9100'
  - 'test1.local:9100'
  - 'trinity.local:9100'
  - 'trisquel.local:9100'
  - 'velma.local:9100'
  - 'xu4-cloud.local:9100'
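With the pillar and top file entries in place, the state can be applied from the Salt master; the minion ID glob here is just an assumption based on the host name used earlier:
sudo salt 'ceed-chuck*' state.apply app_prometheus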