
About the developer

bvis
210 Stars 95 Forks 26 Commits 3 Opened issues

Description

A sample Prometheus setup that can be used as a base to collect Swarm cluster metrics


Prometheus Swarm

A sample image that can be used as a base for collecting Swarm mode metrics in Prometheus

How to use it

You can use the provided docker-compose.yml file as an example. You can deploy the full stack with the command:

docker stack deploy --compose-file docker-compose.yml monitoring
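A minimal sketch of what such a compose file typically contains for a Swarm monitoring stack (the service names and images below are illustrative assumptions, not the repository's exact file):

```yaml
version: "3"

services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"

  node-exporter:
    image: prom/node-exporter
    deploy:
      mode: global   # one exporter instance per Swarm node
```

The `deploy: mode: global` setting is the Swarm idiom for per-node exporters, so Prometheus can scrape metrics from every machine in the cluster.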

Grafana is exposed on port 3000 by default and the credentials are admin/admin; be sure to use something different in your deployments.
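One way to avoid the default credentials is to set Grafana's standard environment variables before deploying, assuming the stack passes GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD through to the grafana service (a sketch, not the repository's exact mechanism):

```shell
# Generate a random admin password and export the standard Grafana
# credential variables for the grafana service to pick up.
export GF_SECURITY_ADMIN_USER=admin
export GF_SECURITY_ADMIN_PASSWORD="$(openssl rand -base64 24)"

# ...then deploy as usual:
# docker stack deploy --compose-file docker-compose.yml monitoring
```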

Once everything is running you just need to connect to Grafana and import the Docker Swarm & Container Overview dashboard.

In case you don't have an Elasticsearch instance with logs and errors, you can either provide an invalid configuration or launch the sample ELK stack:

docker stack deploy --compose-file docker-compose.logging.yml logging

Be patient: some services can take a few minutes to start. This sample stack intentionally uses old versions of Elasticsearch and Kibana to simplify the configuration.
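Rather than guessing when the services are up, a small polling helper can wait until an endpoint responds. This is a sketch; the URL and port in the example are assumptions about where your services are published:

```shell
# Poll a URL until it answers with success, or give up after a timeout
# (in seconds). Returns 0 once the endpoint responds, 1 on timeout.
wait_for_url() {
  url="$1"; timeout="${2:-300}"; elapsed=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 5
    elapsed=$((elapsed + 5))
  done
}

# Example (hypothetical port): wait up to 10 minutes for Kibana.
# wait_for_url http://localhost:5601 600
```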

Docker Engine Metrics

In case you have activated the metrics endpoint in your Docker Swarm cluster, you can also import the Docker Engine Metrics dashboard, which offers complementary data about the Docker daemon itself.
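For reference, the Docker daemon's metrics endpoint is enabled through daemon configuration; a typical /etc/docker/daemon.json fragment looks like this (the address and port are a common convention, and on older Docker versions the experimental flag was also required):

```json
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
```

After restarting the daemon, Prometheus can scrape each node at port 9323.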

More info about this dashboard and its configuration is available in the post Docker Daemon Metrics in Prometheus.
