A Prometheus discoverer that scrapes Amazon ECS and generates a file SD configuration file.
Prometheus has native Amazon EC2 discovery capabilities, but it cannot discover ECS tasks for scraping. This program is a Prometheus File Service Discovery (`file_sd_config`) integration that bridges that gap.
First, build this program the usual way, then run it. Run `prometheus-ecs-discovery --help` to get information about the command line parameters that can be used.
Export `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` into the environment of the program, making sure that the keys have access to the EC2 / ECS APIs (IAM policies should include `ecs:DescribeTaskDefinition`). If the program needs to assume a different role to obtain access, that role's ARN may be passed in via the `--config.role-arn` option. This option also allows for cross-account access, depending on which account the role is defined in.
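As a sketch, a minimal IAM policy statement for this program might look like the following. Only `ecs:DescribeTaskDefinition` is named above; the other ECS and EC2 actions listed here are assumptions about what a cluster-walking discoverer typically needs, so verify them against the program's actual API calls:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:ListClusters",
        "ecs:ListTasks",
        "ecs:DescribeTasks",
        "ecs:DescribeContainerInstances",
        "ecs:DescribeTaskDefinition",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
```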
Use `-config.write-to` to point the program to a specific folder that your Prometheus master can read from.
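The file written there follows Prometheus's file-based service discovery format: a list of target groups, each with `targets` and `labels`. A hypothetical snippet (the address and label values are purely illustrative; the exact label set is determined by the program):

```yaml
- targets:
    - "10.0.1.17:9102"
  labels:
    job: ecs
    task_arn: "arn:aws:ecs:us-east-1:123456789012:task/example"
```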
Then, add a `file_sd_config` to your Prometheus master:
```yaml
scrape_configs:
  - job_name: ecs
    file_sd_configs:
      - files:
          - /path/to/ecs_file_sd.yml
        refresh_interval: 10m
    # Drop unwanted labels using the labeldrop action
    metric_relabel_configs:
      - regex: task_arn
        action: labeldrop
```
To have your containers scraped, add the following Docker labels to them:

- `PROMETHEUS_EXPORTER_PORT`: the container port that Prometheus should scrape (mandatory)
- `PROMETHEUS_EXPORTER_SERVER_NAME`: the hostname to use; by default the IP is used (optional)
- `PROMETHEUS_EXPORTER_JOB_NAME`: the job name (optional)
- `PROMETHEUS_EXPORTER_PATH`: an alternative scrape path (optional)
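In an ECS task definition, these labels go into the container's `dockerLabels` map. A hypothetical fragment (the port, job name, and path values are illustrative):

```json
"dockerLabels": {
  "PROMETHEUS_EXPORTER_PORT": "9102",
  "PROMETHEUS_EXPORTER_JOB_NAME": "myapp",
  "PROMETHEUS_EXPORTER_PATH": "/metrics"
}
```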
That's it. You should see the program scraping the AWS APIs and writing the discovery file (by default it does so every minute, and by default Prometheus will reload the file the minute it is written). After you reload your Prometheus master configuration, the program will inform Prometheus, via the discovery file, of new targets it must scrape.