Fast carbon relay+aggregator with admin interfaces for making changes online - production ready
A relay for carbon streams, in Go. Like carbon-relay from the graphite project, except it:
This makes it easy to fan out to other tools that consume the metrics, to balance or split load, to provide redundancy, to partition the data, and so on. This pattern lets alerting and event-processing systems act on the data as it is received, which is much better than repeatedly reading it back from storage.
You have one master routing table, which contains 0-N routes. There are different route types; a carbon route can contain 0-M destinations (tcp endpoints).
First, "matching": you can match metrics on one or more of prefix, substring, or regex. All three default to "" (the empty string, i.e. allow all), and the conditions are AND-ed. Regexes are more resource-intensive and hence should be, and often can be, avoided.
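A minimal sketch of these matching rules (not the actual carbon-relay-ng code): each condition that is set must hold, and an unset condition allows everything.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matcher holds the three optional conditions; "" means "allow all".
type Matcher struct {
	Prefix string
	Sub    string
	Regex  string
}

// Match reports whether metric satisfies every condition that is set.
// The conditions are AND-ed; the regex is compiled here for brevity,
// though a real implementation would compile it once up front.
func (m Matcher) Match(metric string) bool {
	if m.Prefix != "" && !strings.HasPrefix(metric, m.Prefix) {
		return false
	}
	if m.Sub != "" && !strings.Contains(metric, m.Sub) {
		return false
	}
	if m.Regex != "" {
		re, err := regexp.Compile(m.Regex)
		if err != nil || !re.MatchString(metric) {
			return false
		}
	}
	return true
}

func main() {
	m := Matcher{Prefix: "servers.", Sub: "cpu"}
	fmt.Println(m.Match("servers.web1.cpu.idle")) // both conditions hold
	fmt.Println(m.Match("servers.web1.mem.free")) // substring check fails
}
```

Note that an all-defaults `Matcher{}` matches every metric, which is the "allow all" behavior described above.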
The route can have different behaviors, based on its type:
carbon-relay-ng (for now) focuses on staying up and not consuming too many resources.
For carbon routes:

* if the connection is up but slow, we drop the data
* if the connection is down and spooling is enabled, we try to spool, but if spooling is slow we drop the data
* if the connection is down and spooling is disabled, we drop the data
kafka, Google PubSub, and grafanaNet have an in-memory buffer and can be configured to run in blocking or non-blocking mode when the buffer fills up.
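The difference between the two modes can be illustrated with a buffered channel standing in for the in-memory buffer. This is a hypothetical sketch, not the actual carbon-relay-ng implementation:

```go
package main

import "fmt"

// enqueue adds a metric to buf. In blocking mode it waits until there
// is room, applying backpressure to the sender; in non-blocking mode
// it drops the metric when the buffer is full, and reports whether
// the metric was kept.
func enqueue(buf chan string, metric string, blocking bool) bool {
	if blocking {
		buf <- metric // blocks the caller until the buffer has room
		return true
	}
	select {
	case buf <- metric:
		return true
	default:
		return false // buffer full: drop the metric
	}
}

func main() {
	buf := make(chan string, 2) // tiny buffer to force an overflow
	fmt.Println(enqueue(buf, "a", false))
	fmt.Println(enqueue(buf, "b", false))
	fmt.Println(enqueue(buf, "c", false)) // buffer full, dropped
}
```

Blocking mode trades delivery guarantees for backpressure on whatever is feeding the relay; non-blocking mode keeps the relay responsive at the cost of losing metrics under load.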