Kubernetes cluster autoscaler with pluggable metrics backends and scaling engines
Cerebral is a Kubernetes cluster autoscaler with pluggable metrics backends and scaling engines.
Cerebral is a provider-agnostic tool for increasing or decreasing the size of pools of nodes in your Kubernetes cluster in response to alerts generated by user-defined policies. These policies reference pluggable and configurable metrics backends (e.g. Prometheus) that supply the metrics on which autoscaling decisions are made.
Automatically increasing the number of nodes is important in order to meet resource demand, while decreasing the number is helpful in controlling cost.
Manually scaling nodes in a Kubernetes cluster is not feasible given the largely dynamic nature of web infrastructure; thus, automation is needed to assist operators in these tasks. With the increased importance placed on monitoring and observability in modern infrastructure, operators should be able to easily take action on the metrics they are collecting.
Cerebral is simple at its core: it polls a `MetricsBackend` and triggers alerts if thresholds defined in any `AutoscalingPolicy` associated with any `AutoscalingGroup` are breached. These alerts may result in a scale request to the `AutoscalingEngine`. These components are all defined by Custom Resource Definitions (CRDs). An `AutoscalingGroup`, for example, is just a group of Kubernetes nodes that can be selected using some label selector.
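The poll-and-alert flow above can be sketched in a few lines of Go. This is a minimal illustration only: the `AutoscalingPolicy` struct and `pollOnce` function here are hypothetical simplifications, not Cerebral's actual types, which are defined as CRDs and carry far more configuration.

```go
package main

import "fmt"

// AutoscalingPolicy is a hypothetical, stripped-down stand-in for the real
// CRD-backed policy: one metric with an upper and a lower alert threshold.
type AutoscalingPolicy struct {
	Metric         string
	ScaleUpAbove   float64 // breaching this triggers a scale-up alert
	ScaleDownBelow float64 // breaching this triggers a scale-down alert
}

// pollOnce evaluates a single polled metric value against the policy and
// returns the resulting node-count delta: +1 (scale up), -1 (scale down),
// or 0 (no alert).
func pollOnce(p AutoscalingPolicy, value float64) int {
	switch {
	case value > p.ScaleUpAbove:
		return 1
	case value < p.ScaleDownBelow:
		return -1
	default:
		return 0
	}
}

func main() {
	p := AutoscalingPolicy{Metric: "cpu_percent_utilization", ScaleUpAbove: 80, ScaleDownBelow: 20}
	fmt.Println(pollOnce(p, 95)) // above the upper threshold: scale up
}
```

In the real controller the resulting alert is turned into a scale request against the `AutoscalingEngine` associated with the group, rather than a bare integer.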
The most powerful feature of Cerebral is the ability to easily plug in new metrics backend and autoscaling engine implementations.
Support for a different `MetricsBackend` can be added by implementing the metrics backend interface.
In addition to traditional metrics backends such as the currently available Prometheus integration, there are countless possible use-cases for custom, application-specific metrics backends. For example, autoscaling could be performed based on the current depth of some application queue.
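The queue-depth idea above could look roughly like the following. Note that both the `MetricsBackend` interface shown here and the `queueBackend` type are hypothetical simplifications for illustration; the real interface lives in the project source and takes richer arguments.

```go
package main

import "fmt"

// MetricsBackend is a hypothetical simplification of Cerebral's metrics
// backend interface: it resolves a named metric to a current value.
type MetricsBackend interface {
	GetValue(metric string) (float64, error)
}

// queueBackend is a toy application-specific backend that reports the depth
// of an in-memory work queue, as suggested in the text. A real backend would
// query the application or message broker instead.
type queueBackend struct {
	queue []string
}

func (b *queueBackend) GetValue(metric string) (float64, error) {
	if metric != "queue_depth" {
		return 0, fmt.Errorf("unknown metric %q", metric)
	}
	return float64(len(b.queue)), nil
}

func main() {
	var backend MetricsBackend = &queueBackend{queue: []string{"job-1", "job-2"}}
	depth, err := backend.GetValue("queue_depth")
	fmt.Println(depth, err)
}
```

An `AutoscalingPolicy` could then reference this backend and scale the worker pool up whenever the reported depth crosses a threshold.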
Support for a different `AutoscalingEngine` can be added by implementing the engine interface. Since an `AutoscalingGroup` is defined by a label selector, the provider (or some other entity) must be able to label nodes when they are added.
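A hedged sketch of what implementing an engine might involve is below. The `AutoscalingEngine` interface and `fakeEngine` type here are assumptions for illustration, not the project's actual engine interface, which should be consulted directly before implementing a real integration.

```go
package main

import "fmt"

// AutoscalingEngine is a hypothetical simplification of an engine interface:
// given the label selector that defines an AutoscalingGroup, request that the
// matching node pool be resized. It reports whether a scale event occurred.
type AutoscalingEngine interface {
	SetTargetNodeCount(labelSelector map[string]string, numNodes int) (bool, error)
}

// fakeEngine is a toy engine that only tracks the requested size in memory.
// A real engine would call a cloud provider or infrastructure API and label
// the new nodes so the AutoscalingGroup's selector matches them.
type fakeEngine struct {
	current int
}

func (e *fakeEngine) SetTargetNodeCount(labelSelector map[string]string, numNodes int) (bool, error) {
	if numNodes == e.current {
		return false, nil // already at the requested size; no scale event
	}
	e.current = numNodes
	return true, nil
}

func main() {
	var engine AutoscalingEngine = &fakeEngine{current: 3}
	scaled, err := engine.SetTargetNodeCount(map[string]string{"role": "worker"}, 5)
	fmt.Println(scaled, err)
}
```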
This project is in alpha. There may be breaking changes as we continue to expand the project and integrate user feedback.
Currently, the project has support for several metrics backends and engines. A lot more is to come - please see the GitHub issues for a roadmap, and feel free to open your own issue if a feature you'd like to see isn't already in the roadmap!
Currently, all pluggable components, namely the `MetricsBackend` and `AutoscalingEngine`, must be implemented in-tree. There are a number of advantages to supporting out-of-tree components.
Supporting out-of-tree components is on our roadmap (see e.g. #45) but not yet implemented.
Another tool for autoscaling is the Kubernetes Cluster Autoscaler. It requires that its integrated providers support Autoscaling Groups (ASGs), a feature that many cloud providers do not offer.
Additionally, its scaling method is naïve, often triggering events too late to be useful. Cerebral takes a more generic, flexible, and powerful approach: it integrates with existing metrics backends as input and, in turn, triggers pluggable actions in response to scaling events.
Please refer to our documentation for more information on building, configuring, and running Cerebral.
Thank you for your interest in this project and for your interest in contributing! Feel free to open issues for feature requests, bugs, or even just questions - we love feedback and want to hear from you.
Pull requests are also always welcome! However, if the feature you're considering adding is fairly large in scope, please consider opening an issue for discussion first. See CONTRIBUTING.md for more details.