redis-cluster-operator

Overview

Redis Cluster Operator manages Redis Cluster atop Kubernetes.

The operator itself is built with the Operator framework.

Redis Cluster atop Kubernetes

Each master node and its slave nodes are managed by a StatefulSet; the operator creates a headless service for each StatefulSet and a ClusterIP service covering all nodes.

Each StatefulSet uses PodAntiAffinity to ensure that the master and its slaves are dispersed across different Kubernetes nodes. In addition, when the operator selects the master in each StatefulSet, it preferentially selects pods running on Kubernetes nodes different from those of the masters already chosen.
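The anti-affinity described above roughly corresponds to a pod spec like the following. This is an illustrative sketch, not the operator's exact generated output; the label key is taken from the `redis.kun/name` selector used elsewhere in this README:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              # label assumed from the operator's service selector
              redis.kun/name: example-distributedrediscluster
          # spread pods across distinct Kubernetes nodes
          topologyKey: kubernetes.io/hostname
```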

Prerequisites

  • go version v1.13+.
  • Access to a Kubernetes v1.13.10 cluster.

Features

  • Customize the number of master nodes and the number of replica nodes per master

  • Password

  • Safely Scaling the Redis Cluster

  • Backup and Restore

  • Persistent Volume

  • Custom Configuration

  • Prometheus Discovery

Quick Start

Deploy redis cluster operator

Install step by step

Register the DistributedRedisCluster and RedisClusterBackup custom resource definition (CRD).

$ kubectl create -f deploy/crds/redis.kun_distributedredisclusters_crd.yaml
$ kubectl create -f deploy/crds/redis.kun_redisclusterbackups_crd.yaml

A namespace-scoped operator watches and manages resources in a single namespace, whereas a cluster-scoped operator watches and manages resources cluster-wide. You can choose to run your operator as namespace-scoped or cluster-scoped.

```
# cluster-scoped
$ kubectl create -f deploy/serviceaccount.yaml
$ kubectl create -f deploy/cluster/clusterrole.yaml
$ kubectl create -f deploy/cluster/clusterrolebinding.yaml
$ kubectl create -f deploy/cluster/operator.yaml
```

```
# namespace-scoped
$ kubectl create -f deploy/serviceaccount.yaml
$ kubectl create -f deploy/namespace/role.yaml
$ kubectl create -f deploy/namespace/rolebinding.yaml
$ kubectl create -f deploy/namespace/operator.yaml
```
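For operators built with the Operator framework, the scope is typically controlled by the `WATCH_NAMESPACE` environment variable on the operator Deployment, where an empty value means cluster-scoped. This is the general operator-sdk convention, shown here as an assumption about this project's manifests:

```yaml
# Illustrative excerpt from an operator Deployment (compare deploy/namespace/operator.yaml).
env:
  - name: WATCH_NAMESPACE
    valueFrom:
      fieldRef:
        # namespace-scoped: watch only the namespace the operator runs in;
        # cluster-scoped manifests instead set WATCH_NAMESPACE to ""
        fieldPath: metadata.namespace
```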

Install using helm chart

Add Helm repository

helm repo add ucloud-operator https://ucloud.github.io/redis-cluster-operator/
helm repo update

Install chart

helm install --generate-name ucloud-operator/redis-cluster-operator

Verify that the redis-cluster-operator is up and running:

$ kubectl get deployment
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
redis-cluster-operator   1/1     1            1           1d

Usage

Deploy a sample Redis Cluster

NOTE: Only a Redis cluster that uses persistent storage (PVC) can recover after accidental deletion or a rolling update. Even if you do not use Redis persistence (such as RDB or AOF), you still need to configure a PVC for Redis.
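A minimal sketch of a CR with persistent storage. The `storage` field names and the StorageClass are assumptions; see deploy/example/persistent.yaml for the authoritative example:

```yaml
apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  name: example-distributedrediscluster
spec:
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine
  storage:
    type: persistent-claim   # assumed field names: back each pod with a PVC
    size: 1Gi
    class: standard          # hypothetical StorageClass name
    deleteClaim: true        # assumed: remove PVCs when the cluster is deleted
```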

$ kubectl apply -f deploy/example/redis.kun_v1alpha1_distributedrediscluster_cr.yaml

Verify that the cluster instances and their components are running.

```
$ kubectl get distributedrediscluster
NAME                              MASTERSIZE   STATUS    AGE
example-distributedrediscluster   3            Scaling   11s

$ kubectl get all -l redis.kun/name=example-distributedrediscluster
NAME                                          READY   STATUS    RESTARTS   AGE
pod/drc-example-distributedrediscluster-0-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-0-1   1/1     Running   0          2m8s
pod/drc-example-distributedrediscluster-1-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-1-1   1/1     Running   0          2m13s
pod/drc-example-distributedrediscluster-2-0   1/1     Running   0          2m48s
pod/drc-example-distributedrediscluster-2-1   1/1     Running   0          2m15s

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
service/example-distributedrediscluster     ClusterIP   172.17.132.71   <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-0   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-1   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s
service/example-distributedrediscluster-2   ClusterIP   None            <none>        6379/TCP,16379/TCP   2m48s

NAME                                                     READY   AGE
statefulset.apps/drc-example-distributedrediscluster-0   2/2     2m48s
statefulset.apps/drc-example-distributedrediscluster-1   2/2     2m48s
statefulset.apps/drc-example-distributedrediscluster-2   2/2     2m48s

$ kubectl get distributedrediscluster
NAME                              MASTERSIZE   STATUS    AGE
example-distributedrediscluster   3            Healthy   4m
```
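The output above shows one StatefulSet per master and two pods per StatefulSet, i.e. masterSize × (1 + replicas per master) pods in total. A quick sanity check for the expected pod count:

```shell
# Expected pod count: one pod per master plus clusterReplicas slaves per master.
masterSize=3
clusterReplicas=1
expected_pods=$((masterSize * (1 + clusterReplicas)))
echo "$expected_pods"
```

You can compare this against the live count with `kubectl get pods -l redis.kun/name=example-distributedrediscluster --no-headers | wc -l`.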

Scaling Up the Redis Cluster

Increase the masterSize to trigger the scaling up.

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator runs as cluster-scoped, add this annotation
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # Increase the masterSize to trigger the scaling.
  masterSize: 4
  clusterReplicas: 1
  image: redis:5.0.4-alpine

Scaling Down the Redis Cluster

Decrease the masterSize to trigger the scaling down.

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator runs as cluster-scoped, add this annotation
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # Decrease the masterSize to trigger the scaling down.
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine

Backup and Restore

NOTE: Only Ceph S3 object storage and PVC are supported now

Backup

$ kubectl create -f deploy/example/backup-restore/redisclusterbackup_cr.yaml
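A backup CR references the cluster to back up and a storage backend. The sketch below is hypothetical and every field name in it is an assumption; consult deploy/example/backup-restore/redisclusterbackup_cr.yaml for the real schema:

```yaml
apiVersion: redis.kun/v1alpha1
kind: RedisClusterBackup
metadata:
  name: example-backup
spec:
  redisClusterName: example-distributedrediscluster  # assumed field: the cluster to back up
  storageSecretName: ceph-secret                     # assumed field: credentials for the S3 backend
  s3:
    endpoint: ceph.example.com                       # hypothetical Ceph S3 endpoint
    bucket: redis-backups                            # hypothetical bucket name
```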

Restore from backup

$ kubectl create -f deploy/example/backup-restore/restore.yaml

Prometheus Discovery

$ kubectl create -f deploy/example/prometheus-exporter.yaml

Create Redis Cluster with password

$ kubectl create -f deploy/example/custom-password.yaml
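The password is supplied through a Kubernetes Secret referenced from the cluster CR. In this sketch the `passwordSecret` field and the secret key are assumptions; see deploy/example/custom-password.yaml for the actual layout:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis-password
type: Opaque
stringData:
  password: "s3cret"          # hypothetical password; the key name is an assumption
---
apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  name: example-distributedrediscluster
spec:
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine
  passwordSecret:             # assumed field name
    name: redis-password
```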

Persistent Volume

$ kubectl create -f deploy/example/persistent.yaml

Custom Configuration

$ kubectl create -f deploy/example/custom-config.yaml
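Custom Redis settings are declared in the CR and rendered into the Redis configuration. A sketch assuming a `config` map of key/value pairs (field name assumed; see deploy/example/custom-config.yaml):

```yaml
spec:
  config:                     # assumed field: entries merged into redis.conf
    maxmemory-policy: allkeys-lru
    appendonly: "yes"
```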

Custom Service

$ kubectl create -f deploy/example/custom-service.yaml

Custom Resource

$ kubectl create -f deploy/example/custom-resources.yaml
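"Custom Resource" here refers to CPU/memory requests and limits for the Redis containers. A sketch using the standard Kubernetes ResourceRequirements shape, assuming the CR passes it through to the pods (see deploy/example/custom-resources.yaml):

```yaml
spec:
  resources:                  # assumed to map to the Redis containers' resources
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
```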

ValidatingWebhook

see ValidatingWebhook

End to end tests

see e2e
