Redis Cluster on Kubernetes

YAML file location

NFS server IP

192.168.1.1

NFS server export configuration (/etc/exports)

/data/nfs/redis *(rw,sync,no_root_squash)
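A quick sanity check of that export line, sketched below. The activation commands are shown as comments since they need root and nfs-utils on the server; the point to verify is that `no_root_squash` is spelled correctly, because the provisioner writes to the share as root and a squashed root gets permission errors.

```shell
# On the NFS server (192.168.1.1), after editing /etc/exports you would run:
#   exportfs -ra              # reload /etc/exports
#   showmount -e localhost    # confirm /data/nfs/redis is exported
EXPORT_LINE='/data/nfs/redis *(rw,sync,no_root_squash)'
# Sanity check: root squashing must be disabled for the provisioner to write as root.
echo "$EXPORT_LINE" | grep -q 'no_root_squash' && echo "export line OK"
```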

Create the RBAC roles and bindings

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-provisioner
subjects:
- kind: ServiceAccount
  name: redis-nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: redis-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-provisioner
subjects:
- kind: ServiceAccount
  name: redis-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: redis-provisioner
  apiGroup: rbac.authorization.k8s.io
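A subtle failure mode here: the `subjects` name in both bindings must exactly match the `metadata.name` of the ServiceAccount created in the next step, or the provisioner runs with no permissions and every PVC stays Pending. A minimal sketch of the invariant:

```shell
# Both bindings grant permissions to a ServiceAccount identified by (namespace, name);
# a mismatched name silently grants the permissions to a non-existent account.
SA_NAME='redis-nfs-client-provisioner'          # ServiceAccount metadata.name
BINDING_SUBJECT='redis-nfs-client-provisioner'  # subjects[0].name in both bindings
[ "$SA_NAME" = "$BINDING_SUBJECT" ] && echo "RBAC subject matches ServiceAccount"
# On a live cluster you could confirm with (not run here):
#   kubectl auth can-i create persistentvolumes \
#     --as=system:serviceaccount:default:redis-nfs-client-provisioner
```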

Create the ServiceAccount and deploy the NFS client provisioner

apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: redis-nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis-nfs-client-provisioner
    spec:
      serviceAccountName: redis-nfs-client-provisioner
      containers:
      - name: redis-nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.1
        - name: NFS_PATH
          value: /data/nfs/redis
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.1
          path: /data/nfs/redis
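Note that the `NFS_SERVER`/`NFS_PATH` env vars and the `nfs-client-root` volume must describe the same export: the provisioner creates per-PV directories under its own `/persistentvolumes` mount and advertises them using the env values. A sketch of that consistency check:

```shell
# Values copied from the Deployment above; if env and volume disagree, PVs are
# created in one place but advertised at another, and mounts fail on the nodes.
ENV_NFS_SERVER='192.168.1.1';  ENV_NFS_PATH='/data/nfs/redis'   # container env
VOL_NFS_SERVER='192.168.1.1';  VOL_NFS_PATH='/data/nfs/redis'   # volumes[0].nfs
[ "$ENV_NFS_SERVER" = "$VOL_NFS_SERVER" ] && [ "$ENV_NFS_PATH" = "$VOL_NFS_PATH" ] \
  && echo "provisioner env and volume agree"
```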

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: redis-managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
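The `provisioner` field is how PVCs find the provisioner: it must equal the `PROVISIONER_NAME` env var in the Deployment above, otherwise claims against this class are never bound. Sketched as a check:

```shell
SC_PROVISIONER='fuseim.pri/ifs'   # StorageClass .provisioner
ENV_PROVISIONER='fuseim.pri/ifs'  # PROVISIONER_NAME env in the provisioner Deployment
[ "$SC_PROVISIONER" = "$ENV_PROVISIONER" ] \
  && echo "StorageClass is served by this provisioner"
```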

Deploy the Redis cluster

---
apiVersion: v1
kind: Service
metadata:
  name: redis-headless
  labels:
    app: redis
spec:
  publishNotReadyAddresses: true
  ports:
    - port: 6379
      name: server
      targetPort: 6379
  clusterIP: None
  selector:
    app: redis
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis-config: |
    appendonly yes
    cluster-enabled yes
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-node-timeout 5000
    dir /var/lib/redis
    port 6379
    masterauth password
    requirepass password
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis-headless
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
    spec:
      serviceAccountName: redis-nfs-client-provisioner
      containers:
        - name: redis
          imagePullPolicy: IfNotPresent
          image: redis:latest
          command: [ "/bin/sh","-c","redis-server /etc/redis/redis.conf"]
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 6379
              name: client-port
            - containerPort: 16379
              name: cluster-port
          readinessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - "redis-cli -a password -h $(hostname) ping"
            initialDelaySeconds: 15
            timeoutSeconds: 15
          livenessProbe:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - "redis-cli -a password -h $(hostname) ping"
          volumeMounts:
            - name: "redis-conf"
              mountPath: "/etc/redis"
            - name: "redis-data"
              mountPath: "/var/lib/redis"
      volumes:
        - name: "redis-conf"
          configMap:
            name: "redis-config"
            items:
              - key: "redis-config"
                path: "redis.conf"
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        storageClassName: redis-managed-nfs-storage
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 5Gi
  selector:
    matchLabels:
      app: redis
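Why 6 replicas: `redis-cli --cluster create --cluster-replicas 1` (used in the next step) pairs every master with exactly one replica, so the pod count must be divisible by masters plus replicas. The arithmetic:

```shell
# 6 pods with one replica per master yields 3 masters + 3 replicas.
REPLICAS=6          # StatefulSet spec.replicas
CLUSTER_REPLICAS=1  # --cluster-replicas argument
MASTERS=$(( REPLICAS / (1 + CLUSTER_REPLICAS) ))
echo "masters=$MASTERS replicas_per_master=$CLUSTER_REPLICAS"
```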

Initialize the cluster

echo "yes" | kubectl exec -i redis-0 -- redis-cli -a password --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
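What the jsonpath loop produces is a space-separated `<podIP>:6379` list that `redis-cli --cluster create` consumes. The same expansion, simulated below with hypothetical pod IPs instead of a live cluster:

```shell
# Stand-in for: kubectl get pods -l app=redis \
#   -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}'
POD_IPS='10.244.0.11 10.244.0.12 10.244.0.13 10.244.0.14 10.244.0.15 10.244.0.16'
NODES=''
for ip in $POD_IPS; do
  NODES="$NODES$ip:6379 "   # append "<ip>:6379 " for each pod
done
echo "$NODES"
```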