CephFS Provisioner Best Practices: Simplifying Kubernetes Storage Configuration with Helm

As cloud-native and containerized applications continue to gain adoption, Kubernetes, as the leading container orchestration platform, has a pressing need for efficient storage solutions. Deploying the CephFS provisioner with Helm offers a simple yet powerful approach that noticeably improves data management and access in a Kubernetes cluster.

01

Ceph Operations

1. Create the storage pools

$ ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created

$ ceph osd pool create cephfs-data 64 64   
pool 'cephfs-data' created
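The pg_num values above (32 for metadata, 64 for data) follow the usual rule of thumb: total PGs across pools ≈ (OSD count × 100) / replica size, taken to a power of two. A minimal sketch of that arithmetic, assuming 3 OSDs and a replica size of 3 (the OSD count is an assumption for this three-node lab, not stated in the original):

```shell
# Rough PG sizing sketch. Assumptions: 3 OSDs, replica size 3.
# Rule of thumb: total PGs ~= (OSDs * 100) / replicas, then take a power of two.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))          # 100
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do    # largest power of two <= target
  pg=$(( pg * 2 ))
done
echo "$pg"                                   # 64
```

Here 64 goes to the busier data pool, with a smaller power of two (32) for the metadata pool. For production sizing, the Ceph PG calculator is the authoritative reference.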

2. Create the CephFS filesystem

$ ceph fs new cephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 8

3. Gather Ceph cluster information

$ ceph mon dump
dumped monmap epoch 2
epoch 2
fsid a43fa047-755e-4208-af2d-f6090154f902
last_changed 2024-08-12T20:34:52.706720+0800
created 2024-08-08T14:48:39.332770+0800
min_mon_release 15 (octopus)
0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.storage-ceph01
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.storage-ceph03
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.storage-ceph02
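The monitor endpoints needed later in the provisioner's csiConfig can be pulled straight out of this dump. A small sed sketch (the dump text is inlined here for illustration; on a live cluster, pipe `ceph mon dump` into the same sed command):

```shell
# Extract the v1 (ip:6789) monitor endpoints from `ceph mon dump` output.
# The dump lines are inlined for illustration only.
mon_dump='0: [v2:172.139.20.20:3300/0,v1:172.139.20.20:6789/0] mon.storage-ceph01
1: [v2:172.139.20.94:3300/0,v1:172.139.20.94:6789/0] mon.storage-ceph03
2: [v2:172.139.20.208:3300/0,v1:172.139.20.208:6789/0] mon.storage-ceph02'
# Keep only the "ip:6789" part after each v1: marker, one endpoint per line.
monitors=$(echo "$mon_dump" | sed -n 's/.*v1:\([0-9.]*:6789\).*/\1/p')
echo "$monitors"
```

The three resulting `ip:6789` endpoints are exactly the values listed under `csiConfig.monitors` in the values file below.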

02

Deploying the CephFS Provisioner with Helm

1. Download the chart and push it to a private Harbor registry

$ curl -L -O https://github.com/ceph/ceph-csi/archive/refs/tags/v3.9.0.tar.gz
$ tar xvf v3.9.0.tar.gz -C /tmp/
$ cd /tmp/ceph-csi-3.9.0/charts
$ helm package ceph-csi-cephfs
$ helm push ceph-csi-cephfs-3.9.0.tgz oci://core.jiaxzeng.com/plugins

2. Prepare the CephFS provisioner values file

$ cat <<'EOF' | sudo tee /etc/kubernetes/addons/ceph-csi-cephfs-values.yaml > /dev/null
nodeplugin:
  # Name override for the nodeplugin resources
  fullnameOverride: ceph-csi-cephfs-nodeplugin
  # Image locations (pulled from the private registry)
  registrar:
    image:
      repository: 172.139.20.170:5000/library/csi-node-driver-registrar
  plugin:
    image:
      repository: 172.139.20.170:5000/library/cephcsi
      tag: v3.9.0
  # Tolerations so the nodeplugin is scheduled on every node
  tolerations:
  - operator: Exists

provisioner:
  # Name override for the provisioner resources
  fullnameOverride: ceph-csi-cephfs-provisioner
  # Image locations (pulled from the private registry)
  provisioner:
    image:
      repository: 172.139.20.170:5000/library/csi-provisioner
  resizer:
    image:
      repository: 172.139.20.170:5000/library/csi-resizer
  snapshotter:
    image:
      repository: 172.139.20.170:5000/library/csi-snapshotter

# kubelet data directory
kubeletDir: /var/lib/kubelet
# Driver name (i.e. the provisioner)
driverName: cephfs.csi.ceph.com
# ConfigMap name for the CSI configuration
configMapName: cephfs-csi-config
# ConfigMap name for the Ceph configuration
cephConfConfigMapName: cephfs-config

# ceph.conf contents
cephconf: |
  [global]
  fsid = a43fa047-755e-4208-af2d-f6090154f902
  cluster_network = 172.139.20.0/24
  mon_initial_members = storage-ceph01, storage-ceph02, storage-ceph03
  mon_host = 172.139.20.20,172.139.20.208,172.139.20.94
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx

# CSI configuration
csiConfig:
- clusterID: a43fa047-755e-4208-af2d-f6090154f902
  monitors:
  - "172.139.20.20:6789"
  - "172.139.20.94:6789"
  - "172.139.20.208:6789"

storageClass:
  create: true
  name: ceph-fs-storage # StorageClass name
  clusterID: a43fa047-755e-4208-af2d-f6090154f902 # Ceph cluster ID (fsid)
  fsName: cephfs # CephFS filesystem name
  fstype: xfs
  reclaimPolicy: Retain
  allowVolumeExpansion: true

secret:
  create: true
  name: csi-cephfs-secret
  adminID: admin # admin is recommended
  adminKey: AQBiarRmA+FiDRAAH9TqQmxuF+iiJR0jM17Pdw== # key field from /etc/ceph/ceph.client.admin.keyring
EOF
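The adminKey value comes from the admin keyring. A small sketch of pulling it out of the keyring file (the keyring body is inlined here for illustration, with hypothetical caps lines; on a live cluster, `ceph auth get-key client.admin` prints the same value):

```shell
# Read the key field from a ceph.client.admin.keyring-style file.
# The keyring content is inlined for illustration; normally you would read
# /etc/ceph/ceph.client.admin.keyring or run `ceph auth get-key client.admin`.
keyring='[client.admin]
	key = AQBiarRmA+FiDRAAH9TqQmxuF+iiJR0jM17Pdw==
	caps mon = "allow *"
	caps mds = "allow *"'
# Split on " = " and print the value of the first field that ends in "key".
admin_key=$(echo "$keyring" | awk -F' = ' '$1 ~ /key$/ {print $2; exit}')
echo "$admin_key"
```

Note that the key goes into the values file as the raw keyring string; the chart creates the Kubernetes Secret itself, so no manual base64 encoding is needed here.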

Tip (hard-earned lesson): if the driver was originally deployed from raw manifests and PVCs have already been recreated, do not delete the secret that the old StorageClass referenced — the existing PVs still hold a reference to that secret. Otherwise, recreated pods will hang in ContainerCreating.

3. Deploy the CephFS provisioner

$ helm -n storage-system upgrade csi-cephfs -f /etc/kubernetes/addons/ceph-csi-cephfs-values.yaml oci://core.jiaxzeng.com/plugins/ceph-csi-cephfs --version 3.9.0
Pulled: core.jiaxzeng.com/plugins/ceph-csi-cephfs:3.9.0
Digest: sha256:092b853cde5870b709845aff209a336c8f9d15b5c9b02f57ed03fcfd93caf4c6
Release "csi-cephfs" has been upgraded. Happy Helming!
NAME: csi-cephfs
LAST DEPLOYED: Wed Jan 22 17:09:55 2025
NAMESPACE: storage-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Examples on how to configure a storage class and start using the driver are here:
https://github.com/ceph/ceph-csi/tree/devel/examples/cephfs

03

Verification

1. Check that the pods are running

$ kubectl -n storage-system get pod | grep cephfs
ceph-csi-cephfs-nodeplugin-2p2ll               3/3     Running   3 (5m41s ago)   17h
ceph-csi-cephfs-nodeplugin-66fvc               3/3     Running   3 (5m39s ago)   17h
ceph-csi-cephfs-nodeplugin-677t7               3/3     Running   3 (5m34s ago)   17h
ceph-csi-cephfs-nodeplugin-9tszk               3/3     Running   3 (5m31s ago)   17h
ceph-csi-cephfs-nodeplugin-bqr9w               3/3     Running   3 (5m39s ago)   17h
ceph-csi-cephfs-nodeplugin-dxtt6               3/3     Running   3 (5m33s ago)   17h
ceph-csi-cephfs-nodeplugin-kgzn8               3/3     Running   3 (5m28s ago)   17h
ceph-csi-cephfs-provisioner-64bcd4bdb4-7bsht   5/5     Running   5 (5m34s ago)   17h
ceph-csi-cephfs-provisioner-64bcd4bdb4-9wkhp   5/5     Running   5 (5m33s ago)   17h
ceph-csi-cephfs-provisioner-64bcd4bdb4-v2mxv   5/5     Running   5 (5m30s ago)   17h

2. Create a PVC and mount it in a Deployment

$ cat << EOF | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tools-cephfs
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  storageClassName: ceph-fs-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools-cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools-cephfs
  template:
    metadata:
      labels:
        app: tools-cephfs
    spec:
      containers:
      - image: core.jiaxzeng.com/library/tools:v1.3
        name: tools
        volumeMounts:
        - name: data
          mountPath: /app
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: tools-cephfs
EOF

3. Verify the pod

$ kubectl get pod -l app=tools-cephfs
NAME                           READY   STATUS    RESTARTS   AGE
tools-cephfs-d4b6748ff-vnppd   1/1     Running   0          78s

$ kubectl exec -it deploy/tools-cephfs -- df -h /app
Filesystem                                                                                                                                                Size  Used Avail Use% Mounted on
172.139.20.20:6789,172.139.20.94:6789,172.139.20.208:6789:/volumes/csi/csi-vol-e4c8c737-6b6a-4fb7-b4f2-243a9200b1da/d3606737-4783-445d-8f5d-2c62398164fe  3.0G     0  3.0G   0% /app

04

Conclusion

To sum up, deploying the CephFS provisioner with Helm not only simplifies an otherwise involved storage setup, it also gives the Kubernetes environment stable, high-performance persistent storage. Development teams can focus on building and evolving their applications instead of worrying about the underlying infrastructure, which makes this approach a natural fit for modern cloud architectures.
