Dynamically Provisioning PVs with a StorageClass

With rook-ceph installed and running, we can start trying out dynamic PV provisioning with a StorageClass.

Running stateful middleware on Kubernetes almost always relies on a StorageClass to dynamically provision PVs (in the cloud this is less of a concern, since cloud block storage works well, but for self-study and practice Ceph is the more practical choice). Here we give dynamic PV provisioning a quick test drive, as preparation for later using it with redis, zookeeper, and elasticsearch.

1. Create the StorageClass and the storage pool

kubectl create -f rook/deploy/examples/csi/rbd/storageclass.yaml

Check the created cephblockpool and StorageClass:

kubectl get cephblockpool -n rook-ceph
kubectl get sc

The output should list the replicapool CephBlockPool in the rook-ceph namespace and the rook-ceph-block StorageClass.

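Optionally, the pool can also be verified from inside Ceph via the Rook toolbox. This is a quick sanity-check sketch and assumes the toolbox Deployment from deploy/examples/toolbox.yaml (named rook-ceph-tools by default) has been applied:

# List the Ceph pools through the toolbox; replicapool should appear in the output
# (assumes deploy/examples/toolbox.yaml has been applied)
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls
# Overall capacity and per-pool usage, useful to confirm the pool is healthy
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
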
The contents of rook/deploy/examples/csi/rbd/storageclass.yaml are as follows:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 2
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph # namespace:cluster

  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool

  # (optional) mapOptions is a comma-separated list of map options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # mapOptions: lock_on_read,queue_depth=1024

  # (optional) unmapOptions is a comma-separated list of unmap options.
  # For krbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
  # For nbd options refer
  # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
  # unmapOptions: force

  # (optional) Set it to true to encrypt each volume with encryption keys
  # from a key management system (KMS)
  # encrypted: "true"

  # (optional) Use external key management system (KMS) for encryption key by
  # specifying a unique ID matching a KMS ConfigMap. The ID is only used for
  # correlation to configmap entry.
  # encryptionKMSID: <kms-config-id>

  # RBD image format. Defaults to "2".
  imageFormat: "2"

  # RBD image features
  # Available for imageFormat: "2". Older releases of CSI RBD
  # support only the `layering` feature. The Linux kernel (KRBD) supports the
  # full complement of features as of 5.4
  # `layering` alone corresponds to Ceph's bitfield value of "2" ;
  # `layering` + `fast-diff` + `object-map` + `deep-flatten` + `exclusive-lock` together
  # correspond to Ceph's OR'd bitfield value of "63". Here we use
  # a symbolic, comma-separated format:
  # For 5.4 or later kernels:
  #imageFeatures: layering,fast-diff,object-map,deep-flatten,exclusive-lock
  # For 5.3 or earlier kernels:
  imageFeatures: layering

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
  # in hyperconverged settings where the volume is mounted on the same node as the osds.
  csi.storage.k8s.io/fstype: ext4
# uncomment the following to use rbd-nbd as mounter on supported nodes
# **IMPORTANT**: CephCSI v3.4.0 onwards a volume healer functionality is added to reattach
# the PVC to application pod if nodeplugin pod restart.
# Its still in Alpha support. Therefore, this option is not recommended for production use.
#mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

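Before wiring this into a StatefulSet, a quick way to exercise the new StorageClass is a standalone PVC. The sketch below is illustrative only; the claim name test-rbd-pvc is made up, while storageClassName matches the rook-ceph-block class defined above:

# Hypothetical test claim; the CSI provisioner should create and bind a PV for it automatically
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi

Apply it with kubectl apply -f, then watch kubectl get pvc; once the claim shows Bound, dynamic provisioning is working and the test claim can be deleted again.
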
2. Create an nginx StatefulSet that uses the StorageClass to dynamically provision a PV mounted at /usr/share/nginx/html. For stateful Pods, each Pod must have its own PV. redis, zookeeper, and elasticsearch all use a StorageClass to dynamically provision PVs in the same way.

kubectl apply -f test_volumnClainTemplates.yaml

Check the results with:

kubectl get po -l app=nginx
kubectl get pvc

The output should show Pods web-0 and web-1 in Running state, each with its own Bound PVC (www-web-0 and www-web-1).

The contents of test_volumnClainTemplates.yaml are as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2  
  template:
    metadata: 
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10 
      containers:
      - name: nginx 
        image: nginx 
        ports:
        - containerPort: 80 
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www 
    spec:
      accessModes: [ "ReadWriteOnce" ] 
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 200M

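Once web-0 and web-1 are Running, a simple way to confirm that each Pod really owns its own PV is to write a file into the mounted path, delete the Pod, and check that the data survives after the StatefulSet controller recreates it. The commands below are a sketch; the index.html file name is arbitrary:

# Write a marker file into the RBD-backed volume of web-0
kubectl exec web-0 -- sh -c 'echo "hello from web-0" > /usr/share/nginx/html/index.html'
# Delete the Pod; the StatefulSet recreates web-0 and re-attaches the same PVC (www-web-0)
kubectl delete pod web-0
# After web-0 is Running again, the file should still be there
kubectl exec web-0 -- cat /usr/share/nginx/html/index.html
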
### Tutorial: dynamic PV provisioning with a Kubernetes StorageClass

A `StorageClass` in Kubernetes provides a mechanism for creating PersistentVolumes (PVs) dynamically. It lets administrators define how storage is provisioned, while users simply declare what they need and receive the requested storage.

#### Creating a custom StorageClass

To enable dynamic provisioning, first define a `StorageClass` object. It carries the configuration for the storage provisioner as well as parameters such as the reclaim policy.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.1.100:6789,192.168.1.101:6789,192.168.1.102:6789
  adminId: admin
  adminSecretNamespace: kube-system
  pool: rbd
  imageFormat: "2"
  imageFeatures: layering
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
```

The YAML above defines a storage class named `standard-rbd`. The key fields are:

- **provisioner**: the service responsible for actually creating the volumes, in this example Ceph RBD.
- **parameters**: a set of key-value options passed to the provisioner to control its behavior.
- **reclaimPolicy**: what happens when a persistent volume is released; the options are Delete and Retain.
- **allowVolumeExpansion**: whether an existing volume can be expanded online.
- **mountOptions**: extra options used when mounting the filesystem.
- **volumeBindingMode**: when a volume is bound to a Pod; either Immediate or WaitForFirstConsumer.

#### Letting the default StorageClass generate PVs automatically

To simplify the workflow and let Kubernetes handle most of the details, use the default StorageClass feature. Once a StorageClass is marked as the default, any PVC that does not explicitly specify a StorageClass automatically falls back to it.

The example below shows a PVC that does not set storageClassName but still triggers dynamic provisioning:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

As long as a default StorageClass exists for the current environment, submitting this manifest kicks off the automated provisioning chain all the way through to the creation of the backing volume.

#### Verifying the dynamically provisioned result

The last step is to verify that the whole chain works. Check the newly created PersistentVolumes and Claims:

```bash
kubectl get pv,pvc --namespace=<your_namespace>
```

Ideally, you will see a new PersistentVolume created and bound to the corresponding Claim.
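
Tying this back to the rook-ceph-block class created earlier: marking it as the cluster default uses the standard annotation shown below. This is a sketch of the usual annotation-based approach; only do this if you actually want it to be the cluster-wide default.

```bash
# Mark rook-ceph-block as the default StorageClass so PVCs without storageClassName use it
kubectl patch storageclass rook-ceph-block \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
# The default class is listed with "(default)" after its name
kubectl get sc
```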