k8s cannot mount Ceph block storage

This post records a problem I ran into while mounting Ceph block storage in a Kubernetes cluster and how it was resolved. By examining the Pod status and the kubelet logs, the mount failure was tracked down, and installing the ceph-common package on the node fixed it.


I followed Tony Bai's blog to mount Ceph storage in k8s, but the Pod ended up stuck in the ContainerCreating state.

1. Environment

  • Tony Bai deploys both k8s and Ceph on the same two virtual machines.
  • In my environment, the k8s cluster and the Ceph storage cluster run on separate machines.

For deploying the Ceph storage cluster itself, you can follow Tony Bai's post or one of the many tutorials online; here I only record the problems I ran into when mounting Ceph block storage from k8s.
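
The PV in the next section references image ceph-image in pool rbd, which must already exist on the Ceph cluster. If it does not, a minimal sketch of creating it on a Ceph admin node follows (the 1G size simply matches the PV capacity below; the feature-disable step is only needed if the k8s node's kernel cannot map images with newer RBD features enabled):

$ rbd create ceph-image --size 1024 --pool rbd
$ rbd feature disable rbd/ceph-image exclusive-lock object-map fast-diff deep-flatten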

2. Configuration files

# ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEUGpCVlpnRWphREJBQUtMWFd5SVFsMzRaQ2JYMitFQW1wK2c9PQo=
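
The value of key is the base64-encoded Ceph admin key (the PV below connects as user admin). Assuming the ceph CLI is available on a Ceph node, it can be obtained roughly like this and pasted into the key field:

$ ceph auth get-key client.admin | base64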

##########################################
# ceph-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.100.81:6789
    pool: rbd
    image: ceph-image
    keyring: /etc/ceph/ceph.client.admin.keyring
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

##########################################
# ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

##########################################
# ceph-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox1
    image: 192.168.100.90:5000/duni/busybox:latest
    command: ["sleep", "600000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
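
The manifests are then created in dependency order; something along these lines (file names as in the headers above):

$ kubectl create -f ceph-secret.yaml
$ kubectl create -f ceph-pv.yaml
$ kubectl create -f ceph-pvc.yaml
$ kubectl create -f ceph-pod.yaml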

3. Finding the cause of the mount failure

Check the state of the objects:

$ kubectl get pv,pvc,pods
NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                REASON    AGE
pv/ceph-pv   1Gi        RWO           Recycle         Bound     default/ceph-claim             11s

NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/ceph-claim   Bound     ceph-pv   1Gi        RWO           10s

NAME                   READY     STATUS              RESTARTS   AGE
po/ceph-pod1           0/1       ContainerCreating   0          11s

ceph-pod1 stays stuck in the ContainerCreating state.

Check the Pod's events:

$ kubectl describe po/ceph-pod1

Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  2m        2m      1   {default-scheduler }            Normal      Scheduled   Successfully assigned ceph-pod1 to duni-node1
  6s        6s      1   {kubelet duni-node1}            Warning     FailedMount Unable to mount volumes for pod "ceph-pod1_default(6656394a-37b6-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
  6s        6s      1   {kubelet duni-node1}            Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]

Then, on the k8s node where ceph-pod1 is scheduled (duni-node1), check the kubelet log:

$ journalctl -u kubelet -f

May 13 15:09:52 duni-node1 kubelet[5167]: I0513 15:09:52.650241    5167 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/e38290de-33a7-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") pod "e38290de-33a7-11e7-b652-000c2932f92e" (UID: "e38290de-33a7-11e7-b652-000c2932f92e").
May 13 15:10:15 duni-node1 kubelet[5167]: E0513 15:10:15.801855    5167 kubelet.go:1813] Unable to mount volumes for pod "ceph-pod1_default(ef4e99c4-37aa-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]; skipping pod
May 13 15:10:15 duni-node1 kubelet[5167]: E0513 15:10:15.801930    5167 pod_workers.go:184] Error syncing pod ef4e99c4-37aa-11e7-b652-000c2932f92e, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
May 13 15:10:17 duni-node1 kubelet[5167]: I0513 15:10:17.252663    5167 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/ddee5d45-3490-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") to pod "ddee5d45-3490-11e7-b652-000c2932f92e" (UID: "ddee5d45-3490-11e7-b652-000c2932f92e"). Volume is already mounted to pod, but remount was requested.

4. Solution

The kubelet mounts RBD volumes by calling the rbd command-line tool, which ships in the ceph-common package, so ceph-common has to be installed on the k8s node(s):

yum install ceph-common
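
To double-check that the node can now talk to the Ceph cluster, a quick manual test can be run on the node (this assumes the admin keyring referenced by the PV has been copied to /etc/ceph/ on the node; the monitor address is the one from the PV above). The second command should list the ceph-image created earlier:

$ rbd --version
$ rbd -m 192.168.100.81:6789 --id admin --keyring /etc/ceph/ceph.client.admin.keyring ls rbd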

Delete ceph-pod1 and recreate it; after a short wait its status changes to Running.
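
Roughly:

$ kubectl delete pod ceph-pod1
$ kubectl create -f ceph-pod.yaml
$ kubectl get pod ceph-pod1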
