Creating an NFS Storage Provisioner and StorageClass in K8s

1 Create the NFS shared directory

# Create the directory
sudo mkdir -p /data/k8s

# Grant permissions
sudo chmod 777 /data/k8s

# Edit the exports file
sudo vim /etc/exports

# Add the following line
# The old "*" wildcard form below works on older versions, but may cause problems on newer ones
# /data/k8s	192.168.108.*(rw,sync,no_subtree_check)
/data/k8s	192.168.108.0/24(rw,sync,no_subtree_check)

# Restart the NFS service
sudo service nfs-kernel-server restart

# Check the exported directories
sudo showmount -e 192.168.108.100
# Output like the following means the export was created successfully
Export list for 192.168.108.100:
/data/k8s	192.168.108.0/24
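
The steps above assume the NFS server software is already installed. If it is not, install nfs-kernel-server on the NFS server first; every Kubernetes node also needs the NFS client utilities so kubelet can mount the share. A minimal sketch for Ubuntu/Debian hosts:

# On the NFS server (192.168.108.100)
sudo apt update && sudo apt install -y nfs-kernel-server

# On every Kubernetes node, install the NFS client tools
sudo apt update && sudo apt install -y nfs-common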

2 Set up permissions for the storage provisioner

Create the nfs-client-provisioner-authority.yaml file

# nfs-client-provisioner-authority.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
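
Apply the RBAC manifest (assuming the file is in the current directory) and confirm the ServiceAccount exists:

kubectl apply -f nfs-client-provisioner-authority.yaml

# Verify
kubectl get serviceaccount nfs-client-provisioner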

3 Create the NFS storage provisioner

Create the nfs-client-provisioner.yaml file

# nfs-client-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # Name of the storage provisioner
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            # NFS server address; replace with your own IP
            - name: NFS_SERVER
              value: 192.168.108.100
            # Path of the NFS shared directory
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            # Replace with your own IP
            server: 192.168.108.100
            # The shared directory exported by the NFS server
            path: /data/k8s
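
Deploy the provisioner and make sure its Pod reaches the Running state before continuing; a quick check might look like this:

kubectl apply -f nfs-client-provisioner.yaml

# The Pod should become Running; if not, inspect its logs
kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner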

4 Create the StorageClass

Create the nfs-storage-class.yaml file

# nfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-data

# Name of the storage provisioner
# Must match the PROVISIONER_NAME env value in nfs-client-provisioner.yaml
provisioner: nfs-provisioner

# Allow PVCs to be expanded after creation
allowVolumeExpansion: true

parameters:
  # Deletion policy: "true" means that when the bound PV is deleted, its data on the
  # NFS share is archived (renamed with an "archived-" prefix) instead of being removed
  archiveOnDelete: "true"
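
Apply the StorageClass:

kubectl apply -f nfs-storage-class.yaml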

View the StorageClass

kubectl get storageclass

5 Create a PVC

Create the nfs-pvc.yaml file

# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Note: the Deployment later refers to this name when requesting the volume
  name: nfs-pvc
spec:
  # Access mode; ReadWriteMany allows the volume to be mounted read-write by multiple nodes
  accessModes:
    - ReadWriteMany
 
  # StorageClass to request storage from
  # Note: this must match metadata.name of the StorageClass defined in nfs-storage-class.yaml
  storageClassName: nfs-data
 
  # Requested storage size
  resources:
    requests:
      storage: 100Mi
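
Apply the PVC:

kubectl apply -f nfs-pvc.yaml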

View the PVC

kubectl get pvc
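
If dynamic provisioning is working, the PVC should show a Bound status, and the provisioner will have created a matching PV plus a subdirectory under /data/k8s on the NFS server. You can confirm with:

kubectl get pv

# Run on the NFS server
ls /data/k8s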

6 Create a Deployment controller

Create the nfs-deployment-python.yaml file

# nfs-deployment-python.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deployment-python
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-nfs
  template:
    metadata:
      labels:
        app: python-nfs
    spec:
      containers:
        - name: python-nfs
          image: python:3.8.2
          imagePullPolicy: IfNotPresent
          command: ['/bin/bash', '-c', '--']
          # Start "python -m http.server 80"; ">>" appends to the file instead of overwriting it
          args: ['echo "<p> The host is $(hostname) </p>" >> /containerdata/podinfor; python -m http.server 80']
          # Expose port 80
          ports:
            - name: http
              containerPort: 80
          # Mount points
          volumeMounts:
            # This name must match an entry under volumes
            - name: python-nfs-data
              mountPath: /containerdata

      # Configure the NFS-backed volume
      volumes:
        # This name must match spec.containers.volumeMounts.name
        - name: python-nfs-data
          # Request storage through the PVC; this name matches metadata.name in nfs-pvc.yaml
          persistentVolumeClaim:
            claimName: nfs-pvc
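
Apply the Deployment and verify that both replicas write to the shared volume. A rough test (Pod IPs will differ, and the curl path assumes the http.server process serves from the container's root directory):

kubectl apply -f nfs-deployment-python.yaml

# Wait until both Pods are Running and note their IPs
kubectl get pods -l app=python-nfs -o wide

# From a node or Pod inside the cluster, replace <POD_IP> with one of the addresses above
curl http://<POD_IP>/containerdata/podinfor

# On the NFS server, both Pods should have appended a line to the shared file
cat /data/k8s/*/podinfor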

7 Screenshots
