0. Prerequisites
Prepare a working k8s cluster with at least one master node and one worker node; the master node has been initialized and the worker nodes have joined it.
k8s version: 1.18.0
KubeSphere version: v3.1.1
master node: 2 CPUs, 5 GB RAM, 40 GB disk
node1 node: 2 CPUs, 3 GB RAM, 25 GB disk
node2 node: 2 CPUs, 3 GB RAM, 25 GB disk
master, node1, and node2 are the hostnames:
master 192.168.177.132
node1 192.168.177.133
node2 192.168.177.134
1. Configure NFS as the default storage backend for the k8s cluster
1. Run on all nodes
# install on every machine
yum install -y nfs-utils
2. Run on the master node
# NFS server: create the shared directory, then export it
mkdir -p /nfs/data
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# enable at boot & start now -- remote binding service
systemctl enable rpcbind --now
# NFS service
systemctl enable nfs-server --now
# apply the export configuration
exportfs -r
# verify the export
exportfs
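The options on the `/etc/exports` line above control access: `insecure` permits client ports above 1024, `rw` grants read-write access, `sync` flushes writes to disk before replying, and `no_root_squash` lets a remote root user keep root privileges on the share. A minimal sketch that assembles the same line from variables (variable names are illustrative, not part of the guide):

```shell
# Illustrative sketch: build the /etc/exports entry from variables so the
# shared path and export options are easy to adjust before writing the file.
EXPORT_PATH="/nfs/data/"
EXPORT_OPTS="insecure,rw,sync,no_root_squash"
EXPORT_LINE="${EXPORT_PATH} *(${EXPORT_OPTS})"
echo "$EXPORT_LINE"
# To apply on a real server (not run here):
#   echo "$EXPORT_LINE" > /etc/exports && exportfs -r
```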
3. Run on the worker nodes
Replace 192.168.177.132 with your master node's address.
# list the directories the remote machine exports -- use the master's IP address
showmount -e 192.168.177.132
# create the local mount point for the NFS server's shared directory
mkdir -p /nfs/data
# mount the remote share
mount -t nfs 192.168.177.132:/nfs/data /nfs/data
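Note that a mount made this way does not survive a reboot. A minimal sketch of the matching /etc/fstab entry that would remount the share at boot (the IP is this guide's example master address; substitute your own):

```shell
# Illustrative sketch: build the /etc/fstab line for remounting the NFS
# share automatically at boot.
NFS_SERVER="192.168.177.132"  # assumption: your master/NFS server address
FSTAB_LINE="${NFS_SERVER}:/nfs/data /nfs/data nfs defaults 0 0"
echo "$FSTAB_LINE"
# To apply on a real worker (not run here):
#   echo "$FSTAB_LINE" >> /etc/fstab && mount -a
```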
4. Test
# write a test file on any machine
echo "hello nfs" > /nfs/data/test.txt
# read it back on another machine
cat /nfs/data/test.txt
2. Configure a default storage class with dynamic provisioning
1. Create a file named storageclass.yaml
Contents below; one change is required:
replace 192.168.177.132 with your master node's address and leave the rest unchanged.
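The address substitution can also be scripted instead of edited by hand. A minimal sed sketch (the `NFS_IP` value and the demo file names are illustrative; in practice you would run the sed command against storageclass.yaml):

```shell
# Illustrative sketch: substitute your NFS server address for the example IP
# everywhere it appears in a manifest, writing the result to a new file.
NFS_IP="10.0.0.5"   # assumption: your real NFS server address
# stand-in for storageclass.yaml so the sketch is self-contained:
printf 'server: 192.168.177.132\npath: /nfs/data\n' > demo.yaml
sed "s/192\.168\.177\.132/${NFS_IP}/g" demo.yaml > demo.local.yaml
cat demo.local.yaml
```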
## creates a storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" ## whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.177.132 ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data ## directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.177.132 ## your NFS server address
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolume