A Frog-in-the-Well View of DevOps--2--Lab Environment Preparation

I. Experiment Overview

All the sites built in this series are containerized and deployed into a k8s cluster. For the problems I ran into during the setup, I've attached links to the solutions throughout this blog.

II. Lab Environment Preparation

1. Deploying Harbor with docker-compose

For deploying Harbor with docker-compose, see my earlier article, k8s study notes 2 - setting up a Harbor private registry; I won't repeat the details here.
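As a quick reminder of what that deployment boils down to, here is a minimal sketch of the Harbor offline-installer flow; the version v2.5.3 is only an example, and harbor.yml needs your own hostname:

# download and unpack the offline installer (version is an example, pick a current one)
wget https://github.com/goharbor/harbor/releases/download/v2.5.3/harbor-offline-installer-v2.5.3.tgz
tar -zxvf harbor-offline-installer-v2.5.3.tgz && cd harbor
# copy the config template and set at least 'hostname' (plus certs if you want https)
cp harbor.yml.tmpl harbor.yml
# install.sh renders docker-compose.yml and brings Harbor up via docker-compose
./install.sh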

2. Setting up the k8s cluster

Steps:
1. To build a 1.23.6 cluster, see my earlier document, k8s installation notes (ubuntu).
2. To build a 1.24.2 cluster, see my earlier document, k8s study notes 1 - setting up a k8s environment (k8s version 1.24.3, ubuntu version 22.04).
3. To build a 1.25.2 cluster, follow the same link as in step 2; only one thing changes: that blog uses cri-dockerd 0.2.3, and you download 0.2.6 instead (sketched below). Everything else stays the same.
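For completeness, a rough sketch of that one changed step, assuming the Ubuntu 22.04 (jammy) .deb asset name from the cri-dockerd 0.2.6 release page and the flannel pod CIDR used in the earlier doc:

# install cri-dockerd 0.2.6 instead of 0.2.3 (check the release page for the exact asset name)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-jammy_amd64.deb
dpkg -i cri-dockerd_0.2.6.3-0.ubuntu-jammy_amd64.deb
systemctl enable --now cri-docker.socket
# point kubeadm at the cri-dockerd socket; flannel expects 10.244.0.0/16
kubeadm init --kubernetes-version=v1.25.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock

Once the nodes have joined, the cluster looks like this: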

root@k8s-master1:~# kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
k8s-master1   Ready    control-plane   80m   v1.25.2
k8s-node1     Ready    <none>          25m   v1.25.2
k8s-node2     Ready    <none>          76s   v1.25.2
root@k8s-master1:~# kubectl get pods -n kube-system 
NAME                                  READY   STATUS    RESTARTS         AGE
coredns-c676cc86f-52d68               1/1     Running   0                80m
coredns-c676cc86f-7hbs2               1/1     Running   0                80m
etcd-k8s-master1                      1/1     Running   0                80m
kube-apiserver-k8s-master1            1/1     Running   0                80m
kube-controller-manager-k8s-master1   1/1     Running   16 (7m18s ago)   80m
kube-proxy-2qmg2                      1/1     Running   0                93s
kube-proxy-9dt9l                      1/1     Running   0                25m
kube-proxy-vwpnd                      1/1     Running   0                80m
kube-scheduler-k8s-master1            1/1     Running   17 (9m38s ago)   80m
root@k8s-master1:~# kubectl get pods -n kube-flannel 
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-648q4   1/1     Running   0          110s
kube-flannel-ds-szhrf   1/1     Running   0          71m
kube-flannel-ds-vlhsq   1/1     Running   0          25m
root@k8s-master1:~# 

PS:
1. This 1.25.2 cluster went up fast by following my own earlier docs: about half an hour in total. Accumulating experience and writing it down as documents really pays off; it makes the next run much quicker.
2. This cluster still uses cri-dockerd. When I find the time I will rebuild it with containerd; these two documents look like good references (a rough sketch of the containerd steps follows):
Installing a Kubernetes 1.25.0 (k8s 1.25.0) HA cluster on ubuntu 22.04
Installing k8s 1.25 from scratch [latest k8s version - 20220904]
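For the record, my rough understanding of the containerd route (untried here; the sketch follows containerd's stock Ubuntu defaults rather than either article verbatim):

# install containerd and generate its default config
apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# kubeadm on Ubuntu 22.04 expects the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
# then point kubeadm at containerd's CRI socket instead of cri-dockerd's:
#   kubeadm init ... --cri-socket unix:///run/containerd/containerd.sock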

3. Setting up helm on the k8s cluster

I'm running Ubuntu 22.04, so installing helm is simply: snap install helm --classic.
For helm usage, the following documents are worth a read:
helm official documentation
Learning k8s: helm

helm command-line completion

helm error: Error: INSTALLATION FAILED: must either provide a name or specify --generate-name
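A few sanity checks after installing (the completion command and the --generate-name flag are standard helm; the bitnami repo is only an example chart source):

snap install helm --classic
# shell completion, per the article above
source <(helm completion bash)
helm repo add bitnami https://charts.bitnami.com/bitnami
# 'helm install bitnami/nginx' alone fails with "must either provide a name
# or specify --generate-name"; either name the release or let helm generate one:
helm install my-nginx bitnami/nginx
helm install bitnami/nginx --generate-name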

4. Setting up an NFS-backed StorageClass on the k8s cluster

a. Installing NFS

For installing the NFS server, see the blogs Installing nfs on Ubuntu 20.04 Server or Configuring the NFS service on Ubuntu 20.04.
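On Ubuntu the server side boils down to roughly this (stock package and service names; the full exports file is shown below):

apt-get install -y nfs-kernel-server
mkdir -p /opt/nfsv4/data /opt/nfsv4/back
# declare the shares in /etc/exports, then re-export and enable the service
exportfs -ra
systemctl enable --now nfs-kernel-server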
On k8s-master1, the exported directories are configured as follows:

root@k8s-master1:/opt/nfsv4/data# cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#		to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/opt/nfsv4 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check,crossmnt,fsid=0)                                         
/opt/nfsv4/data 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)                   
/opt/nfsv4/back 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)   
/home/k8s-nfs/gitlab/config 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/k8s-nfs/gitlab/logs 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/k8s-nfs/gitlab/data 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/k8s-nfs/jenkins 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/k8s-nfs/ 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)
/home/k8s-nfs/data 192.168.100.0/24(rw,sync,no_root_squash,no_subtree_check)

root@k8s-master1:/opt/nfsv4/data# 

On k8s-node1 and k8s-node2, mount /opt/nfsv4/data from k8s-master1 onto the local directory /root/data:

On k8s-node1:
(base) root@k8s-node1:~# mkdir data
(base) root@k8s-node1:~# mount -t nfs 192.168.100.200:/opt/nfsv4/data data
(base) root@k8s-node1:~# cd data/
(base) root@k8s-node1:~/data# ls
1111.txt  1.txt  2.txt
(base) root@k8s-node1:~/data# 

On k8s-node2:
root@k8s-node2:~# mkdir data
root@k8s-node2:~# mount -t nfs 192.168.100.200:/opt/nfsv4/data data
root@k8s-node2:~# cd data/
root@k8s-node2:~/data# ls
1111.txt  1.txt  2.txt
root@k8s-node2:~/data# 
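These mounts are one-shot; to have them survive a reboot, the usual approach is an /etc/fstab entry on each node, e.g.:

# /etc/fstab entry (standard fstab syntax; nfs-common must be installed on the client)
192.168.100.200:/opt/nfsv4/data  /root/data  nfs  defaults  0  0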

b. Deploying the StorageClass

I was impressed by this blog post, Understanding PV/PVC/StorageClass; its explanations are solid and it makes a valuable reference. After deploying from it, however, every PVC I created stayed stuck in Pending, and the provisioner pod's log showed the following error:

root@k8s-master1:~/k8s/nfs# kubectl logs nfs-client-provisioner-99cb8db6c-gm6g9 
I1012 05:01:40.018676       1 leaderelection.go:185] attempting to acquire leader lease  default/fuseim.pri-ifs...
E1012 05:01:57.425646       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fuseim.pri-ifs", GenerateName:"", Namespace:"default", SelfLink:"", UID:"53ba6be8-0a89-4652-88e2-82a9e217a58e", ResourceVersion:"658122", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63801140588, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-99cb8db6c-gm6g9_f50386f0-49ea-11ed-ad07-8ab7727f2abd\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2022-10-12T05:01:57Z\",\"renewTime\":\"2022-10-12T05:01:57Z\",\"leaderTransitions\":4}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-99cb8db6c-gm6g9_f50386f0-49ea-11ed-ad07-8ab7727f2abd became leader'
I1012 05:01:57.425724       1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs
I1012 05:01:57.425800       1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-99cb8db6c-gm6g9_f50386f0-49ea-11ed-ad07-8ab7727f2abd!
I1012 05:01:57.526029       1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-99cb8db6c-gm6g9_f50386f0-49ea-11ed-ad07-8ab7727f2abd!
I1012 05:01:57.526091       1 controller.go:987] provision "default/www-web-0" class "managed-nfs-storage": started
E1012 05:01:57.529005       1 controller.go:1004] provision "default/www-web-0" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
root@k8s-master1:~/k8s/nfs# 

Searching online, essentially every article offers the same fix: edit /etc/kubernetes/manifests/kube-apiserver.yaml and add "- --feature-gates=RemoveSelfLink=false" to the apiserver's startup arguments. After I added it, though, the apiserver would not come up and kubectl stopped working; removing the line brought everything back. The reason is that the RemoveSelfLink feature gate went GA (locked to true) in k8s 1.24, so on a recent release like 1.25.2 the apiserver refuses to start if the flag is set to false. The nfs-client provisioner image I was using was already the latest, and after half a day of digging I got nowhere, so I tried deploying with helm instead, and that worked, presumably because the helm chart ships the newer nfs-subdir-external-provisioner, which no longer depends on selfLink.
To deploy NFS with helm, see this document, Setting up an NFS StorageClass with Helm (installing Helm) (pitfalls); you need to change the nfs server address and path to your own. When creating a PVC that should bind to this sc, just put the name "nfs-client" in the PVC spec. The commands are sketched after the ps note below.
ps: comparing that document's manifests with the helm deployment, the document is missing the role and rolebinding pieces; everything else is roughly the same.
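What my helm deployment boils down to, roughly (the chart and repo URL are the upstream kubernetes-sigs ones; the release name nfs matches the provisioner name in the output below, and the server/path values are this lab's, so adjust them):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.100.200 \
  --set nfs.path=/home/k8s-nfs/data
# the chart creates a StorageClass named nfs-client by default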

root@k8s-master1:/home/k8s-nfs/data# kubectl get sc
NAME         PROVISIONER                                         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-nfs-subdir-external-provisioner   Delete          Immediate           true                   34m
root@k8s-master1:/home/k8s-nfs/data# kubectl get pvc
NAME                   STATUS   VOLUME                                     CA
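For reference, a minimal PVC bound to this class might look like the following (the name and size are arbitrary examples):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                   # arbitrary example name
spec:
  storageClassName: nfs-client     # the class created by the helm chart
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF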