Field notes: simulating 5,000 nodes with kubemark
These notes record using the kubemark tool to simulate 5,000 Kubernetes nodes.
1. Environment preparation
The kubemark cluster (a converged container-appliance environment) is planned to simulate 5,000 Kubernetes nodes; 20 nodes are prepared in an external cluster to host the pods that simulate the hollow nodes.
The node count was calculated from the following considerations:
- Per-pod resource requirements: the official recommendation is 0.1 CPU core and 220 MB of memory per hollow pod. In actual testing the requirement can be smaller, roughly 0.5x of that amount.
- Each node is configured with one Class C (/24) IP range at initialization; excluding a few DaemonSet (ds) pods, the plan is 250 hollow pods per node.
- Set the maximum number of schedulable pods per node to 500 (in theory it only needs to exceed the 253 allocatable IP addresses).
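The sizing above can be checked with quick arithmetic (figures taken from the notes; the 0.5x factor is the observed value, not an official one):

```shell
# 5000 hollow nodes at 250 hollow pods per external node -> 20 external nodes
echo "external nodes: $((5000 / 250))"
# At the official request (0.1 CPU / 220MB per pod), 250 pods exceed a 16C32G node:
echo "cpu needed:  $((250 * 100))m  (node has 16000m)"    # 25000m > 16000m
echo "mem needed:  $((250 * 220))MB (node has ~32000MB)"  # 55000MB > 32000MB
# At roughly half that (0.05 CPU / 110MB), 250 pods fit comfortably:
echo "cpu needed:  $((250 * 50))m"                        # 12500m
echo "mem needed:  $((250 * 110))MB"                      # 27500MB
```

This is why the notes halve the official figures: at the full recommendation, 250 hollow pods would not fit on a single 16C32G host.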
Prepare an external Kubernetes cluster running the same version as the kubemark cluster (the cluster under test), and deploy kubemark pods on it to simulate the hollow nodes:
# kubemark cluster: composed of three physical servers with relatively high specs
[root@cluster54 ~]# kubectl get node -o wide
NAME        STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                       CONTAINER-RUNTIME
cluster54 Ready control-plane,master 12d v1.27.6 192.168.101.54 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.4.hl202.x86_64 containerd://1.7.7-u2
cluster55 Ready control-plane,master 12d v1.27.6 192.168.101.55 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.4.hl202.x86_64 containerd://1.6.14
cluster56 Ready control-plane,master 12d v1.27.6 192.168.101.56 <none> openEuler 22.03 (LTS-SP1) 5.10.0-136.12.0.86.4.hl202.x86_64 containerd://1.6.14
# cluster info
[root@cluster54 ~]# kubectl cluster-info
Kubernetes control plane is running at https://apiserver.cluster.local:6443
CoreDNS is running at https://apiserver.cluster.local:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
# hosts resolution
[root@cluster54 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.210.10.54 cluster54
10.210.10.55 cluster55
10.210.10.56 cluster56
192.168.101.54 cluster54
192.168.101.55 cluster55
192.168.101.56 cluster56
192.168.101.200 vip.cluster.local
192.168.101.54 apiserver.cluster.local
192.168.101.200 vip.harbor.cloudos
# external cluster: composed of four 16C32G virtual machines
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 60d v1.27.6
k8s-node1 Ready <none> 60d v1.27.6
k8s-node2 Ready <none> 60d v1.27.6
k8s-node3 Ready <none> 56d v1.27.6
Scale the above cluster out by 20 nodes (16C32G each) to host the hollow-node pods:
# cordon the original nodes so nothing is scheduled to them
[root@k8s-master1 ~]# kubectl cordon k8s-node1
[root@k8s-master1 ~]# kubectl cordon k8s-node2
[root@k8s-master1 ~]# kubectl cordon k8s-node3
[root@k8s-master1 ~]# kubectl cordon k8s-master1
# after scaling out, the 20 hollow-node hosts are Ready
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hollownode174 Ready <none> 4d v1.27.6
hollownode175 Ready <none> 4d v1.27.6
hollownode176 Ready <none> 4d v1.27.6
hollownode177 Ready <none> 4d v1.27.6
hollownode178 Ready <none> 4d v1.27.6
hollownode179 Ready <none> 4d v1.27.6
hollownode180 Ready <none> 4d v1.27.6
hollownode181 Ready <none> 4d v1.27.6
hollownode182 Ready <none> 4d v1.27.6
hollownode183 Ready <none> 4d v1.27.6
hollownode184 Ready <none> 4d v1.27.6
hollownode185 Ready <none> 4d v1.27.6
hollownode186 Ready <none> 4d v1.27.6
hollownode187 Ready <none> 4d v1.27.6
hollownode188 Ready <none> 4d v1.27.6
hollownode189 Ready <none> 4d v1.27.6
hollownode190 Ready <none> 4d v1.27.6
hollownode191 Ready <none> 4d v1.27.6
hollownode192 Ready <none> 4d v1.27.6
hollownode193 Ready <none> 4d v1.27.6
k8s-master1 Ready,SchedulingDisabled control-plane 60d v1.27.6
k8s-node1 Ready,SchedulingDisabled <none> 60d v1.27.6
k8s-node2 Ready,SchedulingDisabled <none> 60d v1.27.6
k8s-node3 Ready,SchedulingDisabled <none> 56d v1.27.6
# label the new nodes in batch
[root@k8s-master1 ~]# for i in {174..193}; do kubectl label node hollownode$i name=hollow-node; done
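After labeling, a quick count with a standard kubectl label selector confirms that all 20 hosts carry the label:

```shell
# count the external nodes labeled name=hollow-node; expect 20
kubectl get node -l name=hollow-node --no-headers | wc -l
```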
2. Create the hollow-node pods
Pods are created in the external cluster; each pod contains a kubelet that registers with the kubemark cluster through a kubeconfig file, which completes the node simulation.
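The registration flow above can be sketched as a hollow-node manifest, loosely based on the upstream kubemark hollow-node template. The image, secret name, and resource figures here are assumptions and vary by version; the kubemark image is typically built from the Kubernetes source tree and pushed to a private registry:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hollow-node
  namespace: kubemark
spec:
  replicas: 5000
  selector:
    name: hollow-node
  template:
    metadata:
      labels:
        name: hollow-node
    spec:
      nodeSelector:
        name: hollow-node                  # land only on the 20 dedicated hosts
      containers:
      - name: hollow-kubelet
        image: <your-registry>/kubemark:v1.27.6   # placeholder; match the cluster version
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command: ["/kubemark"]
        args:
        - --morph=kubelet                  # run the kubemark binary as a hollow kubelet
        - --name=$(NODE_NAME)
        - --kubeconfig=/kubeconfig/kubelet.kubeconfig
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        resources:
          requests:                        # ~0.5x the official figures, per the sizing notes
            cpu: 50m
            memory: 110Mi
      volumes:
      - name: kubeconfig-volume
        secret:
          secretName: kubeconfig           # assumed secret name; created in section 2.1
```

The mounted kubeconfig points the hollow kubelet at the kubemark cluster's API server, so each replica shows up there as one simulated node.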
2.1 External cluster
- Create the namespace
[root@k8s-master1 ~]# kubectl create ns kubemark
- Create the configmap and secret
# create the configmap in the external cluster
[root@k8s-master1 ~]# kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"
# create the secret in the external cluster, where the kubeconfig is the kubeconfig file of the kubemark cluster (the cluster under test)
[root@k8s-master1 ~]# kubectl create secret generic kube
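Per the upstream kubemark guide, this secret is typically created along these lines (the secret name `kubeconfig`, the key names, and the file path are assumptions):

```shell
# secret name and key names follow the upstream kubemark guide (assumed here);
# /root/kubemark.kubeconfig is a placeholder path to the kubemark cluster's kubeconfig
kubectl create secret generic kubeconfig --type=Opaque -n kubemark \
  --from-file=kubelet.kubeconfig=/root/kubemark.kubeconfig \
  --from-file=kubeproxy.kubeconfig=/root/kubemark.kubeconfig
```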
