Purpose
Install Kubernetes 1.20 on CentOS 8
Join a node to the master's cluster
Software installation
| Name | Version | Notes |
|---|---|---|
| OS | CentOS 8 | |
| docker | docker-ce-19.03.14-3.el8.x86_64 docker-ce-cli-19.03.14-3.el8.x86_64 containerd.io-1.4.3-3.1 | For kubelet 1.20, docker 20 is not recommended for now |
| kubelet | cri-tools-1.13.0-0.x86_64 kubelet-1.20.1-0.x86_64 kubernetes-cni-0.8.7-0.x86_64 kubeadm-1.20.1-0.x86_64 kubectl-1.20.1-0.x86_64 | |
Reference
For installing the docker and kubernetes packages, see the docker and kubernetes installation sections of the master setup article "centos 8 安装 kubernetes 1.20 版本 master".
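The referenced article covers the package installation in detail. As a rough sketch only, assuming the docker-ce and kubernetes yum repositories are already configured on the node, installing the versions from the table above could look like this:
# yum install -y docker-ce-19.03.14-3.el8 docker-ce-cli-19.03.14-3.el8 containerd.io-1.4.3-3.1
# yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0 kubernetes-cni-0.8.7-0 cri-tools-1.13.0-0
Pinning the versions keeps the node in line with the table above; on CentOS 8 the yum command is handled by dnf.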
Preparation before joining the master
Start docker
# systemctl restart docker
# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@ns-yun-020041 docker-19.03]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-12-23 18:03:04 CST; 18min ago
Docs: https://docs.docker.com
Main PID: 43535 (dockerd)
Tasks: 32
Memory: 56.3M
CGroup: /system.slice/docker.service
└─43535 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
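Besides docker, it is also worth enabling the kubelet service so it comes back after a reboot. kubeadm starts the kubelet during the join, so this is an optional precaution rather than a required step:
# systemctl enable kubelet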
Import images
The node needs the flannel and kube-proxy images, so it is recommended to import them onto the node in advance.
Export method
docker save -o kube-proxy-1.20.1.tar k8s.gcr.io/kube-proxy:v1.20.1
docker save -o flannel-0.13.1-rc1.tar quay.io/coreos/flannel:v0.13.1-rc1
Import method
# docker load -i flannel-0.13.1-rc1.tar
ace0eda3e3be: Loading layer [==================================================>] 5.843MB/5.843MB
0a790f51c8dd: Loading layer [==================================================>] 11.42MB/11.42MB
db93500c64e6: Loading layer [==================================================>] 2.595MB/2.595MB
70351a035194: Loading layer [==================================================>] 45.68MB/45.68MB
cd38981c5610: Loading layer [==================================================>] 5.12kB/5.12kB
dce2fcdf3a87: Loading layer [==================================================>] 9.216kB/9.216kB
be155d1c86b7: Loading layer [==================================================>] 7.68kB/7.68kB
Loaded image: quay.io/coreos/flannel:v0.13.1-rc1
# docker load -i kube-proxy-1.20.1.tar
f00bc8568f7b: Loading layer [==================================================>] 53.89MB/53.89MB
6ee930b14c6f: Loading layer [==================================================>] 22.05MB/22.05MB
2b046f2c8708: Loading layer [==================================================>] 4.894MB/4.894MB
f6be8a0f65af: Loading layer [==================================================>] 4.608kB/4.608kB
3a90582021f9: Loading layer [==================================================>] 8.192kB/8.192kB
94812b0f02ce: Loading layer [==================================================>] 8.704kB/8.704kB
dae549f791ed: Loading layer [==================================================>] 39.49MB/39.49MB
Loaded image: k8s.gcr.io/kube-proxy:v1.20.1
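To confirm both images were imported successfully, they should now show up in the node's local image list:
# docker images | grep -E 'flannel|kube-proxy'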
Create a token
Run on the master:
# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
6vduxxxxxxxx0o7v0exvs 22h 2020-12-24T16:37:08+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
If the token has expired, create a new one.
Create a new token
# kubeadm token create
sprqf5.xmtbojx8hzzl4h8y
Get the sha256 hash of the CA certificate
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
4f0ad49ff7fc6xxxxxxxxxxxxxxxxx6be8ea3a9efe298d6a39164
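Alternatively, instead of creating a token and computing the hash separately, kubeadm can generate a new token and print the complete join command (including the discovery-token-ca-cert-hash) in one step:
# kubeadm token create --print-join-command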
Join the node to the cluster
Run the following command on the node:
# kubeadm join 10.189.20.40:6443 --token sprqf5.xmtbojx8hzzl4h8y --discovery-token-ca-cert-hash sha256:4f0ad49ff7fc66e4383b7734b1435b3aa0785061d6be8ea3a9efe298d6a39164
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
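Back on the master, the new node should now appear in the node list (it may stay NotReady until its flannel pod is running):
# kubectl get nodes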
Health check
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-ffmc4 1/1 Running 0 3h18m
kube-system coredns-7f89b7bc75-wq8t4 1/1 Running 0 3h18m
kube-system etcd-ns-yun-020040.vclound.com 1/1 Running 0 3h18m
kube-system kube-apiserver-ns-yun-020040.vclound.com 1/1 Running 0 3h18m
kube-system kube-controller-manager-ns-yun-020040.vclound.com 1/1 Running 0 3h18m
kube-system kube-flannel-ds-7npqp 0/1 CrashLoopBackOff 6 7m6s
kube-system kube-flannel-ds-8bnfw 1/1 Running 0 3h
kube-system kube-flannel-ds-hhhcz 0/1 CrashLoopBackOff 6 9m17s
kube-system kube-proxy-chl82 1/1 Running 0 3h18m
kube-system kube-proxy-cmk5k 1/1 Running 0 11m
kube-system kube-proxy-stdfq 1/1 Running 0 7m6s
kube-system kube-scheduler-ns-yun-020040.vclound.com 1/1 Running 0 3h18m
Fix the flannel pods that fail to start
# kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
10.189.21.0/24
Only one node has been assigned a podCIDR; each newly joined node needs a podCIDR added as well.
# kubectl patch node ns-yun-020041.vclound.com -p '{"spec":{"podCIDR":"10.189.22.0/24"}}'
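If a second node was joined in the same way, it needs an analogous patch with its own non-overlapping subnet; the CIDR below is only an example and must be chosen from the cluster's pod network range:
# kubectl patch node ns-yun-020042.vclound.com -p '{"spec":{"podCIDR":"10.189.23.0/24"}}'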
Delete the failing pod
# kubectl -n kube-system delete pod kube-flannel-ds-hhhcz
pod "kube-flannel-ds-hhhcz" deleted
After the pod is automatically recreated, the problem is resolved:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-ffmc4 1/1 Running 0 3h22m
kube-system coredns-7f89b7bc75-wq8t4 1/1 Running 0 3h22m
kube-system etcd-ns-yun-020040.vclound.com 1/1 Running 0 3h23m
kube-system kube-apiserver-ns-yun-020040.vclound.com 1/1 Running 0 3h23m
kube-system kube-controller-manager-ns-yun-020040.vclound.com 1/1 Running 0 3h23m
kube-system kube-flannel-ds-8bnfw 1/1 Running 0 3h4m
kube-system kube-flannel-ds-dvtz9 1/1 Running 0 42s
kube-system kube-flannel-ds-tg4kf 1/1 Running 0 2m3s
kube-system kube-proxy-chl82 1/1 Running 0 3h22m
kube-system kube-proxy-cmk5k 1/1 Running 0 16m
kube-system kube-proxy-stdfq 1/1 Running 0 11m
kube-system kube-scheduler-ns-yun-020040.vclound.com 1/1 Running 0 3h23m
Test
Label the nodes
Labels are mainly used to control which node a pod is scheduled on.
# kubectl label nodes ns-yun-020041.vclound.com node=ns-yun-020041.vclound.com
node/ns-yun-020041.vclound.com labeled
# kubectl label nodes ns-yun-020042.vclound.com node=ns-yun-020042.vclound.com
node/ns-yun-020042.vclound.com labeled
# kubectl label nodes ns-yun-020040.vclound.com node=ns-yun-020040.vclound.com
node/ns-yun-020040.vclound.com labeled
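The result can be checked with the --show-labels option; each node should now carry its node=<hostname> label:
# kubectl get nodes --show-labels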
Create a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: kubeterry
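Assuming the manifest above is saved to a file (the name kubeterry-ns.yaml is only an example), create the namespace with:
# kubectl apply -f kubeterry-ns.yaml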
Create a test pod
kind: Pod
apiVersion: v1
metadata:
  name: centos7-test-01
  namespace: kubeterry
spec:
  containers:
    - name: cenotos7-test01
      image: "centos:7.1.1503"
      command: ["/bin/bash", "-c", "sleep 1000000000"]
  nodeSelector:
    node: ns-yun-020041.vclound.com
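As with the namespace, save the manifest to a file (centos7-test-01.yaml here is only an example name) and apply it:
# kubectl apply -f centos7-test-01.yaml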
Check the pod
# kubectl -n kubeterry get pod
NAME READY STATUS RESTARTS AGE
centos7-test-01 1/1 Running 0 3m19s
# kubectl -n kubeterry describe pod | grep Node
Node: ns-yun-020041.vclound.com/10.189.20.xx
Node-Selectors: node=ns-yun-020041.vclound.com
Note: if the node has not been labeled, the nodeSelector node: xxxx cannot be satisfied and the pod stays Pending after creation, with the following error:
Name: centos7-test-01
Namespace: kubeterry
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
cenotos7-test01:
Image: centos:7.1.1503
Port: <none>
Host Port: <none>
Command:
/bin/bash
-c
sleep 1000000000
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nmkq4 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-nmkq4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nmkq4
Optional: false
QoS Class: BestEffort
Node-Selectors: node=ns-yun-020041.vclound.com
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4s (x2 over 4s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match Pod's node affinity.
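Once the missing label is added to the target node, for example with the same command used in the labeling step above, the scheduler places the pending pod automatically and the pod does not need to be recreated:
# kubectl label nodes ns-yun-020041.vclound.com node=ns-yun-020041.vclound.com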