1. Installing a K8s Cluster (v1.28.0) on CentOS 7.9
Role | IP Address |
---|---|
k8s-master | 192.168.88.10 |
k8s-node1 | 192.168.88.11 |
k8s-node2 | 192.168.88.12 |
- Recommended VM specs: 4 CPU cores, 4 GB RAM, 20 GB disk
- The servers need internet access, since images are downloaded online
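Optionally, as part of the base configuration, the plan above can also be written into /etc/hosts on all three machines so the nodes resolve each other by name (hostnames and IPs are taken from the table above):
~]# cat >> /etc/hosts << EOF
192.168.88.10 k8s-master
192.168.88.11 k8s-node1
192.168.88.12 k8s-node2
EOF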
2. Base Configuration (apply on all three hosts)
Software | Version |
---|---|
Operating system | CentOS-7-x86_64-DVD-2009.iso |
Docker | Docker version 26.1.4 |
kubernetes | v1.28.0 |
Disable SELinux
~]# sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
~]# setenforce 0 # temporary
Disable swap
~]# swapoff -a # temporary
~]# sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
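A quick check that both changes took effect; getenforce reports Permissive for the current boot (Disabled after a reboot), and free should show 0B of swap:
~]# getenforce
~]# free -h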
Set the hostname according to the plan
~]# hostnamectl set-hostname <hostname>
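Concretely, with the hostnames from the plan, run the matching command on each machine:
~]# hostnamectl set-hostname k8s-master   # on 192.168.88.10
~]# hostnamectl set-hostname k8s-node1    # on 192.168.88.11
~]# hostnamectl set-hostname k8s-node2    # on 192.168.88.12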
Install the Aliyun yum repository
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
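After replacing the repo file, it can help to rebuild the yum metadata cache:
~]# yum clean all && yum makecache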
Make sure bridged traffic is processed by iptables by enabling the relevant kernel parameters
~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
~]# sysctl --system # apply the settings
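To verify the parameters are active, query one directly. If sysctl reports that the key does not exist, the br_netfilter module may not be loaded yet; loading it and re-running sysctl --system usually resolves this:
~]# modprobe br_netfilter
~]# sysctl net.bridge.bridge-nf-call-iptables   # should print "net.bridge.bridge-nf-call-iptables = 1"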
3. Install Docker
~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
~]# yum -y install docker-ce
~]# systemctl enable docker && systemctl start docker
Configure an image download accelerator (registry mirror) and set the cgroup driver to systemd
~]# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https:自己的阿里云镜像加速地址.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
~]# systemctl restart docker
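Confirm that Docker picked up the new settings; the output should contain "Cgroup Driver: systemd":
~]# docker info | grep -i "cgroup driver"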
Install cri-dockerd (the shim through which Kubernetes communicates with Docker). Note that the package download sometimes fails.
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.2-3.el7.x86_64.rpm
Point the dependent (pause) image at a domestic mirror address:
~]# vi /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
~]# systemctl daemon-reload
~]# systemctl enable cri-docker && systemctl start cri-docker
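A quick sanity check that the service is running and that its socket (the path passed to kubeadm below) exists:
~]# systemctl status cri-docker --no-pager
~]# ls -l /var/run/cri-dockerd.sock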
Deploy the cluster
Add the Aliyun YUM repository
~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
~]# yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0
~]# systemctl enable kubelet
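Optionally confirm that all three components report v1.28.0:
~]# kubeadm version
~]# kubelet --version
~]# kubectl version --client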
Initialize the master (control-plane) node
~]# kubeadm init --apiserver-advertise-address=192.168.88.10 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
After initialization completes, copy the kubectl credentials to the default path as the output instructs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check node status with kubectl
~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 29s v1.28.0
Because no network plugin has been deployed yet, the node stays in the "NotReady" state; only after Calico or flannel is installed will the node and pod status come up properly.
Join the worker nodes to the cluster
kubeadm join 192.168.88.10:6443 --token m1fjhe.oykxam4vxhzq9u5b \
--discovery-token-ca-cert-hash sha256:07d4a0122be4a28ec7fb3f91d132b0dce583d44f9b6ebd81eecfdee3a443d2d4 --cri-socket=unix:///var/run/cri-dockerd.sock
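The token and hash above are specific to this cluster. If the token has expired or the join command was lost, a fresh one can be printed on the master (append the --cri-socket flag shown above before running it on a node):
~]# kubeadm token create --print-join-command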
Joining the nodes to this cluster produced an error:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
Running echo 1 > /proc/sys/net/ipv4/ip_forward fixes it.
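The echo only lasts until the next reboot; one way to make the setting persistent is to append the key to the k8s.conf created earlier and re-apply:
~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
~]# sysctl --system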
Install the network plugin
This cluster uses the flannel plugin.
The contents of flannel.yml are as follows:
apiVersion: v1
kind: Namespace
metadata:
labels:
k8s-app: flannel
pod-security.kubernetes.io/enforce: privileged
name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: flannel
name: flannel
namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: flannel
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- apiGroups:
- networking.k8s.io
resources:
- clustercidrs
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: flannel
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
kind: ConfigMap
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-cfg
namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-ds
namespace: kube-flannel
spec:
selector:
matchLabels:
app: flannel
k8s-app: flannel
template:
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
containers:
- args:
- --ip-masq
- --kube-subnet-mgr
command:
- /opt/bin/flanneld
command: ["/bin/bash", "-ce", "tail -f /dev/null"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
image: registry.cn-hangzhou.aliyuncs.com/liuk8s/flannel:v0.21.5
name: kube-flannel
resources:
requests:
cpu: 100m
memory: 50Mi
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
volumeMounts:
- mountPath: /run/flannel
name: run
- mountPath: /etc/kube-flannel/
name: flannel-cfg
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
initContainers:
- args:
- -f
- /flannel
- /opt/cni/bin/flannel
command:
- cp
image: registry.cn-hangzhou.aliyuncs.com/liuk8s/flannel-cni-plugin:v1.1.2
name: install-cni-plugin
volumeMounts:
- mountPath: /opt/cni/bin
name: cni-plugin
- args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
command:
- cp
image: registry.cn-hangzhou.aliyuncs.com/liuk8s/flannel:v0.21.5
name: install-cni
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni
- mountPath: /etc/kube-flannel/
name: flannel-cfg
priorityClassName: system-node-critical
serviceAccountName: flannel
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- hostPath:
path: /run/flannel
name: run
- hostPath:
path: /opt/cni/bin
name: cni-plugin
- hostPath:
path: /etc/cni/net.d
name: cni
- configMap:
name: kube-flannel-cfg
name: flannel-cfg
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
Create the resources with kubectl apply
kubectl apply -f flannel.yml
The network plugin finishes installing in about two minutes.
Check node status: kubectl get nodes
Check pod status:
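For example, the flannel DaemonSet runs in the kube-flannel namespace created by the manifest above, and the core components run in kube-system:
~]# kubectl get pods -n kube-flannel
~]# kubectl get pods -n kube-system
Once all pods are Running, the nodes should report Ready.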