I. Pre-installation preparation
1. Prepare 7 virtual machines; every node needs at least 2 vCPUs and 2 GB RAM. (Install one, then clone the rest.)
| Node    | IP          | hostname    |
|---------|-------------|-------------|
| harbor  | 10.10.10.10 | k8s-harbor  |
| master1 | 10.10.10.11 | k8s-master1 |
| master2 | 10.10.10.12 | k8s-master2 |
| master3 | 10.10.10.13 | k8s-master3 |
| node1   | 10.10.10.21 | k8s-node1   |
| node2   | 10.10.10.22 | k8s-node2   |
| node3   | 10.10.10.23 | k8s-node3   |
2. Pre-installation configuration (run on every node)
① Disable the firewall:
systemctl disable firewalld.service --now && systemctl status firewalld.service
② Disable SELinux
Temporary (until reboot): setenforce 0
Permanent: vim /etc/selinux/config
SELINUX=disabled
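If you prefer not to edit the file by hand, an equivalent one-liner (a sketch; it assumes the stock SELINUX=enforcing line is present) is:
sed -ri 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # verify the change took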
③ Disable swap (by default kubelet refuses to run with swap enabled)
swapoff -a                              # temporary (until reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
Verify it is off:
free -m    # if disabled, the Swap line reads 0 0 0
All in one: swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab && free -m
④ Set the hostname (repeat on the remaining nodes with their names from the table above)
10.10.10.10
hostnamectl set-hostname k8s-harbor && /bin/bash
10.10.10.11
hostnamectl set-hostname k8s-master1 && /bin/bash
10.10.10.21
hostnamectl set-hostname k8s-node1 && /bin/bash
⑤ Populate /etc/hosts on every node:
cat >> /etc/hosts << EOF
10.10.10.10 k8s-harbor
10.10.10.11 k8s-master1
10.10.10.12 k8s-master2
10.10.10.13 k8s-master3
10.10.10.21 k8s-node1
10.10.10.22 k8s-node2
10.10.10.23 k8s-node3
EOF
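Steps ①–⑤ must be repeated on every machine. If passwordless SSH from one node is already in place, a small helper loop (a sketch; the IP list and root SSH access are assumptions) pushes the shared hosts file out:
for ip in 10.10.10.11 10.10.10.12 10.10.10.13 10.10.10.21 10.10.10.22 10.10.10.23; do
  scp /etc/hosts root@$ip:/etc/hosts   # distribute the hosts file built above
done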
⑥ Set up time synchronization
Install chrony: yum install -y chrony
vim /etc/chrony.conf
server ntp.aliyun.com iburst
allow 10.10.10.0/24        # lets this node serve time to the 10.10.10.0/24 subnet
Enable and start the service: systemctl enable chronyd --now && systemctl status chronyd
Check the time sources: chronyc sources -v
[root@k8s-master1 ~]# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? tock.ntp.infomaniak.ch       1   6     1     3    +89ms[  +89ms] +/- 153ms
^? ntp.ams1.nl.leaseweb.net     2   6     3     0   +104ms[ +104ms] +/- 207ms
^? a.chl.la                     2   6     1     3   +121ms[ +121ms] +/- 128ms
^? stratum2-1.ntp.mow01.ru.>    2   6     3     1    +85ms[  +85ms] +/- 117ms
^? 210.72.145.44                0   6     0     -     +0ns[   +0ns] +/-   0ns
^? 203.107.6.88                 2   6     3     2   +111ms[ +111ms] +/-  48ms
[root@k8s-master1 ~]#
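Because of the allow 10.10.10.0/24 line, this node can act as the internal time source for the rest of the cluster. A minimal client-side sketch for the other nodes (assuming k8s-master1 at 10.10.10.11 is the internal server):
cat > /etc/chrony.conf <<EOF
server 10.10.10.11 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
systemctl restart chronyd && chronyc sources -v   # expect 10.10.10.11 with state '*' or '+'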
⑦ Enable IPVS
Install ipset and ipvsadm: yum -y install ipset ipvsadm
Run the following script on all nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make it executable, run it, and check that the modules loaded:
Command: chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
Check: lsmod | grep -e ip_vs -e nf_conntrack_ipv4
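Note: on kernels 4.19 and later (e.g. CentOS Stream / Rocky 8+), nf_conntrack_ipv4 was merged into nf_conntrack, so the modprobe and lsmod lines need adjusting (a sketch, assuming a newer kernel):
modprobe -- nf_conntrack                      # replaces nf_conntrack_ipv4
lsmod | grep -e ip_vs -e nf_conntrack
kubeadm's preflight checks also expect bridged traffic to pass through iptables. The usual sysctl setup (a standard kubeadm prerequisite, not specific to this guide) is:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter && sysctl --system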
II. Install Docker
① Add the Docker yum repo:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
② Install Docker:
yum install -y docker-ce docker-ce-cli containerd.io --allowerasing
③ Enable it at boot and confirm it is running:
systemctl enable --now docker && systemctl status docker
④ Configure daemon.json
vim /etc/docker/daemon.json
Contents as follows ("insecure-registries" is the Harbor server's IP; note that JSON allows no comments, so don't put the remark inside the file):
{
  "registry-mirrors": ["https://hub.littlediary.cn"],
  "insecure-registries": ["10.10.10.10"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
⑤ Restart Docker and confirm it came back up:
systemctl daemon-reload && systemctl restart docker && systemctl status docker
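A quick sanity check that daemon.json took effect (a sketch, not exhaustive):
docker info --format '{{.CgroupDriver}}'       # expect: systemd
docker info | grep -A1 'Insecure Registries'   # expect: 10.10.10.10 listed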
III. Install Kubernetes
1. Add the official Kubernetes yum repo (pkgs.k8s.io publishes one repo per minor version; this one is pinned to v1.32):
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
2. Install
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
(The exclude= line in the repo file blocks these packages from accidental upgrades, so installs need --disableexcludes=kubernetes.)
Edit /etc/sysconfig/kubelet to match Docker's cgroup driver:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
3. Enable kubelet at boot:
systemctl enable kubelet && systemctl status kubelet
(kubelet will crash-loop until kubeadm init or kubeadm join supplies its configuration; that is expected at this stage. Confirm the unit is registered: systemctl list-unit-files | grep kubelet)
4. Install cri-dockerd
Download: wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
Install: yum -y install cri-dockerd-0.3.8-3.el7.x86_64.rpm
Edit: vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10 --container-runtime-endpoint fd://
(Use kubeadm config images list to confirm the pause version; kubeadm v1.32 expects pause:3.10, as the init warning later shows.)
Reload systemd after editing the unit, then enable the service:
systemctl daemon-reload && systemctl enable --now cri-docker && systemctl status cri-docker
5. Point crictl at the container runtime: [this step is very important]
crictl config runtime-endpoint unix:///var/run/cri-dockerd.sock
(This guide uses cri-dockerd throughout, so the endpoint must be the cri-dockerd socket, not containerd's.)
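Equivalently, you can write the crictl config by hand and confirm it can talk to cri-dockerd (a sketch; the image-endpoint line is optional but avoids a warning):
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10
EOF
crictl version    # should report the runtime via cri-dockerd without errors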
IV. Pull the images and push them to Harbor
kubeadm config images list shows the images kubeadm requires.
kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock --image-repository=registry.aliyuncs.com/google_containers
Retag all of the downloaded images for the Harbor private registry (a bulk alternative follows the list below):
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.2 10.10.10.10/k8s-1.32/kube-apiserver:v1.32.2
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.2 10.10.10.10/k8s-1.32/kube-controller-manager:v1.32.2
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.2 10.10.10.10/k8s-1.32/kube-scheduler:v1.32.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.32.2 10.10.10.10/k8s-1.32/kube-proxy:v1.32.2
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.16-0 10.10.10.10/k8s-1.32/etcd:3.5.16-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.11.3 10.10.10.10/k8s-1.32/coredns:v1.11.3
docker tag registry.aliyuncs.com/google_containers/pause:3.10 10.10.10.10/k8s-1.32/pause:3.10
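Instead of tagging one by one, a loop can retag everything pulled from the mirror in one pass (a sketch; HARBOR and PROJECT are stand-ins for the values used above):
HARBOR=10.10.10.10; PROJECT=k8s-1.32
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep '^registry.aliyuncs.com/google_containers/' \
  | while read -r img; do
      docker tag "$img" "$HARBOR/$PROJECT/${img##*/}"   # keep name:tag, swap registry
    done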
List the retagged images:
[root@k8s-master1 ~]# docker images
REPOSITORY                                                         TAG        IMAGE ID       CREATED        SIZE
10.10.10.10/k8s-1.32/kube-apiserver                                v1.32.2    85b7a174738b   3 weeks ago    97MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.32.2    85b7a174738b   3 weeks ago    97MB
10.10.10.10/k8s-1.32/kube-controller-manager                       v1.32.2    b6a454c5a800   3 weeks ago    89.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.32.2    b6a454c5a800   3 weeks ago    89.7MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.32.2    d8e673e7c998   3 weeks ago    69.6MB
10.10.10.10/k8s-1.32/kube-scheduler                                v1.32.2    d8e673e7c998   3 weeks ago    69.6MB
10.10.10.10/k8s-1.32/kube-proxy                                    v1.32.2    f1332858868e   3 weeks ago    94MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.32.2    f1332858868e   3 weeks ago    94MB
10.10.10.10/k8s/flannel                                            v0.26.1    4c92107ad4cf   3 months ago   82.8MB
10.10.10.10/k8s/flannel-cni-plugin                                 v1.6.0     c9ce14f3932d   4 months ago   10.6MB
registry.aliyuncs.com/google_containers/etcd                       3.5.16-0   a9e7e6b294ba   5 months ago   150MB
10.10.10.10/k8s-1.32/etcd                                          3.5.16-0   a9e7e6b294ba   5 months ago   150MB
10.10.10.10/k8s-1.32/coredns                                       v1.11.3    c69fa2e9cbf5   7 months ago   61.8MB
registry.aliyuncs.com/google_containers/coredns                    v1.11.3    c69fa2e9cbf5   7 months ago   61.8MB
registry.aliyuncs.com/google_containers/pause                      3.10       873ed7510279   9 months ago   736kB
10.10.10.10/k8s-1.32/pause                                         3.10       873ed7510279   9 months ago   736kB
[root@k8s-master1 ~]#
Push the images to the 10.10.10.10 private registry.
① Log in to 10.10.10.10:
docker login 10.10.10.10
② Push all of the images (a one-liner alternative follows the list):
docker push 10.10.10.10/k8s-1.32/kube-apiserver:v1.32.2
docker push 10.10.10.10/k8s-1.32/kube-controller-manager:v1.32.2
docker push 10.10.10.10/k8s-1.32/kube-scheduler:v1.32.2
docker push 10.10.10.10/k8s-1.32/kube-proxy:v1.32.2
docker push 10.10.10.10/k8s-1.32/etcd:3.5.16-0
docker push 10.10.10.10/k8s-1.32/coredns:v1.11.3
docker push 10.10.10.10/k8s-1.32/pause:3.10
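The same idea works for the pushes (a sketch, reusing the HARBOR and PROJECT variables from the retag loop above):
docker images --format '{{.Repository}}:{{.Tag}}' | grep "^$HARBOR/$PROJECT/" | xargs -n1 docker push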
Log in to the Harbor web UI and confirm the pushes succeeded.
V. Cluster initialization
1. Initialize the master1 node. (The run captured below pulls from the Harbor project lh; if you pushed to k8s-1.32 as in section IV, set --image-repository=10.10.10.10/k8s-1.32 instead.)
sudo kubeadm init \
  --kubernetes-version 1.32.2 \
  --apiserver-advertise-address=10.10.10.11 \
  --control-plane-endpoint "10.10.10.11:6443" \
  --image-repository=10.10.10.10/lh \
  --upload-certs \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///var/run/cri-dockerd.sock \
  --ignore-preflight-errors=all
Pitfall: pasting the multi-line command can pick up a non-breaking space (U+00A0) between arguments, which kubeadm rejects:
imageRepository: Invalid value: "10.10.10.10/lh\u00a0--upload-certs": invalid image repository format
To see the stack trace of this error execute with --v=5 or higher
Retyping the command on a single line avoids the stray character:
[root@k8s-master1 ~]# sudo kubeadm init --kubernetes-version 1.32.2 --apiserver-advertise-address=10.10.10.11 --control-plane-endpoint "10.10.10.11:6443" --image-repository=10.10.10.10/lh --upload-certs --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.32.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0304 16:48:57.428275 12330 checks.go:846] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.10" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "10.10.10.10/lh/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.83]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [10.10.10.83 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [10.10.10.83 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.543332ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.50126942s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ec42767427d9db9631c80ba049486879ba05864052cd5f150831b524ebd0135d
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wzdsmh.vd834zzqis21tcvd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes running the following command on each as root:
kubeadm join 10.10.10.11:6443 --token wzdsmh.vd834zzqis21tcvd \
--discovery-token-ca-cert-hash sha256:186d4d5db997f4a0bb8d22b10ca072d4ea3ea53e46b7336afb1e6c72e3de6b20 \
--control-plane --certificate-key ec42767427d9db9631c80ba049486879ba05864052cd5f150831b524ebd0135d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.10.11:6443 --token wzdsmh.vd834zzqis21tcvd \
--discovery-token-ca-cert-hash sha256:186d4d5db997f4a0bb8d22b10ca072d4ea3ea53e46b7336afb1e6c72e3de6b20
2. Join the remaining masters to the cluster (append --cri-socket, which the printed join command omits):
kubeadm join 10.10.10.11:6443 --token wzdsmh.vd834zzqis21tcvd \
--discovery-token-ca-cert-hash sha256:186d4d5db997f4a0bb8d22b10ca072d4ea3ea53e46b7336afb1e6c72e3de6b20 \
--control-plane --certificate-key ec42767427d9db9631c80ba049486879ba05864052cd5f150831b524ebd0135d \
--cri-socket unix:///var/run/cri-dockerd.sock
3. Join the worker nodes (again with --cri-socket):
kubeadm join 10.10.10.11:6443 --token wzdsmh.vd834zzqis21tcvd \
--discovery-token-ca-cert-hash sha256:186d4d5db997f4a0bb8d22b10ca072d4ea3ea53e46b7336afb1e6c72e3de6b20 \
--cri-socket unix:///var/run/cri-dockerd.sock
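The bootstrap token printed by kubeadm init expires after 24 hours. If a node joins later, generate a fresh worker join command with:
kubeadm token create --print-join-command
and, for an additional control-plane node, re-upload the certificates first (as the init output above notes):
kubeadm init phase upload-certs --upload-certs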
4. Install the network plugin (the flannel repo moved from coreos to flannel-io; the old coreos URL no longer works):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
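Flannel's manifest defaults its Network to 10.244.0.0/16, which matches the --pod-network-cidr used above. If you chose a different pod CIDR, edit the net-conf.json section of kube-flannel.yml before applying it (a sketch of the relevant fragment):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }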
5. Check the node status (see the verification sketch below).
6. Confirm the network plugin came up.
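A short verification sketch for steps 5 and 6 (the flannel namespace assumes a recent manifest):
kubectl get nodes -o wide            # every node should eventually be Ready
kubectl get pods -n kube-flannel     # recent manifests deploy into kube-flannel
kubectl get pods -n kube-system      # coredns pods go Running once the CNI is up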