Deploying a Kubernetes 1.28.15 Cluster on Ubuntu 22.04 LTS

This article walks through deploying a Kubernetes 1.28.15 cluster on Ubuntu 22.04 LTS. We will use kubeadm to initialize the cluster with one master node and two worker nodes, install Calico as the CNI plugin, and deploy a simple Nginx application to verify that the cluster works.

I. Basic Environment Preparation

1. VM Hardware for the Cluster

IP          Hostname    CPU/Memory    Disk
10.0.0.50   master50    2C / 4G       50G
10.0.0.51   worker51    2C / 4G       50G
10.0.0.52   worker52    2C / 4G       50G

Three nodes in total: one master node and two worker nodes.

2. Disable Swap

To ensure Kubernetes runs properly, disable the swap partition on all nodes:

swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
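
`swapoff -a` only disables swap for the current boot; the `sed` comments out swap entries in `/etc/fstab` so swap stays off after a reboot. A minimal sketch of that edit, run against a throwaway copy so it is safe to try (the sample fstab content is illustrative):

```shell
# Work on a throwaway copy so the real /etc/fstab is untouched
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same edit as above: prefix every not-yet-commented swap line with '#'
sed -ri '/^[^#]*swap/s@^@#@' "$tmp"
grep swap "$tmp"    # the swap line is now commented out

# On a real node, verify nothing is swapped in anymore
swapon --show       # empty output means swap is fully off
```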

3. Configure the Hosts File

On all nodes, add entries to /etc/hosts so the nodes can reach each other by hostname:

cat >> /etc/hosts <<EOF
10.0.0.50 master50
10.0.0.51 worker51
10.0.0.52 worker52
EOF
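
If node provisioning is scripted and may run more than once, the plain `cat >>` above appends duplicate entries on every run. A sketch of an idempotent variant, demonstrated against a temp file (substitute /etc/hosts on a real node):

```shell
hosts_file=$(mktemp)    # stand-in for /etc/hosts in this demo

# Append each mapping only if the hostname is not already present
while read -r ip name; do
    grep -qw "$name" "$hosts_file" || echo "$ip $name" >> "$hosts_file"
done <<'EOF'
10.0.0.50 master50
10.0.0.51 worker51
10.0.0.52 worker52
EOF

cat "$hosts_file"    # three entries, no matter how often this runs
```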

4. Check Network Connectivity

Make sure all nodes can reach each other and have Internet access:

ping baidu.com

5. Load Kernel Modules and Enable IP Forwarding

On all nodes, load the overlay and br_netfilter kernel modules and enable IP forwarding:

cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system

6. Set the Time Zone

Set all nodes to the Asia/Shanghai time zone:

ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
date -R

II. Install Kubernetes Components

1. Add the Kubernetes Repository

On all nodes, add the Aliyun mirror of the Kubernetes apt repository:

apt-get update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/Release.key |
    gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/ /" |
    tee /etc/apt/sources.list.d/kubernetes.list
apt-get update

2. Install kubeadm, kubelet, and kubectl

Install the pinned versions of the Kubernetes components on all nodes:

apt install -y kubeadm=1.28.15-1.1 kubelet=1.28.15-1.1 kubectl=1.28.15-1.1

After installation, check each component's version:

kubeadm version
kubectl version
kubelet --version
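
Optionally, the three packages can be held at the installed version so a routine `apt upgrade` does not move the cluster to a new Kubernetes release unexpectedly (version upgrades should go through kubeadm instead):

```shell
apt-mark hold kubeadm kubelet kubectl
```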

3. Install the Container Runtime (containerd)

Note: the apt sources were already switched to the Aliyun mirrors during system setup;
see the companion article on changing mirror source configuration.

3.1 Install containerd

Install containerd on all nodes:

apt install -y containerd

3.2 Generate the Default containerd Configuration

Generate containerd's default configuration file:

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

3.3 Point containerd at the Aliyun Mirror

Edit config.toml so the sandbox image is pulled from the Aliyun registry:

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

Make sure SystemdCgroup is set to true:

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
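
The two `sed` edits above can be rehearsed on a small sample before touching the real file; the snippet below is a trimmed, illustrative fragment of `config.toml`, not the full file:

```shell
# Throwaway sample standing in for /etc/containerd/config.toml
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.8"
            SystemdCgroup = false
EOF

# Same edits as above
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$tmp"
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' "$tmp"

cat "$tmp"    # both lines now carry the new values
```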

3.4 Enable and Start containerd

Enable and start the containerd service:

systemctl daemon-reload
systemctl restart containerd.service
systemctl enable --now containerd

Check the containerd status:

systemctl status containerd
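
`crictl` (installed as a dependency of the kubeadm package, via cri-tools) talks to the runtime over CRI but has to be told where the containerd socket lives; a small config fragment silences the warning it otherwise prints:

```shell
cat > /etc/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

crictl info    # should now report the containerd runtime status
```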

3.5 Check the containerd Version
[root@master50:~]# ctr version
Client:
  Version:  1.7.24
  Revision: 
  Go version: go1.22.2

Server:
  Version:  1.7.24
  Revision: 
  UUID: 72fa82ef-8f9f-4d91-b055-41fc08e008fb

III. Initialize the Kubernetes Cluster (Master)

1. Initialize the Master Node with kubeadm

On the master node, initialize the cluster with kubeadm:

# Initialize the master node with kubeadm
kubeadm init --kubernetes-version=v1.28.15 \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.100.0.0/16 \
    --service-cidr=10.200.0.0/16 \
    --service-dns-domain=cxjyyds.com \
    --apiserver-advertise-address=10.0.0.50

2. Configure kubeconfig

Set up the kubeconfig file so the kubectl tool can talk to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Check Control Plane Component Status

kubectl get cs
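
Note that `kubectl get cs` (componentstatuses) has been deprecated since v1.19; on 1.28 it still shows the scheduler, controller-manager, and etcd health, but prints a deprecation warning. The apiserver's health endpoints are the supported alternative:

```shell
kubectl get --raw='/readyz?verbose'
```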

IV. Join the Worker Nodes

1. Run the Join Command on Each Worker

On each worker node, run kubeadm join to join the cluster:

kubeadm join 10.0.0.50:6443 --token luhq5o.jqafdmuxj6dzshf7 \
	--discovery-token-ca-cert-hash sha256:63a122759db5b72b0d95f054ac293c0faf02938a99035564ca87dff03bb59ef0 
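
The bootstrap token printed by `kubeadm init` expires after 24 hours by default. If it has expired by the time a worker joins, a fresh join command can be generated on the master:

```shell
kubeadm token create --print-join-command
```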

2. Check Cluster Node Status

On the master node, check the status of the cluster nodes:

[root@master50:~]# kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
master50   NotReady   control-plane   44s   v1.28.15
worker51   NotReady   <none>          9s    v1.28.15
worker52   NotReady   <none>          9s    v1.28.15

Note: the nodes show NotReady at this point because no network plugin has been deployed yet.

V. Deploy the Calico CNI Plugin

1. Download the Calico Manifests and Adjust the Pod CIDR

Install the Tigera operator, then download the custom-resources manifest:

[root@master50:~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.3/manifests/tigera-operator.yaml

[root@master50:~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.3/manifests/custom-resources.yaml

Change the Pod CIDR in custom-resources.yaml to match the cluster configuration:

[root@master50:~]# grep 'cidr' custom-resources.yaml
      cidr: 192.168.0.0/16
[root@master50:~]# sed -i '/cidr/s#192.168#10.100#' custom-resources.yaml
[root@master50:~]# grep 'cidr' custom-resources.yaml
      cidr: 10.100.0.0/16

2. Create the Resources

I configured a proxy for containerd here. If you cannot set up a proxy, or the image pulls fail, pre-import the images onto every node instead:

Shared via netdisk: calico.tar.gz
Link: https://pan.baidu.com/s/1dB62ewTYLIGdMuQ9ksBNGg?pwd=r65s  Extraction code: r65s

ctr -n k8s.io i import calico.tar.gz
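
The archive has to be imported on every node, not just the master. Assuming passwordless root SSH to the worker hostnames defined in /etc/hosts, a small loop can distribute and import it:

```shell
# Import locally on the master
ctr -n k8s.io i import calico.tar.gz

# Push to each worker and import there (assumes root SSH access)
for node in worker51 worker52; do
    scp calico.tar.gz "$node":/root/
    ssh "$node" 'ctr -n k8s.io i import /root/calico.tar.gz'
done
```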

Apply the Calico custom-resources manifest:

[root@master50:~]# kubectl create -f custom-resources.yaml

3. Verify the Pods

Check that the Calico pods deployed successfully:

[root@master50:~]# kubectl get pods -o wide -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-5df9c7b545-9v9r8          1/1     Running   0          2m56s   10.100.253.133   worker51   <none>           <none>
calico-apiserver   calico-apiserver-5df9c7b545-jggs2          1/1     Running   0          15s     10.100.203.130   worker52   <none>           <none>
calico-system      calico-kube-controllers-75d48fb9d9-spb6x   1/1     Running   0          2m55s   10.100.253.132   worker51   <none>           <none>
calico-system      calico-node-d4629                          1/1     Running   0          2m43s   10.0.0.52        worker52   <none>           <none>
calico-system      calico-node-t9mcb                          1/1     Running   0          2m45s   10.0.0.50        master50   <none>           <none>
calico-system      calico-node-txzsz                          1/1     Running   0          2m26s   10.0.0.51        worker51   <none>           <none>
calico-system      calico-typha-76d459cd49-7cbtb              1/1     Running   0          2m55s   10.0.0.52        worker52   <none>           <none>
calico-system      calico-typha-76d459cd49-jswn7              1/1     Running   0          2m49s   10.0.0.50        master50   <none>           <none>
calico-system      csi-node-driver-6sd8q                      2/2     Running   0          2m56s   10.100.45.129    master50   <none>           <none>
calico-system      csi-node-driver-pz7ls                      2/2     Running   0          2m56s   10.100.203.129   worker52   <none>           <none>
calico-system      csi-node-driver-q4477                      2/2     Running   0          2m56s   10.100.253.134   worker51   <none>           <none>
kube-system        coredns-66f779496c-2zqzv                   1/1     Running   0          59m     10.100.253.135   worker51   <none>           <none>
kube-system        coredns-66f779496c-4l95w                   1/1     Running   0          59m     10.100.253.129   worker51   <none>           <none>
kube-system        etcd-master50                              1/1     Running   0          59m     10.0.0.50        master50   <none>           <none>
kube-system        kube-apiserver-master50                    1/1     Running   0          59m     10.0.0.50        master50   <none>           <none>
kube-system        kube-controller-manager-master50           1/1     Running   0          59m     10.0.0.50        master50   <none>           <none>
kube-system        kube-proxy-7dr6s                           1/1     Running   0          58m     10.0.0.52        worker52   <none>           <none>
kube-system        kube-proxy-b29zg                           1/1     Running   0          58m     10.0.0.51        worker51   <none>           <none>
kube-system        kube-proxy-w6dm6                           1/1     Running   0          59m     10.0.0.50        master50   <none>           <none>
kube-system        kube-scheduler-master50                    1/1     Running   0          59m     10.0.0.50        master50   <none>           <none>
tigera-operator    tigera-operator-6b66d7d577-228zm           1/1     Running   0          30m     10.0.0.52        worker52   <none>           <none>

4. Re-check Node Status

Check the node status again; all nodes should now be Ready:

[root@master50:~]# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master50   Ready    control-plane   68m   v1.28.15
worker51   Ready    <none>          66m   v1.28.15
worker52   Ready    <none>          66m   v1.28.15

VI. Deploy an Nginx Application to Test the Cluster

1. Write the Nginx Manifest

Create an nginx.yaml file defining an Nginx Deployment and Service:

[root@master50:~]# cat nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27.4
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30001   
  type: NodePort

2. Create the Nginx Pods

Import the Nginx image onto every node in advance:

Shared via netdisk: nginx-v1.27.4.tar.gz
Link: https://pan.baidu.com/s/1NhNn3NtnnqUdYwenAYovzQ?pwd=yxfm  Extraction code: yxfm

ctr -n k8s.io i import nginx-v1.27.4.tar.gz

Apply nginx.yaml to create the Nginx pods:

[root@master50:~]# kubectl apply -f nginx.yaml

3. Check Pod and Service Status

[root@master50:~]# kubectl get pods,svc -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
pod/nginx-deployment-7c79c4bf97-fwmrv   1/1     Running   0          38s   10.100.203.131   worker52   <none>           <none>
pod/nginx-deployment-7c79c4bf97-x7z9w   1/1     Running   0          38s   10.100.253.136   worker51   <none>           <none>

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes      ClusterIP   10.200.0.1      <none>        443/TCP        63m   <none>
service/nginx-service   NodePort    10.200.48.223   <none>        80:30001/TCP   38s   app=nginx

4. Access the Nginx Service

Access the service through its ClusterIP:

[root@master50:~]# curl 10.200.48.223
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Access the service through a node IP and the NodePort:

[root@master50:~]# curl 10.0.0.50:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

VII. Enable kubectl Autocompletion

To make kubectl more convenient to use, enable bash autocompletion (this relies on the bash-completion package being installed):

kubectl completion bash > ~/.kube/completion.bash.inc
echo source '$HOME/.kube/completion.bash.inc' >> ~/.bashrc
source ~/.bashrc
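
Optionally, a short alias can reuse the same completion function; this is the pattern from the kubectl documentation, where `__start_kubectl` is defined by the generated completion script:

```shell
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```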