Installing k8s 1.28.2 on Ubuntu 22.04 with kubeadm and mounting NFS as storage

Environment:

Host assignments:

u-k8s-master   192.168.100.100
u-k8s-node-01  192.168.100.101
u-nfs          192.168.100.199

Steps

1. Initial setup after installing the OS
# Set the timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Set the root password to 123456
echo 'root:123456' | chpasswd
# Allow root login over SSH
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
systemctl restart sshd
# Rename the NIC to eth0 and disable IPv6
sed -i 's#GRUB_CMDLINE_LINUX="#GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 ipv6.disable=1#g' /etc/default/grub
update-grub
# Reboot
reboot
# After the reboot, delete the "ubuntu" user created during installation
userdel ubuntu && rm -fr /home/ubuntu
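
An optional sanity check after the reboot (not part of the original steps) to confirm the GRUB changes took effect:

ip addr show eth0    # the first NIC should now be named eth0
cat /proc/cmdline    # should contain net.ifnames=0 and ipv6.disable=1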
2. System configuration changes & required tools
# The DEBIAN_FRONTEND=noninteractive prefix stops apt from prompting about which services to restart
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
DEBIAN_FRONTEND=noninteractive apt install open-vm-tools vim lrzsz nfs-common -y
# Disable swap and set up the kernel modules / sysctls needed for traffic forwarding
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
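
A quick optional check (not in the original steps) that the modules are loaded and the sysctls are applied:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward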
3. Downloading the required software

Below are download links (current as of April 2024) for the container-related files. A fully packaged bundle is also on Baidu Netdisk: https://pan.baidu.com/s/1TzYf0uD7VYSCg8D52Om5JQ?pwd=kube (extraction code: kube). If the link expires, message me and I'll refresh it.

1. https://github.com/opencontainers/runc/releases
Download runc.amd64
2. https://github.com/containerd/containerd/releases
Download containerd (this guide uses containerd-1.6.31-linux-amd64.tar.gz)
3. https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Download containerd.service
4. https://github.com/containernetworking/plugins/releases
Download cni-plugins-linux-amd64-v1.4.1.tgz
5. config.toml — you can generate a default file yourself and edit it by comparison (see the sketch after the install commands below); for convenience I've uploaded my modified copy.
6. https://github.com/kubernetes-sigs/cri-tools/releases
Download crictl-v1.28.0-linux-amd64.tar.gz
7. https://github.com/flannel-io/flannel/releases
Download kube-flannel.yml                 # see Note 1
# Note: my kube-flannel.yml has been modified to pull images from a mirror inside China for faster downloads; you may prefer to download the original from the official source.

Assume all of these files have been placed under /root/kubernetes.

Start the installation (run the following from /root/kubernetes):
install -m 755 /root/kubernetes/runc.amd64 /usr/local/sbin/runc
tar Cxzvf /usr/local /root/kubernetes/containerd-1.6.31-linux-amd64.tar.gz
mv containerd.service /usr/lib/systemd/system/containerd.service 
chmod +x /usr/lib/systemd/system/containerd.service
mkdir -p /opt/cni/bin 
tar Cxzvf /opt/cni/bin /root/kubernetes/cni-plugins-linux-amd64-v1.4.1.tgz
mkdir /etc/containerd  -p
mv config.toml /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd --now
tar zxvf crictl-v1.28.0-linux-amd64.tar.gz -C /usr/local/bin
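
If you would rather generate config.toml yourself instead of using my pre-modified copy, the following is only a sketch of the typical edits needed for kubeadm (systemd cgroup driver and a mirrored pause image); the default sandbox image string may differ between containerd versions, so compare against the generated file before editing:

containerd config default > /etc/containerd/config.toml
# Use the systemd cgroup driver so containerd matches the kubelet default
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Point the sandbox (pause) image at the Aliyun mirror; adjust the original tag if your default differs
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#' /etc/containerd/config.toml
systemctl restart containerd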

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false
EOF
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install kubeadm=1.28.2-00 kubectl=1.28.2-00 kubelet=1.28.2-00 -y
systemctl enable kubelet

At this point you'll notice the bundle still contains a kubeadm binary that hasn't been used. This is a kubeadm I recompiled from source with the certificate validity extended to ten years, so the certificates don't have to be rotated every year.

mv kubeadm /usr/bin/kubeadm -f
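
An optional check (not in the original steps) that the replaced binary is picked up and reports the expected version:

kubeadm version -o short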

At this point all the required tools are in place; I recommend rebooting the host.

Cluster initialization

Initialize the cluster with the following command:

kubeadm init \
 --kubernetes-version=1.28.2 \
 --apiserver-cert-extra-sans 0.0.0.0 \
 --image-repository registry.aliyuncs.com/google_containers \
 --pod-network-cidr 172.16.0.0/16 \
 --service-cidr 172.17.0.0/16

Note 1: the --pod-network-cidr value (172.16.0.0/16) must match the Network value in kube-flannel.yml (around line 91 of that file).
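
For reference, the relevant part of kube-flannel.yml looks roughly like this (line numbers vary between releases); the Network field is what must match the CIDR above:

  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }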
When output similar to the following appears on screen, copy and run it; you can then use kubectl to inspect cluster resources:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node

At this point k8s-master shows NotReady because no network plugin has been installed yet. Install the network plugin with:

kubectl apply -f kube-flannel.yml

Once all the pods are up, running kubectl get node again will show that the master is Ready.
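
One optional way to watch the pods come up:

watch kubectl get pods -A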

Node installation

Everything is identical to the master setup, except that the final kubeadm init step is replaced by the join command printed at the end of the init output, which looks something like this:

kubeadm join 192.168.100.100:6443 --token abcdef.4wh0nv4v0oredvkg \
	--discovery-token-ca-cert-hash sha256:9b3f8e99693e2e07f0a40858fbd2231e0f5728430016d81dfa0ce92ad93ee1ad
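
If the token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master with:

kubeadm token create --print-join-command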
Installing NFS

On the separate machine 192.168.100.199 (with the same base initialization already done), run the following to install the NFS server:

apt-get install nfs-kernel-server
mkdir -p /data/nfs
chmod 777 /data/nfs/
cat <<EOF | sudo tee /etc/exports
/data/nfs 192.168.100.0/24(rw,sync,no_subtree_check)
EOF
exportfs -a
systemctl enable nfs-kernel-server
reboot
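
After the reboot, you can optionally verify the export from any k8s node (showmount comes from the nfs-common package installed earlier):

showmount -e 192.168.100.199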

Below are a few YAML manifests.
nfs.yaml:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: nfs
    pod-security.kubernetes.io/enforce: privileged
  name: nfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: nfs-provisioner # or choose another name; must match the deployment's PROVISIONER_NAME env
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-chengdu.aliyuncs.com/kube_cn/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: 192.168.100.199 # change to your NFS server IP
            - name: NFS_PATH
              value: /data/nfs # change to your NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.199 # change to your NFS server IP
            path: /data/nfs # change to your NFS export path

After the above is done, run kubectl apply -f nfs.yaml and check that the pod comes up healthy:

root@k8s-master:~#  kubectl -n nfs-provisioner get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6794d866b5-bp5pr   1/1     Running   0          44m

After that you can apply the following (updated 2024-09-12 to add the missing kind field):
test-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Applying it looks like this:

root@k8s-master:~/kubernetes/nfs-subdir-external-provisioner/deploy# kubectl create -f test-claim.yaml
persistentvolumeclaim/test-claim configured
root@k8s-master:~/kubernetes/nfs-subdir-external-provisioner/deploy# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-a49cb8eb-7c6a-455d-8d7a-eb2376b23f2a   1Mi        RWX            nfs-client     50m

Now check the export directory on the NFS server; a new directory has appeared:

root@nfs:/data/nfs# pwd
/data/nfs
root@nfs:/data/nfs# ll
total 12
drwxrwxrwx 3 root   root    4096 Apr  9 18:13 ./
drwxr-xr-x 3 root   root    4096 Apr  9 17:36 ../
drwxrwxrwx 2 nobody nogroup 4096 Apr  9 18:14 default/

This confirms that NFS-backed provisioning works. You can run

kubectl delete -f test-claim.yaml

to delete the PVC just created, and then mark the StorageClass as the default:

kubectl patch storageclasses.storage.k8s.io nfs-client -p \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Next, install KubeSphere

wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1-patch.0/cluster-configuration.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1-patch.0/kubesphere-installer.yaml

They need a few modifications:

# cluster-configuration.yaml, line 15: change to
local_registry: "registry.cn-beijing.aliyuncs.com"
# cluster-configuration.yaml, lines 135-136: uncomment and change to
    prometheus:
      replicas: 1
# kubesphere-installer.yaml, line 295: change to
image: registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1-patch.0

Then apply them in order:

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Then just wait for the installation to finish.
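
To follow the installer's progress while waiting, the log command from the KubeSphere documentation can be used (the pod label may differ slightly between versions):

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f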
