Installing Kubernetes v1.28.2

Common commands

Disable the firewall and SELinux

# Stop the firewall and keep it disabled across reboots
systemctl disable --now firewalld

# Disable SELinux permanently (takes effect after reboot)...
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# ...and turn it off for the current session
setenforce 0

Time synchronization

yum install -y chrony
sed -ri 's/^server.*/#&/' /etc/chrony.conf
cat >> /etc/chrony.conf << EOF
server ntp1.aliyun.com iburst
EOF
systemctl restart chronyd
chronyc sources

# Set the timezone (skip if it is already Asia/Shanghai)
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
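A quick sanity check that synchronization and the timezone took effect, using standard chrony and systemd tools:

chronyc tracking                   # shows the current sync source and offset
timedatectl | grep -i 'time zone'  # should report Asia/Shanghai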

Disable swap (mandatory for Kubernetes)

# Turn off swap for the current session
swapoff -a
# Comment out swap entries so it stays off after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab
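Verify that swap is really gone:

free -h | grep -i swap   # the Swap line should show 0B everywhere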

Tune Linux kernel parameters: enable bridge traffic filtering and IP forwarding

# Load the bridge-netfilter module first, otherwise the net.bridge.*
# settings below cannot be applied
modprobe br_netfilter
lsmod | grep br_netfilter  # verify the module is loaded

cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the new file; sysctl --system reloads every config file, including this one
sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl --system

Install IPVS forwarding support

In Kubernetes, a Service can be proxied by one of two models: iptables or IPVS. Of the two, IPVS performs better, but kube-proxy defaults to iptables, so the IPVS kernel modules have to be loaded manually.
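Note that loading the modules below does not by itself switch kube-proxy to IPVS mode; that is a kube-proxy setting. A minimal sketch of flipping it after the cluster is up (it can also be set through a kubeadm config file at init time):

# Set mode: "ipvs" in the kube-proxy ConfigMap
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy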

# Install dependencies
yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

# On CentOS 7's 3.10 kernel the conntrack module is nf_conntrack_ipv4;
# on kernels 4.19 and later it has been renamed to nf_conntrack
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script to load the modules now
/etc/sysconfig/modules/ipvs.modules

# Verify the ipvs modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
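The /etc/sysconfig/modules mechanism above is CentOS-specific. On any systemd-based distribution, an equivalent way to load the same modules at boot is a modules-load.d drop-in; a sketch (substitute nf_conntrack_ipv4 for nf_conntrack on pre-4.19 kernels):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load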

Install Docker and containerd

1. dockerd ultimately calls containerd's API anyway: containerd is the intermediary between dockerd and runC, so starting the docker service also starts the containerd service.

2. Since v1.24.0, Kubernetes no longer uses dockershim; containerd replaces it as the container runtime endpoint, which is why containerd is needed. Installing Docker below pulls in containerd automatically, so Docker serves only as a client here; the actual container engine is containerd.
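Once Docker is installed (next step), this relationship is easy to observe: a container started through Docker shows up in containerd under its moby namespace. A small demonstration, assuming any image such as nginx:

docker run -d --name demo nginx
ctr --namespace moby containers list   # the same container, seen from containerd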

# Create the /etc/modules-load.d/containerd.conf config file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Fetch the Aliyun-mirrored Docker CE yum repository
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install the yum-config-manager tool (yum-utils)
yum -y install yum-utils

# Add the yum repository (equivalent to the wget above; adding it once is enough)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce (latest version, 24.0.7 at the time of writing; a specific version can be pinned instead)
yum install -y docker-ce
# Start Docker and enable it at boot
systemctl enable --now docker

# Show the version number
docker --version
# Show detailed version information
docker version

# Configure a Docker registry mirror
# Edit /etc/docker/daemon.json (create it if it does not exist),
# add the following content, then restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Restart so the mirror setting takes effect
systemctl restart docker

# Check both services
systemctl status docker containerd

# Additional containerd configuration
# Generate containerd's default config file
containerd config default > /etc/containerd/config.toml

# Set containerd's cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml

# Point sandbox_image at the Aliyun google_containers mirror
sed -i "s#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.9#g" /etc/containerd/config.toml

# Start containerd and enable it at boot
systemctl enable --now containerd
# Check containerd's status
systemctl status containerd
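Since kubelet will talk to containerd over CRI, it is convenient to point crictl (installed below alongside the Kubernetes packages, from cri-tools) at the same socket for debugging; a minimal sketch:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info   # should print the containerd runtime status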

Configure the Kubernetes yum repository (all nodes)

[root@node1 ~]# cat  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

Install the Kubernetes components (all nodes)

  • kubeadm: the command used to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster; used to view, create, update, and delete resources.
Install on every node, choosing the version that fits your setup:
[root@node1 ~]# yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
Enable kubelet at boot on every node:
[root@node1 ~]# systemctl enable kubelet

Initialize the Kubernetes cluster

Run this only on the master node. Here 192.168.5.10 is the master's IP and cluster-endpoint is the master's hostname (inline comments after the line-continuation backslashes would break the command, so they are given here instead):
kubeadm init \
  --apiserver-advertise-address=192.168.5.10 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=172.20.0.0/16

Flag reference:
--apiserver-advertise-address  the address the API server advertises (the master node's IP)
--image-repository             the default registry k8s.gcr.io is unreachable from China, so the Aliyun mirror is specified instead
--kubernetes-version           the Kubernetes version, matching the packages installed above
--service-cidr                 the cluster-internal virtual Service network, the unified entry point to Pods
--pod-network-cidr             the Pod network; must match the CNI component's YAML deployed below
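Two assumptions in the command above are worth making explicit: cluster-endpoint must resolve to the master's IP on every node, and kubectl needs the admin kubeconfig once init succeeds. A sketch using the example IP above:

# On every node: make cluster-endpoint resolve to the master
echo "192.168.5.10 cluster-endpoint" >> /etc/hosts

# On the master, after kubeadm init completes (kubeadm itself also prints these steps):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config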

If initialization fails, troubleshoot and then re-initialize after a reset:

kubeadm reset
rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*
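kubeadm reset does not flush iptables or IPVS rules on its own (it prints a reminder to that effect), so when re-initializing it is safest to clear them as well:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear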

Join nodes

Use the join command printed at the end of kubeadm init; just copy and run it. The one below is from my own deployment, so look for output of this shape:
kubeadm join 192.168.5.10:6443 --token kdy4ka.jz5otwd1l3l2of5v \
    --discovery-token-ca-cert-hash sha256:d40fe1c0af2bef8143106d27d418a4b7026f1f79a6dfe30cb4691d35755719ad

First, check whether the token has expired:

[root@node1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
kdy4ka.jz5otwd1l3l2of5v   7h          2023-10-03T19:46:32+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

If the token has expired, create a new one:

kubeadm token create --print-join-command
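If you ever need to reassemble the join command by hand, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA with the standard openssl recipe:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'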

Assign node roles

# Label a worker node (run wherever kubectl is configured, typically the master)
[root@node1 ~]# kubectl label node node2 node-role.kubernetes.io/worker=worker

# Check from the master
[root@node1 ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   16h   v1.28.2
node2   Ready    worker          15h   v1.28.2
node3   Ready    worker          15h   v1.28.2

Install the network plugin

Calico runs a virtual router (vRouter) on every node to handle data forwarding, and each vRouter uses the BGP protocol to propagate the routes of the workloads running on it across the whole Calico network. The Calico project also implements the Kubernetes network policy API, providing ACL functionality.


# Download the calico manifest
wget --no-check-certificate https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
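# Note: calico.yaml ships with CALICO_IPV4POOL_CIDR commented out, and the
# default pool is 192.168.0.0/16. Since kubeadm init above used
# --pod-network-cidr=172.20.0.0/16, uncomment and adjust the pool to match
# before applying:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "172.20.0.0/16"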
# Install calico
kubectl apply -f calico.yaml
# Verify it came up
kubectl get pod -A | grep calico
# Expected output:
# kube-system calico-kube-controllers-577f77cb5c-s6zfl 1/1 Running 0 15h
# kube-system calico-node-7gsfr 1/1 Running 0 15h
# kube-system calico-node-hb2k8 1/1 Running 0 15h
# kube-system calico-node-xt4bl 1/1 Running 0 15h

Label the worker nodes

kubectl label nodes k8s-node1 node-role.kubernetes.io/worker=worker
kubectl label nodes k8s-node2 node-role.kubernetes.io/worker=worker

k8s-node1 and k8s-node2 are the worker hostnames; substitute your own.

Test and verify the cluster

[root@k8s-master ~]#  kubectl cluster-info
Kubernetes control plane is running at https://10.0.0.105:6443
CoreDNS is running at https://10.0.0.105:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   39m   v1.28.2
k8s-node1    Ready    worker          38m   v1.28.2
k8s-node2    Ready    worker          38m   v1.28.2

[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-658d97c59c-qjdrt   1/1     Running   0          11m
kube-system   calico-node-5msx4                          1/1     Running   0          11m
kube-system   calico-node-5q749                          1/1     Running   0          11m
kube-system   calico-node-z9gtq                          1/1     Running   0          11m
kube-system   coredns-66f779496c-2v6l9                   1/1     Running   0          40m
kube-system   coredns-66f779496c-kj6db                   1/1     Running   0          40m
kube-system   etcd-k8s-master                            1/1     Running   7          41m
kube-system   kube-apiserver-k8s-master                  1/1     Running   6          41m
kube-system   kube-controller-manager-k8s-master         1/1     Running   3          41m
kube-system   kube-proxy-77vqw                           1/1     Running   0          40m
kube-system   kube-proxy-cmnxt                           1/1     Running   0          39m
kube-system   kube-proxy-ksrmj                           1/1     Running   0          39m
kube-system   kube-scheduler-k8s-master                  1/1     Running   3          41m

Get a user token (for the dashboard installed below)

[root@node1 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
admin-user-token-vfj8s             kubernetes.io/service-account-token   3      10s
default-token-w8jgn                kubernetes.io/service-account-token   3      15m
kubernetes-dashboard-certs         Opaque                                0      15m
kubernetes-dashboard-csrf          Opaque                                1      15m
kubernetes-dashboard-key-holder    Opaque                                2      15m
kubernetes-dashboard-token-xjt6l   kubernetes.io/service-account-token   3      15m

# Describe the secret named admin-user-token-vfj8s (kubectl matches on the name prefix)

[root@node1 ~]# kubectl describe secret admin-user -n kubernetes-dashboard
Name:         admin-user-token-vfj8s
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 36b4e5f5-2f46-488d-960c-899cb4309d50

Type:  kubernetes.io/service-account-token

Data
====

ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkhMMXFCMGVaVHVrV0hHampTRExxdHlMcjBvTVlXRHd0Vl9hc29lSXU0TG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZmajhzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNmI0ZTVmNS0yZjQ2LTQ4OGQtOTYwYy04OTljYjQzMDlkNTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.ZoE_jz20Jf3ImLl9BwTmk5VM7Y_VacRI3ZaTbaD8ipdsDV7CCBjE9edrVtQ-L86HOU0Qb_SA3HHqO0wtGagfAVHahHJaNLcr-MAOURWmIyLg8A2K07OT_5Qr9BJC-xxFym25sOc04Cyj-Z86-LsECSbIKLhUwsxXSzAQKuPmD471MMO-_JL-FWAJ-3jdZ8E4uAMD-mhJrKyORqMgoRxPJXPgwkzd2PRPrHoiaunbxiGo6qWhONGiMITjfCW77or32TbPIDuxy94j64tWvJyVDbmyGq1J0WeOzjfobdnbyM6BRGdjP86F_P-DyTXWSfOJHbAVYcgpDcqYO_DImtg8_g
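Heads-up: the secret listing above comes from an older (1.20-era) cluster. Since Kubernetes 1.24, ServiceAccount token Secrets are no longer created automatically, so on this 1.28.2 cluster you create the account, grant it access, and mint a token explicitly (admin-user is just the account name assumed here):

kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user   # prints the login token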

Install the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

After the apply finishes, it creates a namespace called kubernetes-dashboard and then the related Service, Secret, and ConfigMap objects, along with the all-important Deployment.

Check the status

kubectl get pods,svc -n kubernetes-dashboard

Log in to the GUI

kubectl patch svc kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]' -n kubernetes-dashboard
Check the assigned port: the Service now exposes an external port, the 30306 in 443:30306/TCP. Using the user token obtained above, log in at https://<any-node-IP>:30306.
ubuntu@master:~$ kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.43.174.125   <none>        8000/TCP        92s
kubernetes-dashboard        NodePort    10.43.236.108   <none>        443:30306/TCP   92s

Install Prometheus

Choose the branch matching your Kubernetes version:

git clone -b release-0.13 https://github.com/prometheus-operator/kube-prometheus.git

Replace the image addresses (or, if you would rather not edit them yourself, download my pre-modified copy):
https://download.youkuaiyun.com/download/weixin_43921427/89302981

cd manifests

# List every image reference; in the list below, each original image is
# followed by its Huawei Cloud SWR mirror replacement
ls | xargs -I {} grep -iH "image:" {}

# alertmanager-alertmanager.yaml
quay.io/prometheus/alertmanager:v0.26.0
swr.cn-north-4.myhuaweicloud.com/ctl456/alertmanager:v0.26.0

# blackboxExporter-deployment.yaml
quay.io/prometheus/blackbox-exporter:v0.24.0
swr.cn-north-4.myhuaweicloud.com/ctl456/blackbox-exporter:v0.24.0

jimmidyson/configmap-reload:v0.5.0
swr.cn-north-4.myhuaweicloud.com/ctl456/configmap-reload:v0.5.0

quay.io/brancz/kube-rbac-proxy:v0.14.2
swr.cn-north-4.myhuaweicloud.com/ctl456/kube-rbac-proxy:v0.14.2

# grafana-deployment.yaml
grafana/grafana:9.5.3
swr.cn-north-4.myhuaweicloud.com/ctl456/grafana:9.5.3

# kubeStateMetrics-deployment.yaml
registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.9.2
swr.cn-north-4.myhuaweicloud.com/ctl456/kube-state-metrics:v2.9.2

quay.io/brancz/kube-rbac-proxy:v0.14.2
swr.cn-north-4.myhuaweicloud.com/ctl456/kube-rbac-proxy:v0.14.2

# nodeExporter-daemonset.yaml
quay.io/prometheus/node-exporter:v1.6.1
swr.cn-north-4.myhuaweicloud.com/ctl456/node-exporter:v1.6.1

quay.io/brancz/kube-rbac-proxy:v0.14.2
swr.cn-north-4.myhuaweicloud.com/ctl456/kube-rbac-proxy:v0.14.2

# prometheusAdapter-deployment.yaml
registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1
swr.cn-north-4.myhuaweicloud.com/ctl456/prometheus-adapter:v0.11.1

# prometheusOperator-deployment.yaml
quay.io/prometheus-operator/prometheus-operator:v0.67.1
swr.cn-north-4.myhuaweicloud.com/ctl456/prometheus-operator:v0.67.1

quay.io/brancz/kube-rbac-proxy:v0.14.2
swr.cn-north-4.myhuaweicloud.com/ctl456/kube-rbac-proxy:v0.14.2

# prometheus-prometheus.yaml
quay.io/prometheus/prometheus:v2.46.0
swr.cn-north-4.myhuaweicloud.com/ctl456/prometheus:v2.46.0
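If you prefer to script the substitutions instead of editing each file, a sed sketch covering every mapping in the list above (run from the manifests directory):

sed -i \
  -e 's#quay.io/prometheus/#swr.cn-north-4.myhuaweicloud.com/ctl456/#g' \
  -e 's#quay.io/brancz/#swr.cn-north-4.myhuaweicloud.com/ctl456/#g' \
  -e 's#quay.io/prometheus-operator/#swr.cn-north-4.myhuaweicloud.com/ctl456/#g' \
  -e 's#registry.k8s.io/kube-state-metrics/#swr.cn-north-4.myhuaweicloud.com/ctl456/#g' \
  -e 's#registry.k8s.io/prometheus-adapter/#swr.cn-north-4.myhuaweicloud.com/ctl456/#g' \
  -e 's#grafana/grafana:#swr.cn-north-4.myhuaweicloud.com/ctl456/grafana:#g' \
  -e 's#jimmidyson/configmap-reload:#swr.cn-north-4.myhuaweicloud.com/ctl456/configmap-reload:#g' \
  *.yaml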

Deploy

kubectl apply --server-side -f manifests/setup
kubectl wait \
	--for condition=Established \
	--all CustomResourceDefinition \
	--namespace=monitoring
kubectl apply -f manifests/
# Check that everything is running
kubectl get svc,pod -n monitoring

# Change the Service type to NodePort
kubectl edit svc grafana -n monitoring


# Delete the NetworkPolicies. kube-prometheus ships NetworkPolicy objects that
# only allow traffic between the monitoring components themselves, which blocks
# external NodePort access until they are removed (or loosened to allow it)
kubectl -n monitoring delete networkpolicy --all
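If deleting the policies feels too heavy-handed, the kube-prometheus project itself documents kubectl port-forward for access, which works without touching them:

kubectl -n monitoring port-forward svc/grafana 3000:3000
# then browse to http://localhost:3000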


# Grafana default credentials: admin / admin
