Deploying Kubernetes with kubeadm
Environment preparation
Provision three virtual machines running CentOS 7.6.
Software | Version |
---|---|
Linux | CentOS Linux release 7.6.1810 (Core) |
docker | 20.+ |
kubelet | 1.23.6 |
kubeadm | 1.23.6 |
kubectl | 1.23.6 |
Host roles
Host | Role |
---|---|
192.168.1.10 | master,NTP |
192.168.1.11 | node |
192.168.1.12 | node |
Configure /etc/hosts and the hostname
192.168.1.10 master.example.com master
192.168.1.11 node1.example.com node1
192.168.1.12 node2.example.com node2
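The three entries above can be staged in one file and then appended on every node; a small sketch (the scratch path is illustrative):

```shell
# Stage the shared entries once; append the file to /etc/hosts on each node.
hosts_snippet=$(mktemp)
cat <<'EOF' > "$hosts_snippet"
192.168.1.10 master.example.com master
192.168.1.11 node1.example.com node1
192.168.1.12 node2.example.com node2
EOF
# On every node: cat "$hosts_snippet" >> /etc/hosts
```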
Note: apply on all nodes.
Set the hostname on each node:
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
Disable the firewall
Note: apply on all nodes.
systemctl stop firewalld
systemctl disable firewalld
Install tools
Note: apply on all nodes.
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data ipvsadm lvm2 -y
Configure resource limits
Note: apply on all nodes.
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
SELinux settings
Note: apply on all nodes.
setenforce 0   # disable immediately for the current boot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux   # disable permanently
Disable swap
Note: apply on all nodes.
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
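The sed pattern comments out only the lines containing " swap "; a dry run against a copy (the sample fstab below is illustrative) shows the effect:

```shell
# Illustrative fstab copy; only the swap line should end up commented.
tmp=$(mktemp)
cat <<'EOF' > "$tmp"
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$tmp"
cat "$tmp"
```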
Upgrade the kernel
Note: optional; if you do upgrade, apply on all nodes.
See the following link for reference:
https://blog.youkuaiyun.com/gswcfl/article/details/131985620?spm=1001.2014.3001.5502
Configure passwordless SSH from the master node
ssh-keygen -t rsa   # press Enter three times to accept the defaults
for i in master node1 node2;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
Kernel parameters
Note: apply on all nodes.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
#apply the changes:
sysctl --system && sysctl -p /etc/sysctl.d/k8s.conf
Parameter notes:
- net.ipv4.ip_forward: allow the kernel to forward IP packets between network interfaces
- net.bridge.bridge-nf-call-iptables: pass bridged IPv4 traffic through iptables
- net.bridge.bridge-nf-call-ip6tables: pass bridged IPv6 traffic through ip6tables
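Note that the two bridge-nf keys only exist once the br_netfilter kernel module is loaded; a config sketch to load it now and on every boot:

```shell
# Load br_netfilter immediately and persist it across reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```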
Time synchronization
Server-side configuration (on the master, 192.168.1.10)
yum -y install chrony
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
#If the hosts cannot reach the Internet, comment out the lines above and add the two lines below, making this host its own time source
server 192.168.1.10 iburst
allow 192.168.1.0/24 #allow other nodes on this subnet to sync from this server
# Serve time even if not synchronized to a time source.
local stratum 10
Start the service
systemctl start chronyd
systemctl enable chronyd
Client configuration (on node1 and node2)
yum -y install chrony
vim /etc/chrony.conf
#comment out all existing server lines and add the following
server 192.168.1.10 iburst
Start the service
systemctl start chronyd
systemctl enable chronyd
Verify the service
Check synchronization status on both the server and clients:
chronyc sources
Configure an offline yum repository
Note: all nodes need kubelet-1.23.6, kubeadm-1.23.6, and kubectl-1.23.6 installed.
Download the required RPM packages on a server with Internet access.
Configure the online yum repo (on the Internet-connected host):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
#http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#download kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 with all dependencies
#to build a repo for another Kubernetes version, just change the version numbers
mkdir kubernetes
repotrack kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 -p ./kubernetes
#generate the repodata metadata for the repository
#the yum repository is now complete; point the offline nodes at it with a .repo file
createrepo -v kubernetes
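The .repo file for the offline nodes is not shown above; a sketch, assuming the kubernetes directory is copied to /opt/kubernetes on each node (the path and repo id are assumptions):

```shell
# Point yum at the local directory; skip GPG checks for the offline mirror.
cat <<'EOF' > /etc/yum.repos.d/kubernetes-local.repo
[kubernetes-local]
name=Kubernetes local
baseurl=file:///opt/kubernetes
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache
```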
Install Docker
Note: install Docker on all nodes; mind the Docker-to-Kubernetes version compatibility.
Download the offline Docker package, version 20.10.23: https://download.youkuaiyun.com/download/gswcfl/88215719
Upload docker-20.10.23.tar.gz to every host in the Kubernetes cluster and run:
tar -xvf docker-20.10.23.tar.gz
bash install-docker.sh
systemctl daemon-reload && systemctl restart docker
Configure daemon.json
cat << EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://n5jclonh.mirror.aliyuncs.com"],
"exec-opts":["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
Install kubelet, kubeadm, and kubectl
Note: install on all nodes.
#install kubelet kubeadm kubectl
yum -y install kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
#enable kubelet at boot
systemctl enable kubelet
Prepare the Kubernetes images
Note: all nodes need the image files; a Harbor registry makes this much easier.
Download the offline kubernetes-v1.23.6 image bundle: https://download.youkuaiyun.com/download/gswcfl/88438095
Upload kubernetes-1.23.6.zip to every cluster host and load the images:
#unpack the offline images
unzip kubernetes-1.23.6.zip
cd kubernetes-1.23.6
#load all images
bash load.sh
Initialize the master node
Note: run this on the master node only.
[root@k8smaster ~]# kubeadm init \
--apiserver-advertise-address=192.168.1.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.10:6443 --token lh1spe.19iqzkrlt24nwh2g \
--discovery-token-ca-cert-hash sha256:cb25dd3a87b5392ce5a1b057192aed851b48f8b222ff64880ee1e407844a765e
[root@k8smaster ~]#
[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Tip: kubeadm reset reverts the master node configuration if you need to start over.
Check component status
[root@k8smaster ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
#This happens because kube-controller-manager.yaml and kube-scheduler.yaml set --port=0, which disables the health port that kubectl probes; comment that line out in both files (do this on every master node).
vim /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
# - --port=0
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
# - --port=0
#restart kubelet on every master
[root@k8smaster ~]# systemctl restart kubelet.service
[root@k8smaster ~]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
etcd-0 Healthy {"health":"true"}
controller-manager Healthy ok
scheduler Healthy ok
Install the CNI network plugin
Check node status on the master:
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady control-plane,master 123m v1.23.6
*NotReady: the CNI network plugin is not yet installed
*master: one control-plane node
Configure Calico on the master node
calico.yaml
https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
Alternatively, download a pre-modified calico.yaml from this link (https://download.youkuaiyun.com/download/gswcfl/88438354); the addresses and image locations are already changed there.
If you download from the upstream URL, make the following changes.
Around line 4800, set the pod CIDR to match --pod-network-cidr:
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
Change the image registry prefix:
sed -i 's/docker.io\///g' calico.yaml
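A quick way to confirm what the sed does (the sample line mirrors the image references in calico.yaml):

```shell
# The sed strips only the docker.io/ registry prefix from image names.
f=$(mktemp)
echo 'image: docker.io/calico/node:v3.26.1' > "$f"
sed 's/docker.io\///g' "$f"
```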
Apply calico.yaml
kubectl apply -f calico.yaml
Join the worker nodes
Check whether the token has expired:
kubeadm token list
#generate a new token
kubeadm token create
Get the discovery-token-ca-cert-hash value:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
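With the token and hash in hand, the full join command can be assembled like this (the token and hash values below are taken from the sample init output above as placeholders; `kubeadm token create --print-join-command` also prints the whole command in one step):

```shell
# Substitute the real values from the two commands above.
APISERVER="192.168.1.10:6443"
TOKEN="lh1spe.19iqzkrlt24nwh2g"                                               # placeholder token
CA_HASH="cb25dd3a87b5392ce5a1b057192aed851b48f8b222ff64880ee1e407844a765e"   # placeholder hash
echo "kubeadm join ${APISERVER} --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH}"
```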
Note: once Calico is applied, the master node changes from NotReady to Ready.
Run the join command on each worker node, in the form:
kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Set the node role
Newly joined nodes show the role <none> by default; label them as node:
kubectl label node node1 kubernetes.io/role=node
Note: apply this label to every newly joined node.
Install kubectl on the worker nodes
- Copy /etc/kubernetes/admin.conf from the master to /etc/kubernetes on each host that should run kubectl:
scp /etc/kubernetes/admin.conf root@k8s-node1:/etc/kubernetes
- Configure the environment variable:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile
Install the Dashboard
Prepare the yaml files
Download dashboard and dashboard-admin from this link (https://download.youkuaiyun.com/download/gswcfl/88438354).
Unzip the two yaml files: dashboard-v2.5.0.yaml and dashboard-admin.yaml.
#create the dashboard service
kubectl create -f dashboard-v2.5.0.yaml
#bind the admin account permissions
kubectl apply -f dashboard-admin.yaml
#check the pod, deployment, and svc status
#this also shows the dashboard's NodePort
kubectl get pods,deployment,svc -n kubernetes-dashboard
Generate a certificate
Without replacing the certificate, browsers will show a security warning.
#create a working directory:
mkdir key && cd key
#generate the private key
openssl genrsa -out dashboard.key 2048
#the CN is the node1 address because access is via NodePort; if you access via the apiserver, use the master IP instead
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.1.11'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
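The certificate steps can be rehearsed in a scratch directory and the subject CN checked before replacing the cluster secret; a sketch that also adds -days 365 so the certificate outlives openssl's 30-day default (the CN is the node1 address used above):

```shell
# Self-contained rehearsal of the certificate generation, in a temp dir.
dir=$(mktemp -d)
openssl genrsa -out "$dir/dashboard.key" 2048
openssl req -new -out "$dir/dashboard.csr" -key "$dir/dashboard.key" -subj '/CN=192.168.1.11'
openssl x509 -req -in "$dir/dashboard.csr" -signkey "$dir/dashboard.key" -out "$dir/dashboard.crt" -days 365
# The subject CN should match the address you will open in the browser.
openssl x509 -in "$dir/dashboard.crt" -noout -subject
```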
#delete the existing certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
#create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
#find the pod name
kubectl get pod -n kubernetes-dashboard
#restart the pod by deleting it (substitute your own pod name)
kubectl delete pod kubernetes-dashboard-7b544877d5-2xqcr -n kubernetes-dashboard
The dashboard is now reachable; substitute your own node address and NodePort (look up the port with kubectl get svc -n kubernetes-dashboard):
https://node1:nodeport
Find the secret named like dashboard-admin-token-pmqqw:
kubectl get secret -n kubernetes-dashboard
Show the token:
kubectl describe secret dashboard-admin-token-pmqqw -n kubernetes-dashboard
You can now log in to the dashboard.