Kubernetes Binary Cluster: Multi-Node Deployment

This document walks through deploying a multi-node Kubernetes cluster from binaries, covering the configuration of the master and node machines and the use of Nginx plus Keepalived for load balancing and high availability. The steps include setting hostnames, disabling the firewall, starting services, editing configuration files, copying certificates, and checking the services.


Preface

A single-node k8s cluster must be deployed first; this is covered in an earlier blog post.
Link: https://blog.youkuaiyun.com/m0_47219942/article/details/108889931
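Before adding more nodes, it is worth confirming that the existing single-node cluster is healthy. A minimal check from the existing master (assuming kubectl is already on the PATH there):

kubectl get cs      # scheduler, controller-manager and etcd should all report Healthy
kubectl get node    # the existing nodes should show Ready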

1: Multi-Node Deployment of K8s with Binaries

1.1: Environment Overview


  • Host allocation
Hostname    IP address    Deployed services
master      20.0.0.51     apiserver, scheduler, controller-manager, etcd
master02    20.0.0.52     apiserver, scheduler, controller-manager
VIP         20.0.0.100    (floating virtual IP)
node01      20.0.0.54     kubelet, kube-proxy, docker, flannel, etcd
node02      20.0.0.56     kubelet, kube-proxy, docker, flannel, etcd
nginx01     20.0.0.55     nginx, keepalived
nginx02     20.0.0.57     nginx, keepalived
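Optionally, the mapping above can also be written into /etc/hosts on every machine so the hosts are reachable by name; a sketch based on the table (all commands in this guide still use the raw IP addresses):

cat >> /etc/hosts <<EOF
20.0.0.51 master
20.0.0.52 master02
20.0.0.54 node01
20.0.0.56 node02
20.0.0.55 nginx01
20.0.0.57 nginx02
EOF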

1.2: Operations on the master02 Node

  • Set the hostname, stop the firewall, disable SELinux, and disable NetworkManager (it must be disabled in a production environment)
[root@localhost ~]# hostnamectl set-hostname master02	'set the hostname'
[root@localhost ~]# su
[root@master02 ~]# 
[root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld	'stop and disable the firewall'
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master02 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	'disable SELinux'
[root@master02 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager	'disable NetworkManager'
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
  • On the master node, copy the kubernetes configuration files and startup scripts from the master node to the master02 node
[root@master ~]# scp -r /opt/kubernetes/ root@20.0.0.52:/opt
The authenticity of host '20.0.0.52 (20.0.0.52)' can't be established.
ECDSA key fingerprint is SHA256:0rTN1pFp1TpC4aoV9WTxZTu3FG5eISUT1khb0hgqxyA.
ECDSA key fingerprint is MD5:d1:f3:ff:8f:24:91:a4:6b:cd:0d:01:06:33:e3:9b:15.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '20.0.0.52' (ECDSA) to the list of known hosts.
root@20.0.0.52's password: 
token.csv                                              100%   84    77.7KB/s   00:00    
kube-apiserver                                         100%  909   678.3KB/s   00:00    
kube-scheduler                                         100%   94   144.0KB/s   00:00    
kube-controller-manager                                100%  483   810.4KB/s   00:00    
kube-apiserver                                         100%  184MB 105.0MB/s   00:01    
kubectl                                                100%   55MB 119.2MB/s   00:00    
kube-controller-manager                                100%  155MB 115.1MB/s   00:01    
kube-scheduler                                         100%   55MB 126.8MB/s   00:00    
ca-key.pem                                             100% 1675     1.2MB/s   00:00    
ca.pem                                                 100% 1359     1.2MB/s   00:00    
server-key.pem                                         100% 1675     2.1MB/s   00:00    
server.pem                                             100% 1643     1.8MB/s   00:00    
[root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@20.0.0.52:/usr/lib/systemd/system/
root@20.0.0.52's password: 
kube-apiserver.service                                 100%  282   186.1KB/s   00:00    
kube-controller-manager.service                        100%  317   358.7KB/s   00:00    
kube-scheduler.service                                 100%  281   487.6KB/s   00:00  
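To avoid typing the root password for every scp, SSH key authentication from master to master02 can be set up first; a minimal sketch using the default key path:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
ssh-copy-id root@20.0.0.52                 # push the public key to master02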
  • On master02, change the IP addresses in the apiserver configuration file
[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master02 cfg]# vim kube-apiserver 

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.51:2379,https://20.0.0.54:2379,https://20.0.0.56:2379 \
--bind-address=20.0.0.52 \		'change the bind IP address here'
--secure-port=6443 \
--advertise-address=20.0.0.52 \		'change the IP address here'
...omitted
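If editing by hand feels error-prone, both addresses can also be rewritten with sed; a sketch assuming the copied file still carries the original master address 20.0.0.51:

sed -i 's/--bind-address=20.0.0.51/--bind-address=20.0.0.52/' /opt/kubernetes/cfg/kube-apiserver
sed -i 's/--advertise-address=20.0.0.51/--advertise-address=20.0.0.52/' /opt/kubernetes/cfg/kube-apiserver
grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver   # verify the change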
  • Copy the etcd certificates from the master node to master02 (master02 must have the etcd certificates to communicate with etcd)
[root@master ~]# scp -r /opt/etcd/ root@20.0.0.52:/opt
  • On master02, check the etcd certificates and start the three services
[root@master02 cfg]# yum install tree -y
[root@master02 cfg]# tree /opt/etcd/
/opt/etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

3 directories, 7 files
[root@master02 cfg]# systemctl start kube-apiserver.service 
[root@master02 cfg]# systemctl status kube-apiserver.service
[root@master02 cfg]# systemctl enable kube-apiserver.service
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl status kube-controller-manager.service
[root@master02 cfg]# systemctl enable kube-controller-manager.service
[root@master02 cfg]# systemctl start kube-scheduler.service 
[root@master02 cfg]# systemctl enable kube-scheduler.service
[root@master02 cfg]# systemctl status kube-scheduler.service
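The same start/enable sequence can be shortened to a loop; a sketch using systemctl enable --now:

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl enable --now ${svc}.service   # start the service and enable it at boot
    systemctl is-active ${svc}.service      # should print "active"
done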
  • Add the environment variable and check the node status
[root@master02 cfg]# echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
[root@master02 cfg]# source /etc/profile
[root@master02 cfg]# kubectl get node
NAME        STATUS   ROLES    AGE     VERSION
20.0.0.54   Ready    <none>   4d21h   v1.12.3
20.0.0.56   Ready    <none>   4d21h   v1.12.3
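The control-plane components on master02 can also be health-checked directly; a quick sketch:

kubectl get cs   # scheduler, controller-manager and the etcd members should all report Healthy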

1.3: Deploying the Nginx Load Balancing Cluster

  • Change the hostname on both nginx hosts and disable the firewall and SELinux (only the operations on nginx01 are shown)
[root@localhost ~]# hostnamectl set-hostname nginx01	'set the hostname'
[root@localhost ~]# su 
[root@nginx01 ~]#   
[root@nginx01 ~]# systemctl stop firewalld && systemctl disable firewalld	'stop and disable the firewall'
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	'disable SELinux'
  • Configure the nginx yum repository on both nginx hosts (only nginx01 is shown)
[root@nginx01 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0
[root@nginx01 ~]# yum clean all
[root@nginx01 ~]# yum list
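gpgcheck=0 skips package signature verification; if verification is preferred, the repository can reference the signing key published by nginx.org instead, a sketch:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://nginx.org/keys/nginx_signing.key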
  • Install nginx on both hosts and enable layer-4 (TCP) forwarding (only nginx01 is shown)
[root@nginx01 ~]# yum -y install nginx	'install nginx'
[root@nginx01 ~]# vi /etc/nginx/nginx.conf 
events {
    worker_connections  1024;
}
'add this stream block'
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;	'specify the log file'

    upstream k8s-apiserver {
        server 20.0.0.51:6443;	'the master node IP address and port'
        server 20.0.0.52:6443;	'the master02 node IP address and port; 6443 is the apiserver port'
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
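Note that the stream block sits at the same level as the http block, not inside it. The nginx.org packages normally include the stream module; if a different build is in use, that can be confirmed before relying on the layer-4 proxy:

nginx -V 2>&1 | grep -o with-stream   # prints with-stream when the layer-4 proxy module is compiled in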
  • Start the nginx service
[root@nginx01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl status nginx
 nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since  2020-10-04 16:58:11 CST; 6s ago
     Docs: http://nginx.org/en/docs/
  Process: 10931 ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf (code=exited, status=0/SUCCESS)
 Main PID: 10933 (nginx)
    Tasks: 2
   CGroup: /system.slice/nginx.service
           ├─10933 nginx: master process /usr/sbin/nginx -c /etc/nginx/n...
           └─10934 nginx: worker process

10 04 16:58:11 nginx01 systemd[1]: Starting nginx - high performance....
10 04 16:58:11 nginx01 systemd[1]: Started nginx - high performance ....
Hint: Some lines were ellipsized, use -l to show in full.
[root@nginx01 ~]# netstat -ntap |grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      10933/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      10933/nginx: master 
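To confirm that the proxy really reaches an apiserver, a request can be sent through the local port 6443; a sketch (without a client certificate the apiserver is expected to answer with version information or a 401/403 JSON body, either of which shows the TCP path works):

curl -k https://127.0.0.1:6443/version   # -k skips certificate verification; any JSON reply means the request was forwarded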

1.4: Deploying Keepalived

  • Install and configure keepalived on both nginx hosts
[root@nginx01 ~]# yum -y install keepalived
[root@nginx01 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   # email recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}   

vrrp_instance VI_1 {
    state MASTER 
    interface ens33
    virtual_router_id 51 # VRRP route ID; each instance must be unique
    priority 100    # priority; the backup server is set to 90
    advert_int 1    # VRRP heartbeat advertisement interval; default is 1 second
    authentication {  
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        20.0.0.100/24 
    }   
    track_script {
        check_nginx
    }   
}

[root@nginx02 ~]# yum -y install keepalived
[root@nginx02 ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   # email recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP route ID; each instance must be unique
    priority 90    # priority; the backup server is set to 90
    advert_int 1    # VRRP heartbeat advertisement interval; default is 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.100/24
    }
    track_script {
        check_nginx
    }
}
  • Create the monitoring script, start the keepalived service, and check the VIP address
[root@nginx01 ~]# mkdir -p /usr/local/nginx/sbin/	'create the directory for the monitoring script'
[root@nginx01 ~]# vim /usr/local/nginx/sbin/check_nginx.sh	'write the monitoring script'
#!/bin/bash
# if no nginx process is running, stop keepalived so the VIP fails over to the backup
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh	'make it executable'
[root@nginx01 ~]# systemctl start keepalived	'start the service'
[root@nginx01 ~]# systemctl status keepalived
[root@nginx01 ~]# ip a	'check the IP addresses on both nginx servers'
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:39:ca:50 brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.55/24 brd 20.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 20.0.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe39:ca50/64 scope link 
       valid_lft forever preferred_lft forever   
'the VIP is on nginx01'
[root@nginx02 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a5:94:87 brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.57/24 brd 20.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1ed0:f5b9:5749:839e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
'nginx02 does not hold the VIP'
  • Verify VIP failover (run pkill nginx on nginx01, then check with ip a on nginx02)
[root@nginx01 ~]# pkill nginx	'stop the nginx processes'
[root@nginx01 ~]# systemctl status keepalived	'the keepalived service has been stopped as well'
[root@nginx02 ~]# ip a	'the VIP has now moved to nginx02'
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a5:94:87 brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.57/24 brd 20.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 20.0.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1ed0:f5b9:5749:839e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
  • Restore the VIP to nginx01
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived	'start nginx first, then keepalived; the monitoring script checks whether nginx is running, so if keepalived were started first it would be stopped again because nginx is still down'
[root@nginx01 ~]# ip a	'check again: the VIP is back on the nginx01 node'
  • On the node nodes, change the configuration files on both nodes (bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig) so that they all point at the VIP address; only the operations on node01 are shown
[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
'change all of them to the VIP'
server: https://20.0.0.100:6443
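The same change can be applied to all three files at once with sed; a sketch assuming the files currently point at the original master address 20.0.0.51:

cd /opt/kubernetes/cfg/
sed -i 's#https://20.0.0.51:6443#https://20.0.0.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
grep server: bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig   # every file should now point at the VIP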
  • Restart the services on both node nodes
[root@node1 ~]# systemctl restart kubelet
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]#  systemctl restart kube-proxy
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://20.0.0.100:6443
kubelet.kubeconfig:    server: https://20.0.0.100:6443
kube-proxy.kubeconfig:    server: https://20.0.0.100:6443
  • Check the k8s access log on nginx01
[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
20.0.0.54 20.0.0.51:6443 - [04/Oct/2020:17:25:54 +0800] 200 2301
20.0.0.54 20.0.0.51:6443 - [04/Oct/2020:17:25:54 +0800] 200 1115
20.0.0.54 20.0.0.51:6443 - [04/Oct/2020:17:25:54 +0800] 200 1115
  • Test pod creation from the master node
[root@master ~]# kubectl run nginx --image=nginx	'create an nginx test pod'
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master ~]# kubectl get pods		'check the status: the pod is being created'
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-gsmf7   0/1     ContainerCreating   0          9s
[root@master ~]# kubectl get pods		'check again after a moment: the pod has been created; it can also be seen from master02'
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-gsmf7   0/1     Running             0          16s
'ContainerCreating means the pod is still being created; Running means creation is complete'
  • View the pod logs
[root@master ~]# kubectl logs nginx-dbddb74b8-gsmf7	'viewing the pod logs fails with a permissions error'
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-5s6h7)
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous	'grant the anonymous cluster user administrator privileges'
[root@master ~]# kubectl logs nginx-dbddb74b8-gsmf7	'the logs are now accessible, but no log entries have been generated yet'
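Binding cluster-admin to system:anonymous is very broad. If only kubelet API access (logs, exec, proxy) is needed, the narrower built-in role system:kubelet-api-admin can be bound instead; an alternative sketch (the binding name is arbitrary):

kubectl create clusterrolebinding kubelet-api-anonymous --clusterrole=system:kubelet-api-admin --user=system:anonymous   # kubelet API access only, not full cluster admin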
  • Check the pod network
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE
nginx-dbddb74b8-gsmf7   1/1     Running   0          4m47s   172.17.54.3   20.0.0.54   <none>
  • Access the pod on its node to generate log entries, then view them from both master nodes
[root@node1 ~]# curl 172.17.54.3		'access the pod from the node it is running on'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
'the access generates a log entry'
'back on the master'
[root@master ~]# kubectl logs nginx-dbddb74b8-gsmf7	'view the log again from the master node; master02 can also access it'
172.17.54.1 - - [04/Oct/2020:09:41:19 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"