Kubernetes (k8s) cluster deployment

Architecture:
(architecture diagram not included)

Node plan:
172.20.10.2	master01
172.20.10.3	master02
172.20.10.4	master03
172.20.10.5	node01
172.20.10.100	k8s-lb
Base environment:
OS:	CentOS 7.4
Software:	Kubernetes 1.18.0
	Docker 20.10.8
Base environment configuration:
1. Set the hostname --- run the matching command on each master and node host
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
2. Configure the hosts mapping on all hosts
(1)	vim /etc/hosts
172.20.10.2	master01
172.20.10.3	master02
172.20.10.4	master03
172.20.10.5	node01
172.20.10.100	k8s-lb
(2)	Test the mapping
for host in master01 master02 master03 node01 k8s-lb;do ping -c 1 $host;done
3. Disable the firewall and SELinux
(1)	systemctl stop firewalld 
systemctl disable firewalld
(2)	setenforce 0 
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
4. Disable swap
swapoff -a && sysctl -w vm.swappiness=0
vim /etc/fstab
#/dev/mapper/centos-swap	swap	swap	defaults		0 0
reboot
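An equivalent non-interactive way to comment out the swap entry and confirm swap is off (a hedged alternative to the vim edit above; the sed pattern assumes the standard CentOS fstab swap line):
sed -ri 's@^([^#].*\sswap\s+swap\s.*)$@#\1@' /etc/fstab
free -m    # the Swap line should read 0 after swapoff -a, and stay 0 after the reboot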
5. Time synchronization
yum install chrony -y 
systemctl enable chronyd 
systemctl start chronyd 
chronyc sources
6. Configure ulimit
ulimit -SHn 65535
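ulimit -SHn only applies to the current shell; a minimal sketch for making the limit persistent through pam_limits (enabled by default on CentOS 7):
cat >> /etc/security/limits.conf << EOF
* soft nofile 65535
* hard nofile 65535
EOF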
7. Configure kernel parameters
(1)	cat >> /etc/sysctl.d/k8s.conf << EOF 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
EOF
(2)	sysctl -p /etc/sysctl.d/k8s.conf
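The two bridge-nf parameters only exist once the br_netfilter module is loaded; otherwise sysctl -p reports an error. A minimal sketch (module and path names are the stock CentOS 7 ones):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # also load it on boot
sysctl -p /etc/sysctl.d/k8s.conf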
8. Passwordless SSH between the master nodes (run on master01; repeat on the other masters if full mutual trust is needed)
ssh-keygen 
ssh-copy-id root@172.20.10.3 
ssh-copy-id root@172.20.10.4
9. Configure the yum repositories
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes] 
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1 
gpgcheck=0  
EOF
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo	// Docker repo (it must be placed in /etc/yum.repos.d/ for yum to use it)
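A quick sanity check that both repositories are now visible to yum (a hedged verification step, not in the original):
yum clean all
yum repolist | grep -Ei 'kubernetes|docker'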

Component installation
1. Install IPVS
(1) Install
yum install -y  ipvsadm ipset sysstat conntrack libseccomp
(2) Load the kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF 
#!/bin/bash 
modprobe -- ip_vs 
modprobe -- ip_vs_rr 
modprobe -- ip_vs_wrr 
modprobe -- ip_vs_sh 
modprobe -- nf_conntrack_ipv4
modprobe -- ip_tables 
modprobe -- ip_set 
modprobe -- xt_set 
modprobe -- ipt_set 
modprobe -- ipt_rpfilter 
modprobe -- ipt_REJECT 
modprobe -- ipip 
EOF
Note: on kernel 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack.
(3) Make the modules load automatically on boot and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
2. Install docker-ce
(1) yum list | grep docker-ce
yum -y install docker-ce
systemctl start docker 
systemctl enable docker
(2) On every node, configure the Docker daemon (registry mirror optional) and switch the cgroup driver to systemd, which kubelet expects
Create /etc/docker/daemon.json yourself if it does not exist
cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
(3) systemctl restart docker
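A quick check that Docker picked up the systemd cgroup driver (hedged verification, not in the original):
docker info 2>/dev/null | grep -i 'cgroup driver'    # expected: Cgroup Driver: systemd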

3. Install the Kubernetes 1.18.0 components --- on all nodes, via yum
yum -y install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl start kubelet

systemctl enable kubelet

Cluster initialization
Overview:
High availability is provided by HAProxy + Keepalived; both run as daemons on every master node.
1.	Install haproxy --- on all master nodes
(1)	Install
yum -y install haproxy
(2)	Configure
vim /etc/haproxy/haproxy.cfg
Append the following at the bottom:
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      k8s-master01    172.20.10.2:6443    check
    server      k8s-master02    172.20.10.3:6443    check
    server      k8s-master03    172.20.10.4:6443    check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind            *:9999
    stats auth      admin:P@ssW0rd
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats
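The backend above is never used unless a frontend listens on the port that is later set as controlPlaneEndpoint (16443 in this guide); the original does not show it. A minimal sketch, assuming no other frontend already binds that port:
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver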

2.	Install keepalived --- on all master nodes
(1) Install
yum -y install keepalived
(2) Configure
①	First master --- vim /etc/keepalived/keepalived.conf (set "interface" to this node's actual NIC name)
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      172.20.10.100
    }

    # run the health-check script
    track_script {
        check_apiserver
    }
}
②	Second master
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth32
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      172.20.10.100
    }

    # run the health-check script
    track_script {
        check_apiserver
    }
}

③	Third master
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# define the health-check script
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth32
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      172.20.10.100
    }

    # run the health-check script
    track_script {
        check_apiserver
    }
}

Health check script (/etc/keepalived/check_apiserver.sh, the path referenced by vrrp_script above):
#!/bin/bash

function check_apiserver(){
  for ((i=0;i<5;i++))
  do
    apiserver_job_id=$(pgrep kube-apiserver)
    if [[ ! -z ${apiserver_job_id} ]];then
      return
    else
      sleep 2
    fi
  done
  apiserver_job_id=0
}

# apiserver_job_id: non-zero (pid) -> running, 0 -> stopped
check_apiserver
if [[ $apiserver_job_id -eq 0 ]];then
  /usr/bin/systemctl stop keepalived
  exit 1
else
  exit 0
fi
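The script must be executable on every master, otherwise the vrrp_script check silently fails (a step the original does not show):
chmod +x /etc/keepalived/check_apiserver.sh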
(3) Start haproxy and keepalived
systemctl start haproxy
systemctl start keepalived
systemctl enable haproxy
systemctl enable keepalived
At this point the VIP (172.20.10.100) floats to the first master node; it does not appear on the other nodes.
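A quick way to confirm which node currently holds the VIP (hedged verification):
ip addr show | grep 172.20.10.100    # prints the address only on the node that holds the VIP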

Cluster deployment:
1 Write the kubeadm init YAML and adjust it for your environment:
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.10.2     # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1        # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "172.20.10.100:16443"    # VIP and HAProxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io    # change the image registry to suit your environment
kind: ClusterConfiguration
kubernetesVersion: v1.18.0     # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
 
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

2 Pre-pull the required images:
[root@master3 ~]# kubeadm config images pull --config kubeadm-config.yaml
W1230 23:53:20.541662   22029 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.18.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.18.2
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.7

3 Initialize the cluster (run on the first master):
kubeadm init --config kubeadm-config.yaml
The output includes the commands and credentials for joining the cluster:
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
W1231 14:11:50.231964  120564 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods 
[addons] Applied essential addon: kube-proxy
...... (output truncated) ......
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
 
  kubeadm join 172.20.10.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:fef5d84a8287453eb96fd5d0096ff00fa91164bbde8ab9ecf7439c4eec53cec9 \
    --control-plane
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 192.168.200.16:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:f0489748e3b77a9a29443dae2c4c0dfe6ff4bde0daf3ca8740dd9ab6a9693a78

Note: if initialization fails, reset and try again:
kubeadm reset
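kubeadm reset does not clean up everything; a hedged cleanup sketch before retrying (adjust to your environment):
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear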
4 On the other master nodes, create the certificate directory, then copy the certificates over
mkdir -p /etc/kubernetes/pki/etcd
From the primary master, copy to the other masters:
scp /etc/kubernetes/pki/ca.* root@172.20.10.3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@172.20.10.3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@172.20.10.3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@172.20.10.3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@172.20.10.3:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@172.20.10.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@172.20.10.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@172.20.10.4:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@172.20.10.4:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@172.20.10.4:/etc/kubernetes/
From the primary master, copy to each worker node:
scp /etc/kubernetes/admin.conf root@172.20.10.5:/etc/kubernetes/
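The repetitive scp commands above can be written as a loop (an equivalent sketch, relying on the root SSH trust configured earlier):
for ip in 172.20.10.3 172.20.10.4; do
  ssh root@$ip "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@$ip:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* root@$ip:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf root@$ip:/etc/kubernetes/
done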

5 On the other master nodes, join the cluster with:
	kubeadm join 172.20.10.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:161f8fc6109378fbeecf4eee9d2aa17ae05b17a819cf5023f0a279efbbe4b4f8 \
    --control-plane

6 On the worker nodes, join the cluster with:
	kubeadm join 172.20.10.100:16443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:161f8fc6109378fbeecf4eee9d2aa17ae05b17a819cf5023f0a279efbbe4b4f8

7 Install the Calico network plugin:
	wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
	kubectl apply -f calico.yaml
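Watch the Calico pods come up before expecting the nodes to go Ready (hedged verification):
kubectl get pods -n kube-system -o wide | grep calico
kubectl get nodes    # nodes turn Ready once the network plugin is running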
8 kubectl environment on all masters:
As root, run:
	echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
As a non-root user, run:
		mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9 Check node status:
[root@master1 ~]# kubectl get nodes 
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   39m   v1.18.2
master2   Ready    master   37m   v1.18.2
master3   Ready    master   36m   v1.18.2
node1     Ready    <none>   35m   v1.18.2
	kubectl get pods --all-namespaces		// list all pods
	kubectl get cs						// component status
	kubectl get ns						// namespaces
	kubectl get pods -n kube-system -o wide	// system component pods
	-o wide		// show extra detail
	kubectl get nodes		// list cluster nodes
......... Cluster installation complete .........

Dashboard deployment:
1 Download the YAML manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
2 Add a NodePort to the kubernetes-dashboard Service:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort            # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000       # add this line
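Apply the edited manifest before moving on; the original does not show this step explicitly:
kubectl apply -f recommended.yaml
kubectl get pods -n kubernetes-dashboard    # wait until the dashboard pods are Running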

3 Create an admin service account and bind it to cluster-admin:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
4 Retrieve the login token (the service account lives in the kubernetes-dashboard namespace, so look for its secret there):
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
5 Log in to the dashboard:
https://<any node IP>:30000
Enter the token obtained above
··········· Done ···········