This article is intended for testing only. Before going to production, harden the security mechanisms first.
Environment:
CentOS Linux release 7.3.1611 (Core)
Docker: 17.03
I. System Configuration
Run on all nodes (↓)
1. Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld
2. Disable SELinux:
# setenforce 0
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
3. Configure the hosts file
# vim /etc/hosts
10.10.100.160 lb1
10.10.100.161 lb2
10.10.100.162 master1
10.10.100.163 master2
10.10.100.164 master3
10.10.100.165 etcd1
10.10.100.166 etcd2
10.10.100.167 etcd3
10.10.100.168 node1
10.10.100.169 node2
10.10.100.170 node3
4. Configure the yum repository
Inside China, use the Aliyun mirror:
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
If you can reach Google, you can also configure the upstream Google yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Run on the k8s cluster nodes (master or node) (↓)
5. Disable swap:
Why disable swap? Good question. I didn't really know either at the time, until one day a server rebooted, Kubernetes would not start, and... you get the idea.
If swap is not disabled, the Kubernetes components will not start. You can also force-skip the swap check with a flag (for example kubeadm's --ignore-preflight-errors=Swap), but to avoid unnecessary problems this article simply disables swap.
Flush swap once with the following command (it moves the data held in swap back into memory and empties swap):
# swapoff -a
Remember to comment out the swap mount entry in /etc/fstab:
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
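To confirm swap is really off, check that the Swap line of free shows 0:
# free -m | grep -i swap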
6. Create the k8s.conf file
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
Run the following command to make the changes take effect.
# sysctl --system
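A quick sanity check that the key settings took effect (the bridge-nf keys are only visible once the br_netfilter module is loaded):
# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables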
7. Install docker-ce
a. Install yum-utils, which provides yum-config-manager for managing yum repositories
# yum install -y yum-utils
b. Add the Docker yum repository
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
c. Install docker-ce-selinux-17.03.2.ce
Install docker-ce-selinux-17.03.2.ce first, otherwise installing docker-ce will report an error.
# yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
d. Install docker-ce-17.03.2.ce
# yum install docker-ce-17.03.2.ce-1.el7.centos
Installing online is slow; it is recommended to download the docker-ce packages locally with yumdownloader and distribute them to each node, roughly as sketched below.
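A minimal sketch of that approach (the /opt/dockerrpm directory is only an example):
# yumdownloader --destdir=/opt/dockerrpm docker-ce-17.03.2.ce-1.el7.centos
# scp -r /opt/dockerrpm node1:/opt/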
e. Start docker and enable it at boot
# systemctl start docker
# systemctl enable docker
8. Checking services enabled at boot on CentOS 7
Use systemctl list-unit-files to list the startup units.
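For example, to confirm that docker (and, later, kubelet) will start at boot:
# systemctl list-unit-files | grep -E "docker|kubelet"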
II. Time Server Configuration
Make sure the clocks within the cluster are synchronized, and write the system time to the hardware clock so that a reboot does not roll the time back.
# Install the ntpdate command
# yum install -y ntpdate
# ntpdate 10.10.200.247 (here I use an NTP server set up on the internal network)
# Sync the time automatically at boot
# echo "ntpdate 10.10.200.247" >> /etc/rc.d/rc.local
# Add a cron job that runs every two hours (the entry is listed below with crontab -l)
# crontab -l
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# time sync
00 */2 * * * ntpdate 10.10.200.247 >/dev/null 2>&1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Show the hardware clock time
# hwclock
# Write the system time to the hardware clock
# hwclock -w
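A quick way to compare the two clocks side by side:
# date; hwclock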
III. Configure the LB
1. keepalived configuration (non-preemptive mode)
Why non-preemptive mode? With the normal master/slave mode, when the master goes down the VIP floats to the slave, and as soon as the master recovers the VIP immediately moves back to it. If clients are accessing the cluster services at that moment, this extra failback can cause unnecessary problems.
yum install -y keepalived
The configuration file is /etc/keepalived/keepalived.conf; remember to back it up first.
LB1:
------------------------------------------
[root@lb1 keepalived]# cat /etc/keepalived/keepalived.conf |grep -Ev "^$|#"
! Configuration File for keepalived
global_defs {
    router_id lb-100
}
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -2
}
vrrp_instance VI-kube-master {
    state BACKUP              ## non-preemptive mode is used here, so both nodes are BACKUP
    priority 100              ## on lb2 this value must be lower than on lb1 (99 is suggested); it decides which node is preferred
    dont_track_primary
    interface ens192          ## the NIC the VIP floats onto
    virtual_router_id 51
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        10.10.100.159 dev ens192    ## the VIP address and the NIC it is bound to
    }
}
Note: the vrrp_strict directive must be commented out, otherwise the VIP cannot be pinged.
------------------------------------------
On lb2, use the same template and remember to change the priority value. (Note that for strict non-preemption keepalived also provides the nopreempt option inside the vrrp_instance block; it requires state BACKUP, which is why both nodes are declared BACKUP here.)
Note: the floating VIP 10.10.100.159 is usually not visible with ifconfig; check it with the ip addr command, for example:
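# On whichever node currently holds the VIP:
# ip addr show ens192 | grep 10.10.100.159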
2. haproxy configuration
# yum install -y haproxy
# The configuration file is /etc/haproxy/haproxy.cfg; remember to back it up first.
LB1:
------------------------------------------
[root@lb1 haproxy]# cat /etc/haproxy/haproxy.cfg |grep -Ev "^$|#"
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                tcp
    log                 global
    retries             3
    timeout connect     10s
    timeout client      1m
    timeout server      1m
## If you enable admin_stats here, remember to change the password
listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE
frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master
backend kubernetes-master
    balance roundrobin
    server 10.10.100.162 10.10.100.162:6443 check inter 2000 fall 2 rise 2 weight 1
    server 10.10.100.163 10.10.100.163:6443 check inter 2000 fall 2 rise 2 weight 1
    server 10.10.100.164 10.10.100.164:6443 check inter 2000 fall 2 rise 2 weight 1
------------------------------------------
# systemctl enable haproxy
# systemctl start haproxy
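A quick check that haproxy is listening on the frontend and stats ports:
# ss -tlnp | grep -E ':6443|:10080'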
IV. Install cfssl (used to generate the TLS certificates)
# Installing it on master1 only is sufficient
wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`;do chmod +x $cfssl;done;
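A quick check that the binaries are usable:
# cfssl version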
V. Configure etcd
1. Generate the etcd certificate files
# Run on master1, where cfssl is installed
# mkdir -pv $HOME/ssl && cd $HOME/ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.100.165",
    "10.10.100.166",
    "10.10.100.167"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
Generate the certificates and copy them to the other etcd nodes
# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca && ls etcd-ca*.pem
# cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
# ls etcd-key.pem etcd.pem
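Optionally, inspect the server certificate and confirm that its SAN list contains the three etcd IPs:
# cfssl-certinfo -cert etcd.pem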
Create the directories on the master and etcd nodes and copy the certificate files
mkdir -pv /etc/etcd/ssl
mkdir -pv /etc/kubernetes/pki/etcd
cp etcd*.pem /etc/etcd/ssl
cp etcd*.pem /etc/kubernetes/pki/etcd
scp -r /etc/kubernetes master2:/etc/
scp -r /etc/kubernetes master3:/etc/
scp -r /etc/etcd etcd1:/etc/
scp -r /etc/etcd etcd2:/etc/
scp -r /etc/etcd etcd3:/etc/
2. Configure and start etcd on etcd1
yum install -y etcd
Only modify the items listed below; adjust them in the same way on the other etcd nodes.
[root@etcd1 ~]# grep -Ev "^$|#" /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# On a multi-node etcd cluster, change the following 3 items per node
ETCD_LISTEN_PEER_URLS="https://10.10.100.165:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://10.10.100.165:2379"
ETCD_NAME="etcd1"
#[Clustering]
# On a multi-node etcd cluster, change the following 2 items per node
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.100.165:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://10.10.100.165:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.100.165:2380,etcd2=https://10.10.100.166:2380,etcd3=https://10.10.100.167:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
# chown -R etcd.etcd /etc/etcd
# systemctl enable etcd
# systemctl start etcd
Note: etcd is a cluster. When etcd1 is started first it will report errors and fail to come up; ignore this. Once etcd2 and etcd3 are started, etcd1 will start successfully as well.
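If the cluster still does not form, follow the etcd logs on the affected node:
# journalctl -u etcd -f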
3. Check the cluster
etcdctl --endpoints "https://10.10.100.165:2379,https://10.10.100.166:2379,https://10.10.100.167:2379" \
--ca-file=/etc/etcd/ssl/etcd-ca.pem \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
cluster-health
## Expected output
member 110483acf5b9633d is healthy: got healthy result from https://10.10.100.165:2379
member 7766361616a00a51 is healthy: got healthy result from https://10.10.100.167:2379
member 884194b82399cb2d is healthy: got healthy result from https://10.10.100.166:2379
cluster is healthy
VI. Initialize the masters
1. Install kubeadm / kubectl / kubelet
# Download the rpm packages for a pinned version and install them:
mkdir -pv /opt/k8srpm && cd /opt/k8srpm
## The yum-utils package must be installed to use the yumdownloader command
yum install -y yum-utils
# yumdownloader kubectl-1.11.3 kubelet-1.11.3 kubeadm-1.11.3 cri-tools-1.12.0 kubernetes-cni-0.6.0 socat-1.7.3.2
# The rpm packages that are needed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cri-tools-1.12.0-0.x86_64.rpm
kubeadm-1.11.3-0.x86_64.rpm
kubectl-1.11.3-0.x86_64.rpm
kubelet-1.11.3-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Copy the packages to the other nodes
scp -r /opt/k8srpm master2:/opt/
scp -r /opt/k8srpm master3:/opt/
scp -r /opt/k8srpm node1:/opt/
scp -r /opt/k8srpm node2:/opt/
# Install (run inside /opt/k8srpm on each node)
# yum install ./*.rpm -y
# systemctl enable kubelet.service
2. Initialize master1
Note: if a proxy is configured on the master, remove it before initializing; otherwise, even if the initialization completes, unexplained errors can appear later.
a. Create the kubeadm-init.yaml configuration file
# cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3   # kubernetes version
api:
  advertiseAddress: 10.10.100.162   # the IP address of the current server
  bindPort: 6443
  controlPlaneEndpoint: 10.10.100.159:6443   # the VIP address
apiServerCertSANs:   # list every master IP, LB IP, and any other address, domain name, or hostname you may use to reach the apiserver
- master1
- master2
- master3
- 10.10.100.160   # lb1
- 10.10.100.161   # lb2
- 10.10.100.162   # master1
- 10.10.100.163   # master2
- 10.10.100.164   # master3
- 10.10.100.159   # vip
- 127.0.0.1
etcd:   # the etcd endpoints
  external:
    endpoints:
    - "https://10.10.100.165:2379"
    - "https://10.10.100.166:2379"
    - "https://10.10.100.167:2379"
    caFile: /etc/kubernetes/pki/etcd/etcd-ca.pem
    certFile: /etc/kubernetes/pki/etcd/etcd.pem
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem
networking:
  podSubnet: 10.244.0.0/16   # pod network CIDR
kubeProxy:
  config:
    mode: ipvs   # enable IPVS mode
featureGates:
  CoreDNS: true
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # the image registry mirror
EOF
b. Pull the images and initialize
systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
## The master initialization needs "k8s.gcr.io/pause:3.1", so tag pause:3.1 accordingly since we are not pulling directly from Google; the other images do not need to be re-tagged.
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
kubeadm init --config /root/kubeadm-init.yaml
# Run the following commands after the initialization completes
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
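At this point it is worth a quick check that the control plane answers through the VIP (admin.conf should point kubectl at the controlPlaneEndpoint configured above):
# kubectl cluster-info
# kubectl get cs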
c. Command auto-completion
## docker command auto-completion
# Auto-completion depends on the bash-completion package; if it is missing, install it manually:
yum install -y bash-completion
# After installation the file /usr/share/bash-completion/bash_completion should exist; if it does not, the package is not installed on the system.
source /usr/share/bash-completion/bash_completion
## kubectl command auto-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
d. Copy the files generated by kubernetes to the other two masters
scp -r /etc/kubernetes/pki master2:/etc/kubernetes/
scp -r kubeadm-init.yaml master2:/root/
scp -r /etc/kubernetes/pki master3:/etc/kubernetes/
scp -r kubeadm-init.yaml master3:/root/
3. Initialize master2 and master3
# Before initializing master2 and master3, delete the apiserver.crt and apiserver.key files copied over from master1
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
# Then change advertiseAddress in kubeadm-init.yaml to the local IP address (a sed one-liner for this is shown after the snippet below)
cd $HOME && grep -A 1 api kubeadm-init.yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
api:
advertiseAddress: 10.10.100.163   # the IP address of the current server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
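One way to make that change on master2 (an illustrative one-liner; use 10.10.100.164 on master3):
# sed -i 's/advertiseAddress: 10.10.100.162/advertiseAddress: 10.10.100.163/' /root/kubeadm-init.yaml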
# The remaining steps are the same as on master1, from section b through section c.
VII. Join the nodes to the cluster
1. Pull the images the nodes need
## The flannel image is needed on both masters and nodes
docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
2. Get a token
# Run on a master to print the join command
kubeadm token create --print-join-command
3. Join the nodes to the cluster
# systemctl enable kubelet.service
[root@master1 ~]# kubeadm token create --print-join-command
kubeadm join 10.10.100.159:6443 --token 152i9n.fofe6cp5uk9ynyi1 --discovery-token-ca-cert-hash sha256:7681b4b5455d8304af9225308021079494604b0fb11539e81bb6f3070d08a7c9
4. View the nodes
# Run on a master
# kubectl get node
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady master 2m v1.11.3
master2 NotReady master 1m v1.11.3
master3 NotReady master 1m v1.11.3
node1 NotReady <none> 18s v1.11.3
node2 NotReady <none> 12s v1.11.3
# Since no network add-on has been configured yet, all nodes are in the NotReady state
VIII. Configure the network
1. Use the flannel network
Note: the image quay.io/coreos/flannel:v0.10.0-amd64 used here may not be directly pullable on some networks; a proxy may be required.
# Run on master1
# cd /root/
# mkdir flannel
# cd flannel
# wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
2. Check the node status
# kubectl get pod -n kube-system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@master1 flannel]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-777d78ff6f-9s5f4 1/1 Running 0 17h
coredns-777d78ff6f-r8x7k 1/1 Running 0 17h
kube-apiserver-master1 1/1 Running 7 17h
kube-apiserver-master2 1/1 Running 5 1d
kube-apiserver-master3 1/1 Running 5 1d
kube-controller-manager-master1 1/1 Running 5 17h
kube-controller-manager-master2 1/1 Running 0 1d
kube-controller-manager-master3 1/1 Running 0 1d
kube-flannel-ds-4kmcn 1/1 Running 7 1d
kube-flannel-ds-55v4t 1/1 Running 2 1d
kube-flannel-ds-gmpn9 1/1 Running 1 1d
kube-flannel-ds-qhdkh 1/1 Running 0 1d
kube-flannel-ds-wn5wt 1/1 Running 0 1d
kube-proxy-jfr8b 1/1 Running 3 1d
kube-proxy-m4m2z 1/1 Running 0 1d
kube-proxy-p8llr 1/1 Running 0 1d
kube-proxy-pd27p 1/1 Running 1 1d
kube-proxy-t9lt6 1/1 Running 4 1d
kube-scheduler-master1 1/1 Running 1 17h
kube-scheduler-master2 1/1 Running 0 1d
kube-scheduler-master3 1/1 Running 4 1d
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Once all of the kube-flannel-ds-xxxx pods above are in the Running state, the nodes should be Ready
# kubectl get node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@master1 flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 1d v1.11.3
master2 Ready master 1d v1.11.3
master3 Ready master 1d v1.11.3
node1 Ready <none> 1d v1.11.3
node2 Ready <none> 1d v1.11.3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note: if a container or a node ends up in an error state, check the logs to find the cause
# tail -f /var/log/messages
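The kubelet's own logs on the affected node are often more direct:
# journalctl -u kubelet -f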
IX. Testing
1. Create an nginx Deployment to test that applications and DNS work
# Run on a master
cd /root && mkdir nginx && cd nginx
cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
Deploy nginx
kubectl apply -f nginx.yaml
Check the pods
[root@master1 nginx]# kubectl get pods |grep nginx
nginx-d95d64c75-5nxqv 0/1 ContainerCreating 0 39s
nginx-d95d64c75-z2qhl 0/1 ContainerCreating 0 39s
A moment later the nginx pods are up and running:
[root@master1 nginx]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-d95d64c75-5nxqv 1/1 Running 0 1m 10.244.4.2 node2 <none>
nginx-d95d64c75-z2qhl 1/1 Running 0 1m 10.244.3.4 node1 <none>
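Because the Service is of type NodePort, nginx should also be reachable from outside the cluster on port 31000 of any node, for example:
# curl http://10.10.100.168:31000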
2. Create a pod to test DNS resolution
# Run on a master
# kubectl run curl --image=radial/busyboxplus:curl -i --tty
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@master1 flannel]# kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-87b54756-52hdl:/ ]$ nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# nslookup nginx
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ root@curl-87b54756-52hdl:/ ]$ nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.109.34.162 nginx.default.svc.cluster.local
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# curl nginx/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ root@curl-87b54756-52hdl:/ ]$ curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Delete the curl deployment we created
# kubectl delete deployment curl
3. Test master high availability
# Shut down master1 (10.10.100.162)
# init 0
# Switch to master2
# and check the nodes
# kubectl get node
# master1 now shows as down!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@master2 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady master 1h v1.11.3
master2 Ready master 59m v1.11.3
master3 Ready master 59m v1.11.3
node1 Ready <none> 58m v1.11.3
node2 Ready <none> 58m v1.11.3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Create a new pod to see whether workloads can still be scheduled
# kubectl run curl --image=radial/busyboxplus:curl -i --tty
# If it can be created, the master high-availability test has passed
# Delete the temporary pod
# kubectl delete deployment curl
4. Test haproxy high availability
First stop haproxy on lb1:
systemctl stop haproxy
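Then verify the failover: because the check-haproxy script lowers lb1's priority, the VIP should move to lb2, and the apiserver should keep answering through it:
# On lb2, the VIP should now be present:
# ip addr show ens192 | grep 10.10.100.159
# On any master, the API should still respond via the VIP:
# kubectl get node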
rolling-update
rolling-update is a very useful command: for a service that is already deployed and running, it provides a way to update without interrupting the service. It starts one new pod at a time, waits until the new pod is fully up, deletes an old pod, then starts the next new pod, and so on until every pod has been replaced.
rolling-update requires the new version to have a different name, version, and label, otherwise it reports an error.
kubectl rolling-update rc-nginx-2 -f rc-nginx.yaml
If a problem is found during the upgrade, the update can be stopped partway through and rolled back to the previous version:
kubectl rolling-update rc-nginx-2 --rollback
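Note that kubectl rolling-update only works on ReplicationControllers. The nginx example above is a Deployment, for which the equivalent mechanism is kubectl rollout; a minimal sketch (the nginx:1.15 tag is only an illustrative example):
kubectl set image deployment/nginx nginx=nginx:1.15
kubectl rollout status deployment/nginx
kubectl rollout undo deployment/nginx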