Setting Up a Kubernetes HA Cluster with kubeadm (1.10.1)

This article walks through building a highly available Kubernetes 1.10.1 cluster with kubeadm, covering the cluster layout, environment preparation, etcd cluster deployment, Keepalived configuration, and deployment of the Flannel network plugin.

I cannot go back to the past, nor can I predict the future; all I can do is live in the present!

1. Cluster Structure

[Cluster architecture diagram]

A quick overview:

1. The HA cluster uses three master nodes plus one node; more nodes can be added at will.

2. The three etcd members are deployed directly on the physical (master) nodes.

3. The three master nodes share a single VIP, and the nodes reach the masters through that VIP.

4. The nodes used to expose services (edge nodes) share another VIP, and the external DNS server resolves a wildcard domain to that VIP.

5. Wildcard DNS resolution combined with Ingress provides access to the services in the cluster.

2. Environment

2.1. Systems

IP               Hostname          Role          OS
192.168.115.210  master-1, etcd-0  master, etcd  Ubuntu 16.04.2
192.168.115.211  master-2, etcd-1  master, etcd  Ubuntu 16.04.2
192.168.115.212  master-3, etcd-2  master, etcd  Ubuntu 16.04.2
192.168.115.213  node-1            node          Ubuntu 16.04.2
192.168.119.5    node-dns          DNS           CentOS 7.3.1

2.2. Software

2.2.1. Images
  • k8s.gcr.io/kube-scheduler-amd64:v1.10.1
  • k8s.gcr.io/kube-apiserver-amd64:v1.10.1
  • k8s.gcr.io/kube-proxy-amd64:v1.10.1
  • k8s.gcr.io/kube-controller-manager-amd64:v1.10.1
  • k8s.gcr.io/etcd-amd64:3.1.12
  • k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
  • k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
  • k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
  • k8s.gcr.io/pause-amd64:3.1
  • quay.io/coreos/flannel:v0.9.1-amd64
  • quay.io/coreos/etcd:v3.1.10

In the steps that follow, all images are pushed to a private registry at 192.168.101.88:5000/k8s1.10.
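
A minimal sketch for mirroring the k8s.gcr.io images into the private registry (assuming Docker can reach both registries, and that the private registry is configured as an insecure registry if it has no TLS):

for img in kube-apiserver-amd64:v1.10.1 kube-controller-manager-amd64:v1.10.1 \
           kube-scheduler-amd64:v1.10.1 kube-proxy-amd64:v1.10.1 \
           etcd-amd64:3.1.12 pause-amd64:3.1 \
           k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8 \
           k8s-dns-sidecar-amd64:1.14.8; do
    docker pull k8s.gcr.io/${img}                                     # pull from the public registry
    docker tag  k8s.gcr.io/${img} 192.168.101.88:5000/k8s1.10/${img}  # retag for the private registry
    docker push 192.168.101.88:5000/k8s1.10/${img}
done

The two quay.io images (flannel and etcd) can be mirrored the same way.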

2.2.2. Packages
  • kubeadm_1.10.1-00_amd64.deb
  • kubectl_1.10.1-00_amd64.deb
  • kubelet_1.10.1-00_amd64.deb
  • kubernetes-cni_0.6.0-00_amd64.deb
  • socat_1.7.3.1-1_amd64.deb
  • docker-ce_17.03.2ce-0ubuntu-xenial_amd64.deb
  • keepalived-1.4.3:http://www.keepalived.org/software/keepalived-1.4.3.tar.gz
  • etcd-v3.1.12-linux-amd64:https://github.com/coreos/etcd/releases/download/v3.1.12/etcd-v3.1.12-linux-amd64.tar.gz
  • cfssl_linux-amd64:https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
  • cfssljson_linux-amd64:https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
2.2.3. Other packages that may be required as dependencies
  • apt-transport-https_1.2.26_amd64.deb
  • ca-certificates_20170717~16.04.1_all.deb
  • curl_7.47.0-1ubuntu2.7_amd64.deb
  • ebtables_2.0.10.4-3.4ubuntu2_amd64.deb
  • ethtool_1:4.5-1_amd64.deb
  • software-properties-common_0.96.20.7_all.deb

3. Deployment

Most of the deployment follows the official guide: Creating HA clusters with kubeadm.

3.1. Deploy the etcd Cluster

3.1.1. Generate Certificates
  • Install cfssl and cfssljson, which are used to create the certificates
curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl*
  • Create the certificate signing configuration (ca-config.json)
root@master-1:/etc/etcd/ssl# pwd
/etc/etcd/ssl
root@master-1:/etc/etcd/ssl# cat >ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
  • Create the root CA certificate
root@master-1:/etc/etcd/ssl# cat >ca-csr.json <<EOF
{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
EOF
root@master-1:/etc/etcd/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Create the client certificate
root@master-1:/etc/etcd/ssl# cat >client.json <<EOF
{
    "CN": "client",
    "key": {
        "algo": "ecdsa",
        "size": 256
    }
}
EOF
root@master-1:/etc/etcd/ssl# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
  • Create the server and peer certificates (run on every etcd node; adjust the contents of config.json for each node)
root@master-1:/etc/etcd/ssl# cat >config.json <<EOF
{
    "CN": "etcd-0",
    "hosts": [
        "etcd-0",
        "192.168.115.210"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}
EOF
root@master-1:/etc/etcd/ssl# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
root@master-1:/etc/etcd/ssl# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
root@master-1:~# ll /etc/etcd/ssl/
total 104
drwxr-xr-x 2 root root 4096 Apr 27 03:03 ./
drwxr-xr-x 3 root root 4096 Apr 27 03:07 ../
-rw-r--r-- 1 root root  867 Apr 25 03:52 ca-config.json
-rw-r--r-- 1 root root 1025 Apr 25 03:52 ca.crt
-rw-r--r-- 1 root root  883 Apr 25 03:52 ca.csr
-rw-r--r-- 1 root root   85 Apr 25 03:52 ca-csr.json
-rw------- 1 root root 1675 Apr 25 03:52 ca.key
-rw------- 1 root root 1675 Apr 25 03:52 ca-key.pem
-rw-r--r-- 1 root root 1127 Apr 25 03:52 ca.pem
-rw-r--r-- 1 root root  351 Apr 25 03:52 client.csr
-rw-r--r-- 1 root root   88 Apr 25 03:52 client.json
-rw------- 1 root root  227 Apr 25 03:52 client-key.pem
-rw-r--r-- 1 root root  875 Apr 25 03:52 client.pem
-rw-r--r-- 1 root root  277 Apr 25 20:52 config.json
-rw-r--r-- 1 root root 1099 Apr 25 03:52 healthcheck-client.crt
-rw------- 1 root root 1679 Apr 25 03:52 healthcheck-client.key
-rw-r--r-- 1 root root 1094 Apr 25 03:52 peer.crt
-rw-r--r-- 1 root root  477 Apr 25 20:54 peer.csr
-rw------- 1 root root 1675 Apr 25 03:52 peer.key
-rw------- 1 root root  227 Apr 25 20:54 peer-key.pem
-rw-r--r-- 1 root root  993 Apr 25 20:54 peer.pem
-rw-r--r-- 1 root root 1078 Apr 25 03:52 server.crt
-rw-r--r-- 1 root root  481 Apr 25 20:54 server.csr
-rw------- 1 root root 1679 Apr 25 03:52 server.key
-rw------- 1 root root  227 Apr 25 20:54 server-key.pem
-rw-r--r-- 1 root root  993 Apr 25 20:54 server.pem
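
A quick optional check that the generated server certificate actually contains the expected hosts (using openssl, assuming it is installed); the output should list DNS:etcd-0 and IP Address:192.168.115.210 for this node:

root@master-1:/etc/etcd/ssl# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"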

After generating the certificates on one node, copy the CA files (ca.pem, ca-key.pem), ca-config.json, and the client certificate to the other etcd nodes; the server and peer certificates are generated on each node as described above.
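
For example, a sketch run from master-1 (assuming root SSH access between the nodes):

for host in 192.168.115.211 192.168.115.212; do
    ssh root@${host} "mkdir -p /etc/etcd/ssl"
    scp /etc/etcd/ssl/ca.pem /etc/etcd/ssl/ca-key.pem /etc/etcd/ssl/ca-config.json \
        /etc/etcd/ssl/client.pem /etc/etcd/ssl/client-key.pem root@${host}:/etc/etcd/ssl/
done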

3.1.2. Install and Configure etcd
  • Install (run on every etcd node)
root@master-1:~# export ETCD_VERSION=v3.1.12
root@master-1:~# curl -sSL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz | tar -xzv --strip-components=1 -C /usr/local/bin/
root@master-1:~# cat >/etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
Conflicts=etcd2.service

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/local/bin/etcd"
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • Configure (run on every etcd node; note that the values of ETCD_NAME, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS are different on each node)
root@master-1:~# cat >/etc/etcd/etcd.conf <<EOF
ETCD_NAME=etcd-0
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd-0:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-0=https://etcd-0:2380,etcd-1=https://etcd-1:2380,etcd-2=https://etcd-2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd-0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
ETCD_CERT_FILE="/etc/etcd/ssl/server.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/server-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/peer-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#[profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
EOF
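
The configuration above refers to the members by the hostnames etcd-0/etcd-1/etcd-2. If those names are not already resolvable on the etcd nodes, a minimal sketch is to map them in /etc/hosts; the data directory referenced by the unit also has to exist, and the new unit should be enabled (run on every etcd node):

cat >>/etc/hosts <<EOF
192.168.115.210 etcd-0
192.168.115.211 etcd-1
192.168.115.212 etcd-2
EOF
mkdir -p /var/lib/etcd            # WorkingDirectory / ETCD_DATA_DIR
systemctl daemon-reload
systemctl enable etcd.service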
3.1.3. Start and Test
root@master-1:~# systemctl start etcd.service
root@master-2:~# systemctl start etcd.service
root@master-3:~# systemctl start etcd.service

Run this on each of the three nodes; the first node will wait for the other members to come up, so start them all.

root@master-1:~# systemctl status etcd.service
root@master-2:~# systemctl status etcd.service
root@master-3:~# systemctl status etcd.service
root@master-1:~# etcdctl --ca-file /etc/etcd/ssl/ca.pem --cert-file /etc/etcd/ssl/client.pem --key-file /etc/etcd/ssl/client-key.pem --endpoints https://etcd-0:2379,https://etcd-1:2379,https://etcd-2:2379 cluster-health
2018-04-27 03:17:18.296851 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-04-27 03:17:18.297144 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member f0f92d99677353f is healthy: got healthy result from https://etcd-0:2379
member 1a147ce6336081c1 is healthy: got healthy result from https://etcd-1:2379
member ed2c681b974a3802 is healthy: got healthy result from https://etcd-2:2379
cluster is healthy

Find the etcdctl command too long?

alias etcdctl="etcdctl --ca-file /etc/etcd/ssl/ca.pem --cert-file /etc/etcd/ssl/client.pem --key-file /etc/etcd/ssl/client-key.pem --endpoints https://etcd-0:2379,https://etcd-1:2379,https://etcd-2:2379"

You can add the alias to your .profile.
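
With the alias in place, a quick write/read smoke test (etcd 3.1's etcdctl defaults to the v2 API, which these commands assume):

etcdctl set /smoke-test ok    # write a key
etcdctl get /smoke-test       # should print: ok
etcdctl rm /smoke-test        # clean up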

3.2. Deploy Keepalived (Master)

  • Install (run on every master node)
root@master-1:~/kubernetes# wget http://www.keepalived.org/software/keepalived-1.4.3.tar.gz
root@master-1:~/kubernetes# tar -xvf keepalived-1.4.3.tar.gz
root@master-1:~/kubernetes# cd keepalived-1.4.3/
root@master-1:~/kubernetes/keepalived-1.4.3# ./configure 
root@master-1:~/kubernetes/keepalived-1.4.3# make && make install
root@master-1:~/kubernetes/keepalived-1.4.3# cd keepalived/
root@master-1:~/kubernetes/keepalived-1.4.3/keepalived# cp keepalived.service /etc/systemd/system/
  • Configure (run on every master node; note state and priority: among all the master nodes only one may have state MASTER and the rest must be BACKUP, and priority is the weight, which should be different on every node)
root@master-1:~# mkdir -p /etc/keepalived/
root@master-1:~# cat >/etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
  router_id LVS_DEVEL
}

vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
    }
    virtual_ipaddress {
        192.168.115.200
    }
    track_script {
        check_apiserver
    }
}
EOF

The value of interface is the name of your actual network device; check it with root@master-1:~# ip a

root@master-1:~# cat >/etc/keepalived/check_apiserver.sh <<'EOF'
#!/bin/sh

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
if ip addr | grep -q 192.168.115.200; then
    curl --silent --max-time 2 --insecure https://192.168.115.200:6443/ -o /dev/null || errorExit "Error GET https://192.168.115.200:6443/"
fi
EOF
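
The health-check script has to be executable, and since keepalived was built from source, reload systemd and enable the unit (run on every master node):

chmod +x /etc/keepalived/check_apiserver.sh
systemctl daemon-reload
systemctl enable keepalived.service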
  • Start
root@master-1:~# systemctl start keepalived.service 
root@master-2:~# systemctl start keepalived.service 
root@master-3:~# systemctl start keepalived.service 
root@master-1:~# systemctl status keepalived.service 
root@master-2:~# systemctl status keepalived.service 
root@master-3:~# systemctl status keepalived.service 
root@master-1:~# ip a show ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8e:33:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.210/22 brd 192.168.115.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet 192.168.115.200/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:3344/64 scope link 
       valid_lft forever preferred_lft forever

3.3. Deploy Master-1

  • Prepare the kubeadm init configuration file
root@master-1:~# cat >kubeadm-conf.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 192.168.115.210
etcd:
  endpoints:
    - https://192.168.115.210:2379
    - https://192.168.115.211:2379
    - https://192.168.115.212:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/client.pem
  keyFile: /etc/etcd/ssl/client-key.pem
apiServerExtraArgs:
  admission-control: Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
  requestheader-allowed-names: front-proxy-client
  requestheader-extra-headers-prefix: X-Remote-Extra-
  requestheader-group-headers: X-Remote-Group
  requestheader-username-headers: X-Remote-User
  requestheader-client-ca-file: /etc/kubernetes/pki/front-proxy-ca.crt
  proxy-client-cert-file: /etc/kubernetes/pki/front-proxy-client.crt
  proxy-client-key-file: /etc/kubernetes/pki/front-proxy-client.key
  endpoint-reconciler-type: lease
kubernetesVersion: v1.10.1
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
  - 192.168.115.200
imageRepository: 192.168.101.88:5000/k8s1.10
EOF
  • The requestheader-* and proxy-client-* arguments can be omitted; they are included here mainly so the aggregation/extension API can be used later

  • Copy kubeadm-conf.yaml to the other master nodes and change advertiseAddress to each node's own address

root@master-1:~# swapoff -a
root@master-1:~# kubeadm init --config kubernetes/kubeadm-conf.yaml 
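
Once kubeadm init finishes, set up kubectl for the root user as kubeadm itself suggests:

root@master-1:~# mkdir -p $HOME/.kube
root@master-1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master-1:~# chown $(id -u):$(id -g) $HOME/.kube/config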

3.4. Deploy Master-n

  • Copy the certificates from Master-1
root@master-2:~# mkdir -p /etc/kubernetes/pki
root@master-2:~# scp root@192.168.115.210:/etc/kubernetes/pki/* /etc/kubernetes/pki
root@master-2:~# rm /etc/kubernetes/pki/apiserver*
  • Run kubeadm init (as on master-1, run swapoff -a first)
root@master-2:~# kubeadm init --config kubernetes/kubeadm-conf.yaml

3.5. Deploy the CNI Plugin (Flannel)

root@master-1:~/kubernetes/flannel# wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Change the image in the YAML file to the one in the private registry, for example as shown below.
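
A sketch with sed, assuming the flannel image was pushed to the private registry under the same k8s1.10 path (adjust the target to wherever you actually pushed it):

root@master-1:~/kubernetes/flannel# sed -i 's#quay.io/coreos/flannel:v0.9.1-amd64#192.168.101.88:5000/k8s1.10/flannel:v0.9.1-amd64#g' kube-flannel.yml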

root@master-1:~/kubernetes/flannel# kubectl apply -f kube-flannel.yml
root@master-1:~# kubectl get nodes -o wide
NAME       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master-1   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2
master-2   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2
master-3   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2

3.6. Deploy the Node

  • Join the node to the cluster
root@master-1:~# kubeadm token create --print-join-command
kubeadm join 192.168.115.210:6443 --token c0d4g0.1jtwhe5hfb24p7u0 --discovery-token-ca-cert-hash sha256:7c68eadafc4ad956d6027e9b1591de6f94511313106d1bfa3cd3f08c380d52bb
root@node-1:~# kubeadm join 192.168.115.210:6443 --token c0d4g0.1jtwhe5hfb24p7u0 --discovery-token-ca-cert-hash sha256:7c68eadafc4ad956d6027e9b1591de6f94511313106d1bfa3cd3f08c380d52bb
  • Configure kube-proxy to use the VIP
root@master-1:~/kubernetes# kubectl get configmap -n kube-system kube-proxy -o yaml > kube-proxy-cm.yaml
root@master-1:~/kubernetes# sed -i 's#server:.*#server: https://192.168.115.200:6443#g' kube-proxy-cm.yaml
root@master-1:~/kubernetes# kubectl apply -f kube-proxy-cm.yaml --force
root@master-1:~/kubernetes# kubectl delete pod -n kube-system -l k8s-app=kube-proxy
  • Configure kubelet to use the VIP
root@node-1:~# sudo sed -i 's#server:.*#server: https://192.168.115.200:6443#g' /etc/kubernetes/kubelet.conf
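
kubelet only reads kubelet.conf at startup, so restart it after the edit (and repeat the edit plus restart on every other node that should reach the API server through the VIP):

root@node-1:~# systemctl restart kubelet
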
root@master-1:~/kubernetes# kubectl get pods --all-namespaces -o wide
NAMESPACE      NAME                                      READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system    kube-apiserver-master-1                   1/1       Running   0          1d        192.168.115.210   master-1
kube-system    kube-apiserver-master-2                   1/1       Running   0          1d        192.168.115.211   master-2
kube-system    kube-apiserver-master-3                   1/1       Running   0          1d        192.168.115.212   master-3
kube-system    kube-controller-manager-master-1          1/1       Running   0          1d        192.168.115.210   master-1
kube-system    kube-controller-manager-master-2          1/1       Running   0          1d        192.168.115.211   master-2
kube-system    kube-controller-manager-master-3          1/1       Running   0          1d        192.168.115.212   master-3
kube-system    kube-dns-7cfc456f5b-r5tpc                 3/3       Running   0          1d        10.244.2.2        master-3
kube-system    kube-flannel-ds-5bqc2                     1/1       Running   0          1d        192.168.115.212   master-3
kube-system    kube-flannel-ds-qkw89                     1/1       Running   0          1d        192.168.115.213   node-1
kube-system    kube-flannel-ds-tvw97                     1/1       Running   0          1d        192.168.115.211   master-2
kube-system    kube-flannel-ds-wznpx                     1/1       Running   0          1d        192.168.115.210   master-1
kube-system    kube-proxy-2dlp4                          1/1       Running   0          1d        192.168.115.213   node-1
kube-system    kube-proxy-2gh4r                          1/1       Running   0          1d        192.168.115.211   master-2
kube-system    kube-proxy-bmq62                          1/1       Running   0          1d        192.168.115.212   master-3
kube-system    kube-proxy-mcwp8                          1/1       Running   0          1d        192.168.115.210   master-1
kube-system    kube-scheduler-master-1                   1/1       Running   0          1d        192.168.115.210   master-1
kube-system    kube-scheduler-master-2                   1/1       Running   0          1d        192.168.115.211   master-2
kube-system    kube-scheduler-master-3                   1/1       Running   0          1d        192.168.115.212   master-3

root@master-1:~/kubernetes# kubectl get nodes -o wide
NAME       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master-1   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2
master-2   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2
master-3   Ready     master    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2
node-1     Ready     <none>    1d        v1.10.1   <none>        Ubuntu 16.04.2 LTS   4.4.0-62-generic   docker://17.3.2

3.7. Deploy Keepalived (Edge)

Omitted here; the configuration is almost the same as the one above (a minimal sketch follows), and I deployed only a single node in this setup. This VIP exists to provide HA across multiple edge nodes.
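
For reference, a minimal sketch of the edge keepalived.conf; the edge VIP itself is not listed in this article, so <EDGE_VIP> and <password> below are placeholders, and keepalived is built and installed on the edge nodes the same way as in 3.2:

cat >/etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived (edge nodes)
vrrp_instance VI_EDGE {
    state MASTER             # BACKUP on the other edge nodes
    interface ens160         # adjust to the actual network device
    virtual_router_id 52     # must differ from the 51 used for the master VIP
    priority 101             # use a lower value on BACKUP nodes
    authentication {
        auth_type PASS
        auth_pass <password>
    }
    virtual_ipaddress {
        <EDGE_VIP>           # the address the wildcard DNS record points to
    }
}
EOF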

3.8. Deploy Ingress

  • Prepare the deployment files
root@master-1:~# mkdir -p /root/kubernetes/ingress-nginx

root@master-1:~/kubernetes/ingress-nginx# cat >ns.yaml <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx
EOF
root@master-1:~/kubernetes/ingress-nginx# cat >default-backend.yaml <<EOF
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: 192.168.101.88:5000/k8s1.10/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
EOF
root@master-1:~/kubernetes/ingress-nginx# cat >tcp-services-configmap.yaml <<EOF
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
EOF
root@master-1:~/kubernetes/ingress-nginx# cat >udp-services-configmap.yaml <<EOF
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
EOF
root@master-1:~/kubernetes/ingress-nginx# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
root@master-1:~/kubernetes/ingress-nginx# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml
  • Change ingress-nginx to be deployed as a DaemonSet, enable hostNetwork, and set a nodeSelector
root@master-1:~/kubernetes/ingress-nginx# cat with-rbac.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: "true"
      containers:
        - name: nginx-ingress-controller
          image: 192.168.101.88:5000/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
  • Label the edge node
root@master-1:~/kubernetes/ingress-nginx# kubectl label node node-1 edgenode=true
  • Deploy
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f ns.yaml 
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f rbac.yaml 
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f tcp-services-configmap.yaml 
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f udp-services-configmap.yaml 
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f default-backend.yaml 
root@master-1:~/kubernetes/ingress-nginx# kubectl apply -f with-rbac.yaml 

root@master-1:~/kubernetes/ingress-nginx# kubectl get all -n ingress-nginx
NAME                                        READY     STATUS    RESTARTS   AGE
pod/default-http-backend-79454f85bb-g972f   1/1       Running   0          23m
pod/nginx-ingress-controller-xtc2b          1/1       Running   0          22m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/default-http-backend   ClusterIP   10.103.189.16   <none>        80/TCP    23m

NAME                                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress-controller   1         1         1         1            1           edgenode=true   22m

NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/default-http-backend   1         1         1            1           23m

NAME                                              DESIRED   CURRENT   READY     AGE
replicaset.apps/default-http-backend-79454f85bb   1         1         1         23m

3.9. Configure the DNS Server

Reference: setting up a DNS server. A minimal wildcard-zone sketch is shown below.
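
Assuming BIND (named) on node-dns, a sketch of a zone that resolves the wildcard *.chenlei.com to the edge VIP; <EDGE_VIP> is again a placeholder, and the zone also has to be declared in named.conf:

cat >/var/named/chenlei.com.zone <<'EOF'
$TTL 600
@    IN  SOA  ns.chenlei.com. admin.chenlei.com. ( 2018042701 1H 10M 1W 600 )
     IN  NS   ns.chenlei.com.
ns   IN  A    192.168.119.5
*    IN  A    <EDGE_VIP>
EOF
# and in /etc/named.conf (or an included file):
#   zone "chenlei.com" IN { type master; file "chenlei.com.zone"; };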

3.10. Ingress Examples

  • prometheus
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-service-ingress
  namespace: ns-monitor
spec:
  rules:
    - host: prometheus.chenlei.com
      http:
        paths:
          - backend:
              serviceName: prometheus-service
              servicePort: 9090
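
With the wildcard record in place, the service can be reached through the edge node. A quick check that forces the Host header so it works even without DNS (192.168.115.213 is node-1, the node labeled edgenode=true above):

curl -H "Host: prometheus.chenlei.com" http://192.168.115.213/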

[Screenshot: Prometheus reachable through the Ingress]

  • grafana
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-service-ingress
  namespace: ns-monitor
spec:
  rules:
    - host: grafana.chenlei.com
      http:
        paths:
          - backend:
              serviceName: grafana-service
              servicePort: 3000

[Screenshot: Grafana reachable through the Ingress]

  • dashboard
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - dashboard.chenlei.com
  rules:
    - host: dashboard.chenlei.com
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443

These few lines of configuration left yet another indelible scar deep in my already battered heart.

[Screenshot: Kubernetes Dashboard login page served through the Ingress]

Seeing this page does not mean success; you may still be unable to log in.

4. References

  • https://kubernetes.io/docs/setup/independent/high-availability/
  • https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
  • https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

5. Open Issues

  • When deploying the HA cluster I originally intended to use Calico as the network plugin, but it kept failing: DNS could not reach the API service, much like this issue on GitHub. I never found a solution, and everything worked after switching to Flannel.

  • When deploying Ingress I originally intended to use Istio, but two problems could not be solved:

    • Configuring hostNetwork for Istio had no effect, so it could not listen on local ports 80 and 443; as a stopgap I used rinetd to map the ports;
    • Istio could not proxy an existing HTTPS service; the usual configurations add HTTPS on top of an HTTP backend, and I do not know how to configure a service that is already HTTPS;

If anyone has run into similar problems and found a good solution, please don't hesitate to share.
