
Contents
- 1. Install Ansible
- 2. Install k8s
- 3. Check the environment
- 3.1. Check etcd
- 3.2. Check flanneld
- 3.3. Check nginx and keepalived
- 3.4. Check kube-apiserver
- 3.5. Check kube-controller-manager
- 3.6. Check kube-scheduler
- 3.7. Check kubelet
- 3.8. Check kube-proxy
- 4. Check the add-ons
- 4.1. Check coredns
- 4.2. Check the dashboard
- 4.3. Check traefik
- 4.4. Check metrics
- 4.5. Check EFK
- 5. Validate the cluster
- 6. Restart all components
1. Install Ansible
# Switch the system to the Aliyun yum repos and update it
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.$(date +%Y%m%d)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache && yum update -y

# Install ansible
yum -y install epel-release
yum install ansible -y
ssh-keygen -t rsa
ssh-copy-id xx.xx.xx.xx

## Copy the key to all machines in one batch
#### Write out each machine's IP, SSH port, and login password
cat > hostname.txt <<EOF
192.168.10.11 22 fana
192.168.10.12 22 fana
192.168.10.13 22 fana
192.168.10.14 22 fana
EOF
#### Skip the interactive "yes" prompt: enable StrictHostKeyChecking=no, then restart sshd
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
#### Then copy the key to every host
cat hostname.txt | while read ip port pawd;do sshpass -p $pawd ssh-copy-id -p $port root@$ip;done
#### Install sshpass first (the loop above depends on it)
wget http://sourceforge.net/projects/sshpass/files/sshpass
tar xvzf sshpass-1.06.tar.gz
cd sshpass-1.06
./configure
make
make install

## For a kernel upgrade, see: https://www.cnblogs.com/fan-gx/p/11006762.html
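With the keys distributed, a quick way to confirm ansible can reach every machine is an ad-hoc ping. This is a minimal sketch; the inventory file hosts and the group name k8s here are hypothetical, so substitute whatever inventory the playbook actually uses.

cat > hosts <<EOF
[k8s]
192.168.10.11
192.168.10.12
192.168.10.13
192.168.10.14
EOF
ansible -i hosts k8s -m ping   # every host should reply "pong"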
2. Install k8s
## Download the ansible playbooks
# Link: https://pan.baidu.com/s/1VKQ5txJ2xgwUVim_E2P9kA
# Extraction code: 3cq2

## Install k8s with ansible
ansible-playbook -i inventory installK8s.yml

## Versions:
k8s: 1.14.8
etcd: 3.3.18
flanneld: 0.11.0
docker: 19.03.5
nginx: 1.16.1

## Self-signed TLS certificates
etcd:            ca.pem server.pem server-key.pem
flannel:         ca.pem server.pem server-key.pem
kube-apiserver:  ca.pem server.pem server-key.pem
kubelet:         ca.pem ca-key.pem
kube-proxy:      ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl:         ca.pem admin.pem admin-key.pem   # used by the administrator to access the cluster

## Check the certificate lifetime. Upstream recommends upgrading a k8s cluster at least once a year; an upgrade also renews the certificates.
openssl x509 -in ca.pem -text -noout
### Output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            51:5c:66:8b:40:24:d7:bb:ea:94:e7:5a:33:fe:44:a2:e2:18:51:b3
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CN, ST=ShangHai, L=ShangHai, O=k8s, OU=System, CN=kubernetes
        Validity
            Not Before: Dec 14 13:26:00 2019 GMT
            Not After : Dec 11 13:26:00 2029 GMT   # valid for 10 years
        Subject: C=CN, ST=ShangHai, L=ShangHai, O=k8s, OU=System, CN=kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c2:5c:92:dd:36:67:3f:d4:f1:e0:5f:e0:48:40:

# Images used
kubelet:        243662875/pause-amd64:3.1
coredns:        243662875/coredns:1.3.1
dashboard:      243662875/kubernetes-dashboard-amd64:v1.10.1
metrics-server: 243662875/metrics-server-amd64:v0.3.6
traefik:        traefik:latest
es:             elasticsearch:6.6.1
fluentd-es:     243662875/fluentd-elasticsearch:v2.4.0
kibana:         243662875/kibana-oss:6.6.1
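Beyond eyeballing one certificate, it can help to list the expiry date of every cert at once. A minimal sketch, assuming the certificates live under /etc/kubernetes/ssl as elsewhere in this post:

for cert in /etc/kubernetes/ssl/*.pem; do
  case "$cert" in *-key.pem) continue;; esac   # private keys have no notAfter field
  printf '%-45s %s\n' "$cert" "$(openssl x509 -in "$cert" -noout -enddate)"
done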
3. Check the environment
3.1. Check etcd
etcd reference: https://www.cnblogs.com/winstom/p/11811373.html
systemctl status etcd|grep active

etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem cluster-health
## Output:
member 1af68d968c7e3f22 is healthy: got healthy result from https://192.168.10.12:2379
member 7508c5fadccb39e2 is healthy: got healthy result from https://192.168.10.11:2379
member e8d9a97b17f26476 is healthy: got healthy result from https://192.168.10.13:2379
cluster is healthy

etcdctl --endpoints=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem member list

ETCDCTL_API=3 etcdctl -w table --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" endpoint status
### Output:
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.10.11:2379 | 7508c5fadccb39e2 | 3.3.18  | 762 kB  | false     |       421 |     287371 |
| https://192.168.10.12:2379 | 1af68d968c7e3f22 | 3.3.18  | 762 kB  | true      |       421 |     287371 |
| https://192.168.10.13:2379 | e8d9a97b17f26476 | 3.3.18  | 762 kB  | false     |       421 |     287371 |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+

# Error encountered: cannot unmarshal event: proto: wrong wireType = 0 for field Key
# Fix: see https://blog.youkuaiyun.com/dengxiafubi/article/details/102627341

# List the keys stored through the etcd v3 API
ETCDCTL_API=3 etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem get / --prefix --keys-only
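To pinpoint a single bad member rather than the cluster as a whole, the same certificates can be used to probe each endpoint separately:

for ep in https://192.168.10.11:2379 https://192.168.10.12:2379 https://192.168.10.13:2379; do
  ETCDCTL_API=3 etcdctl --endpoints="$ep" \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    endpoint health
done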
3.2. Check flanneld
systemctl status flanneld|grep Active
ip addr show|grep flannel
ip addr show|grep docker
cat /run/flannel/docker
cat /run/flannel/subnet.env

#### List the directories in the key-value store
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/flanneld.pem --key-file=/etc/kubernetes/ssl/flanneld-key.pem ls -r
## Output:
/kubernetes
/kubernetes/network
/kubernetes/network/config
/kubernetes/network/subnets
/kubernetes/network/subnets/172.30.12.0-24
/kubernetes/network/subnets/172.30.43.0-24
/kubernetes/network/subnets/172.30.9.0-24

#### Check the pod network configured for the cluster
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/flanneld.pem --key-file=/etc/kubernetes/ssl/flanneld-key.pem get /kubernetes/network/config
#### Check the list of allocated pod subnets
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/flanneld.pem --key-file=/etc/kubernetes/ssl/flanneld-key.pem ls /kubernetes/network/subnets
#### Check the node IP and flannel interface corresponding to a pod subnet
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/flanneld.pem --key-file=/etc/kubernetes/ssl/flanneld-key.pem get /kubernetes/network/subnets/172.30.74.0-24
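To test the overlay itself, one option is to ping the flannel address each node registered in etcd. A sketch, assuming the vxlan backend, where each node's flannel.1 interface carries the .0 address of its subnet; with other backends, ping a pod IP on each node instead:

ETCD="etcdctl --endpoints=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/flanneld.pem --key-file=/etc/kubernetes/ssl/flanneld-key.pem"
for sub in $($ETCD ls /kubernetes/network/subnets | awk -F/ '{print $NF}'); do
  addr=${sub%-*}   # e.g. 172.30.12.0-24 -> 172.30.12.0
  ping -c 1 -W 1 "$addr" >/dev/null && echo "$addr reachable" || echo "$addr UNREACHABLE"
done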
3.3. Check nginx and keepalived
ps -ef|grep nginx
ps -ef|grep keepalived
netstat -lntup|grep nginx
ip add|grep 192.168
# Look for the VIP; output:
inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute ens32
inet 192.168.10.100/32 scope global ens32
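Beyond checking that the VIP is bound, confirm it actually fronts kube-apiserver through the nginx load balancer on port 8443. A sketch; whether the unauthenticated /healthz call succeeds depends on the apiserver's anonymous-auth settings, so a client-certificate fallback is included (cert paths assumed to be /etc/kubernetes/ssl as elsewhere in this post):

curl -k https://192.168.10.100:8443/healthz; echo
# fallback with the admin client certificate:
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://192.168.10.100:8443/healthz; echo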
3.4. Check kube-apiserver
netstat -lntup | grep kube-apiser
# Output:
tcp   0   0 192.168.10.11:6443   0.0.0.0:*   LISTEN   115454/kube-apiserv

kubectl cluster-info
# Output:
Kubernetes master is running at https://192.168.10.100:8443
Elasticsearch is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get all --all-namespaces
kubectl get cs
# Output:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

#### Print the data kube-apiserver has written to etcd
ETCDCTL_API=3 etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem get /registry/ --prefix --keys-only

#### Error encountered:
unexpected ListAndWatch error: storage/cacher.go:/secrets: Failed to list *core.Secret: unable to transform key "/registry/secrets/kube-system/bootstrap-token-2z8s62": invalid padding on input
##### Cause: the kube-apiserver encryption tokens differ across the cluster. The file is encryption-config.yaml; its secret parameter must be identical on every master.
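The "invalid padding on input" error above comes from masters holding different encryption keys. A quick consistency check, assuming encryption-config.yaml sits in /etc/kubernetes on each master (adjust the path to your layout); the checksums must match on all three nodes:

for h in 192.168.10.11 192.168.10.12 192.168.10.13; do
  ssh root@$h md5sum /etc/kubernetes/encryption-config.yaml
done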
3.5. Check kube-controller-manager
netstat -lntup|grep kube-control
# Output:
tcp    0   0 127.0.0.1:10252   0.0.0.0:*   LISTEN   117775/kube-control
tcp6   0   0 :::10257          :::*        LISTEN   117775/kube-control

kubectl get cs

kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
# Output below; holderIdentity shows that kube12 has become the leader.
# (A combined /healthz probe for this component and the scheduler follows the next section.)
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_753e65bf-1e65-11ea-b9c4-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:32:49Z","renewTime":"2019-12-14T12:43:20Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:32:49Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "8282"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 753d2be7-1e65-11ea-b980-000c29e3f448
3.6. Check kube-scheduler
netstat -lntup|grep kube-sche
# Output:
tcp    0   0 127.0.0.1:10251   0.0.0.0:*   LISTEN   119678/kube-schedul
tcp6   0   0 :::10259          :::*        LISTEN   119678/kube-schedul

kubectl get cs

kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
# Output below; holderIdentity again shows that kube12 has become the leader.
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_89050e00-1e65-11ea-8f5e-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:33:23Z","renewTime":"2019-12-14T12:45:22Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:33:23Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "8486"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 899d1625-1e65-11ea-b980-000c29e3f448
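In this release both leader-elected components also expose an insecure /healthz on localhost (10252 for kube-controller-manager, 10251 for kube-scheduler, matching the netstat output above), so a quick probe on each master is:

for port in 10252 10251; do
  printf 'port %s: ' "$port"
  curl -s http://127.0.0.1:$port/healthz; echo    # each should print "ok"
done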
3.7. Check kubelet
netstat -lntup|grep kubelet
# Output:
tcp   0   0 127.0.0.1:35173       0.0.0.0:*   LISTEN   123215/kubelet
tcp   0   0 127.0.0.1:10248       0.0.0.0:*   LISTEN   123215/kubelet
tcp   0   0 192.168.10.11:10250   0.0.0.0:*   LISTEN   123215/kubelet

kubeadm token list --kubeconfig ~/.kube/config
# View the bootstrap tokens that were created:
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
hf0fa4.ta6haf1wsz1fnobf   22h   2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube11
oftjgn.01tob30h8v9l05lm   22h   2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube12
zuezc4.7kxhmayoue16pycb   22h   2019-12-15T19:33:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube13

kubectl get csr
# Already approved:
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-Oarn7xdWDiq7-CLn7yrE3fkTtmJtoSenmlGj3XL85lM   72m   system:bootstrap:zuezc4   Approved,Issued
node-csr-hJrfQXlhIqJTROLD1ExmcXq74J78uu6rjHuh5ZyVlMg   72m   system:bootstrap:zuezc4   Approved,Issued
node-csr-s-BAbqc8hOKfDj8xqdJ6fWjwdustqG9LhwbpYxa9x68   72m   system:bootstrap:zuezc4   Approved,Issued

kubectl get nodes
# Output:
NAME            STATUS   ROLES    AGE   VERSION
192.168.10.11   Ready    <none>   73m   v1.14.8
192.168.10.12   Ready    <none>   73m   v1.14.8
192.168.10.13   Ready    <none>   73m   v1.14.8

systemctl status kubelet

#### 1. Error: Failed to connect to apiserver: the server has asked for the client to provide credentials
#### Check whether the apiserver itself has a problem; if not, regenerate the kubelet-bootstrap.kubeconfig file and restart kubelet.
#### 2. kubelet will not start and logs no error
# Check that "address": "192.168.10.12" in kubelet.config.json is this machine's IP.
#### 3. Errors:
failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "192.168.10.12" is forbidden: User "system:node:192.168.10.11" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease": can only access node lease with the same name as the requesting node
Unable to register node "192.168.10.12" with API server: nodes "192.168.10.12" is forbidden: node "192.168.10.11" is not allowed to modify node "192.168.10.12"
# Same cause: check that "address": "192.168.10.12" in kubelet.config.json is this machine's IP.
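If a node instead hangs at registration because its CSR was never signed, any pending requests can be approved in one go:

kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve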
3.8. Check kube-proxy
netstat -lnpt|grep kube-proxy
# Output:
tcp    0   0 192.168.10.11:10249   0.0.0.0:*   LISTEN   125459/kube-proxy
tcp    0   0 192.168.10.11:10256   0.0.0.0:*   LISTEN   125459/kube-proxy
tcp6   0   0 :::32698              :::*        LISTEN   125459/kube-proxy
tcp6   0   0 :::32699              :::*        LISTEN   125459/kube-proxy
tcp6   0   0 :::32700              :::*        LISTEN   125459/kube-proxy

ipvsadm -ln
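With the proxy in IPVS mode, the kubernetes service VIP (10.254.0.1 in this cluster) should appear as a virtual server; grep for it directly. The real servers listed behind it should be the three masters on port 6443.

ipvsadm -ln | grep -A3 '10.254.0.1:443'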
4. Check the add-ons
4.1. Check coredns
kubectl get pods -n kube-system   # check that every pod has started

# Verify DNS from inside a container
kubectl run dig --rm -it --image=docker.io/azukiapp/dig /bin/sh

# ping baidu
ping www.baidu.com
PING www.baidu.com (180.101.49.11): 56 data bytes
64 bytes from 180.101.49.11: seq=0 ttl=127 time=10.772 ms
64 bytes from 180.101.49.11: seq=1 ttl=127 time=9.347 ms
64 bytes from 180.101.49.11: seq=2 ttl=127 time=10.937 ms
64 bytes from 180.101.49.11: seq=3 ttl=127 time=11.149 ms
64 bytes from 180.101.49.11: seq=4 ttl=127 time=10.677 ms

cat /etc/resolv.conf   # inspect the resolver configuration
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5

nslookup www.baidu.com
# Output:
Server:    10.254.0.2
Address:   10.254.0.2#53
Non-authoritative answer:
www.baidu.com canonical name = www.a.shifen.com.
Name: www.a.shifen.com
Address: 180.101.49.12
Name: www.a.shifen.com
Address: 180.101.49.11

nslookup kubernetes.default
# Output:
Server:    10.254.0.2
Address:   10.254.0.2#53
Name: kubernetes.default.svc.cluster.local
Address: 10.254.0.1

nslookup kubernetes
# Output:
Server:    10.254.0.2
Address:   10.254.0.2#53
Name: kubernetes.default.svc.cluster.local
Address: 10.254.0.1
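An equivalent one-shot test that does not leave an interactive shell behind; busybox:1.28 is pinned here because nslookup is broken in some newer busybox images:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default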
4.2. Check the dashboard
### Visiting https://192.168.10.13:10250/metrics in Chrome returns "Unauthorized": a client certificate is required. Generate one as follows.
# 1. On a Windows machine, install a JDK and run keytool from its bin directory. Copy ca.pem down first (here it was placed on drive E), then import it:
keytool -import -v -trustcacerts -alias appmanagement -file "E:\ca.pem" -storepass password -keystore cacerts   # import the certificate
keytool -delete -v -trustcacerts -alias appmanagement -file "E:\ca.pem" -storepass password -keystore cacerts   # delete the certificate
# 2. After that, run the following on Linux:
openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
# 3. Import admin.pfx into the browser; access then works normally.

# Then open the dashboard:
https://192.168.10.13:32700
#### or
https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

#### A kubeconfig has already been generated at /etc/kubernetes/dashboard.kubeconfig.
# The login token is saved in the {{k8s_home}}/dashboard_login_token.txt file, or fetch it with:
kubectl -n kube-system describe secret `kubectl -n kube-system get secret|grep dashboard | awk '{print $1}'`
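To print just the bearer token without the surrounding describe output, a jsonpath variant works too. A sketch, assuming the dashboard ServiceAccount's secret follows the usual kubernetes-dashboard-token-* naming:

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-token/ {print $1; exit}') \
  -o jsonpath='{.data.token}' | base64 -d; echo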
4.3. Check traefik
# One traefik instance is deployed on every node
kubectl get pod,deploy,daemonset,service,ingress -n kube-system | grep traefik
### Output:
pod/traefik-ingress-controller-gl7vs   1/1   Running   0   43m
pod/traefik-ingress-controller-qp26j   1/1   Running   0   43m
pod/traefik-ingress-controller-x99ls   1/1   Running   0   43m
daemonset.extensions/traefik-ingress-controller   3   3   3   3   3   43m
service/traefik-ingress-service   ClusterIP   10.254.148.220   80/TCP,8080/TCP   43m
service/traefik-web-ui            ClusterIP   10.254.139.95    80/TCP            43m
ingress.extensions/traefik-web-ui   traefik-ui   80   43m

# Access test; each node returns:
curl -H 'host:traefik-ui' 192.168.10.11
Found.
curl -H 'host:traefik-ui' 192.168.10.12
Found.
curl -H 'host:traefik-ui' 192.168.10.13
Found.

# Check the ports
netstat -lntup|grep traefik
tcp6   0   0 :::8080   :::*   LISTEN   66426/traefik
tcp6   0   0 :::80     :::*   LISTEN   66426/traefik

# Then open http://192.168.10.11:8080/
4.4. Check metrics
kubectl top node
### Error:
Error from server (Forbidden): forbidden: User "system:anonymous" cannot get path "/apis/metrics.k8s.io/v1beta1"
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
### Workaround (note this binds cluster-admin to anonymous users, so it is only suitable for a lab):
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin

### Another error encountered:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
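The ServiceUnavailable error usually means the aggregated API cannot reach metrics-server. A diagnostic sketch; the k8s-app=metrics-server label is the one used by the stock manifests and may differ in your deployment:

kubectl get apiservice v1beta1.metrics.k8s.io                      # AVAILABLE should be True
kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50    # otherwise check the pod logs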
4.5. Check EFK
es:     http://192.168.10.11:32698/
Kibana: http://192.168.10.11:32699
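A quick check of both NodePorts from the command line: the Elasticsearch cluster-health endpoint should report green or yellow, and Kibana should answer over HTTP.

curl -s http://192.168.10.11:32698/_cluster/health?pretty
curl -sI http://192.168.10.11:32699 | head -n1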
5. Validate the cluster
# For deploying glusterfs, see: https://www.cnblogs.com/fan-gx/p/12101686.html
kubectl create ns myapp
kubectl apply -f nginx.yaml
kubectl get pod,svc,ing -n myapp -o wide
### Output:
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
pod/my-nginx-69f8f65796-zd777   1/1     Running   0          19m   172.30.36.15   192.168.10.11   <none>           <none>

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/my-nginx   ClusterIP   10.254.131.1   <none>        80/TCP    21m   app=my-nginx

NAME                          HOSTS             ADDRESS   PORTS   AGE
ingress.extensions/my-nginx   myapp.nginx.com             80      21m

# Verify that access works
curl http://172.30.36.15                       # pod IP
curl http://10.254.131.1                       # service ClusterIP
curl -H "host:myapp.nginx.com" 192.168.10.11   # through the ingress

### Open http://192.168.10.100:8088/ in Chrome.
### The deployment already proxies the traefik address behind nginx; see /data/nginx/conf/nginx.conf.
kubectl exec -it my-nginx-69f8f65796-zd777 -n myapp bash
echo "hello world" >/usr/share/nginx/html/index.html
# The browser at http://192.168.10.100:8088/ now shows "hello world"
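As one more sanity test, scaling the deployment and killing a pod exercises scheduling and self-healing across the nodes. A sketch, reusing the pod name from the output above:

kubectl -n myapp scale deployment my-nginx --replicas=3
kubectl -n myapp get pod -o wide -w                    # Ctrl-C once all replicas are Running
kubectl -n myapp delete pod my-nginx-69f8f65796-zd777
kubectl -n myapp get pod                               # a replacement pod should be recreated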
6. Restart all components
systemctl restart etcd && systemctl status etcd
systemctl restart flanneld && systemctl status flanneld
systemctl restart docker && systemctl status docker
systemctl stop nginx && systemctl start nginx && systemctl status nginx
systemctl restart keepalived && systemctl status keepalived
systemctl restart kube-apiserver && systemctl status kube-apiserver
systemctl restart kube-controller-manager && systemctl status kube-controller-manager
systemctl restart kube-scheduler && systemctl status kube-scheduler
systemctl restart kubelet && systemctl status kubelet
systemctl restart kube-proxy && systemctl status kube-proxy
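Since ansible is already in place, the same restarts can be pushed to every node at once. A sketch, assuming the playbook's inventory file and its "all" group, and that each service exists on each host; split the list per host group if, say, nginx and keepalived only run on the masters:

for svc in etcd flanneld docker nginx keepalived kube-apiserver \
           kube-controller-manager kube-scheduler kubelet kube-proxy; do
  ansible -i inventory all -m systemd -a "name=$svc state=restarted"
done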
Author: Fantasy
Source: http://dwz.date/bWku