Original article: https://blog.youkuaiyun.com/xxb249/article/details/79437989
(1) Setting up Kubernetes step by step
As is well known, Kubernetes (k8s for short) is used to manage Docker clusters. I have spent quite some time recently wrestling with environment issues, so I am writing this post to help beginners like me avoid the same detours.
I. Environment
Cluster environment
| Role | IP address | Version | Docker version | OS version |
| --- | --- | --- | --- | --- |
| master | 192.63.63.1/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| node1 | 192.63.63.10/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| node2 | 192.63.63.20/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
Required components on the master node
| Component | Role | Version |
| --- | --- | --- |
| etcd | Distributed key-value store that holds the cluster state | 3.2.11 |
| kube-apiserver | Core component that every other component talks to; exposes the HTTP RESTful API | v1.9.1 |
| kube-controller-manager | The cluster's internal management center, responsible for resources such as RCs, Pods and namespaces | v1.9.1 |
| kube-scheduler | Scheduling component, responsible for placing Pods onto nodes | v1.9.1 |
Required components on the node
| Component | Role | Version |
| --- | --- | --- |
| kubelet | Core component on each node; carries out the tasks handed down by the master | v1.9.1 |
| kube-proxy | Network proxy working alongside kubelet and the apiserver; effectively a load balancer that forwards Service requests to the backend Pods | v1.9.1 |
II. Installation
The book "Kubernetes权威指南" (The Definitive Guide to Kubernetes) installs via yum install. As of now (2018-02-27) the version available through yum is 1.5.2, while the latest release is 1.9.1. The gap between the two versions is significant; the main difference is that the kubelet configuration no longer supports the api-server parameter.
Although yum does not install the latest version, we can still borrow some pieces from it, such as the systemd service units and the various k8s configuration files.
2.0 Install etcd
As mentioned above, etcd is the datastore that holds the cluster's data. etcd is not a k8s component itself, so it has to be installed separately, which is easy to do with yum. The latest version currently available via yum is 3.2.11.
[root@localhost ~]# yum install etcd
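It is worth enabling and starting etcd right after installing it and checking that it responds; the lines below are a minimal sketch, assuming the default single-member listener on http://127.0.0.1:2379 (etcd can also be started later together with the other services, as done in section III):
systemctl enable etcd
systemctl start etcd
etcdctl cluster-health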
2.1 Download and install
Download the latest release from the official release page; only the Server Binaries tarball is needed, because the binaries required on the nodes are also included in it. After downloading, unpack it and copy the executables into a system directory, as in the session below.
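The post links to the release page rather than giving a download URL; assuming the usual release layout, the v1.9.1 server tarball could be fetched like this (verify the URL against the official release notes before relying on it):
wget https://dl.k8s.io/v1.9.1/kubernetes-server-linux-amd64.tar.gz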
[root@localhost packet]#
[root@localhost packet]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@localhost packet]# ls
kubernetes kubernetes-server-linux-amd64.tar.gz opensrc
[root@localhost packet]#
[root@localhost packet]# cd kubernetes/server/bin
[root@localhost bin]# cp apiextensions-apiserver cloud-controller-manager hyperkube kubeadm kube-aggregator kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler mounter /usr/bin
[root@localhost bin]#
2.2 Configure the systemd services
The unit files below all come from the kubernetes 1.5.2 RPM package and are placed in /usr/lib/systemd/system:
[root@localhost system]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]#
[root@localhost system]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]#
[root@localhost system]# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@localhost system]#
[root@localhost system]#
[root@localhost system]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@localhost system]#
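With the unit files in place, make systemd reload them and enable the services so they also start on boot; a small sketch for the master node:
systemctl daemon-reload
systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy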
2.3 Configure k8s
From the systemd unit files we can see that the /etc/kubernetes directory and the following files need to be created:
[root@localhost kubernetes]# ls
apiserver config controller-manager kubelet proxy scheduler
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@localhost kubernetes]#
For the apiserver, change --insecure-bind-address to 0.0.0.0 (or to the host's externally reachable IP address) so that it accepts connections from any address.
[root@localhost kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
[root@localhost kubernetes]#
In the kubelet configuration file the most important setting is the address of the apiserver, but --api-servers is no longer supported after v1.8, so it has to be commented out. That raises the question: how does kubelet find the apiserver now?
[root@localhost kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
##KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/kubernetes/pause"
# Add your own!
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
The remaining configuration files are essentially empty; there is not much in them.
[root@localhost kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--loglevel=0"
All of the configuration above is done on the master node. Once it is finished, k8s is a single-node cluster that runs both the master and a node. The node part still has a problem, though, which is covered below.
III. HTTP mode
As mentioned earlier, kubelet no longer supports the api-server parameter after v1.8, so how does the new kubelet talk to the apiserver? Through the kubeconfig parameter, which points at a configuration file (this is a big pitfall that cost me quite a while).
The /etc/kubernetes/kubelet configuration file contains the entry
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
which specifies where the kubeconfig file lives; its content is as follows:
[root@localhost kubernetes]#
[root@localhost kubernetes]# cat /var/lib/kubelet/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
[root@localhost kubernetes]#
A few notes on the content above:
1) clusters — the list of clusters; more than one is supported. Each entry must specify server, i.e. the address of the apiserver. HTTPS is also supported here and is covered in detail later.
2) contexts — cluster contexts; multiple contexts are supported.
3) current-context — the context currently in use.
The remaining fields are explained when HTTPS is introduced.
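As noted later in the HTTPS section, a kubeconfig like this is normally produced with kubectl config rather than written by hand. A sketch that would generate roughly the file above (the names myk8s and myk8s-context are simply the ones used here):
kubectl config set-cluster myk8s --server=http://127.0.0.1:8080 --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config use-context myk8s-context --kubeconfig=/var/lib/kubelet/kubeconfig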
That completes the single-node deployment; now start each service:
[root@localhost k8s]# systemctl start docker
[root@localhost k8s]# systemctl start etcd
[root@localhost k8s]# systemctl start kube-apiserver
[root@localhost k8s]# systemctl start kube-controller-manager
[root@localhost k8s]# systemctl start kube-scheduler
[root@localhost k8s]# systemctl start kubelet
[root@localhost k8s]# systemctl start kube-proxy
[root@localhost k8s]#
Verify that the environment is working:
[root@localhost k8s]#
[root@localhost k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
127.0.0.1 Ready <none> 16d v1.9.1
[root@localhost k8s]#
[root@localhost k8s]#
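Besides the node listing, the health of the control-plane components (scheduler, controller-manager, etcd) can also be checked; this extra step is not in the original post but uses only standard kubectl:
[root@localhost k8s]# kubectl get componentstatuses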
Everything so far has been on the master node; next we configure node1 to reach the apiserver over HTTP.
First copy the kubelet and kube-proxy binaries and the related configuration files to node1, then move the executables and configuration files into the corresponding directories:
[root@node1 k8s_node]#
[root@node1 k8s_node]# ls bin-file/ config-file/
bin-file/:
kubelet kube-proxy
config-file/:
config kubeconfig kubelet kubelet.service kube-proxy.service proxy
[root@node1 k8s_node]#
[root@node1 k8s_node]#
[root@node1 k8s_node]# mv bin-file/kubelet bin-file/kube-proxy /usr/bin
[root@node1 k8s_node]# mkdir /etc/kubernetes
[root@node1 k8s_node]# mv config-file/config config-file/kubelet config-file/proxy /etc/kubernetes/
[root@node1 k8s_node]# mv config-file/kubelet.service config-file/kube-proxy.service /usr/lib/systemd/system
[root@node1 k8s_node]#
[root@node1 k8s_node]# mkdir /var/lib/kubelet
[root@node1 k8s_node]# mv config-file/kubeconfig /var/lib/kubelet/
[root@node1 k8s_node]#
Important (both edits are sketched below):
1) In /var/lib/kubelet/kubeconfig, change the server IP address to http://192.63.63.1:8080.
2) In /etc/kubernetes/kubelet, change KUBELET_HOSTNAME to "--hostname-override=node1".
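A sketch of the two edits with sed, assuming the files are exactly as copied above:
sed -i 's#server: http://127.0.0.1:8080#server: http://192.63.63.1:8080#' /var/lib/kubelet/kubeconfig
sed -i 's#--hostname-override=127.0.0.1#--hostname-override=node1#' /etc/kubernetes/kubelet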
Start the docker, kubelet and kube-proxy services on node1, then verify on the master:
[root@localhost ~]#
[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
127.0.0.1 Ready <none> 17d v1.9.1
node1 Ready <none> 5m v1.9.1
[root@localhost ~]#
When node1 shows up in the list with STATUS Ready, the deployment succeeded.
That completes the k8s deployment and the HTTP mode.
(2) k8s communication over HTTPS
HTTP is insecure; it may be acceptable for purely internal use, but anything exposed externally should use HTTPS for better security. The following shows how to switch node2 to HTTPS.
The /var/lib/kubelet/kubeconfig file mentioned above is not created by hand; it is configured with the kubectl config command line, and the file is generated automatically once that configuration is done.
A quick overview of apiserver authentication; there are three modes:
1) Mutual HTTPS certificate authentication — note that it is two-way, not one-way (the most secure option).
2) HTTP token authentication.
3) HTTP basic authentication with a username and password.
The HTTP mode described earlier has no authentication at all: any host that can reach the master can talk to it. One thing to note in particular: the kubectl command-line tool can use both CA mutual authentication and simple authentication (HTTP basic or token) when talking to the apiserver, but every other component can only be configured for a single mode.
Next we generate the various certificates and the kubeconfig files.
Certificates can be generated with openssl or with the CFSSL toolkit. The following post uses CFSSL: http://www.cnblogs.com/netsa/p/8126155.html
I am more familiar with openssl, so openssl is what is used here.
I. Generate the certificates
0) Environment setup
[root@localhost ~]# mkdir kube-ca
[root@localhost kube-ca]#
[root@localhost kube-ca]# mkdir -p ./{certs,private,newcerts}
[root@localhost kube-ca]# touch ./index.txt
[root@localhost kube-ca]# echo 01 > ./serial
[root@localhost kube-ca]#
Edit the openssl configuration file; the main change is an x509 extension (subjectAltName) so the certificate can carry multiple IP addresses.
[root@localhost kube-ca]# vi /etc/pki/tls/openssl.cnf
[ CA_default ]
#dir = /etc/pki/CA # Where everything is kept
dir = /etc/kubernetes/kube-ca # re-point the CA directory
certs = $dir/certs # Where the issued certs are kept
crl_dir = $dir/crl # Where the issued crl are kept
database = $dir/index.txt # database index file.
#unique_subject = no # Set to 'no' to allow creation of
# several ctificates with same subject.
new_certs_dir = $dir/newcerts # default place for new certs.
certificate = $dir/cacert.pem # The CA certificate
serial = $dir/serial # The current serial number
crlnumber = $dir/crlnumber # the current crl number
# must be commented out to leave a V1 CRL
crl = $dir/crl.pem # The current CRL
private_key = $dir/private/cakey.pem # The private key
RANDFILE = $dir/private/.rand # private random number file
x509_extensions = usr_cert # use the usr_cert section for x509 extensions
[ usr_cert ]
subjectAltName = @alt_names
# extend the certificate with multiple IPs
[alt_names]
IP.1 = 127.0.0.1
IP.2 = 192.168.1.105
IP.3 = 192.63.63.1
IP.4 = 192.63.63.20
1) Generate the CA root certificate
Generate the CA private key
[root@localhost kube-ca]#
[root@localhost kube-ca]# openssl genrsa -out private/ca.key 2048
Generating RSA private key, 2048 bit long modulus
..................................+++
.......................................................................+++
e is 65537 (0x10001)
[root@localhost kube-ca]#
Generate the self-signed CA certificate
[root@localhost kube-ca]# openssl req -new -x509 -key private/ca.key -out certs/ca.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:BeiJing
Locality Name (eg, city) [Default City]:BeiJing
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:mykubeca.io
Email Address []:abcd@abc.com
[root@localhost kube-ca]#
The Common Name is set to an arbitrary value, mykubeca.io.
2) Generate the apiserver certificate
Generate the server private key
[root@localhost kube-ca]# mkdir apiserver
[root@localhost kube-ca]# openssl genrsa -out apiserver/apiserver.key 2048
Generating RSA private key, 2048 bit long modulus
...............+++
................+++
e is 65537 (0x10001)
[root@localhost kube-ca]#
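The post jumps from the apiserver private key straight to the apiserver configuration, and the curl test further down also uses a client certificate under client/ whose creation is never shown. Assuming the CA prepared above does the signing (with the usr_cert/alt_names extension so the apiserver certificate carries the extra IPs), the missing steps might look roughly like the sketch below; the paths, Common Names and validity period are assumptions, and dir in [ CA_default ] must point at this CA directory:
# apiserver: CSR and CA-signed certificate
openssl req -new -key apiserver/apiserver.key -subj "/CN=mykubeapiserver.io" -out apiserver/apiserver.csr
openssl ca -config /etc/pki/tls/openssl.cnf -policy policy_anything -extensions usr_cert -days 3650 \
    -in apiserver/apiserver.csr -out apiserver/apiserver.crt -cert certs/ca.crt -keyfile private/ca.key -batch
# shared client key/certificate for kubectl, kubelet and kube-proxy
mkdir client
openssl genrsa -out client/client.key 2048
openssl req -new -key client/client.key -subj "/CN=mykubeclient.io" -out client/client.csr
openssl ca -config /etc/pki/tls/openssl.cnf -policy policy_anything -days 3650 \
    -in client/client.csr -out client/client.crt -cert certs/ca.crt -keyfile private/ca.key -batch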
3) Edit the /etc/kubernetes/apiserver configuration file and add the following to KUBE_API_ARGS:
--client-ca-file=/etc/kubernetes/ca/ca.crt --tls-private-key-file=/etc/kubernetes/ca/apiserver.key --tls-cert-file=/etc/kubernetes/ca/apiserver.crt
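These flags assume the generated files have been copied into /etc/kubernetes/ca; that copy step is not shown in the post, so something along these lines is needed (adjust the source paths to wherever your CA directory actually lives):
mkdir -p /etc/kubernetes/ca
cp certs/ca.crt apiserver/apiserver.key apiserver/apiserver.crt /etc/kubernetes/ca/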
Restart the apiserver. By default it listens for HTTPS on port 6443; verify with curl:
[root@localhost kube-ca]# curl https://192.63.63.1:6443/api/v1/nodes \
    --cert /etc/kubernetes/kube-ca/client/client.crt \
    --key /etc/kubernetes/kube-ca/client/client.key \
    --cacert /etc/kubernetes/kube-ca/certs/ca.crt -v
If the node data comes back normally the certificates are configured correctly; otherwise the certificate setup has failed.
1.2.2 Modify the node
1) A node normally runs only the kubelet and kube-proxy components, and for convenience both share the same client certificate. Copy the CA root certificate and the client key and certificate into /etc/kubernetes/ca on node2.
2) Create the kubeconfig files. Running the commands below in order produces two files, kubelet.kubeconfig and kube-proxy.kubeconfig:
export KUBE_APISERVER="https://192.63.63.1:6443"

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ca/ca.crt \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet.kubeconfig
kubectl config set-credentials kubelet \
    --client-certificate=/etc/kubernetes/ca/client.crt \
    --client-key=/etc/kubernetes/ca/client.key \
    --kubeconfig=kubelet.kubeconfig
kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet \
    --kubeconfig=kubelet.kubeconfig
kubectl config use-context default --kubeconfig=kubelet.kubeconfig

kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ca/ca.crt \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
    --client-certificate=/etc/kubernetes/ca/client.crt \
    --client-key=/etc/kubernetes/ca/client.key \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Copy kubelet.kubeconfig into /var/lib/kubelet and kube-proxy.kubeconfig into /var/lib/kube-proxy, creating the directories if they do not exist.
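For example, assuming the two files were generated in the current directory:
mkdir -p /var/lib/kubelet /var/lib/kube-proxy
cp kubelet.kubeconfig /var/lib/kubelet/
cp kube-proxy.kubeconfig /var/lib/kube-proxy/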
3) Edit the configuration files:
In /etc/kubernetes/kubelet, add --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig to KUBELET_ARGS.
In /etc/kubernetes/proxy, add --kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig to KUBE_PROXY_ARGS.
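After the edits the two lines would look roughly like this (the other kubelet arguments are kept from the earlier HTTP setup; your exact contents may differ):
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig"
KUBE_PROXY_ARGS="--kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig"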
Restart the kubelet and kube-proxy services, then look at the node list on the master:
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    17d       v1.9.1
node1       Ready     <none>    5m        v1.9.1
node2       Ready     <none>    1m        v1.9.1
[root@localhost ~]#
The contents of kubelet.kubeconfig and kube-proxy.kubeconfig are as follows:
[root@localhost kube-proxy]# cat /var/lib/kubelet/kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.169.122.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/ca/client.crt
    client-key: /etc/kubernetes/ca/client.key
[root@localhost kube-proxy]#
[root@localhost kube-proxy]# cat /var/lib/kube-proxy/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.169.122.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    as-user-extra: {}
    client-certificate: /etc/kubernetes/ca/client.crt
    client-key: /etc/kubernetes/ca/client.key
[root@localhost kube-proxy]#
II. Problems encountered
Problem 1:
[root@localhost controller]# curl https://127.0.0.1:443/api/v1/nodes --cert /etc/kubernetes/kube-ca/client/client.crt --key /etc/kubernetes/kube-ca/client/client.key --cacert /etc/kubernetes/kube-ca/certs/ca.crt -v
* About to connect() to 127.0.0.1 port 443 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/kubernetes/kube-ca/certs/ca.crt
CApath: none
* NSS error -12190 (SSL_ERROR_PROTOCOL_VERSION_ALERT)
* Peer reports incompatible or unsupported protocol version.
* Closing connection 0
curl: (35) Peer reports incompatible or unsupported protocol version.
Solution: update the following packages:
yum update nss nss-util nspr
yum update curl
Problem 2: kubectl get nodes used to fail with an error along the lines of not being able to find the master (I no longer remember the exact message).
Fix 1: run kubectl get nodes --kubeconfig=XXX to point at a kubeconfig file; the kubelet's file can serve as a reference.
Fix 2: kubectl reads ~/.kube/config by default, and that config file sets the server address; it is in fact just a kubeconfig file.
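A sketch of fix 2, reusing the mutual-TLS client certificate from above (the cluster and user names are arbitrary):
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ca/ca.crt --server=https://192.63.63.1:6443 --kubeconfig=$HOME/.kube/config
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ca/client.crt --client-key=/etc/kubernetes/ca/client.key --kubeconfig=$HOME/.kube/config
kubectl config set-context default --cluster=kubernetes --user=admin --kubeconfig=$HOME/.kube/config
kubectl config use-context default --kubeconfig=$HOME/.kube/config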
That completes the HTTPS setup; token and basic authentication are covered later.