What is Kubernetes?
Kubernetes is a container cluster management system open-sourced by Google. Built around Docker containers, it makes it easy to manage containers spread across many Docker hosts.
Its main features are:
1) It abstracts multiple Docker hosts into a single resource pool and manages containers cluster-wide, including task scheduling, resource management, elastic scaling, and rolling upgrades.
2) It uses an orchestration format (YAML files) to build container clusters quickly, provides load balancing, and solves the problem of wiring containers together and letting them communicate.
3) It automatically manages and heals containers. Put simply: if you create a cluster with ten containers and one of them exits abnormally, Kubernetes restarts it or reschedules a replacement so that ten containers are always running, and conversely kills any surplus ones.
Kubernetes roles:
1)Pod
A Pod is the smallest unit of operation in Kubernetes; a Pod consists of one or more containers.
A Pod always runs on a single host, and its containers share the same volumes, network, and namespace.
2)ReplicationController (RC)
An RC manages Pods; one RC governs one or more Pods. Once an RC is created, the system creates Pods according to the replica count it defines. At runtime, if the number of Pods drops below that count, the RC restarts stopped Pods or schedules replacements; if it rises above, the surplus Pods are killed. The set of running Pods can also be rescaled dynamically.
An RC is associated with its Pods through labels. During a rolling upgrade, the RC replaces the Pods it manages one at a time.
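As a concrete illustration, a minimal RC manifest might look like the following sketch (the nginx-rc name and nginx image are illustrative, not part of the original setup); it keeps three replicas of an nginx Pod running, selected by the app: nginx label:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc            # illustrative name
spec:
  replicas: 3
  selector:
    app: nginx              # the RC manages every Pod carrying this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
After kubectl create -f nginx-rc.yaml, deleting one of the three Pods causes the RC to schedule a replacement, and kubectl scale rc nginx-rc --replicas=5 resizes the set dynamically.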
3)Service
A Service defines an abstraction over a logical set of Pods whose containers all provide the same functionality. The set is assembled from the defined labels and selector. When a Service is created it is assigned a cluster IP; together with the defined port, this IP gives the whole set a single, unified access point and load-balances across it.
4)Label
A Label is a key/value pair used to identify Pods, Services, and RCs.
A Pod, Service, or RC can carry several labels, but each label key may appear only once on an object.
Labels are chiefly what route requests arriving at a Service to the backend Pods that serve them.
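To make the Label/selector relationship concrete, here is a minimal Service sketch (again with illustrative names); it selects every Pod carrying the app: nginx label from the RC example above and load-balances across them behind one cluster IP:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc           # illustrative name
spec:
  selector:
    app: nginx              # requests to this Service are forwarded to Pods with this label
  ports:
  - port: 80
    targetPort: 80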
Kubernetes components:
1)kubectl
The client command-line tool and the operational entry point of the whole system; it formats the commands it receives and sends them to kube-apiserver.
2)kube-apiserver
The control entry point of the whole system, exposing its interface as a REST API service.
3)kube-controller-manager
Runs the system's background tasks, including tracking node status, maintaining Pod counts, and keeping Pods and Services associated.
4)kube-scheduler
Responsible for node resource management; it accepts Pod-creation tasks from kube-apiserver and assigns each one to a node.
5)etcd
Responsible for service discovery and configuration sharing between nodes.
6)kube-proxy
Runs on every worker node as the network proxy for Pods; it periodically fetches Service information from etcd and applies the corresponding forwarding policy.
7)kubelet
Runs on every worker node as an agent; it accepts the Pod tasks assigned to its node, manages the containers, periodically collects container status, and reports back to kube-apiserver.
8)DNS
An optional DNS service that creates DNS records for each Service object so that all Pods can reach Services by name.
Basic deployment steps:
1) Install Docker on the minion nodes.
2) Configure cross-host container communication on the minion nodes.
3) Deploy and start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler components on the master node.
4) Deploy and start the kubelet and kube-proxy components on the minion nodes.
Note: if Docker is not installed on a minion host, starting kubelet reports errors like the following:
W0116 23:36:24.205672 2589 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W0116 23:36:24.205751 2589 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I0116 23:36:24.205817 2589 plugins.go:71] No cloud provider specified.
1、Environment introduction and preparation
1.1 Host operating system
The physical machines run 64-bit CentOS 7.3; details below.
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
1.2 Host information
Three machines are used to deploy the k8s runtime environment; details below:
Role                   | Hostname   | IP
Master, etcd, registry | k8s-master | 10.0.251.148
Node1                  | k8s-node-1 | 10.0.251.153
Node2                  | k8s-node-2 | 10.0.251.155
Set the hostname on each of the three machines:
On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On node1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
On node2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2
Set up /etc/hosts on all three machines by running:
echo '10.0.251.148 k8s-master
10.0.251.148 etcd
10.0.251.148 registry
10.0.251.153 k8s-node-1
10.0.251.155 k8s-node-2' >> /etc/hosts
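As an optional sanity check, confirm that the names resolve on each machine before continuing:
ping -c 1 k8s-master
ping -c 1 etcd
ping -c 1 registry
ping -c 1 k8s-node-1
ping -c 1 k8s-node-2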
1.3 Disable the firewall on all three machines
systemctl disable firewalld.service
systemctl stop firewalld.service
2、Deploy etcd
k8s depends on etcd, so etcd has to be deployed first; here it is installed with yum:
[root@localhost ~]# yum install etcd -y
The yum-installed etcd keeps its default configuration file at /etc/etcd/etcd.conf. Edit it and change the entries shown modified below:
[root@localhost ~]# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
Start etcd and verify its status:
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set testdir/testkey0 0
0
[root@localhost ~]# etcdctl get testdir/testkey0
0
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
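Optionally, you can also inspect what was stored and clean up the test key (etcdctl v2 syntax, which is what the yum-installed etcd uses):
etcdctl ls / --recursive
etcdctl member list
etcdctl rm testdir/testkey0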
Further reading: for deploying a multi-member etcd cluster, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html
3、Deploy the master
3.1 Install Docker
[root@k8s-master ~]# yum install docker
Edit the Docker configuration file so that images can be pulled from our registry:
[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'
Enable the service at boot and start it:
[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start
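To confirm the daemon actually picked up the --insecure-registry flag (optional; the exact output of docker info varies with the Docker version):
ps -ef | grep docker      # the daemon command line should contain --insecure-registry registry:5000
docker info               # newer versions list Insecure Registries near the end of the output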
3.2 Install Kubernetes
[root@k8s-master ~]# yum install kubernetes
3.3 Configure and start Kubernetes
The following components have to run on the Kubernetes master:
kube-apiserver
kube-controller-manager
kube-scheduler
Accordingly, change the entries shown modified below in the following configuration files:
3.3.1 /etc/kubernetes/apiserver
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
3.3.2 /etc/kubernetes/config
[root@k8s-master ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
Start the services and enable them at boot:
[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
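Before moving on to the nodes, it is worth checking that the master components are healthy. A quick sketch (the -s flag can be omitted when kubectl runs on the master itself):
[root@k8s-master ~]# curl http://k8s-master:8080/version
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get componentstatuses
get componentstatuses should report the scheduler, controller-manager, and etcd as Healthy.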
4、Deploy the nodes
4.1 Install Docker
See 3.1.
4.2 Install Kubernetes
See 3.2.
4.3 Configure and start Kubernetes
The following components have to run on each Kubernetes node:
kubelet
kube-proxy
Accordingly, change the entries shown modified below in the following configuration files:
4.3.1 /etc/kubernetes/config
[root@k8s-node-1 ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
4.3.2 /etc/kubernetes/kubelet
[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
On k8s-node-2, set KUBELET_HOSTNAME to --hostname-override=k8s-node-2 accordingly.
Start the services on each node and enable them at boot:
[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
4.4 Check the status
On the master, list the nodes in the cluster and their status:
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s
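For more detail on an individual node (capacity, conditions, running Pods), kubectl describe can be used:
[root@k8s-master ~]# kubectl describe node k8s-node-1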
At this point a Kubernetes cluster has been set up, but it cannot yet do useful work; please continue with the steps below.
5、Create the overlay network — Flannel
5.1 Install Flannel
Run the following on the master and on every node:
[root@k8s-master ~]# yum install flannel
The installed version is 0.0.5.
5.2 Configure Flannel
On the master and on every node, edit /etc/sysconfig/flanneld and change the entries shown modified below:
[root@k8s-master ~]# vi /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
5.3 Configure the flannel key in etcd
Flannel uses etcd for its configuration, which keeps the configuration of multiple Flannel instances consistent, so the following needs to be set in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.)
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
5.4 Start everything
After starting Flannel, Docker and the Kubernetes services must be restarted in turn.
On the master, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On each node, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
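To check that flannel came up correctly, inspect the subnet it leased on each machine. A quick sketch (with the default UDP backend the interface is named flannel0; after the Docker restart, docker0 should sit inside the leased subnet):
cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0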
6、Validate configuration files
When you are not sure whether a declarative configuration file is written correctly, you can validate it with:
$ kubectl create -f ./hello-world.yaml --validate
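hello-world.yaml is not shown above; a minimal Pod manifest that would pass validation might look like this (the name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world
    image: nginx
    ports:
    - containerPort: 80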
Deploying the Dashboard on the Kubernetes cluster
The preceding sections set up a basic K8s cluster on CentOS 7; this part builds on it to deploy the K8s Dashboard.
1、The yaml files
Create and edit dashboard.yaml, paying attention to the parts below (in particular, --apiserver-host must point at your own master):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://10.0.251.148:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
Then create and edit dashboardsvc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
2、Preparing the images
dashboard.yaml specifies the image the dashboard runs on: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 (you can of course choose another version). In addition, starting k8s Pods needs one more image: registry.access.redhat.com/rhel7/pod-infrastructure:latest (configured in /etc/kubernetes/kubelet on the nodes). For well-known reasons, neither image can be pulled directly from within mainland China; two ways of obtaining them are described below.
2.1 Download abroad, import at home
Pull the images on an overseas server, save them to tar archives with docker save, transfer the archives back, and import them on every node with docker load. The commands look like this:
On the overseas server:
docker save gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 > dashboard.tar
docker save registry.access.redhat.com/rhel7/pod-infrastructure:latest > podinfrastructure.tar
scp *.tar root@<your-public-IP-at-home>:/home/tar
On every node:
docker load < dashboard.tar
docker load < podinfrastructure.tar
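After importing, confirm on each node that both images are present:
docker images | grep kubernetes-dashboard-amd64
docker images | grep pod-infrastructure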
2.2 Use a gateway that bypasses the firewall
On the same network segment as the nodes (behind the same switch), set up a gateway that can reach google, Facebook, and similar sites, point the GATEWAY of every machine in the cluster at it, and restart the network. All machines can then download the two images directly.
3、Start the dashboard
Run the following on the master:
kubectl create -f dashboard.yaml
kubectl create -f dashboardsvc.yaml
After that, the dashboard deployment is complete.
4、Verify
Verify from the command line by running the following on the master:
[root@k8s-master ~]# kubectl get deployment --all-namespaces
NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kubernetes-dashboard-latest   1         1         1            1           1h
[root@k8s-master ~]# kubectl get svc --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
default       kubernetes             10.254.0.1      <none>        443/TCP   9d
kube-system   kubernetes-dashboard   10.254.44.119   <none>        80/TCP    1h
[root@k8s-master ~]# kubectl get pod -o wide --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE   IP          NODE
kube-system   kubernetes-dashboard-latest-3866786896-vsf3h   1/1       Running   0          1h    10.0.82.2   k8s-node-1
To verify in a browser, visit: http://10.0.251.148:8080/ui
5、Tearing the application down
Run on the master:
kubectl delete deployment kubernetes-dashboard-latest --namespace=kube-system
kubectl delete svc kubernetes-dashboard --namespace=kube-system