Kubernetes Study Notes: Installing and Deploying a k8s Cluster

This note walks through installing and deploying a Kubernetes cluster with kubeadm.

Install Docker

Docker needs to be installed on every node.

apt-get update && apt-get install -y docker.io
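After installation, it is worth confirming that the Docker daemon is enabled and running before moving on; a quick check (assuming systemd, as on Ubuntu 16.04+):

systemctl enable docker
systemctl start docker
docker info    # should print server details without errors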

Install kubelet, kubeadm, and kubectl

Install kubelet, kubeadm, and kubectl on all nodes.

kubelet runs on every node in the cluster and is responsible for starting Pods and containers.

kubeadm initializes the cluster.

kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.


apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl

The same packages can also be installed from the Aliyun Ubuntu mirror, which is reachable from mainland China:

#!/bin/bash

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat << EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
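Optionally, pin the package versions so a routine apt upgrade does not move the cluster components out from under you (apt-mark is a standard apt tool):

apt-mark hold kubelet kubeadm kubectl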

Download the Docker Images

kubeadm init needs several Docker images, which it pulls from k8s.gcr.io by default. That registry is not reachable from mainland China without a proxy, so pull the images from mirror registries instead and retag them locally.

#!/bin/bash
# Pull the k8s images from mirror repositories on Docker Hub
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull kuberneter/coredns:1.2.2
# After pulling, retag the images to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag kuberneter/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
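The pull/tag pairs above are repetitive; a minimal loop sketch that does the same thing for the mirrorgooglecontainers images (coredns comes from a different repo and is handled separately above):

#!/bin/bash
# Pull each image from the mirror and retag it under k8s.gcr.io
MIRROR=mirrorgooglecontainers
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 \
           kube-scheduler:v1.12.2 kube-proxy:v1.12.2 \
           pause:3.1 etcd:3.2.24; do
  docker pull ${MIRROR}/${img}
  docker tag  ${MIRROR}/${img} k8s.gcr.io/${img}
done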

# A problem comes up later (installing the flannel add-on) that needs this image
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64



An alternative set of images (v1.12.3, plus the dashboard) is available from an Aliyun Hangzhou mirror; pull and retag them with two chained commands:

docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-proxy:v1.12.3 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-apiserver:v1.12.3 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-controller-manager:v1.12.3 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-scheduler:v1.12.3 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/etcd:3.2.24 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/coredns:1.2.2 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/flannel:v0.10.0-amd64 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/pause:3.1 ;\
docker pull registry.cn-hangzhou.aliyuncs.com/kuberimages/kubernetes-dashboard-amd64:v1.10.0


docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-proxy:v1.12.3 k8s.gcr.io/kube-proxy:v1.12.3 ;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-apiserver:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-controller-manager:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/kube-scheduler:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24 ;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2 ;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64   ;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/pause:3.1 k8s.gcr.io/pause:3.1  ;\
docker tag registry.cn-hangzhou.aliyuncs.com/kuberimages/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
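To double-check which images your kubeadm release actually expects (supported in recent kubeadm versions) and that the retagged copies are in place:

kubeadm config images list
docker images | grep k8s.gcr.io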


If you plan to install Helm, the Tiller image can be mirrored the same way:

# gcr.io/kubernetes-helm/tiller:v2.11.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0
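With the image retagged, Helm v2 can then deploy Tiller without reaching gcr.io; a sketch, assuming a tiller service account with the needed RBAC permissions has already been created:

helm init --service-account tiller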

Images for an alternative network plugin. The cluster initially used flannel; Calico is another option:

# quay.io/calico/node:v2.6.2
docker pull registry.cn-hangzhou.aliyuncs.com/liuq/calico-node:v2.6.2
docker tag  registry.cn-hangzhou.aliyuncs.com/liuq/calico-node:v2.6.2 quay.io/calico/node:v2.6.2


Heapster and its Grafana/InfluxDB backends, used later for cluster monitoring:

# k8s.gcr.io/heapster-amd64:v1.5.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.4
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.4 k8s.gcr.io/heapster-amd64:v1.5.4

# k8s.gcr.io/heapster-grafana-amd64:v5.0.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v5.0.4 k8s.gcr.io/heapster-grafana-amd64:v5.0.4

# k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2 k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
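One more preparation step before initializing: kubeadm's preflight checks fail when swap is enabled, so turn it off on every node. A minimal sketch (the sed pattern is a common approach; adjust for your /etc/fstab layout):

swapoff -a
# Comment out swap entries so the setting survives a reboot
sed -i '/ swap / s/^/#/' /etc/fstab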

Run kubeadm init on the k8s-master machine. --apiserver-advertise-address is the master's address, and --pod-network-cidr=10.244.0.0/16 matches flannel's default network:

root@k8s-master:~# kubeadm init --apiserver-advertise-address 10.0.63.47 --pod-network-cidr=10.244.0.0/16
I1121 21:55:58.472084    2033 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: read tcp 10.0.63.47:48026->23.236.58.218:443: read: connection reset by peer
I1121 21:55:58.473142    2033 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.0.63.47 127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.63.47]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.503912 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: s56myc.82qpolpdadevbt8r
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!
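The rest of the init output (truncated above) tells you how to configure kubectl, asks you to deploy a pod network, and prints the kubeadm join command with the bootstrap token for the worker nodes. The standard follow-up, assuming flannel as the network add-on and its usual manifest URL at the time:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy flannel, which uses the 10.244.0.0/16 CIDR passed to kubeadm init
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml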