1.1. Download the required Docker binaries (they can also be obtained from the author's Baidu Netdisk; the link is at the bottom)
https://github.com/moby/moby/releases
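The downloaded static binary package then needs to be uploaded to the server and extracted. This step is not shown in the original; the /usr/docker path below is inferred from the copy command in step 1.3, and the archive name depends on the version you downloaded:
mkdir -p /usr/docker
tar -zxvf docker-*.tgz -C /usr/docker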
1.3. Copy all docker* files from the extracted directory to /usr/bin
cp /usr/docker/docker/docker* /usr/bin
Create the systemd startup file for Docker:
vi /usr/lib/systemd/system/docker.service
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
Environment="PATH=/root/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=/usr/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS --insecure-registry 172.16.3.30:5000
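Only part of the unit file is shown above. A complete docker.service also needs the systemd section headers and an [Install] section; a minimal sketch along these lines (the After= and Restart= values are typical choices, not taken from the original):
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
After=network.target
[Service]
Environment="PATH=/root/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/etc/sysconfig/docker
ExecStart=/usr/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS --insecure-registry 172.16.3.30:5000
Restart=on-failure
[Install]
WantedBy=multi-user.target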
systemctl stop firewalld
systemctl disable firewalld
systemctl enable docker
systemctl start docker
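To confirm the daemon came up correctly, the status and version can be checked (optional):
systemctl status docker
docker version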
2. Installing and configuring Kubernetes
2.1. Download the required version of the K8S binaries (downloading from GitHub may require a proxy; if you do not have one, get the files from the author's Baidu Netdisk; the link is at the bottom)
https://github.com/kubernetes/kubernetes/releases
The kubernetes-server-linux-amd64.tar.gz file under Server Binaries already contains all the components K8S needs; there is no need to download the Client or other packages separately.
master: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker
slave (node): kubelet, kube-proxy, flanneld, docker
etcd is the main datastore of the k8s cluster, so it must be installed and started before the other k8s services.
https://github.com/coreos/etcd/releases/
tar -zxvf etcd-v3.3.11-linux-amd64.tar.gz
2.3.1.3. Copy the etcd and etcdctl files from the extracted directory to /usr/bin
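The copy command itself is not shown in the original; assuming the archive was extracted in the current directory, it would be something like:
cp etcd-v3.3.11-linux-amd64/{etcd,etcdctl} /usr/bin/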
vi /usr/lib/systemd/system/etcd.service
WorkingDirectory=/var/lib/etcd/
(WorkingDirectory is the directory where etcd stores its data; it must be created before the etcd service is started, as shown below)
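For example:
mkdir -p /var/lib/etcd
The original shows only the WorkingDirectory line of the unit file; a minimal etcd.service could look roughly like this (the EnvironmentFile and ExecStart lines are assumptions based on the configuration file created in the next step; etcd reads the ETCD_* variables from its environment):
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target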
2.3.1.5. Create the configuration file /etc/etcd/etcd.conf (172.16.3.30 is the master node's IP; replace it with your own)
vi /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://172.16.3.30:2379,http://172.16.3.30:4001,http://127.0.0.1:2379,http://127.0.0.1:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.3.30:2379,http://172.16.3.30:4001,http://127.0.0.1:2379,http://127.0.0.1:4001"
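The original does not show the start commands for etcd; they follow the same pattern as the other services, and the cluster can then be checked with etcdctl:
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
etcdctl cluster-health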
2.3.2. The kube-apiserver service
2.3.2.1. Upload the prepared k8s binary package to /usr/k8s and extract it there
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cp -r /usr/k8s/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/bin/
2.3.2.3. Create the kube-apiserver startup file
vi /usr/lib/systemd/system/kube-apiserver.service
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
EnvironmentFile=-/etc/kubernetes/apiserver
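Only a few lines of the unit file are shown. The ExecStart line typically passes in the variables defined in /etc/kubernetes/apiserver (created in the next step); a sketch with the section headers and an assumed flag order:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ADDRESS $KUBELET_PORT $KUBE_ETCD_SERVERS $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL
Restart=on-failure
[Install]
WantedBy=multi-user.target
The same [Unit]/[Service]/[Install] skeleton applies to the kube-controller-manager, kube-scheduler, kubelet, and kube-proxy unit files below.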
2.3.2.4. Create the configuration file apiserver (172.16.3.30 is the master node's IP)
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://172.16.3.30:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
2.3.3. The kube-controller-manager service
2.3.3.1. Create the kube-controller-manager startup file
vi /usr/lib/systemd/system/kube-controller-manager.service
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
2.3.3.2. Create the configuration file controller-manager (172.16.3.30 is the master node's IP)
vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://172.16.3.30:8080 --logtostderr=true --log-dir=/var/lib/kubernetes --v=0"
2.3.4. The kube-scheduler service
2.3.4.1. Create the kube-scheduler startup file
vi /usr/lib/systemd/system/kube-scheduler.service
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
Requires=kube-apiserver.service
EnvironmentFile=-/etc/kubernetes/scheduler
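The ExecStart line is not shown here; by analogy with the kube-controller-manager unit above it would presumably be:
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS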
2.3.4.2. Create the configuration file scheduler (172.16.3.30 is the master node's IP)
vi /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--master=http://172.16.3.30:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=0"
2.3.5. Enable and start the master services
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
2.3.6. Verify the master node's components by checking the status of each service (a status of active (running) means it is working normally)
systemctl status etcd
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler
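Since kubectl was copied to /usr/bin together with the other master binaries, the component health can also be checked from the master itself (optional):
kubectl get componentstatuses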
2.4. Deploying the slave (worker) nodes
2.4.0. Make the iptables settings take effect automatically after a reboot (by adding the following commands to ~/.bashrc)
vi ~/.bashrc
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
source ~/.bashrc
2.4.1.1. Upload the prepared k8s binary package to /usr/k8s and extract it there
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cp -r /usr/k8s/kubernetes/server/bin/{kube-proxy,kubelet} /usr/bin/
vi /usr/lib/systemd/system/kubelet.service
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
WorkingDirectory=/var/lib/kubelet/
EnvironmentFile=-/etc/kubernetes/kubelet
(WorkingDirectory is the directory where kubelet stores its data; it must be created before the kubelet service is started, as shown below)
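For example:
mkdir -p /var/lib/kubelet
As with the other unit files, only a fragment is shown above; the missing pieces are the [Unit]/[Service]/[Install] headers and an ExecStart line that passes in the variables from the kubelet configuration file, roughly (an assumed flag order):
ExecStart=/usr/bin/kubelet $KUBELET_ADDRESS $KUBELET_HOSTNAME $KUBELET_API_SERVER $KUBELET_POD_INFRA_CONTAINER $KUBE_LOGTOSTDERR $KUBE_ALLOW_PRIV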
2.4.1.4. Create the configuration file kubelet (172.16.3.30 is the master node's IP, 172.16.3.37 is this node's IP, and 172.16.3.30:5000 is the private registry)
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.16.3.37"
KUBELET_API_SERVER="--api-servers=http://172.16.3.30:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=172.16.3.30:5000/pod-infrastructure:latest"
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
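The comment lines above appear to come from the standard /etc/kubernetes/config file; the variables that normally follow them are not shown in the original. Under that assumption the missing entries would be roughly:
KUBE_LOG_LEVEL="--v=0"
KUBE_MASTER="--master=http://172.16.3.30:8080"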
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service
2.4.2. The kube-proxy service
2.4.2.1. Create the kube-proxy startup file
vi /usr/lib/systemd/system/kube-proxy.service
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
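Here too the ExecStart line is omitted; presumably something like:
ExecStart=/usr/bin/kube-proxy $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_PROXY_ARGS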
2.4.2.2. Create the configuration file kube-proxy (172.16.3.30 is the master node's IP)
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
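As in the kubelet step, the variables that normally follow these comments are not shown; presumably KUBE_MASTER="--master=http://172.16.3.30:8080" in /etc/kubernetes/config and an empty KUBE_PROXY_ARGS="" in /etc/kubernetes/proxy. The original also omits starting the service; it follows the same pattern as kubelet:
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
systemctl status kube-proxy.service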
2.4.4. Deploy the Flannel network
http://rpmfind.net/linux/rpm2html/search.php?query=flannel
2.4.4.3. Upload the flannel rpm package to the k8s directory and install it
rpm -ivh flannel-0.7.1-4.el7.x86_64.rpm
2.4.4.4. Configure the flannel network (172.16.3.30 is the master node's IP; ens33 and 172.16.3.37 are this node's network interface and IP)
vi /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://172.16.3.30:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
FLANNEL_ETCD="http://172.16.3.30:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
FLANNEL_OPTIONS="-iface=ens33 -public-ip=172.16.3.37 -ip-masq=true"
2.4.4.5. Configure the flannel network key in etcd (run this on the master node)
etcdctl mk /atomic.io/network/config '{"Network":"172.19.0.0/16", "SubnetLen":24, "Backend":{"Type":"vxlan"}}'
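The key can then be verified on the master with:
etcdctl get /atomic.io/network/config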
systemctl enable flanneld.service
systemctl start flanneld.service
systemctl status flanneld.service
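After flanneld is running, Docker usually has to be restarted on the node so that containers pick up addresses from the flannel subnet, and the node registration can then be checked from the master (the -s flag points kubectl at the insecure API port used above):
systemctl restart docker
kubectl -s http://172.16.3.30:8080 get nodes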
File link (Baidu Netdisk): https://pan.baidu.com/s/1wZtoEgpQd9kWShbxbOzjlg
Reference blog: https://blog.youkuaiyun.com/ljx1528/article/details/81545187