Lab environment
Lab architecture
lab1: etcd master haproxy keepalived 11.11.11.111
lab2: etcd master haproxy keepalived 11.11.11.112
lab3: etcd master haproxy keepalived 11.11.11.113
lab4: node 11.11.11.114
lab5: node 11.11.11.115
lab6: node 11.11.11.116
vip (load balancer ip): 11.11.11.110
Vagrantfile used for this lab
# -*- mode: ruby -*-
# vi: set ft=ruby :
ENV["LC_ALL"] = "en_US.UTF-8"
Vagrant.configure("2") do |config|
  (1..6).each do |i|
    config.vm.define "lab#{i}" do |node|
      node.vm.box = "centos-7.4-docker-17"
      node.ssh.insert_key = false
      node.vm.hostname = "lab#{i}"
      node.vm.network "private_network", ip: "11.11.11.11#{i}"
      node.vm.provision "shell",
        inline: "echo hello from node #{i}"
      node.vm.provider "virtualbox" do |v|
        v.cpus = 2
        v.customize ["modifyvm", :id, "--name", "lab#{i}", "--memory", "2048"]
      end
    end
  end
end
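With the Vagrantfile in place, the six VMs can be brought up and checked in one go (a quick sketch, assuming the centos-7.4-docker-17 box is already available to Vagrant):
vagrant up          # create and boot lab1..lab6
vagrant status      # all six machines should show "running"
vagrant ssh lab1    # log in to the first master node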
Install and configure Docker
Kubernetes v1.11.0 recommends Docker v17.03;
Docker v1.11, v1.12 and v1.13 can also be used, but newer Docker releases may not work properly.
In testing, 17.09 did not work correctly: resource limits (memory/CPU) could not be applied.
Run the following on all nodes
Install Docker
Remove old versions and install the specified docker-ce version
yum remove -y docker-ce docker-ce-selinux container-selinux
yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos docker-ce-selinux-17.03.2.ce-1.el7.centos
启动docker
systemctl enable docker && systemctl restart docker
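A quick sanity check (the exact output will vary) confirms the installed Docker version and the cgroup driver, which the kubelet configuration below has to match:
docker version --format '{{.Server.Version}}'
docker info | grep -i cgroup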
Install kubeadm, kubelet and kubectl
Run the following on all nodes
Install using the Aliyun mirror
Configure the yum repository
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
yum install -y kubelet-1.11.0 kubeadm-1.11.0 kubectl-1.11.0
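To confirm that the expected versions were installed (a simple check, no cluster needed yet):
kubeadm version -o short
kubelet --version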
Configure system parameters
Temporarily disable SELinux
To disable it permanently, edit the setting in /etc/sysconfig/selinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
Temporarily disable swap
To disable it permanently, comment out the swap entries in /etc/fstab (a one-liner is sketched after the command below)
swapoff -a
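A minimal sketch of the permanent change, assuming the default fstab layout where the swap entry contains the word "swap":
sed -i '/ swap / s/^/#/' /etc/fstab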
Enable forwarding
Starting with version 1.13, Docker changed its default firewall rules:
it sets the FORWARD chain in the iptables filter table to DROP,
which breaks cross-node Pod communication in a Kubernetes cluster.
iptables -P FORWARD ACCEPT
Configure forwarding-related kernel parameters, otherwise errors may occur
cat >/etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
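The applied values can be verified directly; if the net.bridge.* keys are reported as unknown, the br_netfilter module may need to be loaded first (modprobe br_netfilter):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness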
Load the IPVS-related kernel modules
They need to be reloaded after a reboot (a persistent variant is sketched after the commands below)
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs
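To avoid reloading the modules by hand after every reboot, one option (a sketch using the systemd modules-load mechanism) is to list them in /etc/modules-load.d/:
cat >/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF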
Configure hosts resolution
Run the following on all nodes
cat >>/etc/hosts<<EOF
11.11.11.111 lab1
11.11.11.112 lab2
11.11.11.113 lab3
11.11.11.114 lab4
11.11.11.115 lab5
11.11.11.116 lab6
EOF
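A quick loop (just a convenience sketch) verifies that every hostname resolves and responds:
for h in lab1 lab2 lab3 lab4 lab5 lab6; do ping -c 1 -W 1 $h >/dev/null && echo "$h ok"; done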
Configure the HAProxy proxy and keepalived
Run the following on lab1, lab2 and lab3
Pull the haproxy image
docker pull haproxy:1.7.8-alpine
mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode http
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth will:will
    stats hide-version
    stats admin if TRUE

frontend k8s-https
    bind 0.0.0.0:8443
    mode tcp
    #maxconn 50000
    default_backend k8s-https

backend k8s-https
    mode tcp
    balance roundrobin
    server lab1 11.11.11.111:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server lab2 11.11.11.112:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server lab3 11.11.11.113:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF
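Before starting the proxy, the configuration can be syntax-checked with the same image (the -c flag only validates the file and exits):
docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  --entrypoint haproxy haproxy:1.7.8-alpine -c -f /usr/local/etc/haproxy/haproxy.cfg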
Start haproxy
docker run -d --name my-haproxy \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  -p 8443:8443 \
  -p 1080:1080 \
  --restart always \
  haproxy:1.7.8-alpine
Check the logs
docker logs my-haproxy
Check the status page in a browser
http://11.11.11.111:1080/haproxy-status
http://11.11.11.112:1080/haproxy-status
http://11.11.11.113:1080/haproxy-status
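The same page can also be fetched from the command line, using the credentials defined in the listen admin_stats section (will:will):
curl -u will:will http://11.11.11.111:1080/haproxy-status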
Pull the keepalived image
docker pull osixia/keepalived:1.4.4
Start
Load the related kernel modules
lsmod | grep ip_vs
modprobe ip_vs
Start keepalived
eth1 is the interface on the 11.11.11.0/24 network used in this lab
docker run --net=host --cap-add=NET_ADMIN \
  -e KEEPALIVED_INTERFACE=eth1 \
  -e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['11.11.11.110']" \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['11.11.11.111','11.11.11.112','11.11.11.113']" \
  -e KEEPALIVED_PASSWORD=hello \
  --name k8s-keepalived \
  --restart always \
  -d osixia/keepalived:1.4.4
Check the logs
Two instances will become BACKUP and one will become MASTER
docker logs k8s-keepalived
At this point 11.11.11.110 is assigned to one of the three machines
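To see which node currently holds the VIP, check the interface on each master (the address only appears on the current MASTER):
ip addr show dev eth1 | grep 11.11.11.110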
Ping test
ping -c4 11.11.11.110
If it fails, clean up and retry the experiment
docker rm -f k8s-keepalived
ip a del 11.11.11.110/32 dev eth1
Configure and start the kubelet
Run the following on all nodes
Configure the kubelet to use the pause image from a China-local mirror
Configure the kubelet's cgroup driver
Get Docker's cgroup driver
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
Start
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
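At this point the kubelet keeps restarting because no cluster configuration exists yet; that is expected until kubeadm init (or join) has run. Its state can be watched with:
systemctl status kubelet
journalctl -u kubelet -f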
Configure the masters
Configure the first master node
Run the following on lab1
With version 1.11 on CentOS, ipvs mode has known problems
参考 https://github.com/kubernetes/kubernetes/issues/65461
Generate the configuration file
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "lab1"
- "lab2"
- "lab3"
- "11.11.11.111"
- "11.11.11.112"
- "11.11.11.113"
- "11.11.11.110"
- "127.0.0.1"
api:
  advertiseAddress: $CP0_IP
  controlPlaneEndpoint: 11.11.11.110:8443
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP0_IP:2379"
      advertise-client-urls: "https://$CP0_IP:2379"