k8s Cluster Setup
Virtual machine environment preparation
- Use Vagrant to initialize three centos/7 VMs (Vagrant must be installed first)
Vagrantfile:
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Box to use for the VM
      node.vm.box = "centos/7"
      # VM hostname
      node.vm.hostname = "k8s-node#{i}"
      # VM IP address
      node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"
      # Shared folder between host and VM
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
      # VirtualBox-specific settings
      node.vm.provider "virtualbox" do |v|
        # VM name
        v.name = "k8s-node#{i}"
        # VM memory (MB)
        v.memory = 2048
        # Number of CPUs
        v.cpus = 4
      end
    end
  end
end
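Optionally, the Vagrantfile syntax can be sanity-checked before booting anything (a minimal sketch using Vagrant's built-in validator):

# Validate the Vagrantfile in the current directory
vagrant validate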
Then run in the same directory:
# Boot all three VMs
vagrant up
# SSH into a VM (repeat the steps below on all three nodes)
vagrant ssh k8s-node1
# Switch to the root user
su root
# Password:
vagrant
# Allow remote login with a password
vi /etc/ssh/sshd_config
# Change "PasswordAuthentication no" to "PasswordAuthentication yes", then save
# Restart the sshd service
service sshd restart
# Exit and repeat on the other two nodes
exit
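To avoid editing sshd_config by hand three times, the same change can be scripted from the host; a sketch (assuming the three node names above, and that the sed pattern matches your box's sshd_config):

for node in k8s-node1 k8s-node2 k8s-node3; do
  # Flip PasswordAuthentication to yes (commented out or not) and restart sshd
  vagrant ssh $node -c "sudo sed -i 's/^#\?PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config && sudo systemctl restart sshd"
done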
- Set each VM's network adapter to the NAT Network type, and remember to regenerate the MAC address before saving
# Check the default NIC
[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100 # default route goes out eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 # gateway and IP
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101
The routing table shows that packets are sent and received through eth0.
Checking the IP bound to eth0 on k8s-node1, k8s-node2, and k8s-node3 shows they are all identical: 10.0.2.15. These are the addresses the Kubernetes cluster will use for communication; the IPs on eth1 are used for remote management.
[root@k8s-node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:8d:e4:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 488sec preferred_lft 488sec
    inet6 fe80::a00:27ff:fe8d:e4a1/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:bb:c0:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
* Cause: the VMs sit behind VirtualBox's default port-forwarding NAT, so they all share the same address and are distinguished only by port. These port-forwarding rules cause many unnecessary problems later, so the adapter must be switched to the NAT Network type.
* Fix: in VirtualBox, open Preferences -> Network -> add a new NAT Network, then save.
Switch each VM's adapter to that NAT Network and regenerate its MAC address (a command-line alternative is sketched below).
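For those who prefer the CLI over the GUI, the same NAT network can be created with VBoxManage; a sketch (the network name k8s-natnet is an assumption, and the CIDR mirrors the 10.0.2.0/24 range seen above):

# Create a NAT network equivalent to the GUI step (name is hypothetical)
VBoxManage natnetwork add --netname k8s-natnet --network "10.0.2.0/24" --enable --dhcp on
# Confirm it exists
VBoxManage list natnetworks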

- Make sure the three VMs can ping each other and can also ping the external network.
- Configure the Linux environment:
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux (Linux's default security policy):
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
Disable swap:
# temporary
swapoff -a
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
Add hostname-to-IP mappings:
# check the hostname
hostname
# if the hostname is wrong, fix it with: hostnamectl set-hostname <new-hostname>
vi /etc/hosts
10.0.2.4 k8s-node1
10.0.2.15 k8s-node2
10.0.2.5 k8s-node3
Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the rules:
sysctl --system
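A quick sanity check on each node to confirm these settings took effect (a sketch; expected values noted in the comments):

getenforce                                  # Permissive now, Disabled after a reboot
free -m                                     # the Swap line should read all zeros
sysctl net.bridge.bridge-nf-call-iptables   # should print "... = 1"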
Install docker, kubeadm, kubelet, and kubectl on all nodes
Install Docker
- Remove any previous Docker installation:
sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
- Install Docker CE:
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
# Point yum at the Docker repo
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
# Install docker and docker-cli
sudo yum -y install docker-ce docker-ce-cli containerd.io
- Configure a Docker registry mirror:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://8eorvk5t.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
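To confirm the mirror is active (a sketch; the grep pattern assumes English docker info output):

docker info | grep -A1 "Registry Mirrors"   # should list the aliyuncs mirror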
- Enable Docker at boot:
systemctl enable docker
- Add the Aliyun Kubernetes yum repo:
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
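Before installing, you can check that the repo actually serves the pinned version (a sketch):

yum list kubelet --showduplicates | grep 1.17.3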
- Install kubeadm, kubelet, and kubectl:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
- Enable at boot & start:
systemctl enable kubelet
systemctl start kubelet
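At this point kubelet will restart in a crash loop until kubeadm init (or join) runs; that is expected. A quick version check across the nodes (a sketch):

docker --version           # Docker CE
kubeadm version -o short   # v1.17.3
kubelet --version          # Kubernetes v1.17.3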
Deploy the k8s master
(1) Initialize the master node
- On the master node, create and run master_images.sh (a usage sketch follows the script):
#!/bin/bash
# Pull the v1.17.3 control-plane images from the Aliyun mirror
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  # Optionally retag to the names kubeadm expects from k8s.gcr.io:
  # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
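Once saved, make the script executable, run it, and verify the pulls (a sketch):

chmod +x master_images.sh
./master_images.sh
# All seven images should appear:
docker images | grep registry.cn-hangzhou.aliyuncs.com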
- Initialize kubeadm (note: 10.0.2.4 must match the default eth0 address shown by ip addr; --pod-network-cidr is the network pods use to reach each other):
kubeadm init \
  --apiserver-advertise-address=10.0.2.4 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
# Output on success:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.4:6443 --
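The join command above is truncated; use the full token printed by your own run. Following the output's instructions, the next steps are to configure kubectl and deploy a pod network. Since --pod-network-cidr=10.244.0.0/16 matches flannel's default, flannel is a natural choice; a sketch (the manifest URL is an assumption based on the flannel repo layout of that era, so verify it before use):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Assumed flannel manifest location; check the flannel project for the current URL
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes   # the master turns Ready once the flannel pods are up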
