This article details the installation of a Kubernetes cluster made of one master and three minion nodes, covering the memory requirements, NTP synchronisation, hostname mapping, package installation, firewall shutdown, etcd configuration and startup, and flannel network configuration.

In this lab, we will install 1 master + 3 minion nodes.

Please note : a minimum of 3 GB of memory is required for each minion node !

Enable NTP on master and all nodes :

[root@k-master ~]# yum -y install ntp
[root@k-master ~]# systemctl start ntpd
[root@k-master ~]# systemctl enable ntpd
[root@k-master ~]# hwclock --systohc
[root@k-node1 ~]# yum -y install ntp
[root@k-node1 ~]# systemctl start ntpd
[root@k-node1 ~]# systemctl enable ntpd
[root@k-node1 ~]# hwclock --systohc
[root@k-node2 ~]# yum -y install ntp
[root@k-node2 ~]# systemctl start ntpd
[root@k-node2 ~]# systemctl enable ntpd
[root@k-node2 ~]# hwclock --systohc
[root@k-node3 ~]# yum -y install ntp
[root@k-node3 ~]# systemctl start ntpd
[root@k-node3 ~]# systemctl enable ntpd
[root@k-node3 ~]# hwclock --systohc
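
Before going further, check that each host actually synchronises with its NTP peers. A minimal verification, to run on every host (output omitted, peer names will differ in your environment) :

[root@k-master ~]# ntpq -p                 # at least one peer should be flagged with '*'
[root@k-master ~]# timedatectl | grep NTP  # expect "NTP synchronized: yes" once the clock has settled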

Add entries in “/etc/hosts” or records in your DNS :

[root@k-master ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node1 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node2 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node3 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
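
To verify that name resolution is consistent, ping every host by name from each machine (a minimal sketch, to run on every host) :

[root@k-master ~]# for H in k-master k-node1 k-node2 k-node3; do ping -c1 -W1 $H > /dev/null && echo "$H reachable"; done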

Install the required RPMs :

  • On master :
[root@k-master ~]# yum -y install etcd kubernetes
...
...
...
Installed:
  etcd.x86_64 0:2.1.1-2.el7                       kubernetes.x86_64 0:1.0.3-0.2.gitb9a88a7.el7

Dependency Installed:
  audit-libs-python.x86_64 0:2.4.1-5.el7                   checkpolicy.x86_64 0:2.1.12-6.el7
  docker.x86_64 0:1.8.2-10.el7.centos                      docker-selinux.x86_64 0:1.8.2-10.el7.centos
  kubernetes-client.x86_64 0:1.0.3-0.2.gitb9a88a7.el7      kubernetes-master.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
  kubernetes-node.x86_64 0:1.0.3-0.2.gitb9a88a7.el7        libcgroup.x86_64 0:0.41-8.el7
  libsemanage-python.x86_64 0:2.1.10-18.el7                policycoreutils-python.x86_64 0:2.2.5-20.el7
  python-IPy.noarch 0:0.75-6.el7                           setools-libs.x86_64 0:3.3.7-46.el7
  socat.x86_64 0:1.7.2.2-5.el7

Complete!
  • On nodes :
[root@k-node1 ~]# yum -y install flannel kubernetes
[root@k-node2 ~]# yum -y install flannel kubernetes
[root@k-node3 ~]# yum -y install flannel kubernetes
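
Optionally, confirm that the expected packages landed on each node (versions will vary with your repositories) :

[root@k-node1 ~]# rpm -q flannel kubernetes docker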

Stop the firewall

For convenience, we will stop the firewall on all hosts during this lab :

[root@k-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node2 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node3 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
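
In a production environment you may prefer to keep firewalld running and only open the ports this lab relies on. A hedged alternative, assuming the defaults used here (etcd client traffic on 2379/tcp, the API server on 8080/tcp, the kubelet on 10250/tcp, flannel’s UDP backend on 8285/udp) :

[root@k-master ~]# firewall-cmd --permanent --add-port=2379/tcp --add-port=8080/tcp
[root@k-master ~]# firewall-cmd --reload
[root@k-node1 ~]# firewall-cmd --permanent --add-port=10250/tcp --add-port=8285/udp
[root@k-node1 ~]# firewall-cmd --reload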

On Kubernetes master

  • Configure “etcd” distributed key-value store :
[root@k-master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
  • Kubernetes API server configuration file :
[root@k-master ~]# egrep -v "^#|^$" /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
  • Start all Kubernetes services :
[root@k-master ~]# for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler
 > do
 > systemctl restart $SERVICE
 > systemctl enable $SERVICE
 > done
 Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service

We now have the following listening ports :

[root@k-master ~]# netstat -ntulp | egrep -v "ntpd|sshd"
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      2913/kube-scheduler
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      2887/kube-controlle
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      2828/etcd
tcp        0      0 127.0.0.1:7001          0.0.0.0:*               LISTEN      2828/etcd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2356/master
tcp6       0      0 :::2379                 :::*                    LISTEN      2828/etcd
tcp6       0      0 :::8080                 :::*                    LISTEN      2858/kube-apiserver
tcp6       0      0 ::1:25                  :::*                    LISTEN      2356/master
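
Beyond the listening ports, the health endpoints can be queried directly (output omitted) :

[root@k-master ~]# curl -s http://127.0.0.1:2379/health    # etcd should report {"health": "true"}
[root@k-master ~]# curl -s http://127.0.0.1:8080/healthz   # kube-apiserver should answer "ok"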
  • Create “etcd” key :
[root@k-master ~]# etcdctl mk /frederic.wou/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.31.0.0/16"}
[root@k-master ~]# etcdctl ls /frederic.wou --recursive
/frederic.wou/network
/frederic.wou/network/config
[root@k-master ~]# etcdctl get /frederic.wou/network/config
{"Network":"172.17.0.0/16"}

On each minion node

  • flannel configuration:
[root@k-node1 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
  • Kubernetes :
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
  • kubelet :
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node1"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node2"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node3"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
  • Start all services :
[root@k-node1 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node2 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node3 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
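
Note the errors above : the loop starts the services in an order (kube-proxy, kubelet, docker, flanneld) that does not match their dependency chain, since docker requires flanneld and the kubelet requires docker, so the first pass may leave some units down. Once everything is enabled, re-running the loop in dependency order brings the node up (a minimal sketch, to repeat on any node that reported an error) :

[root@k-node1 ~]# for SERVICE in flanneld docker kube-proxy kubelet
> do
> systemctl restart $SERVICE
> done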

Kubernetes is now ready

[root@k-master ~]# kubectl get nodes
NAME            LABELS                                 STATUS
192.168.1.131   kubernetes.io/hostname=192.168.1.131   Ready
192.168.1.132   kubernetes.io/hostname=192.168.1.132   Ready
192.168.1.133   kubernetes.io/hostname=192.168.1.133   Ready
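
As a quick smoke test, schedule a first workload from the master (assuming the nodes can pull images from the Docker Hub; with this Kubernetes release, “kubectl run” should create a replication controller) :

[root@k-master ~]# kubectl run nginx --image=nginx --replicas=3
[root@k-master ~]# kubectl get pods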

Troubleshooting

Unable to start Docker on minion nodes

[root@k-node1 ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details

Check the “ntp” service :

[root@k-node1 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+173.ip-37-59-12 36.224.68.195    2 u    -   64    7   32.539   -0.030   0.477
*moz75-1-78-194- 213.251.128.249  2 u    4   64    7   30.108   -0.988   0.967
-ntp.tuxfamily.n 138.96.64.10     2 u   67   64    7   25.934   -1.495   0.504
+x1.f2tec.de     10.2.0.1         2 u   62   64    7   32.307   -0.044   0.466

Is “flanneld” up & running ?

[root@k-node1 ~]# ip addr show dev flannel0
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
 link/none
 inet 172.17.85.0/16 scope global flannel0
 valid_lft forever preferred_lft forever

Is this node able to connect to the “etcd” master ?

[root@k-node1 ~]# curl -s -L http://192.168.1.130:2379/version
{"etcdserver":"2.1.1","etcdcluster":"2.1.0"}[root@k-node1 ~]

Is the “kube-proxy” service running ?

[root@k-node1 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-02-03 14:50:25 CET; 1min 0s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2072 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─2072 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.1.130:8080

Feb 03 14:50:25 k-node1 systemd[1]: Started Kubernetes Kube-Proxy Server.
Feb 03 14:50:25 k-node1 systemd[1]: Starting Kubernetes Kube-Proxy Server...

Try to start the Docker daemon manually :

[root@k-node1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=true --mtu=1472 "
[root@k-node1 ~]# /usr/bin/docker daemon -D --selinux-enabled --bip=172.17.85.1/24 --ip-masq=true --mtu=1472
...
...
...
INFO[0001] Docker daemon                                 commit=a01dc02/1.8.2 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-el7.centos
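
If the daemon starts fine by hand, the unit itself is healthy and the problem is only the start order with regard to flanneld. Stop the manual instance (Ctrl+C), then let systemd start Docker again with the options generated in “/run/flannel/docker” :

[root@k-node1 ~]# systemctl restart flanneld docker
[root@k-node1 ~]# systemctl status docker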

 

 
