SaltStack Advanced Topics
masterless
Use cases
- The network between master and minion is down or unstable (high latency)
- You want to run states directly on the minion
- Traditionally, SaltStack manages minions by pushing states from the master. But what if the network is unstable, you want to apply states locally on a minion, or you only have a single host? That is what masterless mode is for.
- With masterless mode you can use SaltStack even on a single host, with no multi-host architecture required.
Configuring masterless
Edit the minion configuration file
- Comment out the master line
- Uncomment file_client and set its value to local
- Set file_roots
- Set pillar_roots
[root@localhost ~]# vim /etc/salt/minion
......
#master: salt            # comment out this line
file_client: local       # uncomment this line and set the value to local
file_roots:              # file_roots paths and environments; multiple environments are allowed
  base:
    - /srv/salt/base
  prod:
    - /srv/salt/prod
pillar_roots:            # pillar_roots paths and environments; multiple environments are allowed
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
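The file_roots and pillar_roots layout above can be sanity-checked by creating the directory skeleton by hand. A minimal sketch (using /tmp/srv here purely so it is safe to run anywhere; on a real minion the paths would be /srv/salt and /srv/pillar as configured above, and the motd state is a hypothetical example):

```shell
# Create the base and prod state trees that file_roots/pillar_roots point at.
# /tmp/srv is used only for illustration; substitute /srv on a real host.
mkdir -p /tmp/srv/salt/base /tmp/srv/salt/prod
mkdir -p /tmp/srv/pillar/base /tmp/srv/pillar/prod

# A minimal top file for the base environment.
cat > /tmp/srv/salt/base/top.sls <<'EOF'
base:
  '*':
    - motd
EOF

# The state the top file refers to: manage /etc/motd.
cat > /tmp/srv/salt/base/motd.sls <<'EOF'
/etc/motd:
  file.managed:
    - contents: 'managed by salt masterless'
EOF

ls -R /tmp/srv/salt
```

With file_roots pointing at this tree, the minion can apply it entirely locally.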
Stop the salt-minion service
No Salt services need to be running when using masterless mode.
[root@localhost ~]# systemctl status salt-minion
● salt-minion.service - The Salt Minion
Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:salt-minion(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
salt-call
In masterless mode, modules and states are executed with the salt-call command instead of salt or salt-ssh.
Note: salt-call must be run with the --local option.
[root@localhost ~]# salt-call --local cmd.run 'date'
local:
Mon Nov 29 18:33:09 CST 2021
[root@localhost ~]# salt-call --local cmd.run 'uptime'
local:
18:34:17 up 42 min, 4 users, load average: 0.13, 0.11, 0.09
[root@localhost ~]# salt-call --local cmd.run 'ls /root'
local:
anaconda-ks.cfg
grafana-enterprise-8.2.5-1.x86_64.rpm
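salt-call can apply full states as well as one-off modules. A minimal sketch of a state file, assuming a hypothetical vim.sls placed under the base environment /srv/salt/base configured earlier:

```yaml
# /srv/salt/base/vim.sls (hypothetical example)
vim-enhanced:
  pkg.installed
```

It would then be applied locally with salt-call --local state.sls vim; a full highstate (everything listed in top.sls) runs with salt-call --local state.apply.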
salt-master high availability
salt-master HA configuration
If Salt manages all of the company's machines, the master must not become a single point of failure, or everything grinds to a halt, so the master has to be highly available. The HA configuration is very simple: in the minion configuration file, list the masters as a YAML list.
[root@minion ~]# vim /etc/salt/minion
....
master:
- 192.168.129.134
- 192.168.129.135
....
salt-master HA: data synchronization
With any HA setup, keeping data in sync is a perennial concern. The two masters must work from identical data, including:
- the /etc/salt/master configuration file
- all keys under /etc/salt/pki
- everything in the salt and pillar directories under /srv/
Options for keeping this data in sync include:
- an NFS mount
- rsync
- version control with GitLab
On the safety side: to guarantee synchronization and guard against loss, the state files can additionally be kept under version control in GitLab.
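Of the options above, rsync is the easiest to sketch. A hypothetical crontab fragment on the primary master (the backup address is taken from this setup; the 5-minute schedule and key-based ssh between the masters are assumptions):

```
# /etc/crontab on the primary master -- illustrative only
*/5 * * * * root rsync -az --delete /srv/salt/   192.168.129.135:/srv/salt/
*/5 * * * * root rsync -az --delete /srv/pillar/ 192.168.129.135:/srv/pillar/
*/5 * * * * root rsync -az /etc/salt/pki/        192.168.129.135:/etc/salt/pki/
```

Note that --delete is deliberately omitted for the pki directory so that keys accepted on the backup are not wiped out by a sync.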
Environment
Host IP | Role | Installed packages |
---|---|---|
192.168.129.134 | master | salt-master |
192.168.129.135 | backup-master | salt-master |
192.168.129.136 | minion | salt-minion |
Configure the required yum repositories before installing.
Three machines are used: master, backup-master, and minion. backup-master acts as the standby master; minion is the managed node.
- minion side
Install salt-minion on the minion
[root@minion ~]# ls /etc/yum.repos.d/
centos-8.repo epel-8.repo redhat.repo salt-8.repo
[root@minion ~]# yum -y install salt-minion
Edit the minion configuration and point master at the master's IP
[root@minion ~]# sed -i '/^#master:/a master: 192.168.129.134' /etc/salt/minion # the master's IP
Start and enable the service
[root@minion ~]# systemctl enable --now salt-minion
Created symlink /etc/systemd/system/multi-user.target.wants/salt-minion.service → /usr/lib/systemd/system/salt-minion.service.
[root@minion ~]# tree /etc/salt/pki
/etc/salt/pki
├── master
└── minion
├── minion.pem
└── minion.pub
Accept the minion's key on the master (mind the firewall)
[root@master ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
minion
Rejected Keys:
[root@master ~]# salt-key -ya minion
The following keys are going to be accepted:
Unaccepted Keys:
minion
Key for minion minion accepted.
[root@master ~]# salt-key -L
Accepted Keys:
minion
Denied Keys:
Unaccepted Keys:
Rejected Keys:
Test from the master whether the minion responds
[root@master ~]# salt 'minion' test.ping
minion:
True
- backup-master side
Install salt-master on backup-master
[root@backup-master ~]# ls /etc/yum.repos.d/
centos-8.repo epel-8.repo redhat.repo salt-8.repo
[root@backup-master ~]# yum -y install salt-master
Copy the master's public and private keys over to backup-master
[root@master ~]# tree /etc/salt/pki/master/
/etc/salt/pki/master/
├── master.pem
├── master.pub
├── minions
│ └── minion
├── minions_autosign
├── minions_denied
├── minions_pre
├── minions_rejected
└── ssh
├── salt-ssh.rsa
└── salt-ssh.rsa.pub
[root@master ~]# scp /etc/salt/pki/master/master.p* 192.168.129.135:/etc/salt/pki/master/
root@192.168.129.135's password:
master.pem 100% 1679 2.3MB/s 00:00
master.pub 100% 451 281.6KB/s 00:00
//start salt-master
[root@backup-master ~]# systemctl enable --now salt-master
Created symlink /etc/systemd/system/multi-user.target.wants/salt-master.service → /usr/lib/systemd/system/salt-master.service.
[root@backup-master ~]# tree /etc/salt/pki/master/
/etc/salt/pki/master/
├── master.pem
├── master.pub
├── minions
├── minions_autosign
├── minions_denied
├── minions_pre
└── minions_rejected
5 directories, 2 files
Point the minion at backup-master to test it
[root@minion ~]# vim /etc/salt/minion
......
# master: salt
master: 192.168.129.135 # backup-master's IP
# Set http proxy information for the minion when doing requests
......
//restart
[root@minion ~]# systemctl restart salt-minion
Accept the minion's key on backup-master (mind the firewall)
[root@backup-master ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
minion
Rejected Keys:
[root@backup-master ~]# salt-key -ya minion
The following keys are going to be accepted:
Unaccepted Keys:
minion
Key for minion minion accepted.
[root@backup-master ~]# salt-key -L
Accepted Keys:
minion
Denied Keys:
Unaccepted Keys:
Rejected Keys:
Test from backup-master whether the minion responds
[root@backup-master ~]# salt 'minion' test.ping
minion:
True
Synchronize configuration and data
[root@master ~]# scp /etc/salt/master 192.168.129.135:/etc/salt/master
[root@master ~]# scp -r /etc/salt/pki 192.168.129.135:/etc/salt/
[root@master ~]# scp -r /srv/salt 192.168.129.135:/srv/
Once both masters can reach the minion, configure failover and restart the service
[root@minion ~]# vim /etc/salt/minion
......
# master: salt
master:
- 192.168.129.134 # primary master's IP
- 192.168.129.135 # backup master's IP
# Set http proxy information for the minion when doing requests
......
# beacons) without a master connection
master_type: failover #uncomment and change 'str' to failover
# Poll interval in seconds for checking if the master is still there. Only
......
# of TCP connections, such as load balancers.)
master_alive_interval: 3 #uncomment and change 30 to 3 (poll every 3 s whether the current master is still alive, so failover happens within seconds)
# If the minion is in multi-master mode and the master_type configuration option
......
//restart
[root@minion ~]# systemctl restart salt-minion
At this point both salt-masters are running
[root@master ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:4505 0.0.0.0:*
LISTEN 0 128 0.0.0.0:4506 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@backup-master ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 0.0.0.0:4505 0.0.0.0:*
LISTEN 0 128 0.0.0.0:4506 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Test: while the primary master can reach the minion, backup-master cannot, because a failover minion connects to only one master at a time
[root@master ~]# salt '*' test.ping
minion:
True
[root@backup-master ~]# salt '*' test.ping
minion:
Minion did not return. [No response]
The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
salt-run jobs.lookup_jid 20211129140956785275
While both master and backup-master are running, the minion stays connected to the primary master only
[root@minion ~]# systemctl status salt-minion
● salt-minion.service - The Salt Minion
Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-11-29 22:07:07 CST; 10min ago
Docs: man:salt-minion(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 134052 (salt-minion)
Tasks: 6 (limit: 11300)
Memory: 91.3M
CGroup: /system.slice/salt-minion.service
├─134052 /usr/bin/python3.6 /usr/bin/salt-minion
├─134059 /usr/bin/python3.6 /usr/bin/salt-minion
└─134061 /usr/bin/python3.6 /usr/bin/salt-minion
11月 29 22:07:07 minion systemd[1]: Stopped The Salt Minion.
11月 29 22:07:07 minion systemd[1]: Starting The Salt Minion...
11月 29 22:07:07 minion systemd[1]: Started The Salt Minion.
11月 29 22:07:18 minion salt-minion[134052]: [CRITICAL] 'master_type' set to 'failover' but 'retry_dns' >
Stop the salt-master service on master; backup-master can then reach the minion
[root@master ~]# systemctl stop salt-master
[root@master ~]# ss -anlt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Run a test.ping from backup-master
[root@backup-master ~]# salt '*' test.ping
minion:
True
The salt-minion status on the minion now shows the master address change:
[root@minion ~]# systemctl status salt-minion
● salt-minion.service - The Salt Minion
Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-11-29 22:07:07 CST; 15min ago
Docs: man:salt-minion(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 134052 (salt-minion)
Tasks: 6 (limit: 11300)
Memory: 92.4M
CGroup: /system.slice/salt-minion.service
├─134052 /usr/bin/python3.6 /usr/bin/salt-minion
├─134059 /usr/bin/python3.6 /usr/bin/salt-minion
└─134061 /usr/bin/python3.6 /usr/bin/salt-minion
11月 29 22:07:07 minion systemd[1]: Stopped The Salt Minion.
11月 29 22:07:07 minion systemd[1]: Starting The Salt Minion...
11月 29 22:07:07 minion systemd[1]: Started The Salt Minion.
11月 29 22:07:18 minion salt-minion[134052]: [CRITICAL] 'master_type' set to 'failover' but 'retry_dns' >
11月 29 22:20:36 minion salt-minion[134052]: [WARNING ] Master ip address changed from 192.168.129.134 t>
11月 29 22:20:36 minion salt-minion[134052]: [WARNING ] Master ip address changed from 192.168.129.134 t>
salt-syndic
salt-syndic distributed architecture
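The original architecture diagram is not reproduced here; the topology this section builds can be sketched as:

```
            +---------------------------+
            | master (top-level master) |
            | 192.168.129.134           |
            +-------------+-------------+
                          |
            +-------------v-------------+
            | syndic                    |
            | salt-master + salt-syndic |
            | 192.168.129.135           |
            +------+-------------+------+
                   |             |
          +--------v---+  +-----v------+
          | p1         |  | p2         |
          | .129.136   |  | .129.137   |
          +------------+  +------------+
```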
Features of salt-syndic
- syndic makes more complex Salt architectures possible
- it reduces the load on the top-level master
- the salt and pillar directories under /srv on the syndic must match those on the top-level master, so the data has to be synchronized (same options as for salt-master HA)
- the top-level master does not know how many syndics it has; it only knows how many minions there are, not which syndic manages which minion
Deploying salt-syndic
Environment
Host IP | Role | Hostname | Installed packages |
---|---|---|---|
192.168.129.134 | Master | master | salt-master |
192.168.129.135 | Syndic | syndic | salt-master salt-syndic |
192.168.129.136 | Minion | p1 | salt-minion |
192.168.129.137 | Minion2 | p2 | salt-minion |
Configure the required yum repositories in advance.
Install the packages
//install salt-master on master
[root@master ~]# yum -y install salt-master
[root@master ~]# rpm -qa | grep salt
salt-master-3004-1.el8.noarch
salt-3004-1.el8.noarch
//install salt-master and salt-syndic on syndic
[root@syndic ~]# yum -y install salt-master salt-syndic
[root@sybdic ~]# rpm -qa | grep salt
salt-3004-1.el8.noarch
salt-syndic-3004-1.el8.noarch
salt-master-3004-1.el8.noarch
Install salt-minion on p1
[root@p1 ~]# yum -y install salt-minion
[root@p1 yum.repos.d]# rpm -qa | grep salt
salt-3004-1.el8.noarch
salt-minion-3004-1.el8.noarch
Install salt-minion on p2
[root@p2 ~]# yum -y install salt-minion
[root@p2 yum.repos.d]# rpm -qa | grep salt
salt-minion-3004-1.el8.noarch
salt-3004-1.el8.noarch
Configure the master
In the master configuration file, set order_masters to True
[root@master ~]# vim /etc/salt/master
.....
# masters' syndic interfaces.
order_masters: True
[root@master ~]# systemctl enable --now salt-master
[root@master ~]# systemctl restart salt-master
Configure the syndic
In the syndic's master configuration file, set syndic_master to the master's IP
[root@syndic ~]# vim /etc/salt/master
..... (lines omitted)
syndic_master: 192.168.129.134
[root@sybdic ~]# systemctl enable --now salt-master
Created symlink /etc/systemd/system/multi-user.target.wants/salt-master.service → /usr/lib/systemd/system/salt-master.service.
[root@sybdic ~]# systemctl enable --now salt-syndic
Created symlink /etc/systemd/system/multi-user.target.wants/salt-syndic.service → /usr/lib/systemd/system/salt-syndic.service.
[root@syndic ~]# systemctl restart salt-master
[root@syndic ~]# systemctl restart salt-syndic
//check the status
[root@sybdic ~]# systemctl status salt-master
● salt-master.service - The Salt Master Server
Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-11-29 23:49:32 CST; 1min 7s ago
Docs: man:salt-master(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 74084 (salt-master)
Tasks: 32 (limit: 23789)
Memory: 307.4M
CGroup: /system.slice/salt-master.service
├─74084 /usr/bin/python3.6 /usr/bin/salt-master
├─74111 /usr/bin/python3.6 /usr/bin/salt-master
├─74113 /usr/bin/python3.6 /usr/bin/salt-master
├─74116 /usr/bin/python3.6 /usr/bin/salt-master
├─74117 /usr/bin/python3.6 /usr/bin/salt-master
├─74118 /usr/bin/python3.6 /usr/bin/salt-master
├─74119 /usr/bin/python3.6 /usr/bin/salt-master
├─74120 /usr/bin/python3.6 /usr/bin/salt-master
├─74121 /usr/bin/python3.6 /usr/bin/salt-master
├─74128 /usr/bin/python3.6 /usr/bin/salt-master
[root@sybdic ~]# systemctl status salt-syndic
● salt-syndic.service - The Salt Master Server
Loaded: loaded (/usr/lib/systemd/system/salt-syndic.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-11-29 23:49:43 CST; 1min 3s ago
Docs: man:salt-syndic(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltproject.io/en/latest/contents.html
Main PID: 76018 (salt-syndic)
Tasks: 6 (limit: 23789)
Memory: 78.6M
CGroup: /system.slice/salt-syndic.service
├─76018 /usr/bin/python3.6 /usr/bin/salt-syndic
└─76082 /usr/bin/python3.6 /usr/bin/salt-syndic
11月 29 23:49:43 sybdic systemd[1]: Starting The Salt Master Server...
11月 29 23:49:43 sybdic systemd[1]: Started The Salt Master Server.
11月 29 23:49:52 sybdic salt-syndic[76018]: [ERROR ] The Salt Master has cached the public key for>
11月 29 23:50:02 sybdic salt-syndic[76018]: [ERROR ] The Salt Master has cached the public key for>
11月 29 23:50:12 sybdic salt-syndic[76018]: [ERROR ] The Salt Master has cached the public key for>
11月 29 23:50:22 sybdic salt-syndic[76018]: [ERROR ] The Salt Master has cached the public key for>
11月 29 23:50:32 sybdic salt-syndic[76018]: [ERROR ] The Salt Master has cached the public key for>
[root@sybdic ~]#
Configure the minions
Point master in the minion configuration at the syndic host
[root@p1 ~]# vim /etc/salt/minion
......
# master: salt
master: 192.168.129.135
# Set http proxy information for the minion when doing requests
......
[root@p1 ~]# systemctl enable --now salt-minion
Created symlink /etc/systemd/system/multi-user.target.wants/salt-minion.service → /usr/lib/systemd/system/salt-minion.service.
[root@p2 ~]# vim /etc/salt/minion
......
# master: salt
master: 192.168.129.135
# Set http proxy information for the minion when doing requests
......
[root@p2 ~]# systemctl enable --now salt-minion
Created symlink /etc/systemd/system/multi-user.target.wants/salt-minion.service → /usr/lib/systemd/system/salt-minion.service.
[root@p1 ~]# tree /etc/salt/pki
/etc/salt/pki
├── master
└── minion
├── minion.pem
└── minion.pub
[root@p2 ~]# tree /etc/salt/pki
/etc/salt/pki
├── master
└── minion
├── minion.pem
└── minion.pub
Accept the minion keys on the syndic
The order matters: accept keys on the syndic first, then on the master. (The "cached the public key" errors that salt-syndic logged above stop once the master accepts the syndic's key.)
[root@sybdic ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
p1
p2
Rejected Keys:
[root@sybdic ~]# salt-key -yA
The following keys are going to be accepted:
Unaccepted Keys:
p1
p2
Key for minion p1 accepted.
Key for minion p2 accepted.
Accept the syndic's key on the master
[root@master ~]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
sybdic
Rejected Keys:
[root@master ~]# salt-key -yA
The following keys are going to be accepted:
Unaccepted Keys:
sybdic
Key for minion sybdic accepted.
[root@master ~]# salt-key -L
Accepted Keys:
sybdic
Denied Keys:
Unaccepted Keys:
Rejected Keys:
Run a module or state from the master to check how many minions respond
[root@master ~]# salt '*' test.ping
p1:
True
p2:
True
[root@master ~]# salt '*' cmd.run 'date'
p1:
Tue Nov 30 00:34:57 CST 2021
p2:
Tue Nov 30 00:35:11 CST 2021
[root@master ~]# salt 'p1' state.sls init.yum.main
p1:
Data failed to compile:
----------
No matching sls found for 'init.yum.main' in env 'base'
ERROR: Minions returned with non-zero exit code
The salt and pillar directories under /srv on the syndic must match those on the top-level master, so copy them over and configure the same roots:
[root@master ~]# scp -r /srv/* 192.168.129.135:/srv/
[root@syndic ~]# vim /etc/salt/master
... (lines omitted)
file_roots:
base:
- /srv/salt/base
test:
- /srv/salt/test
dev:
- /srv/salt/dev
prod:
- /srv/salt/prod
...此处省略N行
# highstate format, and is generally just key/value pairs.
pillar_roots:
base:
- /srv/pillar/base
#ext_pillar:
......
[root@sybdic ~]# systemctl restart salt-master salt-syndic
Test
[root@master ~]# salt 'p1' test.ping
p1:
True
Apply the state again
[root@master ~]# salt 'p1' state.sls init.yum.main
p1:
----------
ID: /etc/yum.repos.d/centos-8.repo
Function: file.managed
Result: True
Comment: File /etc/yum.repos.d/centos-8.repo is in the correct state
Started: 00:33:16.927298
Duration: 32.354 ms
Changes:
----------
ID: /etc/yum.repos.d/epel-8.repo
Function: file.managed
Result: True
Comment: File /etc/yum.repos.d/epel-8.repo is in the correct state
Started: 00:33:16.959822
Duration: 15.09 ms
Changes:
----------
ID: /etc/yum.repos.d/salt-8.repo
Function: file.managed
Result: True
Comment: File /etc/yum.repos.d/salt-8.repo is in the correct state
Started: 00:33:16.975053
Duration: 15.066 ms
Changes:
Summary for p1
------------
Succeeded: 3
Failed: 0
------------
Total states run: 3
Total run time: 62.510 ms
[root@master ~]# salt 'p1' cmd.run 'ls /etc/yum.repos.d'
p1:
centos-8.repo
epel-8.repo
redhat.repo
salt-8.repo