SaltStack Advanced States and Data Systems
Using SaltStack advanced states
The YAML language
YAML is an intuitive data serialization format that computers can parse. It is highly readable, easy for humans to work with, interacts well with scripting languages, and is used to express data sequences.
It is similar to XML, a data description language in the SGML family, but its syntax is much simpler than XML's.
YAML looks like this:
house:
  family:
    parents:
      - John
      - Jane
    children:
      - Paul
      - Mark
      - Simone
  address:
    number: 34
    street: Main Street
    city: Nowheretown
    zipcode: 12345
Basic YAML rules:
- Use indentation to express hierarchy: two spaces per level, never the TAB key
- When a colon is not the last character on a line, it must be followed by a space
- Use - for list items; the - must be followed by a space
- Use # for comments
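To make the rules concrete, here is the house example above expressed as the data structure a YAML parser (for example PyYAML's yaml.safe_load) would produce; this sketch uses only built-in Python types:

```python
# The YAML "house" example as the equivalent Python structure:
# indented mappings become nested dicts, "-" items become lists.
house = {
    "family": {
        "parents": ["John", "Jane"],
        "children": ["Paul", "Mark", "Simone"],
    },
    "address": {
        "number": 34,
        "street": "Main Street",
        "city": "Nowheretown",
        "zipcode": 12345,
    },
}

print(house["family"]["children"])  # ['Paul', 'Mark', 'Simone']
print(house["address"]["number"])   # 34
```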
YAML state files must be placed where SaltStack expects to find them. Search for file_roots in the SaltStack master configuration file to see the locations.
[root@master ~]# vim /etc/salt/master
......
file_roots:
  base:
    - /srv/salt/base
  test:
    - /srv/salt/test
  dev:
    - /srv/salt/dev
  prod:
    - /srv/salt/prod
[root@master ~]# tree /srv/salt/
/srv/salt/
├── base
├── dev
├── prod
└── test
[root@master ~]# systemctl restart salt-master    // restart the service after editing its configuration file
Note:
- base is the default environment. If file_roots defines only one environment, it must be base, and it must be named base; the name cannot be changed.
Configuring an nginx instance with SaltStack
Write the SLS state file on the master and apply it
[root@master ~]# cd /srv/salt/base/
[root@master base]# mkdir -p web/nginx
[root@master base]# vim web/nginx/install.sls
nginx-install:        # a top-level key in a state file is called an ID; IDs must be globally unique
  pkg.installed:
    - name: nginx
# Salt reads the state file from top to bottom, so put states that must run first at the top
nginx-service:
  service.running:
    - name: nginx
    - enable: true
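A rough model of how this file is interpreted (hypothetical Python, not Salt's actual internals): each top-level ID maps to a state function and its arguments, and because declaration order is preserved, nginx-install runs before nginx-service.

```python
# Rough model of the parsed install.sls: IDs map to state functions
# and their arguments. Python dicts preserve insertion order, which
# mirrors Salt's top-to-bottom execution of the file.
sls = {
    "nginx-install": {"pkg.installed": [{"name": "nginx"}]},
    "nginx-service": {"service.running": [{"name": "nginx"}, {"enable": True}]},
}

execution_order = list(sls)
print(execution_order)  # ['nginx-install', 'nginx-service']
```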
First test whether the master and minion can communicate
[root@master ~]# salt 'node1' test.ping
node1:
True
Apply the state file
[root@master ~]# salt 'node1' state.sls web.nginx.install
node1:
----------
ID: nginx-install
Function: pkg.installed
Name: nginx
Result: True
Comment: The following packages were installed/updated: nginx
Started: 03:15:15.057937
Duration: 83364.275 ms
Changes:
----------
gd:
----------
new:
2.2.5-7.el8
old:
libXpm:
----------
new:
3.5.12-8.el8
old:
libwebp:
----------
new:
1.0.0-5.el8
old:
nginx:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-all-modules:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-filesystem:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-mod-http-image-filter:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-mod-http-perl:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-mod-http-xslt-filter:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-mod-mail:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
nginx-mod-stream:
----------
new:
1:1.14.1-9.module_el8.0.0+184+e34fea82
old:
perl-Carp:
----------
new:
1.42-396.el8
old:
perl-Data-Dumper:
----------
new:
2.167-399.el8
old:
perl-Digest:
----------
new:
1.17-395.el8
old:
perl-Digest-MD5:
----------
new:
2.55-396.el8
old:
perl-Encode:
----------
new:
4:2.97-3.el8
old:
perl-Errno:
----------
new:
1.28-420.el8
old:
perl-Exporter:
----------
new:
5.72-396.el8
old:
perl-File-Path:
----------
new:
2.15-2.el8
old:
perl-File-Temp:
----------
new:
0.230.600-1.el8
old:
perl-Getopt-Long:
----------
new:
1:2.50-4.el8
old:
perl-HTTP-Tiny:
----------
new:
0.074-1.el8
old:
perl-IO:
----------
new:
1.38-420.el8
old:
perl-IO-Socket-IP:
----------
new:
0.39-5.el8
old:
perl-IO-Socket-SSL:
----------
new:
2.066-4.module_el8.4.0+517+be1595ff
old:
perl-MIME-Base64:
----------
new:
3.15-396.el8
old:
perl-Mozilla-CA:
----------
new:
20160104-7.module_el8.3.0+416+dee7bcef
old:
perl-Net-SSLeay:
----------
new:
1.88-1.module_el8.4.0+517+be1595ff
old:
perl-PathTools:
----------
new:
3.74-1.el8
old:
perl-Pod-Escapes:
----------
new:
1:1.07-395.el8
old:
perl-Pod-Perldoc:
----------
new:
3.28-396.el8
old:
perl-Pod-Simple:
----------
new:
1:3.35-395.el8
old:
perl-Pod-Usage:
----------
new:
4:1.69-395.el8
old:
perl-Scalar-List-Utils:
----------
new:
3:1.49-2.el8
old:
perl-Socket:
----------
new:
4:2.027-3.el8
old:
perl-Storable:
----------
new:
1:3.11-3.el8
old:
perl-Term-ANSIColor:
----------
new:
4.06-396.el8
old:
perl-Term-Cap:
----------
new:
1.17-395.el8
old:
perl-Text-ParseWords:
----------
new:
3.30-395.el8
old:
perl-Text-Tabs+Wrap:
----------
new:
2013.0523-395.el8
old:
perl-Time-Local:
----------
new:
1:1.280-1.el8
old:
perl-URI:
----------
new:
1.73-3.el8
old:
perl-Unicode-Normalize:
----------
new:
1.25-396.el8
old:
perl-constant:
----------
new:
1.33-396.el8
old:
perl-interpreter:
----------
new:
4:5.26.3-420.el8
old:
perl-libnet:
----------
new:
3.11-3.el8
old:
perl-libs:
----------
new:
4:5.26.3-420.el8
old:
perl-macros:
----------
new:
4:5.26.3-420.el8
old:
perl-parent:
----------
new:
1:0.237-1.el8
old:
perl-podlators:
----------
new:
4.11-1.el8
old:
perl-threads:
----------
new:
1:2.21-2.el8
old:
perl-threads-shared:
----------
new:
1.58-2.el8
old:
----------
ID: nginx-service
Function: service.running
Name: nginx
Result: True
Comment: Started service nginx
Started: 03:16:38.431002
Duration: 126.696 ms
Changes:
----------
nginx:
True
Summary for node1
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
Total run time: 83.491 s
Check on node1
[root@node1 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
[root@node1 ~]# systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-11-02 03:16:38 EDT; 3min 17s ago
Main PID: 9489 (nginx)
Tasks: 3 (limit: 23485)
Memory: 5.6M
CGroup: /system.slice/nginx.service
├─9489 nginx: master process /usr/sbin/nginx
├─9490 nginx: worker process
└─9491 nginx: worker process
Nov 02 03:16:38 node1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 02 03:16:38 node1 nginx[9486]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 02 03:16:38 node1 nginx[9486]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 02 03:16:38 node1 systemd[1]: Started The nginx HTTP and reverse proxy server.
The output above shows that nginx has been deployed successfully
Tips for applying state files:
- First use test.ping to check that the target hosts are reachable, then apply the state file
top file
Introducing the top file
Is applying SLS files directly from the command line truly automated? No, because we still have to tell a specific host which job to run. Real automation means that when we tell the minions to work, each one already knows what it is supposed to do. Applying SLS files directly by command cannot achieve that, and the top file was created to solve this problem.
The top file is the entry point. Its file name can be found by searching for top.sls in the master configuration file; the file must live in the base environment, and by default it must be named top.sls.
The top file's job is to tell each host what work to do, for example making web servers start the web service and making database servers install MySQL.
A top file example:
[root@master base]# ls
web
[root@master base]# vim top.sls
base:                      # the environment in which to apply states
  'node1':                 # the target that should apply the states
    - web.nginx.install    # the state file to apply
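The resolution step can be sketched as follows (hypothetical Python; real Salt also supports globs, grain matchers, and compound targets, while this only handles exact minion IDs as in the example above):

```python
# Hypothetical sketch of top-file resolution:
# environment -> target -> list of state files to apply.
top = {
    "base": {
        "node1": ["web.nginx.install"],
    },
}

def states_for(minion_id, env="base"):
    """Return the state files a minion should apply in an environment."""
    return top.get(env, {}).get(minion_id, [])

print(states_for("node1"))  # ['web.nginx.install']
print(states_for("node2"))  # []
```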
Stop nginx on node1
[root@node1 ~]# systemctl stop nginx.service
[root@node1 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Apply it with the highstate
[root@master ~]# salt 'node1' state.highstate
node1:
----------
ID: nginx-install
Function: pkg.installed
Name: nginx
Result: True
Comment: All specified packages are already installed
Started: 03:56:59.166842
Duration: 1592.74 ms
Changes:
----------
ID: nginx-service
Function: service.running
Name: nginx
Result: True
Comment: Service nginx is already enabled, and is running
Started: 03:57:00.764967
Duration: 408.227 ms
Changes:
----------
nginx:
True
Summary for node1
------------
Succeeded: 2 (changed=1)
Failed: 0
------------
Total states run: 2
Total run time: 2.001 s
Check the nginx status on node1
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-11-02 03:59:13 EDT; 9s ago
Process: 10040 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 10038 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 10036 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 10042 (nginx)
Tasks: 3 (limit: 23485)
Memory: 5.6M
CGroup: /system.slice/nginx.service
├─10042 nginx: master process /usr/sbin/nginx
├─10043 nginx: worker process
└─10044 nginx: worker process
Nov 02 03:59:13 node1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 02 03:59:13 node1 nginx[10038]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 02 03:59:13 node1 nginx[10038]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 02 03:59:13 node1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Note:
If the target in the top file is `*`, keep in mind that `*` in the top file means every target that should apply states, whereas `*` in
salt '*' state.highstate
means telling every minion to go to work; whether a given minion actually has any work to do is decided by the top file.
Using the highstate
The most common operation when managing SaltStack is applying the highstate:
[root@master ~]# salt '*' state.highstate    // never use salt this way in production
Note:
The command above tells every minion to apply the highstate. In real work you rarely do this; instead you tell one or a few target hosts to apply the highstate, and whether each one actually executes anything is decided by the top file.
If you add test=True when applying the highstate, Salt reports what it would do without actually doing it.
Stop the nginx service on node1
[root@node1 ~]# systemctl stop nginx.service
[root@node1 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Run a highstate test on the master
[root@master ~]# salt 'node1' state.highstate test=true
node1:
----------
ID: nginx-install
Function: pkg.installed
Name: nginx
Result: True
Comment: All specified packages are already installed
Started: 04:01:59.580363
Duration: 1422.831 ms
Changes:
----------
ID: nginx-service
Function: service.running
Name: nginx
Result: None
Comment: Service nginx is set to start    // nginx would be started
Started: 04:02:01.008023
Duration: 115.343 ms
Changes:
Summary for node1
------------
Succeeded: 2 (unchanged=1)
Failed: 0
------------
Total states run: 2
Total run time: 1.538 s
Check on node1 whether nginx was started
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2021-11-02 04:01:08 EDT; 1min 35s ago
Process: 10040 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 10038 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 10036 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 10042 (code=exited, status=0/SUCCESS)
Nov 02 03:59:13 node1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 02 03:59:13 node1 nginx[10038]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 02 03:59:13 node1 nginx[10038]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 02 03:59:13 node1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Nov 02 04:01:08 node1 systemd[1]: Stopping The nginx HTTP and reverse proxy server...
Nov 02 04:01:08 node1 systemd[1]: nginx.service: Succeeded.
Nov 02 04:01:08 node1 systemd[1]: Stopped The nginx HTTP and reverse proxy server.
As you can see, the highstate was not actually applied: nginx was not started
SaltStack data systems
SaltStack has two data systems:
- Grains
- Pillar
SaltStack data system components
Grains
Grains is a SaltStack component that stores the information a minion collects when it starts. It is one of SaltStack's most important components, because we rely on it constantly during configuration and deployment. Grains records static information about each minion: common attributes such as CPU, memory, disk, and network details. You can view all of a minion's grains with grains.items.
What Grains does:
- Collect asset information
Where Grains is used:
- Information queries
- Target matching on the command line
- Target matching in the top file
- Target matching in templates
For target matching in templates, see: https://docs.saltstack.com/en/latest/topics/pillar/
Information query examples:
List all grains keys and values
[root@master ~]# salt 'node1' grains.items
node1:
----------
biosreleasedate:
07/22/2020
biosversion:
6.00
cpu_flags:
- fpu
- vme
- de
- pse
- tsc
- msr
- pae
- mce
- cx8
- apic
- sep
- mtrr
- pge
- mca
- cmov
- pat
- pse36
- clflush
- mmx
- fxsr
- sse
- sse2
- ss
- ht
- syscall
- nx
- pdpe1gb
- rdtscp
- lm
- constant_tsc
- arch_perfmon
- rep_good
- nopl
- xtopology
- tsc_reliable
- nonstop_tsc
- cpuid
- pni
- pclmulqdq
- ssse3
- fma
- cx16
- pcid
- sse4_1
- sse4_2
- x2apic
- movbe
- popcnt
- tsc_deadline_timer
- aes
- xsave
- avx
- f16c
- rdrand
- hypervisor
- lahf_lm
- abm
- 3dnowprefetch
- cpuid_fault
- invpcid_single
- ssbd
- ibrs
- ibpb
- stibp
- ibrs_enhanced
- fsgsbase
- tsc_adjust
- bmi1
- avx2
- smep
- bmi2
- erms
- invpcid
- avx512f
- avx512dq
- rdseed
- adx
- smap
- avx512ifma
- clflushopt
- clwb
- avx512cd
- sha_ni
- avx512bw
- avx512vl
- xsaveopt
- xsavec
- xgetbv1
- xsaves
- arat
- avx512vbmi
- umip
- pku
- ospke
- avx512_vbmi2
- gfni
- vaes
- vpclmulqdq
- avx512_vnni
- avx512_bitalg
- avx512_vpopcntdq
- rdpid
- movdiri
- movdir64b
- fsrm
- avx512_vp2intersect
- md_clear
- flush_l1d
- arch_capabilities
cpu_model:
11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
cpuarch:
x86_64
cwd:
/
disks:
- sr0
- sda
dns:
----------
domain:
ip4_nameservers:
- 114.114.114.114
ip6_nameservers:
nameservers:
- 114.114.114.114
options:
search:
sortlist:
domain:
efi:
False
efi-secure-boot:
False
fqdn:
node1
fqdn_ip4:
- 192.168.100.120
fqdn_ip6:
- ::1
fqdns:
- node1
gid:
0
gpus:
|_
----------
model:
SVGA II Adapter
vendor:
vmware
groupname:
root
host:
node1
hwaddr_interfaces:
----------
ens33:
00:0c:29:e4:bd:ac
lo:
00:00:00:00:00:00
id:
node1
init:
systemd
ip4_gw:
192.168.100.2
ip4_interfaces:
----------
ens33:
- 192.168.100.120
lo:
- 127.0.0.1
ip6_gw:
False
ip6_interfaces:
----------
ens33:
lo:
- ::1
ip_gw:
True
ip_interfaces:
----------
ens33:
- 192.168.100.120
lo:
- 127.0.0.1
- ::1
ipv4:
- 127.0.0.1
- 192.168.100.120
ipv6:
- ::1
kernel:
Linux
kernelparams:
|_
- BOOT_IMAGE
- (hd0,msdos1)/vmlinuz-4.18.0-257.el8.x86_64
|_
- root
- /dev/mapper/cs-root
|_
- ro
- None
|_
- crashkernel
- auto
|_
- resume
- /dev/mapper/cs-swap
|_
- rd.lvm.lv
- cs/root
|_
- rd.lvm.lv
- cs/swap
|_
- rhgb
- None
|_
- quiet
- None
kernelrelease:
4.18.0-257.el8.x86_64
kernelversion:
#1 SMP Thu Dec 3 22:16:23 UTC 2020
locale_info:
----------
defaultencoding:
UTF-8
defaultlanguage:
en_US
detectedencoding:
UTF-8
timezone:
EDT
localhost:
node1
lsb_distrib_codename:
CentOS Stream 8
lsb_distrib_id:
CentOS Stream
lsb_distrib_release:
8
lvm:
----------
cs:
- home
- root
- swap
machine_id:
62d6c37d6bc14b8fbafa14c091988c44
manufacturer:
VMware, Inc.
master:
192.168.100.110
mdadm:
mem_total:
3708
nodename:
node1
num_cpus:
2
num_gpus:
1
os:
CentOS Stream
os_family:
RedHat
osarch:
x86_64
oscodename:
CentOS Stream 8
osfinger:
CentOS Stream-8
osfullname:
CentOS Stream
osmajorrelease:
8
osrelease:
8
osrelease_info:
- 8
path:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
pid:
3692
productname:
VMware Virtual Platform
ps:
ps -efHww
pythonexecutable:
/usr/bin/python3.6
pythonpath:
- /usr/bin
- /usr/lib64/python36.zip
- /usr/lib64/python3.6
- /usr/lib64/python3.6/lib-dynload
- /usr/lib64/python3.6/site-packages
- /usr/lib/python3.6/site-packages
pythonversion:
- 3
- 6
- 8
- final
- 0
saltpath:
/usr/lib/python3.6/site-packages/salt
saltversion:
3004
saltversioninfo:
- 3004
selinux:
----------
enabled:
True
enforced:
Permissive
serialnumber:
VMware-56 4d d0 bc 2a 56 6e 32-cf f3 e1 1f f0 e4 bd ac
server_id:
1797241226
shell:
/bin/sh
ssds:
swap_total:
4031
systemd:
----------
features:
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
version:
239
systempath:
- /usr/local/sbin
- /usr/local/bin
- /usr/sbin
- /usr/bin
transactional:
False
uid:
0
username:
root
uuid:
bcd04d56-562a-326e-cff3-e11ff0e4bdac
virtual:
VMware
zfs_feature_flags:
False
zfs_support:
False
zmqversion:
4.3.4
Query a specific key, for example to get the IP address
[root@master ~]# salt '*' grains.get fqdn_ip4
master:
- 192.168.100.110
node2:
- 192.168.100.150
node1:
- 192.168.100.120
node3:
- 192.168.100.100
[root@master ~]# salt '*' grains.get ip4_interfaces:ens33    // query the IP address of the ens33 interface
node2:
- 192.168.100.150
master:
- 192.168.100.110
node1:
- 192.168.100.120
node3:    // node3 runs RedHat, where the default NIC name is eth0, so no IP is returned here
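The colon-delimited lookup that grains.get performs can be sketched like this (a simplified illustration, not Salt's actual code): "ip4_interfaces:ens33" walks one nested level per ":", returning a default when any level is missing, which is why node3 returns nothing.

```python
# Sketch of grains.get's colon-delimited traversal over nested grains.
grains = {
    "fqdn_ip4": ["192.168.100.120"],
    "ip4_interfaces": {"ens33": ["192.168.100.120"], "lo": ["127.0.0.1"]},
}

def grains_get(data, path, default=""):
    value = data
    for key in path.split(":"):
        if not isinstance(value, dict) or key not in value:
            return default  # missing level: return the default, like node3
        value = value[key]
    return value

print(grains_get(grains, "ip4_interfaces:ens33"))  # ['192.168.100.120']
print(grains_get(grains, "ip4_interfaces:eth0"))   # ''
```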
Target matching examples:
Using Grains to match minions:
Run a command on all CentOS Stream systems
[root@master ~]# salt -G 'os:CentOS Stream' cmd.run 'uptime'
node2:
04:12:34 up 1:57, 2 users, load average: 0.00, 0.00, 0.00
node1:
04:12:34 up 7:00, 1 user, load average: 0.01, 0.03, 0.00
master:
04:12:34 up 7:00, 1 user, load average: 0.17, 0.08, 0.02
Using Grains in the top file:
[root@master ~]# vim /srv/salt/base/top.sls
base:
  'os:CentOS Stream':    # install nginx on every host running CentOS Stream
    - match: grain
    - web.nginx.install
Two ways to define custom grains:
- In the minion configuration file (search for grains in the file)
- In a /etc/salt/grains file on the minion (recommended)
[root@node1 ~]# vim /etc/salt/grains
aaa: test
[root@node1 ~]# systemctl restart salt-minion.service
[root@master ~]# salt 'node1' grains.get aaa
node1:
test
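Conceptually, the YAML in /etc/salt/grains is merged on top of the automatically collected grains, so "aaa" shows up next to os, fqdn, and the rest. The sketch below is a simplification (the real merging happens inside the minion and has its own precedence rules):

```python
# Simplified model of custom grains merging.
collected = {"os": "CentOS Stream", "fqdn": "node1"}  # auto-collected grains
custom = {"aaa": "test"}            # parsed contents of /etc/salt/grains

grains = {**collected, **custom}    # custom values layered on top
print(grains["aaa"])  # test
print(grains["os"])   # CentOS Stream
```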
Defining custom grains without restarting the minion:
[root@node1 ~]# vim /etc/salt/grains
aaa: test
bbb: 456
[root@master ~]# salt 'node1' saltutil.sync_grains
node1:
[root@master ~]# salt 'node1' grains.get bbb
node1:
456
Pillar
Pillar is another of SaltStack's most important components: the data management center. It is frequently used together with states in large-scale configuration management. Pillar's main job in SaltStack is to store and define the data needed for configuration management, such as software version numbers, usernames, and passwords. Like Grains, it is defined and stored in YAML format.
The master configuration file contains a Pillar settings section that defines Pillar-related parameters:
#pillar_roots:
# base:
# - /srv/pillar
By default the base environment's Pillar working directory is /srv/pillar. If you want different Pillar working directories for multiple environments, just edit this part of the configuration file.
Pillar's characteristics:
- Data can be defined for the specific minions that need it
- Only the designated minions can see the data defined for them
- Configured in the master configuration file
[root@master ~]# salt '*' pillar.items
master:
----------
node1:
----------
node2:
----------
node3:
----------
By default pillar contains no data. If you want to see these values, uncomment pillar_opts in the master configuration file and set it to True.
[root@master ~]# vim /etc/salt/master
......
pillar_opts: True
[root@master ~]# systemctl restart salt-master.service
......
tags
svnfs_trunk:
trunk
svnfs_update_interval:
60
syndic_dir:
/var/cache/salt/master/syndics
syndic_event_forward_timeout:
0.5
syndic_failover:
random
syndic_forward_all_events:
False
syndic_jid_forward_cache_hwm:
100
syndic_log_file:
/var/log/salt/syndic
syndic_master:
masterofmasters
syndic_pidfile:
/var/run/salt-syndic.pid
syndic_wait:
5
tcp_keepalive:
True
tcp_keepalive_cnt:
-1
tcp_keepalive_idle:
300
tcp_keepalive_intvl:
-1
tcp_master_pub_port:
4512
tcp_master_publish_pull:
4514
tcp_master_pull_port:
4513
tcp_master_workers:
4515
test:
False
thin_extra_mods:
thorium_interval:
0.5
thorium_roots:
----------
base:
- /srv/thorium
thorium_top:
top.sls
thoriumenv:
None
timeout:
5
token_dir:
/var/cache/salt/master/tokens
token_expire:
43200
token_expire_user_override:
False
top_file_merging_strategy:
merge
transport:
zeromq
unique_jid:
False
user:
root
utils_dirs:
- /var/cache/salt/master/extmods/utils
verify_env:
True
winrepo_branch:
master
winrepo_cachefile:
winrepo.p
winrepo_dir:
/srv/salt/win/repo
winrepo_dir_ng:
/srv/salt/win/repo-ng
winrepo_fallback:
winrepo_insecure_auth:
False
winrepo_passphrase:
winrepo_password:
winrepo_privkey:
winrepo_pubkey:
winrepo_refspecs:
- +refs/heads/*:refs/remotes/origin/*
- +refs/tags/*:refs/tags/*
winrepo_remotes:
- https://github.com/saltstack/salt-winrepo.git
winrepo_remotes_ng:
- https://github.com/saltstack/salt-winrepo-ng.git
winrepo_ssl_verify:
True
winrepo_user:
worker_threads:
5
zmq_backlog:
1000
zmq_filtering:
False
zmq_monitor:
False
Defining custom pillar data:
Search for pillar_roots in the master configuration file to see where pillar data is stored
[root@master ~]# vim /etc/salt/master
......
pillar_roots:
base:
- /srv/pillar/base
[root@master ~]# mkdir -p /srv/pillar/base
[root@master ~]# tree /srv/pillar/
/srv/pillar/
└── base
[root@master ~]# systemctl restart salt-master.service
[root@master ~]# vim /srv/pillar/base/apache.sls
{% if grains['os'] == 'CentOS Stream' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
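The Jinja conditional in apache.sls boils down to choosing the Apache package name from the minion's "os" grain; on any other OS the pillar renders empty and no "apache" key is defined. A sketch of that logic:

```python
# What the Jinja conditional in apache.sls renders to, per OS grain.
def apache_package(os_grain):
    if os_grain == "CentOS Stream":
        return "httpd"      # CentOS / RHEL package name
    elif os_grain == "Debian":
        return "apache2"    # Debian package name
    return None             # pillar renders empty: no "apache" key

print(apache_package("CentOS Stream"))  # httpd
print(apache_package("Debian"))         # apache2
```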
Define the top file entry
[root@master ~]# vim /srv/pillar/base/top.sls
base:
  'node2':
    - apache    # this top.sls means that node2, in the base environment, can access the apache pillar
[root@master ~]# salt 'node2' pillar.items
node2:
----------
apache:
httpd
Modify the Apache state file under the salt tree so it references the pillar data
[root@master ~]# vim /srv/salt/base/web/httpd/install.sls
httpd_install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

httpd_service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: true
Apply the highstate
[root@master ~]# salt 'node2' state.highstate
node2:
----------
ID: httpd_install
Function: pkg.installed
Name: httpd
Result: True
Comment: The following packages were installed/updated: httpd
Started: 05:16:28.538422
Duration: 39046.254 ms
Changes:
----------
apr:
----------
new:
1.6.3-12.el8
old:
apr-util:
----------
new:
1.6.1-6.el8
old:
apr-util-bdb:
----------
new:
1.6.1-6.el8
old:
apr-util-openssl:
----------
new:
1.6.1-6.el8
old:
centos-logos-httpd:
----------
new:
85.8-1.el8
old:
httpd:
----------
new:
2.4.37-40.module_el8.5.0+852+0aafc63b
old:
httpd-filesystem:
----------
new:
2.4.37-40.module_el8.5.0+852+0aafc63b
old:
httpd-tools:
----------
new:
2.4.37-40.module_el8.5.0+852+0aafc63b
old:
mailcap:
----------
new:
2.1.48-3.el8
old:
mod_http2:
----------
new:
1.15.7-3.module_el8.4.0+778+c970deab
old:
----------
ID: httpd_service
Function: service.running
Name: httpd
Result: True
Comment: Service httpd has been enabled, and is running
Started: 05:17:07.597638
Duration: 5819.774 ms
Changes:
----------
httpd:
True
Summary for node2
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
Total run time: 44.866 s
Check whether the httpd service is installed
[root@node2 ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
[root@node2 ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-11-02 05:17:13 EDT; 1min 19s ago
Docs: man:httpd.service(8)
Main PID: 16694 (httpd)
Status: "Running, listening on: port 80"
Tasks: 213 (limit: 23485)
Memory: 26.0M
CGroup: /system.slice/httpd.service
├─16694 /usr/sbin/httpd -DFOREGROUND
├─16787 /usr/sbin/httpd -DFOREGROUND
├─16788 /usr/sbin/httpd -DFOREGROUND
├─16789 /usr/sbin/httpd -DFOREGROUND
└─16790 /usr/sbin/httpd -DFOREGROUND
Nov 02 05:17:07 node2 systemd[1]: Starting The Apache HTTP Server...
Nov 02 05:17:13 node2 httpd[16694]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.100.150. Set the >
Nov 02 05:17:13 node2 systemd[1]: Started The Apache HTTP Server.
Nov 02 05:17:13 node2 httpd[16694]: Server configured, listening on: port 80
Differences between Grains and Pillar

| | Stored on | Type | Collection | Use cases |
|---|---|---|---|---|
| Grains | minion | static | collected at minion startup; can be refreshed without restarting the minion service | 1. information queries 2. command-line target matching 3. top-file target matching 4. template target matching |
| Pillar | master | dynamic | assigned on the master, effective in real time | 1. target matching 2. sensitive data configuration |