SaltStack Returners and Job Management
1. The SaltStack return component
The return component (returner) can be understood as SaltStack taking the data a Minion returns after executing a job and either storing it or handing it to another program. It supports many storage backends, such as MySQL, MongoDB, Redis and Memcached. With a returner, every SaltStack operation can be recorded, which provides the data source for later log auditing. Around 30 returners for data storage and external interfaces are currently supported officially, and they are easy to configure and use. Custom returners are also supported; a custom returner must be written in Python. Once the returner you want has been chosen and configured, you simply specify it after the salt command.
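For example, once the mysql returner configured later in this section is in place, any execution command can send its results there just by appending --return; the target '*' and the cmd.run call below are only placeholders for illustration:
// Send the result of any execution module call to the configured returner
[root@master ~]# salt '*' cmd.run 'uptime' --return mysql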
// List all available returners
[root@master ~]# salt 'node1' sys.list_returners
node1:
- carbon
- couchdb
- etcd
- highstate
- local
- local_cache
- mattermost
- multi_returner
- pushover
- rawfile_json
- slack
- slack_webhook
- smtp
- splunk
- sqlite3
- syslog
- telegram
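Besides listing the returners, the Minion-side options a specific returner expects can usually be printed as well; the sys.returner_doc function used below is assumed to be available in your Salt release:
// Show the documentation (including required minion settings) for the mysql returner
[root@master ~]# salt 'node1' sys.returner_doc mysql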
1.1 The return workflow
With a returner, the job is triggered on the Master; after the Minion receives and processes it, the Minion connects directly to the return storage server and writes the data there. Keep this in mind: because it is the Minion that talks to the storage server throughout, the Minion-side configuration and dependencies must be correct. This means the dependency package for the chosen returner has to be installed on every Minion; if MySQL is used as the return storage backend, for example, a Python MySQL module must be installed on every Minion.
Use MySQL as the return storage backend
Install a Python MySQL driver (here the python3-mysql package) on all Minions
[root@master ~]# salt 'node1' pkg.install python3-mysql
node1:
----------
mariadb-connector-c:
----------
new:
3.1.11-2.el8_3
old:
mariadb-connector-c-config:
----------
new:
3.1.11-2.el8_3
old:
python3-mysql:
----------
new:
1.4.6-5.el8
old:
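Before going further it is worth checking that the driver can actually be imported on the Minion. The module name MySQLdb below is what the python3-mysql package provides on EL8; this is only an illustrative sanity check:
// Verify that the Python MySQL driver is importable on the minion
[root@master ~]# salt 'node1' cmd.run 'python3 -c "import MySQLdb; print(MySQLdb.__version__)"'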
Deploy a MySQL server to act as the storage server; here it is deployed directly on the host 192.168.72.143 (node2).
// Deploy MySQL (MariaDB)
[root@node2 ~]# dnf -y install mariadb-*
[root@node2 ~]# systemctl enable --now mariadb
Created symlink /etc/systemd/system/mysql.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/mysqld.service → /usr/lib/systemd/system/mariadb.service.
Created symlink /etc/systemd/system/multi-user.target.wants/mariadb.service → /usr/lib/systemd/system/mariadb.service.
[root@node2 ~]# mysql -uroot -p1
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.3.28-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE `salt`
-> DEFAULT CHARACTER SET utf8
-> DEFAULT COLLATE utf8_general_ci;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> USE `salt`;
Database changed
MariaDB [salt]> --
MariaDB [salt]> -- Table structure for table `jids`
MariaDB [salt]> --
MariaDB [salt]> DROP TABLE IF EXISTS `jids`;
Query OK, 0 rows affected, 1 warning (0.000 sec)
MariaDB [salt]> CREATE TABLE `jids` (
-> `jid` varchar(255) NOT NULL,
-> `load` mediumtext NOT NULL,
-> UNIQUE KEY `jid` (`jid`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.002 sec)
MariaDB [salt]> --
MariaDB [salt]> -- Table structure for table `salt_returns`
MariaDB [salt]> --
MariaDB [salt]> DROP TABLE IF EXISTS `salt_returns`;
Query OK, 0 rows affected, 1 warning (0.000 sec)
MariaDB [salt]> CREATE TABLE `salt_returns` (
-> `fun` varchar(50) NOT NULL,
-> `jid` varchar(255) NOT NULL,
-> `return` mediumtext NOT NULL,
-> `id` varchar(255) NOT NULL,
-> `success` varchar(10) NOT NULL,
-> `full_ret` mediumtext NOT NULL,
-> `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-> KEY `id` (`id`),
-> KEY `jid` (`jid`),
-> KEY `fun` (`fun`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.004 sec)
MariaDB [salt]> --
MariaDB [salt]> -- Table structure for table `salt_events`
MariaDB [salt]> --
MariaDB [salt]> DROP TABLE IF EXISTS `salt_events`;
Query OK, 0 rows affected, 1 warning (0.000 sec)
MariaDB [salt]> CREATE TABLE `salt_events` (
-> `id` BIGINT NOT NULL AUTO_INCREMENT,
-> `tag` varchar(255) NOT NULL,
-> `data` mediumtext NOT NULL,
-> `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-> `master_id` varchar(255) NOT NULL,
-> PRIMARY KEY (`id`),
-> KEY `tag` (`tag`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.003 sec)
MariaDB [salt]> grant all on salt.* to salt@'%' identified by 'salt';
Query OK, 0 rows affected (0.000 sec)
MariaDB [salt]> flush privileges;
Query OK, 0 rows affected (0.000 sec)
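At this point it is a good idea to confirm that the Minion can reach the new database over the network with the account just created; the check below assumes the mariadb client package is installed on node1 and is only a sanity test:
// Test the salt account from the minion side
[root@node1 ~]# mysql -usalt -psalt -h192.168.72.143 salt -e 'SHOW TABLES;'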
Configure the Minion
[root@node1 ~]# vim /etc/salt/minion
..... (several lines omitted)
mysql.host: '192.168.72.143'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@node1 ~]# systemctl restart salt-minion
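The same settings can also be kept out of the main configuration file: by default the Minion includes every *.conf file under /etc/salt/minion.d/, so a drop-in file like the one sketched below works equally well (the file name mysql.conf is arbitrary):
// Alternative: put the returner settings in a drop-in file
[root@node1 ~]# cat /etc/salt/minion.d/mysql.conf
mysql.host: '192.168.72.143'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@node1 ~]# systemctl restart salt-minion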
Test from the Master that results are stored in MySQL
[root@master ~]# salt 'node1' test.ping --return mysql
node1:
True
Query the result in the database
MariaDB [salt]> select * from salt_returns\G
*************************** 1. row ***************************
fun: test.ping
jid: 20211105032327878364
return: true
id: node1
success: 1
full_ret: {"success": true, "return": true, "retcode": 0, "jid": "20211105032327878364", "fun": "test.ping", "fun_args": [], "id": "node1"}
alter_time: 2021-11-05 19:23:28
1 row in set (0.000 sec)
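Because every record carries the function, the minion id and a timestamp, the same table can be used directly for auditing; the query below is just an example of pulling the most recent operations:
// Who executed what, and when
MariaDB [salt]> SELECT fun, id, success, alter_time FROM salt_returns ORDER BY alter_time DESC LIMIT 5;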
2. Job cache
2.1 The job cache workflow
With a returner, the Minion interacts with the storage server directly, so the module for the chosen backend (python-mysql, for example) has to be installed on every Minion. Can the returned results instead be stored to the storage server directly from the Master?
Yes, they can. This approach is called the job cache: once a Minion has returned its result to the Master, the Master caches the result locally and then writes the cached result to the specified storage server, for example MySQL.
Install python3-PyMySQL on the Master
[root@master ~]# dnf -y install python3-PyMySQL
Last metadata expiration check: 1:59:42 ago on Thu 04 Nov 2021 09:39:18 PM.
Package python3-PyMySQL-0.10.1-2.module_el8.5.0+761+faacb0fb.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
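A quick import test confirms that the driver is usable by the salt-master process; this is only a sanity check:
// Verify that PyMySQL is importable on the master
[root@master ~]# python3 -c 'import pymysql; print(pymysql.__version__)'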
Enable master_job_cache on the Master
[root@master ~]# vim /etc/salt/master
..... (several lines omitted)
master_job_cache: mysql
mysql.host: '192.168.72.143'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@master ~]# systemctl restart salt-master
Empty the tables on the database server
MariaDB [salt]> show tables;
+----------------+
| Tables_in_salt |
+----------------+
| jids |
| salt_events |
| salt_returns |
+----------------+
3 rows in set (0.000 sec)
MariaDB [salt]> delete from salt_returns;
Query OK, 2 rows affected (0.001 sec)
MariaDB [salt]> select * from salt_returns\G
Empty set (0.000 sec)
Test again from the Master that results are stored in the database
[root@master ~]# salt 'node1' cmd.run 'df -h'
node1:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 371M 0 371M 0% /dev
tmpfs 391M 180K 391M 1% /dev/shm
tmpfs 391M 5.7M 385M 2% /run
tmpfs 391M 0 391M 0% /sys/fs/cgroup
/dev/mapper/cs-root 64G 6.4G 58G 10% /
/dev/sda1 1014M 195M 820M 20% /boot
/dev/mapper/cs-home 32G 255M 31G 1% /home
tmpfs 79M 0 79M 0% /run/user/0
Query the result in the database
MariaDB [salt]> select * from salt_returns\G
*************************** 1. row ***************************
fun: cmd.run
jid: 20211105034921576715
return: "Filesystem Size Used Avail Use% Mounted on\ndevtmpfs 371M 0 371M 0% /dev\ntmpfs 391M 180K 391M 1% /dev/shm\ntmpfs 391M 5.7M 385M 2% /run\ntmpfs 391M 0 391M 0% /sys/fs/cgroup\n/dev/mapper/cs-root 64G 6.4G 58G 10% /\n/dev/sda1 1014M 195M 820M 20% /boot\n/dev/mapper/cs-home 32G 255M 31G 1% /home\ntmpfs 79M 0 79M 0% /run/user/0"
id: node1
success: 1
full_ret: {"cmd": "_return", "id": "node1", "success": true, "return": "Filesystem Size Used Avail Use% Mounted on\ndevtmpfs 371M 0 371M 0% /dev\ntmpfs 391M 180K 391M 1% /dev/shm\ntmpfs 391M 5.7M 385M 2% /run\ntmpfs 391M 0 391M 0% /sys/fs/cgroup\n/dev/mapper/cs-root 64G 6.4G 58G 10% /\n/dev/sda1 1014M 195M 820M 20% /boot\n/dev/mapper/cs-home 32G 255M 31G 1% /home\ntmpfs 79M 0 79M 0% /run/user/0", "retcode": 0, "jid": "20211105034921576715", "fun": "cmd.run", "fun_args": ["df -h"], "_stamp": "2021-11-05T03:49:21.685125"}
alter_time: 2021-11-05 19:49:21
1 row in set (0.000 sec)
2.2 Job management
Get the jid of a job
[root@master ~]# salt 'node1' cmd.run 'uptime' -v
Executing job with jid 20211105035400391548
-------------------------------------------
node1:
23:54:00 up 2:42, 2 users, load average: 0.07, 0.09, 0.09
Look up the job's result by its jid
[root@master ~]# salt-run jobs.lookup_jid 20211105035400391548
node1:
23:54:00 up 2:42, 2 users, load average: 0.07, 0.09, 0.09
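The jobs runner offers a few more functions that are useful for day-to-day job management; the two calls below list the jobs kept in the job cache and the jobs still running on the Minions, respectively (their output depends on the cache configured above):
// List all jobs currently kept in the job cache
[root@master ~]# salt-run jobs.list_jobs
// Show jobs that are still running on the minions
[root@master ~]# salt-run jobs.active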