Setting Up a Zabbix Monitoring System

This article is a step-by-step guide to building a Zabbix monitoring system on CentOS 7. It first covers preparing the lab environment, including the server configuration. It then walks through the Zabbix installation in detail: setting the hostname, disabling the firewall, installing and authorizing the database, and configuring the web GUI. Finally, it describes the full process of configuring the Zabbix agent on the monitored nodes so that every node can be managed by the monitoring server.



I. Lab Environment Preparation

  • Three CentOS 7.5 servers: one as the monitoring server and two as monitored nodes. Configure the yum repositories, disable the firewall, keep the clocks on all nodes synchronized, and make sure the nodes can reach each other by hostname.

Hostname   IP            Role            Notes
zabbix     10.11.59.175  zabbix-server   runs the monitoring server
node1      10.11.59.176  zabbix-agent    runs the agent
node2      10.11.59.177  zabbix-agent    runs the agent

(Note: the monitored nodes also appear as agent1/agent2 in the hosts file and agent configuration later in this article.)

II. Installing Zabbix

1. Set the hostname
[root@localhost ~]# hostnamectl --static set-hostname zabbix
2. Disable the firewall
[root@zabbix ~]#  systemctl stop iptables firewalld
[root@zabbix ~]#  systemctl disable iptables firewalld
3. Enable the mail service
[root@zabbix ~]# systemctl start postfix
[root@zabbix ~]# systemctl enable postfix
4. Disable SELinux
[root@zabbix ~]#  sed -ri '/SELINUX=/cSELINUX=disabled' /etc/selinux/config
[root@zabbix ~]#  setenforce 0           # disable SELinux for the current session
[root@zabbix ~]#  reboot
5. Add hosts entries
[root@zabbix ~]# vim /etc/hosts
10.11.59.175 zabbix
10.11.59.176 agent1
10.11.59.177 agent2
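The same entries can be staged non-interactively; a minimal sketch (the fragment file name hosts.zabbix is an arbitrary choice, and the append loop is left commented out so nothing touches /etc/hosts until you run it deliberately):

```shell
# Write the cluster's name mappings to a fragment file.
cat > hosts.zabbix <<'EOF'
10.11.59.175 zabbix
10.11.59.176 agent1
10.11.59.177 agent2
EOF
# On the real host, append only the lines that are missing (idempotent):
# while read -r e; do grep -qxF "$e" /etc/hosts || echo "$e" >> /etc/hosts; done < hosts.zabbix
```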
Install the Zabbix yum repository
[root@zabbix ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
Refresh the yum repository list
[root@zabbix ~]# yum repolist
Install the Zabbix server and agent:
[root@zabbix ~]# yum -y install epel-release.noarch
[root@zabbix ~]# yum -y install zabbix-agent zabbix-get zabbix-sender zabbix-server-mysql 
Install the Zabbix frontend:
[root@zabbix ~]# yum -y install centos-release-scl
  • Enable the frontend repository in the repo configuration
[root@zabbix ~]# vim /etc/yum.repos.d/zabbix.repo
[zabbix-frontend]
...
enabled=1
...
### or in one step:  yum -y install --enablerepo=zabbix-frontend  zabbix-web-mysql-scl zabbix-apache-conf-scl
  • Install the zabbix frontend
[root@zabbix ~]# yum -y install zabbix-web-mysql-scl zabbix-apache-conf-scl
Install and configure the database:
1. Create mariadb.repo
[root@zabbix ~]# vim /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB 
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.5/centos7-amd64 
gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB 
gpgcheck=1
2. Install the latest MariaDB via yum
[root@zabbix ~]# yum install -y MariaDB-server MariaDB-client
3. Edit the configuration file
[root@zabbix ~]# vim /etc/my.cnf.d/server.cnf
    [mysqld]
    skip_name_resolve = ON          # skip hostname resolution
    innodb_file_per_table = ON      # one tablespace file per table
    innodb_buffer_pool_size = 256M  # buffer pool size
    max_connections = 2000          # maximum connections
    log-bin = master-log            # enable the binary log
4. Restart the database service
[root@zabbix ~]# systemctl restart mariadb
[root@zabbix ~]# mysql_secure_installation  # secure the MariaDB installation

5. Create the database and grant privileges
[root@zabbix ~]# mysql
MariaDB [(none)]> create database zabbix character set utf8 collate utf8_bin;  # create the zabbix database
MariaDB [(none)]> grant all on zabbix.* to 'zabbix'@'10.11.59.%' identified by '1234.com';	# note the authorized network
MariaDB [(none)]> flush privileges;           # reload the grant tables
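The interactive statements above can also be kept in a script for repeatable provisioning; a sketch (the file name zabbix_db.sql is arbitrary; apply it afterwards as root with `mysql -uroot -p < zabbix_db.sql`):

```shell
# Capture the provisioning SQL in a file for later, repeatable use.
cat > zabbix_db.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS zabbix CHARACTER SET utf8 COLLATE utf8_bin;
GRANT ALL ON zabbix.* TO 'zabbix'@'10.11.59.%' IDENTIFIED BY '1234.com';
FLUSH PRIVILEGES;
EOF
```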
6. Import the Zabbix schema
  • Check what the zabbix-server-mysql package provides
[root@zabbix ~]# rpm -ql zabbix-server-mysql
/usr/share/doc/zabbix-server-mysql-5.0.4/create.sql.gz   # SQL script that creates the tables
Use create.sql.gz to create the required tables
[root@zabbix ~]# gzip -d /usr/share/doc/zabbix-server-mysql-5.0.4/create.sql.gz
[root@zabbix ~]# mysql -uzabbix -h10.11.59.175 -p'1234.com' zabbix < /usr/share/doc/zabbix-server-mysql-5.0.4/create.sql
  • After the import, inspect the database:
[root@zabbix ~]# mysql -uzabbix -h10.11.59.175 -p'1234.com'
MariaDB [(none)]> show databases;
MariaDB [(none)]> use zabbix
MariaDB [zabbix]> show tables;
166 rows in set (0.001 sec)
  • The schema has been imported successfully.

Configuring the server side

  • With the database ready, the next step is to edit the server-side configuration file.
[root@zabbix ~]# cd /etc/zabbix/
[root@zabbix zabbix]# ls
web  zabbix_agentd.conf  zabbix_agentd.d  zabbix_server.conf
# back up the configuration file first so it can be restored later
[root@zabbix zabbix]# cp zabbix_server.conf{,.bak}
[root@zabbix zabbix]# vim zabbix_server.conf
ListenPort=10051            # default listen port
SourceIP=10.11.59.175       # source IP for outgoing sampling requests
Database-related settings:
DBHost=10.11.59.175         # database host
DBName=zabbix               # database name
DBUser=zabbix               # database user
DBPassword=1234.com         # database password (must match the grant above)
DBPort=3306                 # database port
[root@zabbix zabbix]# vim zabbix_agentd.conf
Server=10.11.59.175         # IP of the zabbix server
ServerActive=10.11.59.175   # IP of the zabbix server
Hostname=zabbix             # hostname
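The vim edits above can be scripted for repeatability. Below is a minimal sketch using a hypothetical helper, set_opt, that replaces an existing (possibly commented-out) key or appends it; CONF defaults to a local scratch file so the snippet can be dry-run, and should point at /etc/zabbix/zabbix_server.conf on the real server:

```shell
CONF=${CONF:-zabbix_server.conf}   # set to /etc/zabbix/zabbix_server.conf on the server
set_opt() {
  # Replace every existing "Key=..." line (commented or not), otherwise append one.
  local key=$1 val=$2
  if grep -qE "^#? ?${key}=" "$CONF" 2>/dev/null; then
    sed -ri "s|^#? ?${key}=.*|${key}=${val}|" "$CONF"
  else
    printf '%s=%s\n' "$key" "$val" >> "$CONF"
  fi
}
set_opt ListenPort 10051
set_opt DBHost     10.11.59.175
set_opt DBName     zabbix
set_opt DBUser     zabbix
set_opt DBPassword 1234.com
```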
Start the service
[root@zabbix zabbix]# systemctl start zabbix-server.service
   Confirm the server port is open: ss -ntpl | grep 10051
Configure the web GUI
1. Configure the PHP frontend
  • Set the time zone for the PHP-FPM pool used by Zabbix
[root@zabbix ~]# vim /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
php_value[date.timezone] = Asia/Shanghai
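This edit can also be done non-interactively; a sketch that uncomments the shipped line (the "Europe/Riga" placeholder value is an assumption about the packaged file, and PHPCONF defaults to a local scratch file so the snippet can be dry-run first):

```shell
PHPCONF=${PHPCONF:-zabbix-fpm.conf}   # real path: /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
# For a self-contained demo, create the (assumed) shipped default if the file is absent:
[ -f "$PHPCONF" ] || echo '; php_value[date.timezone] = Europe/Riga' > "$PHPCONF"
# Uncomment the directive and set the desired zone:
sed -ri 's|^; *(php_value\[date\.timezone\]) *=.*|\1 = Asia/Shanghai|' "$PHPCONF"
```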
Note: the time zone must be set; the directive ships commented out. Setting the time zone in the main PHP configuration applies to every PHP service, while setting it in zabbix.conf affects only Zabbix.
2. Start the httpd and related services
[root@zabbix ~]# systemctl restart zabbix-server zabbix-agent httpd rh-php72-php-fpm
[root@zabbix ~]# systemctl enable zabbix-server zabbix-agent httpd rh-php72-php-fpm

3. Access via browser and complete the initial setup

Visit http://10.11.59.175/zabbix/ in a browser. The first visit requires a few initialization steps; follow the prompts. The default username is Admin and the password is zabbix. After logging in you land on the dashboard.

III. Configuring the Agent

  • Install the agent on each monitored host, point it at the server, then add the host on the server side to bring it into the monitoring system.
1. Install the Zabbix repository
[root@node1 ~]# wget https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
[root@node1 ~]# rpm -ivh zabbix-release-5.0-1.el7.noarch.rpm
[root@node1 ~]# yum -y install epel-release.noarch
[root@node1 ~]# yum install zabbix-agent zabbix-sender -y
2. Edit the configuration file
[root@node1 zabbix]# rpm -ql zabbix-agent
[root@node1 ~]# cd /etc/zabbix/
[root@node1 zabbix]# ls
zabbix_agentd.conf  zabbix_agentd.d
[root@node1 zabbix]# cp zabbix_agentd.conf{,.bak}

[root@node1 zabbix]# vim zabbix_agentd.conf
Server=10.11.59.175         # which server may poll this agent
ListenPort=10050            # port the agent listens on
ListenIP=10.11.59.176       # address the agent listens on; 0.0.0.0 means all local addresses (use the real IP in production)
StartAgents=3               # number of pre-forked agent processes (a tuning knob)
EnableRemoteCommands=1
LogRemoteCommands=1
ServerActive=10.11.59.175   # server for active checks
Hostname=agent1             # name by which the server identifies this agent
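Since the two monitored nodes differ only in ListenIP and Hostname, the per-node values can be generated in one pass; a minimal sketch (the .frag output files are a scratch convention of this sketch — merge each one into /etc/zabbix/zabbix_agentd.conf on the matching node):

```shell
SERVER=10.11.59.175
# hostname=IP pairs for the monitored nodes (from the environment table above)
for node in agent1=10.11.59.176 agent2=10.11.59.177; do
  name=${node%%=*}; ip=${node#*=}
  # Emit the settings that differ (or matter) per node.
  cat > "zabbix_agentd_${name}.frag" <<EOF
Server=${SERVER}
ServerActive=${SERVER}
ListenIP=${ip}
Hostname=${name}
EOF
done
```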

3. Start the service

[root@agent1 zabbix]# systemctl start zabbix-agent.service
Check that the port is open
[root@agent1 zabbix]# ss -ntul |grep 10050

The rest can be completed by clicking through the web UI according to your actual needs.
