One-Click Linux Inspection

Table of Contents

1. What the Inspection Covers

2. Sample Inspection Output

3. The Inspection Script

4. Running the Script

5. Viewing the Inspection Report

6. Scheduling with cron


1. What the Inspection Covers

[1] Basic system information
[2] CPU information
[3] Memory usage
[4] Swap usage
[5] Disk usage
[6] Network configuration and connections
[7] Service status checks
[8] Security checks
[9] Login records
[10] System log checks
[11] Performance analysis

2. Sample Inspection Output

======================[1] Basic System Information========================
OS type: Linux
OS version: openEuler 20.03 (LTS-SP3)
Hostname: localhost.localdomain
CPU model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Memory (used/total): 27Gi/30Gi
Swap (used/total): 2.3Gi/20Gi
Locale: en_US.UTF-8
IP address: 192.168.34.52
Operating system: openEuler 20.03 (LTS-SP3)
Kernel: 4.19.90-2112.8.0.0131.oe1.x86_64
Boot time: 2023-03-17 10:15:07
Uptime: up 1 year, 44 weeks, 6 hours, 27 minutes
Days since boot: 672
Current system time: 2025-01-17 16:42:54
Logged-in users: 2
SELinux: disabled

======================[2] CPU Information==========================
Logical CPU cores: 4
Physical CPUs (sockets): 4
CPU architecture: x86_64
CPU model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
1-minute load average: 0.11
5-minute load average: 0.21
15-minute load average: 0.18
CPU usage: 3.20 %
CPU idle: 96.80 %
Top 10 processes by CPU usage:
USER         PID %CPU %MEM COMMAND
root     2841921  1.4  2.2 java -jar -Xmx2g /data/backend/spn-sec/sys-gateway-1.0-SNAPSHOT.jar
root     2851627  1.3  2.1 java -jar -Xmx2g /data/backend/spn-sec/sys-service-maintenance-1.0-SNAPSHOT.jar
root     3393683  1.3  2.5 java -jar -Xmx2g /data/backend/spn-sec/sys-service-upms-1.0-SNAPSHOT.jar
root     2848581  1.0  2.6 java -jar -Xmx2g /data/backend/spn-sec/sys-service-security-1.0-SNAPSHOT.jar
root     2617554  0.8  3.4 /usr/lib/jvm/java-1.8.0-openjdk/bin/java -Xms1g -Xmx1g -Xmn512m -Dnacos.standalone=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar --spring.config.additional-location=/home/nacos/init.d/,file:/home/nacos/conf/ --spring.config.name=application,custom --logging.config=/home/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
systemd+ 2617771  0.4  0.3 mongod --auth --bind_ip_all
root     2845410  0.3  2.5 java -jar -Xmx2g /data/backend/spn-sec/sys-service-receiver-1.0-SNAPSHOT.jar
systemd+ 2617980  0.2  4.0 /usr/local/lib/erlang/erts-12.2/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -B i -- -root /usr/local/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa  -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger [] -syslog syslog_error_logger false
root     2839828  0.2  2.1 java -jar -Xmx2g /data/backend/spn-sec/sys-service-admin-1.0-SNAPSHOT.jar
root      742024  0.1  0.3 /guanyu/guanyuagent

======================[3] Memory Usage==========================
              total        used        free      shared  buff/cache   available
Mem:          31652       27672        1864         183        2116        3304
Swap:         21119        2312       18807
Total memory: 31652 MB
Used memory: 27672 MB
Free memory: 1864 MB
Memory usage: 87.00 %
Top 10 processes by memory usage:
USER         PID %CPU %MEM COMMAND
systemd+ 2617737  0.0 52.2 mysqld
root     2601024  0.1  7.7 java -jar /data/backend/spn-portal/spn-portal-1.0-SNAPSHOT.jar --spring.profiles.active=dev
systemd+ 2617980  0.2  4.0 /usr/local/lib/erlang/erts-12.2/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -B i -- -root /usr/local/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa  -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger [] -syslog syslog_error_logger false
root     2617554  0.8  3.4 /usr/lib/jvm/java-1.8.0-openjdk/bin/java -Xms1g -Xmx1g -Xmn512m -Dnacos.standalone=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar --spring.config.additional-location=/home/nacos/init.d/,file:/home/nacos/conf/ --spring.config.name=application,custom --logging.config=/home/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
root     2848581  1.0  2.6 java -jar -Xmx2g /data/backend/spn-sec/sys-service-security-1.0-SNAPSHOT.jar
root     2600186  0.0  2.6 java -jar /data/backend/spn-gateway/spn-gateway-1.0-SNAPSHOT.jar --spring.profiles.active=dev
root     3393683  1.3  2.5 java -jar -Xmx2g /data/backend/spn-sec/sys-service-upms-1.0-SNAPSHOT.jar
root     2845410  0.3  2.5 java -jar -Xmx2g /data/backend/spn-sec/sys-service-receiver-1.0-SNAPSHOT.jar
root     2841921  1.4  2.2 java -jar -Xmx2g /data/backend/spn-sec/sys-gateway-1.0-SNAPSHOT.jar
root     2839828  0.2  2.1 java -jar -Xmx2g /data/backend/spn-sec/sys-service-admin-1.0-SNAPSHOT.jar

======================[4] Swap Usage==========================
Total swap: 21119 MB
Used swap: 2312 MB
Free swap: 18807 MB

======================[5] Disk Usage==========================
Filesystem                 Type      Size  Used Avail Use% Mounted on
devtmpfs                   devtmpfs   16G     0   16G   0% /dev
tmpfs                      tmpfs      16G     0   16G   0% /dev/shm
tmpfs                      tmpfs      16G  1.6G   14G  10% /run
tmpfs                      tmpfs      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/openeuler-root ext4       44G   26G   16G  62% /
tmpfs                      tmpfs      16G  780K   16G   1% /tmp
/dev/sda1                  ext4      976M  126M  783M  14% /boot
/dev/mapper/datavg-lvdata  ext4      295G   27G  254G  10% /data
tmpfs                      tmpfs     3.1G     0  3.1G   0% /run/user/0
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/034e767aae1ed3a8e48c1061512e80f757e366771b9015b17307c50819aa582a/merged
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/89414adb5e07a1a7bccacf6317c0a026ce8cfbb0da5b28c1445767896ad86818/merged
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/eb99e3c18e1a29a68edd227eb1e25652518b3ba6043386123b72c48407646dc5/merged
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/ad0557b39bc7d2117410ccf2307586ad1b59a229c4a8663b8593be4ebb1d4183/merged
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/0a4f42e4e1a9ac3bb5fb4ebc346d4fd1b124db943342f2490dbb359538b5c18f/merged
overlay                    overlay    44G   26G   16G  62% /var/lib/docker/overlay2/1c60a720b1fde3a9f4dc0b6240285b6920e9784c4f1614da4dbf9ad6917b0929/merged

======================[6] Network Configuration and Connections==========================
IP addresses: 192.168.34.52/24
172.17.0.1/16
Gateway: 192.168.34.1
DNS: 113.111.252.101 113.111.252.205
Network reachable: yes
Network interface status:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:99:fd:23 brd ff:ff:ff:ff:ff:ff
    inet 192.168.34.52/24 brd 192.168.34.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe99:fd23/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:82:a1:1e:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:82ff:fea1:1eb3/64 scope link 
       valid_lft forever preferred_lft forever
47: veth2b9c087@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 46:3f:96:80:25:8b brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::443f:96ff:fe80:258b/64 scope link 
       valid_lft forever preferred_lft forever
49: veth1930923@if48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether c6:9b:ee:ec:07:8f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::c49b:eeff:feec:78f/64 scope link 
       valid_lft forever preferred_lft forever
51: veth5762f1c@if50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 7a:cd:90:79:18:65 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::78cd:90ff:fe79:1865/64 scope link 
       valid_lft forever preferred_lft forever
53: veth9daceb2@if52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 7e:96:58:9e:17:5e brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::7c96:58ff:fe9e:175e/64 scope link 
       valid_lft forever preferred_lft forever
57: vethd70e538@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether be:7c:67:f4:2e:85 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::bc7c:67ff:fef4:2e85/64 scope link 
       valid_lft forever preferred_lft forever
59: vetha32f95b@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 0e:fb:fa:08:75:40 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::cfb:faff:fe08:7540/64 scope link 
       valid_lft forever preferred_lft forever
Network connection status:
Netid   State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port  Process                                                                         
udp     UNCONN   0        0                      *:60324                *:*      users:(("java",pid=2839828,fd=26))                                             
udp     UNCONN   0        0                      *:47738                *:*      users:(("java",pid=2845410,fd=31))                                             
udp     UNCONN   0        0                      *:33264                *:*      users:(("java",pid=2841921,fd=9))                                              
udp     UNCONN   0        0                      *:37285                *:*      users:(("java",pid=2848581,fd=31))                                             
udp     UNCONN   0        0                      *:38595                *:*      users:(("java",pid=3393683,fd=31))                                             
udp     UNCONN   0        0                      *:56305                *:*      users:(("java",pid=2851627,fd=26))                                             
tcp     LISTEN   0        128              0.0.0.0:8117           0.0.0.0:*      users:(("nginx",pid=2770380,fd=7),("nginx",pid=2770379,fd=7),("nginx",pid=2770378,fd=7),("nginx",pid=2770377,fd=7),("nginx",pid=2770376,fd=7))
tcp     LISTEN   0        128              0.0.0.0:8118           0.0.0.0:*      users:(("nginx",pid=2770380,fd=8),("nginx",pid=2770379,fd=8),("nginx",pid=2770378,fd=8),("nginx",pid=2770377,fd=8),("nginx",pid=2770376,fd=8))
tcp     LISTEN   0        128              0.0.0.0:22             0.0.0.0:*      users:(("sshd",pid=3290353,fd=3))                                              
tcp     LISTEN   0        128              0.0.0.0:8119           0.0.0.0:*      users:(("nginx",pid=2770380,fd=9),("nginx",pid=2770379,fd=9),("nginx",pid=2770378,fd=9),("nginx",pid=2770377,fd=9),("nginx",pid=2770376,fd=9))
tcp     LISTEN   0        128              0.0.0.0:15672          0.0.0.0:*      users:(("docker-proxy",pid=2617663,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:18888          0.0.0.0:*      users:(("nginx",pid=2770380,fd=6),("nginx",pid=2770379,fd=6),("nginx",pid=2770378,fd=6),("nginx",pid=2770377,fd=6),("nginx",pid=2770376,fd=6))
tcp     LISTEN   0        128              0.0.0.0:5672           0.0.0.0:*      users:(("docker-proxy",pid=2617678,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:9000           0.0.0.0:*      users:(("docker-proxy",pid=2617438,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:27017          0.0.0.0:*      users:(("docker-proxy",pid=2617475,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:9001           0.0.0.0:*      users:(("docker-proxy",pid=2617425,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:3306           0.0.0.0:*      users:(("docker-proxy",pid=2617571,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:6379           0.0.0.0:*      users:(("docker-proxy",pid=2617457,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:8848           0.0.0.0:*      users:(("docker-proxy",pid=2617406,fd=4))                                      
tcp     LISTEN   0        128              0.0.0.0:8114           0.0.0.0:*      users:(("nginx",pid=2770380,fd=10),("nginx",pid=2770379,fd=10),("nginx",pid=2770378,fd=10),("nginx",pid=2770377,fd=10),("nginx",pid=2770376,fd=10))
tcp     LISTEN   0        100                    *:8501                 *:*      users:(("java",pid=2839828,fd=27))                                             
tcp     LISTEN   0        128                 [::]:22                [::]:*      users:(("sshd",pid=3290353,fd=4))                                              
tcp     LISTEN   0        128                 [::]:15672             [::]:*      users:(("docker-proxy",pid=2617670,fd=4))                                      
tcp     LISTEN   0        100                    *:8701                 *:*      users:(("java",pid=2848581,fd=32))                                             
tcp     LISTEN   0        100                    *:8801                 *:*      users:(("java",pid=2851627,fd=27))                                             
tcp     LISTEN   0        128                    *:8901                 *:*      users:(("java",pid=2600186,fd=30))                                             
tcp     LISTEN   0        128                    *:8902                 *:*      users:(("java",pid=2601024,fd=48))                                             
tcp     LISTEN   0        128                 [::]:5672              [::]:*      users:(("docker-proxy",pid=2617686,fd=4))                                      
tcp     LISTEN   0        128                 [::]:9000              [::]:*      users:(("docker-proxy",pid=2617448,fd=4))                                      
tcp     LISTEN   0        128                    *:8201                 *:*      users:(("java",pid=2841921,fd=33))                                             
tcp     LISTEN   0        128                 [::]:27017             [::]:*      users:(("docker-proxy",pid=2617482,fd=4))                                      
tcp     LISTEN   0        128                 [::]:9001              [::]:*      users:(("docker-proxy",pid=2617432,fd=4))                                      
tcp     LISTEN   0        128                 [::]:3306              [::]:*      users:(("docker-proxy",pid=2617586,fd=4))                                      
tcp     LISTEN   0        128                 [::]:6379              [::]:*      users:(("docker-proxy",pid=2617464,fd=4))                                      
tcp     LISTEN   0        100                    *:8301                 *:*      users:(("java",pid=3393683,fd=32))                                             
tcp     LISTEN   0        100                    *:8719                 *:*      users:(("java",pid=3393683,fd=47))                                             
tcp     LISTEN   0        100                    *:8720                 *:*      users:(("java",pid=2841921,fd=39))                                             
tcp     LISTEN   0        128                 [::]:8848              [::]:*      users:(("docker-proxy",pid=2617415,fd=4))                                      
tcp     LISTEN   0        100                    *:8721                 *:*      users:(("java",pid=2851627,fd=58))                                             
tcp     LISTEN   0        100                    *:8401                 *:*      users:(("java",pid=2845410,fd=33))                                             
tcp     LISTEN   0        100                    *:8722                 *:*      users:(("java",pid=2848581,fd=47))                                             
======================[7] Service Status Checks==========================
Checking specific services (firewalld, sshd, nginx, apache2, mysqld):
firewalld status: not running
sshd status: running
nginx status: not running
apache2 status: not running
mysqld status: not running

========================[8] Security Checks============================
Recent SSH authentication failures:
Jan 13 09:22:48 localhost sshd[2126175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:49 localhost sshd[2126177]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126179]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126181]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126222]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126224]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126226]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126228]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:56 localhost sshd[2126230]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 16 16:25:24 localhost sshd[4075046]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.102.10  user=root

Local users (UID >= 1000):
nobody
test
test1
test2

========================[9] Login Records============================
Currently logged-in users:
root     pts/1        2025-01-17 14:16 (192.168.103.189)
root     pts/2        2025-01-17 15:29 (192.168.103.189)

Recent logins:
root     pts/2        Fri Jan 17 15:29   still logged in    192.168.103.189
root     pts/1        Fri Jan 17 14:16   still logged in    192.168.103.189
root     pts/0        Fri Jan 17 12:44 - 15:53  (03:08)     192.168.103.189
root     pts/1        Thu Jan 16 16:25 - 18:45  (02:20)     192.168.102.10
root     pts/0        Thu Jan 16 16:04 - 18:37  (02:32)     192.168.102.10
root     pts/0        Thu Jan 16 09:56 - 09:57  (00:01)     192.168.103.224
root     pts/0        Wed Jan 15 15:13 - 17:31  (02:18)     192.168.102.219
root     pts/0        Wed Jan 15 13:13 - 13:13  (00:00)     192.168.102.14
root     pts/0        Wed Jan 15 13:05 - 13:13  (00:07)     192.168.102.14
root     pts/1        Wed Jan 15 10:02 - 10:02  (00:00)     192.168.102.162

========================[10] System Log Checks============================
Failed login entries:
Jan 13 09:22:48 localhost sshd[2126175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:49 localhost sshd[2126177]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126179]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126181]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:51 localhost sshd[2126222]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126224]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126226]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:53 localhost sshd[2126228]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 13 09:22:56 localhost sshd[2126230]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.203.20  user=root
Jan 16 16:25:24 localhost sshd[4075046]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.102.10  user=root

System reboot history:
reboot   system boot  4.19.90-2112.8.0 Fri Mar 17 10:12   still running
reboot   system boot  4.19.90-2112.8.0 Fri Feb 17 09:19 - 09:22  (00:03)
reboot   system boot  4.19.90-2112.8.0 Tue Jan 17 10:12 - 09:22 (30+23:10)
reboot   system boot  4.19.90-2112.8.0 Tue Jan 10 13:59 - 10:02 (6+20:02)
reboot   system boot  4.19.90-2112.8.0 Tue Jan 10 13:50 - 13:59  (00:09)
reboot   system boot  4.19.90-2112.8.0 Mon Jan  9 14:19 - 13:59  (23:39)
reboot   system boot  4.19.90-2112.8.0 Mon Jan  9 10:33 - 14:19  (03:45)
reboot   system boot  4.19.90-2112.8.0 Mon Jan  9 10:13 - 10:33  (00:19)

wtmp begins Mon Jan  9 10:13:50 2023

========================[11] Performance Analysis============================
Top 5 processes by memory usage:
systemd+ 2617737  0.0 52.2 19443784 16945272 ?   Ssl  Jan14   3:12 mysqld
root     2601024  0.1  7.7 12182764 2505940 ?    Sl   Jan14   4:51 java -jar /data/backend/spn-portal/spn-portal-1.0-SNAPSHOT.jar --spring.profiles.active=dev
systemd+ 2617980  0.2  4.0 4735648 1309728 ?     Sl   Jan14  12:06 /usr/local/lib/erlang/erts-12.2/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -sbwt none -sbwtdcpu none -sbwtdio none -B i -- -root /usr/local/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa  -noshell -noinput -s rabbit boot -boot start_sasl -syslog logger [] -syslog syslog_error_logger false
root     2617554  0.8  3.4 5102084 1110368 ?     Ssl  Jan14  38:09 /usr/lib/jvm/java-1.8.0-openjdk/bin/java -Xms1g -Xmx1g -Xmn512m -Dnacos.standalone=true -Dnacos.member.list= -Djava.ext.dirs=/usr/lib/jvm/java-1.8.0-openjdk/jre/lib/ext:/usr/lib/jvm/java-1.8.0-openjdk/lib/ext:/home/nacos/plugins/health:/home/nacos/plugins/cmdb:/home/nacos/plugins/mysql -Xloggc:/home/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dnacos.home=/home/nacos -jar /home/nacos/target/nacos-server.jar --spring.config.additional-location=/home/nacos/init.d/,file:/home/nacos/conf/ --spring.config.name=application,custom --logging.config=/home/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
root     2848581  1.0  2.6 5976648 858488 ?      Sl   Jan14  47:43 java -jar -Xmx2g /data/backend/spn-sec/sys-service-security-1.0-SNAPSHOT.jar

Top 5 processes by CPU usage:
root      493951  2.0  0.0 213900  3384 pts/2    S+   16:42   0:00 /bin/bash ./linux_inspection.sh
root     2841921  1.4  2.2 6015372 733376 ?      Sl   Jan14  61:51 java -jar -Xmx2g /data/backend/spn-sec/sys-gateway-1.0-SNAPSHOT.jar
root     2851627  1.3  2.1 5991740 687836 ?      Sl   Jan14  60:53 java -jar -Xmx2g /data/backend/spn-sec/sys-service-maintenance-1.0-SNAPSHOT.jar
root     3393683  1.3  2.5 6002116 842516 ?      Sl   Jan15  43:01 java -jar -Xmx2g /data/backend/spn-sec/sys-service-upms-1.0-SNAPSHOT.jar
root     2848581  1.0  2.6 5976648 858488 ?      Sl   Jan14  47:43 java -jar -Xmx2g /data/backend/spn-sec/sys-service-security-1.0-SNAPSHOT.jar

=============================Inspection Complete============================
Report saved to: /opt/inspection_report_2025-01-17_16:42:54.log
Please review the system based on the findings above!

3. The Inspection Script

# Create the inspection script
vi /opt/linux_inspection.sh

Add the following content:

#!/bin/bash

# One-click Linux inspection script

LOG_FILE="/opt/inspection_report_$(date +%F_%T).log"

# Initialize the log file
echo "System Inspection Report" > "$LOG_FILE"
echo "Generated at: $(date)" >> "$LOG_FILE"

# Logging helper: print to the console and append to the report
log() {
    echo "$1" | tee -a "$LOG_FILE"
}

log ""
 
log "======================[1] Basic System Information========================"

os_release=$(cat /etc/os-release 2>/dev/null)
uptime_info=$(uptime)
lscpu_info=$(lscpu)
free_info=$(free -h)

log "OS type: $(uname -s)"
log "OS version: $(grep PRETTY_NAME <<< "$os_release" | cut -d '"' -f2)"
log "Hostname: $(hostname)"
log "CPU model: $(grep "Model name:" <<< "$lscpu_info" | sed 's/Model name:\s*//')"
log "Memory (used/total): $(awk '/^Mem:/ { print $3 "/" $2 }' <<< "$free_info")"
log "Swap (used/total): $(awk '/^Swap:/ { print $3 "/" $2 }' <<< "$free_info")"
log "Locale: ${LANG:-not set}"
log "IP address: $(hostname -I | cut -d' ' -f1)"
log "Operating system: $(grep PRETTY_NAME <<< "$os_release" | cut -d= -f2 | tr -d '"')"
log "Kernel: $(uname -r)"
log "Boot time: $(uptime -s)"
log "Uptime: $(uptime -p)"
# Field 3 of uptime output is the day count once the system has been up > 1 day
log "Days since boot: $(awk '{print $3}' <<< "$uptime_info")"
log "Current system time: $(date '+%F %T')"
log "Logged-in users: $(who | wc -l)"
# The SELinux state may not be readable from /etc/selinux/config, so try sestatus first
selinux_status=$(sestatus 2>/dev/null | awk '/^SELinux status:/ {print $NF}')
if [ -z "$selinux_status" ]; then
    selinux_status=$(grep "^SELINUX=" /etc/selinux/config 2>/dev/null | awk -F= '{print $2}')
fi
log "SELinux: ${selinux_status:-not installed or unavailable}"
log ""

log "======================[2] CPU Information=========================="
cpuinfo=$(cat /proc/cpuinfo)
loadavg=$(cat /proc/loadavg)
# Grab the idle percentage once instead of running top repeatedly.
# Field 8 of the "%Cpu(s):" line is the idle value; the field position can
# shift between top versions and locales, so verify it on your system.
cpu_idle=$(top -bn1 | grep '%Cpu' | awk '{printf("%.2f\n", $8)}')

# Logical CPU cores
logical_cpu_cores=$(grep -c '^processor' <<< "$cpuinfo")
log "Logical CPU cores: $logical_cpu_cores"

# Physical CPUs (unique "physical id" entries, i.e. sockets)
physical_cpu_cores=$(grep -o 'physical id.*' <<< "$cpuinfo" | sort -u | wc -l)
log "Physical CPUs (sockets): $physical_cpu_cores"

# CPU architecture
cpu_architecture=$(uname -m)
log "CPU architecture: $cpu_architecture"

# CPU model
cpu_model=$(grep "model name" <<< "$cpuinfo" | awk -F: '{print $2}' | sort -u | sed 's/^[[:space:]]*//')
log "CPU model: $cpu_model"

# Load averages (1, 5 and 15 minutes)
log "1-minute load average: $(awk '{print $1}' <<< "$loadavg")"
log "5-minute load average: $(awk '{print $2}' <<< "$loadavg")"
log "15-minute load average: $(awk '{print $3}' <<< "$loadavg")"

# Derive the usage percentage from the idle percentage
cpu_usage=$(echo "100 - $cpu_idle" | bc)
log "CPU usage: $cpu_usage %"
log "CPU idle: $cpu_idle %"

# Top 10 processes by CPU usage
log "Top 10 processes by CPU usage:"
ps -eo user,pid,pcpu,pmem,args --sort=-pcpu | head -n 11 | tee -a "$LOG_FILE"
log ""

log "======================[3] Memory Usage=========================="
memory_info=$(free -m)
echo "$memory_info" | tee -a "$LOG_FILE"
total_memory=$(echo "$memory_info" | awk 'NR==2 {print $2}')
used_memory=$(echo "$memory_info" | awk 'NR==2 {print $3}')
free_memory=$(echo "$memory_info" | awk 'NR==2 {print $4}')
memory_usage_percentage=$(echo "scale=2; $used_memory / $total_memory * 100" | bc)

log "Total memory: $total_memory MB"
log "Used memory: $used_memory MB"
log "Free memory: $free_memory MB"
log "Memory usage: $memory_usage_percentage %"

# Top 10 processes by memory usage
log "Top 10 processes by memory usage:"
ps -eo user,pid,pcpu,pmem,args --sort=-%mem | head -n 11 | tee -a "$LOG_FILE"
log ""

log "======================[4] Swap Usage=========================="
swap_info=$(free -m)
total_swap=$(echo "$swap_info" | awk 'NR==3 {print $2}')
used_swap=$(echo "$swap_info" | awk 'NR==3 {print $3}')
free_swap=$(echo "$swap_info" | awk 'NR==3 {print $4}')

log "Total swap: $total_swap MB"
log "Used swap: $used_swap MB"
log "Free swap: $free_swap MB"
log ""

log "======================[5] Disk Usage=========================="
df -hT | tee -a "$LOG_FILE"
log ""
 
log "======================[6] Network Configuration and Connections=========================="
# Record the IPv4 addresses (global scope only; excludes loopback)
ip_addresses=$(ip -4 addr show scope global | awk '/inet/ {print $2}')
log "IP addresses: $ip_addresses"

# Default gateway
gateway=$(ip route | grep default | awk '{print $3}')
log "Gateway: $gateway"

# DNS servers
dns_servers=$(grep "nameserver" /etc/resolv.conf | awk '{print $2}' | tr '\n' ' ' | sed 's/ $/\n/')
log "DNS: $dns_servers"

# Check external connectivity
if ping -c 2 -w 2 www.baidu.com &>/dev/null; then
  log "Network reachable: yes"
else
  log "Network reachable: no"
fi

# Network interface status
log "Network interface status:"
ip addr show | tee -a "$LOG_FILE"

# Listening sockets and their owning processes
log "Network connection status:"
ss -tunlp | tee -a "$LOG_FILE"
 
log "======================[7] Service Status Checks=========================="

log "Checking specific services (firewalld, sshd, nginx, apache2, mysqld):"
# Services to check; adjust for your distribution
# (e.g. Apache is "httpd" and MySQL is often "mariadb" on RHEL-family systems)
declare -a services=("firewalld" "sshd" "nginx" "apache2" "mysqld")
# Check and record the state of each service
for service in "${services[@]}"; do
    if systemctl is-active --quiet "$service"; then
        log "$service status: running"
    else
        log "$service status: not running"
    fi
done
log ""
 
log "========================[8] Security Checks============================"
log "Recent SSH authentication failures:"
# RHEL-family systems log to /var/log/secure, Debian-family to /var/log/auth.log
auth_log_file=$(find /var/log \( -name "secure" -o -name "auth.log" \) 2>/dev/null | head -n 1)
if [ -f "$auth_log_file" ]; then
    grep "authentication failure" "$auth_log_file" | tail -10 | tee -a "$LOG_FILE"
else
    log "No security log file found"
fi
log ""

log "Local users (UID >= 1000):"
awk -F: '{if ($3 >= 1000) print $1}' /etc/passwd | tee -a "$LOG_FILE"
log ""
 
 
log "========================[9] Login Records============================"
log "Currently logged-in users:"
who | tee -a "$LOG_FILE"
log ""

log "Recent logins:"
last -a | head -10 | tee -a "$LOG_FILE"
log ""

log "========================[10] System Log Checks============================"
log "Failed login entries:"
# Test for the file explicitly: a "|| log" after tee would check tee's exit
# status, not grep's, so the fallback message could never fire
if [ -f /var/log/secure ]; then
    grep "authentication failure" /var/log/secure | tail -10 | tee -a "$LOG_FILE"
else
    log "/var/log/secure not found"
fi
log ""

log "System reboot history:"
last reboot | head -10 | tee -a "$LOG_FILE"
log ""
 
 
log "========================[11] Performance Analysis============================"
log "Top 5 processes by memory usage:"
ps aux --sort=-%mem | tail -n +2 | head -5 | tee -a "$LOG_FILE"
log ""

log "Top 5 processes by CPU usage:"
ps aux --sort=-%cpu | tail -n +2 | head -5 | tee -a "$LOG_FILE"
log ""

log "=============================Inspection Complete============================"
log "Report saved to: $LOG_FILE"
log "Please review the system based on the findings above!"
log ""

4. Running the Script

# Make the script executable
chmod +x /opt/linux_inspection.sh

# Run it
/opt/linux_inspection.sh

# Or invoke it through bash directly (use bash, not sh: the script relies
# on bash-only features such as here-strings and arrays)
bash /opt/linux_inspection.sh
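If you edit the script, you can catch quoting or control-flow mistakes before a full run by letting bash parse it without executing anything; a minimal sketch, assuming the script lives at /opt/linux_inspection.sh:

```shell
# Parse-only check: bash -n reports syntax errors (unbalanced quotes,
# missing fi/done, ...) and exits non-zero without running the script
script=/opt/linux_inspection.sh
if [ -f "$script" ]; then
    if bash -n "$script"; then
        echo "syntax OK"
    else
        echo "syntax errors found" >&2
    fi
fi
```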

5. Viewing the Inspection Report

After the script finishes, the report is saved under /opt/ with a name of the form inspection_report_YYYY-MM-DD_HH:MM:SS.log. View it with:

less /opt/inspection_report_2025-01-17_16:42:54.log
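Since each run creates a new timestamped file, you can also open the most recent report without typing the timestamp by hand; a small sketch assuming the /opt naming convention above (adjust the glob if you changed LOG_FILE in the script):

```shell
# Pick the newest report by modification time (ls -t sorts newest first)
latest=$(ls -t /opt/inspection_report_*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    less "$latest"
fi
```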

6. Scheduling with cron

Run the script on a schedule with cron:

crontab -e

Add a job that runs the script every day at 9:00 AM (again, use bash rather than sh, since the script uses bash features):

0 9 * * * /bin/bash /opt/linux_inspection.sh
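Note that cron jobs run without a terminal, so the console half of the script's tee output is discarded (the report file under /opt/ is still written). If you also want a record of each scheduled run and any errors, you can redirect stdout and stderr in the crontab entry; the log path below is just an example, not something the script creates:

```shell
# Same job, but append each run's console output and errors to a log file
0 9 * * * /bin/bash /opt/linux_inspection.sh >> /var/log/inspection_cron.log 2>&1
```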

Author: 乌托邦的逃亡者