k8s-manager: a Kubernetes command-line management tool built on cobra
If you find it useful, please give it a Star!
Project introduction: GitHub repository (the code is open source)
- https://github.com/qinlang258/cobra-k8s-manager/tree/main
- Binary downloads: https://github.com/qinlang258/cobra-k8s-manager/releases/ (choose the v1.0.3 release)
- The k8s-manager tool ships an fsnotify watcher that monitors /root/.kube/config for changes and updates the prometheus.yaml configuration file accordingly
- New: output can be exported to Excel
The tool provides the following subcommands: analysis, image, node, resource, top, and config.
Usage: download the release from the project's GitHub page and run k8s-manager directly; no extra configuration is required. Run k8s-manager config to read the yaml files under /root/.kube/ and generate the configuration file used to query Prometheus. If you need Prometheus monitoring data, you can also adjust /root/.kube/jcrose-prometheus/prometheus.yaml manually.
jcrose's Kubernetes management tool covers the following features:
1 analysis  Analyze the resource usage of a node, similar to kubectl describe node xxx
2 config    Generate the /root/.kube/jcrose-prometheus/prometheus.yaml configuration file, reading the yaml files under the default path /root/.kube/ to collect the Prometheus addresses
3 image     List image addresses
4 node      Show information for all nodes
5 resource  Show the limits and requests of each pod together with the java_opts environment variable; add -p to also fetch the cluster's average memory and CPU usage over the last 7 days
6 top       Show the actual resource consumption of containers
Usage:
k8s-manager [flags]
k8s-manager [command]
Available Commands:
analysis 分析某一节点的资源使用情况
completion Generate the autocompletion script for the specified shell
config 初始化prometheus配置文件
help Help about any command
image 获取镜像信息
node 获取节点的资源信息
resource 获取pod资源的相关 Limit与Resource信息
top 获取容器的实际使用资源开销
Flags:
--export 是否输出Excel?默认不输出
-h, --help help for k8s-manager
--kubeconfig string 请输入 kubeconfig的文件路径 (default "/root/.kube/config")
Use "k8s-manager [command] --help" for more information about a command.
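The CLI is built with cobra, so the global flags above are inherited by every subcommand. As a rough illustration only (not the project's actual cmd/ package), this is how such persistent flags are typically wired on a root command; the node subcommand body is just a placeholder:

```go
// Hedged sketch: standard cobra wiring for a root command with persistent
// --kubeconfig and --export flags. The real k8s-manager cmd/ package may differ.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

var (
	kubeconfig string
	export     bool
)

func main() {
	rootCmd := &cobra.Command{
		Use:   "k8s-manager",
		Short: "jcrose's Kubernetes management tool",
	}
	// Persistent flags are visible to every subcommand.
	rootCmd.PersistentFlags().StringVar(&kubeconfig, "kubeconfig", "/root/.kube/config", "path to the kubeconfig file")
	rootCmd.PersistentFlags().BoolVar(&export, "export", false, "export the output as an Excel file")

	nodeCmd := &cobra.Command{
		Use:   "node",
		Short: "show resource information for every node",
		Run: func(cmd *cobra.Command, args []string) {
			// Placeholder for the real listing logic.
			fmt.Println("would list nodes using kubeconfig:", kubeconfig, "export:", export)
		},
	}
	rootCmd.AddCommand(nodeCmd)

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```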
Common options
- All commands accept --kubeconfig to specify the kubeconfig file:
  ./k8s-manager --kubeconfig <path-to-kubeconfig>
- Add --export to export the current output to an Excel file.
- Watch for /root/.kube/config changes caused by switching clusters. Once the command below is running, a cluster switch that rewrites the kubeconfig also updates the Prometheus address that the /root/.kube/config entry maps to in prometheus.yaml (a minimal watcher sketch follows):
  nohup pkg/fsnotify/fsnotify &   # the fsnotify binary can also be downloaded from GitHub
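A minimal sketch of such a watcher, assuming the github.com/fsnotify/fsnotify library (the bundled pkg/fsnotify binary may be implemented differently); the regeneration step is shown only as a hypothetical helper call:

```go
// Hedged sketch: watch /root/.kube/config and react when a cluster switch
// rewrites it, which is when prometheus.yaml would need to be regenerated.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the kubeconfig that cluster-switch tools overwrite.
	if err := watcher.Add("/root/.kube/config"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Write/create/rename events indicate the kubeconfig was replaced.
			if event.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Rename) != 0 {
				log.Printf("kubeconfig changed (%s), regenerating prometheus.yaml", event.Name)
				// regeneratePrometheusConfig() // hypothetical helper: re-run the
				// same discovery as `k8s-manager config` for the new cluster
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```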
1 node: show the resource information of all nodes
Examples
1 Get the resource information of all nodes
./k8s-manager node
Example output
root@k8s:/usr/local/cobra-k8s-manager# k8s-manager node
+--------+----------------+--------------------+-------------+---------------------------+-------------+-----------+-------------------------+--------------+------------+--------------------------+
| 节点名 | 节点IP | OS镜像 | KUBELET版本 | CONTAINER RUNTIME VERSION | 已使用的CPU | CPU总大小 | CPU使用占服务器的百分比 | 已使用的内存 | 内存总大小 | 内存使用占服务器的百分比 |
+--------+----------------+--------------------+-------------+---------------------------+-------------+-----------+-------------------------+--------------+------------+--------------------------+
| k8s | 192.168.44.134 | Ubuntu 22.04.4 LTS | v1.26.7 | containerd://1.6.8 | 363.00m | 8000m | 4.54% | 6585.25Mi | 7901.89Mi | 83.34% |
+--------+----------------+--------------------+-------------+---------------------------+-------------+-----------+-------------------------+--------------+------------+--------------------------+
2 analysis: break down the resource usage on a node
1 Analyze the resource consumption of every container on a given node
./k8s-manager analysis --node <node-name>
Example output
root@k8s:/usr/local/cobra-k8s-manager# k8s-manager analysis --node k8s
E1222 05:55:41.502661 75816 analysis.go:46] context.BackgroundError fetching metrics for pod ingress-nginx-admission-create-r24ff: pods "ingress-nginx-admission-create-r24ff" not found
E1222 05:55:41.504124 75816 analysis.go:46] context.BackgroundError fetching metrics for pod ingress-nginx-admission-patch-w5gnn: pods "ingress-nginx-admission-patch-w5gnn" not found
+--------+---------------+--------------------------------------------------+---------------------------------+-----------------+-------------------------+------------------+--------------------------+
| 节点名 | NAMESPACE | POD NAME | 容器名 | 当前已使用的CPU | CPU使用占服务器的百分比 | 当前已使用的内存 | 内存使用占服务器的百分比 |
+--------+---------------+--------------------------------------------------+---------------------------------+-----------------+-------------------------+------------------+--------------------------+
| k8s | kube-system | kube-apiserver-k8s | kube-apiserver | 35.00m | 0.44% | 492.68m | 6.24% |
| k8s | monitoring | prometheus-k8s-0 | prometheus | 10.00m | 0.12% | 383.23m | 4.85% |
| k8s | monitoring | prometheus-k8s-1 | prometheus | 17.00m | 0.21% | 342.59m | 4.34% |
| k8s | ingress-nginx | ingress-nginx-controller-f87d69b54-t8kd8 | controller | 1.00m | 0.01% | 166.57m | 2.11% |
| k8s | monitoring | grafana-9bb74449d-8m8xl | grafana | 4.00m | 0.05% | 134.36m | 1.70% |
| k8s | kube-system | calico-node-7zbfk | calico-node | 24.00m | 0.30% | 102.68m | 1.30% |
| k8s | kube-system | etcd-k8s | etcd | 18.00m | 0.22% | 86.12m | 1.09% |
| k8s | kube-system | kube-controller-manager-k8s | kube-controller-manager | 11.00m | 0.14% | 85.29m | 1.08% |
| k8s | monitoring | prometheus-adapter-854d95bc45-pvfh7 | prometheus-adapter | 3.00m | 0.04% | 58.35m | 0.74% |
| k8s | monitoring | prometheus-operator-57cf88fbcb-wks8t | prometheus-operator | 1.00m | 0.01% | 45.33m | 0.57% |
| k8s | kube-system | coredns-5bbd96d687-x9qmp | coredns | 2.00m | 0.03% | 43.17m | 0.55% |
| k8s | kube-system | metrics-server-7d5c696976-wlms5 | metrics-server | 3.00m | 0.04% | 38.56m | 0.49% |
| k8s | kube-system | kube-scheduler-k8s | kube-scheduler | 2.00m | 0.03% | 35.71m | 0.45% |
| k8s | monitoring | node-exporter-6cqz8 | kube-rbac-proxy | 1.00m | 0.01% | 34.83m | 0.44% |
| k8s | monitoring | kube-state-metrics-79996cfcc5-5286s | kube-state-metrics | 1.00m | 0.01% | 33.56m | 0.42% |
| k8s | kube-system | calico-kube-controllers-57b57c56f-d5p6n | calico-kube-controllers | 1.00m | 0.01% | 31.89m | 0.40% |
| k8s | monitoring | prometheus-adapter-854d95bc45-tz822 | prometheus-adapter | 3.00m | 0.04% | 30.73m | 0.39% |
| k8s | kube-system | kube-proxy-nb9tq | kube-proxy | 1.00m | 0.01% | 29.99m | 0.38% |
| k8s | monitoring | alertmanager-main-1 | alertmanager | 2.00m | 0.03% | 29.04m | 0.37% |
| k8s | kube-system | coredns-5bbd96d687-thl59 | coredns | 2.00m | 0.03% | 29.00m | 0.37% |
| k8s | monitoring | alertmanager-main-0 | alertmanager | 2.00m | 0.03% | 27.72m | 0.35% |
| k8s | monitoring | alertmanager-main-2 | alertmanager | 2.00m | 0.03% | 27.71m | 0.35% |
| k8s | monitoring | alertmanager-main-0 | config-reloader | 0.00m | 0.00% | 22.56m | 0.29% |
| k8s | monitoring | blackbox-exporter-59dddb7bb6-8lp69 | blackbox-exporter | 1.00m | 0.01% | 21.18m | 0.27% |
| k8s | monitoring | prometheus-k8s-1 | config-reloader | 1.00m | 0.01% | 20.79m | 0.26% |
| k8s | monitoring | alertmanager-main-1 | config-reloader | 1.00m | 0.01% | 20.40m | 0.26% |
| k8s | monitoring | prometheus-k8s-0 | config-reloader | 0.00m | 0.00% | 20.16m | 0.26% |
| k8s | monitoring | alertmanager-main-2 | config-reloader | 1.00m | 0.01% | 18.77m | 0.24% |
| k8s | nfs | nfs-subdir-external-provisioner-65664b8954-qrs2q | nfs-subdir-external-provisioner | 1.00m | 0.01% | 17.30m | 0.22% |
| k8s | monitoring | node-exporter-6cqz8 | node-exporter | 13.00m | 0.16% | 15.90m | 0.20% |
| k8s | monitoring | kube-state-metrics-79996cfcc5-5286s | kube-rbac-proxy-main | 1.00m | 0.01% | 9.80m | 0.12% |
| k8s | monitoring | prometheus-operator-57cf88fbcb-wks8t | kube-rbac-proxy | 1.00m | 0.01% | 9.67m | 0.12% |
| k8s | monitoring | blackbox-exporter-59dddb7bb6-8lp69 | kube-rbac-proxy | 1.00m | 0.01% | 9.62m | 0.12% |
| k8s | monitoring | kube-state-metrics-79996cfcc5-5286s | kube-rbac-proxy-self | 1.00m | 0.01% | 9.15m | 0.12% |
| k8s | monitoring | blackbox-exporter-59dddb7bb6-8lp69 | module-configmap-reloader | 0.00m | 0.00% | 4.00m | 0.05% |
+--------+---------------+--------------------------------------------------+---------------------------------+-----------------+-------------------------+------------------+--------------------------+
3 image: list the image addresses in a namespace
Examples
1 List the image addresses of all namespaces
./k8s-manager image
2 List the image addresses of a specific namespace
./k8s-manager image -n <namespace>
Example output
root@k8s:/usr/local/cobra-k8s-manager# k8s-manager image
+---------------+--------------+---------------------------------+---------------------------------+---------------------------------------------------------------------------------+
| NAMESPACE | 资源类型 | 资源名 | 容器名 | 镜像地址 |
+---------------+--------------+---------------------------------+---------------------------------+---------------------------------------------------------------------------------+
| ingress-nginx | deployment | ingress-nginx-controller | controller | registry.cn-zhangjiakou.aliyuncs.com/jcrose-k8s/ingress-nginx-controller:v1.7.0 |
| kube-system | deployment | calico-kube-controllers | calico-kube-controllers | docker.io/calico/kube-controllers:v3.25.0 |
| kube-system | deployment | coredns | coredns | registry.aliyuncs.com/google_containers/coredns:v1.9.3 |
| kube-system | deployment | metrics-server | metrics-server | k8s.dockerproxy.net/metrics-server/metrics-server:v0.7.2 |
| kube-system | daemonsets | calico-node | calico-node | docker.io/calico/node:v3.25.0 |
| kube-system | daemonsets | kube-proxy | kube-proxy | registry.aliyuncs.com/google_containers/kube-proxy:v1.26.7 |
| monitoring | deployment | blackbox-exporter | blackbox-exporter | quay.io/prometheus/blackbox-exporter:v0.24.0 |
| monitoring | deployment | blackbox-exporter | module-configmap-reloader | jimmidyson/configmap-reload:v0.5.0 |
| monitoring | deployment | blackbox-exporter | kube-rbac-proxy | quay.io/brancz/kube-rbac-proxy:v0.14.2 |
| monitoring | deployment | grafana | grafana | grafana/grafana:9.5.3 |
| monitoring | deployment | kube-state-metrics | kube-state-metrics | bitnami/kube-state-metrics:2.9.2 |
| monitoring | deployment | kube-state-metrics | kube-rbac-proxy-main | quay.io/brancz/kube-rbac-proxy:v0.14.2 |
| monitoring | deployment | kube-state-metrics | kube-rbac-proxy-self | quay.io/brancz/kube-rbac-proxy:v0.14.2 |
| monitoring | deployment | prometheus-adapter | prometheus-adapter | xuxiaoweicomcn/prometheus-adapter:v0.11.1 |
| monitoring | deployment | prometheus-operator | prometheus-operator | quay.io/prometheus-operator/prometheus-operator:v0.67.1 |
| monitoring | deployment | prometheus-operator | kube-rbac-proxy | quay.io/brancz/kube-rbac-proxy:v0.14.2 |
| monitoring | statefulsets | alertmanager-main | alertmanager | quay.io/prometheus/alertmanager:v0.26.0 |
| monitoring | statefulsets | alertmanager-main | config-reloader | quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1 |
| monitoring | statefulsets | prometheus-k8s | prometheus | quay.io/prometheus/prometheus:v2.46.0 |
| monitoring | statefulsets | prometheus-k8s | config-reloader | quay.io/prometheus-operator/prometheus-config-reloader:v0.67.1 |
| monitoring | daemonsets | node-exporter | node-exporter | quay.io/prometheus/node-exporter:v1.6.1 |
| monitoring | daemonsets | node-exporter | kube-rbac-proxy | quay.io/brancz/kube-rbac-proxy:v0.14.2 |
| nfs | deployment | nfs-subdir-external-provisioner | nfs-subdir-external-provisioner | dyrnq/nfs-subdir-external-provisioner:v4.0.2 |
+---------------+--------------+---------------------------------+---------------------------------+---------------------------------------------------------------------------------+
4 resource: show the limits and requests in a namespace
New: the Java -Xmx/-Xms values are now displayed. The output gives an at-a-glance view of the CPU and memory limits and requests, the average memory and CPU usage over the last 7 days queried from Prometheus (already converted to the same format that kubectl top po prints), and the -Xmx/-Xms values from java_opts, which together can serve as a reference when tuning resources.
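For reference, a 7-day average like this can be fetched through the Prometheus HTTP API and converted into the same m/Mi units that kubectl top po prints. The sketch below uses the official client_golang API client; the metric names, label filters, and PromQL are assumptions based on a typical kube-prometheus setup and may differ from the project's actual queries:

```go
// Hedged sketch: query 7-day average CPU/memory per container from Prometheus
// and print the values in millicores / MiB, mirroring kubectl top pod units.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

func main() {
	// Hypothetical Prometheus address; k8s-manager reads it from prometheus.yaml.
	client, err := api.NewClient(api.Config{Address: "http://prometheus.example.com"})
	if err != nil {
		log.Fatal(err)
	}
	v1api := promv1.NewAPI(client)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	queries := map[string]string{
		// 7-day average working-set memory per container, in bytes.
		"memory": `avg_over_time(container_memory_working_set_bytes{namespace="freshx",container!="",container!="POD"}[7d])`,
		// 7-day average CPU usage per container, in cores (subquery over 5m rates).
		"cpu": `avg_over_time(rate(container_cpu_usage_seconds_total{namespace="freshx",container!="",container!="POD"}[5m])[7d:5m])`,
	}

	for name, q := range queries {
		result, warnings, err := v1api.Query(ctx, q, time.Now())
		if err != nil {
			log.Fatalf("%s query failed: %v", name, err)
		}
		if len(warnings) > 0 {
			log.Println("warnings:", warnings)
		}
		vec, ok := result.(model.Vector)
		if !ok {
			continue
		}
		for _, s := range vec {
			if name == "memory" {
				fmt.Printf("%s/%s: %.2fMi\n", s.Metric["pod"], s.Metric["container"], float64(s.Value)/1024/1024)
			} else {
				fmt.Printf("%s/%s: %.2fm\n", s.Metric["pod"], s.Metric["container"], float64(s.Value)*1000)
			}
		}
	}
}
```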
Examples
1 Get the limits and requests of all namespaces
./k8s-manager resource
2 Get the limits and requests of a specific namespace
./k8s-manager resource -n <namespace>
3 Query Prometheus for the memory and CPU usage over the last 7 days (enables the Prometheus monitoring data)
./k8s-manager resource prometheus -u <prometheus-url>
Example output: get the requests and limits, plus the actual consumption queried from Prometheus
root@k8s:/usr/local/cobra-k8s-manager# go run main.go resource -n freshx -p
所使用的prometheus地址是: http://prometheus.dc-prod.yunlizhi.net
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| 节点名 | NAMESPACE | POD NAME | 容器名 | CPU限制 | CPU所需 | 最近7天已使用的CPU | 内存限制 | 内存所需 | JAVA-XMX | JAVA-XMS | 最近7天已使用的内存 |
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| prod-dc-10.22.126.151-zjk-ali | freshx | fe-freshx-customer-57c65f95bc-c5tbk | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 334.07Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | fe-freshx-customer-57c65f95bc-lbv7l | rest-service | 1 | 100m | 0.02m | 1Gi | 1Gi | | | 334.25Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | fe-freshx-platform-647f444577-gfv7s | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.55Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | fe-freshx-platform-647f444577-tt6xz | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.67Mi |
| prod-dc-10.22.126.151-zjk-ali | freshx | fe-freshx-procure-7b79d65c57-7pqrl | rest-service | 1 | 100m | 0.08m | 1Gi | 1Gi | | | 333.70Mi |
| prod-dc-10.22.209.105-zjk-ali | freshx | fe-freshx-procure-7b79d65c57-qgrxq | rest-service | 1 | 100m | 3.04m | 1Gi | 1Gi | | | 334.02Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | fe-freshx-supplier-5fd5ff9c5c-2wrrs | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 334.43Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | fe-freshx-supplier-5fd5ff9c5c-4d94k | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.90Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | feishu-companion-fd4f8c8fb-pls8l | rest-service | 2 | 200m | 3.75m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1358.15Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | feishu-companion-fd4f8c8fb-xhd46 | rest-service | 2 | 200m | 3.39m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1008.57Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | kafka-0 | kafka-broker | 2 | 100m | 30.51m | 4Gi | 4Gi | | | 1673.03Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | kafka-1 | kafka-broker | 2 | 100m | 33.30m | 4Gi | 4Gi | | | 1682.64Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | kafka-2 | kafka-broker | 2 | 100m | 33.90m | 4Gi | 4Gi | | | 1743.94Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | kafka-zookeeper-0 | zookeeper | 0 | 0 | 1.16m | 0 | 0 | | | 385.40Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | kafka-zookeeper-1 | zookeeper | 0 | 0 | 0.96m | 0 | 0 | | | 360.83Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | kafka-zookeeper-2 | zookeeper | 0 | 0 | 0.99m | 0 | 0 | | | 351.36Mi |
| prod-dc-10.22.6.228-zjk-ali | freshx | openapi-biz-67bd9555d-57krs | rest-service | 2 | 200m | 6.49m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1219.90Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | openapi-biz-67bd9555d-hkssw | rest-service | 2 | 200m | 5.05m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1222.02Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | reach-system-bd5fc4694-9nxmh | rest-service | 2 | 200m | 10.34m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1114.30Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | reach-system-bd5fc4694-v5pwf | rest-service | 2 | 200m | 12.46m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1115.27Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | saas-auth-77d66b4444-ct7v2 | rest-service | 2 | 200m | 5.42m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1105.18Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | saas-auth-77d66b4444-gtj9j | rest-service | 2 | 200m | 5.56m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1088.73Mi |
| prod-dc-10.22.209.105-zjk-ali | freshx | saas-freshx-biz-bdf68c6d4-bvzhw | rest-service | 2 | 200m | 26.03m | 6G | 6G | -Xmx3400m | -Xms3400m | 1960.09Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | saas-freshx-biz-bdf68c6d4-gphdf | rest-service | 2 | 200m | 25.86m | 6G | 6G | -Xmx3400m | -Xms3400m | 1921.45Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | saas-gateway-65f9b78d5b-dxm5b | rest-service | 2 | 200m | 16.67m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1436.83Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | saas-gateway-65f9b78d5b-q5ttb | rest-service | 2 | 200m | 14.20m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1665.02Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | saas-register-68f7bc75f5-qqjq7 | rest-service | 2 | 200m | 9.05m | 4G | 4G | -Xmx3096m | -Xms3096m | 1397.01Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | saas-upms-biz-6885b4b954-kl2pd | rest-service | 3 | 400m | 5.80m | 4Gi | 4Gi | -Xmx3200m | -Xms3200m | 1700.01Mi |
| prod-dc-10.22.6.228-zjk-ali | freshx | saas-upms-biz-6885b4b954-ln929 | rest-service | 3 | 400m | 6.27m | 4Gi | 4Gi | -Xmx3200m | -Xms3200m | 1701.80Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | saas-xxl-job-admin-849dbc777b-gltxm | rest-service | 2 | 200m | 4.39m | 2560Mi | 2560Mi | -Xmx2048m | -Xms2048m | 1193.35Mi |
| prod-dc-10.22.209.88-zjk-ali | freshx | saas-xxl-job-admin-849dbc777b-r9z4t | rest-service | 2 | 200m | 4.72m | 2560Mi | 2560Mi | -Xmx2048m | -Xms2048m | 1261.38Mi |
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
5 top: show the resource usage of a namespace
This command works at the namespace level and cannot top nodes; for node information, use k8s-manager node or k8s-manager analysis --node instead.
1 Get the resource consumption of all namespaces
./k8s-manager top
2 Get the resource consumption of a specific namespace
./k8s-manager top -n <namespace>
Example output
root@jcrose 当前k8s集群:prod-dc-k8s-zjk-aliyun 当前namespace: null #/usr/local/github/cobra-k8s-manager go run main.go resource -n freshx -p
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| 节点名 | NAMESPACE | POD NAME | 容器名 | CPU限制 | CPU所需 | 最近7天已使用的CPU | 内存限制 | 内存所需 | JAVA-XMX | JAVA-XMS | 最近7天已使用的内存 |
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| prod-dc-10.22.126.151-zjk-ali | freshx | fe-freshx-customer-57c65f95bc-c5tbk | rest-service | 1 | 100m | 0.01m | 1Gi | 1Gi | | | 334.07Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | fe-freshx-customer-57c65f95bc-lbv7l | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 334.18Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | fe-freshx-platform-647f444577-gfv7s | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.55Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | fe-freshx-platform-647f444577-tt6xz | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.67Mi |
| prod-dc-10.22.126.151-zjk-ali | freshx | fe-freshx-procure-7b79d65c57-7pqrl | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.69Mi |
| prod-dc-10.22.209.105-zjk-ali | freshx | fe-freshx-procure-7b79d65c57-qgrxq | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 334.02Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | fe-freshx-supplier-5fd5ff9c5c-2wrrs | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 334.43Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | fe-freshx-supplier-5fd5ff9c5c-4d94k | rest-service | 1 | 100m | 0.00m | 1Gi | 1Gi | | | 333.90Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | feishu-companion-fd4f8c8fb-pls8l | rest-service | 2 | 200m | 1.81m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1358.13Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | feishu-companion-fd4f8c8fb-xhd46 | rest-service | 2 | 200m | 4.34m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1008.54Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | kafka-0 | kafka-broker | 2 | 100m | 42.58m | 4Gi | 4Gi | | | 1673.03Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | kafka-1 | kafka-broker | 2 | 100m | 29.96m | 4Gi | 4Gi | | | 1682.62Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | kafka-2 | kafka-broker | 2 | 100m | 31.34m | 4Gi | 4Gi | | | 1743.96Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | kafka-zookeeper-0 | zookeeper | 0 | 0 | 1.16m | 0 | 0 | | | 385.40Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | kafka-zookeeper-1 | zookeeper | 0 | 0 | 1.07m | 0 | 0 | | | 360.83Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | kafka-zookeeper-2 | zookeeper | 0 | 0 | 0.98m | 0 | 0 | | | 351.36Mi |
| prod-dc-10.22.6.228-zjk-ali | freshx | openapi-biz-67bd9555d-57krs | rest-service | 2 | 200m | 8.86m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1219.82Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | openapi-biz-67bd9555d-hkssw | rest-service | 2 | 200m | 19.94m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1221.82Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | reach-system-bd5fc4694-9nxmh | rest-service | 2 | 200m | 11.79m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1114.29Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | reach-system-bd5fc4694-v5pwf | rest-service | 2 | 200m | 9.82m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 1115.26Mi |
| prod-dc-10.22.209.100-zjk-ali | freshx | saas-auth-77d66b4444-ct7v2 | rest-service | 2 | 200m | 1.63m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1105.18Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | saas-auth-77d66b4444-gtj9j | rest-service | 2 | 200m | 3.73m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1088.73Mi |
| prod-dc-10.22.209.105-zjk-ali | freshx | saas-freshx-biz-bdf68c6d4-bvzhw | rest-service | 2 | 200m | 25.43m | 6G | 6G | -Xmx3400m | -Xms3400m | 1959.85Mi |
| prod-dc-10.22.6.210-zjk-ali | freshx | saas-freshx-biz-bdf68c6d4-gphdf | rest-service | 2 | 200m | 34.83m | 6G | 6G | -Xmx3400m | -Xms3400m | 1920.83Mi |
| prod-dc-10.22.126.146-zjk-ali | freshx | saas-gateway-65f9b78d5b-dxm5b | rest-service | 2 | 200m | 24.63m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1436.81Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | saas-gateway-65f9b78d5b-q5ttb | rest-service | 2 | 200m | 19.40m | 2500M | 2500M | -Xmx2048m | -Xms2048m | 1664.99Mi |
| prod-dc-10.22.209.86-zjk-ali | freshx | saas-register-68f7bc75f5-qqjq7 | rest-service | 2 | 200m | 7.57m | 4G | 4G | -Xmx3096m | -Xms3096m | 1397.01Mi |
| prod-dc-10.22.126.145-zjk-ali | freshx | saas-upms-biz-6885b4b954-kl2pd | rest-service | 3 | 400m | 9.03m | 4Gi | 4Gi | -Xmx3200m | -Xms3200m | 1699.96Mi |
| prod-dc-10.22.6.228-zjk-ali | freshx | saas-upms-biz-6885b4b954-ln929 | rest-service | 3 | 400m | 10.03m | 4Gi | 4Gi | -Xmx3200m | -Xms3200m | 1701.77Mi |
| prod-dc-10.22.6.217-zjk-ali | freshx | saas-xxl-job-admin-849dbc777b-gltxm | rest-service | 2 | 200m | 5.67m | 2560Mi | 2560Mi | -Xmx2048m | -Xms2048m | 1193.35Mi |
| prod-dc-10.22.209.88-zjk-ali | freshx | saas-xxl-job-admin-849dbc777b-r9z4t | rest-service | 2 | 200m | 6.59m | 2560Mi | 2560Mi | -Xmx2048m | -Xms2048m | 1261.38Mi |
+-------------------------------+-----------+-------------------------------------+--------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
6 config: automatically collect the Prometheus ingress address of every cluster under the kubeconfig path
If the kubeconfig path is wrong, or for any other reason, you can edit config/promtheus.yaml manually (a rough sketch of the discovery logic follows the examples below).
Examples
1 By default, collect the Prometheus addresses from all yaml files under /root/.kube/
go run main.go config
2 Collect the Prometheus addresses from all yaml files under a specified directory
go run main.go config -p /data/k8s
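A rough sketch of what this discovery could look like, assuming client-go and a simple "Ingress host contains prometheus" heuristic (the project's real matching logic and the format written to prometheus.yaml may differ):

```go
// Hedged sketch: for every kubeconfig yaml under a directory, build a client
// and print the Ingress hosts that look like Prometheus endpoints.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	dir := "/root/.kube" // default directory scanned by `k8s-manager config`
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if e.IsDir() || !(strings.HasSuffix(e.Name(), ".yaml") || strings.HasSuffix(e.Name(), ".yml")) {
			continue
		}
		path := filepath.Join(dir, e.Name())
		cfg, err := clientcmd.BuildConfigFromFlags("", path)
		if err != nil {
			log.Printf("skip %s: %v", path, err)
			continue
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Printf("skip %s: %v", path, err)
			continue
		}
		// Search all namespaces for an Ingress host that mentions "prometheus".
		ingresses, err := clientset.NetworkingV1().Ingresses("").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Printf("list ingresses failed for %s: %v", path, err)
			continue
		}
		for _, ing := range ingresses.Items {
			for _, rule := range ing.Spec.Rules {
				if strings.Contains(rule.Host, "prometheus") {
					fmt.Printf("%s -> http://%s\n", path, rule.Host)
				}
			}
		}
	}
}
```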
Common workflows
1 A Kubernetes node is unusually slow and containers restart frequently
k8s-manager analysis --node k8s    # inspect the actual consumption on the node (data provided by metrics-server) and each container's share of the server's resources
2 Tune limits and requests
Filtering by namespace and node is supported:
root@k8s:/usr/local/cobra-k8s-manager# go run main.go resource prometheus -u http://192.168.44.134:20248/ --node k8s -n nfs
+--------+-----------+--------------------------------------------------+---------------------------------+---------+---------+--------------------+----------+----------+---------------------+
| 节点名 | NAMESPACE | POD NAME | 容器名 | CPU限制 | CPU所需 | 最近7天已使用的CPU | 内存限制 | 内存所需 | 最近7天已使用的内存 |
+--------+-----------+--------------------------------------------------+---------------------------------+---------+---------+--------------------+----------+----------+---------------------+
| k8s | nfs | nfs-subdir-external-provisioner-65664b8954-qrs2q | nfs-subdir-external-provisioner | 0 | 0 | 4.10m | 0 | 0 | 9.60Mi |
+--------+-----------+--------------------------------------------------+---------------------------------+---------+---------+--------------------+----------+----------+---------------------+
3 Analyze the resource consumption of a namespace and export the analysis to Excel
root@jcrose 当前k8s集群:qa-dc-k8s-zjk-aliyun 当前namespace: null #/usr/local/github/cobra-k8s-manager k8s-manager resource -n freshx -p --export
所使用的prometheus地址是: http://prometheus.dc-prod.yunlizhi.net
+----------------------------+-----------+-------------------------------------+---------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| 节点名 | NAMESPACE | POD NAME | 容器名 | CPU限制 | CPU所需 | 最近7天已使用的CPU | 内存限制 | 内存所需 | JAVA-XMX | JAVA-XMS | 最近7天已使用的内存 |
+----------------------------+-----------+-------------------------------------+---------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
| qa-dc-10.24.99.236-zjk-ali | freshx | fe-freshx-customer-d5b65c59c-462dn | rest-service2 | 0 | 0 | 0.00m | 0 | 0 | | | 0.00Mi |
| qa-dc-10.24.99.240-zjk-ali | freshx | fe-freshx-platform-574969bbbb-xhqjq | rest-service2 | 0 | 0 | 0.00m | 0 | 0 | | | 0.00Mi |
| qa-dc-10.24.25.225-zjk-ali | freshx | fe-freshx-procure-ddc487c7b-62jj6 | rest-service2 | 0 | 0 | 0.00m | 0 | 0 | | | 0.00Mi |
| qa-dc-10.24.25.221-zjk-ali | freshx | fe-freshx-supplier-7bd5b79484-qxnwl | rest-service2 | 0 | 0 | 0.00m | 0 | 0 | | | 0.00Mi |
| qa-dc-10.24.99.243-zjk-ali | freshx | feishu-companion-5f7d47cd6c-gskrs | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.218-zjk-ali | freshx | kafka-0 | kafka-broker | 2 | 100m | 28.60m | 4Gi | 4Gi | | | 1675.85Mi |
| qa-dc-10.24.99.236-zjk-ali | freshx | kafka-1 | kafka-broker | 2 | 100m | 28.33m | 4Gi | 4Gi | | | 1688.93Mi |
| qa-dc-10.24.99.239-zjk-ali | freshx | kafka-2 | kafka-broker | 2 | 100m | 23.27m | 4Gi | 4Gi | | | 1748.72Mi |
| qa-dc-10.24.99.236-zjk-ali | freshx | kafka-zookeeper-0 | zookeeper | 0 | 0 | 1.02m | 0 | 0 | | | 385.39Mi |
| qa-dc-10.24.25.220-zjk-ali | freshx | kafka-zookeeper-1 | zookeeper | 0 | 0 | 0.86m | 0 | 0 | | | 360.83Mi |
| qa-dc-10.24.99.239-zjk-ali | freshx | kafka-zookeeper-2 | zookeeper | 0 | 0 | 0.97m | 0 | 0 | | | 351.36Mi |
| qa-dc-10.24.99.243-zjk-ali | freshx | openapi-biz-5b45595dc9-s5vb9 | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.221-zjk-ali | freshx | reach-system-5c786fd988-8dx6r | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.218-zjk-ali | freshx | redis-master-0 | redis | 2 | 100m | 0.00m | 5Gi | 5Gi | | | 0.00Mi |
| qa-dc-10.24.99.247-zjk-ali | freshx | saas-auth-5fc8f759d-q9d6w | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.221-zjk-ali | freshx | saas-freshx-biz-758d8cd4c9-5swwz | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.221-zjk-ali | freshx | saas-gateway-675745d64-x7scv | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.99.236-zjk-ali | freshx | saas-register-7ddbb448cc-z5dn9 | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.220-zjk-ali | freshx | saas-upms-biz-bf9699565-tnkzf | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.220-zjk-ali | freshx | saas-xxl-job-admin-74b94bf4c-qz6d4 | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
| qa-dc-10.24.25.220-zjk-ali | freshx | wechat-companion-5d756b994d-t5vhp | rest-service2 | 2 | 200m | 0.00m | 2Gi | 2Gi | -Xmx1536m | -Xms1536m | 0.00Mi |
+----------------------------+-----------+-------------------------------------+---------------+---------+---------+--------------------+----------+----------+-----------+-----------+---------------------+
# Check whether the Excel file was generated
root@jcrose 当前k8s集群:qa-dc-k8s-zjk-aliyun 当前namespace: null #/usr/local/github/cobra-k8s-manager ls
cmd demo go.mod go.sum LICENSE main.go pkg prod-dc-k8s-zjk-aliyun集群-analysis-cpu-memory-2025-01-21-092404.xlsx README.md tmp
As you can see, prod-dc-k8s-zjk-aliyun集群-analysis-cpu-memory-2025-01-21-092404.xlsx has been generated; the xlsx file is named after the current cluster plus a timestamp.
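A minimal sketch of how such an export could be produced, assuming the github.com/xuri/excelize/v2 library and a hypothetical cluster name taken from the current kubeconfig context (the project's actual writer and column set may differ):

```go
// Hedged sketch: write an xlsx named "<cluster>集群-analysis-cpu-memory-<timestamp>.xlsx"
// with a header row, matching the naming pattern shown above.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/xuri/excelize/v2"
)

func main() {
	cluster := "prod-dc-k8s-zjk-aliyun" // in practice read from the kubeconfig's current context
	name := fmt.Sprintf("%s集群-analysis-cpu-memory-%s.xlsx", cluster, time.Now().Format("2006-01-02-150405"))

	f := excelize.NewFile()
	defer f.Close()

	headers := []string{"节点名", "NAMESPACE", "POD NAME", "容器名", "CPU限制", "CPU所需", "内存限制", "内存所需"}
	for i, h := range headers {
		cell, err := excelize.CoordinatesToCellName(i+1, 1) // column i+1, row 1
		if err != nil {
			log.Fatal(err)
		}
		if err := f.SetCellValue("Sheet1", cell, h); err != nil {
			log.Fatal(err)
		}
	}
	if err := f.SaveAs(name); err != nil {
		log.Fatal(err)
	}
	fmt.Println("wrote", name)
}
```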