Deploying the kube-prometheus Stack with Helm

Kubernetes Monitoring: Installing and Configuring the kube-prometheus Stack

Background:

When running vanilla Kubernetes day to day, Prometheus is the usual choice for monitoring; the overall framework is outlined below.

The kube-prometheus repository collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, together with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring based on the Prometheus Operator.

The project is written in jsonnet and can be consumed either as a package or as a library. It mainly includes the following components:

  • Grafana
  • kube-state-metrics
  • node-exporter
  • prometheus
  • Prometheus Adapter for Kubernetes Metrics APIs
  • Prometheus Operator
  • Alertmanager

1. Environment

1.1 Kubernetes version:

[root@kubemaster01 system]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

This cluster version is fairly old, so the matching kube-prometheus branch is release-0.4.

For the full compatibility matrix between Kubernetes versions and kube-prometheus releases, see GitHub: https://github.com/prometheus-operator/kube-prometheus

1.2 Helm:

Choose a Helm release that matches the Kubernetes version; here that is Helm 3.5.4. Prefer Helm 3 whenever possible.
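
A quick sanity check that the intended Helm 3 binary is the one on the PATH (standard Helm commands; the exact build string will differ per install):

helm version --short   # should report a v3.x release, here v3.5.4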

2. Installation

Create the monitoring namespace (the default namespace name the stack uses), then add the Helm repository:

[leonli@Leon minikube ] % kubectl create ns monitoring
namespace/monitoring created
[leonli@Leon minikube ] % helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
[leonli@Leon minikube ] % helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "komodorio" chart repository
...Successfully got an update from the "traefik-hub" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
[leonli@Leon minikube ] % helm install prometheus-community/kube-prometheus-stack --namespace monitoring --generate-name 
Error: INSTALLATION FAILED: failed to download "prometheus-community/kube-prometheus-stack"
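
Before giving up on the Helm route, it can help to confirm that the repo actually serves the chart and that it can be downloaded at all; a minimal sketch with standard Helm 3 commands (the versions listed will vary per environment):

# list the chart versions the repo currently serves
helm search repo prometheus-community/kube-prometheus-stack --versions | head

# try downloading (and unpacking) the chart locally to rule out network or proxy issues
helm pull prometheus-community/kube-prometheus-stack --untar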

In this environment the direct Helm install kept failing, so we pull the manifests from GitHub instead:

[root@kubemaster01 ~]#  git clone https://github.com/prometheus-operator/kube-prometheus.git -b release-0.4 
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 17291, done.
remote: Counting objects: 100% (197/197), done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 17291 (delta 126), reused 146 (delta 91), pack-reused 17094
Receiving objects: 100% (17291/17291), 9.18 MiB | 6.19 MiB/s, done.
Resolving deltas: 100% (11319/11319), done.
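
Before applying anything, it is worth double-checking that the working tree really is on the release-0.4 branch (plain git):

cd kube-prometheus
git rev-parse --abbrev-ref HEAD   # should print release-0.4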

From the kube-prometheus directory, apply all of the YAML files under manifests/setup:

[root@kubemaster01 ~]#  kubectl apply --server-side -f manifests/setup --force-conflicts                                 
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied

Note: depending on the Kubernetes version, some of the YAML files may need minor adjustments.
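
A quick way to spot such mismatches is to compare the API versions the manifests request against what the cluster actually serves; a minimal sketch using plain kubectl and grep, run from the repository root:

# API versions requested by the setup manifests
grep -h '^apiVersion:' manifests/setup/*.yaml | sort -u

# API versions this cluster serves
kubectl api-versions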

[root@kubemaster01 ~]# cd manifests/setup
[root@kubemaster01 ~]#  ls -l
total 3040
-rw-r--r--  1 leonli  admin  169131 Dec  2 14:53 0alertmanagerConfigCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin  377495 Dec  2 14:53 0alertmanagerCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin   30361 Dec  2 14:53 0podmonitorCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin   31477 Dec  2 14:53 0probeCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin  502646 Dec  2 14:53 0prometheusCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin    4101 Dec  2 14:53 0prometheusruleCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin   31881 Dec  2 14:53 0servicemonitorCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin  385790 Dec  2 14:53 0thanosrulerCustomResourceDefinition.yaml
-rw-r--r--  1 leonli  admin      60 Dec  2 14:53 namespace.yaml
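# wait until the ServiceMonitor CRD registered above is actually served by the API server, then apply the rest of the manifests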
[root@kubemaster01 ~]#  until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
[root@kubemaster01 ~]#  cd ../..
[root@kubemaster01 ~]#  kubectl apply -f manifests/
alertmanager.monitoring.coreos.com/main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created

At this point all of the components have been installed. Let's check the result.
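
A minimal sanity check that everything in the monitoring namespace came up (plain kubectl; the resource names are the ones created by the apply above):

kubectl -n monitoring get pods
kubectl -n monitoring get svc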

Since I only use Prometheus here, I expose its Service separately on NodePort 30050 so that an external Grafana can reach it; the Grafana shipped with this stack is not used in this article.
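
One way to do that is to patch the prometheus-k8s Service created above into a NodePort Service. This patch is my own addition, not part of the stack, and it assumes the single web/9090 port that release-0.4 defines on that Service:

kubectl -n monitoring patch svc prometheus-k8s --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"name":"web","port":9090,"targetPort":"web","nodePort":30050}]}}'

After the patch, Prometheus is reachable at http://<node-ip>:30050 from outside the cluster (replace <node-ip> with any node's address).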

3. Checking the installation

3.1 Check the Prometheus targets and service discovery
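
If the NodePort is not reachable from your workstation, a port-forward is a quick way to look at the targets page (plain kubectl; Service name as created above):

kubectl -n monitoring port-forward svc/prometheus-k8s 9090:9090
# then open http://localhost:9090/targets in a browser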

 

 

3.2 Check the Grafana data
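
Since the external Grafana points at the NodePort exposed above, a quick way to confirm the data source URL actually returns data is to hit the Prometheus HTTP API directly (standard Prometheus API; <node-ip> is a placeholder for any cluster node's address):

curl -s 'http://<node-ip>:30050/api/v1/query?query=up'
# a JSON body with "status":"success" means Grafana can use
# http://<node-ip>:30050 as its Prometheus data source URL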
