Containerized deployment of k8s cluster monitoring components: prometheus + grafana + node-exporter

This article walks through deploying Prometheus and Grafana in a Kubernetes cluster: environment preparation, obtaining the image files, configuring and deploying Prometheus and Grafana, and making Prometheus highly available with Keepalived and HAProxy.

Environment preparation

A k8s cluster with one master and one worker node

Node     IP address      Versions
master   192.168.1.4     k8s v1.17.1, docker v19.03
node     192.168.1.5     k8s v1.17.1, docker v19.03

Image preparation

prom/prometheus:v2.19.1
grafana/grafana:7.0.5
prom/node-exporter:v2.0.0

The image tar packages are shared via Baidu Netdisk:

Link: https://pan.baidu.com/s/1xZs-jbBUdG1EJ4rBqHwI_Q
Extraction code: 2hoq
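
After downloading, load the tar packages into Docker on every node so the images are available locally. The archive file names below are only placeholders; use whatever names the downloaded files actually have:

docker load -i prometheus_v2.19.1.tar
docker load -i grafana_7.0.5.tar
docker load -i node-exporter.tar
docker images | grep -E 'prometheus|grafana|node-exporter'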

Deploying node-exporter

Before deploying, create the monitoring namespace:
kubectl create namespace monitoring

The node-exporter component is deployed as a DaemonSet (ds).
node-exporter-ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v2.0.0
        name: node-exporter
        ports:
        - containerPort: 9100
      hostNetwork: true          # expose port 9100 directly on each node's network
      tolerations:
      - operator: Exists         # tolerate all taints so the exporter also runs on the master node

node-exporter-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: monitoring
spec:
  clusterIP: None                # headless service: exposes each node-exporter pod as an individual endpoint
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
    targetPort: 9100
  type: ClusterIP
  selector:
    app: node-exporter

Create the DaemonSet and Service:

kubectl create -f node-exporter-ds.yaml
kubectl create -f node-exporter-svc.yaml
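
A quick way to verify that the exporter pods are running on every node and are registered behind the headless service (the IP below comes from the environment table above):

kubectl get pods -n monitoring -o wide
kubectl get endpoints node-exporter -n monitoring
curl -s http://192.168.1.4:9100/metrics | head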

Deploying the Prometheus component

Here I create a /tsdb directory on the node for local storage of the Prometheus and Grafana data:

mkdir /tsdb
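
If this directory is later mounted into the Prometheus and Grafana pods via hostPath volumes, the containers' default non-root users need write access to it. A minimal sketch, assuming separate subdirectories and the default UIDs of the prom/prometheus (nobody, 65534) and grafana/grafana 7.x (472) images:

mkdir -p /tsdb/prometheus /tsdb/grafana
chown -R 65534:65534 /tsdb/prometheus   # prometheus runs as user nobody by default
chown -R 472:472 /tsdb/grafana          # grafana 7.x runs as UID 472 by default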

prometheus-clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]

prometheus-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring

prometheus-clusterrolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring

kubectl create -f prometheus-clusterrole.yaml
kubectl create -f prometheus-sa.yaml
kubectl create -f prometheus-clusterrolebinding.yaml
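
A quick sanity check that the RBAC objects work as intended, by impersonating the new ServiceAccount:

kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus
kubectl auth can-i get nodes --as=system:serviceaccount:monitoring:prometheus

Both commands should answer yes.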

The Prometheus configuration file is managed as a ConfigMap; the ConfigMap defines which data Prometheus scrapes.

global:
# global configuration
  scrape_interval: 30s
  # default scrape interval; valid units are ms, s, m, h, d, w, y (default: 1m)
  scrape_timeout: 30s
  # default scrape timeout (default: 10s)
  evaluation_interval: 1m
  # default interval for evaluating rules (default: 1m)
scrape_configs:
# list of scrape configurations, i.e. the collection rules
- job_name: 'prometheus'
  static_configs:  # statically configured targets
  - targets: ['localhost:9090']  # scrape target IP:port; metrics_path defaults to /metrics when not set
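
To create this configuration as a ConfigMap object, it can be wrapped in a manifest along the following lines. The ConfigMap name and data key here are assumptions for illustration; they must match whatever the Prometheus Deployment later mounts:

prometheus-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config       # assumed name; must match the volume in the Prometheus Deployment
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 30s
      scrape_timeout: 30s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
      - targets: ['localhost:9090']

kubectl create -f prometheus-configmap.yaml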
    