A full walkthrough of the Alertmanager manifests in kube-prometheus

The Alertmanager component in kube-prometheus is made up of eight YAML manifests:

  • alertmanager-alertmanager.yaml
  • alertmanager-podDisruptionBudget.yaml
  • alertmanager-secret.yaml
  • alertmanager-serviceAccount.yaml
  • alertmanager-serviceMonitor.yaml
  • alertmanager-networkPolicy.yaml
  • alertmanager-prometheusRule.yaml
  • alertmanager-service.yaml

alertmanager-podDisruptionBudget.yaml

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.28.1
  name: alertmanager-main
  namespace: monitoring
spec:
  maxUnavailable: 1
  selector:
    matchLabels:  # must match the labels on the Pods that are eventually created
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus

This manifest defines a disruption budget for the alertmanager-main Pods. The field doing the work is maxUnavailable: with it set to 1, at most one Alertmanager Pod may be unavailable at any time during voluntary disruptions (node drains, cluster upgrades, rollbacks and similar operations), so the remaining replicas keep running.

The selector field determines which Pods the budget applies to. Those labels must therefore appear on the Alertmanager Pods themselves, which we can confirm in alertmanager-alertmanager.yaml:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.28.1
  name: main
  namespace: monitoring
spec:
  image: quay.io/prometheus/alertmanager:v0.28.1
  nodeSelector:
    kubernetes.io/os: linux
  podMetadata:
    labels:
      # these labels end up on the Pods that the operator creates
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus
      app.kubernetes.io/version: 0.28.1

Since the selector's matchLabels line up with podMetadata.labels, the PodDisruptionBudget applies to the Pods created from alertmanager-alertmanager.yaml.
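If you want to see the budget in action, one way is to read the PDB status and list the Pods its selector covers. A minimal sketch using client-go; the kubeconfig path and the hard-coded names are assumptions about your environment, not something kube-prometheus provides:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Read the PodDisruptionBudget created from alertmanager-podDisruptionBudget.yaml.
	pdb, err := client.PolicyV1().PodDisruptionBudgets("monitoring").
		Get(ctx, "alertmanager-main", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("currentHealthy=%d desiredHealthy=%d disruptionsAllowed=%d\n",
		pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy, pdb.Status.DisruptionsAllowed)

	// List the Pods that the budget's selector matches.
	selector := metav1.FormatLabelSelector(pdb.Spec.Selector)
	pods, err := client.CoreV1().Pods("monitoring").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println("covered pod:", p.Name)
	}
}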

alertmanager-secret.yaml

alertmanager-secret.yaml holds the alertmanager.yaml configuration, which controls how Alertmanager groups, routes, inhibits, and delivers the alerts it receives:

apiVersion: v1
kind: Secret
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: alertmanager-main
  namespace: base-services
stringData:
  alertmanager.yaml: |-
    "global":
      "resolve_timeout": "5m"
    "inhibit_rules":
    - "equal":
      - "namespace"
      - "alertname"
      "source_match":
        "severity": "critical"
      "target_match_re":
        "severity": "warning|info"
    - "equal":
      - "namespace"
      - "alertname"
      "source_match":
        "severity": "warning"
      "target_match_re":
        "severity": "info"
    "receivers":
    - "name": "Default"
    - "name": "Watchdog"
    - "name": "Critical"
    "route":
      # group alerts by the value of this label (taken from each alert's labels)
      "group_by":
      - "namespace"
      "group_interval": "5m"
      "group_wait": "30s"
      "receiver": "Default"
      "repeat_interval": "12h"
      "routes":
      - "match":
          "alertname": "Watchdog"
        "receiver": "Watchdog"
      - "match":
          "severity": "critical"
        "receiver": "Critical"
type: Opaque
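To make the route section of this configuration concrete, here is a small sketch of how the route tree picks a receiver for a given set of alert labels. It is a deliberate simplification of Alertmanager's real routing logic (only exact match rules, no regular expressions, no nested routes, no continue flag):

package main

import "fmt"

// route mirrors the structure used in alertmanager.yaml above,
// reduced to exact "match" rules only.
type route struct {
	receiver string
	match    map[string]string
}

// pickReceiver walks the child routes in order and returns the first one
// whose match rules are all satisfied by the alert's labels; otherwise it
// falls back to the top-level default receiver.
func pickReceiver(labels map[string]string, routes []route, def string) string {
	for _, r := range routes {
		ok := true
		for k, v := range r.match {
			if labels[k] != v {
				ok = false
				break
			}
		}
		if ok {
			return r.receiver
		}
	}
	return def
}

func main() {
	// The two child routes from the Secret shown above.
	routes := []route{
		{receiver: "Watchdog", match: map[string]string{"alertname": "Watchdog"}},
		{receiver: "Critical", match: map[string]string{"severity": "critical"}},
	}

	fmt.Println(pickReceiver(map[string]string{"alertname": "Watchdog"}, routes, "Default"))                                    // Watchdog
	fmt.Println(pickReceiver(map[string]string{"alertname": "KubePodCrashLooping", "severity": "critical"}, routes, "Default")) // Critical
	fmt.Println(pickReceiver(map[string]string{"severity": "warning"}, routes, "Default"))                                      // Default
}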

Suppose Prometheus (or any other client) pushes an alert like the following to Alertmanager:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Alert models a single alert in the payload sent to Alertmanager's v2 API.
type Alert struct {
	Status       string            `json:"status"`       // "firing" or "resolved"
	Labels       map[string]string `json:"labels"`       // labels, matched against the route tree
	Annotations  map[string]string `json:"annotations"`  // annotations, used for the notification details
	StartsAt     time.Time         `json:"startsAt"`
	EndsAt       time.Time         `json:"endsAt,omitempty"`
	GeneratorURL string            `json:"generatorURL"` // optional click-through link
}

func main() {
	alertmanagerURL := "http://localhost:9093/api/v2/alerts"

	// Build a firing alert. The original example aims at a custom match_re
	// rule on severity "podoffline", which is not part of the config shown
	// above; with that config this alert falls through to the Default receiver.
	alert := Alert{
		Status: "firing",
		// The labels are what the route tree matches on.
		Labels: map[string]string{
			"alertname": "PodCrashLoop",        // required label
			"severity":  "podoffline-critical", // would match a regex on `podoffline`
			"namespace": "base-services",
			"pod":       "my-app-xyz",
			"job":       "kubernetes-pods",
		},
		Annotations: map[string]string{
			"description": "Pod is in crash loop back-off.",
			"summary":     "Pod offline detected",
			"runbook_url": "https://runbooks.example.com/pod-crashloop",
		},
		StartsAt:     time.Now(),
		EndsAt:       time.Time{}, // a firing alert does not need EndsAt
		GeneratorURL: "http://prometheus.local/graph?g0.expr=up%7Bjob%3D%22kubernetes-pods%22%7D",
	}

	// The v2 API expects a JSON array of alerts.
	payload, err := json.Marshal([]Alert{alert})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post(alertmanagerURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Alertmanager responded with", resp.Status)
}

If you would rather not write code to push alerts yourself, you can instead define a Prometheus alerting rule and let Prometheus do the reporting:

groups:
  - name: pod.rules
    rules:
      - alert: PodOffline
        expr: up{job="kubernetes-pods"} == 0
        for: 1m
        labels:
          severity: podoffline-high
        annotations:
          summary: "Pod {{ $labels.pod }} is down"

alertmanager-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: alertmanager-main
  namespace: base-services
spec:
  ports:
  - name: web
    # the Alertmanager Pods listen on 9093
    port: 9093
    # expose the service on each node via a NodePort
    nodePort: 30099
    targetPort: 9093
    protocol: TCP
  ## NodePort is used here to expose the service outside the cluster
  type: NodePort
  selector:
    alertmanager: main
    app: alertmanager
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  # Important: a Service normally load-balances across its backends, so with multiple Alertmanager replicas you need client-IP affinity to make sure a given client keeps hitting the same instance for a while
  sessionAffinity: ClientIP

This is an ordinary Kubernetes Service definition. The one setting worth dwelling on is sessionAffinity: ClientIP, a very handy option whose job is to provide session persistence (sticky sessions) based on the client's IP address.

📌 Effect:

  • Requests coming from the same client IP are, for a period of time, always forwarded to the same backend Pod;
  • this is exactly what is usually called "sticky sessions".
  • ❌ Default (sessionAffinity: None): round robin or another algorithm; no session persistence, each request may land on a different Pod.
  • sessionAffinity: ClientIP: the backend Pod is chosen by hashing the source IP; the same client IP always reaches the same Pod.

Kubernetes implements Service proxying with kube-proxy. When sessionAffinity: ClientIP is enabled:

In iptables mode:

  • kube-proxy adds rules to the iptables chains that key on the client source IP;
  • it records (via the iptables recent match module) which backend Pod each client IP was last sent to;
  • so traffic from the same source IP keeps being DNAT-ed to the same backend Pod.

In IPVS mode:

  • IPVS's persistence feature (persistent connections) is used;
  • a timeout is set (10800 seconds by default), and within that window connections from the same IP are directed to the same Pod;
  • it allows finer-grained control, for example per protocol and port.

You can additionally configure how long the affinity lasts:

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # default value, 3 hours

⚠️ Caveats:

  • once the timeout expires, the next request may be assigned to a different Pod;
  • set it too long and load can become unevenly distributed;
  • set it too short and the stickiness loses its point.
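As a quick smoke test of the NodePort (and of the affinity window) you can fire a handful of requests from one client machine; a sketch, where the node IP is an assumption about your cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: 192.168.1.10 is one of your node IPs; 30099 is the
	// nodePort from alertmanager-service.yaml.
	url := "http://192.168.1.10:30099/-/healthy"

	client := &http.Client{Timeout: 3 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// With sessionAffinity: ClientIP, all of these requests are served by
		// the same Alertmanager Pod for the duration of the affinity timeout.
		fmt.Printf("try %d: %s %q\n", i+1, resp.Status, body)
		time.Sleep(time.Second)
	}
}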

alertmanager-serviceAccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: alertmanager-main
  namespace: base-services

A ServiceAccount is an in-cluster identity:

  • it is not meant for humans (unlike a user account);
  • it is the identity that an application running inside a Pod presents when it calls the Kubernetes API;
  • for example, a component that needs to read the state of other Pods, modify a ConfigMap, or fetch node information must run with a ServiceAccount that has the corresponding permissions.

How does a ServiceAccount get permissions? It has to be paired with RBAC

A ServiceAccount by itself is not enough; it is merely a "username". For it to actually do anything, you also have to grant it permissions through RBAC (role-based access control):

1. Role / ClusterRole (defines what may be done)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: alertmanager-main
  namespace: base-services
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]  # 只读权限
2. RoleBinding / ClusterRoleBinding (binds the role to the ServiceAccount)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alertmanager-main
  namespace: base-services
subjects:
- kind: ServiceAccount
  name: alertmanager-main
  namespace: base-services
roleRef:
  kind: Role
  name: alertmanager-main
  apiGroup: rbac.authorization.k8s.io

⚠️ Without these RBAC objects, a Pod using this ServiceAccount still cannot access any API.


How does a Pod use this ServiceAccount?

When you deploy Alertmanager (for example via a StatefulSet or Deployment), you will see something like this:

apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      serviceAccountName: alertmanager-main  # ← the key field
      containers:
        - name: alertmanager
          image: quay.io/prometheus/alertmanager:v0.28.1

Kubernetes then automatically:

  1. mounts the ServiceAccount's token into the Pod under /var/run/secrets/kubernetes.io/serviceaccount/;
  2. lets the program inside the container use that token to call the API server at https://kubernetes.default.svc (see the sketch below).
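Inside the container, a program does not need any extra configuration to use that token: client-go's in-cluster config reads the token and CA certificate from exactly that path. A minimal sketch; what the call is allowed to do depends entirely on the RBAC bindings shown above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the ServiceAccount token and CA certificate that Kubernetes
	// mounted under /var/run/secrets/kubernetes.io/serviceaccount/.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only works because the Role above grants list on "pods".
	pods, err := client.CoreV1().Pods("base-services").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // a 403 here usually means the RBAC binding is missing
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}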

alertmanager-serviceMonitor.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: alertmanager
  namespace: base-services
spec:
  endpoints:
  - interval: 30s
    port: web
  selector:
    matchLabels:
      alertmanager: main
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus

A ServiceMonitor is a custom resource (CRD) provided by the Prometheus Operator; it tells Prometheus which Kubernetes Services' metrics should be scraped.

This ServiceMonitor tells Prometheus: "discover every Service that carries these labels, and scrape metrics through its port named web every 30 seconds."

The discovery chain, roughly: the Alertmanager Pod exposes /metrics on :9093 → the Service alertmanager-main selects it by labels → the ServiceMonitor 'alertmanager' matches that Service and is picked up by the Prometheus Operator → the operator generates the scrape job → the Prometheus server starts collecting the metrics.
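Once the operator has translated the ServiceMonitor into a scrape job, you can verify it through Prometheus's targets API. A sketch, assuming Prometheus has been made reachable on localhost:9090 (for example with kubectl port-forward):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// Minimal subset of the /api/v1/targets response.
type targetsResponse struct {
	Data struct {
		ActiveTargets []struct {
			ScrapePool string            `json:"scrapePool"`
			Health     string            `json:"health"`
			Labels     map[string]string `json:"labels"`
		} `json:"activeTargets"`
	} `json:"data"`
}

func main() {
	resp, err := http.Get("http://localhost:9090/api/v1/targets")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tr targetsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
		panic(err)
	}

	// Print only the targets generated from the alertmanager ServiceMonitor.
	for _, t := range tr.Data.ActiveTargets {
		if strings.Contains(t.ScrapePool, "alertmanager") {
			fmt.Printf("%-45s health=%s instance=%s\n", t.ScrapePool, t.Health, t.Labels["instance"])
		}
	}
}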

alertmanager-networkPolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.28.1
  name: alertmanager-main
  namespace: monitoring
spec:
  egress:
  - {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus
    ports:
    - port: 9093
      protocol: TCP
    - port: 8080
      protocol: TCP
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: alertmanager
    ports:
    - port: 9094
      protocol: TCP
    - port: 9094
      protocol: UDP
  podSelector:
    matchLabels:
      app.kubernetes.io/component: alert-router
      app.kubernetes.io/instance: main
      app.kubernetes.io/name: alertmanager
      app.kubernetes.io/part-of: kube-prometheus
  policyTypes:
  - Egress
  - Ingress

This is a Kubernetes NetworkPolicy, used for fine-grained control over network traffic between Pods. It specifically protects the alertmanager-main Alertmanager instance (running in the monitoring namespace), allowing only specific sources to reach specific ports.

Below we go through its purpose, its security implications, and typical usage, layer by layer.


Overall goal

Restrict both the ingress and egress traffic of the Alertmanager Pods, achieving least-privilege network isolation.

This is an important cloud-native security practice: deny all traffic by default and only allow the communication that is actually needed.


Key fields

1. podSelector (which Pods the policy applies to)
podSelector:
  matchLabels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  • The NetworkPolicy only applies to Pods carrying the labels above;
  • these labels are normally stamped on automatically by the Prometheus Operator when it creates the Alertmanager StatefulSet;
  • so the policy precisely protects the Pods of the alertmanager-main instance.

2. policyTypes
policyTypes:
- Egress
- Ingress
  • Both ingress and egress policies are explicitly enabled;
  • if policyTypes were omitted, the default behaviour would depend on whether the ingress and egress fields are defined.

3. ingress (who may reach Alertmanager)

Rule 1: allow Prometheus to reach the web UI and the metrics endpoint
- from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus
  ports:
    - port: 9093  # Alertmanager Web UI / API
      protocol: TCP
    - port: 8080  # possibly readiness/liveness probes or a sidecar
      protocol: TCP
  • Prometheus needs to scrape Alertmanager's /metrics (served on 9093 by default);
  • port 8080 is presumably used for health checks or a sidecar (in kube-prometheus this is typically the config-reloader), depending on how it is deployed.
Rule 2: allow Alertmanager Pods to talk to each other (cluster mode)
- from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: alertmanager
  ports:
    - port: 9094
      protocol: TCP
    - port: 9094
      protocol: UDP
  • Alertmanager supports a highly available cluster mode in which Pods synchronise alert state with each other over port 9094 using a gossip protocol;
  • TCP is used for regular communication, UDP for fast broadcast (such as member discovery).

🔔 Note: this from selector uses only app.kubernetes.io/name: alertmanager, without instance: main, so any Alertmanager Pod in the namespace (even from a different instance) could reach these ports. Usually only a single instance is deployed, so this is fine in practice.


4. egress (what Alertmanager itself may reach)

egress:
- {}

⚠️ This is a very permissive rule!

  • {} means all outbound traffic is allowed, to any destination, any port, any protocol;
  • in other words, egress is effectively unrestricted.

Why is it designed this way?

  • Alertmanager needs to deliver notifications to all kinds of external systems:
    • email (SMTP: 25/465/587)
    • Slack webhooks (HTTPS: 443)
    • WeCom, DingTalk, PagerDuty, arbitrary webhook receivers, and so on
  • these destinations are diverse and change over time, so they are hard to enumerate up front;
  • as a result, in most production environments Alertmanager's egress is deliberately left open.
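A simple way to check that the ingress rules behave as intended is to attempt a TCP connection to Alertmanager from Pods with different labels: from a Prometheus Pod the connection to port 9093 should succeed, while from an unrelated Pod it should time out. A sketch of such a probe; it has to run inside the cluster, and the service DNS name follows the usual <service>.<namespace>.svc pattern:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Whether this succeeds depends on which Pod it runs in,
	// because of the NetworkPolicy above.
	addr := "alertmanager-main.monitoring.svc:9093"

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("blocked or unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("connection to", addr, "allowed")
}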

alertmanager-prometheusRule.yaml

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.28.1
    prometheus: k8s
    role: alert-rules
  name: alertmanager-main-rules
  namespace: monitoring
spec:
  groups:
  - name: alertmanager.rules
    rules:
    - alert: AlertmanagerFailedReload
      annotations:
        description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
        summary: Reloading an Alertmanager configuration has failed.
      expr: |
        # Without max_over_time, failed scrapes could create false negatives, see
        # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
        max_over_time(alertmanager_config_last_reload_successful{job="alertmanager-main",namespace="monitoring"}[5m]) == 0
      for: 10m
      labels:
        severity: critical
    - alert: AlertmanagerMembersInconsistent
      annotations:
        description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} has only found {{ $value }} members of the {{$labels.job}} cluster.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagermembersinconsistent
        summary: A member of an Alertmanager cluster has not found all other cluster members.
      expr: |
        # Without max_over_time, failed scrapes could create false negatives, see
        # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
          max_over_time(alertmanager_cluster_members{job="alertmanager-main",namespace="monitoring"}[5m])
        < on (namespace,service) group_left
          count by (namespace,service) (max_over_time(alertmanager_cluster_members{job="alertmanager-main",namespace="monitoring"}[5m]))
      for: 15m
      labels:
        severity: critical
    - alert: AlertmanagerFailedToSendAlerts
      annotations:
        description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} failed to send {{ $value | humanizePercentage }} of notifications to {{ $labels.integration }}.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedtosendalerts
        summary: An Alertmanager instance failed to send notifications.
      expr: |
        (
          rate(alertmanager_notifications_failed_total{job="alertmanager-main",namespace="monitoring"}[15m])
        /
          ignoring (reason) group_left rate(alertmanager_notifications_total{job="alertmanager-main",namespace="monitoring"}[15m])
        )
        > 0.01
      for: 5m
      labels:
        severity: warning
    - alert: AlertmanagerClusterFailedToSendAlerts
      annotations:
        description: The minimum notification failure rate to {{ $labels.integration }} sent from any instance in the {{$labels.job}} cluster is {{ $value | humanizePercentage }}.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
        summary: All Alertmanager instances in a cluster failed to send notifications to a critical integration.
      expr: |
        min by (namespace,service, integration) (
          rate(alertmanager_notifications_failed_total{job="alertmanager-main",namespace="monitoring", integration=~`.*`}[15m])
        /
          ignoring (reason) group_left rate(alertmanager_notifications_total{job="alertmanager-main",namespace="monitoring", integration=~`.*`}[15m])
        )
        > 0.01
      for: 5m
      labels:
        severity: critical
    - alert: AlertmanagerClusterFailedToSendAlerts
      annotations:
        description: The minimum notification failure rate to {{ $labels.integration }} sent from any instance in the {{$labels.job}} cluster is {{ $value | humanizePercentage }}.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
        summary: All Alertmanager instances in a cluster failed to send notifications to a non-critical integration.
      expr: |
        min by (namespace,service, integration) (
          rate(alertmanager_notifications_failed_total{job="alertmanager-main",namespace="monitoring", integration!~`.*`}[15m])
        /
          ignoring (reason) group_left rate(alertmanager_notifications_total{job="alertmanager-main",namespace="monitoring", integration!~`.*`}[15m])
        )
        > 0.01
      for: 5m
      labels:
        severity: warning
    - alert: AlertmanagerConfigInconsistent
      annotations:
        description: Alertmanager instances within the {{$labels.job}} cluster have different configurations.
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerconfiginconsistent
        summary: Alertmanager instances within the same cluster have different configurations.
      expr: |
        count by (namespace,service) (
          count_values by (namespace,service) ("config_hash", alertmanager_config_hash{job="alertmanager-main",namespace="monitoring"})
        )
        != 1
      for: 20m
      labels:
        severity: critical
    - alert: AlertmanagerClusterDown
      annotations:
        description: '{{ $value | humanizePercentage }} of Alertmanager instances within the {{$labels.job}} cluster have been up for less than half of the last 5m.'
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterdown
        summary: Half or more of the Alertmanager instances within the same cluster are down.
      expr: |
        (
          count by (namespace,service) (
            avg_over_time(up{job="alertmanager-main",namespace="monitoring"}[5m]) < 0.5
          )
        /
          count by (namespace,service) (
            up{job="alertmanager-main",namespace="monitoring"}
          )
        )
        >= 0.5
      for: 5m
      labels:
        severity: critical
    - alert: AlertmanagerClusterCrashlooping
      annotations:
        description: '{{ $value | humanizePercentage }} of Alertmanager instances within the {{$labels.job}} cluster have restarted at least 5 times in the last 10m.'
        runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclustercrashlooping
        summary: Half or more of the Alertmanager instances within the same cluster are crashlooping.
      expr: |
        (
          count by (namespace,service) (
            changes(process_start_time_seconds{job="alertmanager-main",namespace="monitoring"}[10m]) > 4
          )
        /
          count by (namespace,service) (
            up{job="alertmanager-main",namespace="monitoring"}
          )
        )
        >= 0.5
      for: 5m
      labels:
        severity: critical

A PrometheusRule is a custom resource (CR) managed by the Prometheus Operator; this one defines a group of alerting rules about Alertmanager itself. The rules monitor Alertmanager's own health: configuration reloads, configuration consistency, notification delivery, and cluster availability.

Below we break it down by structure and then go through the meaning of each rule.


Structure at a glance

  • Namespace: monitoring
  • Associated Prometheus instance: selected by the Prometheus Operator through the prometheus: k8s label
  • Rule group name: alertmanager.rules
  • Target component: alertmanager-main (the job name)
  • Purpose: self-monitoring of Alertmanager (a quick way to verify the rules were loaded follows below)
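Because the resource carries the prometheus: k8s and role: alert-rules labels, the operator loads it into the Prometheus instance. A sketch that confirms the group actually arrived, via Prometheus's rules API (again assuming a port-forward to localhost:9090):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal subset of the /api/v1/rules response.
type rulesResponse struct {
	Data struct {
		Groups []struct {
			Name  string `json:"name"`
			Rules []struct {
				Name  string `json:"name"`
				State string `json:"state,omitempty"`
			} `json:"rules"`
		} `json:"groups"`
	} `json:"data"`
}

func main() {
	resp, err := http.Get("http://localhost:9090/api/v1/rules")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var rr rulesResponse
	if err := json.NewDecoder(resp.Body).Decode(&rr); err != nil {
		panic(err)
	}
	for _, g := range rr.Data.Groups {
		if g.Name != "alertmanager.rules" {
			continue
		}
		for _, r := range g.Rules {
			fmt.Printf("%-45s state=%s\n", r.Name, r.State)
		}
	}
}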

The rules, one by one

1. AlertmanagerFailedReload
max_over_time(alertmanager_config_last_reload_successful{...}[5m]) == 0
  • Meaning: the configuration has not been reloaded successfully at any point in the last 5 minutes.
  • Why max_over_time? It prevents a failed scrape (and the briefly missing gauge) from masking the problem, i.e. a false negative; see the Robust Perception article linked in the expression's comment.
  • Severity: critical.
  • Typical causes: syntax errors in alertmanager.yaml, permission problems, or a broken config-reloader.

2. AlertmanagerMembersInconsistent
alertmanager_cluster_members < count(...) 
  • Meaning: an Alertmanager instance sees fewer cluster members than the actual number of Pods.
  • Mechanism: each instance's reported cluster_members value is compared against the member count seen across the service.
  • Severity: critical.
  • Impact: alerts may be sent twice or lost (gossip synchronisation is failing).

3. AlertmanagerFailedToSendAlerts

rate(failed_total[15m]) / rate(total[15m]) > 0.01
  • Meaning: a single instance's notification failure rate towards some integration (email, Slack, ...) exceeds 1%.
  • Severity: warning.
  • Note: as long as at least one instance can still deliver, notifications usually get through (unless every instance is failing).