Problems encountered when deploying metrics-server in Kubernetes (k8s)

Original post on my personal blog: Problems encountered when deploying metrics-server in Kubernetes (k8s)

metrics-server version: k8s.gcr.io/metrics-server-amd64:v0.3.3
Kubernetes version: v1.14.1

The --source flag is no longer available as of metrics-server v0.3.0-alpha.1, so the old v0.2.x-style configuration below no longer works:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.2.1
  command:
  - /metrics-server
  - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
  - --requestheader-allowed-names=
So this approach can no longer be used. The flags that the v0.3.x binary does accept can be listed as sketched below.
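A minimal way to double-check which flags a given metrics-server image supports is to run the image with --help. This is only a sketch; it assumes Docker is available on a node and that the image's entrypoint is the /metrics-server binary:

docker pull k8s.gcr.io/metrics-server-amd64:v0.3.3
docker run --rm k8s.gcr.io/metrics-server-amd64:v0.3.3 --help   # prints the supported flags; --source is not among them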

After applying the metrics-server YAML files, the pod metrics-server-v0.3.3-6cbfd7955f-v29n7 sits in CrashLoopBackOff:

[root@master metrics-server]# kubectl get pods -n kube-system 
NAME                                     READY   STATUS             RESTARTS   AGE
coredns-fb8b8dccf-9bnvc                  1/1     Running            3          20d
coredns-fb8b8dccf-n5bzb                  1/1     Running            3          20d
etcd-master                              1/1     Running            4          20d
kube-apiserver-master                    1/1     Running            4          20d
kube-controller-manager-master           1/1     Running            5          20d
kube-proxy-cmmbw                         1/1     Running            4          20d
kube-proxy-dnw9v                         1/1     Running            4          20d
kube-proxy-s8zwl                         1/1     Running            3          20d
kube-proxy-tt2vb                         1/1     Running            4          20d
kube-scheduler-master                    1/1     Running            4          20d
kubernetes-dashboard-5f7b999d65-n56td    1/1     Running            0          2d21h
metrics-server-v0.3.3-6cbfd7955f-v29n7   0/2     CrashLoopBackOff   1          16s
weave-net-6pv9w                          2/2     Running            11         20d
weave-net-9dsxr                          2/2     Running            10         20d
weave-net-lt8vv                          2/2     Running            9          20d
weave-net-m2m8t                          2/2     Running            11         20d
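The pod runs two containers, metrics-server and metrics-server-nanny, and READY 0/2 alone does not say which of them is failing. A quick way to see the per-container restart counts before pulling logs (a sketch, using the pod name from the listing above):

kubectl -n kube-system get pod metrics-server-v0.3.3-6cbfd7955f-v29n7 \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'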

So I checked the logs. The metrics-server container reports the following errors:

[root@master metrics-server]# kubectl logs metrics-server-v0.3.3-549cc669c7-gxvgw -c metrics-server -n kube-system 
Flag --deprecated-kubelet-completely-insecure has been deprecated, This is rarely the right option, since it leaves kubelet communication completely insecure. If you encounter auth errors, make sure you've enabled token webhook auth on the Kubelet, and if you're in a test cluster with self-signed Kubelet certificates, consider using kubelet-insecure-tls instead.
I0531 04:36:16.382275 1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0531 04:36:16.963800 1 secure_serving.go:116] Serving securely on [::]:443
E0531 04:36:46.981720 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:worker1: unable to fetch metrics from Kubelet worker1 (10.5.24.223): Get http://10.5.24.223:10255/stats/summary/: dial tcp 10.5.24.223:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:worker3: unable to fetch metrics from Kubelet worker3 (10.5.24.225): Get http://10.5.24.225:10255/stats/summary/: dial tcp 10.5.24.225:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:master: unable to fetch metrics from Kubelet master (10.5.24.222): Get http://10.5.24.222:10255/stats/summary/: dial tcp 10.5.24.222:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:worker2: unable to fetch metrics from Kubelet worker2 (10.5.24.224): Get http://10.5.24.224:10255/stats/summary/: dial tcp 10.5.24.224:10255: connect: connection refused]
E0531 04:37:16.978839 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:worker1: unable to fetch metrics from Kubelet worker1 (10.5.24.223): Get http://10.5.24.223:10255/stats/summary/: dial tcp 10.5.24.223:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:master: unable to fetch metrics from Kubelet master (10.5.24.222): Get http://10.5.24.222:10255/stats/summary/: dial tcp 10.5.24.222:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:worker3: unable to fetch metrics from Kubelet worker3 (10.5.24.225): Get http://10.5.24.225:10255/stats/summary/: dial tcp 10.5.24.225:10255: connect: connection refused, unable to fully scrape metrics from source kubelet_summary:worker2: unable to fetch metrics from Kubelet worker2 (10.5.24.224): Get http://10.5.24.224:10255/stats/summary/: dial tcp 10.5.24.224:10255: connect: connection refused]
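The repeated connection-refused errors on port 10255 mean metrics-server is trying the kubelet's read-only HTTP port, which is not open on these nodes (kubeadm-installed clusters commonly keep it closed). A quick check on any node, assuming the iproute2 ss tool is installed:

# Nothing should be listening on the read-only port ...
ss -tlnp | grep 10255 || echo "read-only port 10255 is closed"
# ... while the kubelet's secure port is up:
ss -tlnp | grep 10250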

The metrics-server-nanny container reports the following errors:

[root@master metrics-server]# kubectl logs metrics-server-v0.3.3-55f94f8d76-ghlfb -n kube-system -c metrics-server-nanny
ERROR: logging before flag.Parse: I0531 06:20:40.721189 1 pod_nanny.go:65] Invoked by [/pod_nanny --config-dir=/etc/config --cpu={{ base_metrics_server_cpu }} --extra-cpu=0.5m --memory={{ base_metrics_server_memory }} --extra-memory={{ metrics_server_memory_per_node }}Mi --threshold=5 --deployment=metrics-server-v0.3.3 --container=metrics-server --poll-period=300000 --estimator=exponential --minClusterSize={{ metrics_server_min_cluster_size }}]
invalid argument "{{ metrics_server_min_cluster_size }}" for "--minClusterSize" flag: strconv.ParseUint: parsing "{{ metrics_server_min_cluster_size }}": invalid syntax
Usage of /pod_nanny:
 --config-dir string Path of configuration containing base resource requirements. (default "MISSING")
 --container string The name of the container to watch. This defaults to the nanny itself. (default "pod-nanny")
 --cpu string The base CPU resource requirement.
 --deployment string The name of the deployment being monitored. This is required.
 --estimator string The estimator to use. Currently supported: linear, exponential (default "linear")
 --extra-cpu string The amount of CPU to add per node.
 --extra-memory string The amount of memory to add per node.
 --extra-storage string The amount of storage to add per node. (default "0Gi")
 --memory string The base memory resource requirement.
 --minClusterSize uint The smallest number of nodes resources will be scaled to. Must be > 1. This flag is used only when an exponential estimator is used. (default 16)
 --namespace string The namespace of the ward. This defaults to the nanny pod's own namespace. (default "kube-system")
 --pod string The name of the pod to watch. This defaults to the nanny's own pod. (default "metrics-server-v0.3.3-55f94f8d76-ghlfb")
 --poll-period int The time, in milliseconds, to poll the dependent container. (default 10000)
 --storage string The base storage resource requirement. (default "MISSING")
 --threshold int A number between 0-100. The dependent's resources are rewritten when they deviate from expected by more than threshold.
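The {{ base_metrics_server_cpu }}-style tokens in the nanny arguments are template placeholders from the upstream GCE addon manifests; the addon machinery normally substitutes them, but here they were applied verbatim and pod_nanny fails to parse them. A quick sanity check along these lines (a sketch, assuming the manifests sit in the current directory) finds every leftover placeholder:

grep -n '{{' *.yaml   # any match will be passed to pod_nanny literally and break flag parsing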

Solution: change metrics-server-deployment.yaml to the following:

[root@master metrics-server]# cat metrics-server-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-v0.3.3
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        # These are needed for GKE, which doesn't support secure communication yet.
        # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
        #- --kubelet-port=10255
        #- --deprecated-kubelet-completely-insecure=true
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        #- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        #- --cpu={{ base_metrics_server_cpu }}
        - --extra-cpu=0.5m
        #- --memory={{ base_metrics_server_memory }}
        #- --extra-memory={{ metrics_server_memory_per_node }}Mi
        - --threshold=5
        - --deployment=metrics-server-v0.3.3
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
        # Specifies the smallest cluster (defined in number of nodes)
        # resources will be scaled to.
        - --minClusterSize=2
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

Modify resource-reader.yaml to:

[root@master metrics-server]# cat resource-reader.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
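metrics-server needs get on nodes/stats in order to read the kubelet Summary API, which is what the ClusterRole above grants. Whether the binding actually takes effect can be verified with kubectl's built-in authorization query (a sketch; the ServiceAccount name matches the manifests above):

kubectl auth can-i get nodes/stats --as=system:serviceaccount:kube-system:metrics-server
# expected output: yes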

After these changes, re-applying the manifests succeeds.
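Re-applying is just a matter of pointing kubectl at the edited manifests (a sketch that assumes all of the metrics-server YAML files live in the current directory):

kubectl apply -f .
kubectl -n kube-system get pods -l k8s-app=metrics-server -w   # wait for the new pod to reach Running

Once the pod is Running and a collection cycle or two has passed, kubectl top starts returning data: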

[root@master metrics-server]# kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master    336m         8%     2791Mi          36%
worker1   258m         6%     1583Mi          59%
worker2   172m         4%     1451Mi          54%
worker3   312m         7%     1672Mi          62%
[root@master metrics-server]# kubectl top pods -n kube-system 
NAME                                     CPU(cores)   MEMORY(bytes)
coredns-fb8b8dccf-9bnvc                  11m          16Mi
coredns-fb8b8dccf-n5bzb                  9m           17Mi
etcd-master                              39m          361Mi
kube-apiserver-master                    116m         410Mi
kube-controller-manager-master           44m          66Mi
kube-proxy-cmmbw                         2m           17Mi
kube-proxy-dnw9v                         3m           23Mi
kube-proxy-s8zwl                         5m           16Mi
kube-proxy-tt2vb                         8m           22Mi
kube-scheduler-master                    3m           18Mi
kubernetes-dashboard-5f7b999d65-n56td    1m           25Mi
metrics-server-v0.3.3-7856b88bf4-hstvq   3m           22Mi
weave-net-6pv9w                          3m           114Mi
weave-net-9dsxr                          3m           123Mi
weave-net-lt8vv                          4m           105Mi
weave-net-m2m8t                          5m           115Mi
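One additional check worth running (not part of the original output): kubectl top only works once the Metrics API is registered and reported as available.

kubectl get apiservice v1beta1.metrics.k8s.io   # AVAILABLE should show True once metrics-server is serving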

Reference: https://github.com/kubernetes-incubator/metrics-server/issues/131

When reposting, please credit: lampNick » Problems encountered when deploying metrics-server in Kubernetes (k8s)
