Become familiar with Pod deployment and management
Master Pod scheduling strategies
Master Pod label management
Understand Pod resource requests and limits



Pod Scheduling Strategies
Node-Based Scheduling
When creating a Pod, we can configure scheduling rules so that the Pod runs on a specified node.
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
spec:
  nodeName: node-0001      # Schedule based on node name
  containers:
  - name: apache
    image: myos:httpd

# Test note: if the specified node cannot run the Pod, it will not be migrated to another node; it will wait indefinitely
[root@master ~]# kubectl apply -f myhttp.yaml
pod/myhttp created
[root@master ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE
myhttp   1/1     Running   0          3s    10.244.1.6   node-0001
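Because nodeName bypasses the scheduler, a Pod bound to a missing or unschedulable node simply stays Pending. A quick way to investigate (a sketch using standard kubectl commands; output omitted):

[root@master ~]# kubectl describe pod myhttp     # inspect status and events when the Pod stays Pending
[root@master ~]# kubectl get pod myhttp -o wide  # confirm which node the Pod was bound to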
Label Management
Labels in the kubernetes.io namespace are system labels and must not be modified.


Querying Labels
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE     LABELS
myhttp   1/1     Running   0          2m34s   <none>
[root@master ~]# kubectl get namespaces --show-labels
NAME              STATUS   AGE     LABELS
default           Active   3h44m   kubernetes.io/metadata.name=default
kube-node-lease   Active   3h44m   kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   3h44m   kubernetes.io/metadata.name=kube-public
kube-system       Active   3h44m   kubernetes.io/metadata.name=kube-system
[root@master ~]# kubectl get nodes --show-labels
NAME        STATUS   ROLES           VERSION   LABELS
master      Ready    control-plane   v1.26.0   kubernetes.io/hostname=master
node-0001   Ready    <none>          v1.26.0   kubernetes.io/hostname=node-0001
node-0002   Ready    <none>          v1.26.0   kubernetes.io/hostname=node-0002
node-0003   Ready    <none>          v1.26.0   kubernetes.io/hostname=node-0003
node-0004   Ready    <none>          v1.26.0   kubernetes.io/hostname=node-0004
node-0005   Ready    <none>          v1.26.0   kubernetes.io/hostname=node-0005
Filtering with Labels
# Filter resource objects using labels
[root@master ~]# kubectl get nodes -l kubernetes.io/hostname=master
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   3h38m   v1.26.0
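Label selectors also support multiple conditions, negation, and set-based matching. A brief sketch (the label values version=v1 and nginx are only illustrative, not from this lab):

[root@master ~]# kubectl get pods -l app=apache,version=v1     # match both labels
[root@master ~]# kubectl get pods -l app!=apache               # negation
[root@master ~]# kubectl get pods -l 'app in (apache,nginx)'   # set-based match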
Adding Labels
[root@master ~]# kubectl label pod myhttp app=apache
pod/myhttp labeled
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          14m   app=apache
Removing Labels
[root@master ~]# kubectl label pod myhttp app-
pod/myhttp labeled
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          14m   <none>
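An existing label value can also be changed in place; kubectl requires the --overwrite flag to replace it (a sketch reusing the app label from above; output omitted):

[root@master ~]# kubectl label pod myhttp app=apache
[root@master ~]# kubectl label pod myhttp app=nginx --overwrite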
Labels in Resource Files
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
  labels:           # Declare labels
    app: apache     # Label key/value pair
spec:
  containers:
  - name: apache
    image: myos:httpd

[root@master ~]# kubectl delete pods myhttp
pod "myhttp" deleted
[root@master ~]# kubectl apply -f myhttp.yaml
pod/myhttp created
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          14m   app=apache
Label-Based Scheduling
[root@master ~]# kubectl get nodes node-0002 --show-labels
NAME        STATUS   ROLES    VERSION   LABELS
node-0002   Ready    <none>   v1.26.0   kubernetes.io/hostname=node-0002 ...
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
  labels:
    app: apache
spec:
  nodeSelector:                          # Schedule based on node labels
    kubernetes.io/hostname: node-0002    # The label to match
  containers:
  - name: apache
    image: myos:httpd

[root@master ~]# kubectl delete pods myhttp
pod "myhttp" deleted
[root@master ~]# kubectl apply -f myhttp.yaml
pod/myhttp created
[root@master ~]# kubectl get pods -l app=apache -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE
myhttp   1/1     Running   0          9s    10.244.2.11   node-0002
Container Scheduling (Case 2)

[root@master ~]# kubectl label nodes node-0002 node-0003 disktype=ssd
node/node-0002 labeled
node/node-0003 labeled
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
  labels:
    app: apache
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: apache
    image: myos:httpd

[root@master ~]# sed "s,myhttp,web1," myhttp.yaml |kubectl apply -f -
[root@master ~]# sed "s,myhttp,web2," myhttp.yaml |kubectl apply -f -
[root@master ~]# sed "s,myhttp,web3," myhttp.yaml |kubectl apply -f -
[root@master ~]# sed "s,myhttp,web4," myhttp.yaml |kubectl apply -f -
[root@master ~]# sed "s,myhttp,web5," myhttp.yaml |kubectl apply -f -
[root@master ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE
myhttp   1/1     Running   0          29m   10.244.2.30   node-0002
web1     1/1     Running   0          10s   10.244.2.31   node-0002
web2     1/1     Running   0          10s   10.244.2.32   node-0002
web3     1/1     Running   0          10s   10.244.3.45   node-0003
web4     1/1     Running   0          10s   10.244.3.46   node-0003
web5     1/1     Running   0          10s   10.244.3.47   node-0003
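You can confirm which nodes carry the custom label, and therefore why all of the web Pods landed on node-0002 and node-0003 (a sketch; output omitted):

[root@master ~]# kubectl get nodes -l disktype=ssd
[root@master ~]# kubectl get nodes -L disktype    # show the disktype label as an extra column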
Cleaning Up the Lab Configuration
[root@master ~]# kubectl delete pod -l app=apache
pod "myhttp" deleted
pod "web1" deleted
pod "web2" deleted
pod "web3" deleted
pod "web4" deleted
pod "web5" deleted
[root@master ~]# kubectl label nodes node-0002 node-0003 disktype-
node/node-0002 labeled
node/node-0003 labeled
Pod Resource Requests (requests): ensuring the application has the resources it needs to run

Resource Object File
[root@master ~]# vim minpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: minpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
The field name terminationGracePeriodSeconds is long and hard to remember; you can look it up with kubectl explain Pod.spec.
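For example, you can drill straight down to the field itself (a quick sketch; the documentation text is printed by the cluster):

[root@master ~]# kubectl explain pod.spec.terminationGracePeriodSeconds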
Memory Requests
[root@master ~]# vim minpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: minpod
spec:
  terminationGracePeriodSeconds: 0
  nodeSelector:                          # Schedule the Pod to a specific node
    kubernetes.io/hostname: node-0003    # Create it on node-0003
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:           # Resource policy
      requests:          # Request policy
        memory: 1100Mi   # Memory request
# Verify the request policy
[root@master ~]# for i in app{1..5};do sed "s,minpod,${i}," minpod.yaml;done |kubectl apply -f -
pod/app1 created
pod/app2 created
pod/app3 created
pod/app4 created
pod/app5 created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
app1 1/1 Running 0 4s
app2 1/1 Running 0 4s
app3 1/1 Running 0 4s
app4 0/1 Pending 0 4s
app5 0/1 Pending 0 4s
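app4 and app5 stay Pending because node-0003 no longer has 1100Mi of unreserved memory left to satisfy their requests. Two ways to confirm this (a sketch; the exact numbers depend on your node size):

[root@master ~]# kubectl describe node node-0003   # compare Allocatable with Allocated resources
[root@master ~]# kubectl describe pod app4         # the Events section records the scheduler's reason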
# Clean up the lab configuration
[root@master ~]# kubectl delete pod --all
CPU Requests
[root@master ~]# vim minpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: minpod
spec:
  terminationGracePeriodSeconds: 0
  nodeSelector:
    kubernetes.io/hostname: node-0003
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      requests:
        cpu: 800m    # CPU request
# Verify the request policy
[root@master ~]# for i in app{1..5};do sed "s,minpod,${i}," minpod.yaml;done |kubectl apply -f -
pod/app1 created
pod/app2 created
pod/app3 created
pod/app4 created
pod/app5 created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
app1 1/1 Running 0 8s
app2 1/1 Running 0 8s
app3 0/1 Pending 0 8s
app4 0/1 Pending 0 8s
app5 0/1 Pending 0 8s
# Clean up the lab configuration
[root@master ~]# kubectl delete pod --all
Pod Resource Limits (limits)
Limiting Memory and CPU
# Create the resource object file with limits
[root@master ~]# vim maxpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: maxpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      limits:
        cpu: 800m
        memory: 2000Mi
[root@master ~]# kubectl apply -f maxpod.yaml
pod/maxpod created
Verifying the Memory Limit
# Copy the test script into the container
[root@master ~]# kubectl cp memtest.py maxpod:/usr/bin/
[root@master ~]# kubectl exec -it maxpod -- /bin/bash
# Requesting more than 2000Mi fails
[root@maxpod /]# memtest.py 2500
Killed
# Requesting less than 2000Mi succeeds
[root@maxpod /]# memtest.py 1500
use memory success
press any key to exit :
Verifying the CPU Limit
[root@master ~]# kubectl exec -it maxpod -- ps aux
USER PID %CPU %MEM VSZ RSS STAT START TIME COMMAND
root 1 79.9 0.0 9924 720 Rs 18:25 1:19 awk BEGIN{while(1){}}
root 8 0.5 0.0 12356 2444 Ss 18:26 0:00 /bin/bash
[root@master ~]# kubectl top pods
NAME CPU(cores) MEMORY(bytes)
maxpod 834m 1Mi
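Without a limit, the awk busy loop would consume a full CPU core; with limits.cpu: 800m the container is throttled to roughly 0.8 cores, and kubectl top readings fluctuate slightly around that value because of sampling.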
# Clean up the lab Pod
[root@master ~]# kubectl delete pod maxpod
pod "maxpod" deleted
Global Resource Management

LimitRange
Use kubectl api-resources | grep -i limitrange to find the kind name.
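For example (a sketch; output omitted), the kind and its field documentation can be read straight from the API:

[root@master ~]# kubectl api-resources | grep -i limitrange
[root@master ~]# kubectl explain limitrange.spec.limits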
Default Quota Policy
# Create the namespace
[root@master ~]# kubectl create namespace work
namespace/work created
# Set default quotas
[root@master ~]# vim limit.yaml
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimit
  namespace: work
spec:
  limits:
  - type: Container
    default:           # Default resource limits for containers
      cpu: 300m
      memory: 500Mi
    defaultRequest:    # Default resource requests for containers
      cpu: 8m
      memory: 8Mi
[root@master ~]# kubectl -n work apply -f limit.yaml
limitrange/mylimit created
Verifying the Default Policy
[root@master ~]# vim maxpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: maxpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
[root@master ~]# kubectl -n work apply -f maxpod.yaml
pod/maxpod created
[root@master ~]# kubectl -n work describe pod maxpod
... ...
Limits:
cpu: 300m
memory: 500Mi
Requests:
cpu: 10m
memory: 8Mi
... ...
[root@master ~]# kubectl -n work top pods
NAME CPU(cores) MEMORY(bytes)
maxpod 300m 0Mi
User-Defined Resource Settings
[root@master ~]# vim maxpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: maxpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      requests:
        cpu: 10m
        memory: 10Mi
      limits:
        cpu: 1100m
        memory: 2000Mi
[root@master ~]# kubectl -n work delete -f maxpod.yaml
pod "maxpod" deleted
[root@master ~]# kubectl -n work apply -f maxpod.yaml
pod/maxpod created
[root@master ~]# kubectl -n work describe pod maxpod
... ...
Limits:
cpu: 1100m
memory: 2000Mi
Requests:
cpu: 10m
memory: 10Mi
... ...
[root@master ~]# kubectl -n work top pods maxpod
NAME CPU(cores) MEMORY(bytes)
maxpod 1000m 0Mi
Resource Limit Ranges (max/min)
[root@master ~]# vim limit.yaml
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimit
  namespace: work
spec:
  limits:
  - type: Container
    default:           # Default resource limits
      cpu: 300m
      memory: 500Mi
    defaultRequest:    # Default resource requests
      cpu: 8m
      memory: 8Mi
    max:               # Maximum resources a container may use
      cpu: 800m
      memory: 1000Mi
    min:               # Minimum resources a container may use
      cpu: 2m
      memory: 8Mi
[root@master ~]# kubectl -n work apply -f limit.yaml
limitrange/mylimit configured
[root@master ~]# kubectl -n work delete -f maxpod.yaml
pod "maxpod" deleted
[root@master ~]# kubectl -n work apply -f maxpod.yaml
Error from server (Forbidden): error when creating "maxpod.yaml": pods "maxpod" is forbidden: [maximum cpu usage per Container is 800m, but limit is 1100, maximum memory usage per Container is 1000Mi, but limit is 2000Mi]
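The constraints currently enforced in the namespace can be reviewed at any time (a sketch; kubectl prints a table of min/max/default values per resource):

[root@master ~]# kubectl -n work describe limitrange mylimit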
Multi-Container Resource Quotas
[root@master ~]# vim maxpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: maxpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      requests:
        cpu: 10m
        memory: 10Mi
      limits:
        cpu: 800m
        memory: 1000Mi
  - name: linux1
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      requests:
        cpu: 10m
        memory: 10Mi
      limits:
        cpu: 800m
        memory: 1000Mi
[root@master ~]# kubectl -n work apply -f maxpod.yaml
pod/maxpod created
[root@master ~]# kubectl -n work get pods
NAME READY STATUS RESTARTS AGE
maxpod 2/2 Running 0 50s
[root@master ~]# kubectl -n work top pods maxpod
NAME CPU(cores) MEMORY(bytes)
maxpod 1610m 0Mi
Pod-Level Resource Quotas
[root@master ~]# vim limit.yaml
---
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimit
  namespace: work
spec:
  limits:
  - type: Container
    default:
      cpu: 300m
      memory: 500Mi
    defaultRequest:
      cpu: 8m
      memory: 8Mi
    max:
      cpu: 800m
      memory: 1000Mi
    min:
      cpu: 2m
      memory: 8Mi
  - type: Pod
    max:
      cpu: 1200m
      memory: 1200Mi
    min:
      cpu: 2m
      memory: 8Mi
[root@master ~]# kubectl -n work apply -f limit.yaml
limitrange/mylimit configured
[root@master ~]# kubectl -n work delete -f maxpod.yaml
pod "maxpod" deleted
[root@master ~]# kubectl -n work apply -f maxpod.yaml
Error from server (Forbidden): error when creating "maxpod.yaml": pods "maxpod" is forbidden: [maximum cpu usage per Pod is 1200m, but limit is 1600m, maximum memory usage per Pod is 1200Mi, but limit is 2097152k]
Multiple Pods Consuming Resources
[root@master ~]# vim maxpod.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: maxpod
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: linux
    image: myos:8.5
    command: ["awk", "BEGIN{while(1){}}"]
    resources:
      requests:
        cpu: 10m
        memory: 10Mi
      limits:
        cpu: 800m
        memory: 1000Mi
# Creating too many Pods will also exhaust node resources
[root@master ~]# for i in app{1..9};do sed "s,maxpod,${i}," maxpod.yaml ;done |kubectl -n work apply -f -
# After the Pods are created, check node resource usage
[root@master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 81m 4% 1040Mi 27%
node-0001 1800m 90% 403Mi 10%
node-0002 1825m 86% 457Mi 11%
node-0003 1816m 85% 726Mi 19%
node-0004 1823m 86% 864Mi 21%
node-0005 1876m 88% 858Mi 21%
# Clean up the lab configuration
[root@master ~]# kubectl -n work delete pods --all
ResourceQuota: limits the total amount of resources consumed within a namespace

Namespace-Wide Quota Policy
[root@master ~]# vim quota.yaml
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
  namespace: work
spec:
  hard:
    requests.cpu: 1000m
    requests.memory: 2000Mi
    limits.cpu: 5000m
    limits.memory: 8Gi
    pods: 3
[root@master ~]# kubectl -n work apply -f quota.yaml
resourcequota/myquota created
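As Pods are created you can watch the quota being consumed (a sketch; kubectl reports Used against Hard for every tracked resource):

[root@master ~]# kubectl -n work describe resourcequota myquota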
Verifying the Quota
[root@master ~]# for i in app{1..5};do sed "s,maxpod,${i}," maxpod.yaml ;done |kubectl -n work apply -f -
pod/app1 created
pod/app2 created
pod/app3 created
Error from server (Forbidden): error when creating "STDIN": pods "app4" is forbidden: exceeded quota: myquota, requested: pods=1, used: pods=3, limited: pods=3
Error from server (Forbidden): error when creating "STDIN": pods "app5" is forbidden: exceeded quota: myquota, requested: pods=1, used: pods=3, limited: pods=3
# Delete the lab Pods and the quota rules
[root@master ~]# kubectl -n work delete pods --all
pod "app1" deleted
pod "app2" deleted
pod "app3" deleted
[root@master ~]# kubectl -n work delete -f limit.yaml -f quota.yaml
limitrange "mylimit" deleted
resourcequota "myquota" deleted
[root@master ~]# kubectl delete namespace work
namespace "work" deleted
