kubectl taint
The official Taints and Tolerations documentation covers the topic in detail, but a few points deserve extra attention:
NoSchedule
The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler; the Pod is then bound onto the node where you assigned it, even if there are NoSchedule taints on that node that you selected. If this happens and the node also has a NoExecute taint set, the kubelet will eject the Pod unless there is an appropriate tolerance set.
Put simply: if a node carries a taint whose effect is NoSchedule, a Pod without a matching toleration normally cannot be scheduled onto that node. But if the Pod has no toleration and you manually set nodeName, it gets bound to that node anyway, bypassing the scheduler entirely. (Note: scheduling Pods this way is not recommended.)
- Pod with no toleration, nodeName specified manually:
apiVersion: v1
kind: Pod
metadata:
  name: test-nodename
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: backup
  nodeName: node2-192-168-240-101
- The Pod is running normally on node node2-192-168-240-101:
# kubectl get pods test-nodename -owide
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE                    NOMINATED NODE   READINESS GATES
test-nodename   1/1     Running   0          64m   10.233.108.30   node2-192-168-240-101   <none>           <none>
- Node node2-192-168-240-101 has a taint set:
# kubectl describe node node2-192-168-240-101 | grep Taint -A2
Taints: INFRA=true:NoSchedule
Unschedulable: false
Lease:
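For reference, a taint like the one shown above is added and removed with kubectl taint. This is a minimal sketch reusing the node name and the INFRA=true key/value from the output above; the trailing minus removes the taint:

# kubectl taint nodes node2-192-168-240-101 INFRA=true:NoSchedule
# kubectl taint nodes node2-192-168-240-101 INFRA=true:NoSchedule-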
NoExecute
This affects pods that are already running on the node as follows:
- Pods that do not tolerate the taint are evicted immediately
- Pods that tolerate the taint without specifying tolerationSeconds in their toleration specification remain bound forever
- Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. After that time elapses, the node lifecycle controller evicts the Pods from the node.
Pay particular attention to this line:
Pods that tolerate the taint without specifying tolerationSeconds in their toleration specification remain bound forever
If a node has a NoExecute taint and your Pod carries a matching toleration but does not set tolerationSeconds, then even if the node goes down, the Pod will not be evicted and rescheduled. So if you ever find that a node has gone NotReady for whatever reason yet its Pods are not being rescheduled to other nodes, check whether this is the cause.
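A minimal sketch of how tolerationSeconds changes this behavior (the Pod name and the 300-second value are illustrative, not taken from the article): with a toleration for the node.kubernetes.io/not-ready NoExecute taint that specifies tolerationSeconds, the Pod stays on a NotReady node only for that long before the node lifecycle controller evicts it; omit tolerationSeconds and it remains bound forever:

apiVersion: v1
kind: Pod
metadata:
  name: test-toleration            # illustrative name
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: backup
  tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300         # evicted 300s after the taint appears; omit to remain bound forever

For ordinary Pods the DefaultTolerationSeconds admission controller adds exactly this toleration (plus one for node.kubernetes.io/unreachable) with tolerationSeconds: 300, which is why Pods are normally evicted about five minutes after a node goes NotReady.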