k8s nodeName nodeSelector deployment pod testing restart ops

This article walks through hands-on tests of deploying an application on Kubernetes and verifies how k8s responds to various container and node failures: container restarts, node reboots, and node unavailability.

Environment:

master: 132, node1: 11, node2: 12


The test covers the following scenarios:

1. Pinning a container to a specific node

2. After a container dies

3. After a node reboots

4. After a node becomes unavailable

5. After the node recovers

The goal is to run through each of these scenarios and check whether k8s keeps the application available the way we expect.

1. Pinning a container to a specific node

This can be done with either nodeName or nodeSelector.

For better extensibility and customization, nodeSelector is used here.
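The nodeSelector semantics are simple: a node qualifies only if its labels contain every key/value pair in the pod's nodeSelector. A minimal sketch of that matching rule (illustrative only, not the actual scheduler code; the label values mirror this article's setup):

```python
def matches_node_selector(node_labels, node_selector):
    """True if the node's labels contain every key/value pair in the selector."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# node1 carries the imgcloud/app=dfs-data label used below; node2 does not.
node1 = {"kubernetes.io/hostname": "10.3.14.11", "imgcloud/app": "dfs-data"}
node2 = {"kubernetes.io/hostname": "10.3.14.12"}
selector = {"imgcloud/app": "dfs-data"}
print(matches_node_selector(node1, selector))  # True
print(matches_node_selector(node2, selector))  # False
```

An empty nodeSelector matches every node, which is why pods without one can land anywhere.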

The Deployment YAML file:

[root@yzb-centos72-3 imgcloud]# more dfs-data-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dfs-data-depl
  namespace: imgcloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: dfs-data
    spec:
      nodeSelector: 
        imgcloud/app: dfs-data
      containers:
      - name: dfs-data
        image: registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data
        imagePullPolicy: Always
        ports:
        - containerPort: 2101
          hostPort: 2101
        - containerPort: 3101
          hostPort: 3101

Before creating this Deployment, label the target node with kubectl label.
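For reference, the labeling step might look like this (the node name and label key/value are taken from this article's setup; adjust for your own cluster):

```shell
# Label node1 so it matches the Deployment's nodeSelector (imgcloud/app: dfs-data)
kubectl label node 10.3.14.11 imgcloud/app=dfs-data

# Verify the label was applied
kubectl get node 10.3.14.11 --show-labels
```

If the label is missing, the pod stays Pending with a MatchNodeSelector scheduling failure, as the events later in this article show.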

After creating the Deployment, check the pod status:

[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces -o wide
NAMESPACE   NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
imgcloud    dfs-data-depl-2521169640-62m79   1/1       Running   0          19m       172.17.73.2   10.3.14.11
Accessing 11 works.



2. After a container dies

Stop the container:

[root@k8s-node-11 k8s]# docker ps
CONTAINER ID        IMAGE                                                           COMMAND             CREATED             STATUS              
1a0e1704331a        registry.access.redhat.com/rhel7/pod-infrastructure:latest      "/pod"              45 seconds ago      Up 43 seconds       
[root@k8s-node-11 k8s]#
[root@k8s-node-11 k8s]# docker stop 3fcf58bbd812
3fcf58bbd812
[root@k8s-node-11 k8s]# docker ps
CONTAINER ID        IMAGE                                                           COMMAND             CREATED             STATUS                                                        
fed8ef3177c6        registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data   "/usr/sbin/init"    8 seconds ago       Up 7 seconds        
1a0e1704331a        registry.access.redhat.com/rhel7/pod-infrastructure:latest      "/pod"              18 minutes ago      Up 18 minutes      
[root@k8s-node-11 k8s]# 

A new container was started almost immediately, which is exactly what we want.
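This restart is the kubelet enforcing the pod's restartPolicy (Always by default). If the container kept crashing, restarts would be throttled by CrashLoopBackOff, whose delay roughly doubles from 10s up to a 5-minute cap. A sketch of that delay schedule (an approximation for illustration, not kubelet code):

```python
def backoff_delays(restarts, base=10, cap=300):
    """Approximate CrashLoopBackOff delays in seconds: 10, 20, 40, ... capped at 300."""
    delays, d = [], base
    for _ in range(restarts):
        delays.append(min(d, cap))
        d *= 2
    return delays

print(backoff_delays(6))  # [10, 20, 40, 80, 160, 300]
```

A single manual docker stop, as above, never reaches the back-off: the first restart is nearly instant.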

3. After a node reboots

After rebooting node1:

[root@k8s-node-11 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@k8s-node-11 ~]# 
[root@k8s-node-11 ~]# 
[root@k8s-node-11 ~]# 
[root@k8s-node-11 ~]# netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2074/sshd           
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2317/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      751/kube-proxy      
tcp6       0      0 :::10250                :::*                    LISTEN      2317/kubelet        
tcp6       0      0 :::10255                :::*                    LISTEN      2317/kubelet        
tcp6       0      0 :::2101                 :::*                    LISTEN      2523/docker-proxy-c 
tcp6       0      0 :::3101                 :::*                    LISTEN      2514/docker-proxy-c 
tcp6       0      0 :::4194                 :::*                    LISTEN      2317/kubelet        
[root@k8s-node-11 ~]# 
[root@k8s-node-11 ~]# docker ps
CONTAINER ID        IMAGE                                                           COMMAND             CREATED             STATUS              PORTS                                            NAMES
ed9be07a17f6        registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data   "/usr/sbin/init"    13 seconds ago      Up 12 seconds                                                        k8s_dfs-data.a55c69cf_dfs-data-depl-2521169640-62m79_imgcloud_dd8bf189-5af9-11e7-a7aa-0669a40010d2_368201ac
5407f90223f6        registry.access.redhat.com/rhel7/pod-infrastructure:latest      "/pod"              15 seconds ago      Up 14 seconds       0.0.0.0:2101->2101/tcp, 0.0.0.0:3101->3101/tcp   k8s_POD.92d92fce_dfs-data-depl-2521169640-62m79_imgcloud_dd8bf189-5af9-11e7-a7aa-0669a40010d2_b6a7a804
[root@k8s-node-11 ~]# 

While node1 was rebooting, node2 did not create a replacement pod, so the app was unreachable during that window. Once node1 came back up, the service was restarted quickly.


4. After a node becomes unavailable

I stopped flannel on node1, then checked the node, Deployment, and pod:

[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME           STATUS    AGE
10.3.14.11     Ready     1h
10.3.14.12     Ready     1h
172.20.4.133   Ready     3d
[root@yzb-centos72-3 imgcloud]# 
[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME           STATUS     AGE
10.3.14.11     NotReady   1h
10.3.14.12     Ready      1h
172.20.4.133   Ready      3d
[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE   NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
imgcloud    dfs-data-depl   1         1         1            0           51m
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE   NAME                             READY     STATUS    RESTARTS   AGE
imgcloud    dfs-data-depl-2521169640-62m79   1/1       Running   2          33m
[root@yzb-centos72-3 imgcloud]# 

The node status changed to NotReady and the Deployment's AVAILABLE count dropped to 0, but the pod itself was unchanged.

Checking the pod with kubectl describe, after a while (I didn't time it; roughly 2 minutes) the controller-manager kicked in:

Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  36m           33m             14      {default-scheduler }                    Warning         FailedScheduling        pod (dfs-data-depl-2521169640-62m79) failed to fit in any node
fit failure summary on nodes : MatchNodeSelector (1)
  33m   33m     1       {default-scheduler }                                    Normal  Scheduled               Successfully assigned dfs-data-depl-2521169640-62m79 to 10.3.14.11
  33m   33m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Created                 Created container with docker id 3fcf58bbd812; Security:[seccomp=unconfined]
  33m   33m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Started                 Started container with docker id 3fcf58bbd812
  33m   14m     2       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Pulling                 pulling image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
  33m   14m     2       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Pulled                  Successfully pulled image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
  14m   14m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Started                 Started container with docker id fed8ef3177c6
  14m   14m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Created                 Created container with docker id fed8ef3177c6; Security:[seccomp=unconfined]
  10m   10m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Pulling                 pulling image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
  10m   10m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Pulled                  Successfully pulled image "registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data"
  10m   10m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Created                 Created container with docker id ed9be07a17f6; Security:[seccomp=unconfined]
  10m   10m     1       {kubelet 10.3.14.11}    spec.containers{dfs-data}       Normal  Started                 Started container with docker id ed9be07a17f6
  21s   21s     1       {controllermanager }                                    Normal  NodeControllerEviction  Marking for deletion Pod dfs-data-depl-2521169640-62m79 from Node 10.3.14.11
A new container was started on 12:

[root@iz2ze0fq2isg8vphkpos5tz ~]# docker ps
CONTAINER ID        IMAGE                                                           COMMAND             CREATED             STATUS              PORTS                                            NAMES
35701d1242b9        registry.cn-beijing.aliyuncs.com/zybros/env:imgcloud-dfs-data   "/usr/sbin/init"    39 seconds ago      Up 37 seconds                                                        k8s_dfs-data.a55c69cf_dfs-data-depl-2521169640-s62vc_imgcloud_e2d678e9-5afe-11e7-a7aa-0669a40010d2_6463e4cb
2b48e0f43df9        registry.access.redhat.com/rhel7/pod-infrastructure:latest      "/pod"              40 seconds ago      Up 38 seconds       0.0.0.0:2101->2101/tcp, 0.0.0.0:3101->3101/tcp   k8s_POD.92d92fce_dfs-data-depl-2521169640-s62vc_imgcloud_e2d678e9-5afe-11e7-a7aa-0669a40010d2_dd97ae88
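The delay before eviction is governed by kube-controller-manager timing flags. In the k8s generation used here, the upstream defaults were a ~40s node-monitor grace period plus a 5m pod-eviction timeout; the roughly 2 minutes observed above suggests this cluster ran with a shorter eviction timeout than the default. The relevant flags (values shown are the upstream defaults, not necessarily this cluster's settings):

```shell
# How long a node can go unreported before being marked NotReady,
# and how long pods then wait on a NotReady node before eviction.
kube-controller-manager --node-monitor-grace-period=40s --pod-eviction-timeout=5m0s
```

Tuning --pod-eviction-timeout down makes failover faster at the cost of more churn during transient network blips.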

5. After the node recovers

[root@yzb-centos72-3 imgcloud]# kubectl get node
NAME           STATUS    AGE
10.3.14.11     Ready     1h
10.3.14.12     Ready     1h
Scale the app to 2 replicas:

[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE   NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
imgcloud    dfs-data-depl   1         1         1            1           1h
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE   NAME                             READY     STATUS    RESTARTS   AGE
imgcloud    dfs-data-depl-2521169640-s62vc   1/1       Running   0          11m
[root@yzb-centos72-3 imgcloud]# 
[root@yzb-centos72-3 imgcloud]# kubectl scale deployment dfs-data-depl -n imgcloud --replicas=2
deployment "dfs-data-depl" scaled
[root@yzb-centos72-3 imgcloud]# kubectl get deployment --all-namespaces
NAMESPACE   NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
imgcloud    dfs-data-depl   2         2         2            2           1h
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces
NAMESPACE   NAME                             READY     STATUS    RESTARTS   AGE
imgcloud    dfs-data-depl-2521169640-ggptp   1/1       Running   0          10s
imgcloud    dfs-data-depl-2521169640-s62vc   1/1       Running   0          12m
[root@yzb-centos72-3 imgcloud]# kubectl get pod --all-namespaces -o wide
NAMESPACE   NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
imgcloud    dfs-data-depl-2521169640-ggptp   1/1       Running   0          17s       172.17.73.2   10.3.14.11
imgcloud    dfs-data-depl-2521169640-s62vc   1/1       Running   0          12m       172.17.97.2   10.3.14.12

As long as the master and etcd are highly available and the nodes have power and network, operation is essentially unattended and self-healing.






