How to Scale Up/Down a Deployment? - 每天5分钟玩转 Docker 容器技术 (126)

This article shows how to scale Pods in Kubernetes by modifying a Deployment, covering both increasing and decreasing the replica count, and demonstrates the specific kubectl commands.

Scaling up/down means increasing or decreasing the number of Pod replicas online.

The Deployment nginx-deployment starts with two replicas, one running on k8s-node1 and one on k8s-node2. Now edit nginx.yml and change the replica count to 5.

Run kubectl apply again.
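The change can be sketched as follows. The file name nginx.yml and the Deployment name nginx-deployment come from the text; the surrounding manifest fields are assumptions based on a typical Deployment:

```shell
# nginx.yml -- bump the replica count (sketch; fields other than replicas are assumed):
#   apiVersion: apps/v1
#   kind: Deployment
#   metadata:
#     name: nginx-deployment
#   spec:
#     replicas: 5        # was 2
#     ...

# Re-apply the manifest, then watch the new replicas get scheduled
kubectl apply -f nginx.yml
kubectl get pod -o wide
```

The `-o wide` flag adds the node column, so you can see which of k8s-node1/k8s-node2 each replica lands on.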

Three new replicas are created and scheduled onto k8s-node1 and k8s-node2.

For safety, Kubernetes in its default configuration does not schedule Pods onto the Master node. If you want k8s-master to also serve as a Node, run the following command:

kubectl taint node k8s-master node-role.kubernetes.io/master-

To restore the Master-only state, run the following command:

kubectl taint node k8s-master node-role.kubernetes.io/master="":NoSchedule
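Either way, you can verify the master node's current taint state with a standard kubectl query:

```shell
# Show the taints currently set on the master node;
# empty output (after removal) means Pods may be scheduled there
kubectl describe node k8s-master | grep -i taint
```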

Next, edit the configuration file to reduce the replica count to 3, and run kubectl apply again.

You can see that two replicas are deleted, leaving 3 replicas in the end.
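Editing the manifest and re-applying is the declarative route used throughout this series. For a quick one-off change, kubectl also has an imperative equivalent:

```shell
# Imperative alternative: change the replica count directly,
# without editing nginx.yml
kubectl scale deployment nginx-deployment --replicas=3
```

Note that the manifest on disk then no longer matches the live state, so the next `kubectl apply -f nginx.yml` will reset the count to whatever the file says.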

In the next section we will study Deployment failover.

Books:

1.《每天5分钟玩转Docker容器技术》
https://item.jd.com/16936307278.html


2.《每天5分钟玩转OpenStack》
https://item.jd.com/12086376.html

Take a look at this error (log excerpt; repetitive progress and mechanical steps elided):

```
Started by user Devops CRD
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
Original Release Version: 3
Running on Jenkins in /root/.jenkins/workspace/video-analysis-algorithm-pipeline
[Pipeline] stage (Sync Up Release)
19:14:57 Sync up video-analysis-algorithm release ...
19:14:57 CLOUD:azure
19:14:57 Major Deploy Version: 1.10.22
19:14:57 + helm3 registry login prdtplinkhelmchartzau1.azurecr.io -u bee8c203-08f1-4e0c-9799-a729a7fbfdd7 -p xWy8Q~Kdn~7ayTJL54lMJmyOdmiJVYc8zUksvaE2
19:14:57 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
19:14:57 WARNING: Using --password via the CLI is insecure. Use --password-stdin.
19:15:00 Login Succeeded
19:15:00 + helm3 show chart oci://prdtplinkhelmchartzau1.azurecr.io/video-analysis-algorithm --version 1.10.22
19:15:03 RELEASE_VERSION: 1.10.22
19:15:03 Latest Release Version: 1.10.22
Stage "Authorization" skipped due to when conditional
[Pipeline] stage (Promote Stage)
19:15:03 Promote to azure.uat-v2.azure-brazil-1? Proceed or Abort
19:15:10 Approved by Devops CRD
[Pipeline] stage (Deploy Stage)
19:15:11 Deploying video-analysis-algorithm Release v1.10.22 to azure.uat-v2.azure-brazil-1 ...
19:15:11 + git clone --depth 1 -b feature/20251110-tapocare-ai ssh://cicdtplinknbu@pdgerrit.tp-link.com:29418/tplinknbu/devops_cicd
19:15:15 Updating files: 100% (18954/18954), done.
...
19:15:16 + kubectl get deployment -n pet-app-ipc --context azure.uat-v2.azure-brazil-1
19:15:18 No resources found.
...
19:15:24 networking init
19:15:27 service/video-analysis-algorithm-zbr1 created
19:15:30 gateway.networking.istio.io/video-analysis-algorithm-zbr1-gw created
19:15:32 virtualservice.networking.istio.io/video-analysis-algorithm-zbr1-vs created
19:15:32 virtualservice.networking.istio.io/video-analysis-algorithm-internal-zbr1-vs created
19:15:33 gateway.networking.istio.io/video-analysis-algorithm-internal-grpc-zbr1-gw created
19:15:34 virtualservice.networking.istio.io/video-analysis-algorithm-internal-grpc-zbr1-vs created
19:15:34 networking init successfully
19:15:35 + AWS_DEFAULT_REGION=ap-southeast-1 AWS_PROFILE=uat helm3 upgrade --install video-analysis-algorithm-1.10.22 oci://prdtplinkhelmchartzau1.azurecr.io/video-analysis-algorithm --version 1.10.22 --values ./k8s-values/video-analysis-algorithm/azure-pet/azure-brazil-1/values.yaml --values video-analysis-algorithm-azure-pet-azure-brazil-1-config-map-data.yaml --namespace pet-app-ipc --kube-context azure.uat-v2.azure-brazil-1 --timeout 1200s --wait
19:15:37 Release "video-analysis-algorithm-1.10.22" does not exist. Installing it now.
19:15:51 W1117 19:15:51.108993 2395101 warnings.go:70] metadata.name: this is used in Pod names and hostnames, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
19:35:58 Error: Get "https://uat-k8s-nbu-zbr1-dns-3wzszvat.hcp.brazilsouth.azmk8s.io:443/apis/apps/v1/namespaces/pet-app-ipc/deployments/video-analysis-algorithm-1.10.22": context deadline exceeded
Stage "Blue-Green Release" skipped due to earlier failure(s)
Stage "Test stage" skipped due to earlier failure(s)
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
```
From the log, the pipeline failed mainly because of a **Helm deployment timeout**, specifically:

```
Error: Get "https://uat-k8s-nbu-zbr1-dns-3wzszvat.hcp.brazilsouth.azmk8s.io:443/apis/apps/v1/namespaces/pet-app-ipc/deployments/video-analysis-algorithm-1.10.22": context deadline exceeded
```

Detailed analysis and solutions follow.

---

### **1. Root cause**

#### **(1) Kubernetes API connection timeout**
- While waiting for the Deployment `video-analysis-algorithm-1.10.22` to become ready, Helm could not finish communicating with the Kubernetes API within 1200s (20 minutes).
- Possible causes:
  - **Network issues**: latency or interruptions between Jenkins and the target AKS cluster
  - **Insufficient resources**: Pods fail to schedule because cluster nodes lack capacity
  - **Image-pull problems**: the container image is too large or registry access is slow
  - **Misconfiguration**: unreasonable resource requests/limits in the Deployment

#### **(2) Other warnings**
- `metadata.name: must not contain dots`: the Helm release name contains dots (e.g. `1.10.22`), which can affect DNS resolution (a non-fatal warning).

---

### **2. Solutions**

#### **(1) Check cluster state**
```bash
# Check cluster node status
kubectl get nodes --context azure.uat-v2.azure-brazil-1

# Check Pod events in the target namespace
kubectl get events -n pet-app-ipc --context azure.uat-v2.azure-brazil-1
```

#### **(2) Retry the Helm install manually**
```bash
helm3 upgrade --install video-analysis-algorithm-1.10.22 \
  oci://prdtplinkhelmchartzau1.azurecr.io/video-analysis-algorithm \
  --version 1.10.22 \
  --values ./k8s-values/video-analysis-algorithm/azure-pet/azure-brazil-1/values.yaml \
  --namespace pet-app-ipc \
  --kube-context azure.uat-v2.azure-brazil-1 \
  --debug --timeout 1800s  # extend the timeout and enable debug logging
```

#### **(3) Tune the deployment configuration**
- **Shorten the first-deploy wait**: install the Deployment and its dependencies (DB/cache) separately.
- **Adjust resource requests**: make sure `resources.requests/limits` in `values.yaml` match the cluster's capacity.
- **Pre-pull images**: pull the images on the nodes in advance to speed up rollout, e.g. by extracting image references from the values file:
```bash
grep "image:" ./k8s-values/video-analysis-algorithm/azure-pet/azure-brazil-1/values.yaml \
  | awk '{print $2}' | xargs -I {} docker pull {}
```

#### **(4) Network diagnostics**
- Test connectivity from the Jenkins node to the AKS API:
```bash
curl -v https://uat-k8s-nbu-zbr1-dns-3wzszvat.hcp.brazilsouth.azmk8s.io:443/healthz
```

---

### **3. Prevention**
1. **Increase the Helm timeout**: raise `--timeout` from `1200s` to `1800s` or higher.
2. **Deploy in stages**:
```groovy
stage('Deploy Core') {
    sh 'helm3 upgrade --install ... --set dependencies.enabled=false'
}
stage('Deploy Dependencies') {
    sh 'helm3 upgrade ... --set dependencies.enabled=true'
}
```
3. **Monitor cluster resources**: add a pre-check to the pipeline:
```bash
kubectl describe nodes | grep -A5 "Allocated resources"
```
---