k8s live

This post records a health-check (liveness probe) failure encountered while deploying an Nginx container on Kubernetes. The detailed event log shows the container starting and then being repeatedly restarted because its health check fails: the HTTP probe returns status code 404 on every check, which fails the liveness probe and eventually triggers the kubelet's back-off restart mechanism (CrashLoopBackOff).
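The event log below matches the output format of `kubectl describe`; the exact command is an assumption, reconstructed from the pod name in the manifest further down:

```bash
# Reconstructed command, not from the original post
kubectl describe pod pod-with-healthcheck
```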

 

FirstSeen	LastSeen	Count	From						SubObjectPath		Type		Reason		Message
---------	--------	-----	----					-------------		--------	------		-------
  11m		11m		1	{default-scheduler }						Normal		Scheduled	Successfully assigned pod-with-healthcheck to dev-mg4.internal.tude.com
  11m		11m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id ee8718fcfa36; Security:[seccomp=unconfined]
  11m		11m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id ee8718fcfa36
  9m		9m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id 030a5e32e952; Security:[seccomp=unconfined]
  9m		9m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id ee8718fcfa36: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  9m		9m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id 030a5e32e952
  7m		7m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id 28f81ea65711
  7m		7m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id 030a5e32e952: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  7m		7m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id 28f81ea65711; Security:[seccomp=unconfined]
  6m		6m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id 28f81ea65711: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  6m		6m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id f186b8dda69c; Security:[seccomp=unconfined]
  6m		6m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id f186b8dda69c
  4m		4m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id f186b8dda69c: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  4m		4m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id 008a30536782; Security:[seccomp=unconfined]
  4m		4m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id 008a30536782
  3m		3m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id 030f5bccb32b; Security:[seccomp=unconfined]
  3m		3m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id 030f5bccb32b
  3m		3m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id 008a30536782: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  1m		1m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Started		Started container with docker id 04046c952761
  11m		1m		7	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Pulled		Container image "reg.docker.tude.com/cmall/business:test-1-master" already present on machine
  1m		1m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id 030f5bccb32b: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  1m		1m		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Created		Created container with docker id 04046c952761; Security:[seccomp=unconfined]
  10m		40s		9	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Warning		Unhealthy	Liveness probe failed: HTTP probe failed with statuscode: 404
  9s		9s		1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal		Killing		Killing container with docker id 04046c952761: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  9s		8s		3	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Warning		BackOff		Back-off restarting failed docker container
  9s		8s		3	{kubelet dev-mg4.internal.tude.com}				Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "nginx" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx pod=pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)"

A few minutes later, the same pod's events show the restart loop continuing and the back-off delay growing:

  FirstSeen	LastSeen	Count	From					SubObjectPath		Type	Reason		Message
  ---------	--------	-----	----					-------------		--------	------		-------
  1m	1m	1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal	Created		Created container with docker id 5c4597fa06b9; Security:[seccomp=unconfined]
  15m	1m	8	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal	Pulled		Container image "reg.docker.tude.com/cmall/business:test-1-master" already present on machine
  1m	1m	1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal	Started		Started container with docker id 5c4597fa06b9
  14m	54s	10	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Warning	Unhealthy	Liveness probe failed: HTTP probe failed with statuscode: 404
  23s	23s	1	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Normal	Killing		Killing container with docker id 5c4597fa06b9: pod "pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)" container "nginx" is unhealthy, it will be killed and re-created.
  4m	6s	18	{kubelet dev-mg4.internal.tude.com}	spec.containers{nginx}	Warning	BackOff		Back-off restarting failed docker container
  23s	6s	4	{kubelet dev-mg4.internal.tude.com}				Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "nginx" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=nginx pod=pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)"
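The growing delay in the FailedSync messages ("Back-off 2m40s" in the first listing, "Back-off 5m0s" in the second) is the kubelet's CrashLoopBackOff behavior: the restart delay starts at 10s and doubles after each failed restart (10s → 20s → 40s → 1m20s → 2m40s), capped at 5m0s, which is exactly the progression visible across the two listings. The Pod manifest that produced this behavior follows.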

    

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: reg.docker.tude.com/cmall/business:test-1-master
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /111213asd
        port: 80
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 10
      # probe interval
      periodSeconds: 30
      # thresholds (kubelet defaults: successThreshold 1, failureThreshold 3)
      # successThreshold: 1
      # after failureThreshold consecutive failures the probe reports
      # "Unhealthy  Liveness probe failed: HTTP probe failed with statuscode: 404",
      # the container is killed and re-created, and kubernetes-dashboard shows the error
      # failureThreshold: 3
    ports:
    - containerPort: 80
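With this spec the probe GETs /111213asd, a path chosen precisely because it does not exist, so the server answers 404 on every check; any status outside 200–399 counts as a probe failure. After initialDelaySeconds (30s) plus failureThreshold (default 3) failed probes at periodSeconds (30s) intervals, the kubelet kills and re-creates the container, which matches the roughly two-minute restart cadence in the events above. A minimal fix is to probe a path that exists, assuming the image serves a 2xx/3xx response at / (a sketch, not verified against this image):

```yaml
    livenessProbe:
      httpGet:
        path: /            # must answer 200-399; /111213asd does not exist, hence 404
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 10
      periodSeconds: 30
      failureThreshold: 3  # kubelet default; kill after 3 consecutive failures
```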

   

The kubernetes-dashboard shows the corresponding errors and pod status:

Liveness probe failed: HTTP probe failed with statuscode: 404
Back-off restarting failed docker container
Error syncing pod, skipping: failed to "StartContainer" for "nginx" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx pod=pod-with-healthcheck_default(f298e45a-0089-11e7-a3ad-005056ae01be)"

Pod: pod-with-healthcheck
Status: Waiting: CrashLoopBackOff
Restarts: 7
Age: 16 minutes
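The same state can be confirmed on the command line (the commented output is reconstructed from the dashboard values above, not captured output):

```bash
kubectl get pod pod-with-healthcheck
# NAME                   READY     STATUS             RESTARTS   AGE
# pod-with-healthcheck   0/1       CrashLoopBackOff   7          16m
```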
 

Reposted from: https://my.oschina.net/xiaominmin/blog/1598586

### Best practices for hot or live updates of Kubernetes Pods

In Kubernetes a Pod is an immutable unit: once created, its definition cannot be modified in place. In some situations, however, developers want something like a "hot update" or "live update" that adjusts an application's configuration or behavior dynamically without restarting the whole Pod. Several common approaches and tools can achieve this:

#### Updating configuration dynamically with ConfigMap and Secret

ConfigMap and Secret objects hold an application's externalized configuration. When their contents change, the process inside the container can detect and reload the new configuration, either by watching the mounted path for changes or by polling it periodically. If the application can re-read its configuration on a signal (such as SIGHUP), that mechanism can be used directly for a seamless switch[^1]. For applications without such built-in support, a helper script can periodically check whether a new version is available and act accordingly (see also the rollout-restart sketch at the end of this section).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.json: |
    {
      "setting": "value"
    }
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: myapp-container
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
      volumes:
      - name: config-volume
        configMap:
          name: app-config
```

#### Adopting a live-reload toolchain

For certain kinds of workloads, such as web application servers (Node.js, Python Flask/Django), dedicated tools let developers implement hot reloading or auto-restarting on source changes more conveniently. For example, nodemon is very useful in a Node.js development environment, and in the Java Spring Boot ecosystem the DevTools plugin provides a similar function[^2].

Note that these techniques are mainly suited to development and debugging rather than production, since they can introduce extra risk to service stability and performance.

#### Optimizing image rebuilds and deployment in an automated CI/CD pipeline

Strictly speaking this is not a "hot update" in the traditional sense, but a modern CI/CD platform combined with a GitOps workflow can still achieve fast, controllable, and traceable releases. Each commit to the main branch (or another designated location) automatically triggers a pipeline that builds, tests, and packages the code, pushes the latest Docker image to the registry, and notifies the K8s cluster to pull and run it, reducing manual intervention and improving overall efficiency.
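For applications that cannot reload configuration on their own, a common workaround is to force a rolling restart after updating the ConfigMap (a sketch assuming kubectl 1.15+, a hypothetical manifest file app-config.yaml, and a hypothetical Deployment named myapp):

```bash
# Apply the updated ConfigMap, then restart the pods so they pick up the new mount
kubectl apply -f app-config.yaml
kubectl rollout restart deployment/myapp
```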