Digging into k8s: Pod Resource Management and Building a Harbor Private Registry

This article covers Pod resource management in Kubernetes in detail: the ways resources can be created, the characteristics of Pods, the container types inside a Pod, and image pull policies. It then walks through deploying and using a Harbor private registry, including installation, configuration, and pushing images. Finally, it discusses problems you may run into with k8s: handling pods stuck in Terminating, pull errors against the registry, and resolving the CrashLoopBackOff state.

1: Pod Resource Management

1.1: Two ways to create resources

  • The kubectl command line
  • From a file: JSON is mainly used for parameter passing in development work, while in k8s it is far more common to define resources in YAML files
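The same nginx pod can be created either way. A sketch (names and image tag are illustrative; the kubectl lines run on the master):

```shell
# Imperative: one-off creation straight from the command line
# (on the k8s version used here; newer releases use `kubectl create deployment`)
command -v kubectl >/dev/null && kubectl run nginx-demo --image=nginx:1.14 || true

# Declarative: write the resource as YAML, then apply the file
cat > nginx-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14
EOF
command -v kubectl >/dev/null && kubectl apply -f nginx-demo.yaml || true
```

The declarative form is preferred in practice because the YAML file can be versioned and re-applied.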

1.2: Pod characteristics

  • The smallest deployable unit
  • A collection of one or more containers
  • Containers in one Pod share a network namespace
  • Pods are ephemeral

1.3: Container types

1. infrastructure container: the base (pause) container

  • Maintains the Pod's network namespace; you can inspect it on a node:
[root@node1 ~]# cat /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.54 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"		'this is the infrastructure (pause) container image'

2. initContainers: init containers

  • Run before the business containers start; originally all containers in a pod started in parallel, and init containers refine that
  • No matter where they appear in the spec, init containers always run first, and the business containers may start only after every init container has completed successfully
  • A typical scenario is a multi-container setup, e.g. MySQL and an application split into two containers: an init container checks whether MySQL is up; if it is, the business container starts, otherwise it keeps waiting for MySQL
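The MySQL scenario above can be sketched as a manifest like this (hypothetical names; the init container blocks until the `mysql` Service is resolvable, and only then does the app container start):

```shell
cat > init-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-mysql          # runs first, no matter where it is written
    image: busybox
    command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done']
  containers:
  - name: app                     # starts only after the init container succeeds
    image: nginx:1.14
EOF
command -v kubectl >/dev/null && kubectl apply -f init-demo.yaml || true  # on the master
```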

3. container: the business container

  • Business containers (also called app containers) are the container services inside the pod resources we create; they start in parallel

1.4: Image pull policy (imagePullPolicy)

  • 1. IfNotPresent: the default; pull only when the image is not already present on the host (note that images tagged :latest default to Always instead)

  • 2. Always: re-pull the image every time the pod is created

  • 3. Never: the pod never pulls the image on its own; it must already exist locally

1.5: Viewing the pull policy (on the master node)

[root@master ~]# kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-5s6h7       1/1     Running   1          10d
nginx-test-d55b94fd-9zmdj   1/1     Running   0          27h
nginx-test-d55b94fd-b8lkl   1/1     Running   0          27h
nginx-test-d55b94fd-w4c5k   1/1     Running   0          27h
[root@master ~]# kubectl edit deploy/nginx

1.6: Write a pod manifest and specify the pull policy

[root@master ~]# cd test/
[root@master test]# ls
nginx-service.yaml  nginx-test01.yaml  nginx-test02.yaml  nginx-test.yaml
[root@master test]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
[root@master test]# kubectl apply -f pod1.yaml	'to update the container later, delete the old resource with kubectl delete -f pod1.yaml, edit the YAML file, then redeploy with kubectl apply -f pod1.yaml'
pod/mypod created

[root@master test]# kubectl get pods
NAME                              READY   STATUS              RESTARTS   AGE
mypod                             0/1     ContainerCreating   0          7s
nginx-dbddb74b8-sxr6l             1/1     Running             0          11m
nginx-deployment-d55b94fd-q9vcm   1/1     Running             0          2d5h
nginx-deployment-d55b94fd-xhrv4   1/1     Running             0          11m
nginx-deployment-d55b94fd-xj67v   1/1     Running             0          2d5h

[root@master test]# kubectl get pods 
NAME                              READY   STATUS      RESTARTS   AGE
mypod                             0/1     Completed   0          36s
nginx-dbddb74b8-sxr6l             1/1     Running     0          12m
nginx-deployment-d55b94fd-q9vcm   1/1     Running     0          2d5h
nginx-deployment-d55b94fd-xhrv4   1/1     Running     0          12m
nginx-deployment-d55b94fd-xj67v   1/1     Running     0          2d5h
  • View a container's details: kubectl describe pod <name>
[root@master test]# kubectl describe pod mypod 
Name:               mypod
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               20.0.0.54/20.0.0.54
Start Time:         Mon, 12 Oct 2020 17:03:54 +0800
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"mypod","namespace":"default"},"spec":{"containers":[{"command":["echo...
Status:             Running
IP:                 172.17.83.2
Containers:
  nginx:
    Container ID:  docker://aae31eb96f714c6352582737276c283faf8670759ea767dfb76d87c0173d45af
    Image:         nginx
    Image ID:      docker-pullable://nginx@sha256:fc66cdef5ca33809823182c9c5d72ea86fd2cef7713cf3363e1a0b12a5d77500
    Port:          <none>
    Host Port:     <none>
    Command:
      echo
      SUCCESS
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 12 Oct 2020 17:05:56 +0800
      Finished:     Mon, 12 Oct 2020 17:05:56 +0800
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dljps (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-dljps:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dljps
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                Message
  ----     ------     ----                 ----                -------
  Normal   Scheduled  2m16s                default-scheduler   Successfully assigned default/mypod to 20.0.0.54
  Normal   Pulling    29s (x4 over 2m15s)  kubelet, 20.0.0.54  pulling image "nginx"
  Normal   Pulled     14s (x4 over 113s)   kubelet, 20.0.0.54  Successfully pulled image "nginx"
  Normal   Created    14s (x4 over 113s)   kubelet, 20.0.0.54  Created container
  Normal   Started    14s (x4 over 112s)   kubelet, 20.0.0.54  Started container
  Warning  BackOff    3s (x6 over 91s)     kubelet, 20.0.0.54  Back-off restarting failed container
  'The failed state is caused by a conflicting startup command: the echo replaces the image's entrypoint and exits immediately. Delete command: [ "echo", "SUCCESS" ] and continue below'
  • Also change the image version at the same time
[root@master test]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: nginx
      image: nginx:1.14  'change the version'
      imagePullPolicy: Always
[root@master test]# kubectl delete -f pod1.yaml 'delete the old resource'
pod "mypod" deleted
[root@master test]# kubectl apply -f pod1.yaml 'recreate the resource'
pod/mypod created

[root@master test]# kubectl get pods -w
NAME                              READY   STATUS              RESTARTS   AGE
mypod                             0/1     ContainerCreating   0          6s
nginx-dbddb74b8-sxr6l             1/1     Running             0          16m
nginx-deployment-d55b94fd-q9vcm   1/1     Running             0          2d5h
nginx-deployment-d55b94fd-xhrv4   1/1     Running             0          16m
nginx-deployment-d55b94fd-xj67v   1/1     Running             0          2d5h
mypod   1/1   Running   0     17s
  • Check which node the pod was assigned to
[root@master test]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE
mypod                             1/1     Running   0          101s   172.17.83.2   20.0.0.54   <none>
nginx-dbddb74b8-sxr6l             1/1     Running   0          18m    172.17.58.6   20.0.0.56   <none>
nginx-deployment-d55b94fd-q9vcm   1/1     Running   0          2d5h   172.17.58.4   20.0.0.56   <none>
nginx-deployment-d55b94fd-xhrv4   1/1     Running   0          18m    172.17.58.5   20.0.0.56   <none>
nginx-deployment-d55b94fd-xj67v   1/1     Running   0          2d5h   172.17.58.3   20.0.0.56   <none>
  • From any node, use curl -I to view the response headers
[root@node1 ~]# curl -I 172.17.83.2
HTTP/1.1 200 OK
Server: nginx/1.14			'this is the version we set earlier'
Date: Mon, 12 Oct 2020 09:11:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 29 Sep 2020 14:12:31 GMT
Connection: keep-alive
ETag: "5f7340cf-264"
Accept-Ranges: bytes
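To pull out just the version line, the headers can be filtered (a sketch; the pod IP comes from `kubectl get pods -o wide` and will differ in your cluster):

```shell
# Print only the Server header from an HTTP HEAD response
server_header() { awk -F': ' '/^Server:/ {sub(/\r$/, ""); print $2}'; }

command -v curl >/dev/null && curl -sI --max-time 2 172.17.83.2 | server_header || true
```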

2: Deploying a Harbor Private Registry for k8s

  • Install docker and docker-compose
'Disable the firewall and SELinux'
[root@harbor yum.repos.d]# systemctl stop firewalld.service 
[root@harbor yum.repos.d]# setenforce 0 
[root@harbor yum.repos.d]# vim /etc/selinux/config 
SELINUX=disabled

'Install dependency packages'
[root@harbor ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
'yum-utils provides yum-config-manager'
'the devicemapper storage driver requires device-mapper-persistent-data and lvm2'
'device mapper is a generic device-mapping mechanism for logical volume management in the Linux 2.6 kernel; it provides a highly modular kernel architecture for the block-device drivers used in storage resource management'

'Add the Aliyun docker-ce repo'
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

'Install docker-ce'
[root@harbor yum.repos.d]# yum install -y docker-ce

'Start the service and enable it at boot'
[root@harbor yum.repos.d]# systemctl start docker.service 
[root@harbor yum.repos.d]# systemctl enable docker.service

'Configure a registry mirror accelerator'
[root@harbor yum.repos.d]# cd /etc/docker/
[root@harbor docker]# ls
key.json
[root@harbor docker]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://yu1vx79j.mirror.aliyuncs.com"]
}
EOF
[root@harbor docker]# ls
daemon.json  key.json
[root@harbor docker]# systemctl daemon-reload 
[root@harbor docker]# systemctl restart docker

'Network tuning'
[root@harbor docker]# vim /etc/sysctl.conf 
net.ipv4.ip_forward=1    'enable IP forwarding'
[root@harbor docker]# sysctl -p
net.ipv4.ip_forward = 1
[root@harbor docker]# service network restart 
[root@harbor docker]# systemctl restart docker
[root@harbor ~]# rz -E
rz waiting to receive.
[root@harbor ~]# ls
anaconda-ks.cfg  docker-compose  harbor-offline-installer-v1.2.2.tgz
[root@harbor ~]# mv docker-compose  /usr/local/bin/
[root@harbor ~]# chmod +x /usr/local/bin/docker-compose 
[root@harbor ~]# docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
  • Install Harbor
[root@harbor ~]# tar zxf harbor-offline-installer-v1.2.2.tgz -C /usr/local/	'extract to the target directory'
[root@harbor ~]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     harbor_1_1_0_template  LICENSE
docker-compose.clair.yml   harbor.cfg             NOTICE
docker-compose.notary.yml  harbor.v1.2.2.tar.gz   prepare
docker-compose.yml         install.sh             upgrade
[root@harbor harbor]# vim harbor.cfg 	'edit the configuration file'
hostname = 20.0.0.53	'set this to the machine's own address; localhost or 127.0.0.1 will not work'
[root@harbor harbor]# sh install.sh 
  • Test logging in to the web UI

  • Create a project to test with

  • Configure every node to use the private registry (mind the trailing comma after the mirrors line)
[root@node01 ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://ix6qlzoo.mirror.aliyuncs.com"],
  "insecure-registries":["20.0.0.53"]
}
[root@node1 ~]# systemctl daemon-reload 
[root@node1 ~]# systemctl restart docker.service 
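daemon.json must remain valid JSON after the edit, which is why the comma matters. A quick sanity check before restarting docker (sketched here against a sample copy; point it at /etc/docker/daemon.json on a real node):

```shell
cat > daemon-sample.json <<'EOF'
{
  "registry-mirrors": ["https://ix6qlzoo.mirror.aliyuncs.com"],
  "insecure-registries": ["20.0.0.53"]
}
EOF
# json.tool exits non-zero on a syntax error (e.g. a missing comma)
python3 -m json.tool daemon-sample.json >/dev/null && echo "valid JSON"
```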
  • Log in to Harbor from every node (nodes must remain logged in when pulling images from Harbor to create resources)
[root@node1 ~]# docker login 20.0.0.53		'log in'
Username: admin
Password: 						'enter the password: Harbor12345'
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@node1 ~]# 
[root@node1 ~]# ls -a
.docker 'this credential only exists after you log in; if pulling an image is denied, check this credential first'
[root@node1 ~]# ls .docker/
config.json

'View the login credential'
[root@node1 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIyMC4wLjAuNTMiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==

[root@node2 ~]# docker login 20.0.0.53
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@node2 ~]# cat .docker/config.json |base64 -w 0
ewoJImF1dGhzIjogewoJCSIyMC4wLjAuNTMiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==

'node1 and node2 hold identical credentials, since both encode the same Harbor account'
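The inner "auth" field in that JSON is itself just base64 of username:password, which is why every node that logs in with the same account ends up with an identical file. Decoding the field embedded in the blobs above shows the account from the login step:

```shell
# Decode the "auth" value nested inside config.json
printf '%s' 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 -d; echo
# prints: admin:Harbor12345
```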
  • Pull a Tomcat image and push it
[root@node1 ~]# docker pull tomcat  '(pull from the public registry)'

'Push format'
 docker tag SOURCE_IMAGE[:TAG] 20.0.0.53/project/IMAGE[:TAG]

'Tag the image'
[root@node1 ~]# docker tag tomcat 20.0.0.53/project/tomcat

'Push it (the push succeeds)'
[root@localhost ~]# docker push 20.0.0.53/project/tomcat
  • Refresh the web page to check

  • Pull from the private registry on a designated node

1. Check the node's credential for logging in to Harbor (identical on every node)

[root@node01 ~]# cat .docker/config.json |base64 -w 0  'base64-encode the file; -w 0 disables line wrapping'
ewoJImF1dGhzIjogewoJCSIyMC4wLjAuNTMiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==

2. Pull a tomcat image on the nodes

[root@node1 ~]# docker pull tomcat:8.0.52
[root@node2 ~]# docker pull tomcat:8.0.52

3. Create a YAML file on the master node

[root@master test]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat
[root@master test]# kubectl apply -f tomcat-deployment.yaml 
deployment.extensions/my-tomcat created
service/my-tomcat created

[root@master test]# kubectl get pod -w
NAME                              READY   STATUS    RESTARTS   AGE
my-tomcat-57667b9d9-2hwqh         1/1     Running   0          8s
my-tomcat-57667b9d9-g5vdk         1/1     Running   0          8s

[root@master test]# kubectl describe pod my-tomcat-57667b9d9-2hwqh
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  92s   default-scheduler   Successfully assigned default/my-tomcat-57667b9d9-2hwqh to 20.0.0.56
  Normal  Pulled     92s   kubelet, 20.0.0.56  Container image "docker.io/tomcat:8.0.52" already present on machine  'no pull needed - the image is already on the node'
  Normal  Created    92s   kubelet, 20.0.0.56  Created container
  Normal  Started    92s   kubelet, 20.0.0.56  Started container

[root@master test]# kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP          3d10h
my-tomcat       NodePort    10.0.0.151   <none>        8080:31111/TCP   2m59s
nginx-service   NodePort    10.0.0.174   <none>        80:30928/TCP     2d8h
  • Open the tomcat page in a browser

  • Upload the image from node01 (a node that has already logged in to Harbor)

'Tag the image'
[root@localhost ~]# docker tag tomcat:8.0.52 20.0.0.53/project/tomcat

'Push the image to Harbor'
[root@localhost ~]# docker push 20.0.0.53/project/tomcat
  • Refresh the page to check

4. Create a Secret resource on the master node

[root@master test]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIyMC4wLjAuNTMiOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTMgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson

[root@master test]# kubectl create -f registry-pull-secret.yaml 'create the secret resource'
secret/registry-pull-secret created

[root@master test]# kubectl get secret	'list secret resources'
NAME                   TYPE                                  DATA   AGE
default-token-dljps    kubernetes.io/service-account-token   3      3d10h
registry-pull-secret   kubernetes.io/dockerconfigjson        1      13s
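The .dockerconfigjson value can also be produced, or the whole Secret created, without copying files off a node. A sketch (assumes the registry address and default admin password used in this walkthrough; the generated blob carries only the essential auths entry, not the extra HttpHeaders seen in the node's file):

```shell
# Build the auth blob by hand: base64("username:password") nested in JSON
auth=$(printf '%s' 'admin:Harbor12345' | base64)
printf '{"auths":{"20.0.0.53":{"auth":"%s"}}}' "$auth" | base64 -w 0 > secret.b64
cat secret.b64; echo    # usable as the .dockerconfigjson value

# Or let kubectl assemble the Secret itself (run on the master):
command -v kubectl >/dev/null && kubectl create secret docker-registry registry-pull-secret \
  --docker-server=20.0.0.53 --docker-username=admin --docker-password=Harbor12345 || true
```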

[root@master test]# kubectl delete -f tomcat-deployment.yaml 	'delete the tomcat resources created earlier'
deployment.extensions "my-tomcat" deleted
service "my-tomcat" deleted

[root@master test]# vim tomcat-deployment.yaml 
    spec:
      imagePullSecrets:
      - name: registry-pull-secret	'add these lines'
      containers:
      - name: my-tomcat
        image: 20.0.0.53/project/tomcat   'change to the private registry path'
        ports:
        - containerPort: 8080

[root@master test]# kubectl create -f tomcat-deployment.yaml 'create the resource, pulling its image from Harbor'
deployment.extensions/my-tomcat created
service/my-tomcat created

[root@master test]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
my-tomcat-ddcff464d-dzbv7         1/1     Running   0          43s
my-tomcat-ddcff464d-trlrw         1/1     Running   0          43s

[root@master test]# kubectl describe pod my-tomcat-ddcff464d-dzbv7
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  66s   default-scheduler   Successfully assigned default/my-tomcat-ddcff464d-dzbv7 to 20.0.0.56
  Normal  Pulling    66s   kubelet, 20.0.0.56  pulling image "20.0.0.53/project/tomcat"
  Normal  Pulled     47s   kubelet, 20.0.0.56  Successfully pulled image "20.0.0.53/project/tomcat"
  Normal  Created    47s   kubelet, 20.0.0.56  Created container
  Normal  Started    47s   kubelet, 20.0.0.56  Started container
  • In the Harbor UI, check how many times tomcat has been pulled

3: Problems You May Hit with k8s

3.1: Pods stuck in Terminating

  • How to handle a resource stuck in Terminating that cannot be deleted
[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-nklvj         1/1     Terminating   0          10h
my-tomcat-57667b9d9-wllnp         1/1     Terminating   0          10h
  • In this case use the force-delete command: kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]
[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-nklvj --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-nklvj" force deleted
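When several pods are stuck at once, the force-delete can be scripted (a sketch; `pick_terminating` is a hypothetical helper that just picks names from the STATUS column shown above):

```shell
# Names of pods whose STATUS column reads Terminating
pick_terminating() { awk 'NR > 1 && $3 == "Terminating" {print $1}'; }

# Force-delete each of them (same caveat: the containers may keep
# running on the node for a while)
command -v kubectl >/dev/null && kubectl get pods -n default \
  | pick_terminating \
  | xargs -r -n1 kubectl delete pod --force --grace-period=0 -n default || true
```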

3.2: Pull errors from the registry

  • The registry credential is missing
[root@localhost ~]# docker pull 20.0.0.53/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 20.0.0.53/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  • This error appears when you try to pull an image without being logged in; log in to the registry first
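Before pulling, you can check whether the node already holds a credential for the registry (a sketch; the path is where docker login stores it, and `has_cred` is a hypothetical helper name):

```shell
# True if config.json mentions the given registry address
has_cred() { grep -q "\"$1\"" "${2:-$HOME/.docker/config.json}" 2>/dev/null; }

has_cred 20.0.0.53 && echo "credential present" \
  || echo "not logged in - run: docker login 20.0.0.53"
```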

3.3: CrashLoopBackOff

  • The failed state is caused by a conflicting startup command: command: [ "echo", "SUCCESS" ] replaces the image's entrypoint and exits as soon as echo finishes, so the kubelet keeps restarting the container
Remove command: [ "echo", "SUCCESS" ] from the manifest
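If a run-once command is really what you want, declare that instead of deleting it: a Pod's default restartPolicy is Always, so any command that exits gets restarted with growing back-off. A sketch (hypothetical pod name; run the apply on the master):

```shell
cat > run-once.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: run-once
spec:
  restartPolicy: Never      # exit 0 ends as Completed, not CrashLoopBackOff
  containers:
  - name: hello
    image: busybox
    command: ["echo", "SUCCESS"]
EOF
command -v kubectl >/dev/null && kubectl apply -f run-once.yaml || true
```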