(High Availability) 8. k8s (Part 3)

7. Service

0. Concepts

  • A Service can be seen as the access point for a group of Pods that provide the same service; it makes service discovery and load balancing straightforward
  • By default it only provides layer-4 load balancing, with no layer-7 capability
  • Types:
    • ClusterIP (the default; only reachable from inside the cluster; a virtual IP that k8s assigns to the Service automatically)
    • NodePort (exposes the Service on a given port of every node; the service can be reached through any node)
      (traffic arriving on a node IP/NodePort is routed to the ClusterIP, which forwards it to the Pods)
    • LoadBalancer (builds on NodePort and uses a cloud provider to create an external load balancer; only usable on cloud platforms)
    • ExternalName (forwards the service to a given domain name via a DNS CNAME record)
  • A Service is implemented by kube-proxy together with iptables
  • When kube-proxy handles Services it generates a large number of iptables rules, which consumes a lot of CPU
    • Switching Services to IPVS mode lets k8s support more Pods
[root@k8s2 controllers]# yum install ipvsadm -y

# cm = ConfigMap, k8s's design for separating configuration from the application
[root@k8s2 controllers]# kubectl edit cm kube-proxy -n kube-system
    mode: "ipvs"
configmap/kube-proxy edited

# Delete the old kube-proxy pods so the updated config takes effect (replacements are created automatically)
[root@k8s2 controllers]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-5f2pj" deleted
pod "kube-proxy-l2nfq" deleted
pod "kube-proxy-q6g5d" deleted

# The new pods are up with the updated config
[root@k8s2 controllers]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-5l9vs               1/1     Running   0                3m1s
kube-proxy-sxj4v               1/1     Running   0                3m
kube-proxy-zf6nn               1/1     Running   0                2m59s

# `ip a` now shows a new interface, kube-ipvs0, carrying all of the cluster's Service VIPs
10: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 56:a6:a9:63:9f:c6 brd ff:ff:ff:ff:ff:ff
    inet 10.108.131.166/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.99.234.144/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.103.177.19/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 192.168.147.51/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.104.73.158/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.103.208.102/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.110.218.50/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever


# Inspect the IPVS forwarding rules with ipvsadm
[root@k8s2 controllers]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:30668 rr
TCP  172.17.0.1:36166 rr
TCP  192.168.147.51:80 rr
TCP  192.168.147.51:443 rr
TCP  192.168.147.101:30668 rr
TCP  192.168.147.101:36166 rr
TCP  10.96.0.1:443 rr
  -> 192.168.147.101:6443         Masq    1      0          0
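
# A quick way to confirm kube-proxy really switched to IPVS mode is to query its
# metrics endpoint on a node (a sketch; assumes the default metrics port 10249):
curl -s localhost:10249/proxyMode   # should print the active mode, i.e. ipvs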

k8s network communication

  1. k8s uses the CNI interface to plug in network add-ons such as flannel and calico
  2. CNI plugin configuration lives in /etc/cni/net.d/
  3. Approaches used by the plugins:
    • Virtual bridge: virtual NICs; multiple containers share one virtual bridge
    • Multiplexing: MACVLAN; multiple containers share one physical NIC
    • Hardware switching: SR-IOV; one physical NIC exposes multiple virtual interfaces, best performance
  4. Containers in the same Pod communicate over the loopback address
  5. Pod-to-Pod communication:
    • Pods on the same node: packets are forwarded through the cni bridge
    • Pods on different nodes: requires support from the network plugin
  6. Pod-to-Service communication: implemented with iptables or IPVS
  7. Pod-to-external communication: iptables MASQUERADE
  8. Service to clients outside the cluster: Ingress, NodePort, LoadBalancer
flannel network plugin
  • VXLAN
    • vxlan: packet encapsulation, the default
    • DirectRouting: direct routing; host-gw within the same subnet, vxlan across subnets (see the sketch after the route table below)
  • host-gw: host gateway; good performance; layer 2 only, cannot cross subnets, prone to broadcast storms
  • UDP: poor performance
# Configure flannel: switch the backend to host-gw
[root@k8s2 controllers]# kubectl -n kube-flannel edit cm
        "Backend": {
          "Type": "host-gw"
        }

# Apply the change: delete the old pods, replacements are pulled up automatically
[root@k8s2 controllers]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-7swkg" deleted
pod "kube-flannel-ds-8t7nz" deleted
pod "kube-flannel-ds-qb5pg" deleted

[root@k8s2 controllers]# kubectl -n kube-flannel get pod
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-9xs7s   1/1     Running   0          7s
kube-flannel-ds-cw9f8   1/1     Running   0          6s
kube-flannel-ds-x865h   1/1     Running   0          7s

# With host-gw the routes point directly at the peer nodes, so traffic no longer goes through the flannel.1 interface
[root@k8s2 controllers]# ip r
default via 192.168.147.2 dev ens33 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 192.168.147.102 dev ens33
10.244.2.0/24 via 192.168.147.103 dev ens33
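
# For comparison: to keep the default VXLAN backend but enable DirectRouting
# (direct routes within a subnet, vxlan encapsulation across subnets), the same
# ConfigMap's net-conf.json would look roughly like this (a sketch):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }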

calico network plugin
  • Supports network policies
  • flannel only provides connectivity; calico's distinguishing feature is isolation between Pods
  • Routes via BGP
  • Pure layer-3 forwarding, with no NAT or overlay in the path, so forwarding efficiency is the best
  • Only requires layer-3 routing reachability
  • Docs: https://projectcalico.docs.tigera.io/about/about-calico
  • Architecture: Felix (watches the etcd datastore for events, handles endpoint registration) + BIRD (picks up route changes from the kernel and distributes the routes)
  • Modes: IPIP (across subnets), BGP (same subnet, large networks)
  • NetworkPolicy model: controls ingress/egress rules for the Pods in a given namespace
# Download the manifest
# https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml

# Mirror the images to the local registry
## docker.io/calico/node:v3.25.0
## docker.io/calico/kube-controllers:v3.25.0
## docker.io/calico/cni:v3.25.0

# Edit the manifest: disable the IPIP tunnel and set the CIDR to the cluster's pod network
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"


# Remove the flannel plugin and delete its config files under /etc/cni/net.d/
[root@k8s2 calico]# rm -f /etc/cni/net.d/*
[root@k8s2 calico]# kubectl create -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created



# NetworkPolicy example
[root@k8s2 calico]# vim networkpolicy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:  # selects which Pods (by label) this policy applies to
      app: myapp  # matches the web1 Pods used in the test below (labelled app=myapp)
  policyTypes:
    - Ingress
  #  - Egress
  ingress:    # inbound rules; usually the only part you adjust
    - from:
        # each '-' begins one list item; items under 'from' are OR-ed together
        #- ipBlock:  # which source IP ranges may connect
        #    cidr: 172.17.0.0/16
        #    except:
        #      - 172.17.1.0/24
        #- namespaceSelector:
        #    matchLabels:
        #      project: myproject
        - podSelector:   # which source Pods may connect
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
  #egress:    # outbound rules
  #  - to:
  #      - ipBlock:
  #          cidr: 10.0.0.0/24
  #    ports:
  #      - protocol: TCP
  #        port: 5978


# Launch a test pod (it starts with only the run=demo label; role=frontend is added later)
[root@k8s2 calico]# kubectl run demo --image alexw.com/library/busybox -it
If you don't see a command prompt, try pressing enter.
/ #
/ # wget 10.105.53.141
Connecting to 10.105.53.141 (10.105.53.141:80)
^C
/ # exit
Session ended, resume using 'kubectl attach demo -c demo -i -t' command when the pod is running


[root@k8s2 calico]# kubectl get pod --show-labels
NAME                    READY   STATUS    RESTARTS      AGE     LABELS
demo                    1/1     Running   1 (14s ago)   62s     run=demo
web1-7dc6f88697-crd8n   1/1     Running   0             4m38s   app=myapp,pod-template-hash=7dc6f88697
web1-7dc6f88697-txhp7   1/1     Running   0             4m38s   app=myapp,pod-template-hash=7dc6f88697
web1-7dc6f88697-wkvxn   1/1     Running   0             4m38s   app=myapp,pod-template-hash=7dc6f88697
[root@k8s2 calico]# kubectl label pod demo role=frontend
pod/demo labeled
[root@k8s2 calico]# kubectl get pod --show-labels
NAME                    READY   STATUS    RESTARTS      AGE     LABELS
demo                    1/1     Running   1 (74s ago)   2m2s    role=frontend,run=demo  ###  the label has been added
web1-7dc6f88697-crd8n   1/1     Running   0             5m38s   app=myapp,pod-template-hash=7dc6f88697
web1-7dc6f88697-txhp7   1/1     Running   0             5m38s   app=myapp,pod-template-hash=7dc6f88697
web1-7dc6f88697-wkvxn   1/1     Running   0             5m38s   app=myapp,pod-template-hash=7dc6f88697


# Re-attach to the pod and test again; the request now succeeds
[root@k8s2 calico]# kubectl attach demo -c demo -i -t
If you don't see a command prompt, try pressing enter.
/ #
/ # wget 10.105.53.141
Connecting to 10.105.53.141 (10.105.53.141:80)
saving to 'index.html'
index.html           100% |*************************************************************************************************************************|    65  0:00:00 ETA
'index.html' saved
/ #
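
# A common companion to the policy above is a namespace-wide default-deny for
# inbound traffic, so every pod is isolated until an explicit rule allows it
# (a minimal sketch):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}        # empty selector selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all inbound traffic is denied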





1. Examples

ClusterIP example

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service   # Service names must be valid lowercase DNS names
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
      app: nginx             # label selector (label values may not contain '/')
  type: ClusterIP

Headless service

  • No virtual IP is allocated; DNS resolves the service name directly to the IPs of the backing Pods
  • DNS name format: {servicename}.{namespace}.svc.cluster.local
# Create a headless service
[root@k8s2 services]# vim headless_demo.yaml

apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
      app: headless-service
  clusterIP: None


# Start a workload whose Pod labels match the headless service's selector
[root@k8s2 controllers]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP            NODE   NOMINATED NODE   READINESS GATES
demo-67dbbdfbfd-tbr6h   1/1     Running   0          2m7s   10.244.1.52   k8s3   <none>           <none>

# The headless service automatically resolves to the Pods carrying the matching label
[root@k8s2 controllers]# kubectl describe svc headless-service
Name:              headless-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=headless-service
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.52:80  # the IP of the Pod with the matching label
Session Affinity:  None
Events:            <none>
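
# The headless behaviour can also be checked by resolving the service name from
# inside the cluster; the DNS answer is the Pod IP itself rather than a ClusterIP
# (a sketch, reusing the local busybox image):
kubectl run dnstest --rm -it --restart=Never --image alexw.com/library/busybox \
  -- nslookup headless-service.default.svc.cluster.local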


NodePort example

  • NodePort: the traffic path is nodePort -> ClusterIP -> Pod
[root@k8s2 ~]# vim nodeport_service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort


[root@k8s2 ~]# kubectl apply -f nodeport_service.yaml
service/my-nginx created

[root@k8s2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        24h
my-nginx     NodePort    10.110.218.50    <none>        80:31779/TCP   24s  ### NodePort is allocated from the default 30000-32767 range
myapp        NodePort    10.107.222.140   <none>        80:31532/TCP   22h
mydb         ClusterIP   10.96.238.173    <none>        80/TCP         123m
myservice    ClusterIP   10.98.196.110    <none>        80/TCP         123m

# The type can also be changed back to ClusterIP
[root@k8s2 ~]# kubectl edit svc my-nginx

  sessionAffinity: None
  type: ClusterIP

[root@k8s2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        24h
my-nginx     ClusterIP   10.110.218.50    <none>        80/TCP         2m36s
myapp        NodePort    10.107.222.140   <none>        80:31532/TCP   22h
mydb         ClusterIP   10.96.238.173    <none>        80/TCP         125m
myservice    ClusterIP   10.98.196.110    <none>        80/TCP         125m


#########  Specifying the NodePort port
# Modify the apiserver configuration to widen the allowed NodePort range
[root@k8s2 ~]# cd /etc/kubernetes/manifests/
[root@k8s2 manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@k8s2 manifests]# vim kube-apiserver.yaml
# Add this flag
- --service-node-port-range=30000-40000

# kubelet automatically restarts the apiserver (listening on 6443); the API is briefly unreachable
[root@k8s2 manifests]# kubectl get pod
The connection to the server 192.168.147.101:6443 was refused - did you specify the right host or port?

# Modify the service manifest
[root@k8s2 ~]# vim nodeport_service.yaml
# add under the port entry:
nodePort: 33333

[root@k8s2 ~]# kubectl apply -f nodeport_service.yaml
service/my-nginx configured

[root@k8s2 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        24h
my-nginx     NodePort    10.110.218.50    <none>        80:33333/TCP   19m  ### the fixed nodePort took effect
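
# For reference, the full manifest after both changes would look roughly like this
# (the nodePort must fall inside the apiserver's --service-node-port-range):
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 33333   # fixed node port
  type: NodePort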

LoadBalancer example

  • LoadBalancer (intended for public clouds): the traffic path is load balancer -> nodePort -> ClusterIP -> Pod
# Configure MetalLB
## Edit the kube-proxy ConfigMap in kube-system: enable IPVS strict ARP
[root@k8s2 ~]# kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

## Restart the kube-proxy pods so the change takes effect
[root@k8s2 ~]# kubectl -n kube-system get pod | grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'
pod "kube-proxy-dkn9w" deleted
pod "kube-proxy-fb5p5" deleted
pod "kube-proxy-l6zkz" deleted

# Download the MetalLB manifest
[root@k8s2 ~]# mkdir metallb
[root@k8s2 ~]# cd metallb/
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-frr.yaml

# Mirror the images to the local registry
[root@k8s2 metallb]# cat metallb-frr.yaml | grep image:
        image: quay.io/metallb/controller:v0.13.7
        image: frrouting/frr:v7.5.1
        image: frrouting/frr:v7.5.1
        image: frrouting/frr:v7.5.1
        image: quay.io/metallb/speaker:v0.13.7
        image: frrouting/frr:v7.5.1
        image: quay.io/metallb/speaker:main
        image: quay.io/metallb/speaker:main

[root@k8s1 harbor]# docker pull quay.io/metallb/controller:v0.13.7
[root@k8s1 harbor]# docker pull quay.io/metallb/speaker:v0.13.7

[root@k8s1 harbor]# docker tag quay.io/metallb/speaker:v0.13.7 alexw.com/metallb/speaker:v0.13.7
[root@k8s1 harbor]# docker tag quay.io/metallb/controller:v0.13.7 alexw.com/metallb/controller:v0.13.7

[root@k8s1 harbor]# docker push alexw.com/metallb/controller:v0.13.7
[root@k8s1 harbor]# docker push alexw.com/metallb/speaker:v0.13.7

# Point the image references in the manifest at the local registry
[root@k8s2 metallb]# cat metallb-frr.yaml | grep image:
        image: alexw.com/metallb/controller:v0.13.7
        image: frrouting/frr:v7.5.1
        image: frrouting/frr:v7.5.1
        image: frrouting/frr:v7.5.1
        image: alexw.com/metallb/speaker:v0.13.7
        image: frrouting/frr:v7.5.1
        image: alexw.com/metallb/speaker:main
        image: alexw.com/metallb/speaker:main

# Create the resources from the manifest
[root@k8s2 metallb]# kubectl apply -f metallb-frr.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/frr-startup created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

# A new namespace has appeared
[root@k8s2 metallb]# kubectl get ns
NAME              STATUS   AGE
default           Active   26h
kube-flannel      Active   21h
kube-node-lease   Active   26h
kube-public       Active   26h
kube-system       Active   26h
metallb-system    Active   7m5s

[root@k8s2 metallb]# kubectl get pod -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-786b5d77b7-sf792   1/1     Running   0          20m
speaker-5hl5x                 4/4     Running   0          117s
speaker-tb4mq                 4/4     Running   0          117s
speaker-vdzp4                 4/4     Running   0          118s


# Configure MetalLB in L2 mode
## Create the address pool
[root@k8s2 metallb]# vim l2_config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.147.50-192.168.147.60
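
## In L2 mode MetalLB also needs an L2Advertisement that announces the pool,
## otherwise an EXTERNAL-IP is assigned but never answered on the network
## (a minimal sketch):
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool   # references the IPAddressPool defined above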


# Change the Service type
  type: LoadBalancer

[root@k8s2 metallb]# kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP        28h
my-nginx     LoadBalancer   10.110.218.50    192.168.147.50   80:33333/TCP   4h9m


ExternalName example

  • ExternalName (use case: some applications run inside the cluster and some outside; implemented purely through DNS resolution)
[root@k8s2 ~]# vim extname_demo.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: alexw.com

[root@k8s2 ~]# kubectl create -f extname_demo.yaml
service/my-service created
[root@k8s2 ~]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>           443/TCP        29h
my-nginx     LoadBalancer   10.110.218.50   192.168.147.50   80:33333/TCP   5h14m
my-service   ExternalName   <none>          alexw.com        <none>         8s
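
# Because ExternalName works purely through DNS, the mapping can be checked by
# resolving the service name from a pod; CoreDNS should answer with a CNAME for
# alexw.com (a sketch; whether the target itself resolves depends on upstream DNS):
kubectl run dnscheck --rm -it --restart=Never --image alexw.com/library/busybox \
  -- nslookup my-service.default.svc.cluster.local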
  • ExternalIP (an IP manually registered by the administrator; not recommended)
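# For completeness, an ExternalIP is simply listed under spec.externalIPs of an
# ordinary Service; kube-proxy then answers for that address on whichever node
# receives the traffic (a sketch; the name and address are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: ext-ip-demo
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 192.168.147.200   # an address the administrator routes to a node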

2. Ingress

  • Docs: https://kubernetes.github.io/ingress-nginx/user-guide
  • Exposes multiple services through one IP (the Ingress Controller), which forwards requests to the services inside the cluster
  • Layer-7 load balancing
  • Two parts: the Ingress Controller and the Ingress resource (the routing rules)
  • The controller provides proxying according to the user-defined Ingress objects. Widely used proxies such as nginx, envoy and traefik all have dedicated controllers maintained for k8s
# ingress-nginx is used as the example here
## https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml

# Mirror the images referenced in the manifest to the local registry
[root@k8s2 ingress]# cat deploy.yaml | grep image:
        image: registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f

# Point the image references at the local registry, and remember to remove the sha256 digests
[root@k8s2 ingress]# vim deploy.yaml

# Create the resources from the manifest
[root@k8s2 ingress]# kubectl create -f deploy.yaml

# A new namespace is created
[root@k8s2 ingress]# kubectl get ns
NAME              STATUS   AGE
default           Active   30h
ingress-nginx     Active   25s
kube-flannel      Active   24h
kube-node-lease   Active   30h
kube-public       Active   30h
kube-system       Active   30h
metallb-system    Active   168m

[root@k8s2 ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-x9sg9        0/1     Completed   0          22s   10.244.1.35   k8s3   <none>           <none>
ingress-nginx-admission-patch-n5gm2         0/1     Completed   0          21s   10.244.2.55   k8s4   <none>           <none>
ingress-nginx-controller-589cc55fb7-dfbgn   0/1     Running     0          22s   10.244.2.56   k8s4   <none>           <none>

[root@k8s2 ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.103.177.19   192.168.147.51   80:36166/TCP,443:30668/TCP   2m5s     #######  this is the entry point for external traffic (the controller's address)
ingress-nginx-controller-admission   ClusterIP      10.99.234.144   <none>           443/TCP                      2m5s
  • Exposing in-cluster services through the ingress controller
# Create two services
# web1 and web2 have no external address and are only reachable from inside the cluster
[root@k8s2 ingress]# kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP   31h
my-nginx     ClusterIP      10.110.218.50    <none>        80/TCP    7h13m
my-service   ExternalName   <none>           alexw.com     <none>    119m
web1         ClusterIP      10.108.131.166   <none>        80/TCP    5m58s
web2         ClusterIP      10.103.208.102   <none>        80/TCP    17s

# Each has three replicas
[root@k8s2 ingress]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
web1-6fc5554c45-bdx4b   1/1     Running   0          100s
web1-6fc5554c45-lbn2z   1/1     Running   0          100s
web1-6fc5554c45-m8tc6   1/1     Running   0          100s
web2-f988c7556-94njc    1/1     Running   0          7s
web2-f988c7556-gz559    1/1     Running   0          7s
web2-f988c7556-n4bdn    1/1     Running   0          7s


# Create name-based virtual-host ingress rules
[root@k8s2 ingress]# vim name-virtual-host-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx   # must match the IngressClass created by the ingress-nginx deployment
  rules:
  - host: www1.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web1
            port:
              number: 80
  - host: www2.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web2
            port:
              number: 80

# Create the Ingress resource
[root@k8s2 ingress]# kubectl create -f name-virtual-host-ingress.yaml

# Once an ADDRESS has been assigned, the Ingress is working
[root@k8s2 ingress]# kubectl get ingress
NAME                        CLASS   HOSTS                           ADDRESS          PORTS   AGE
name-virtual-host-ingress   nginx   www1.alexw.com,www2.alexw.com   192.168.147.51   80      20m

# Add name resolution (e.g. /etc/hosts) pointing the hostnames at the ingress controller
192.168.147.51 www1.alexw.com www2.alexw.com

[root@k8s1 ~]# curl www2.alexw.com
Version: v2
[root@k8s1 ~]# curl www1.alexw.com
Version: v1

  • Opening an interactive shell in the controller
[root@k8s2 ~]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-589cc55fb7-dfbgn -- bash
bash-5.1$ ls
fastcgi.conf            geoip                   mime.types              nginx.conf              scgi_params             uwsgi_params.default
fastcgi.conf.default    koi-utf                 mime.types.default      nginx.conf.default      scgi_params.default     win-utf
fastcgi_params          koi-win                 modsecurity             opentracing.json        template
fastcgi_params.default  lua                     modules                 owasp-modsecurity-crs   uwsgi_params
  • Ingress TLS configuration
# Create a self-signed certificate ('0=nginxsvc' in the subject should be 'O=nginxsvc', hence the warning below)
[root@k8s2 ingress]# openssl req -x509 -sha256 -nodes -days 3654 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/0=nginxsvc"
Generating a 2048 bit RSA private key
.....+++
.......................+++
writing new private key to 'tls.key'
-----
Subject Attribute 0 has no known NID, skipped
[root@k8s2 ingress]# ls
deploy.yaml  name-virtual-host-ingress.yaml  services2.yaml  services.yaml  tls.crt  tls.key
[root@k8s2 ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
[root@k8s2 ingress]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-r78tr   kubernetes.io/service-account-token   3      2d2h
tls-secret            kubernetes.io/tls                     2      30s

# Add a tls section to the Ingress manifest
[root@k8s2 ingress]# vim tls-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  tls:
    - hosts:
      - www1.alexw.com
      - www2.alexw.com
      secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: www1.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web1
            port:
              number: 80
  - host: www2.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web2
            port:
              number: 80


# Check the Ingress status
[root@k8s2 ingress]# kubectl describe ingress
Name:             name-virtual-host-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.147.51
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls-secret terminates www1.alexw.com,www2.alexw.com
Rules:
  Host            Path  Backends
  ----            ----  --------
  www1.alexw.com
                  /   web1:80 (10.244.1.47:80,10.244.2.71:80,10.244.2.74:80)
  www2.alexw.com
                  /   web2:80 (10.244.1.48:80,10.244.2.72:80,10.244.2.73:80)
Annotations:      <none>
Events:
  Type    Reason  Age                 From                      Message
  ----    ------  ----                ----                      -------
  Normal  Sync    115s (x3 over 19h)  nginx-ingress-controller  Scheduled for sync


# Test HTTPS
[root@k8s1 ~]# curl -k https://www1.alexw.com
Version: v1
[root@k8s1 ~]# curl -k https://www2.alexw.com
Version: v2
  • Ingress basic authentication
# Create an htpasswd file
[root@k8s2 ingress]# htpasswd -c auth alexw
New password:
Re-type new password:
Adding password for user alexw

# Convert the htpasswd file into a k8s secret
[root@k8s2 ingress]# kubectl create secret generic basic-auth --from-file=auth
secret/basic-auth created

# Inspect the secret
[root@k8s2 ingress]# kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: YWxleHc6JGFwcjEkRGdveHhocmEkN25UYzVJZnYydkhUUk05ZU1aYy53Lgo=
kind: Secret
metadata:
  creationTimestamp: "2023-01-23T12:15:12Z"
  name: basic-auth
  namespace: default
  resourceVersion: "149244"
  uid: 214dfd4f-b955-4152-870d-40b9e26c3320
type: Opaque

# Update the Ingress manifest: bind the authentication to the Ingress via annotations
metadata:
  name: name-virtual-host-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - alexw'


# Requests are now challenged for authentication
[root@k8s1 ~]# curl -k https://www1.alexw.com
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

[root@k8s1 ~]# curl -k https://www1.alexw.com -ualexw:passwd
Version: v1

  • Ingress URL rewriting
# Docs: https://kubernetes.github.io/ingress-nginx/examples/rewrite/
[root@k8s2 ingress]# vim redirect_ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: www3.alexw.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80

[root@k8s2 ingress]# kubectl get ingress
NAME                        CLASS   HOSTS                           ADDRESS          PORTS     AGE
name-virtual-host-ingress   nginx   www1.alexw.com,www2.alexw.com   192.168.147.51   80, 443   20h
rewrite                     nginx   www3.alexw.com                  192.168.147.51   80        16m


  • Canary / gray release
# https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary
## Serving the normal traffic
### Create a deployment and a service, and expose the service through an Ingress
[root@k8s2 ingress]# vim old_service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: alexw.com/library/myapp:v1
---
apiVersion: v1
kind: Service
metadata:
  name: myappv1
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP



[root@k8s2 ingress]# vim ingress-v1.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-v1
spec:
  ingressClassName: nginx
  rules:
  - host: www1.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: myappv1
            port:
              number: 80

### Normal requests hit the old version
[root@k8s1 ~]# curl www1.alexw.com
Version: v1

## Bring up the new version
[root@k8s2 ingress]# vim new_service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappv2
  labels:
    app: myappv2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myappv2
  template:
    metadata:
      labels:
        app: myappv2
    spec:
      containers:
      - name: myappv2
        image: alexw.com/library/myapp:v2
---
apiVersion: v1
kind: Service
metadata:
  name: myappv2
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: myappv2
  type: ClusterIP


### Use annotations to mark the canary Ingress (routing here is based on a special HTTP header)
#### Available annotations:
nginx.ingress.kubernetes.io/affinity-canary-behavior	"sticky" or "legacy"
nginx.ingress.kubernetes.io/canary	"true" or "false"
nginx.ingress.kubernetes.io/canary-by-header	string
nginx.ingress.kubernetes.io/canary-by-header-value	string
nginx.ingress.kubernetes.io/canary-by-header-pattern	string
nginx.ingress.kubernetes.io/canary-by-cookie	string
nginx.ingress.kubernetes.io/canary-weight	number
nginx.ingress.kubernetes.io/canary-weight-total	number


[root@k8s2 ingress]# vim ingress-v2.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-v2
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: stage
    nginx.ingress.kubernetes.io/canary-by-header-value: gray
spec:
  ingressClassName: nginx
  rules:
  - host: www1.alexw.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: myappv2
            port:
              number: 80


## Test
[root@k8s1 ~]# curl www1.alexw.com
Version: v1
[root@k8s1 ~]# curl -H "stage:gray" www1.alexw.com
Version: v2
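
### Instead of routing by header, traffic can be shifted gradually with a weight;
### the canary Ingress would carry these annotations instead (a sketch, ~10% to v2):
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # percentage of requests sent to the canary backend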

8. Storage

1. ConfigMap

  • Key-value pairs
  • Decouples the image from its configuration; injects configuration into the container
  1. Create from the command line
[root@k8s2 calico]# kubectl create configmap config-demo --from-literal=key1=v1 --from-literal=key2=v2
configmap/config-demo created

[root@k8s2 calico]# kubectl get cm
NAME               DATA   AGE
config-demo        2      5s
kube-root-ca.crt   1      2d19h

[root@k8s2 calico]# kubectl describe cm config-demo
Name:         config-demo
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
key1:
----
v1
key2:
----
v2

BinaryData
====

Events:  <none>
  2. Create from a file
  • the file name becomes the key and the file content becomes the value
[root@k8s2 storage]# vim conf1

nameserver 144.144.144.144

[root@k8s2 storage]# kubectl create configmap config-demo2 --from-file=conf1
configmap/config-demo2 created

[root@k8s2 storage]# kubectl describe cm config-demo2
Name:         config-demo2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
conf1:
----
nameserver 144.144.144.144


BinaryData
====

Events:  <none>
  3. Set environment variables from a ConfigMap
[root@k8s2 storage]# kubectl create configmap cm1-config --from-literal=db_host=alexw.com

[root@k8s2 storage]# kubectl create configmap cm2-config --from-literal=db_port=3306

[root@k8s2 storage]# vim env_config.yaml

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: busybox
      command: ["/bin/sh", "-c","env"]
      env:
        - name: key1
          valueFrom:
            configMapKeyRef:
              name: cm1-config
              key: db_host
        - name: key2
          valueFrom:
            configMapKeyRef:
              name: cm2-config
              key: db_port
  restartPolicy: Never


[root@k8s2 storage]# kubectl logs configmap-demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=configmap-demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
key1=alexw.com
KUBERNETES_PORT_443_TCP_PROTO=tcp
key2=3306
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/

## Another way: envFrom
[root@k8s2 storage]# vim cmd_config.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
    - name: demo2
      image: busybox
      command: ["/bin/sh", "-c","env"]
      envFrom:
        - configMapRef:
            name: cm1-config
  restartPolicy: Never

## Reading the environment variable inside the container
[root@k8s2 storage]# vim pod3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
    - name: demo3
      image: busybox
      command: ["/bin/sh", "-c","echo ${db_host}"]
      envFrom:
        - configMapRef:
            name: cm1-config
  restartPolicy: Never


[root@k8s2 storage]# kubectl logs pod3
alexw.com


# Injecting the values by mounting the ConfigMap as files
[root@k8s2 storage]# vim pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  containers:
    - name: demo4
      image: busybox
      command: ["/bin/sh", "-c","cat /config/db_host"]
      volumeMounts:
      - name: config-volume
        mountPath: /config
  volumes:
    - name: config-volume      ## the ConfigMap is mounted as files: each key becomes a file name, each value the file content
      configMap:
        name: cm1-config
  restartPolicy: Never

[root@k8s2 storage]# kubectl apply -f pod4.yaml
pod/pod4 created

[root@k8s2 storage]# kubectl logs pod4
alexw.com

  4. ConfigMap hot update
# Create an nginx config
[root@k8s2 storage]# vim nginx.conf

server {
        listen  8000;
        server_name     _;
        location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
        }
}


[root@k8s2 storage]# kubectl create configmap nginxconfig --from-file nginx.conf


# Create a deployment that mounts the config into the container
[root@k8s2 storage]# vim nginx_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-deploy
          image: alexw.com/library/nginx
          volumeMounts:
          - name: config-volume
            mountPath: /etc/nginx/conf.d
      volumes:
        - name: config-volume
          configMap:
            name: nginxconfig


[root@k8s2 storage]# kubectl exec nginx-deploy-97d4554b4-mhtzw -- cat /etc/nginx/conf.d/nginx.conf
server {
        listen  8000;
        server_name     _;
        location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
        }
}


[root@k8s2 storage]# curl 10.244.106.133:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


# Edit the ConfigMap to change the listen port to 8080; the mounted file inside the container is synced automatically
[root@k8s2 storage]# kubectl edit cm nginxconfig

apiVersion: v1
data:
  nginx.conf: "server {\n\tlisten\t8080;\n\tserver_name\t_;\n\tlocation / {\n\t\troot
    /usr/share/nginx/html;\n\t\tindex index.html index.htm;\n\t}\n}\n"


# The hot update does not trigger a rolling update by itself; trigger one manually
## note that patch takes a JSON document; mind the format
[root@k8s2 storage]# kubectl patch deployments.apps nginx-deploy --patch '{"spec":{"template":{"metadata":{"annotations":{"version/config":"20230124"}}}}}'
deployment.apps/nginx-deploy patched

[root@k8s2 storage]# curl 10.244.219.12:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

2. Secret

  • Stores sensitive information

  • A Pod uses a secret in two ways:

    • as files in a volume mounted into one or more of its containers
    • by the kubelet when pulling images for the Pod
  • Types:

    • Service Account: k8s automatically creates secrets containing API credentials and automatically modifies Pods to use them; Pods use this type of secret to authenticate inside the cluster
[root@k8s2 storage]# kubectl exec nginx-deploy-579756c57d-9tclr -- ls /var/run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token

    • Opaque: base64-encoded, weak protection
[root@k8s2 storage]# echo -n 'admin' > username.txt
[root@k8s2 storage]# echo -n 'passwd' > psd.txt

[root@k8s2 storage]# kubectl create secret generic db-user-pass --from-file=username.txt --from-file=psd.txt
secret/db-user-pass created

[root@k8s2 storage]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
db-user-pass          Opaque                                2      5s

[root@k8s2 storage]# kubectl get secrets db-user-pass -o yaml
apiVersion: v1
data:
  psd.txt: cGFzc3dk
  username.txt: YWRtaW4=
kind: Secret
metadata:
  creationTimestamp: "2023-01-24T06:32:52Z"
  name: db-user-pass
  namespace: default
  resourceVersion: "184652"
  uid: 2ea92504-efc8-43f5-bbb4-ddeee2b002a1
type: Opaque

[root@k8s2 storage]# echo YWRtaW4= | base64 -d
admin


### Creating from a YAML manifest
[root@k8s2 storage]# vim secret_demo.yaml

apiVersion: v1
data:
  password: cGFzc3dk
  username: YWRtaW4=
kind: Secret
metadata:
  name: secret-demo


## Mount into a Pod
[root@k8s2 storage]# vim nginx_secret.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: alexw.com/library/nginx
    volumeMounts:
    - name: secrets
      mountPath: "/secrets"
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: secret-demo

[root@k8s2 storage]# kubectl exec mypod -- ls /secrets
password
username

[root@k8s2 storage]# kubectl exec mypod -- cat /secrets/password
passwd

### Mount to a specific path
[root@k8s2 storage]# vim nginx_secret2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
  - name: mypod2
    image: alexw.com/library/nginx
    volumeMounts:
    - name: secrets
      mountPath: "/secrets"
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: secret-demo
      items:
      - key: password
        path: alexw.com/alexw-pwd

[root@k8s2 storage]# kubectl exec mypod2 -- cat /secrets/alexw.com/alexw-pwd
passwd
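
### A secret value can also be injected as an environment variable with
### secretKeyRef, mirroring the configMapKeyRef pattern used earlier
### (a sketch using the secret-demo above; the pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: mypod3
spec:
  containers:
  - name: mypod3
    image: alexw.com/library/busybox
    command: ["/bin/sh", "-c", "echo $SECRET_USERNAME"]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: username
  restartPolicy: Never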
    • kubernetes.io/dockerconfigjson: stores docker registry credentials
# Set up a private repository
# Create a pod that pulls its image from that private repository; without credentials the pull fails
[root@k8s2 storage]# vim secret_demo2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: alexw.com/personal/busybox
  imagePullSecrets:
    - name: myreposadmin


# Create a docker-registry secret
[root@k8s2 storage]# kubectl create secret docker-registry myreposadmin --docker-server=alexw.com --docker-username=admin --docker-password=Harbor12345 --docker-email=alexw@163.com
secret/myreposadmin created

3. Volumes

  • Docs: https://kubernetes.io/zh-cn/docs/concepts/storage/volumes/
  1. Files inside a container live on disk only temporarily: when the container crashes, k8s restarts it and the files are lost. Containers in the same Pod also often need to share data. k8s introduces the volume abstraction to solve both problems
  2. A volume shares the Pod's lifecycle; data in the volume survives container restarts within the Pod
  3. k8s supports many volume types, and a Pod can use any number of volumes
  4. A volume cannot be mounted inside another volume or hard-linked to one. Each container in the Pod specifies its own mount point for each volume independently

Common volume types

  • emptyDir

    • Some uses of emptyDir:
    • scratch space, e.g. for a disk-based merge sort
    • checkpointing a long computation so it can resume after a crash
    • holding files fetched by a content-manager container while a web-server container serves them
# Carve out 100Mi of memory as scratch space shared between the two containers
[root@k8s2 storage]# vim empty_volumn_demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: vol1
spec:
  containers:
  - image: alexw.com/library/busybox
    name: vm1
    command: ["sleep","300"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: alexw.com/library/nginx
    name: vm2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi


[root@k8s2 storage]# kubectl exec vol1 -c vm1 -it -- sh
/ #
/ # ls /cache
/ # cd /cache
/cache # echo "index!" > index.html

~ # wget localhost
Connecting to localhost (127.0.0.1:80)
saving to 'index.html'
index.html           100% |******************************************************************************|     7  0:00:00 ETA
'index.html' saved
  • hostPath
    • if the Pod is rescheduled to another node, the data stays behind on the original node
    • a hostPath volume mounts a file or directory from the host node's filesystem into the Pod. Most Pods don't need this, but it is a powerful escape hatch for some applications
    • running a container that needs access to Docker internals: mount /var/lib/docker via hostPath
    • running cAdvisor in a container: mount /sys via hostPath
    • the type field lets the Pod state whether the given hostPath must already exist, whether it should be created, and what it should be
[root@k8s2 storage]# vim hostpath_vol_demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: vol2
spec:
  containers:
  - image: alexw.com/library/nginx
    name: test-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate

# The pod was scheduled to k8s3
[root@k8s2 storage]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
vol2   1/1     Running   0          7s    10.244.219.16   k8s3   <none>           <none>

[root@k8s2 storage]# curl 10.244.219.16
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.23.3</center>
</body>
</html>

# Create index.html on k8s3
[root@k8s3 ~]# echo thisisindex > /data/index.html

# Test
[root@k8s2 storage]# curl 10.244.219.16
thisisindex


  • nfs
# Set up an NFS server on k8s1 (the worker nodes also need nfs-utils, see below)
[root@k8s4 ~]# yum install -y nfs-utils

[root@k8s1 harbor]# vim /etc/exports
/nfsdata        *(rw,sync,no_root_squash)

[root@k8s1 harbor]# mkdir -m 777 /nfsdata
[root@k8s1 harbor]# systemctl enable --now nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@k8s1 harbor]# showmount -e
Export list for k8s1:
/nfsdata *

# Write the manifest
[root@k8s2 storage]# vim nfs_vol_demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - image: alexw.com/library/nginx
    name: test-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    nfs:
      server: 192.168.147.100        #### NFS server address
      path: /nfsdata


# Creating the pod directly fails with a mount error
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    50s                default-scheduler  Successfully assigned default/nfs-pod to k8s3
  Warning  FailedMount  18s (x7 over 50s)  kubelet            MountVolume.SetUp failed for volume "test-volume" : mount failed                                                            : exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.147.100:/nfsdata /var/lib/kubelet/pods/ee7b7657-86a3-4314-872c-ecd074688782/volumes/kuberne                                                            tes.io~nfs/test-volume
Output: mount: wrong fs type, bad option, bad superblock on 192.168.147.100:/nfsdata,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.


# The worker nodes need the NFS client utilities installed
[root@k8s4 ~]# yum install -y nfs-utils
  • PersistentVolume
    • its lifecycle is independent of the Pods that use it
    • two provisioning modes: static PVs (created by the cluster administrator, carrying the details of real storage, available to cluster users; they exist in the k8s API and are ready to be consumed) and dynamic PVs (when none of the admin-created PVs match a user's PVC, the cluster tries to provision a volume specifically for the user, based on a StorageClass)
    • persistentVolumeClaim: a persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod
    • a PersistentVolumeClaim is how a user "claims" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment
    • PVCs are namespaced objects
    • a PV and a PVC bind one-to-one
  1. Static PV
# Create three new directories under /nfsdata
[root@k8s1 harbor]# ll /nfsdata/
total 4
-rw-r--r-- 1 root root 12 Jan 24 01:04 index.html
drwxr-xr-x 2 root root  6 Jan 24 01:17 pv1
drwxr-xr-x 2 root root  6 Jan 24 01:17 pv2
drwxr-xr-x 2 root root  6 Jan 24 01:17 pv3

# Create three PVs, each pointing at one of the NFS subdirectories
[root@k8s2 pv]# vim nfs_pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.147.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce  ### access modes: ReadWriteOnce (single-node read-write), ReadOnlyMany (many nodes, read-only), ReadWriteMany (many nodes, read-write)
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle  ### reclaim policy: Retain (keep; reclaim manually), Recycle (scrub the volume's data automatically), Delete (remove the backing storage resource) [currently only NFS and HostPath support Recycle]
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 192.168.147.100
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 192.168.147.100

[root@k8s2 pv]# kubectl apply -f nfs_pv1.yaml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@k8s2 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    3Gi        RWO            Recycle          Available           nfs                     7s
pv2    4Gi        RWO            Recycle          Available           nfs                     7s
pv3    5Gi        RWO            Recycle          Available           nfs                     7s


# Create PVCs to bind to the existing PVs
[root@k8s2 pv]# vim pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi



# Test: pvc1 binds, while pvc2 and pvc3 stay Pending because no PV matches (pvc2 wants ReadWriteMany, pvc3 requests 100Gi)
[root@k8s2 pv]# kubectl apply -f pvc1.yaml
persistentvolumeclaim/pvc1 unchanged
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@k8s2 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM          STORAGECLASS   REASON   AGE
pv1    3Gi        RWO            Recycle          Bound       default/pvc1   nfs                     13m
pv2    4Gi        RWO            Recycle          Available                  nfs                     13m
pv3    5Gi        RWO            Recycle          Available                  nfs                     13m
[root@k8s2 pv]# kubectl get pvc
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound     pv1      3Gi        RWO            nfs            91s
pvc2   Pending                                      nfs            8s
pvc3   Pending                                      nfs            8s
  • Mounting a PV into a Pod (through the PVC)
[root@k8s2 pv]# vim presistent_pod_demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: web
      image: alexw.com/library/nginx
      volumeMounts:
      - mountPath: "/usr/share/nginx/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc1   # must reference a bound PVC

# Test: write an index.html into /nfsdata/pv1 on the NFS server
[root@k8s1 pv1]# echo testserver > index.html

[root@k8s2 pv]# curl 10.244.219.20
testserver

  2. Dynamic PV
  • A StorageClass describes a "class" of storage; different classes may map to different quality-of-service levels or backup policies
  • Each StorageClass has provisioner, parameters and reclaimPolicy fields
    • provisioner (storage provisioner): decides which volume plugin provisions the PVs; required; may be an internal or an external provisioner. External provisioner code lives at kubernetes-incubator/external-storage, which includes NFS and Ceph
    • Reclaim Policy: the reclaimPolicy field sets the reclaim policy of the provisioned PVs, either Delete or Retain, Delete by default

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: alexw.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.147.100
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.147.100
            path: /nfsdata

# This produces a pod, a deployment and a replicaset
[root@k8s2 pv]# kubectl -n nfs-client-provisioner get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-854ff697c4-2j6xh   1/1     Running   0          3m15s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           3m15s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-854ff697c4   1         1         1       3m15s
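
# Before the PVC below can bind, a StorageClass named nfs-client must exist
# (and the nfs-client-provisioner namespace used above must have been created).
# A minimal sketch; the provisioner string must match PROVISIONER_NAME above, and
# archiveOnDelete explains the archived- directory seen at the end:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"   # deleted volumes are renamed archived-<name> instead of being removed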


# Create a PVC that uses the dynamic StorageClass
[root@k8s2 pv]# vim dynamic_pv.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi


# Test
[root@k8s2 pv]# kubectl apply -f dynamic_pv.yaml
persistentvolumeclaim/test-claim created
[root@k8s2 pv]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                STORAGECLASS   REASON   AGE
pvc-0e5447ae-4e12-4f71-9b2a-d679e6320a2e   1Mi        RWX            Delete           Bound         default/test-claim   nfs-client              5s
[root@k8s2 pv]# kubectl get pvc
NAME         STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound         pvc-0e5447ae-4e12-4f71-9b2a-d679e6320a2e   1Mi        RWX            nfs-client     8s

# A directory was created automatically on the NFS server
[root@k8s1 nfsdata]# ls
default-test-claim-pvc-0e5447ae-4e12-4f71-9b2a-d679e6320a2e  index.html  pv1  pv2  pv3

# After the dynamic PVC/PV is deleted, the data directory is archived automatically
[root@k8s1 nfsdata]# ls
archived-default-test-claim-pvc-0e5447ae-4e12-4f71-9b2a-d679e6320a2e  index.html  pv1  pv2  pv3


