05 Traffic Management Principles - 3: Canary Releases & Weighted TCP Traffic Shifting

This post demonstrates how to use Istio to perform a canary release of a microservice, gradually and smoothly shifting traffic from the v1 version of the reviews service to v3. It also covers weighted TCP traffic shifting, adjusting the routing rules from 100% of TCP traffic going to v1 to an 80/20 split between v1 and v2. The relevant commands and configuration files are shown throughout.


Canary Releases for Microservices

This task shows how to gradually migrate traffic from one version of a microservice to another, for example from an old version to a new one. In Istio, you accomplish this by configuring a sequence of routing rules that send a given percentage of traffic to one service or the other. In this task you will send 50% of the traffic to reviews:v1 and 50% to reviews:v3, and then complete the migration by sending 100% of the traffic to reviews:v3.

1. Apply weight-based routing

Run the following command to route all traffic to the v1 version of each microservice:

[root@master ~]# kubectl apply -f istio-1.6.2/samples/bookinfo/networking/virtual-service-all-v1.yaml
[root@master ~]# kubectl get VirtualService reviews -o yaml    //view the rule created for reviews
...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
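
Subset-based routing only works if a DestinationRule defines the subsets being referenced. In the Bookinfo setup from earlier in this series, that rule comes from istio-1.6.2/samples/bookinfo/networking/destination-rule-all.yaml; a trimmed sketch of its reviews section looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3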

Shift 50% of the traffic from reviews:v1 to reviews:v3:

[root@master ~]# cat istio-1.6.2/samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

Apply the rule:

[root@master ~]# kubectl apply -f istio-1.6.2/samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
virtualservice.networking.istio.io/reviews configured

At this point, refreshing the test page alternates between the v1 view (no stars) and the v3 view (red stars); in this run each version tended to appear about twice in a row before switching to the other. Screenshots are omitted here.
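
Besides refreshing the page in a browser, the split can be spot-checked from the command line. A rough sketch, assuming $GATEWAY_URL points at the ingress gateway as set up earlier in this series (the v3 page renders star icons, the v1 page does not):

[root@master ~]# for i in {1..10}; do \
curl -s http://$GATEWAY_URL/productpage | grep -q glyphicon-star && echo v3 || echo v1; \
done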

Now switch all of the traffic to the v3 version.

[root@master ~]# cat istio-1.6.2/samples/bookinfo/networking/virtual-service-reviews-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
[root@master ~]#  kubectl apply -f istio-1.6.2/samples/bookinfo/networking/virtual-service-reviews-v3.yaml
virtualservice.networking.istio.io/reviews configured
//accessing the page now shows only the v3 red stars
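Re-running the curl loop above should now print v3 for every request.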

Weighted TCP Traffic Shifting

This task shows how to gradually migrate TCP traffic from one version of a microservice to another, for example from an old version to a new one. In Istio, you accomplish this by configuring a sequence of routing rules that send a given percentage of the TCP traffic to one service or the other. In this task you will first send 100% of the TCP traffic to tcp-echo:v1, then use Istio's weighted routing feature to route 20% of it to tcp-echo:v2.

1. Deploy the tcp-echo microservice.
[root@master ~]# kubectl create namespace istio-io-tcp-traffic-shifting
namespace/istio-io-tcp-traffic-shifting created   //create a dedicated namespace for the TCP traffic-shifting exercise
[root@master ~]# kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled
namespace/istio-io-tcp-traffic-shifting labeled  //enable automatic sidecar injection
[root@master ~]# kubectl apply -f istio-1.6.2/samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
service/tcp-echo created       //create the service and its corresponding deployments/pods
deployment.apps/tcp-echo-v1 created
deployment.apps/tcp-echo-v2 created
[root@master ~]# cat istio-1.6.2/samples/tcp-echo/tcp-echo-services.yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  labels:
    app: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  - name: tcp-other
    port: 9001
  # Port 9002 is omitted intentionally for testing the pass through filter chain.
  selector:
    app: tcp-echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000,9001,9002", "one" ]
        ports:
        - containerPort: 9000
        - containerPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v2
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v2
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000,9001,9002", "two" ]
        ports:
        - containerPort: 9000
        - containerPort: 9001
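
Before wiring up routing, it's worth confirming that both versions came up (a routine check, not part of the original capture); with injection enabled, each pod should report 2/2 containers ready:

[root@master ~]# kubectl get pods -n istio-io-tcp-traffic-shifting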

Route all of the TCP traffic to the v1 version of the tcp-echo microservice:

[root@master ~]# kubectl apply -f istio-1.6.2/samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
gateway.networking.istio.io/tcp-echo-gateway created     //gateway
destinationrule.networking.istio.io/tcp-echo-destination created  //DestinationRule
virtualservice.networking.istio.io/tcp-echo created   //VirtualService
[root@master ~]# cat istio-1.6.2/samples/tcp-echo/tcp-echo-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
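
The file creates three resources: a Gateway that opens TCP port 31400 on the istio ingress gateway, a DestinationRule that defines the v1 and v2 subsets, and a VirtualService that routes all TCP traffic matched on that port to the v1 subset. They can be listed together with the Istio CRD short names (gw/dr/vs):

[root@master ~]# kubectl get gw,dr,vs -n istio-io-tcp-traffic-shifting
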
2. Confirm that the tcp-echo service is running and get the ingress gateway's port and IP:
[root@master ~]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
[root@master ~]# echo $INGRESS_PORT
31400
[root@master ~]# export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.clusterIP}')
//kubectl -n istio-system get service istio-ingressgateway -o json prints the full JSON; the jsonpath expressions above just extract fields from it
[root@master ~]# echo $INGRESS_HOST     //on this single-node cluster the service's ClusterIP is used here
10.103.213.82
[root@master ~]# export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')     //for a LoadBalancer service, get the external IP this way
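
On a bare-metal cluster with no LoadBalancer, another common approach (this mirrors the upstream Istio docs rather than the capture above; treat it as a sketch) is to use a node's IP together with the service's nodePort:

[root@master ~]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
[root@master ~]# export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')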

Send some TCP traffic to the tcp-echo microservice to test the routing:

[root@master ~]# for i in {1..10}; do \
docker run -e INGRESS_HOST=$INGRESS_HOST -e INGRESS_PORT=$INGRESS_PORT -it --rm busybox sh -c "(date; sleep 0.1) | nc $INGRESS_HOST $INGRESS_PORT"; \
done
one Sat Apr 30 14:42:03 UTC 2022    //the "one" prefix before the timestamp means the request was routed to v1
one Sat Apr 30 14:42:04 UTC 2022
one Sat Apr 30 14:42:05 UTC 2022
one Sat Apr 30 14:42:05 UTC 2022
one Sat Apr 30 14:42:06 UTC 2022
one Sat Apr 30 14:42:06 UTC 2022
one Sat Apr 30 14:42:07 UTC 2022
one Sat Apr 30 14:42:08 UTC 2022
one Sat Apr 30 14:42:08 UTC 2022
one Sat Apr 30 14:42:09 UTC 2022
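
If docker isn't available on the host, the same probe can be run from a short-lived pod instead (a sketch; the busybox image and nc invocation match the loop above):

[root@master ~]# kubectl run tcp-test -it --rm --restart=Never --image=busybox \
-- sh -c "(date; sleep 0.1) | nc $INGRESS_HOST $INGRESS_PORT"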

Now route 20% of the traffic from v1 to v2.

[root@master ~]# kubectl apply -f istio-1.6.2/samples/tcp-echo/tcp-echo-20-v2.yaml -n istio-io-tcp-traffic-shifting
virtualservice.networking.istio.io/tcp-echo configured
[root@master ~]# cat istio-1.6.2/samples/tcp-echo/tcp-echo-20-v2.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20

Verify that the rule is in place in the istio-io-tcp-traffic-shifting namespace; skipping this check (or applying the rule to the wrong namespace) will skew the experiment's results:

[root@master ~]# kubectl get virtualservice tcp-echo -o yaml -n istio-io-tcp-traffic-shifting

Send some TCP traffic to the tcp-echo microservice again to verify the new split:

[root@master ~]# for i in {1..10}; do \
docker run -e INGRESS_HOST=$INGRESS_HOST -e INGRESS_PORT=$INGRESS_PORT -it --rm busybox sh -c "(date; sleep 0.1) | nc $INGRESS_HOST $INGRESS_PORT"; \
done
one Sat Apr 30 15:07:55 UTC 2022     //the split is not exact request-by-request; over time it approximates 80% / 20%
one Sat Apr 30 15:07:56 UTC 2022
two Sat Apr 30 15:07:57 UTC 2022
one Sat Apr 30 15:07:57 UTC 2022
one Sat Apr 30 15:07:58 UTC 2022
two Sat Apr 30 15:07:58 UTC 2022
two Sat Apr 30 15:08:00 UTC 2022
one Sat Apr 30 15:08:00 UTC 2022
one Sat Apr 30 15:08:01 UTC 2022
one Sat Apr 30 15:08:01 UTC 2022
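
With only 10 requests the ratio is noisy; a larger sample plus a quick count shows the trend more clearly (a sketch built on the same loop, with -t dropped so the output pipes cleanly):

[root@master ~]# for i in {1..100}; do \
docker run --rm busybox sh -c "(date; sleep 0.1) | nc $INGRESS_HOST $INGRESS_PORT"; \
done | awk '{print $1}' | sort | uniq -c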

Remove the tcp-echo application and its routing rules:

kubectl delete namespace istio-io-tcp-traffic-shifting
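
The Bookinfo canary rules from the first half of this post can be cleaned up the same way (this mirrors the upstream task's cleanup step):

kubectl delete -f istio-1.6.2/samples/bookinfo/networking/virtual-service-all-v1.yaml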