gRPC Load Balancing (Custom Load Balancer) (etcd)

1. gRPC Load Balancing (Custom Load Balancer) (etcd)

The previous article showed how to implement gRPC load balancing, but the official library currently ships only two policies: pick_first and round_robin. Round-robin cannot give servers with different hardware capacities different shares of the load, so this article shows how to implement a custom load-balancing policy: weighted random.

Weighted random assigns each server a weight based on its processing capacity, so higher-capacity servers take on more requests and lower-capacity servers take on fewer.

1.1 Custom Load-Balancing Policy

gRPC provides the V2PickerBuilder and V2Picker interfaces for implementing our own load-balancing policy.

type V2PickerBuilder interface {
	Build(info PickerBuildInfo) balancer.V2Picker
}

The V2PickerBuilder interface builds the V2 sub-connection picker.

The Build method returns a V2 picker that gRPC will use to select sub-connections.

type V2Picker interface {
	Pick(info PickInfo) (PickResult, error)
}

The V2Picker interface is used by gRPC to pick the sub-connection a request is sent on.

The Pick method performs the sub-connection selection.

We need to attach each server address's weight, but resolver.Address does not provide a weight field. The official guidance is to store the weight in the address's metadata.

// attributeKey is the type used as the key to store AddrInfo in the Attributes
// field of resolver.Address.
type attributeKey struct{}

// AddrInfo will be stored inside Address metadata in order to use weighted balancer.
type AddrInfo struct {
	Weight int
}

// SetAddrInfo returns a copy of addr in which the Attributes field is updated
// with addrInfo.
func SetAddrInfo(addr resolver.Address, addrInfo AddrInfo) resolver.Address {
	addr.Attributes = attributes.New()
	addr.Attributes = addr.Attributes.WithValues(attributeKey{}, addrInfo)
	return addr
}

// GetAddrInfo returns the AddrInfo stored in the Attributes fields of addr.
func GetAddrInfo(addr resolver.Address) AddrInfo {
	v := addr.Attributes.Value(attributeKey{})
	ai, _ := v.(AddrInfo)
	return ai
}

Define an AddrInfo struct with a Weight field; SetAddrInfo stores the Weight in a resolver.Address, and GetAddrInfo reads the Weight back from the resolver.Address.
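
A quick sketch of how the two helpers fit together (the address and weight below are placeholder values):

addr := resolver.Address{Addr: "localhost:8000"}
// attach a weight of 4 to the address
addr = SetAddrInfo(addr, AddrInfo{Weight: 4})
// read the weight back from the address metadata
info := GetAddrInfo(addr)
log.Println(info.Weight) // 4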

With weight storage solved, we can now implement the weighted-random load-balancing policy.

First, implement the V2PickerBuilder interface, which returns the sub-connection picker.

func (*rrPickerBuilder) Build(info base.PickerBuildInfo) balancer.V2Picker {
	grpclog.Infof("weightPicker: newPicker called with info: %v", info)
	if len(info.ReadySCs) == 0 {
		return base.NewErrPickerV2(balancer.ErrNoSubConnAvailable)
	}
	var scs []balancer.SubConn
	for subConn, addr := range info.ReadySCs {
		node := GetAddrInfo(addr.Address)
		if node.Weight <= 0 {
			node.Weight = minWeight
		} else if node.Weight > maxWeight {
			node.Weight = maxWeight
		}
		for i := 0; i < node.Weight; i++ {
			scs = append(scs, subConn)
		}
	}
	return &rrPicker{
		subConns: scs,
	}
}

For the weighted-random policy I trade space for time and convert each weight into that many copies of the address (for example, if addr1 has weight 3, three of its sub-connections are appended to the slice; if addr2 has weight 1, one is appended). When a sub-connection is picked, a random number is generated over the slice length and used as the index, so the weights never need to be recomputed. To keep memory usage in check, weights are restricted to the range 1 to 5.
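
For example, with the weights used later in this article (1 for server 1 and 4 for server 2), the slice holds 1 + 4 = 5 sub-connections, so server 1 is picked with probability 1/5 = 20% and server 2 with 4/5 = 80%. A minimal sketch of the expansion, using placeholder address names:

// weights 1 and 4 expand into a slice of 5 entries;
// a uniformly random index then favors addr2 over addr1 by 4:1
weights := map[string]int{"addr1": 1, "addr2": 4}
var slots []string
for addr, w := range weights {
	for i := 0; i < w; i++ {
		slots = append(slots, addr)
	}
}
log.Println(slots[rand.Intn(len(slots))]) // addr2 about 80% of the time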

Next, implement the sub-connection selection: generate a random number and use it to pick a sub-connection.

type rrPicker struct {
	subConns []balancer.SubConn
	mu sync.Mutex
}

func (p *rrPicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
	p.mu.Lock()
	index := rand.Intn(len(p.subConns))
	sc := p.subConns[index]
	p.mu.Unlock()
	return balancer.PickResult{SubConn: sc}, nil
}

With the key code in place, we name the weighted-random policy weight and register it with gRPC's balancer registry.

// Name is the name of weight balancer.
const Name = "weight"
// newBuilder creates a new weight balancer builder.
func newBuilder() balancer.Builder {
	return base.NewBalancerBuilderV2(Name, &rrPickerBuilder{}, base.Config{HealthCheck: false})
}

func init() {
	balancer.Register(newBuilder())
}

Finally, all we need to do is attach a weight when the server registers itself, have the client Set that weight into resolver.Address during service discovery, and switch the client's load-balancing policy to weight.

//SetServiceList sets a service address
func (s *ServiceDiscovery) SetServiceList(key, val string) {
	s.lock.Lock()
	defer s.lock.Unlock()
	//extract the service address from the key
	addr := resolver.Address{Addr: strings.TrimPrefix(key, s.prefix)}
	//parse the service weight from the value
	nodeWeight, err := strconv.Atoi(val)
	if err != nil {
		//non-numeric value: default weight is 1
		nodeWeight = 1
	}
	//store the weight in the resolver.Address metadata
	addr = weight.SetAddrInfo(addr, weight.AddrInfo{Weight: nodeWeight})
	s.serverList[key] = addr
	s.cc.UpdateState(resolver.State{Addresses: s.getServices()})
	log.Println("put key :", key, "weight:", val)
}

The client uses the weight load-balancing policy:

func main() {
	r := etcdv3.NewServiceDiscovery(EtcdEndpoints)
	resolver.Register(r)
	// connect to the server
	conn, err := grpc.Dial(
		fmt.Sprintf("%s:///%s", r.Scheme(), SerName),
		grpc.WithBalancerName("weight"),
		grpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("net.Connect err: %v", err)
	}
	defer conn.Close()
	// ...
}

1.2 Writing and Compiling the proto File

// protocol version: proto3
syntax = "proto3";
package proto;
option go_package = "./;proto";

// request message
message SimpleRequest{
    // request fields use lower_snake_case names, e.g. student_name
    // field type, field name, field number (numbers must be unique)
    string data = 1;
}

// response message
message SimpleResponse{
    // response fields
    // field type, field name, field number (numbers must be unique)
    int32 code = 1;
    string value = 2;
}

// service definition (a file may define multiple services, each with multiple RPCs)
service Simple{
    rpc Route (SimpleRequest) returns (SimpleResponse){};
}
$ cd proto
$ protoc --go_out=plugins=grpc:. simple.proto
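
If your protoc-gen-go is a newer release where the plugins=grpc option has been removed, the equivalent command using the separate protoc-gen-go-grpc plugin would be (assuming both plugins are installed):

$ protoc --go_out=. --go-grpc_out=. simple.proto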

1.3 Weight Assignment

package weight

import (
	"google.golang.org/grpc/attributes"
	"google.golang.org/grpc/balancer"
	"google.golang.org/grpc/balancer/base"
	"google.golang.org/grpc/resolver"
	"log"
	"math/rand"
	"sync"
)

// Name is the name of weight balancer.
const Name = "weight"

var (
	minWeight = 1
	maxWeight = 5
)

// attributeKey is the type used as the key to store AddrInfo in the Attributes field of resolver.Address
type attributeKey struct{}

// AddrInfo is stored in the Address attributes so the weighted balancer can use it
type AddrInfo struct {
	Weight int
}

// SetAddrInfo returns a copy of addr in which the Attributes field is updated with addrInfo
func SetAddrInfo(addr resolver.Address, addrInfo AddrInfo) resolver.Address {
	addr.Attributes = attributes.New()
	addr.Attributes = addr.Attributes.WithValues(attributeKey{}, addrInfo)
	return addr
}

// GetAddrInfo returns the AddrInfo stored in the Attributes field of addr
func GetAddrInfo(addr resolver.Address) AddrInfo {
	v := addr.Attributes.Value(attributeKey{})
	ai, _ := v.(AddrInfo)
	// ai:  {1}
	// ai:  {4}
	log.Println("ai: ", ai)
	return ai
}

// newBuilder creates a new weight balancer builder
func newBuilder() balancer.Builder {
	return base.NewBalancerBuilderV2(Name, &rrPickerBuilder{}, base.Config{HealthCheck: false})
}

func init() {
	balancer.Register(newBuilder())
}

type rrPickerBuilder struct{}

func (*rrPickerBuilder) Build(info base.PickerBuildInfo) balancer.V2Picker {
	log.Printf("weightPicker: newPicker called with info: %v", info)
	if len(info.ReadySCs) == 0 {
		return base.NewErrPickerV2(balancer.ErrNoSubConnAvailable)
	}
	log.Println("len(info.ReadySCs): ", len(info.ReadySCs))
	var scs []balancer.SubConn
	for subConn, addr := range info.ReadySCs {
		node := GetAddrInfo(addr.Address)
		if node.Weight <= 0 {
			node.Weight = minWeight
		} else if node.Weight > maxWeight {
			node.Weight = maxWeight
		}
		// append the sub-connection once per unit of weight
		for i := 0; i < node.Weight; i++ {
			scs = append(scs, subConn)
		}
		// scs:  [0xc00006c360]
		// scs:  [0xc00006c360 0xc00006c440 0xc00006c440 0xc00006c440 0xc00006c440]
		log.Println("scs: ", scs)
	}
	return &rrPicker{
		subConns: scs,
	}
}

type rrPicker struct {
	// subConns is a snapshot of the ready sub-connections taken when this picker was built; the slice is immutable, and each Pick chooses a random entry from it and returns the selected SubConn
	subConns []balancer.SubConn
	mu       sync.Mutex
}

func (p *rrPicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
	p.mu.Lock()
	index := rand.Intn(len(p.subConns))
	sc := p.subConns[index]
	p.mu.Unlock()
	return balancer.PickResult{SubConn: sc}, nil
}
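
Because init() calls balancer.Register, the weight policy becomes available to grpc.Dial as soon as this package is imported. The service discovery package in section 1.5 already imports it for SetAddrInfo; a program that did not otherwise reference the package could register the balancer with a blank import instead (the import path below follows this article's module layout):

import _ "etcd3/balancer/weight" // side-effect import: registers the "weight" balancer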

1.4 Service Registration

package etcdv3

import (
	"context"
	"go.etcd.io/etcd/clientv3"
	"log"
	"time"
)

// ServiceRegister registers a service under a lease
type ServiceRegister struct {
	// etcd client
	cli *clientv3.Client
	// lease ID
	leaseID clientv3.LeaseID
	// lease keepalive response channel
	keepAliveChan <-chan *clientv3.LeaseKeepAliveResponse
	// key
	key string
	// weight, stored as the key's value in etcd
	weight string
}

// NewServiceRegister creates a new service registration
func NewServiceRegister(endpoints []string, addr, weight string, lease int64) (*ServiceRegister, error) {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	ser := &ServiceRegister{
		cli: cli,
		// e.g. /grpclb/simple_grpc/localhost:8000
		key:    "/" + schema + "/" + addr,
		weight: weight,
	}
	// create the lease and start keepalive
	if err := ser.putKeyWithLease(lease); err != nil {
		return nil, err
	}
	return ser, nil
}

// putKeyWithLease creates a lease and registers the key under it
func (s *ServiceRegister) putKeyWithLease(lease int64) error {
	// create a lease with the given TTL
	resp, err := s.cli.Grant(context.Background(), lease)
	if err != nil {
		return err
	}
	// register the service and bind it to the lease
	_, err = s.cli.Put(context.Background(), s.key, s.weight, clientv3.WithLease(resp.ID))
	if err != nil {
		return err
	}
	// set up keepalive so the lease is renewed periodically
	leaseRespChan, err := s.cli.KeepAlive(context.Background(), resp.ID)
	if err != nil {
		return err
	}
	s.leaseID = resp.ID
	s.keepAliveChan = leaseRespChan
	log.Printf("Put key:%s  weight:%s  success!", s.key, s.weight)
	return nil
}

// ListenLeaseRespChan listens for lease renewal responses
func (s *ServiceRegister) ListenLeaseRespChan() {
	for leaseKeepResp := range s.keepAliveChan {
		log.Println("lease renewed", leaseKeepResp)
	}
	log.Println("keepalive channel closed")
}

// Close revokes the lease and closes the etcd client
func (s *ServiceRegister) Close() error {
	// revoke the lease
	if _, err := s.cli.Revoke(context.Background(), s.leaseID); err != nil {
		return err
	}
	log.Println("lease revoked")
	return s.cli.Close()
}
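
A minimal usage sketch of this type (the endpoints, address, and weight below are placeholder values; the server in section 1.6 does essentially the same thing, minus the keepalive listener):

ser, err := etcdv3.NewServiceRegister([]string{"localhost:2379"}, "simple_grpc/localhost:8001", "4", 5)
if err != nil {
	log.Fatalf("register service err: %v", err)
}
defer ser.Close()
// optionally watch lease renewals in the background
go ser.ListenLeaseRespChan()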

1.5 Service Discovery

package etcdv3

import (
	"context"
	"etcd3/balancer/weight"
	"github.com/coreos/etcd/mvcc/mvccpb"
	"go.etcd.io/etcd/clientv3"
	"google.golang.org/grpc/resolver"
	"log"
	"strconv"
	"strings"
	"sync"
	"time"
)

const schema = "grpclb"

//ServiceDiscovery implements service discovery
type ServiceDiscovery struct {
	// etcd client
	cli *clientv3.Client
	cc  resolver.ClientConn
	// service list
	serverList sync.Map
	// watched key prefix
	prefix string
}

// NewServiceDiscovery creates a new service discovery instance
func NewServiceDiscovery(endpoints []string) resolver.Builder {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	return &ServiceDiscovery{
		cli: cli,
	}
}

// Build creates a new resolver for the given target; it runs when grpc.Dial() is called
func (s *ServiceDiscovery) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOption) (resolver.Resolver, error) {
	log.Println("Build")
	s.cc = cc
	s.prefix = "/" + target.Scheme + "/" + target.Endpoint + "/"
	// fetch existing keys under the prefix
	resp, err := s.cli.Get(context.Background(), s.prefix, clientv3.WithPrefix())
	if err != nil {
		return nil, err
	}
	for _, ev := range resp.Kvs {
		s.SetServiceList(string(ev.Key), string(ev.Value))
	}
	s.cc.UpdateState(resolver.State{Addresses: s.getServices()})
	// watch the prefix and apply server changes
	go s.watcher()
	return s, nil
}

// ResolveNow watches for target updates
func (s *ServiceDiscovery) ResolveNow(rn resolver.ResolveNowOption) {
	log.Println("ResolveNow")
}

// Scheme returns the scheme
func (s *ServiceDiscovery) Scheme() string {
	return schema
}

// Close shuts down the resolver
func (s *ServiceDiscovery) Close() {
	log.Println("Close")
	s.cli.Close()
}

//watcher watches the key prefix
func (s *ServiceDiscovery) watcher() {
	rch := s.cli.Watch(context.Background(), s.prefix, clientv3.WithPrefix())
	log.Printf("watching prefix:%s now...", s.prefix)
	for wresp := range rch {
		for _, ev := range wresp.Events {
			switch ev.Type {
			case mvccpb.PUT: // add or update
				s.SetServiceList(string(ev.Kv.Key), string(ev.Kv.Value))
			case mvccpb.DELETE: // delete
				s.DelServiceList(string(ev.Kv.Key))
			}
		}
	}
}

// SetServiceList stores a service address
func (s *ServiceDiscovery) SetServiceList(key, val string) {
	// extract the service address from the key
	addr := resolver.Address{Addr: strings.TrimPrefix(key, s.prefix)}
	// parse the service weight from the value
	nodeWeight, err := strconv.Atoi(val)
	if err != nil {
		// non-numeric value: default weight is 1
		nodeWeight = 1
	}
	// store the weight in the resolver.Address metadata
	addr = weight.SetAddrInfo(addr, weight.AddrInfo{Weight: nodeWeight})
	s.serverList.Store(key, addr)
	s.cc.UpdateState(resolver.State{Addresses: s.getServices()})
	log.Println("put key :", key, "weight:", val)
}

// DelServiceList removes a service address
func (s *ServiceDiscovery) DelServiceList(key string) {
	s.serverList.Delete(key)
	s.cc.UpdateState(resolver.State{Addresses: s.getServices()})
	log.Println("del key:", key)
}

// getServices returns the current list of service addresses
func (s *ServiceDiscovery) getServices() []resolver.Address {
	addrs := make([]resolver.Address, 0, 10)
	s.serverList.Range(func(k, v interface{}) bool {
		addrs = append(addrs, v.(resolver.Address))
		return true
	})
	return addrs
}

1.6 Server

package main

import (
	"context"
	"etcd3/etcdv3"
	pb "etcd3/proto"
	"google.golang.org/grpc"
	"log"
	"net"
)

// SimpleService defines our service
type SimpleService struct{}

const (
	// Address is the listen address
	Address string = "localhost:8000"
	// Network is the transport protocol
	Network string = "tcp"
	// SerName is the service name
	SerName string = "simple_grpc"
)

// EtcdEndpoints are the etcd cluster endpoints
var EtcdEndpoints = []string{"localhost:2379"}

func main() {
	// listen on the local port
	listener, err := net.Listen(Network, Address)
	if err != nil {
		log.Fatalf("net.Listen err: %v", err)
	}
	log.Println(Address + " net.Listing...")
	// create a gRPC server instance
	grpcServer := grpc.NewServer()
	// register our service with the gRPC server
	pb.RegisterSimpleServer(grpcServer, &SimpleService{})
	// register the service in etcd
	ser, err := etcdv3.NewServiceRegister(EtcdEndpoints, SerName+"/"+Address, "1", 5)
	if err != nil {
		log.Fatalf("register service err: %v", err)
	}
	defer ser.Close()
	// Serve() blocks on the listener until the process is killed or Stop() is called
	err = grpcServer.Serve(listener)
	if err != nil {
		log.Fatalf("grpcServer.Serve err: %v", err)
	}
}

// Route implements the Route method
func (s *SimpleService) Route(ctx context.Context, req *pb.SimpleRequest) (*pb.SimpleResponse, error) {
	log.Println("receive: " + req.Data)
	res := pb.SimpleResponse{
		Code:  200,
		Value: "hello " + req.Data,
	}
	return &res, nil
}
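
server1.go is not listed in the article; presumably it differs from server.go only in the listen address and in the weight passed to NewServiceRegister, roughly:

// presumed contents of server1.go (assumption; not shown in the original article)
const Address string = "localhost:8001"
// ...
ser, err := etcdv3.NewServiceRegister(EtcdEndpoints, SerName+"/"+Address, "4", 5)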

1.7 Client

package main

import (
	"context"
	"etcd3/etcdv3"
	pb "etcd3/proto"
	"fmt"
	"google.golang.org/grpc"
	"google.golang.org/grpc/resolver"
	"log"
	"strconv"
	"time"
)

var (
	// EtcdEndpoints are the etcd cluster endpoints
	EtcdEndpoints = []string{"localhost:2379"}
	// SerName is the service name
	SerName    = "simple_grpc"
	grpcClient pb.SimpleClient
)

func main() {
	r := etcdv3.NewServiceDiscovery(EtcdEndpoints)
	resolver.Register(r)
	// connect to the server
	conn, err := grpc.Dial(
		fmt.Sprintf("%s:///%s", r.Scheme(), SerName),
		// this must be set to weight
		grpc.WithBalancerName("weight"),
		grpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("net.Connect err: %v", err)
	}
	defer conn.Close()
	// create the gRPC client
	grpcClient = pb.NewSimpleClient(conn)
	for i := 0; i < 100; i++ {
		route(i)
		time.Sleep(1 * time.Second)
	}

}

// route calls the server's Route method
func route(i int) {
	// build the request
	req := pb.SimpleRequest{
		Data: "grpc " + strconv.Itoa(i),
	}
	// call our service (the Route method), passing a context.Context that lets us
	// adjust the RPC's behavior when needed, e.g. time out or cancel an in-flight RPC
	res, err := grpcClient.Route(context.Background(), &req)
	if err != nil {
		log.Fatalf("Call Route err: %v", err)
	}
	// print the response
	log.Println(res)
}

1.8 Running the Example

Run server 1 with weight 1:

[root@zsx server]# go run server.go
2023-02-14 10:25:32.826914 I | localhost:8000 net.Listing...
2023-02-14 10:25:32.829437 I | Put key:/grpclb/simple_grpc/localhost:8000  weight:1  success!

Run server 2 with weight 4:

[root@zsx server]# go run server1.go
2023-02-14 10:26:00.647939 I | localhost:8001 net.Listing...
2023-02-14 10:26:00.651005 I | Put key:/grpclb/simple_grpc/localhost:8001  weight:4  success!

Run the client:

[root@zsx client]# go run client.go
2023-02-14 10:28:01.521576 I | Build
2023-02-14 10:28:01.524903 I | put key : /grpclb/simple_grpc/localhost:8000 weight: 1
2023-02-14 10:28:01.524925 I | put key : /grpclb/simple_grpc/localhost:8001 weight: 4
2023-02-14 10:28:01.527360 I | watching prefix:/grpclb/simple_grpc/ now...
2023-02-14 10:28:01.528217 I | code:200 value:"hello grpc 0"
2023-02-14 10:28:02.529385 I | code:200 value:"hello grpc 1"
2023-02-14 10:28:03.530815 I | code:200 value:"hello grpc 2"
2023-02-14 10:28:04.531676 I | code:200 value:"hello grpc 3"
2023-02-14 10:28:05.533643 I | code:200 value:"hello grpc 4"
2023-02-14 10:28:06.534383 I | code:200 value:"hello grpc 5"
2023-02-14 10:28:07.535481 I | code:200 value:"hello grpc 6"
2023-02-14 10:28:08.536642 I | code:200 value:"hello grpc 7"
2023-02-14 10:28:09.538163 I | code:200 value:"hello grpc 8"
2023-02-14 10:28:10.539766 I | code:200 value:"hello grpc 9"
......

Looking at how the first 50 requests were split between server 1 and server 2: server 1 handled 9 requests and server 2 handled 41, which is close to the weight ratio.

[root@zsx server]# go run server.go
2023-02-14 10:25:32.826914 I | localhost:8000 net.Listing...
2023-02-14 10:25:32.829437 I | Put key:/grpclb/simple_grpc/localhost:8000  weight:1  success!
2023-02-14 10:28:01.527954 I | receive: grpc 0
2023-02-14 10:28:07.535339 I | receive: grpc 6
2023-02-14 10:28:08.536178 I | receive: grpc 7
2023-02-14 10:28:10.539432 I | receive: grpc 9
2023-02-14 10:28:18.551878 I | receive: grpc 17
2023-02-14 10:28:21.556451 I | receive: grpc 20
2023-02-14 10:28:29.568674 I | receive: grpc 28
2023-02-14 10:28:30.569699 I | receive: grpc 29
2023-02-14 10:28:39.580685 I | receive: grpc 38
2023-02-14 10:28:42.582966 I | receive: grpc 41
......
[root@zsx server]# go run server1.go
2023-02-14 10:26:00.647939 I | localhost:8001 net.Listing...
2023-02-14 10:26:00.651005 I | Put key:/grpclb/simple_grpc/localhost:8001  weight:4  success!
2023-02-14 10:28:02.529241 I | receive: grpc 1
2023-02-14 10:28:03.530650 I | receive: grpc 2
2023-02-14 10:28:04.531456 I | receive: grpc 3
2023-02-14 10:28:05.533156 I | receive: grpc 4
2023-02-14 10:28:06.534254 I | receive: grpc 5
2023-02-14 10:28:09.537517 I | receive: grpc 8
2023-02-14 10:28:11.541235 I | receive: grpc 10
2023-02-14 10:28:12.542726 I | receive: grpc 11
2023-02-14 10:28:13.543879 I | receive: grpc 12
2023-02-14 10:28:14.545716 I | receive: grpc 13
2023-02-14 10:28:15.547280 I | receive: grpc 14
2023-02-14 10:28:16.548520 I | receive: grpc 15
2023-02-14 10:28:17.549883 I | receive: grpc 16
2023-02-14 10:28:19.554145 I | receive: grpc 18
2023-02-14 10:28:20.555051 I | receive: grpc 19
2023-02-14 10:28:22.558659 I | receive: grpc 21
2023-02-14 10:28:23.559799 I | receive: grpc 22
2023-02-14 10:28:24.561627 I | receive: grpc 23
2023-02-14 10:28:25.563411 I | receive: grpc 24
2023-02-14 10:28:26.564220 I | receive: grpc 25
2023-02-14 10:28:27.565778 I | receive: grpc 26
2023-02-14 10:28:28.567630 I | receive: grpc 27
2023-02-14 10:28:31.570972 I | receive: grpc 30
2023-02-14 10:28:32.571856 I | receive: grpc 31
2023-02-14 10:28:33.573264 I | receive: grpc 32
2023-02-14 10:28:34.574380 I | receive: grpc 33
2023-02-14 10:28:35.575546 I | receive: grpc 34
2023-02-14 10:28:36.577172 I | receive: grpc 35
2023-02-14 10:28:37.578017 I | receive: grpc 36
2023-02-14 10:28:38.579229 I | receive: grpc 37
2023-02-14 10:28:40.581311 I | receive: grpc 39
2023-02-14 10:28:41.582122 I | receive: grpc 40
2023-02-14 10:28:43.584327 I | receive: grpc 42
2023-02-14 10:28:44.585437 I | receive: grpc 43
2023-02-14 10:28:45.586696 I | receive: grpc 44
2023-02-14 10:28:46.589103 I | receive: grpc 45
2023-02-14 10:28:47.590910 I | receive: grpc 46
2023-02-14 10:28:48.592322 I | receive: grpc 47
2023-02-14 10:28:49.593531 I | receive: grpc 48
2023-02-14 10:28:50.595408 I | receive: grpc 49
2023-02-14 10:28:51.596997 I | receive: grpc 50
......

Shut down server 2; all requests now flow to server 1:

[root@zsx server]# go run server.go
......
2023-02-14 10:29:01.613135 I | receive: grpc 60
2023-02-14 10:29:02.615931 I | receive: grpc 61
2023-02-14 10:29:03.616709 I | receive: grpc 62
2023-02-14 10:29:04.618873 I | receive: grpc 63
2023-02-14 10:29:05.619534 I | receive: grpc 64
2023-02-14 10:29:06.620700 I | receive: grpc 65
2023-02-14 10:29:07.623159 I | receive: grpc 66
2023-02-14 10:29:08.625286 I | receive: grpc 67
2023-02-14 10:29:09.626981 I | receive: grpc 68
2023-02-14 10:29:10.627817 I | receive: grpc 69
2023-02-14 10:29:11.629112 I | receive: grpc 70
2023-02-14 10:29:12.630524 I | receive: grpc 71
2023-02-14 10:29:13.632267 I | receive: grpc 72
2023-02-14 10:29:14.633897 I | receive: grpc 73
2023-02-14 10:29:15.635173 I | receive: grpc 74
......

Restart server 2 with weight 4; requests are again distributed across both servers by the weighted-random policy:

[root@zsx server]# go run server.go
......
2023-02-14 10:35:13.615554 I | receive: grpc 74
2023-02-14 10:35:18.632576 I | receive: grpc 79
2023-02-14 10:35:33.672250 I | receive: grpc 94
2023-02-14 10:35:37.685112 I | receive: grpc 98
......
[root@zsx server]# go run server1.go
2023-02-14 10:35:14.077492 I | localhost:8001 net.Listing...
2023-02-14 10:35:14.080789 I | Put key:/grpclb/simple_grpc/localhost:8001  weight:4  success!
2023-02-14 10:35:14.618379 I | receive: grpc 75
2023-02-14 10:35:15.622523 I | receive: grpc 76
2023-02-14 10:35:16.626798 I | receive: grpc 77
2023-02-14 10:35:17.629423 I | receive: grpc 78
2023-02-14 10:35:19.635279 I | receive: grpc 80
2023-02-14 10:35:20.636903 I | receive: grpc 81
2023-02-14 10:35:21.641017 I | receive: grpc 82
2023-02-14 10:35:22.643229 I | receive: grpc 83
2023-02-14 10:35:23.646195 I | receive: grpc 84
2023-02-14 10:35:24.648788 I | receive: grpc 85
2023-02-14 10:35:25.651361 I | receive: grpc 86
2023-02-14 10:35:26.654053 I | receive: grpc 87
2023-02-14 10:35:27.656535 I | receive: grpc 88
2023-02-14 10:35:28.658263 I | receive: grpc 89
2023-02-14 10:35:29.661086 I | receive: grpc 90
2023-02-14 10:35:30.663600 I | receive: grpc 91
2023-02-14 10:35:31.666052 I | receive: grpc 92
2023-02-14 10:35:32.668253 I | receive: grpc 93
2023-02-14 10:35:34.675183 I | receive: grpc 95
2023-02-14 10:35:35.678464 I | receive: grpc 96
2023-02-14 10:35:36.682813 I | receive: grpc 97
2023-02-14 10:35:38.686545 I | receive: grpc 99

1.9 Summary

Using weighted random as the example, this article showed how to implement a custom gRPC load-balancing policy to meet our own requirements.
