Discovery Service in Service Mesh

This post takes a close look at the key role of the service mesh in microservice architectures, in particular how Istio and Linkerd handle service discovery and dynamic updates of configuration resources. By comparing components such as Envoy, Pilot and Linkerd2-proxy, it explains how service discovery and configuration distribution are implemented in a Kubernetes environment, including the details of the xDS protocol and its performance impact.


Overview

Service discovery has become a troublesome and complex issue in production as the microservice architecture is widely adopted. The number of services may scale up, and communication between services must be guaranteed to be configurable, available and reliable.

One of the drivers for Service Mesh is to decouple service discovery from the application's business logic. There are several open-source solutions, and this post compares them by architecture, performance and scale. Note that although some service meshes are platform-independent and compatible with multiple environments, this post focuses only on the Kubernetes environment.


Envoy & Pilot

Envoy interacts with Pilot to discover dynamic services. A universal data plane API, named xDS, has been introduced for the Envoy process to discover resources at different granularities.

Envoy discovers its various dynamic resources via the filesystem or by querying one or more management servers. Collectively, these discovery services and their corresponding APIs are referred to as xDS

This Envoy official post illustrates the drivers and original design of xDS and its iteration from v1 to v2. The detailed xDS protocol is documented here.

You may also notice a protocol named Mesh Configuration Protocol (MCP), a subscription-based configuration distribution API. However, the MCP server has been disabled by default in Galley since Istio 1.5. Apart from MCP, there are alternatives for Pilot to get config resources.

It is important to know that the mechanisms to discover dynamic Services and Config Resources are different:

  • xDS deals with Service Discovery: it is how Envoy gets pod, service, node, and endpoint information from Pilot.
  • Config Resources refers to resources like VirtualService, RouteRule and other Istio configurations. Istio uses CRDs to represent config resources. Also keep in mind that CRDs are registered to the Kubernetes apiserver via istioctl or helm (deprecated) during Istio setup. For Pilot, dynamic Config Resources can be obtained via the Config Controller's implementations listed below:
    • MCP Controller, which will not be discussed in this post.
    • Filesystem, for testing usage.
    • Kube Config Controller, which will be covered in more detail in this post and will be referred to as Config Controller.
[Diagram of Istio service discovery mechanism, from https://istio.io/docs/ops/deployment/architecture/]

The above diagram illustrates the interaction between Pilot and Envoy.

  1. Newly deployed service instances first notify the platform adaptor to register themselves.
  2. Then Pilot uses the abstract model to translate platform-specific metadata into the schema it knows.
  3. Finally, Pilot distributes these service changes.

However, keep in mind that Platform Adaptor, Abstract Model and Envoy API are conceptual designs that build a skeleton for the internal implementation. Pilot consists of two components/binaries, pilot-discovery and pilot-agent.

  • pilot-discovery is deployed as a Kubernetes Deployment.
  • pilot-agent runs as a process inside each sidecar container.
  • As the official design diagram below shows, Discovery services corresponds to pilot-discovery, and agent corresponds to pilot-agent. Here are the command-line arguments for pilot-discovery and pilot-agent.

Istio Pilot agent runs in the sidecar or gateway container and bootstraps Envoy.
Istio Pilot provides fleet-wide traffic management capabilities in the Istio Service Mesh.


[Diagram of Istio Pilot data flow and control flow, from https://github.com/istio/old_pilot_repo/blob/master/doc/design.md/]

Pilot-agent
  • Its functionality is to configure, initialize and control the Envoy proxy lifecycle.
  • It shares the same image with Envoy. Thus, within the Envoy sidecar container there are actually two running processes of two binaries, /usr/local/bin/pilot-agent and /usr/local/bin/envoy respectively.
  • Check here for the source code. This piece of code is simple and obvious: it contains three critical functions, restart, run and cleanup. The following description is cited from the comments (a sketch of such an agent loop follows the quote):
The restart protocol matches Envoy semantics for restart epochs: to successfully launch 
a new Envoy process that will replace the running Envoy processes, ...
...
Run function is a call to start the proxy and must block until the proxy exits. 
...
Cleanup function is executed immediately after the proxy exits and must be non-blocking since it
is executed synchronously in the main agent control loop
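
To make those semantics concrete, here is a minimal sketch, not the actual pilot-agent source: a hypothetical Proxy interface with Run/Cleanup hooks, a toy envoyProxy that shells out to an envoy binary, and an agentLoop that bumps the restart epoch whenever a config update arrives. All of the names (Proxy, envoyProxy, agentLoop) are made up for illustration; only the envoy flags -c and --restart-epoch are real.

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

// Proxy abstracts the managed sidecar process.
type Proxy interface {
    // Run starts the proxy for a given restart epoch and blocks until it exits.
    Run(ctx context.Context, epoch int) error
    // Cleanup is executed right after the proxy exits.
    Cleanup(epoch int)
}

// envoyProxy shells out to an envoy binary; --restart-epoch lets a newer
// epoch hot-replace the previous process.
type envoyProxy struct {
    binary, bootstrap string
}

func (p *envoyProxy) Run(ctx context.Context, epoch int) error {
    cmd := exec.CommandContext(ctx, p.binary,
        "-c", p.bootstrap,
        "--restart-epoch", fmt.Sprint(epoch))
    return cmd.Run() // blocks until the process exits
}

func (p *envoyProxy) Cleanup(epoch int) {
    fmt.Printf("epoch %d exited, dropping its generated bootstrap config\n", epoch)
}

// agentLoop launches a new epoch whenever a config update arrives; the old
// epoch keeps draining until it exits on its own (simplified hot-restart),
// at which point its Cleanup runs.
func agentLoop(ctx context.Context, p Proxy, configUpdates <-chan struct{}) {
    for epoch := 0; ; epoch++ {
        done := make(chan error, 1)
        go func(e int) {
            done <- p.Run(ctx, e)
            p.Cleanup(e)
        }(epoch)

        select {
        case <-configUpdates:
            // loop around and start epoch+1
        case err := <-done:
            fmt.Println("proxy exited unexpectedly, restarting:", err)
            time.Sleep(200 * time.Millisecond)
        case <-ctx.Done():
            return
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    updates := make(chan struct{}, 1)
    updates <- struct{}{} // pretend a config change arrived
    agentLoop(ctx, &envoyProxy{binary: "envoy", bootstrap: "/etc/envoy/bootstrap.json"}, updates)
}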

Pilot-discovery

Pilot-discovery is the main process of Pilot. The following subtopics cover a functionality overview, how Pilot discovers and distributes services & config, xDS in detail and, finally, a summary.


Functionality Overview

It implements the Platform Adaptor, Abstract Model and Envoy API functionalities:

  • Platform Adaptor: Pilot provides an option for users to configure a Service Registry. The concept of a Service Registry roughly means platforms like Kubernetes and Consul, which are service discovery systems providing APIs to inform consumers of changes to services and their endpoints. The component that handles these APIs is the Service Controller. Another component, the Config Controller, handles config resource change APIs.
  • Abstract Model: convert both config resources and services into a universal schema (see the sketch after this list).
  • Envoy API: Pilot provides a set of gRPC APIs for service distribution.
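
To make the Abstract Model idea concrete, here is a minimal sketch, not Istio's actual model package: a hypothetical meshService type and a converter that maps a Kubernetes Service into that platform-neutral schema. The names meshService and convertService are assumptions made for this example; a Consul adaptor would produce the same type from its own API.

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// meshService is a hypothetical platform-neutral representation of a service,
// roughly analogous to the abstract model Pilot builds internally.
type meshService struct {
    Hostname string           // platform-independent name
    Ports    map[string]int32 // port name -> port number
}

// convertService maps a Kubernetes Service (platform-specific metadata)
// into the neutral schema.
func convertService(svc *v1.Service, clusterDomain string) meshService {
    ports := make(map[string]int32, len(svc.Spec.Ports))
    for _, p := range svc.Spec.Ports {
        ports[p.Name] = p.Port
    }
    return meshService{
        Hostname: fmt.Sprintf("%s.%s.svc.%s", svc.Name, svc.Namespace, clusterDomain),
        Ports:    ports,
    }
}

func main() {
    svc := &v1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "reviews", Namespace: "default"},
        Spec: v1.ServiceSpec{
            Ports: []v1.ServicePort{{Name: "http", Port: 9080}},
        },
    }
    fmt.Printf("%+v\n", convertService(svc, "cluster.local"))
}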

Config Resources & Service Discovery
  • The whole set of CRDs for config resources can be checked here; note that this YAML config file is used as input to a code generation tool defined here, which generates the schema here.

  • Config resources can also be stored in a filesystem, but only for testing purposes, as this approach does not provide indexing, a data consistency model, and so on.

    If FileDir is set, that directory will be monitored for CRD yaml files and will update the controller as those files change (This is used for testing purposes). Otherwise, a CRD client is created based on the config.

  • As mentioned, pilot-discovery implements Config Controller and Service Controller to handle config resource changes and service changes respectively. They both leverage Kubernetes client-go, a RESTful client with built-in cache and queue mechanisms that lets developers perform CRUD and Watch actions on a given set of resources stored in etcd. While Service Controller implements the Controller interface, Config Controller implements the ConfigStoreCache interface, and both of them allow handler registration, so that event handlers for config and service updates can be set up. Keep in mind that these event handler functions are critical: they wrap updates into a defined data structure and send them to the gRPC server through a channel (see the sketch after this list).

  • Config Controller's implementation, source code here, loads all the CRDs that Pilot needs, which have been registered to the Kubernetes apiserver beforehand. Then, here, it creates a dedicated goroutine (roughly 23+ goroutines in total) to subscribe to each config resource.

  • Service Controller, source code here, handles service changes. Its Run function creates 4 goroutines for the shared informers of Pod, Node, Service and Endpoint respectively.

  • Kubernetes CRDs, client-go, and the controller mechanism are covered in another post of mine.
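
As a rough illustration of the controller pattern just described, here is a minimal, self-contained sketch (not Pilot's actual Service Controller): a client-go shared informer watches Services, and its registered event handlers wrap each change and send it to a channel that a distribution component could consume. The serviceEvent type and the channel wiring are assumptions made for this example.

package main

import (
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

// serviceEvent is a hypothetical wrapper pushed to the distribution layer.
type serviceEvent struct {
    kind string      // "add", "update", "delete"
    obj  interface{} // the Service object involved
}

func main() {
    // Build a client from the local kubeconfig (a real deployment would use in-cluster config).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // Updates flow to the push/distribution side through a channel,
    // mirroring the pattern described above.
    updates := make(chan serviceEvent, 100)

    factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    svcInformer := factory.Core().V1().Services().Informer()

    // Register event handlers; the informer cache and work queue live inside client-go.
    svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc:    func(obj interface{}) { updates <- serviceEvent{"add", obj} },
        UpdateFunc: func(oldObj, newObj interface{}) { updates <- serviceEvent{"update", newObj} },
        DeleteFunc: func(obj interface{}) { updates <- serviceEvent{"delete", obj} },
    })

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop) // one goroutine per informer, similar to the controllers described above

    // Consume events; a real control plane would translate them into xDS pushes here.
    for ev := range updates {
        svc := ev.obj.(*v1.Service)
        fmt.Printf("%s: service %s/%s\n", ev.kind, svc.Namespace, svc.Name)
    }
}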


Service Distribution
  • For the Envoy API, pilot-discovery launches a gRPC server here in a goroutine to serve the discovery services. HTTP servers are also started here as separate goroutines, for readiness probes and Kubernetes webhook usage.

  • EnvoyXdsServer, the core implementation of the gRPC server:

    • The protobuf definitions used are here; they generate Go interfaces that the discovery service must implement.
    • Pilot's gRPC server implements these interfaces here, for ADS handling.
    • The handler functions registered here pass service and config updates from Service Controller and Config Controller to EnvoyXdsServer over Go channels, and the gRPC server then pushes these updates to the connected proxies (see the sketch after this list).
    • 3 goroutines are created here for update handling, pushing, and metrics.
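
The following is a minimal sketch of that push path with made-up types (pushRequest, connection, xdsServer), not Pilot's real EnvoyXdsServer or the go-control-plane API: updates arrive on a channel fed by the controllers and are fanned out to a per-connection channel, with one sender goroutine per connected proxy.

package main

import (
    "fmt"
    "sync"
    "time"
)

// pushRequest stands in for a computed update (e.g. changed endpoints).
type pushRequest struct {
    version string
}

// connection represents one connected proxy stream; pushCh feeds its sender goroutine.
type connection struct {
    id     string
    pushCh chan pushRequest
}

// xdsServer keeps the set of live connections and broadcasts updates to them.
type xdsServer struct {
    mu          sync.Mutex
    connections map[string]*connection
    updates     chan pushRequest // fed by the Service/Config controllers
}

func newXdsServer() *xdsServer {
    return &xdsServer{
        connections: make(map[string]*connection),
        updates:     make(chan pushRequest, 100),
    }
}

// register adds a proxy connection and starts its dedicated sender goroutine.
func (s *xdsServer) register(id string) {
    c := &connection{id: id, pushCh: make(chan pushRequest, 10)}
    s.mu.Lock()
    s.connections[id] = c
    s.mu.Unlock()
    go func() {
        for req := range c.pushCh {
            // A real server would marshal a DiscoveryResponse and send it
            // on the gRPC stream here.
            fmt.Printf("push version %s to %s\n", req.version, c.id)
        }
    }()
}

// run is the central push loop: it drains the update channel and fans each
// update out to every connection, mirroring the flow described above.
func (s *xdsServer) run() {
    for req := range s.updates {
        s.mu.Lock()
        for _, c := range s.connections {
            c.pushCh <- req
        }
        s.mu.Unlock()
    }
}

func main() {
    s := newXdsServer()
    s.register("sidecar-a")
    s.register("sidecar-b")
    go s.run()

    s.updates <- pushRequest{version: "v1"} // e.g. triggered by a Service change
    s.updates <- pushRequest{version: "v2"} // e.g. triggered by a VirtualService change
    time.Sleep(100 * time.Millisecond)      // give the sender goroutines time to print
}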

xDS In Detail

Having illustrated how services and config travel from the Kubernetes apiserver to pilot-discovery, we come to the most sophisticated part, xDS, which plays an important role in service distribution and may have a performance impact, e.g. on endpoint pushes. Incremental xDS will also be discussed.
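As a rough illustration of why endpoint pushes can be expensive, the sketch below uses simplified stand-in structs (sotwResponse, deltaResponse; not the real Envoy protobufs) to contrast state-of-the-world responses, which resend the full resource set on every change, with delta (incremental) responses, which carry only what changed.

package main

import "fmt"

// sotwResponse loosely mirrors a state-of-the-world DiscoveryResponse:
// every push carries ALL resources of the given type.
type sotwResponse struct {
    TypeURL   string
    Version   string
    Resources []string // e.g. every endpoint of every cluster
}

// deltaResponse loosely mirrors a DeltaDiscoveryResponse:
// only changed and removed resources are sent.
type deltaResponse struct {
    TypeURL          string
    Resources        []string // added or updated resources only
    RemovedResources []string
}

func main() {
    // With 1000 endpoints, one endpoint change still pushes 1000 entries in SotW mode...
    full := sotwResponse{
        TypeURL:   "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment",
        Version:   "42",
        Resources: make([]string, 1000),
    }
    // ...while incremental xDS pushes only the single changed endpoint.
    delta := deltaResponse{TypeURL: full.TypeURL, Resources: []string{"reviews-v2-pod-17"}}

    fmt.Printf("SotW push size: %d resources\n", len(full.Resources))
    fmt.Printf("Delta push size: %d resources\n", len(delta.Resources))
}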


Summary

We can see that Pilot handles services and config from Kubernetes to Envoy. The full path can be described as:

       gRPC                          RESTful          gRPC
Etcd  ----->  Kubernetes Apiserver  --------->  Pilot  ----->  Envoy


Linkerd2-proxy

// Linkerd's solution to xDS

Reference
