Service Mesh Sidecar Proxy Comparison

Overview

Service mesh and cloud native have become attractive topics in recent years. As projects like Istio experience an upward trend, developers may wonder whether they should embrace a service mesh or not. If a service mesh can address the problems you are facing, the next step is to choose a suitable one.

When it comes to technology adoption among popular service meshes, the data plane is a major concern, since it introduces most of the overhead compared with deployments that have not adopted a service mesh.

Linkerd2-proxy is an open-source proxy designed to handle service mesh communication, and it is written in Rust. Its main peer and competitor, Envoy, is written in C++ and has been adopted by Istio. Another sidecar proxy, Mosn, introduced by SOFA Mesh (SOFA Mesh was deprecated in 2020, but Mosn is still maintained), is an alternative to Envoy as a sidecar in an Istio environment and is getting closer to industrial usage.

The following table compares Istio, Linkerd, and SOFAMesh.

| Service Mesh | Istio | Linkerd | SOFAMesh |
| --- | --- | --- | --- |
| Proxy | Envoy | Linkerd2-proxy | Mosn |
| Proxy Language | C++ | Rust | Golang |
| Proxy Performance | Acceptable | Smaller and faster | To be compared |
| Proxy Usage | General-purpose | Designed for service mesh sidecar | Designed for service mesh sidecar |
| Control Plane | Mixer+Pilot+Citadel/Istiod | public-api+tap+destination+controller+identity… | Mixer+Pilot+Citadel/Istiod |

Note that part of the performance comparison above is based on this post, which benchmarks Linkerd 2.3-edge-19.5.2 and Istio 1.1.6.

This post focuses on the features, architecture, mechanisms, and source code of three data planes, Linkerd2-proxy, Envoy, and Mosn, aiming to answer the following questions:

  • Performance comparison.
  • Features, scale, and robustness comparison.
  • Community support, popularity and trend comparison.

Hopefully it provides insights for your technology adoption decision. Also note that this post:

  • Focuses only on Kubernetes-based service meshes.
  • Does not analyze source code line by line, but should help you capture the whole picture.
  • Will be continuously updated.

Performance

A sidecar proxy's performance is affected by several aspects:

  • Traffic Hijacking
  • Threading Model
  • Discovery Service
  • Filter Chains

Traffic Hijacking

Hijacking the traffic of the deployed service is the prerequisite for a sidecar proxy to take effect. By using a set of tools, it makes the sidecar proxy the only path that inbound and outbound traffic goes through. The table below shows the tools and methods used to implement traffic hijacking for Istio, SOFA Mesh, and Linkerd respectively.


| Service Mesh | Istio | SOFA Mesh | Linkerd |
| --- | --- | --- | --- |
| InitContainer Needed | Yes | Yes | No |
| Solution | Leverages the Kubernetes admission webhook to inject an istio-init container (based on the proxy_init image, with an init shell script as entrypoint) into the same pod as the deployed service instance. | Same as Istio | Service Registry |
| Hijack Method | Iptables-based | Iptables-based | Traffic Takeover |
| Default Redirected Port | 15001 | 15001 | |
| Sidecar Proxy | Envoy | Mosn | Linkerd2-proxy |
| Optimization | | | |

// todo
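To make the iptables-based approach above concrete, here is a minimal Go sketch of the kind of redirect rule an init container such as istio-init installs before the sidecar starts. Only the outbound redirect to port 15001 (the default from the table) is shown; the UID/GID exclusions for the proxy's own traffic, the inbound PREROUTING rules, and the dedicated chains that real init scripts create are all omitted, so treat this as an illustration rather than the actual istio-init logic.

```go
package main

// Minimal sketch of iptables-based traffic hijacking as performed by an
// init container before the sidecar starts. Real init scripts add many
// more rules (inbound redirection, UID/GID and port exclusions so the
// proxy's own traffic is not re-captured). Requires root and the
// iptables binary; for illustration only.

import (
	"log"
	"os/exec"
	"strings"
)

func iptables(args ...string) {
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("iptables %s failed: %v\n%s", strings.Join(args, " "), err, out)
	}
	log.Printf("applied: iptables %s", strings.Join(args, " "))
}

func main() {
	// Redirect all outbound TCP traffic of this network namespace to the
	// sidecar proxy's listener on 15001 (Istio's default redirect port).
	iptables("-t", "nat", "-A", "OUTPUT", "-p", "tcp",
		"-j", "REDIRECT", "--to-ports", "15001")
}
```

Linkerd takes a different route, as the table notes, but the end result is the same: the sidecar becomes the only path in and out of the pod.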


Threading model

The sidecar proxy is itself a kind of middleware that acts like a secondary service, transparently processing all traffic of the business service. Although the sidecar proxy shares the same network namespace with the business service, communication overhead such as TCP connection establishment cannot be avoided. This overhead is illustrated by the figure below from AWS App Mesh.

(Figure: sidecar communication overhead, from https://aws.amazon.com/blogs/compute/learning-aws-app-mesh)

This subtopic focuses on the sidecar proxies' internal threading models. It raises a really interesting point: Envoy, Linkerd2-proxy, and Mosn are implemented in three different languages, C++, Rust, and Golang respectively. Different languages bring different features, and that plays an important role in how each multi-threading model is implemented.

Additionally, Nginx is introduced below to help build a better understanding, since it is a widely used proxy with a huge user community and rich documentation.


(Figure: diagram of Nginx's architecture, from https://www.aosabook.org/en/nginx.html)

  • Nginx
    • A limited number of single-threaded processes called workers. Within each worker, Nginx can handle many thousands of concurrent connections and requests per second.

  • Envoy
    • Each Envoy process has a main thread that forks and coordinates worker threads, plus a file flusher thread for logging. See the official blog here.
    • Setting the number of worker threads equal to the number of hardware threads on the machine is recommended.
    • A listener accepts a connection and binds it to one of the workers, which performs listening, filtering, and forwarding (see the Go sketch after this list for the general accept-and-dispatch pattern).

  • Linkerd2-proxy
    • todo

  • Mosn
    • todo
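To make the accept-and-dispatch pattern described above concrete, below is a minimal Go sketch of a TCP forwarder that spawns one goroutine per downstream connection, the concurrency model a Go proxy such as Mosn builds on. The listen and upstream addresses are placeholders, and this is an illustration of the pattern, not Mosn's actual code.

```go
package main

// Minimal goroutine-per-connection TCP forwarder, illustrating the
// accept -> dispatch -> proxy flow discussed above. The Go runtime
// multiplexes these goroutines onto OS threads, whereas Envoy pins each
// connection to one of a fixed set of worker threads for its lifetime.

import (
	"io"
	"log"
	"net"
)

const (
	listenAddr   = "127.0.0.1:15001" // placeholder sidecar listener
	upstreamAddr = "127.0.0.1:8080"  // placeholder upstream service
)

func proxy(downstream net.Conn) {
	defer downstream.Close()
	upstream, err := net.Dial("tcp", upstreamAddr)
	if err != nil {
		log.Printf("dial upstream: %v", err)
		return
	}
	defer upstream.Close()

	// Copy bytes in both directions; each direction runs concurrently.
	go io.Copy(upstream, downstream)
	io.Copy(downstream, upstream)
}

func main() {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		go proxy(conn) // one goroutine per downstream connection
	}
}
```

The contrast with Envoy and Nginx is that Go's scheduler handles the mapping of these goroutines onto OS threads, rather than the proxy managing a fixed worker pool itself.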

| Proxy | Envoy | Linkerd2-proxy | Mosn |
| --- | --- | --- | --- |
| Threading Model | Main thread, worker threads, and a file flusher thread | | |
| Default Threading Model | | | |
| Alternative Threading Model | | | |
| IO Blocking | | | |

Discovery Service

The discovery service may become a performance bottleneck, especially when the number of endpoints becomes huge or fluctuates frequently, or worse, both happen.

Check another post of mine for more information.
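As a rough illustration of why a large or frequently changing endpoint set hurts, the sketch below models a proxy-side endpoint table that is fully replaced on every discovery push (state-of-the-world style). The types and method names here are hypothetical and not taken from any of the three proxies.

```go
package main

// Toy endpoint table updated by "state of the world" pushes: every update
// carries the full endpoint list for a cluster, so the work per push is
// O(n) even if only one endpoint changed. With huge clusters or frequent
// flapping this becomes a measurable cost inside the sidecar; incremental
// (delta) updates are the usual mitigation.

import (
	"fmt"
	"sync"
)

type EndpointTable struct {
	mu        sync.RWMutex
	endpoints map[string][]string // cluster name -> endpoint addresses
}

// ApplyFullUpdate replaces the whole endpoint set for a cluster.
func (t *EndpointTable) ApplyFullUpdate(cluster string, addrs []string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	copied := make([]string, len(addrs))
	copy(copied, addrs)
	t.endpoints[cluster] = copied
}

// Pick returns an endpoint for the cluster (load balancing omitted).
func (t *EndpointTable) Pick(cluster string) (string, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	addrs := t.endpoints[cluster]
	if len(addrs) == 0 {
		return "", false
	}
	return addrs[0], true
}

func main() {
	table := &EndpointTable{endpoints: make(map[string][]string)}
	table.ApplyFullUpdate("orders", []string{"10.0.0.1:8080", "10.0.0.2:8080"})
	if addr, ok := table.Pick("orders"); ok {
		fmt.Println("routing to", addr)
	}
}
```

Because every push rewrites the whole list, cost grows with both cluster size and update frequency, which is exactly the combination called out above.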


Filter Chain

Scalability


| Sidecar | Envoy | Linkerd2-proxy | Mosn |
| --- | --- | --- | --- |
| Supported Platform | Kubernetes, Mesos, and more | | |
| Multi-platform Support Component | Platform Adapter | | |
| xDS Support | | | |
| Protocol Extension Support | | | |

Community Attraction

The number of iterations of software versions, and how many issues and pull requests relate to them, are two critical figures indicating the popularity of an open-source project. More popularity means more adoption, which leads to more issues being raised and more pull requests being posted, and ultimately more issues being fixed.
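One rough way to pull such figures is the public GitHub REST API; the sketch below prints stars, forks, and open issues (which on GitHub also include open pull requests) for the three proxy repositories. The repository paths and response fields used here are assumptions about the current API and repo locations, and unauthenticated requests are heavily rate-limited.

```go
package main

// Fetch a few popularity indicators for the three sidecar proxies from the
// public GitHub REST API (unauthenticated, so rate-limited). Repo paths and
// response fields are assumptions; verify before relying on them.

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type repoStats struct {
	Stars      int `json:"stargazers_count"`
	Forks      int `json:"forks_count"`
	OpenIssues int `json:"open_issues_count"` // includes open pull requests
}

func main() {
	repos := []string{"envoyproxy/envoy", "linkerd/linkerd2-proxy", "mosn/mosn"}
	for _, repo := range repos {
		resp, err := http.Get("https://api.github.com/repos/" + repo)
		if err != nil {
			log.Fatal(err)
		}
		var stats repoStats
		if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
		fmt.Printf("%-24s stars=%d forks=%d open issues+PRs=%d\n",
			repo, stats.Stars, stats.Forks, stats.OpenIssues)
	}
}
```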


Reference
