Overview
Service mesh and cloud native have become attractive topics in recent years. As projects like Istio experience an upward trend, developers may wonder whether they should embrace a service mesh or not. If a service mesh can address the problems you are facing, the next step is to choose a suitable one.
When it comes to technology adoption among popular service meshes, the data plane is a major concern, since it introduces most of the overhead compared with deployments that have not adopted a service mesh.
Linkerd2-proxy is an open-source proxy designed to address service mesh communication issues, and it is written in Rust. Its main competitor, Envoy, is written in C++ and has been adopted by Istio. Another sidecar proxy, Mosn, was introduced by SOFA Mesh (although SOFA Mesh was deprecated in 2020, Mosn is still maintained); it is an alternative to Envoy as the sidecar in an Istio environment and is getting closer to industrial usage.
The following table compares Istio, Linkerd, and SOFAMesh.
Service Mesh | Istio | Linkerd | SOFAMesh |
---|---|---|---|
Proxy | Envoy | Linkerd2-proxy | Mosn |
Proxy Language | C++ | Rust | Golang |
Proxy Performance | Acceptable | Smaller and faster | To be compared |
Proxy Usage | General-purpose | Designed for service mesh sidecar | Designed for service mesh sidecar |
Control Plane | Mixer+Pilot+Citadel/Istiod | public-api+tap+destination+controller+identity… | Mixer+Pilot+Citadel/Istiod |
Note that part of the above performance metrics is based on this post, which benchmarks Linkerd 2.3-edge-19.5.2 and Istio 1.1.6.
This post focuses on the features, architecture, mechanisms, and source code implementation of three data planes, Linkerd2-proxy, Envoy, and Mosn, aiming to answer the following questions:
- Performance comparison.
- Features, scale, and robustness comparison.
- Community support, popularity and trend comparison.
Hopefully it provides insights for your technology adoption decision. Also keep in mind that this post:
- Focuses only on Kubernetes-based service meshes.
- May not analyze source code line by line, but should help you capture the whole picture.
- Will be continuously updated.
Table of Contents
Performance
A sidecar proxy's performance issues stem from several aspects:
- Traffic Hijacking
- Threading Model
- Discovery Service
- Filter Chains
Traffic Hijacking
Hijacking the traffic of the deployed service is the prerequisite for a sidecar proxy to take effect. Using a set of tools, it makes the sidecar proxy the only path that inbound and outbound traffic goes through. The table below shows the tools and methods used to implement traffic hijacking for Istio, SOFA Mesh, and Linkerd respectively.
Service Mesh | Istio | Sofa Mesh | Linkerd |
---|---|---|---|
InitContainer Needed | Yes | No | |
Solution | It leverages the Kubernetes admission webhook to inject an istio-init container, based on the proxy_init image and using an init shell script as its entrypoint, into the same pod as the deployed service instance. | Service Registry | |
Hijack Method | Iptable-based | Traffic Takeover | |
Default Redirected Port | 15001 | ||
Sidecar proxy | Envoy | Mosn | Linkerd2-proxy |
Optimization |
// todo
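To make the iptables-based method concrete, here is a minimal sketch in Go of the *style* of NAT rules an init container installs. The chain name `PROXY_REDIRECT` and the rule set are illustrative assumptions, not Istio's actual chain names or complete rule list; the key ideas shown are redirecting all TCP traffic to the proxy's listener port (15001 by default) and exempting the proxy's own traffic by owner UID to avoid a redirect loop.

```go
package main

import "fmt"

// buildRedirectRules sketches the kind of NAT rules an istio-init-style
// container installs. All inbound and outbound TCP traffic is redirected
// to the sidecar's single listener port; traffic generated by the proxy
// itself (matched by owner UID) is returned early so it is not redirected
// back into the proxy in an infinite loop.
func buildRedirectRules(proxyPort int, proxyUID string) []string {
	return []string{
		// Dedicated chain that performs the actual REDIRECT.
		"iptables -t nat -N PROXY_REDIRECT",
		fmt.Sprintf("iptables -t nat -A PROXY_REDIRECT -p tcp -j REDIRECT --to-ports %d", proxyPort),
		// Inbound: traffic arriving at the pod is sent to the proxy.
		"iptables -t nat -A PREROUTING -p tcp -j PROXY_REDIRECT",
		// Outbound: skip the proxy's own traffic, then redirect the rest.
		fmt.Sprintf("iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner %s -j RETURN", proxyUID),
		"iptables -t nat -A OUTPUT -p tcp -j PROXY_REDIRECT",
	}
}

func main() {
	for _, r := range buildRedirectRules(15001, "1337") {
		fmt.Println(r)
	}
}
```

Generating the rules as strings keeps the sketch testable without root privileges; a real init container would execute them (or use a netlink library) inside the pod's network namespace.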
Threading Model
The sidecar proxy itself is effectively a middleware that acts like a secondary service, transparently processing all traffic from the business service. Although the sidecar proxy shares the same network namespace as the business service, communication overhead such as TCP establishment cost cannot be avoided. Such communication overhead is illustrated by the figure below from AWS App Mesh.

This subtopic focuses on the sidecar proxy's internal threading model. It raises a really interesting point: Envoy, Linkerd2-proxy, and Mosn are implemented in three distinct languages, C++, Rust, and Golang respectively. Different languages offer different features, and this plays an important role in the implementation of the multi-threading model.
Additionally, Nginx is introduced below to help with understanding, since it is a widely used proxy with a huge user community and rich documentation.

- Nginx
  - A limited number of single-threaded processes called `workers`. Within each `worker`, nginx can handle many thousands of concurrent connections and requests per second.
- Envoy
- Each Envoy process has a main thread that forks and coordinates worker threads, plus a file flusher thread for logging. See the official blog here.
  - Setting the number of worker threads equal to the number of hardware threads on the machine is recommended.
  - The listener accepts a connection and binds it to one of the workers, which performs listening, filtering, and forwarding.
- Linkerd2-proxy
- todo
- Mosn
- todo
Proxy | Envoy | Linkerd2-proxy | Mosn |
---|---|---|---|
Threading Model | Main thread, worker threads, and a file flusher thread | ||
Default Threading Model | |||
Alternative Threading Model | |||
IO Blocking |
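Envoy's worker model described above can be sketched in miniature. In this illustrative Go sketch (goroutines stand in for Envoy's OS worker threads, and round-robin pinning is a simplification of Envoy's actual balancing), a main "listener" loop hands each accepted connection to exactly one worker, and the connection is served by that worker alone, so per-connection state needs no locking.

```go
package main

import (
	"fmt"
	"sync"
)

// worker owns a queue of connections; once a connection is pinned to a
// worker it never migrates, so `served` is touched by one goroutine only.
type worker struct {
	id     int
	conns  chan int // stand-in for accepted connection fds
	served int
}

func (w *worker) run(wg *sync.WaitGroup) {
	defer wg.Done()
	for range w.conns {
		// Listening, filtering, and forwarding for this connection
		// would all happen here, on this worker exclusively.
		w.served++
	}
}

// dispatch plays the main thread: it "accepts" nConns connections and
// pins each one to a worker, then reports how many each worker served.
func dispatch(nWorkers, nConns int) []int {
	workers := make([]*worker, nWorkers)
	var wg sync.WaitGroup
	for i := range workers {
		workers[i] = &worker{id: i, conns: make(chan int, nConns)}
		wg.Add(1)
		go workers[i].run(&wg)
	}
	for c := 0; c < nConns; c++ {
		workers[c%nWorkers].conns <- c // round-robin pinning
	}
	for _, w := range workers {
		close(w.conns)
	}
	wg.Wait()
	counts := make([]int, nWorkers)
	for i, w := range workers {
		counts[i] = w.served
	}
	return counts
}

func main() {
	fmt.Println(dispatch(4, 100)) // prints [25 25 25 25]
}
```

The design choice this illustrates is the embarrassingly parallel, share-nothing worker: adding workers scales accepted-connection throughput without adding lock contention, which is why matching worker count to hardware threads is the recommended configuration.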
Discovery Service
A discovery service may become a performance bottleneck, especially when the number of endpoints becomes huge or fluctuates frequently, or worse, both happen.
Check another post of mine for more information.
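Why endpoint churn hurts can be seen in a minimal sketch of an EDS-style endpoint cache (the type and method names here are this sketch's own, not Envoy's or Mosn's actual APIs). On each push from the control plane the proxy replaces its whole endpoint set for a cluster under a lock, so with huge or frequently fluctuating endpoint sets this full-replace cost is paid on every update.

```go
package main

import (
	"fmt"
	"sync"
)

// endpointCache holds the proxy's view of cluster endpoints, updated by
// pushes from the discovery service and read on the hot request path.
type endpointCache struct {
	mu        sync.RWMutex
	endpoints map[string][]string // cluster name -> endpoint addresses
	version   int                 // bumped on every discovery push
}

// update applies a discovery push: the entire endpoint set for the
// cluster is replaced, not just the delta that actually changed.
func (c *endpointCache) update(cluster string, eps []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.endpoints[cluster] = append([]string(nil), eps...) // full replace
	c.version++
}

// get is called on the request path to pick a backend.
func (c *endpointCache) get(cluster string) []string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.endpoints[cluster]
}

func main() {
	c := &endpointCache{endpoints: map[string][]string{}}
	c.update("backend", []string{"10.0.0.1:8080", "10.0.0.2:8080"})
	// A scale-up event pushes the whole new set again.
	c.update("backend", []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"})
	fmt.Println(c.version, len(c.get("backend")))
}
```

Incremental (delta) xDS variants exist precisely to avoid resending full sets, trading the simplicity of state-of-the-world pushes for smaller updates under churn.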
Filter Chain
Scalability
Sidecar | Envoy | Linkerd2-proxy | Mosn |
---|---|---|---|
Supported Platform | Kubernetes, Mesos and more | ||
Multi-platform Support Component | Platform Adapter | ||
xDS Support | ✅ | ✅ | |
Protocol Extension Support | ✅ |
Community Attraction
The number of software version iterations, and the number of related issues and pull requests, are two critical figures indicating the popularity of an open-source project. The reason is that more popularity means more adoption, which leads to more issues raised and more pull requests posted, and finally more issues fixed.