Real World Microservices

Microservice Routing and Deployment Strategies

This post explores how introducing a notion of routing at the RPC layer can solve hard deployment problems in a microservice environment, including staging, canarying, and rollback of new service versions. It introduces linkerd, a service proxy that provides load balancing, retries, and related features, and walks through its routing mechanism in detail.

Microservices allow engineering teams to move quickly to grow a product… assuming they don’t get bogged down by the complexity of operating a distributed system. In this post, I’ll show you how some of the hardest operational problems in microservices—staging and canarying of deep services—can be solved by introducing the notion of routing to the RPC layer.

Looking back at my time as an infrastructure engineer at Twitter (from 2010 to 2015), I now realize that we were “doing microservices”, though we didn’t have that vocabulary at the time. (We used what I now understand to be a bad word—SOA).

Buzzwords aside, our motivations were the same as those doing microservices today. We needed to allow engineering teams to operate independently—to control their own deploy schedules, on call rotations, availability, and scale. These teams needed the flexibility to iterate and scale quickly and independently—without taking down the site.

Having worked on one of the world’s largest microservice applications through its formational years, I can assure you that microservices are not magical scaling sprinkles—nor flexibility, nor security, nor reliability sprinkles. It’s my experience that they are considerably more difficult to operate than their monolithic counterparts. The tried and true tools we’re used to—configuration management, log processing, strace, tcpdump, etc—prove to be crude and dull instruments when applied to microservices. In a world where a single request may touch hundreds of services, each with hundreds of instances, where do I run tcpdump? Which logs do I read? If it’s slow, how do I figure out why? When I want to change something, how do I ensure these changes are safe?


We replaced our monolith with micro services so that every outage could be more like a murder mystery.

— Honest Status Page (@honest_update)  October 7, 2015


When Twitter moved to microservices, it had to expend hundreds (thousands?) of staff-years just to reclaim operability. If every organization had to put this level of investment into microservices, the vast majority of these projects would simply fail. Thankfully, over the past few years, open source projects have emerged to ease some of the burden of microservice operations: projects that abstract the details of datacenters and clouds, or offer visibility into a system’s runtime state, or make it easier to write services. But this still isn’t a complete picture of what’s needed to operate microservices at scale. While there are a variety of good tools that help teams go from source code to artifact to cloud, operators don’t have nearly enough control over how these services interact once they’re running. At Twitter, we learned that we need tools that operate on the communication between services—RPC.

It’s this experience that motivated linkerd (pronounced “linker dee”), a proxy designed to give service operators command & control over traffic between services. This encompasses a variety of features including transport security, load balancing, multiplexing, timeouts, retries, and routing.

In this post, I’ll discuss linkerd’s approach to routing. Classically, routing is one of the problems that is addressed at Layers 3 and 4—TCP/IP—with hardware load balancers, BGP, DNS, iptables, etc. While these tools still have a place in the world, they’re difficult to extend to modern multi-service software systems. Instead of operating on connections and packets, we want to operate on requests and responses. Instead of IP addresses and ports, we want to operate on services and instances.

In fact, we’ve found request routing to be a versatile, high-leverage tool that can be employed to solve some of the hardest problems that arise in microservices, allowing production changes to be safe, incremental, and controllable.

ROUTING IN LINKERD

linkerd doesn’t need to be configured with a list of clients. Instead, it dynamically routes requests and provisions clients as needed. The basic mechanics of routing involve three things:

  • a logical name, describing a request

  • a concrete name, describing a service (i.e. in service discovery)

  • and a delegation table (dtab), describing the mapping of logical to concrete names.

linkerd assigns a logical name to every request it processes, for example /http/1.1/GET/users/add or /thrift/userService/addUser. Logical names describe information relevant to the application but not its infrastructure, so they typically do not describe any details about service discovery (e.g. etcd, consul, ZooKeeper), environment (e.g. prod, staging), or region (e.g. us-central-1b, us-east-1).
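
To make this concrete, here’s a minimal sketch of how a name gets assigned, assuming the method-and-host naming scheme used in the configurations later in this post (the exact form depends on how linkerd is configured):

$ curl -H 'Host: users' http://localhost:4140/add
# linkerd names this request /http/1.1/GET/users: protocol version,
# method, and Host header, with no mention of environment or region

Note that nothing in the name says where or how the users service actually runs; that binding happens later, during delegation.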

These sorts of details are encoded in concrete names. Concrete names typically describe a service discovery backend like ZooKeeper, etcd, consul, DNS, etc. For example:

  • /$/inet/users.example.com/8080 names an inet address.

  • /io.l5d.k8s/default/thrift/users names a Kubernetes service.

  • /io.l5d.serversets/users/prod/thrift names a ZooKeeper serverset.

This “namer” subsystem is pluggable so that it can be extended to support arbitrary service discovery schemes.

DELEGATION

The distinction between logical and concrete names offers two real benefits:

  1. Application code is focused on business logic–users, photos, tweets, etc–and not operational details.

  2. Backends can be determined contextually and, with the help of namerd, dynamically.

The mapping from logical to concrete names is described by a delegation table, or Dtab. For example, linkerd can assign names to HTTP requests in the form /http/1.1/<METHOD>/<HOST>.

Suppose we configure linkerd as follows:

namers:
- kind: io.l5d.experimental.k8s
  authTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

routers:
- protocol: http
  servers:
  - port: 4140
  baseDtab: |
    /srv         => /io.l5d.k8s/default/http ;
    /host        => /srv ;
    /http/1.1/*  => /host ;

In this configuration, a logical name like /http/1.1/GET/users is delegated to the concrete name /io.l5d.k8s/default/http/users through rewrites:

From                  Delegation                         To
/http/1.1/GET/users   /http/1.1/* => /host               /host/users
/host/users           /host => /srv                      /srv/users
/srv/users            /srv => /io.l5d.k8s/default/http   /io.l5d.k8s/default/http/users


Finally, the concrete name, /io.l5d.k8s/default/http/users, addresses a service discovery system—in this case, the Kubernetes master API. The io.l5d.k8s namer expects names in the form namespace/port/service, so linkerd load balances over the addresses on the http port of the users service in the default namespace.
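
The namer resolves this against the same endpoint data Kubernetes tracks for the service. As a rough illustration (the addresses below are hypothetical), the address set being balanced over is what you’d see from:

$ kubectl get endpoints users --namespace default
NAME      ENDPOINTS                         AGE
users     10.0.4.7:8080,10.0.5.12:8080      2d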

Multiple namers may be combined to express logic such as find this service in ZooKeeper, but if it’s not there fall back to the local filesystem:

namers:
- kind: io.l5d.fs
  rootDir: /path/to/services
- kind: io.l5d.serversets
  zkAddrs:
  - host: 127.0.0.1
    port: 2181

routers:
- protocol: http
  servers:
  - port: 4140
  baseDtab: |
    /srv         => /io.l5d.fs ;
    /srv         => /io.l5d.serversets/path/to/services ;
    /host        => /srv ;
    /http/1.1/*  => /host ;

The /srv delegations are combined to construct a fallback so that if a serverset cannot be found, lookups will be performed against the filesystem namer.
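
As a concrete sketch of that fallback, the io.l5d.fs namer watches one file per service under rootDir, each listing host/port pairs; assuming a hypothetical users entry, it might look like:

$ cat /path/to/services/users
192.168.1.10 8080
192.168.1.11 8080

If the ZooKeeper serverset for /srv/users exists, it wins; otherwise linkerd balances over the addresses in this file.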

PER-REQUEST OVERRIDES

This concept of contextual resolution can be extended to alter how individual requests are routed.

Suppose you want to stage a new version of a service and you want to get an idea how the application will behave with the new version. Assume that this service isn’t directly user-facing, but has other services that call it—a “users” service is generally a good example. You have a few options:

  1. Just deploy it to production. #YOLO

  2. Deploy staging versions of all of the services that call your service.

Neither of these options is particularly manageable. The former causes user-facing problems. The latter becomes complex and cumbersome—you may not have the access or tooling needed to deploy new configurations of all of the services that call you…

Happily, the routing capabilities we have with linkerd allow us to do ad-hoc staging! We can extend the delegation system described above on an individual request to stage a new version of the users service without changing any of its callers. For example:

$ curl -H 'Dtab-local: /host/users=>/srv/users-v2' https://example.com/

This would cause all services that would ordinarily send requests to /srv/users to instead send requests to /srv/users-v2. Only on this request. Across all services!

And this isn’t just limited to curl commands: this sort of thing can also easily be supported by browser plugins.
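
Since the header carries an ordinary dtab, multiple overrides can ride on one request, separated by semicolons. A sketch (the photos service here is hypothetical):

$ curl -H 'Dtab-local: /host/users=>/srv/users-v2; /host/photos=>/srv/photos-beta' https://example.com/

A single request can thus exercise staged versions of several services at once, while every other request in the system is untouched.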

This approach greatly reduces the overhead of staging new versions of services in a complex microservice application.

DYNAMIC ROUTING WITH NAMERD

I’ve described how we can configure linkerd with a static delegation table. But what if we want to change routing policy at runtime? What if we want to use an approach similar to the one we used for staging to support “canary” or “blue-green” deploys? Enter namerd.

namerd is a service that allows operators to manage delegations. It fronts service discovery systems so that linkerd does not need to communicate with service discovery directly—linkerd instances resolve names through namerd, which maintains a view of service discovery backends.

namerd is configured with:

  • A (pluggable) storage backend, e.g. ZooKeeper or etcd.

  • “Namers” that inform namerd how to perform service discovery.

  • Some external interfaces–usually a control interface so that operators may update delegations, and a sync interface for linkerd instances.

linkerd’s configuration is then simplified to be something like the following:

routers:
- protocol: http
  servers:
  - port: 4180
  interpreter:
    kind: io.l5d.namerd
    namespace: web
    dst: /$/inet/namerd.example.com/4290

And namerd has a configuration like:

# pluggable dtab storage -- for this example we'll just use an in-memory version.
storage:
  kind: io.buoyant.namerd.storage.inMemory

# pluggable namers (for service discovery)
namers:
- kind: io.l5d.fs
  ...
- kind: io.l5d.serversets
  ...

interfaces:
# used by linkerds to receive updates
- kind: thriftNameInterpreter
  ip: 0.0.0.0
  port: 4100
# used by `namerctl` to manage configuration
- kind: httpController
  ip: 0.0.0.0
  port: 4180

Once namerd is running and linkerd is configured to resolve through it, we can use the namerctl command-line utility to update routing dynamically.

When namerd first starts, we create a basic dtab (called web) as follows:

$ namerctl dtab create web - <<EOF
/srv         => /io.l5d.fs ;
/srv         => /io.l5d.serversets/path/to/services ;
/host        => /srv ;
/http/1.1/*  => /host ;
EOF
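
To double-check what namerd is now serving to linkerd instances, the dtab can be read back (this assumes namerctl’s dtab get subcommand, a sibling of create and update):

$ namerctl dtab get web
/srv         => /io.l5d.fs ;
/srv         => /io.l5d.serversets/path/to/services ;
/host        => /srv ;
/http/1.1/*  => /host ;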

For example, to “canary test” our users-v2 service, we might send 1% of real production traffic to it:

$ namerctl dtab update web - <<EOF
/srv         => /io.l5d.fs ;
/srv         => /io.l5d.serversets/path/to/services ;
/host        => /srv ;
/http/1.1/*  => /host ;
/host/users  => 1 * /srv/users-v2 & 99 * /srv/users ;
EOF

We can control how much traffic the new version gets by altering the weights, which are relative: a 1-to-3 split sends 1/(1+3) = 25% of traffic to the new version. So, to send 25% of users traffic to users-v2, we update namerd with:

$ namerctl dtab update web - <<EOF
/srv         => /io.l5d.fs ;
/srv         => /io.l5d.serversets/path/to/services ;
/host        => /srv ;
/http/1.1/*  => /host ;
/host/users  => 1 * /srv/users-v2 & 3 * /srv/users ;
EOF

Finally, when we’re happy with the performance of the new service, we can update namerd to prefer the new version as long as it’s there, but to fall back to the original version should it disappear:

$ namerctl dtab update web - <<EOF
/srv         => /io.l5d.fs ;
/srv         => /io.l5d.serversets/path/to/services ;
/host        => /srv ;
/http/1.1/*  => /host ;
/host/users  => /srv/users-v2 | /srv/users ;
EOF

Unlike linkerd, namerd is still a fairly new project. We’re iterating quickly to make sure it’s easy to operate and debug. As it matures, it will give operators a powerful tool to control the services at runtime. It can be integrated with deployment tools to do safe, gradual, managed rollouts (and rollbacks) of new features. It will help teams move features out of a monolith into microservices. And it will improve debuggability of systems. I’ve seen first-hand how powerful RPC tooling can be, and I’m excited to introduce these features to the open source community.

Just like linkerd, namerd is open source under the Apache License v2. We’re excited about releasing it to the community, and we hope you get involved with what we’re building at Buoyant. It’s going to be awesome.

TRY IT FOR YOURSELF

We’ve published the linkerd-examples repository with examples of how to run linkerd & namerd on Kubernetes and Mesos + Marathon. This repository should have everything you need to get up and routing.

If you have any questions along the way, please don’t hesitate to ask us on slack.linkerd.io.

Original post: https://blog.buoyant.io/2016/05/04/real-world-microservices-when-services-stop-playing-well-and-start-getting-real


