High-Performance Load Balancing with NGINX (Part 1 of 3)

This article explores how NGINX load balances HTTP, TCP, and UDP traffic, with a focus on session persistence and intelligent routing strategies for high-performance, highly available internet services.

Original article: High-Performance Load Balancing with NGINX

Today’s internet user experience demands performance and uptime. To achieve this, multiple copies of the same system are run, and the load is distributed over them. As the load increases, another copy of the system can be brought online. This architecture technique is called horizontal scaling. Software-based infrastructure is increasing in popularity because of its flexibility, opening up a vast world of possibilities. Whether the use case is as small as a set of two for high availability or as large as thousands around the globe, there’s a need for a load-balancing solution that is as dynamic as the infrastructure. NGINX fills this need in a number of ways, such as HTTP, TCP, and UDP load balancing, which we cover in this post.

When balancing load, it’s important that the impact to the client is only a positive one. Many modern web architectures employ stateless application tiers, storing state in shared memory or databases. However, this is not the reality for all. Session state is immensely valuable and vast in interactive applications. This state might be stored locally to the application server for a number of reasons; for example, in applications where the data being worked on is so large that the network overhead of fetching it remotely is too expensive. When state is stored locally to an application server, it is extremely important to the user experience that subsequent requests continue to be delivered to the same server. Another facet of the situation is that servers should not be released until the session has finished. Working with stateful applications at scale requires an intelligent load balancer. NGINX Plus offers multiple ways to solve this problem, such as by tracking cookies or routing.
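
As a rough illustration of the cookie-tracking approach, the following is a minimal sketch of the NGINX Plus sticky cookie directive; the cookie name srv_id, the expiry, and the hostnames are placeholders rather than values from the original article:

upstream backend {
    server app1.example.com:80;
    server app2.example.com:80;
    # NGINX Plus sets a cookie identifying the server that handled the first
    # request, so subsequent requests from the same client return to it
    sticky cookie srv_id expires=1h domain=.example.com path=/;
}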

This post covers session persistence as it pertains to load balancing with NGINX and NGINX Plus.

How do you distribute load between two or more HTTP servers?

Use NGINX’s HTTP module to load balance over HTTP servers using the upstream block:

upstream backend {
    server 10.10.12.45:80      weight=1;
    server app.example.com:80  weight=2;
}
server {
    location / {
        proxy_pass http://backend;
    }
}

This configuration balances load across two HTTP servers on port 80. The weight parameter instructs NGINX to pass twice as many connections to the second server; it defaults to 1.

The HTTP upstream module controls the load balancing for HTTP. This module defines a pool of destinations: any combination of Unix sockets, IP addresses, and DNS records. The upstream module also defines how any individual request is assigned to any of the upstream servers.
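
For instance, a single pool can mix all three destination types; the socket path and addresses below are purely illustrative:

upstream backend {
    server unix:/var/run/app.sock;   # Unix socket
    server 10.10.12.45:80;           # IP address
    server app.example.com:80;       # DNS record (FQDN)
}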

Each upstream destination is defined in the upstream pool by the server directive. The server directive is provided a Unix socket, IP address, or an FQDN, along with a number of optional parameters. The optional parameters give more control over the routing of requests. These parameters include the weight of the server in the balancing algorithm; whether the server is in standby mode, available, or unavailable; and how to determine if the server is unavailable. NGINX Plus provides a number of other convenient parameters like connection limits to the server, advanced DNS resolution control, and the ability to slowly ramp up connections to a server after it starts.
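
The sketch below shows a few of these optional parameters on the server directive; the thresholds are arbitrary, the spare.example.com host is hypothetical, and the connection ramp-up behavior (slow_start) mentioned above is an NGINX Plus parameter not shown here:

upstream backend {
    # weight biases the balancing algorithm; max_fails and fail_timeout
    # define when NGINX considers a server unavailable
    server 10.10.12.45:80       weight=3 max_fails=3 fail_timeout=10s;
    server app.example.com:80;
    # backup marks a standby server that only receives traffic when the
    # other servers are unavailable
    server spare.example.com:80 backup;
}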

How do you distribute load between two or more TCP servers?

Use NGINX’s stream module to load balance over TCP servers using the upstream block:

stream {
    upstream mysql_read {
        server read1.example.com:3306  weight=5;
        server read2.example.com:3306;
        server 10.10.12.34:3306        backup;
    }

    server {
        listen 3306;
        proxy_pass mysql_read;
    }
}

The server block in this example instructs NGINX to listen on TCP port 3306 and balance load between two MySQL database read replicas, and lists another as a backup that will be passed traffic if the primaries are down. This configuration is not to be added to the conf.d folder, as that folder is included within an http block; instead, create another folder named stream.conf.d, add a stream block to the nginx.conf file, and include the new folder for stream configurations.
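
A minimal sketch of that layout in nginx.conf, assuming the stock /etc/nginx paths (adjust to your installation):

# /etc/nginx/nginx.conf (abbreviated)
user  nginx;
events { }

http {
    include /etc/nginx/conf.d/*.conf;         # existing HTTP configurations
}

stream {
    include /etc/nginx/stream.conf.d/*.conf;  # new folder for stream configurations
}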

TCP load balancing is defined by the NGINX stream module. The stream module, like the HTTP module, allows you to define upstream pools of servers and configure a listening server. When configuring a server to listen on a given port, you must define the port it’s to listen on, or optionally, an address and a port. From there, a destination must be configured, whether it be a direct reverse proxy to another address or an upstream pool of resources.

The upstream for TCP load balancing is much like the upstream for HTTP, in that it defines upstream resources as servers, configured with Unix socket, IP, or fully qualified domain name (FQDN), as well as server weight, max number of connections, DNS resolvers, and connection ramp-up periods; and if the server is active, down, or in backup mode.
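
Building on the mysql_read pool above, the following sketch adds a few of those parameters; the values are illustrative, and the connection ramp-up parameter (slow_start) is available in NGINX Plus only, so it is omitted here:

stream {
    upstream mysql_read {
        # per-server weight, connection limit, and failure thresholds
        server read1.example.com:3306  weight=5 max_conns=100;
        server read2.example.com:3306  max_fails=2 fail_timeout=30s;
        # backup server only receives traffic when the replicas are down
        server 10.10.12.34:3306        backup;
    }
}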

NGINX Plus offers even more features for TCP load balancing. These advanced features are covered throughout this post series. Health checks for all types of load balancing will be covered in a later post.

How do you distribute load between two or more UDP servers?

Use NGINX’s stream module to load balance over UDP servers using the upstream block, with the udp parameter on the listen directive:

stream {
    upstream ntp {
        server ntp1.example.com:123  weight=2;
        server ntp2.example.com:123;
    }

    server {
        listen 123 udp;
        proxy_pass ntp;
    }
}

This section of configuration balances load between two upstream Network Time Protocol (NTP) servers using the UDP protocol. Specifying UDP load balancing is as simple as using the udp parameter on the listen directive.

If the service you’re load balancing over requires multiple packets to be sent back and forth between client and server, you can specify the reuseport parameter. Examples of these types of services are OpenVPN, Voice over Internet Protocol (VoIP), virtual desktop solutions, and Datagram Transport Layer Security (DTLS). The following is an example of using NGINX to handle OpenVPN connections and proxy them to the OpenVPN service running locally:

stream {
    server {
        listen 1195 udp reuseport;
        proxy_pass 127.0.0.1:1194;
    }
}

You might ask, “Why do I need a load balancer when I can have multiple hosts in a DNS A or SRV record?” The answer is that not only are there alternative balancing algorithms with which we can balance, but we can load balance over the DNS servers themselves. UDP services make up a lot of the services that we depend on in networked systems, such as DNS, NTP, and VoIP. UDP load balancing might be less common to some but just as useful in the world of scale.

You can find UDP load balancing in the stream module, just like TCP, and configure it mostly in the same way. The main difference is that the listen directive specifies that the open socket is for working with datagrams. When working with datagrams, some directives apply that do not in TCP, such as the proxy_responses directive, which tells NGINX how many responses to expect from the upstream server. By default, this is unlimited until the proxy_timeout limit is reached.
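
As an illustration (not taken from the original article), DNS is a natural fit for these directives: each query expects exactly one response, so proxy_responses can be set to 1 and proxy_timeout kept short. The nameserver addresses are placeholders:

stream {
    upstream dns {
        server ns1.example.com:53;
        server ns2.example.com:53;
    }

    server {
        listen 53 udp;
        proxy_pass dns;
        proxy_responses 1;   # a DNS query expects a single response datagram
        proxy_timeout  1s;   # close the session if no response arrives in time
    }
}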

The reuseport parameter instructs NGINX to create an individual listening socket for each worker process. This allows the kernel to distribute incoming connections between worker processes to handle multiple packets being sent between client and server. The reuseport feature works only on Linux kernels 3.9 and higher, DragonFly BSD, and FreeBSD 12 and higher.
