Rate Limiter(1) In-Flight Control and Rate Limiter

This article describes a request-throttling strategy for a SOLR7 service: it uses the RateLimiter from the Guava library to control the request rate, and it also covers another approach that limits the number of messages in flight so the server-side request queue does not get overloaded.

From my understanding, there are two ways to control the sending speed: limit how many requests are in flight at the same time, or limit how many requests are sent per second.

In my case, I was sending requests too fast to SOLR7. I guess there is Jetty on the SOLR7 server side, and it puts the pending requests into a queue; when that queue grows too large, the requests eventually time out.

Control Messages in Flight
import java.util.concurrent.Semaphore;

private final Semaphore inflightMsg;

So before I send a message, I acquire a permit:
this.inflightMsg.acquire();

And after I get the response back from the remote server, I release the permit I acquired:
logger.trace("Worker {} - Request for more SQS messages", workerId);
SqsFutureReceiver<ReceiveMessageRequest, ReceiveMessageResult> processSqsMessageFuture = new SqsFutureReceiver<>();
sqsClient.receiveMessageAsync(receiveMessageRequest, processSqsMessageFuture);
logger.trace("Worker {} - Block thread for response", workerId);
ReceiveMessageResult response = processSqsMessageFuture.get();

logger.trace("Worker {} - Process response in a non-blocking fashion", workerId);
processSqsMessages(sqsClient, solrClient, response).thenAccept(batchMessageProcessedCount -> {
logger.trace("Update message count");
long totalMessageProcessed = messageCount.addAndGet(batchMessageProcessedCount);

// log rates
if (totalMessageProcessed % 100 == 0) {
long currentTime = System.currentTimeMillis();
long duration = (currentTime - STARTTIME) / 1000;
logger.info("Worker {} - Message count {}; {} messages/second", workerId, totalMessageProcessed,
totalMessageProcessed * 10 / duration);
}
}).whenComplete((value, ex)->{
this.inflightMsg.release();
});

Here is how I initialize the in-flight semaphore:
this.inflightMsg = new Semaphore(inflightLimit);
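
Putting the pieces together, here is a minimal, self-contained sketch of the in-flight throttle. The sendToSolr method is a hypothetical stand-in for the real SOLR call; only the acquire/release pattern around it mirrors the code above.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

public class InFlightThrottle {

    // at most inflightLimit requests may be outstanding at any moment
    private final Semaphore inflightMsg;

    public InFlightThrottle(int inflightLimit) {
        this.inflightMsg = new Semaphore(inflightLimit);
    }

    public void submit(String message) throws InterruptedException {
        // block the producer until a permit is free
        inflightMsg.acquire();
        sendToSolr(message).whenComplete((value, ex) -> {
            // always give the permit back, on success or failure
            inflightMsg.release();
        });
    }

    // hypothetical asynchronous call to the remote server (e.g. SOLR7)
    private CompletableFuture<Void> sendToSolr(String message) {
        return CompletableFuture.runAsync(() -> {
            // ... do the real HTTP/SOLR work here ...
        });
    }
}

The point of this approach is that it bounds concurrency (how many requests are outstanding), not the raw request rate; slow responses automatically slow the producer down.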

Permits Per Second

pom.xml configuration
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>23.2-jre</version>
</dependency>

Code Sample
package com.sillycat.jobssolrconsumer.service;

import java.time.ZonedDateTime;

import org.junit.Assert;
import org.junit.Test;

import com.google.common.util.concurrent.RateLimiter;

public class RateLimitServiceTest {

    @Test
    public void dummy() {
        Assert.assertTrue(true);
    }

    @Test
    public void givenLimitedResource() {
        // 100 permits per second
        RateLimiter rateLimiter = RateLimiter.create(100);
        long startTime = ZonedDateTime.now().getSecond();

        // the first 100 permits are available immediately
        rateLimiter.acquire(100);
        System.out.println(ZonedDateTime.now() + " - I am flying like a funny bird 1.");
        long elapsedTimeSeconds = ZonedDateTime.now().getSecond() - startTime;
        Assert.assertTrue(elapsedTimeSeconds <= 1);

        // the next 100 permits force a wait of about one second
        rateLimiter.acquire(100);
        System.out.println(ZonedDateTime.now() + " - I am flying like a funny bird 2.");
        elapsedTimeSeconds = ZonedDateTime.now().getSecond() - startTime;
        Assert.assertTrue(elapsedTimeSeconds >= 1);
    }
}
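
In the real consumer, the same RateLimiter can sit directly in front of the SOLR calls. The sketch below is only an assumption about how it could be wired in: sendToSolr and permitsPerSecond are hypothetical names, and the acquire-before-send pattern is the point.

import com.google.common.util.concurrent.RateLimiter;

public class SolrRequestThrottle {

    // cap outgoing requests at permitsPerSecond
    private final RateLimiter rateLimiter;

    public SolrRequestThrottle(double permitsPerSecond) {
        this.rateLimiter = RateLimiter.create(permitsPerSecond);
    }

    public void send(String message) {
        // blocks just long enough to keep the overall rate at or below the limit
        rateLimiter.acquire();
        sendToSolr(message);
    }

    // hypothetical synchronous call to the SOLR7 service
    private void sendToSolr(String message) {
        // ... build and execute the SOLR request here ...
    }
}

Unlike the Semaphore approach, this caps throughput directly, even when individual responses come back quickly.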


Reference:
http://www.baeldung.com/guava-rate-limiter
https://stackoverflow.com/questions/31883739/throttling-method-calls-using-guava-ratelimiter-class