Tuning RabbitMQ MQTT for large number of connections

With the number of connected devices starting to rise comes the time to ask yourself: will my MQTT message broker be able to handle so many connections? How much will it cost? Can I afford an over-sized and complex cluster (or multiple clusters) to handle thousands of connections? What about millions?

In this article, I’ll talk about how to optimize the memory cost per connection with RabbitMQ and the MQTT plugin, with a comparison of the impact of different TCP listen option configurations.

Method

First, let’s start with a sample RabbitMQ MQTT plugin configuration, as given in the plugin documentation:

[{rabbit,        [{tcp_listeners,    [5672]}]},
 {rabbitmq_mqtt, [{default_user,     <<"guest">>},
                  {default_pass,     <<"guest">>},
                  {allow_anonymous,  true},
                  {vhost,            <<"/">>},
                  {exchange,         <<"amq.topic">>},
                  {subscription_ttl, 1800000},
                  {prefetch,         10},
                  {ssl_listeners,    []},
                  %% Default MQTT with TLS port is 8883
                  %% {ssl_listeners,    [8883]}
                  {tcp_listeners,    [1883]},
                  {tcp_listen_options, [{backlog,   128},
                                        {nodelay,   true}]}]}
].

This sets common options for the plugin, as well as tcp_listen_options, which can be used to configure:

  • TCP buffer sizes
  • the connection backlog
  • Nagle’s algorithm (enable/disable)

These options are extensively explained in the networking guide of the documentation. In order to assess the impact of tuning the TCP buffer sizes, I applied the values recommended in Tuning for a large number of connections:

{tcp_listen_options, [{backlog, 4096},
                      {nodelay, true},
                      {sndbuf,  32768},
                      {recbuf,  32768}]}
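
For clarity, here is the full plugin configuration from above with the tuned tcp_listen_options merged in (same values as the snippet, simply shown in context):

[{rabbit,        [{tcp_listeners,    [5672]}]},
 {rabbitmq_mqtt, [{default_user,     <<"guest">>},
                  {default_pass,     <<"guest">>},
                  {allow_anonymous,  true},
                  {vhost,            <<"/">>},
                  {exchange,         <<"amq.topic">>},
                  {subscription_ttl, 1800000},
                  {prefetch,         10},
                  {ssl_listeners,    []},
                  {tcp_listeners,    [1883]},
                  {tcp_listen_options, [{backlog, 4096},
                                        {nodelay, true},
                                        {sndbuf,  32768},
                                        {recbuf,  32768}]}]}
].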

I also wrote a basic MQTT client using the Eclipse Paho library. Here is an extract:

import org.apache.commons.lang.math.RandomUtils;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttConnectionTest {

  static String brokerIp  = "";
  static String username  = "";
  static String password  = "";
  static int    qos       = 1;
  static int    nbClients = 100;

  public static void testMqttCon() {
    String broker = "tcp://" + brokerIp + ":1883";
    MemoryPersistence persistence = new MemoryPersistence();
    try {
        for (int i = 0; i < nbClients; i++) {
            int id = RandomUtils.nextInt((int) 1e6);
            MqttClient sampleClient = new MqttClient(broker, String.valueOf(id), persistence);
            MqttConnectOptions connOpts = new MqttConnectOptions();
            connOpts.setUserName(username);
            connOpts.setPassword(password.toCharArray());
            connOpts.setCleanSession(true); //remove MQTT session objects, like queue created by RabbitMQ
            sampleClient.connect(connOpts); //just connect, don't subscribe
        }

        System.out.println("done connecting");
        try {
            Thread.sleep(20000);
        } catch (Exception e) { }
    } catch (MqttException me) {
        me.printStackTrace();
    }
  }

  public static void main(String[] args) {
    testMqttCon();
  }
}
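
As a note on dependencies: this snippet only relies on the Eclipse Paho Java client (Maven artifact org.eclipse.paho:org.eclipse.paho.client.mqttv3) and Apache Commons Lang 2.x (commons-lang:commons-lang, for RandomUtils); any reasonably recent versions of those artifacts should work, so the exact versions are left to the reader.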

I ran it with nbClients set to 100, 200, 300 and 400 to simulate that many simultaneous connections. The thread sleep at the end gives time to collect memory statistics before the connections are released (this is not done explicitly here; the connections are simply forced to close when the program exits).
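
If you prefer to take the client count from the command line and close the connections explicitly instead of relying on process exit, here is a small self-contained variant. This is my own sketch, not the author’s original code: the class name is hypothetical, the broker address is assumed to be localhost, and it assumes anonymous connections are allowed, as in the sample configuration above.

import java.util.ArrayList;
import java.util.List;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttCleanDisconnectTest {

  public static void main(String[] args) throws MqttException, InterruptedException {
    // Hypothetical variant of the test above: client count taken from the command line,
    // connections closed explicitly instead of relying on process exit.
    int nbClients = args.length > 0 ? Integer.parseInt(args[0]) : 100;
    String broker = "tcp://localhost:1883"; // assumed broker address
    List<MqttClient> clients = new ArrayList<>();

    for (int i = 0; i < nbClients; i++) {
      MqttClient client = new MqttClient(broker, "test-" + i, new MemoryPersistence());
      MqttConnectOptions opts = new MqttConnectOptions();
      opts.setCleanSession(true); // do not keep MQTT session state on the broker
      client.connect(opts);       // connect only, no subscription
      clients.add(client);
    }

    Thread.sleep(20000);          // leave time to collect memory statistics on the broker

    for (MqttClient client : clients) {
      client.disconnect();        // close each connection cleanly
    }
  }
}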

On the server side, I used the rabbitmqctl tool, bundled with the RabbitMQ distribution, to generate a memory report. To do that, simply execute:

rabbitmqctl status

and look for the memory section in the generated output. It should look like:

{memory,
     [{total,120055600},
      {connection_readers,733560},
      {connection_writers,284696},
      {connection_channels,943928},
      {connection_other,11116520},
      {queue_procs,7787776},
      {queue_slave_procs,0},
      {plugins,3516672},
      {other_proc,13161488},
      {mnesia,5526664},
      {mgmt_db,9294056},
      {msg_index,168648},
      {other_ets,2938104},
      {binary,30207632},
      {code,25017669},
      {atom,1033401},
      {other_system,8324786}]}
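
As a side note, the same breakdown can usually be obtained on its own with rabbitmqctl eval 'rabbit_vm:memory().' — this calls the internal function that produces the memory section above, so treat it as a convenience for manual inspection rather than a stable interface.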

Now, what should be monitored in terms of memory consumption for MQTT connections? The two main parts that will be affected are the connection_* sections (obvious) and the binary section (less obvious). The latter contains various binary data structures managed by the VM, such as the content of messages pending in queues, but also the TCP buffers. Hence it grows with the number of connections.

Results

Tests were conducted on a single-node RabbitMQ 3.6.4 cluster.

Below is a comparison of the memory consumed by binaries with the default TCP options vs. the tuned ones. What matters here isn’t only the absolute memory consumption, but even more how it varies with the number of clients.

Nb clients   Default options (bytes)   Tuned options (bytes)
100          30207632                  19374296
200          52680840                  25723800
300          76198512                  32314848
400          100022808                 38882448
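
From these figures, the per-connection growth of the binary section can be estimated with a simple linear estimate between 100 and 400 clients:

  (100022808 - 30207632) / 300 ≈ 232717 bytes ≈ 227 KiB per connection with the default options
  (38882448 - 19374296) / 300  ≈ 65027 bytes  ≈ 64 KiB per connection with the tuned options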


Binary memory therefore grows roughly 3.6 times (approximately 4 times) slower with these parameters, which is great for increasing the number of connections a single node can accept, at the expense of per-connection throughput (smaller TCP buffers mean less data in flight per connection).

Reposted from: https://my.oschina.net/quanpower/blog/919210
