RocketMQ Source Code Notes and Open Questions

This post records source-code details of the RocketMQ client (consumer and producer), such as heartbeat sending, offset storage, and thread-pool configuration. It also looks at key points of the NameServer's command handling and route management, and at the Broker's startup and persistence strategy, and raises a few open questions, e.g. how new messages are pulled before the current batch has finished consuming, and how the orderly consumption mode works.

Notes on the details:

Client module:

1. DefaultMQPushConsumer.subscribe: as soon as it is called, if the MQClientInstance is not null a heartbeat is sent to the Broker

2. When MessageModel is BROADCASTING, LocalFileOffsetStore is used; when it is CLUSTERING, RemoteBrokerOffsetStore is used (see the sketch below)
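
A minimal sketch of that selection (paraphrased from DefaultMQPushConsumerImpl.start; the field names offsetStore and mQClientFactory are how I recall them, not quoted from the source above):

    switch (this.defaultMQPushConsumer.getMessageModel()) {
        case BROADCASTING:
            // broadcast consumers keep their offsets in a local file
            this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory,
                this.defaultMQPushConsumer.getConsumerGroup());
            break;
        case CLUSTERING:
            // clustering consumers store offsets remotely on the broker
            this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory,
                this.defaultMQPushConsumer.getConsumerGroup());
            break;
        default:
            break;
    }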

3. Offsets are persisted every 5 seconds by default — MQClientInstance.startScheduledTask line 308

4. A heartbeat is sent every 30 seconds by default — MQClientInstance.startScheduledTask line 296

5. The namesrv address is fetched every 2 minutes by default — MQClientInstance.startScheduledTask line 270

6. Route info, i.e. the topic info for consumers and producers, is refreshed every 30 seconds by default (details to be looked at later) — MQClientInstance.startScheduledTask line 285

7. Broker entries no longer used by the consumer are cleaned up every 30 seconds by default — MQClientInstance.startScheduledTask line 296

8. The thread pool is adjusted once a minute by default (the actual adjustment code is commented out) — MQClientInstance.startScheduledTask line 320. A sketch of these scheduled tasks follows.
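
These intervals boil down to a handful of scheduleAtFixedRate calls. A self-contained sketch of the pattern (the printlns stand in for the real MQClientInstance methods, whose names here are from memory):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ClientScheduledTasksSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // every 2 minutes: fetch the namesrv address (fetchNameServerAddr)
            scheduler.scheduleAtFixedRate(() -> System.out.println("fetchNameServerAddr"),
                10_000, 2 * 60 * 1000, TimeUnit.MILLISECONDS);

            // every 30 seconds: refresh topic route info (updateTopicRouteInfoFromNameServer)
            scheduler.scheduleAtFixedRate(() -> System.out.println("updateTopicRouteInfoFromNameServer"),
                10, 30_000, TimeUnit.MILLISECONDS);

            // every 30 seconds: clean unused broker entries and send heartbeats
            scheduler.scheduleAtFixedRate(() -> System.out.println("cleanOfflineBroker + sendHeartbeat"),
                1_000, 30_000, TimeUnit.MILLISECONDS);

            // every 5 seconds: persist consumer offsets (persistAllConsumerOffset)
            scheduler.scheduleAtFixedRate(() -> System.out.println("persistAllConsumerOffset"),
                10_000, 5_000, TimeUnit.MILLISECONDS);
        }
    }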

9. The consumer thread pool starts with 20 core threads and a maximum of 64 — ConsumeMessageConcurrentlyService line 72 (sketch below)
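
In other words the consume pool is a plain ThreadPoolExecutor with those bounds; a small sketch (the variable name consumeExecutor is mine, and the note about the unbounded queue is my own observation):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Sketch of the consume thread pool (values are the defaults noted above).
    // Note: because the work queue is unbounded, the pool in practice never grows
    // beyond the 20 core threads, so the max of 64 rarely comes into play.
    ThreadPoolExecutor consumeExecutor = new ThreadPoolExecutor(
        20,                               // corePoolSize  (consumeThreadMin)
        64,                               // maximumPoolSize (consumeThreadMax)
        1000 * 60, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());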

10. The client records the consume count and consume time per group+topic — ConsumeMessageConcurrentlyService line 459, StatsItem

11. On consume failure, a message is retried 16 times by default before it is given up — DefaultMQPushConsumerImpl line 523

12. On consume failure, the consumer retries internally after a 5-second delay — DefaultMQPushConsumerImpl line 309

        switch (this.defaultMQPushConsumer.getMessageModel()) {
            case BROADCASTING:
                for (int i = ackIndex + 1; i < consumeRequest.getMsgs().size(); i++) {
                    MessageExt msg = consumeRequest.getMsgs().get(i);
                    log.warn("BROADCASTING, the message consume failed, drop it, {}", msg.toString());
                }
                break;
            case CLUSTERING:
                List<MessageExt> msgBackFailed = new ArrayList<MessageExt>(consumeRequest.getMsgs().size());
                for (int i = ackIndex + 1; i < consumeRequest.getMsgs().size(); i++) {
                    MessageExt msg = consumeRequest.getMsgs().get(i);
                    boolean result = this.sendMessageBack(msg, context);
                    if (!result) {
                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
                        msgBackFailed.add(msg);
                    }
                }

                if (!msgBackFailed.isEmpty()) {
                    consumeRequest.getMsgs().removeAll(msgBackFailed);

                    this.submitConsumeRequestLater(msgBackFailed, consumeRequest.getProcessQueue(), consumeRequest.getMessageQueue());
                }
                break;
            default:
                break;
        }

13. On consume failure, the client sends the message back to the Broker — request codes CONSUMER_SEND_MSG_BACK and PULL_MESSAGE, using the headers below (a simplified sketch follows them)

ConsumerSendMsgBackRequestHeader

PullMessageRequestHeader
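
Roughly how that send-back request gets built (a simplified sketch of the MQClientAPIImpl.consumerSendMessageBack flow; the setter names are from memory, and msg, consumerGroup, delayLevel are placeholder variables):

    // Sketch only, not the verbatim source.
    ConsumerSendMsgBackRequestHeader requestHeader = new ConsumerSendMsgBackRequestHeader();
    requestHeader.setGroup(consumerGroup);               // the consumer group that failed to consume
    requestHeader.setOriginTopic(msg.getTopic());
    requestHeader.setOriginMsgId(msg.getMsgId());
    requestHeader.setOffset(msg.getCommitLogOffset());   // locate the message in the commit log
    requestHeader.setDelayLevel(delayLevel);             // 0 lets the broker pick the next retry delay

    RemotingCommand request =
        RemotingCommand.createRequestCommand(RequestCode.CONSUMER_SEND_MSG_BACK, requestHeader);
    // the command is then sent synchronously to the broker that owns the queue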

Question:

The consumer starts pulling the next batch of messages before the current batch has finished consuming; how does the orderly consumption mode deal with this?

Because consumption is handed off to a thread pool and processed asynchronously: once the pulled messages have been submitted to the consume pool, a new pull request is immediately submitted to the pull service to wait for the next pull (sketch below).
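
In pseudo-form, the pull callback does roughly this (conceptual sketch only; the method names are paraphrased and the flow-control checks are omitted):

    public void onSuccess(PullResult pullResult) {
        // 1. hand the pulled messages to the consume thread pool (asynchronous)
        consumeMessageService.submitConsumeRequest(
            pullResult.getMsgFoundList(), processQueue, messageQueue, true);

        // 2. immediately re-queue the pull request, without waiting for consumption to finish
        defaultMQPushConsumerImpl.executePullRequestImmediately(pullRequest);
    }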

 

Namesrv module:

10. The Namesrv module parses its command line with Commons CLI (CommandLine), and the values of the Options' long argument names are written back into nameSrvConfig and nettyServerConfig via reflection (sketch below)
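
The write-back idea, as a self-contained reimplementation sketch of what I believe MixAll.properties2Object does (only String-typed setters are handled here; the real code also converts int/long/boolean/double parameters):

    import java.lang.reflect.Method;
    import java.util.Properties;

    public final class Properties2ObjectSketch {
        // For every property "foo", look for a setter "setFoo" on the target bean and invoke it.
        public static void properties2Object(Properties props, Object target) {
            for (String name : props.stringPropertyNames()) {
                String value = props.getProperty(name);
                String setter = "set" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
                for (Method m : target.getClass().getMethods()) {
                    if (m.getName().equals(setter) && m.getParameterTypes().length == 1
                            && m.getParameterTypes()[0] == String.class) {
                        try {
                            m.invoke(target, value);
                        } catch (Exception ignored) {
                        }
                    }
                }
            }
        }
    }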

11. The Namesrv module uses DefaultRequestProcessor — NamesrvController line 151

12. Inactive Brokers are scanned for every 10 seconds — NamesrvController line 93

13. If a BrokerLiveInfo's last update time is more than 2 minutes in the past, the Broker is judged dead and removed — RouteInfoManager line 418 (sketch below)
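
A sketch of the expiry check (simplified from RouteInfoManager.scanNotActiveBroker as I read it; brokerLiveTable maps broker address to BrokerLiveInfo, and the real code also closes the channel and cleans the routing tables):

    private final static long BROKER_CHANNEL_EXPIRED_TIME = 1000 * 60 * 2;

    public void scanNotActiveBroker() {
        Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
        while (it.hasNext()) {
            Entry<String, BrokerLiveInfo> next = it.next();
            long last = next.getValue().getLastUpdateTimestamp();
            if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
                it.remove();   // drop the live-info entry; the broker is considered dead
                // the real code also closes the channel and removes the broker
                // from the routing tables (onChannelDestroy)
            }
        }
    }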

14. TlsSystemConfig holds all the TLS JVM system-property keys

15. The results of outstanding requests are checked once per second — NettyRemotingService line 223

16. Registering Broker info — RouteInfoManager line 102

17. How MQClientInstance.clientId is generated:

    private String clientIP = RemotingUtil.getLocalAddress();
    private String instanceName = System.getProperty("rocketmq.client.name", "DEFAULT");

    public String buildMQClientId() {
        StringBuilder sb = new StringBuilder();
        sb.append(this.getClientIP());

        sb.append("@");
        sb.append(this.getInstanceName());
        if (!UtilAll.isBlank(this.unitName)) {
            sb.append("@");
            sb.append(this.unitName);
        }

        return sb.toString();
    }

 

1. If the MessageListener implementation ends up returning null, the status defaults to RECONSUME_LATER

            if (null == status) {
                log.warn("consumeMessage return null, Group: {} Msgs: {} MQ: {}",
                    ConsumeMessageConcurrentlyService.this.consumerGroup,
                    msgs,
                    messageQueue);
                status = ConsumeConcurrentlyStatus.RECONSUME_LATER;
            }

2. When the client has only producers (no consumer data in the heartbeat), heartbeats are sent only to the MASTER Broker node; when consumers are present they go to every broker address (see the consumerEmpty check below)

MQClientInstance line 513


    private void sendHeartbeatToAllBroker() {
        final HeartbeatData heartbeatData = this.prepareHeartbeatData();
        final boolean producerEmpty = heartbeatData.getProducerDataSet().isEmpty();
        final boolean consumerEmpty = heartbeatData.getConsumerDataSet().isEmpty();
        if (producerEmpty && consumerEmpty) {
            log.warn("sending heartbeat, but no consumer and no producer");
            return;
        }

        if (!this.brokerAddrTable.isEmpty()) {
            long times = this.sendHeartbeatTimesTotal.getAndIncrement();
            Iterator<Entry<String, HashMap<Long, String>>> it = this.brokerAddrTable.entrySet().iterator();
            while (it.hasNext()) {
                Entry<String, HashMap<Long, String>> entry = it.next();
                String brokerName = entry.getKey();
                HashMap<Long, String> oneTable = entry.getValue();
                if (oneTable != null) {
                    for (Map.Entry<Long, String> entry1 : oneTable.entrySet()) {
                        Long id = entry1.getKey();
                        String addr = entry1.getValue();
                        if (addr != null) {
                            if (consumerEmpty) {
                                //not a MASTER node: skip it
                                if (id != MixAll.MASTER_ID)
                                    continue;
                            }

                            try {
                                int version = this.mQClientAPIImpl.sendHearbeat(addr, heartbeatData, 3000);
                                if (!this.brokerVersionTable.containsKey(brokerName)) {
                                    this.brokerVersionTable.put(brokerName, new HashMap<String, Integer>(4));
                                }
                                this.brokerVersionTable.get(brokerName).put(addr, version);
                                if (times % 20 == 0) {
                                    log.info("send heart beat to broker[{} {} {}] success", brokerName, id, addr);
                                    log.info(heartbeatData.toString());
                                }
                            } catch (Exception e) {
                                if (this.isBrokerInNameServer(addr)) {
                                    log.info("send heart beat to broker[{} {} {}] failed", brokerName, id, addr);
                                } else {
                                    log.info("send heart beat to broker[{} {} {}] exception, because the broker not up, forget it", brokerName,
                                        id, addr);
                                }
                            }
                        }
                    }
                }
            }
        }
    }

3. Uploading the filter class source? Not understood yet.

MQClientInstance line 454

    public void sendHeartbeatToAllBrokerWithLock() {
        if (this.lockHeartbeat.tryLock()) {
            try {
                this.sendHeartbeatToAllBroker();
                // ??? not understood yet (see note 3 above)
                this.uploadFilterClassSource();
            } catch (final Exception e) {
                log.error("sendHeartbeatToAllBroker exception", e);
            } finally {
                this.lockHeartbeat.unlock();
            }
        } else {
            log.warn("lock heartBeat, but failed.");
        }
    }

4. A %RETRY% + groupName topic gets subscribed to?

    private void copySubscription() throws MQClientException {
        try {
            ...

            if (null == this.messageListenerInner) {
                this.messageListenerInner = this.defaultMQPushConsumer.getMessageListener();
            }

            switch (this.defaultMQPushConsumer.getMessageModel()) {
                case BROADCASTING:
                    break;
                case CLUSTERING:
                    final String retryTopic = MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup());
                    SubscriptionData subscriptionData = FilterAPI.buildSubscriptionData(this.defaultMQPushConsumer.getConsumerGroup(),
                        retryTopic, SubscriptionData.SUB_ALL);
                    this.rebalanceImpl.getSubscriptionInner().put(retryTopic, subscriptionData);
                    break;
                default:
                    break;
            }
        } catch (Exception e) {
            throw new MQClientException("subscription exception", e);
        }
    }

5. Topic creation is attempted up to 5 times by default

    public void createTopic(String key, String newTopic, int queueNum, int topicSysFlag) throws MQClientException {
        try {
           ...
                        for (int i = 0; i < 5; i++) {
                            try {
                                this.mQClientFactory.getMQClientAPIImpl().createTopic(addr, key, topicConfig, timeoutMillis);
                                createOK = true;
                                createOKAtLeastOnce = true;
                                break;
                            } catch (Exception e) {
                                if (4 == i) {
                                    exception = new MQClientException("create topic to broker exception", e);
                                }
                            }
                        }

                        if (createOK) {
                            orderTopicString.append(brokerData.getBrokerName());
                            orderTopicString.append(":");
                            orderTopicString.append(queueNum);
                            orderTopicString.append(";");
                        }
                 ...
        } catch (Exception e) {
            throw new MQClientException("create new topic failed", e);
        }
    }

6. The nameserver address is fetched dynamically every two minutes

MixAll  line 82

    public static String getWSAddr() {
        String wsDomainName = System.getProperty("rocketmq.namesrv.domain", "jmenv.tbsite.net");
        String wsDomainSubgroup = System.getProperty("rocketmq.namesrv.domain.subgroup", "nsaddr");
        String wsAddr = "http://" + wsDomainName + ":8080/rocketmq/" + wsDomainSubgroup;
        if(wsDomainName.indexOf(":") > 0) {
            wsAddr = "http://" + wsDomainName + "/rocketmq/" + wsDomainSubgroup;
        }

        return wsAddr;
    }

7. In the RebalancePushImpl implementation, when rebalancing finds a MessageQueue that is no longer needed, why persist this MQ's offset first and then remove it from the local offsetTable? Wouldn't removing it from the local offsetTable directly be enough? Not understood.

public boolean removeUnnecessaryMessageQueue(MessageQueue mq, ProcessQueue pq) {
        this.defaultMQPushConsumerImpl.getOffsetStore().persist(mq);
        this.defaultMQPushConsumerImpl.getOffsetStore().removeOffset(mq);
        ...
        return true;
    }

8. When a producer starts, it also starts the scheduled tasks above and likewise starts the pull services? MQClientInstance line 223

    public void start() throws MQClientException {

        synchronized (this) {
            switch (this.serviceState) {
                case CREATE_JUST:
                    this.serviceState = ServiceState.START_FAILED;
                    // If not specified,looking address from name server
                    if (null == this.clientConfig.getNamesrvAddr()) {
                        this.mQClientAPIImpl.fetchNameServerAddr();
                    }
                    // Start request-response channel
                    this.mQClientAPIImpl.start();
                    // Start various schedule tasks
                    this.startScheduledTask();
                    // Start pull service
                    this.pullMessageService.start();
                    // Start rebalance service
                    this.rebalanceService.start();
                    // Start push service
                    this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
                    log.info("the client factory [{}] start OK", this.clientId);
                    this.serviceState = ServiceState.RUNNING;
                    break;
                case RUNNING:
                    break;
                case SHUTDOWN_ALREADY:
                    break;
                case START_FAILED:
                    throw new MQClientException("The Factory object[" + this.getClientId() + "] has been created before, and failed.", null);
                default:
                    break;
            }
        }
    }

 

Broker module:

17. What is the TopicConfigSerializeWrapper class for, and what is the parsed topicConfig info used for?

18. When the Broker starts, it reads local files to initialize topics, queue offsets, subscription groups, consumer filter info, etc. — BrokerController line 220

19. BrokerController.initialize sets up the thread pools and a number of scheduled tasks

20. consumerOffset is persisted every 5 seconds by default

21. Broker stats are recorded once a day

22. consumerFilter is persisted every 10 seconds by default

23. Every 3 minutes it checks whether the broker needs to be protected, but I have not found where the protection parameter is set??? It is probably written back into BrokerConfig via the property-reflection mechanism

24. The thread-pool queues are printed every second. A sketch of these scheduled tasks follows.
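
A simplified sketch of how these jobs are wired up in BrokerController.initialize (intervals are the defaults noted above; the names consumerOffsetManager.persist, consumerFilterManager.persist, protectBroker and printWaterMark are how I recall them, not quoted from the notes above):

    // Sketch only, not the verbatim source.
    this.scheduledExecutorService.scheduleAtFixedRate(
        () -> this.consumerOffsetManager.persist(),        // persist consumer offsets
        1000 * 10, 1000 * 5, TimeUnit.MILLISECONDS);        // default flushConsumerOffsetInterval = 5s

    this.scheduledExecutorService.scheduleAtFixedRate(
        () -> this.consumerFilterManager.persist(),        // persist consumer filter data
        1000 * 10, 1000 * 10, TimeUnit.MILLISECONDS);       // every 10s

    this.scheduledExecutorService.scheduleAtFixedRate(
        () -> this.protectBroker(),                         // disable "slow" consumer groups if enabled
        3, 3, TimeUnit.MINUTES);

    this.scheduledExecutorService.scheduleAtFixedRate(
        () -> this.printWaterMark(),                        // log the thread-pool queue sizes
        10, 1, TimeUnit.SECONDS);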

25. The namesrv address can be specified via a JVM startup parameter, or fetched dynamically — BrokerController line 370; probably also written back into BrokerConfig via the property-reflection mechanism

    if (this.brokerConfig.getNamesrvAddr() != null) {
        this.brokerOuterAPI.updateNameServerAddressList(this.brokerConfig.getNamesrvAddr());
        log.info("Set user specified name server address: {}", this.brokerConfig.getNamesrvAddr());
    } else if (this.brokerConfig.isFetchNamesrvAddrByAddressServer()) {
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    BrokerController.this.brokerOuterAPI.fetchNameServerAddr();
                } catch (Throwable e) {
                    log.error("ScheduledTask fetchNameServerAddr exception", e);
                }
            }
        }, 1000 * 10, 1000 * 60 * 2, TimeUnit.MILLISECONDS);
    }

26. A Slave node syncs data from the master once per minute: topic config and consumer offsets (sketch below)
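
What that per-minute sync covers, roughly (sketch of SlaveSynchronize.syncAll as I recall it; the scheduling itself lives in BrokerController):

    // Sketch: what the slave pulls from the master once per minute (simplified).
    public void syncAll() {
        this.syncTopicConfig();              // topic configuration
        this.syncConsumerOffset();           // consumer offsets
        this.syncDelayOffset();              // delay (scheduled message) offsets
        this.syncSubscriptionGroupConfig();  // subscription group configuration
    }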

27. The Broker's persist-to-file flow: first create an xxx.tmp file and write the new data into it; then read the existing xxx file (if any); if there is old data, write it to an xxx.bak backup; delete xxx; finally rename xxx.tmp to xxx

MixAll.string2File line 145

    public static void string2File(final String str, final String fileName) throws IOException {

        String tmpFile = fileName + ".tmp";
        string2FileNotSafe(str, tmpFile);

        String bakFile = fileName + ".bak";
        String prevContent = file2String(fileName);
        if (prevContent != null) {
            string2FileNotSafe(prevContent, bakFile);
        }

        File file = new File(fileName);
        file.delete();

        file = new File(tmpFile);
        file.renameTo(new File(fileName));
    }

28. MessageId generation: a ByteBuffer is filled with the broker host address bytes (4 bytes), the port as an int (4 bytes), and then the wroteOffset (the offset already written to the file plus what is still sitting in the buffer) as a long. Each of the 16 bytes is masked with 0xFF to drop the sign, then the high nibble (unsigned right shift by 4) and the low nibble (& 0x0F) each pick a char from HEX_ARRAY; the resulting hex string is the messageId (sketch below).

    // In CommitLog: where the messageId is built from the store host and wroteOffset
    String msgId = MessageDecoder.createMessageId(this.msgIdMemory, msgInner.getStoreHostBytes(hostHolder), wroteOffset);

    // MessageExt: serialize the store host (IP + port) into the buffer
    public ByteBuffer getStoreHostBytes(ByteBuffer byteBuffer) {
        return socketAddress2ByteBuffer(this.storeHost, byteBuffer);
    }

    public static ByteBuffer socketAddress2ByteBuffer(final SocketAddress socketAddress, final ByteBuffer byteBuffer) {
        InetSocketAddress inetSocketAddress = (InetSocketAddress) socketAddress;
        byteBuffer.put(inetSocketAddress.getAddress().getAddress(), 0, 4);  // 4-byte IP
        byteBuffer.putInt(inetSocketAddress.getPort());                     // 4-byte port
        byteBuffer.flip();
        return byteBuffer;
    }

    // UtilAll: the hex alphabet used for the final encoding
    final static char[] HEX_ARRAY = "0123456789ABCDEF".toCharArray();
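
The hex step is essentially UtilAll.bytes2string; a sketch of the conversion (the method body is reconstructed from memory, so treat it as illustrative):

    static String bytes2string(byte[] src) {
        // 4-byte IP + 4-byte port + 8-byte wroteOffset = 16 bytes -> 32 hex chars
        char[] hexChars = new char[src.length * 2];
        for (int j = 0; j < src.length; j++) {
            int v = src[j] & 0xFF;                      // strip the sign bit: unsigned byte value
            hexChars[j * 2] = HEX_ARRAY[v >>> 4];       // high nibble
            hexChars[j * 2 + 1] = HEX_ARRAY[v & 0x0F];  // low nibble
        }
        return new String(hexChars);
    }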

29. The Broker stores queue offsets keyed by topic-queueId

            keyBuilder.setLength(0);
            keyBuilder.append(msgInner.getTopic());
            keyBuilder.append('-');
            keyBuilder.append(msgInner.getQueueId());
            String key = keyBuilder.toString();
            Long queueOffset = CommitLog.this.topicQueueTable.get(key);

30. The message body can be at most 4 MB, which does not match the code comment below

    // The maximum size of a single log file,default is 512K
    private int maxMessageSize = 1024 * 1024 * 4;

31. MappedFileQueue keeps its MappedFiles in a CopyOnWriteArrayList

32. How is the target MappedFile found? MappedFileQueue line 462

How is the offset set?

    int index = (int) ((offset / this.mappedFileSize) - (firstMappedFile.getFileFromOffset() / this.mappedFileSize));
    MappedFile targetFile = null;
    try {
        targetFile = this.mappedFiles.get(index);
    } catch (Exception ignored) {
    }

33. Synchronous flushing uses a read/write split: new flush requests go into requestsWrite while the flush thread drains requestsRead, and the two lists are swapped, so the two sides never contend on the same list

    class GroupCommitService extends FlushCommitLogService {
        private volatile List<GroupCommitRequest> requestsWrite = new ArrayList<GroupCommitRequest>();
        private volatile List<GroupCommitRequest> requestsRead = new ArrayList<GroupCommitRequest>();


        private void swapRequests() {
            List<GroupCommitRequest> tmp = this.requestsWrite;
            this.requestsWrite = this.requestsRead;
            this.requestsRead = tmp;
        }

34. There are no concrete queueIds maintained across the broker clusters; the consumer builds the MessageQueues (and their queueIds) itself.

The Namesrv only maintains each broker's QueueData.

MQClientInstance line 208

    public static Set<MessageQueue> topicRouteData2TopicSubscribeInfo(final String topic, final TopicRouteData route) {
        Set<MessageQueue> mqList = new HashSet<MessageQueue>();
        List<QueueData> qds = route.getQueueDatas();
        for (QueueData qd : qds) {
            if (PermName.isReadable(qd.getPerm())) {
                for (int i = 0; i < qd.getReadQueueNums(); i++) {
                    MessageQueue mq = new MessageQueue(topic, qd.getBrokerName(), i);
                    mqList.add(mq);
                }
            }
        }

        return mqList;
    }

35. One HAConnection corresponds to one slave node, and a write-socket service plus a read-socket service handle the SYNC_MASTER mechanism

36. The storage module uses a CopyOnWriteArrayList

37. The mappedFiles list is rolled over (old files drop off the front), but the offsets the files correspond to keep increasing

    public MappedFile findMappedFileByOffset(final long offset, final boolean returnFirstOnNotFound) {
        try {
            MappedFile firstMappedFile = this.getFirstMappedFile();
            MappedFile lastMappedFile = this.getLastMappedFile();
            if (firstMappedFile != null && lastMappedFile != null) {
                    ...
                    // From this we can infer that mappedFiles roll over while the offset keeps increasing,
                    // so the index is (offset / mappedFileSize) minus (first mapped file's fromOffset / mappedFileSize)
                    int index = (int) ((offset / this.mappedFileSize) - (firstMappedFile.getFileFromOffset() / this.mappedFileSize));
                    MappedFile targetFile = null;
                    try {
                        targetFile = this.mappedFiles.get(index);
                    } catch (Exception ignored) {
                    }

                    if (targetFile != null && offset >= targetFile.getFileFromOffset()
                        && offset < targetFile.getFileFromOffset() + this.mappedFileSize) {
                        return targetFile;
                    }

                    for (MappedFile tmpMappedFile : this.mappedFiles) {
                        if (offset >= tmpMappedFile.getFileFromOffset()
                            && offset < tmpMappedFile.getFileFromOffset() + this.mappedFileSize) {
                            return tmpMappedFile;
                        }
                    }
                }

                if (returnFirstOnNotFound) {
                    return firstMappedFile;
                }
            }
        } catch (Exception e) {
            log.error("findMappedFileByOffset Exception", e);
        }

        return null;
    }

38. Messages are kept for 72 hours by default — MessageStoreConfig line 80

39. Expired files are checked for every 10 seconds by default — DefaultMessageStore line 1173

        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                DefaultMessageStore.this.cleanFilesPeriodically();
            }
        }, 1000 * 60, this.messageStoreConfig.getCleanResourceInterval(), TimeUnit.MILLISECONDS);

40. The Broker exposes an SPI entry point for the MappedFile implementation — AllocateMappedFileService line 173

                MappedFile mappedFile;
                if (messageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
                    try {
                        mappedFile = ServiceLoader.load(MappedFile.class).iterator().next();
                        mappedFile.init(req.getFilePath(), req.getFileSize(), messageStore.getTransientStorePool());
                    } catch (RuntimeException e) {
                        log.warn("Use default implementation.");
                        mappedFile = new MappedFile(req.getFilePath(), req.getFileSize(), messageStore.getTransientStorePool());
                    }
                } else {
                    mappedFile = new MappedFile(req.getFilePath(), req.getFileSize());
                }

41. The message-fetching logic is a bit hard to follow; why is a ByteBuffer selected twice?

ConsumeQueue line 485


    public SelectMappedBufferResult getIndexBuffer(final long startIndex) {
        int mappedFileSize = this.mappedFileSize;
        long offset = startIndex * CQ_STORE_UNIT_SIZE;
        if (offset >= this.getMinLogicOffset()) {
            MappedFile mappedFile = this.mappedFileQueue.findMappedFileByOffset(offset);
            if (mappedFile != null) {
                SelectMappedBufferResult result = mappedFile.selectMappedBuffer((int) (offset % mappedFileSize));
                return result;
            }
        }
        return null;
    }

    public SelectMappedBufferResult getMessage(final long offset, final int size) {
        int mappedFileSize = this.defaultMessageStore.getMessageStoreConfig().getMapedFileSizeCommitLog();
        MappedFile mappedFile = this.mappedFileQueue.findMappedFileByOffset(offset, offset == 0);
        if (mappedFile != null) {
            int pos = (int) (offset % mappedFileSize);
            return mappedFile.selectMappedBuffer(pos, size);
        }
        return null;
    }

CommitLog line 813

DefaultMessageStore line 485
