Integrating Spring Boot 2.x with RocketMQ 4.x
I. Introduction
II. CentOS 7.x base environment setup
1. Install JDK 11.0.10
Upload the archive to the Linux server, then install and configure it.
Extract: tar -zxvf jdk-11.0.10_linux-x64_bin.tar.gz
Rename: mv xxx jdk11
vim /etc/profile
Add:
export JAVA_HOME=/usr/local/software/jdk11
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
Run source /etc/profile to apply the configuration immediately.
For JDK 1.8, use the following instead:
export JAVA_HOME=/usr/local/software/server/jdk1.8.0_161
export JRE_HOME=${JAVA_HOME}/jre
export PATH=$JAVA_HOME/bin:${JRE_HOME}/bin:$PATH
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_HOME PATH CLASSPATH
2. Install Maven 3.6.3
Download: https://maven.apache.org/download.cgi
Choose apache-maven-3.6.3-bin.tar.gz
Extract: tar -zxvf apache-maven-3.6.3-bin.tar.gz
Rename: mv apache-maven-3.6.3 maven
vim /etc/profile
export PATH=/usr/local/software/maven/bin:$PATH
Apply immediately: source /etc/profile
mvn -version
# Update the mirror source (in Maven's settings.xml)
<!-- Aliyun repository -->
<mirror>
<id>alimaven</id>
<mirrorOf>central</mirrorOf>
<name>aliyun maven</name>
<url>http://maven.aliyun.com/nexus/content/repositories/central/</url>
</mirror>
3. Install and deploy RocketMQ 4.8.0
- Upload the source package to the Linux server and configure it:
yum install unzip
unzip rocketmq-all-4.8.0-source-release.zip
mv rocketmq-all-4.8.0-source-release rocketmq
cd rocketmq
mvn -Prelease-all -DskipTests clean install -U
cd /usr/local/software/server/rocketmq/rocketmq-all-4.8.0-source-release/distribution/target/rocketmq-4.8.0/rocketmq-4.8.0/bin
Modify runserver.sh, runbroker.sh, and tools.sh (no changes are needed for JDK 8):
vim runserver.sh
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===========================================================================================
# Java Environment Setting
#===========================================================================================
error_exit ()
{
echo "ERROR: $1 !!"
exit 1
}
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)!"
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
#export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}
export CLASSPATH=${BASE_DIR}/lib/rocketmq-namesrv-4.8.0.jar:${BASE_DIR}/lib/*:${BASE_DIR}/conf:${CLASSPATH}:${CLASS_PATH}
#===========================================================================================
# JVM Configuration
#===========================================================================================
# The RAMDisk initializing size in MB on Darwin OS for gc-log
DIR_SIZE_IN_MB=600
choose_gc_log_directory()
{
case "`uname`" in
Darwin)
if [ ! -d "/Volumes/RAMDisk" ]; then
# create ram disk on Darwin systems as gc-log directory
DEV=`hdiutil attach -nomount ram://$((2 * 1024 * DIR_SIZE_IN_MB))` > /dev/null
diskutil eraseVolume HFS+ RAMDisk ${DEV} > /dev/null
echo "Create RAMDisk /Volumes/RAMDisk for gc logging on Darwin OS."
fi
GC_LOG_DIR="/Volumes/RAMDisk"
;;
*)
# check if /dev/shm exists on other systems
if [ -d "/dev/shm" ]; then
GC_LOG_DIR="/dev/shm"
else
GC_LOG_DIR=${BASE_DIR}
fi
;;
esac
}
choose_gc_options()
{
# Example of JAVA_MAJOR_VERSION value : '1', '9', '10', '11', ...
# '1' means releases before Java 9
JAVA_MAJOR_VERSION=$("$JAVA" -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
if [[ "$JAVA_MAJOR_VERSION" -lt "9" ]] ; then
JAVA_OPT="${JAVA_OPT} -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8 -XX:-UseParNewGC"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:${GC_LOG_DIR}/rmq_srv_gc_%p_%t.log -XX:+PrintGCDetails"
JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"
else
JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0"
JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=${GC_LOG_DIR}/rmq_srv_gc_%p_%t.log:time,tags:filecount=5,filesize=30M"
fi
}
choose_gc_log_directory
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
choose_gc_options
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages"
#JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib:${JAVA_HOME}/lib/ext"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
$JAVA ${JAVA_OPT} $@
vim runbroker.sh
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===========================================================================================
# Java Environment Setting
#===========================================================================================
error_exit ()
{
echo "ERROR: $1 !!"
exit 1
}
#[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
#[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
#[ ! -e "$JAVA_HOME/bin/java" ] && error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)!"
export JAVA_HOME="/usr/local/software/server/jdk11/jdk-11.0.10"
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
#export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}
export CLASSPATH=${BASE_DIR}/lib/rocketmq-broker-4.8.0.jar:${BASE_DIR}/lib/*:${BASE_DIR}/conf:${CLASSPATH}
#===========================================================================================
# JVM Configuration
#===========================================================================================
# The RAMDisk initializing size in MB on Darwin OS for gc-log
DIR_SIZE_IN_MB=600
choose_gc_log_directory()
{
case "`uname`" in
Darwin)
if [ ! -d "/Volumes/RAMDisk" ]; then
# create ram disk on Darwin systems as gc-log directory
DEV=`hdiutil attach -nomount ram://$((2 * 1024 * DIR_SIZE_IN_MB))` > /dev/null
diskutil eraseVolume HFS+ RAMDisk ${DEV} > /dev/null
echo "Create RAMDisk /Volumes/RAMDisk for gc logging on Darwin OS."
fi
GC_LOG_DIR="/Volumes/RAMDisk"
;;
*)
# check if /dev/shm exists on other systems
if [ -d "/dev/shm" ]; then
GC_LOG_DIR="/dev/shm"
else
GC_LOG_DIR=${BASE_DIR}
fi
;;
esac
}
choose_gc_log_directory
JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
JAVA_OPT="${JAVA_OPT} --add-exports java.base/jdk.internal.ref=ALL-UNNAMED"
JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0"
#JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:${GC_LOG_DIR}/rmq_broker_gc_%p_%t.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy"
JAVA_OPT="${JAVA_OPT} -Xlog:gc*:file=${GC_LOG_DIR}/rmq_broker_gc_%p_%t.log:time,tags:filecount=5,filesize=30M"
#JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:+AlwaysPreTouch"
#JAVA_OPT="${JAVA_OPT} -XX:MaxDirectMemorySize=15g"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages -XX:-UseBiasedLocking"
#JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib:${JAVA_HOME}/lib/ext"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
numactl --interleave=all pwd > /dev/null 2>&1
if [ $? -eq 0 ]
then
if [ -z "$RMQ_NUMA_NODE" ] ; then
numactl --interleave=all $JAVA ${JAVA_OPT} $@
else
numactl --cpunodebind=$RMQ_NUMA_NODE --membind=$RMQ_NUMA_NODE $JAVA ${JAVA_OPT} $@
fi
else
$JAVA ${JAVA_OPT} $@
fi
vim tools.sh
#!/bin/sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#===========================================================================================
# Java Environment Setting
#===========================================================================================
error_exit ()
{
echo "ERROR: $1 !!"
exit 1
}
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)!"
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
#export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}
export CLASSPATH=${BASE_DIR}/lib/rocketmq-tools-4.8.0.jar:${BASE_DIR}/lib/*:${BASE_DIR}/conf:${CLASSPATH}:${CLASS_PATH}
#===========================================================================================
# JVM Configuration
#===========================================================================================
JAVA_OPT="${JAVA_OPT} -server -Xms1g -Xmx1g -Xmn256m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m"
#JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${BASE_DIR}/lib:${JAVA_HOME}/jre/lib/ext:${JAVA_HOME}/lib/ext"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
$JAVA ${JAVA_OPT} "$@"
- Start the NameServer:
# start the nameserver
sh bin/mqnamesrv
#nohup sh bin/mqnamesrv &
- Start the Broker:
Before starting, modify broker.conf (required when the machine has multiple network interfaces):
cd /usr/local/software/server/rocketmq/rocketmq-all-4.8.0-source-release/distribution/target/rocketmq-4.8.0/rocketmq-4.8.0/conf
Then start the broker:
sh bin/mqbroker -n localhost:9876
#nohup sh bin/mqbroker -n localhost:9876 -c ./conf/broker.conf &
- Check the processes with jps
- Test:
# set the name server address
export NAMESRV_ADDR=localhost:9876
# send test messages
sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer
SendResult [sendStatus=SEND_OK, msgId= ...
# consume the messages
sh bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer
Deployment complete!
Note: if startup fails, the likely cause is insufficient memory; adjust the heap settings in runserver.sh and runbroker.sh.
1. Caused by: org.apache.rocketmq.remoting.exception.RemotingConnectException: connect to <172.17.42.1:10911> failed
2. com.alibaba.rocketmq.client.exception.MQClientException: Send [1] times, still failed, cost [1647]ms, Topic: TopicTest1, BrokersSent: [broker-a, null, null]
3. org.apache.rocketmq.client.exception.MQClientException: Send [3] times, still failed, cost [497]ms, Topic: TopicTest, BrokersSent: [Book-Air.local, MacBook-Air.local, MacBook-Air.local]
Fix for errors 1-3: these are multi-NIC problems
1. On the producer, call producer.setVipChannelEnabled(false);
2. Edit the RocketMQ configuration file broker.conf (replace the IPs below with your own):
namesrvAddr = 192.168.0.101:9876
brokerIP1 = 192.168.0.101
4. DESC: service not available now, maybe disk full, CL:
Fix: modify the startup script runbroker.sh and add the following line:
JAVA_OPT="${JAVA_OPT} -Drocketmq.broker.diskSpaceWarningLevelRatio=0.98"
(This raises the disk-protection threshold to 98%, so producer messages are rejected only once disk usage reaches 98%.)
Troubleshooting references:
https://blog.youkuaiyun.com/sqzhao/article/details/54834761
https://blog.youkuaiyun.com/mayifan0/article/details/67633729
https://blog.youkuaiyun.com/a906423355/article/details/78192828
4. Install and deploy the RocketMQ console
Download: https://github.com/apache/rocketmq-externals
docker pull apacherocketmq/rocketmq-console:2.0.0
docker run -e "JAVA_OPTS=-Drocketmq.namesrv.addr=39.97.169.72:9876;39.97.228.27:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false" -p 8080:8080 -itd apacherocketmq/rocketmq-console:2.0.0
unzip rocketmq-externals-master.zip
Deployment complete!
III. Spring Boot 2.x + RocketMQ 4.8.0 integration test
1. Add the dependency
<dependency>
<groupId>org.apache.rocketmq</groupId>
<artifactId>rocketmq-client</artifactId>
<version>4.8.0</version>
</dependency>
2. Create the configuration class
package com.example.demorocketmq.jms;
public class JmsConfig {
public static final String NAME_SERVER = "x.xxx.xxx.xxx:9876";
public static final String TOPIC = "pay_topic";
}
3. Create the producer component
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.springframework.stereotype.Component;
@Component
public class PayProducer {
//producer group name
private String producerGroup = "pay_producer_group";
private DefaultMQProducer producer;
public PayProducer(){
producer = new DefaultMQProducer(producerGroup);
producer.setNamesrvAddr(JmsConfig.NAME_SERVER);
start();
}
public DefaultMQProducer getProducer(){
return this.producer;
}
/**
* Must be called once before the producer is used; it may only be initialized once
*/
private void start(){
try {
this.producer.start();
} catch (MQClientException e) {
e.printStackTrace();
}
}
/**
* Usually closed when the application context shuts down, e.g. via a context listener or a @PreDestroy hook (see the sketch after this class)
*/
public void shutdown(){
this.producer.shutdown();
}
}
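As a concrete illustration of the shutdown comment above, here is a minimal sketch that closes the producer when the Spring context shuts down. The ProducerLifecycle class name is hypothetical and not part of the original code; the same pattern applies to the consumer components.
package com.example.demorocketmq.jms;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;
@Component
public class ProducerLifecycle {
    private final PayProducer payProducer;
    public ProducerLifecycle(PayProducer payProducer) {
        this.payProducer = payProducer;
    }
    //called when the Spring context is closing; releases the client's threads and connections
    @PreDestroy
    public void close() {
        payProducer.shutdown();
    }
}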
4. Create the consumer
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.Message;
import org.springframework.stereotype.Component;
import java.io.UnsupportedEncodingException;
@Component
public class PayConsumer {
private DefaultMQPushConsumer consumer;
private String consumerGroup = "pay_consumer_group";
public PayConsumer() throws MQClientException {
consumer = new DefaultMQPushConsumer(consumerGroup);
consumer.setNamesrvAddr(JmsConfig.NAME_SERVER);
//start consuming from the latest offset
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
//subscribe to the topic
consumer.subscribe(JmsConfig.TOPIC, "*");
//register the listener
consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
try {
Message msg = msgs.get(0);
System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), new String(msgs.get(0).getBody()));
String topic = msg.getTopic();
String body = new String(msg.getBody(), "utf-8");
String tags = msg.getTags();
String keys = msg.getKeys();
System.out.println("topic=" + topic + ", tags=" + tags + ", keys=" + keys + ", msg=" + body);
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
return ConsumeConcurrentlyStatus.RECONSUME_LATER;
}
});
consumer.start();
System.out.println("consumer start ...");
}
}
5. Create a test endpoint
package com.example.demorocketmq.controller;
import com.example.demorocketmq.jms.JmsConfig;
import com.example.demorocketmq.jms.PayProducer;
import org.apache.rocketmq.client.exception.MQBrokerException;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.remoting.exception.RemotingException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.HashMap;
@RestController
public class PayController {
@Autowired
private PayProducer producer;
@RequestMapping("/api/v1/pay_cb")
public Object callback(String text){
Message message = new Message(JmsConfig.TOPIC,"taga",("paytest"+text).getBytes());
SendResult sendResult = null;
try {
sendResult = producer.getProducer().send(message);
} catch (MQClientException e) {
e.printStackTrace();
} catch (RemotingException e) {
e.printStackTrace();
} catch (MQBrokerException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(sendResult);
return new HashMap<>();
}
}
IV. Fully distributed deployment (JDK 1.8)
1. One master, one slave (asynchronous replication, asynchronous flush)
Open ports 10909, 10911, and 10912.
Building on the RocketMQ 4.8 deployment from Section II, modify the configuration on each node:
cd /usr/local/software/server/rocketmq/rocketmq-all-4.8.0-source-release/distribution/target/rocketmq-4.8.0/rocketmq-4.8.0/conf
Edit the files under the 2m-2s-async directory:
- Master node (broker-a.properties)
namesrvAddr=39.97.169.72:9876;39.97.228.27:9876
brokerClusterName=custompang
brokerName=broker-a
#0 means this is the master node
brokerId=0
brokerIP1=外网地址
brokerIP2=外网地址
deleteWhen=04
fileReservedTime=48
#the master replicates to the slave asynchronously
brokerRole=ASYNC_MASTER
#asynchronous disk flush
flushDiskType=ASYNC_FLUSH
Start:
nohup sh bin/mqnamesrv &
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-a.properties &
- Slave node (broker-a-s.properties)
namesrvAddr=192.168.159.129:9876;192.168.159.130:9876
brokerClusterName=custompang
brokerName=broker-a
brokerId=1
brokerIP1=外网地址
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
Start:
nohup sh bin/mqnamesrv &
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-a-s.properties &
- Start the console
docker run -e "JAVA_OPTS=-Drocketmq.namesrv.addr=192.168.159.129:9876;192.168.159.130:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false" -p 8080:8080 -itd apacherocketmq/rocketmq-console:2.0.0
unzip rocketmq-externals-master.zip
2. Two masters, two slaves (synchronous replication, asynchronous flush)
- Machine list
server1 ssh root@192.168.159.133 deploys NameServer and broker-a
server2 ssh root@192.168.159.130 deploys NameServer and broker-a-s
server3 ssh root@192.168.159.131 deploys broker-b
server4 ssh root@192.168.159.132 deploys broker-b-s
- Adjust the JVM settings in runserver.sh and runbroker.sh
vim runserver.sh
JAVA_OPT="${JAVA_OPT} -server -Xms528m -Xmx528m -Xmn256m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
vim runbroker.sh
JAVA_OPT="${JAVA_OPT} -server -Xms528m -Xmx528m -Xmn256m"
Start the NameServer on both NameServer machines:
nohup sh bin/mqnamesrv &
- Configure the four broker nodes
broker-a master node
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-a.properties &
namesrvAddr=39.97.169.72:9876;39.97.228.27:9876
brokerClusterName=custompang
brokerName=broker-a
brokerId=0
brokerIP1=39.97.169.72
brokerIP2=39.97.169.72
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
defaultTopicQueueNums=4
#whether topics may be created automatically; recommended on for development, off in production
autoCreateTopicEnable=true
#whether subscription groups may be created automatically; recommended on for development, off in production
autoCreateSubscriptionGroup=false
#storage path; configure an absolute path as needed, defaults to a directory under the user's home
#storePathRootDir=
#storePathCommitLog
-----------------------------------------------------------------
broker-a slave node
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-a-s.properties &
namesrvAddr=39.97.169.72:9876;39.97.228.27:9876
brokerClusterName=custompang
brokerName=broker-a
brokerId=1
brokerIP1=39.97.228.27
brokerIP2=39.97.228.27
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
defaultTopicQueueNums=4
#whether topics may be created automatically; recommended on for development, off in production
autoCreateTopicEnable=true
#whether subscription groups may be created automatically; recommended on for development, off in production
autoCreateSubscriptionGroup=false
#storage path; configure an absolute path as needed, defaults to a directory under the user's home
#storePathRootDir=
#storePathCommitLog
-----------------------------------------------------------------
broker-b master node
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-b.properties &
namesrvAddr=39.97.169.72:9876;39.97.228.27:9876
brokerClusterName=custompang
brokerName=broker-b
brokerId=0
brokerIP1=39.97.188.141
brokerIP2=39.97.188.141
deleteWhen=04
fileReservedTime=48
brokerRole=SYNC_MASTER
flushDiskType=ASYNC_FLUSH
defaultTopicQueueNums=4
#whether topics may be created automatically; recommended on for development, off in production
autoCreateTopicEnable=true
#whether subscription groups may be created automatically; recommended on for development, off in production
autoCreateSubscriptionGroup=false
#storage path; configure an absolute path as needed, defaults to a directory under the user's home
#storePathRootDir=
#storePathCommitLog
-----------------------------------------------------------------
broker-b slave node
nohup sh bin/mqbroker -c conf/2m-2s-async/broker-b-s.properties &
namesrvAddr=39.97.169.72:9876;39.97.228.27:9876
brokerClusterName=custompang
brokerName=broker-b
brokerId=1
brokerIP1=39.97.161.185
brokerIP2=39.97.161.185
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH
defaultTopicQueueNums=4
#whether topics may be created automatically; recommended on for development, off in production
autoCreateTopicEnable=true
#whether subscription groups may be created automatically; recommended on for development, off in production
autoCreateSubscriptionGroup=false
#storage path; configure an absolute path as needed, defaults to a directory under the user's home
#storePathRootDir=
#storePathCommitLog
- Disable the firewall
CentOS 6.5:
service iptables stop
CentOS 7:
systemctl stop firewalld
systemctl stop firewalld.service
- Open the required ports in the cloud security group
3. Common errors
- Common error 1
org.apache.rocketmq.remoting.exception.RemotingTooMuchRequestException:
sendDefaultImpl call timeout
Cause: Alibaba Cloud instances have multiple NICs, and RocketMQ picks an IP from the current NICs, so with several NICs it may pick the wrong one, for example on a machine with both a public IP and a private IP. Configure broker.conf to specify the public IP explicitly, then restart the broker.
Add to conf/broker.conf (property brokerIP1 = the public IP of the broker host):
brokerIP1=120.76.62.13
Start command: nohup sh bin/mqbroker -n localhost:9876 -c ./conf/broker.conf &
- Common error 2
MQClientException: No route info of this topic, TopicTest1
Cause: the broker forbids automatic topic creation and the topic was not created manually, or the broker cannot reach the NameServer.
Fix:
Check the configuration with sh bin/mqbroker -m
autoCreateTopicEnable=true enables automatic topic creation
On CentOS 7, stop the firewall: systemctl stop firewalld
- Common error 3
The console shows no data and reports a connection error on port 10909
Cause: RocketMQ enables the VIP channel by default; the VIP channel port is 10911 - 2 = 10909
Fix: open port 10909 in the Alibaba Cloud security group
Other errors:
https://blog.youkuaiyun.com/qq_14853889/article/details/81053145
https://blog.youkuaiyun.com/wangmx1993328/article/details/81588217#%E5%BC%82%E5%B8%B8%E8%AF%B4%E6%98%8E
https://www.jianshu.com/p/bfd6d849f156
https://blog.youkuaiyun.com/wangmx1993328/article/details/81588217
V. Spring Boot 2.x + RocketMQ 4.8.0 in practice
1. Retry mechanism
- Producer
public PayProducer(){
producer = new DefaultMQProducer(producerGroup);
//the default retry count on send failure is 2; set it to 3 here
producer.setRetryTimesWhenSendFailed(3);
producer.setNamesrvAddr(JmsConfig.NAME_SERVER);
start();
}
- Consumer
consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
MessageExt msg = msgs.get(0);
//the default max reconsume count is 16; broadcasting mode does not support retries
int times = msg.getReconsumeTimes();
System.out.println("retry count=" + times);
try {
System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), new String(msgs.get(0).getBody()));
String topic = msg.getTopic();
String body = new String(msg.getBody(), "utf-8");
String tags = msg.getTags();
//a unique business key, e.g. an order number
String keys = msg.getKeys();
System.out.println("topic=" + topic + ", tags=" + tags + ", keys=" + keys + ", msg=" + body);
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
} catch (UnsupportedEncodingException e) {
System.out.println("消费异常");
//TODO 如果重试2此不成功,则记录数据库,发短信人工介入,并记录到日志
if(times>=2){
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}
e.printStackTrace();
return ConsumeConcurrentlyStatus.RECONSUME_LATER;
}
});
2. Asynchronous send (e.g. issuing coupons; messages must not be lost)
- Producer:
//send the message asynchronously
producer.getProducer().send(message, new SendCallback() {
@Override
public void onSuccess(SendResult sendResult) {
System.out.printf("发送结果=%s,body=%s",sendResult.getSendStatus(),sendResult.toString());
}
@Override
public void onException(Throwable throwable) {
//compensation logic can go here; decide per business case whether to retry
}
});
3. One-way send (e.g. log collection; messages may be lost)
- Producer:
payProducer.getProducer().sendOneway(message);
4. Delayed messages
- Use cases:
1. Reminding someone of a birthday
2. Completing an action within a time window (e.g. an order not paid before the timeout)
The default delay-level table is as follows (18 levels, numbered 1-18):
"1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";
Message message = new Message(JmsConfig.TOPIC,"taga","some-order-id",("paytest"+text).getBytes());
//set the delay level; level 2 means the message is consumed 5s after it is sent
message.setDelayTimeLevel(2);
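For completeness, a minimal sketch of handing the delayed message above to the existing producer; it assumes the controller setup from section III (the producer field and the text parameter) and that the enclosing method declares or handles the checked send exceptions:
//send as usual; the consumer only sees the message about 5 seconds later
SendResult sendResult = producer.getProducer().send(message);
System.out.println(sendResult);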
5. MessageQueueSelector (ordered messages, load distribution)
- Scenario: route a given type of business message to a specific queue
- Both synchronous and asynchronous sends are supported
- Producer:
//0,1,2,3 are the queue indexes; this sends the message to queue 0
//synchronous send (ordered messages)
SendResult sendResult = producer.getProducer().send(message, new MessageQueueSelector() {
@Override
public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
int queueNum = Integer.parseInt(arg.toString());
return mqs.get(queueNum);
}
},0);
//asynchronous send (ordering is not guaranteed)
producer.getProducer().send(message, new MessageQueueSelector() {
@Override
public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
int queueNum = Integer.parseInt(arg.toString());
return mqs.get(queueNum);
}
}, 0, new SendCallback() {
@Override
public void onSuccess(SendResult sendResult) {
}
@Override
public void onException(Throwable throwable) {
}
});
6. Ordered send/consume
Principle: lock each queue so the same queue is never consumed concurrently
Scenario: messages with the same order id are sent to the same queue
- Order object
package com.example.demorocketmq.domain;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
public class ProductOrder implements Serializable {
/**
* order id
*/
private long orderId;
/**
* order type
*/
private String type;
public long getOrderId() {
return orderId;
}
public void setOrderId(long orderId) {
this.orderId = orderId;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
public ProductOrder(){}
public ProductOrder(long orderId,String type){
this.orderId = orderId;
this.type = type;
}
/**
* test helper
* @return
*/
public static List<ProductOrder> getOrderList(){
List<ProductOrder> list = new ArrayList<>();
list.add(new ProductOrder(111L,"create order"));
list.add(new ProductOrder(222L,"create order"));
list.add(new ProductOrder(111L,"pay order"));
list.add(new ProductOrder(222L,"pay order"));
list.add(new ProductOrder(111L,"complete order"));
list.add(new ProductOrder(333L,"create order"));
list.add(new ProductOrder(222L,"complete order"));
list.add(new ProductOrder(333L,"pay order"));
list.add(new ProductOrder(333L,"complete order"));
return list;
}
@Override
public String toString() {
return "ProductOrder{" +
"orderId=" + orderId +
", type='" + type + '\'' +
'}';
}
}
- Producer
@RequestMapping("/api/v1/pay_cb")
public Object callback(String text) throws Exception{
List<ProductOrder> list = ProductOrder.getOrderList();
for(int i=0;i<list.size();i++){
ProductOrder order = list.get(i);
Message message = new Message(JmsConfig.TOPIC,"",order.getOrderId()+"",order.toString().getBytes());
//messages with the same order id are routed to the same queue
SendResult sendResult = producer.getProducer().send(message, new MessageQueueSelector() {
@Override
public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
Long id = (Long) arg;
long index = id % mqs.size();
return mqs.get((int)index);
}
},order.getOrderId());
System.out.printf("发送结果=%s\n",sendResult);
}
return new HashMap<>();
- 消费者
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.client.consumer.listener.MessageListenerOrderly;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.MessageExt;
import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
import org.springframework.stereotype.Component;
import java.io.UnsupportedEncodingException;
@Component
public class PayOrderlyConsumer {
private DefaultMQPushConsumer consumer;
private String consumerGroup = "pay_orderly_consumer_group";
public PayOrderlyConsumer() throws MQClientException {
consumer = new DefaultMQPushConsumer(consumerGroup);
consumer.setNamesrvAddr(JmsConfig.NAME_SERVER);
consumer.setMessageModel(MessageModel.CLUSTERING);
//start consuming from the latest offset
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
//subscribe to the topic, listening to all tags
consumer.subscribe(JmsConfig.TOPIC, "*");
//register the listener
consumer.registerMessageListener((MessageListenerOrderly) (msgs, context) -> {
MessageExt msg = msgs.get(0);
//the default max reconsume count is 16; broadcasting mode does not support retries
int times = msg.getReconsumeTimes();
System.out.println("retry count="+times);
try {
System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), new String(msgs.get(0).getBody()));
return ConsumeOrderlyStatus.SUCCESS;
} catch (Exception e) {
if(times>=2){
//TODO write to a log
return ConsumeOrderlyStatus.SUCCESS;
}
e.printStackTrace();
return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
}
});
consumer.start();
System.out.println("consumer start ...");
}
}
7. Tag filtering, a second-level filter (the topic is the first level)
- Overview:
A Message carries only one tag; the tag is a second-level classification.
Example for orders: electronics orders, food orders.
Filtering can happen on the broker side or on the consumer side.
Broker-side filtering avoids transferring useless messages over the network, but adds load on the broker.
Consumer-side filtering can implement any business rule, but transfers many useless messages.
Typical options: subscribe to *, a specific tag, a || expression, SQL92, or a FilterServer.
Tag filtering is fast and the logic is simple.
SQL92 filtering is somewhat slower but supports complex conditions (only usable with PushConsumer) via MessageSelector.bySql.
Syntax: >, <, =, IS NULL, AND, OR, NOT, etc.; essentially most of what is valid after a SQL WHERE clause.
Note: consumers in the same group must have identical subscriptions, otherwise consumption becomes erratic and messages may even be lost.
Consistent subscription means: a subscription is the combination of topic and tag; within the same group name, the subscribed topic and tag must be identical.
Broker-side tag filtering iterates over the consume queue and skips entries whose stored tag hashcode does not match the hashcode of the subscribed tag; matching entries are delivered to the consumer. The consume queue stores only the hashcode, so this comparison is hashcode-based; after receiving a filtered message the consumer matches again, this time against the real message tag rather than the hashcode.
The consume queue stores fixed-length hashcodes to save space.
Filtering never reads the commit log, so it is efficient.
Hash collisions are resolved by the consumer-side re-check.
If you need multiple tags, a SQL expression works, but it is not recommended; prefer single responsibility with separate topics/queues.
- Tag mode example
consumer.subscribe(JmsConfig.TOPIC, "order_pay || order_create");
- SQL92 mode example:
- 1. Producer:
Message message = new Message(JmsConfig.TOPIC,"",order.getOrderId()+"",order.toString().getBytes());
message.putUserProperty("amount","5");
- 2. Consumer:
//syntax similar to what follows a SQL WHERE clause
consumer.subscribe(JmsConfig.TOPIC, MessageSelector.bySql(" amount > 5 "));
Common error: The broker does not support consumer to filter message by SQL92
Fix: add the following to broker.conf
enablePropertyFilter=true
Note: the broker must be restarted after the change.
Master node config: vim conf/2m-2s-async/broker-a.properties
Slave node config: vim conf/2m-2s-async/broker-a-s.properties
8. Distributed transactions
RocketMQ transactional message states:
COMMIT_MESSAGE: commit the transactional message; consumers may consume it
ROLLBACK_MESSAGE: roll back the transactional message; the broker deletes it and consumers never see it
UNKNOW: the broker needs to check back with the producer to confirm the message state
- Producer configuration
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;
import org.springframework.stereotype.Component;
import java.util.concurrent.*;
@Component
public class TransactionProducer {
//producer group name
private String producerGroup = "trac_producer_group";
//transaction listener
private TransactionListener transactionListener = new TransactionListenerImpl();
private TransactionMQProducer transactionProducer = null;
//create a custom thread pool
//@param corePoolSize    number of core threads kept in the pool
//@param maximumPoolSize maximum number of threads allowed in the pool
//@param keepActiveTime  how long an idle non-core thread waits for new tasks
//@param timeunit        time unit of keepActiveTime
//@param blockingqueue   the task queue
private ExecutorService executorService = new ThreadPoolExecutor(2, 5, 100, TimeUnit.SECONDS,
new ArrayBlockingQueue<Runnable>(2000), new ThreadFactory() {
@Override
public Thread newThread(Runnable r) {
Thread thread = new Thread(r);
thread.setName("client-transaction-msg-check-thread");
return thread;
}
});
public TransactionProducer(){
transactionProducer = new TransactionMQProducer(producerGroup);
//the default retry count on send failure is 2; set it to 3 here
transactionProducer.setRetryTimesWhenSendFailed(3);
transactionProducer.setNamesrvAddr(JmsConfig.NAME_SERVER);
transactionProducer.setTransactionListener(transactionListener);
transactionProducer.setExecutorService(executorService);
start();
}
public TransactionMQProducer getProducer(){
return this.transactionProducer;
}
/**
* Must be called once before the producer is used; it may only be initialized once
*/
private void start(){
try {
this.transactionProducer.start();
} catch (MQClientException e) {
e.printStackTrace();
}
}
/**
* Usually closed when the application context shuts down, e.g. via a context listener
*/
public void shutdown(){
this.transactionProducer.shutdown();
}
}
class TransactionListenerImpl implements TransactionListener{
/**
* Executes the local transaction once the half message has been sent successfully
* @param message
* @param arg == otherParam
* @return
*/
@Override
public LocalTransactionState executeLocalTransaction(Message message, Object arg) {
System.out.println("===========executeLocalTransaction===========");
String body = new String(message.getBody());
String key = message.getKeys();
String transactionId = message.getTransactionId();
System.out.println("transactionId="+transactionId+", key="+key+", body="+body);
//TODO execute the local transaction against the database - begin
//insert/update/delete/query
//TODO execute the local transaction against the database - end
int status = Integer.parseInt(arg.toString());
//second-phase confirmation: commit so the consumer can consume the message
if(status == 1){
return LocalTransactionState.COMMIT_MESSAGE;
}
//roll back: the broker deletes the half message
if(status == 2){
return LocalTransactionState.ROLLBACK_MESSAGE;
}
//the broker will check back later (while the state is undetermined)
if(status == 3){
return LocalTransactionState.UNKNOW;
}
return null;
}
/**
* Called when the half message gets no response; the broker sends a check-back request to query the transaction state
* The check-back must answer commit or rollback; reconsumeTimes does not apply here
* @param message
* @return
*/
@Override
public LocalTransactionState checkLocalTransaction(MessageExt message) {
System.out.println("===========checkLocalTransaction===========");
//check back to confirm whether the local transaction succeeded
String body = new String(message.getBody());
//use the key to query whether the local transaction has completed
String key = message.getKeys();
String transactionId = message.getTransactionId();
System.out.println("transactionId="+transactionId+", key="+key+", body="+body);
return LocalTransactionState.COMMIT_MESSAGE;
}
}
- Consumer configuration
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.client.exception.MQClientException;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.message.MessageExt;
import org.springframework.stereotype.Component;
import java.io.UnsupportedEncodingException;
@Component
public class PayConsumer {
private DefaultMQPushConsumer consumer;
private String consumerGroup = "trac_consumer_group";
public PayConsumer() throws MQClientException {
consumer = new DefaultMQPushConsumer(consumerGroup);
consumer.setNamesrvAddr(JmsConfig.NAME_SERVER);
//broadcasting mode (clustering is the default)
//consumer.setMessageModel(MessageModel.BROADCASTING);
//start consuming from the latest offset
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
//subscribe to the topic
consumer.subscribe(JmsConfig.TOPIC, "*");
//register the listener
consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
MessageExt msg = msgs.get(0);
//the default max reconsume count is 16; broadcasting mode does not support retries
int times = msg.getReconsumeTimes();
System.out.println("retry count="+times);
try {
System.out.printf("%s Receive New Messages: %s %n", Thread.currentThread().getName(), new String(msgs.get(0).getBody()));
String topic = msg.getTopic();
String body = new String(msg.getBody(), "utf-8");
String tags = msg.getTags();
String keys = msg.getKeys();
System.out.println("topic=" + topic + ", tags=" + tags + ", keys=" + keys + ", msg=" + body);
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
} catch (UnsupportedEncodingException e) {
System.out.println("消费异常");
//TODO 如果重试2此不成功,则记录数据库,发短信人工介入,并记录到日志
if(times>=2){
return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
}
e.printStackTrace();
return ConsumeConcurrentlyStatus.RECONSUME_LATER;
}
});
consumer.start();
System.out.println("consumer start ...");
}
}
Controller code:
import com.example.demorocketmq.jms.JmsConfig;
import com.example.demorocketmq.jms.TransactionProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.util.HashMap;
@RestController
public class PayController {
@Autowired
private TransactionProducer transactionProducer;
@RequestMapping("/api/v1/pay_cb")
public Object callback(String text,String tag,String otherParam) throws Exception{
Message message = new Message(JmsConfig.TOPIC,tag,tag+"_key",text.getBytes());
SendResult sendResult = transactionProducer.getProducer().
sendMessageInTransaction(message,otherParam);
return new HashMap<>();
}
}
VI. Common consumer configuration
Key configuration options of a RocketMQ 4.x consumer (a sketch applying them follows the list):
consumeFromWhere (ineffective in some cases; see https://blog.csdn.net/a417930422/article/details/83585397):
CONSUME_FROM_FIRST_OFFSET: on first start, consume from the head of the queue, i.e. replay all historical messages still stored on the broker; subsequent restarts resume from the last consumed offset
CONSUME_FROM_LAST_OFFSET: the default; on first start, consume from the tail of the queue, skipping historical messages; subsequent restarts resume from the last consumed offset
CONSUME_FROM_TIMESTAMP: start consuming from a point in time, half an hour ago by default; subsequent restarts resume from the last consumed offset
allocateMessageQueueStrategy: the load-balancing strategy that assigns queues to consumers; the default is AllocateMessageQueueAveragely (even, modulo-based distribution)
offsetStore: where consumption progress is stored; two implementations exist, LocalFileOffsetStore (default in broadcasting mode) and RemoteBrokerOffsetStore (default in clustering mode)
consumeThreadMin: minimum size of the consumer thread pool
consumeThreadMax: maximum size of the consumer thread pool
pullBatchSize: how many messages the consumer pulls from the broker per request (optional)
consumeMessageBatchMaxSize: how many messages are handed to the listener per invocation; only meaningful for batch consumption (optional)
messageModel: the consumption model, CLUSTERING (the default) or BROADCASTING
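A minimal sketch applying these options to a DefaultMQPushConsumer; the group name and the numeric values are illustrative assumptions rather than recommendations:
package com.example.demorocketmq.jms;
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.client.consumer.rebalance.AllocateMessageQueueAveragely;
import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
import org.apache.rocketmq.common.protocol.heartbeat.MessageModel;
public class ConsumerConfigDemo {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("demo_consumer_group");
        consumer.setNamesrvAddr(JmsConfig.NAME_SERVER);
        //skip historical messages on first start, then resume from the stored offset
        consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
        //queue allocation strategy (this is already the default)
        consumer.setAllocateMessageQueueStrategy(new AllocateMessageQueueAveragely());
        //consumer thread pool bounds
        consumer.setConsumeThreadMin(5);
        consumer.setConsumeThreadMax(20);
        //messages pulled from the broker per request
        consumer.setPullBatchSize(32);
        //messages handed to the listener per callback (batch consumption only)
        consumer.setConsumeMessageBatchMaxSize(1);
        //clustering (the default) vs. broadcasting
        consumer.setMessageModel(MessageModel.CLUSTERING);
        consumer.subscribe(JmsConfig.TOPIC, "*");
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) ->
                ConsumeConcurrentlyStatus.CONSUME_SUCCESS);
        consumer.start();
    }
}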
VII. PushConsumer vs. PullConsumer consumption models
- A PushConsumer is essentially long polling (RocketMQ's default strategy)
Pros and cons of Push vs. Pull:
Push:
Low latency, but it adds load on the server side; consumers differ in capacity, and pushing too fast causes many problems on the consumer side
Pull:
The consumer fetches messages from the server, so the initiative and control stay with the consumer; but the polling interval is hard to tune: too short wastes resources on empty requests, too long delays message processing
Long polling:
When the client requests the broker, the broker holds the connection for a while, 15s by default; if a message arrives within that window it is returned immediately, otherwise an empty response is returned after 15s and the client requests again. The initiative stays with the consumer: even with a large backlog the broker never pushes unsolicited. Drawback: the broker has to hold consumer requests, which consumes resources, so the number of client connections must be kept under control
PushConsumer details (long-polling based, RocketMQ's default strategy):
Messages and offsets are handled automatically after receipt, and load is rebalanced automatically when a new consumer joins
Long polling can be enabled on the broker with longPollingEnable=true
Consumer-side code path: DefaultMQPushConsumerImpl -> pullMessage -> PullCallback
Server-side code: the broker.longpolling package
Although it is called push, the code relies heavily on pull: long polling is used to achieve a push-like effect, combining the control of pull with the low latency of push
Graceful shutdown: mainly releasing resources and saving offsets; calling shutdown() is enough, typically from @PostConstruct/@PreDestroy hooks
A PullConsumer must maintain offsets itself (see the official example; a sketch follows below)
Official example: org.apache.rocketmq.example.simple.PullConsumer
Fetch the MessageQueues and iterate over them
The client maintains the offsets and must store them itself: in memory, on disk, in a database, etc.
Handle the four pull statuses: FOUND, NO_NEW_MSG, OFFSET_ILLEGAL, NO_MATCHED_MSG
Highly flexible and controllable, but coding complexity is higher
Graceful shutdown: release resources and save offsets; the program must persist the offsets itself, especially in error-handling paths
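A minimal PullConsumer sketch along the lines of the official example referenced above; the group name and the in-memory offset map are illustrative assumptions, and a real application would persist the offsets as just described:
package com.example.demorocketmq.jms;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.rocketmq.client.consumer.DefaultMQPullConsumer;
import org.apache.rocketmq.client.consumer.PullResult;
import org.apache.rocketmq.common.message.MessageQueue;
public class PullConsumerDemo {
    //in-memory offset store; a real application would persist this (file, database, redis, ...)
    private static final Map<MessageQueue, Long> OFFSET_TABLE = new HashMap<>();
    public static void main(String[] args) throws Exception {
        DefaultMQPullConsumer consumer = new DefaultMQPullConsumer("pull_consumer_group");
        consumer.setNamesrvAddr(JmsConfig.NAME_SERVER);
        consumer.start();
        //1. fetch the message queues of the topic and iterate over them
        Set<MessageQueue> mqs = consumer.fetchSubscribeMessageQueues(JmsConfig.TOPIC);
        for (MessageQueue mq : mqs) {
            long offset = OFFSET_TABLE.getOrDefault(mq, Math.max(consumer.fetchConsumeOffset(mq, true), 0));
            //2. pull up to 32 messages starting at the stored offset
            PullResult result = consumer.pullBlockIfNotFound(mq, null, offset, 32);
            //3. handle the four pull statuses
            switch (result.getPullStatus()) {
                case FOUND:
                    result.getMsgFoundList().forEach(msg -> System.out.println(new String(msg.getBody())));
                    break;
                case NO_NEW_MSG:
                case NO_MATCHED_MSG:
                case OFFSET_ILLEGAL:
                default:
                    break;
            }
            //4. the client is responsible for saving the next offset itself
            OFFSET_TABLE.put(mq, result.getNextBeginOffset());
        }
        consumer.shutdown();
    }
}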
VIII. Summary
- Summary: the official documentation says the producer distributes messages to queues round-robin by default, but that describes the queue set as a whole. From testing:
- With one message per request, the distribution is not round-robin (see figure)
- Across multiple requests it is not round-robin either (see figure)
- In other words, within a single request the producer round-robins across the queues; across multiple requests this is not guaranteed
- If both the master and the slave of the broker-a shard are down, you can only send to and consume from broker-b until one of the broker-a nodes recovers
- If a broker's master is down, producers cannot send to that broker, but consumers can still consume (from the slave)
- If a broker's slave is down, sending and consuming are unaffected
For strictly ordered consumption, the only way is to lock the queue, as covered in the hands-on section above.