Camel Route Management Solutions (continuously updated)
Managing Kafka route forwarding with Camel
Background: collected data is sent to a shared Kafka cluster; we consume that Kafka and forward the records to our own cluster's Kafka…
In a typical production setup, connecting to someone else's Kafka requires authentication credentials; here we use a single cluster for the demonstration.
Environment preparation
- Install a ZooKeeper cluster
- Install Kafka
Create the shared topics
Change into the Kafka installation directory.
1.1 Create the topics
bin/kafka-topics.sh --create --zookeeper node01:2181,node02:2181,node03:2181 --replication-factor 1 --partitions 1 --topic commonAPI --config retention.bytes=104857600
bin/kafka-topics.sh --create --zookeeper node01:2181,node02:2181,node03:2181 --replication-factor 1 --partitions 1 --topic camelproducer --config retention.bytes=104857600
1.2 Route forwarding through Camel's Kafka consumer and producer integration
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>it.luke</groupId>
    <artifactId>Camel_Pro</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.activemq</groupId>
            <artifactId>activemq-all</artifactId>
            <version>5.15.9</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
            <version>2.25.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-quartz2</artifactId>
            <version>2.16.2</version>
        </dependency>
        <!--kafka-->
        <!--<dependency>-->
        <!--<groupId>org.apache.camel</groupId>-->
        <!--<artifactId>camel-kafka</artifactId>-->
        <!--<version>2.25.1</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-kafka</artifactId>
            <version>2.20.0</version>
        </dependency>
    </dependencies>
</project>
There is a pitfall here: the Kafka I installed is version 0.10, which is fairly old and does not support record headers.
Newer versions of the camel-kafka component add headers on their own,
so the push fails with the exception:
java.lang.IllegalArgumentException: Magic v1 does not support record headers
camel-kafka 2.20.0 was chosen here and pushes messages successfully; upgrading the Kafka broker version is another way to solve the problem.
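If upgrading either side is not an option, one possible workaround (a sketch of an assumption, not verified against a 0.10 broker) is to strip the exchange headers before the Kafka endpoint, so camel-kafka has nothing to map into record headers:
// Inside RouteBuilder.configure(); removeHeaders("*") clears all exchange
// headers so camel-kafka has none to map onto the Kafka record
// (assumption: this avoids the Magic v1 error; untested here)
from("file://D:\\Server\\Camel_Pro\\testData\\kafka-output\\")
    .removeHeaders("*")
    .to("kafka:camelproducer?brokers=node01:9092,node02:9092,node03:9092");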
package it.luke.camel_kafka;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

/**
 * @author luke
 * @date 2020/11/7 15:13
 */
public class Camel_Producer {
    // Produce data to Kafka by routing files from a local directory
    public static void main(String[] args) {
        DefaultCamelContext camelContext = new DefaultCamelContext();
        try {
            // Route definition: pick up files and push them to the camelproducer topic
            camelContext.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("file://D:\\Server\\Camel_Pro\\testData\\kafka-output\\").routeId("myid")
                        .to("kafka:camelproducer?brokers=node01:9092,node02:9092,node03:9092");
                }
            });
            // Start the Camel context
            camelContext.start();
            // Block the main thread so the route keeps running
            synchronized (Camel_Producer.class) {
                Camel_Producer.class.wait();
            }
            // Stop the Camel context
            camelContext.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
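The route that actually bridges the shared topic to our own cluster follows the same pattern, with a Kafka consumer on the from side. A minimal sketch, assuming both topics live on the demo cluster (the class name Camel_Forwarder and the groupId are illustrative, not from the original):
package it.luke.camel_kafka;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class Camel_Forwarder {
    // Consume the shared commonAPI topic and forward every record to camelproducer
    public static void main(String[] args) throws Exception {
        DefaultCamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("kafka:commonAPI?brokers=node01:9092,node02:9092,node03:9092"
                        + "&groupId=camelForwarder")
                    .routeId("forwardRoute")
                    .to("kafka:camelproducer?brokers=node01:9092,node02:9092,node03:9092");
            }
        });
        camelContext.start();
        // Block the main thread so the forwarding route keeps running
        synchronized (Camel_Forwarder.class) {
            Camel_Forwarder.class.wait();
        }
    }
}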
Start the program, write data into commonAPI, and consume camelproducer:
bin/kafka-console-consumer.sh --from-beginning --topic camelproducer --zookeeper node01:2181,node02:2181,node03:2181
You can see that the data pushed over is being consumed from camelproducer.
Kafka clusters in real production environments usually require authentication; refer to the official documentation for the corresponding security parameters:
from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" +
"&groupId=A" +
"&sslKeystoreLocation=/path/to/keystore.jks" +
"&sslKeystorePassword=changeit" +
"&sslKeyPassword=changeit" +
"&securityProtocol=SSL")
.to("mock:result");
https://camel.apache.org/components/latest/kafka-component.html#_ssl_configuration
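Applied to the background scenario, the consumer side of the forwarding route would carry those same options when the shared cluster enforces SSL; the broker address, keystore path, and passwords below are placeholders:
// Inside RouteBuilder.configure(); sharedNode01 and the keystore values are placeholders
from("kafka:commonAPI?brokers=sharedNode01:9092"
        + "&groupId=camelForwarder"
        + "&securityProtocol=SSL"
        + "&sslKeystoreLocation=/path/to/keystore.jks"
        + "&sslKeystorePassword=changeit"
        + "&sslKeyPassword=changeit")
    .to("kafka:camelproducer?brokers=node01:9092,node02:9092,node03:9092");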