Environment: two Ubuntu 10 machines (32-bit) + JDK 1.8 (32-bit) + kafka2.11 + IntelliJ 15
Goal: from Java, start one Producer and one Consumer; on Linux, start another Consumer.
Then observe whether the three can communicate with each other.
Note that the Java Producer and Consumer are both built with Maven; the parent project is kafka_demo, and each of them is a module.
1. Java Producer Demo:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * Created by Germmy on 2016/7/10.
 */
public class KafkaProducer {

    private final Producer<String, String> producer;

    public final static String TOPIC = "TEST-TOPIC";

    private KafkaProducer() {
        Properties props = new Properties();
        // The producer talks to brokers directly (broker list, not ZooKeeper).
        props.put("metadata.broker.list", "192.168.200.129:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // -1: wait for all in-sync replicas to acknowledge each write.
        props.put("request.required.acks", "-1");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message" + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
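Each KeyedMessage above carries a key, and the 0.8 producer's default partitioner routes a message by the key's hash, so the same key always lands on the same partition. A minimal stdlib sketch of that routing idea (the class and method names here are hypothetical, not the actual kafka.producer.DefaultPartitioner):

```java
public class PartitionerSketch {

    // Mimic the 0.8 default routing: non-negative hash of the key,
    // modulo the number of partitions. Masking with 0x7fffffff avoids
    // the negative result Math.abs would give for Integer.MIN_VALUE.
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key is always routed to the same partition.
        System.out.println("key 1000 -> partition " + partitionFor("1000", 4));
        System.out.println("key 1001 -> partition " + partitionFor("1001", 4));
    }
}
```

This is why keyed sends preserve per-key ordering: all messages for one key go through one partition.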
2. Java Consumer Demo:
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * Created by Germmy on 2016/7/10.
 */
public class KafkaConsumer {

    private final ConsumerConnector consumer;

    public final static String TOPIC = "TEST-TOPIC";

    private KafkaConsumer() {
        Properties props = new Properties();
        // The 0.8 high-level consumer coordinates through ZooKeeper.
        props.put("zookeeper.connect", "192.168.200.129:2181");
        props.put("group.id", "jd-group"); // what exactly is a consumer group?
        props.put("zookeeper.session.timeout.ms", "60000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        // Start from the earliest offset when the group has none committed.
        props.put("auto.offset.reset", "smallest");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    void consume() {
        // Request one stream (consuming thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(KafkaConsumer.TOPIC, new Integer(1));
        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(KafkaConsumer.TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()) { // blocks until the next message arrives
            System.out.println(it.next().message());
        }
    }

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }
}
Note that the consumer's ZooKeeper session timeout needs to be set fairly generously: with the earlier value of 4 seconds it threw a connection-timeout exception, so I set it to 60 seconds here (reference link).
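On the consumer-group question flagged in the code above: within one group, each partition of a topic is consumed by exactly one member, which is how Kafka spreads load while keeping per-partition ordering. A stdlib sketch of range-style assignment, roughly how the 0.8 rebalance hands out partitions (class and method names are hypothetical):

```java
import java.util.*;

public class GroupAssignmentSketch {

    // Range-style assignment: sort the consumers, then give each one a
    // contiguous block of partitions; the first (numPartitions % consumers)
    // members get one extra partition each.
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        List<String> sorted = new ArrayList<String>(consumers);
        Collections.sort(sorted);
        Map<String, List<Integer>> result = new LinkedHashMap<String, List<Integer>>();
        int perConsumer = numPartitions / sorted.size();
        int extra = numPartitions % sorted.size();
        int partition = 0;
        for (int i = 0; i < sorted.size(); i++) {
            int count = perConsumer + (i < extra ? 1 : 0);
            List<Integer> owned = new ArrayList<Integer>();
            for (int j = 0; j < count; j++) {
                owned.add(partition++); // each partition goes to exactly one consumer
            }
            result.put(sorted.get(i), owned);
        }
        return result;
    }

    public static void main(String[] args) {
        // Two consumers in group "jd-group" splitting a 3-partition topic.
        System.out.println(assign(Arrays.asList("c1", "c2"), 3));
    }
}
```

Two consumers started with the same group.id therefore split the work, while a consumer in a different group receives every message again.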
3. Parent POM:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.germmy</groupId>
    <artifactId>kafkademo</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>

    <modules>
        <module>kafka_producer</module>
        <module>kafka_consumer</module>
    </modules>

    <dependencies>
        <!--<dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.0</version>
        </dependency>-->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
            <version>0.8.0</version>
            <!-- jms/jmxri/jmxtools cannot be resolved from public repos; exclude them. -->
            <exclusions>
                <exclusion>
                    <groupId>javax.jms</groupId>
                    <artifactId>jms</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.sun.jmx</groupId>
                    <artifactId>jmxri</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.sun.jdmk</groupId>
                    <artifactId>jmxtools</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</project>
Note that the Kafka client pulled in here must exclude the JMS-related dependencies, otherwise the build fails with the following error:
Could not transfer artifact com.sun.jdmk:jmxtools:jar:1.2.1 from/to java.net (https://maven-repository.dev.java.net/nonav/repository): No connector available to access repository java.net (https://maven-repository.dev.java.net/nonav/repository) of type legacy using the available factories
At this point, the demo runs successfully.
4. Leftover questions:
* Packaging the Producer with mvn fails; the exact cause is still to be investigated.
* props.put("group.id","jd-group"); // the consumer-group concept still needs to be pinned down.
* After launching several main methods in IntelliJ, their consoles are grouped into tabs, so there is no need to switch between them explicitly.
* Alt+1 opens the Project view directly.
* Clicking the square icon in the bottom-left corner shows or hides all the side bars.
* On the JMS error with the 0.8 client: some say it occurs because the build resolves against the Maven 2 repository, so the repositories setting should point at the Maven 3 repository instead (reference link).
* IntelliJ sometimes does not download Maven dependencies automatically; in that case, try running mvn compile directly from the command line.
* The project is already linked to the remote repository via the git command line, but how to set up that link from within IntelliJ is still unknown.
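One setting above worth unpacking is auto.offset.reset=smallest: when the group has no committed offset, the consumer starts from the earliest retained message, whereas "largest" would start at the tail and skip everything already in the log. A stdlib sketch of that decision (class and method names are hypothetical):

```java
public class OffsetResetSketch {

    // Decide where a consumer starts reading a partition.
    // committed == null models a brand-new consumer group.
    static long startingOffset(Long committed, long earliest, long latest, String reset) {
        if (committed != null) {
            return committed; // resume where the group left off
        }
        // "smallest" -> earliest retained offset; "largest" -> current tail.
        return "smallest".equals(reset) ? earliest : latest;
    }

    public static void main(String[] args) {
        // New group on a log holding offsets 0..41:
        System.out.println(startingOffset(null, 0L, 42L, "smallest")); // replays from 0
        System.out.println(startingOffset(null, 0L, 42L, "largest"));  // only new messages
        System.out.println(startingOffset(7L, 0L, 42L, "smallest"));   // resumes at 7
    }
}
```

This is why the Linux console consumer and the Java consumer in this demo both see the messages produced before they started.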
-------------------------------------------------------------------------------------------------

This post presented an example Kafka producer and consumer written in Java, covering how to configure and start them and how to verify that they communicate with each other, and recorded several problems encountered during development along with their solutions.