Kafka version: 0.8.2.1
The API provided by the Producer class can be used to create new messages for a specific topic in Kafka.
1. To use the Producer from the Java API, we first import the supporting classes it depends on:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
2. Define the Producer properties and build a ProducerConfig instance, which tells the Producer how to find the Kafka cluster, how to serialize messages, and so on:
Properties props = new Properties();
props.put("metadata.broker.list", "ocs103:9092,ocs104:9092,ocs102:9092");
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("partitioner.class", "Mykafkatest.SimplePartitioner");
props.put("request.required.acks","1");
ProducerConfig config = new ProducerConfig(props);
metadata.broker.list: gives the Producer one or more brokers it can contact to discover the leader for each topic; it does not need to list every broker in the cluster.
serializer.class: tells the Producer which serializer to use when sending data to the broker.
partitioner.class: decides which partition of the topic each message is sent to.
request.required.acks: after a message reaches the broker, the broker is asked to reply with an ACK; this helps avoid message loss. (A sketch of a few more optional producer settings follows below.)
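Besides the four settings above, the old (Scala) producer supports a number of other options. Below is a minimal sketch of an asynchronous configuration; it assumes the 0.8.x option names producer.type, batch.num.messages and queue.buffering.max.ms, and the values are only illustrative:
Properties asyncProps = new Properties();
asyncProps.put("metadata.broker.list", "ocs103:9092");
asyncProps.put("serializer.class", "kafka.serializer.StringEncoder");
// -1 waits for all in-sync replicas: slower than 1, but safer against message loss
asyncProps.put("request.required.acks", "-1");
// switch from the default sync mode to async batching
asyncProps.put("producer.type", "async");
asyncProps.put("batch.num.messages", "200");     // messages collected per batch
asyncProps.put("queue.buffering.max.ms", "500"); // max time to buffer before a send
ProducerConfig asyncConfig = new ProducerConfig(asyncProps);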
3. Instantiate the Producer from the ProducerConfig
Producer<String, String> producer = new Producer<String, String>(config);
Producer is a generic class; when instantiating it we supply two type parameters: the first is the type of the partition key, the second the type of the message (a sketch with different type parameters follows below).
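The type parameters just have to match the configured encoders. As a sketch, assuming the old producer's key.serializer.class option and the built-in kafka.serializer.DefaultEncoder (which passes byte[] through unchanged), a producer that sends raw bytes keyed by Strings could be created like this:
Properties byteProps = new Properties();
byteProps.put("metadata.broker.list", "ocs103:9092");
// value serializer: byte[] is passed through as-is
byteProps.put("serializer.class", "kafka.serializer.DefaultEncoder");
// key serializer: keys remain plain strings
byteProps.put("key.serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig byteConfig = new ProducerConfig(byteProps);
Producer<String, byte[]> byteProducer = new Producer<String, byte[]>(byteConfig);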
4. Create the message
String topic = "mykafka4";
String key = "192.168.0.102";
String msg = "test-kafka258";
KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, key, msg);
KeyedMessage is a case class that carries topic, key, partKey and message; key and partKey may be omitted (Java construction examples follow after the class definition):
case class KeyedMessage[K, V](val topic: String, val key: K, val partKey: Any, val message: V) {
  if(topic == null)
    throw new IllegalArgumentException("Topic cannot be null.")

  def this(topic: String, message: V) = this(topic, null.asInstanceOf[K], null, message)

  def this(topic: String, key: K, message: V) = this(topic, key, key, message)

  def partitionKey = {
    if(partKey != null)
      partKey
    else if(hasKey)
      key
    else
      null
  }

  def hasKey = key != null
}
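From Java, the auxiliary constructors above appear as overloads of KeyedMessage. A short sketch reusing the topic/key/message values from this post (the explicit partKey value "102" is made up for illustration):
// topic + message: no key, so the old producer picks a partition itself (the partitioner is not consulted)
KeyedMessage<String, String> noKey = new KeyedMessage<String, String>("mykafka4", "test-kafka258");
// topic + key + message: the key doubles as the partition key (the form used in this post)
KeyedMessage<String, String> keyed = new KeyedMessage<String, String>("mykafka4", "192.168.0.102", "test-kafka258");
// topic + key + partKey + message: partition on partKey while still storing key with the message
KeyedMessage<String, String> routed = new KeyedMessage<String, String>("mykafka4", "192.168.0.102", "102", "test-kafka258");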
5. Send the message
producer.send(data);
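send() also has an overload that takes a java.util.List of KeyedMessage, so several messages can be handed over in one call. A minimal sketch (the message contents here are made up; List and ArrayList come from java.util):
List<KeyedMessage<String, String>> batch = new ArrayList<KeyedMessage<String, String>>();
batch.add(new KeyedMessage<String, String>("mykafka4", "192.168.0.102", "msg-1"));
batch.add(new KeyedMessage<String, String>("mykafka4", "192.168.0.102", "msg-2"));
producer.send(batch); // one call, multiple messages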
6. Notes
a. metadata.broker.list must use the hostnames that the Kafka brokers registered in ZooKeeper; otherwise the following exception is thrown:
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
b. The topic needs to be created in advance; otherwise the first send will fail.
--------
The complete code is as follows:
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class Producertest {
    public static void main(String[] args) {
        // producer configuration
        Properties props = new Properties();
        //props.put("metadata.broker.list", "ocs103:9092,ocs104:9092,ocs102:9092");
        props.put("metadata.broker.list", "ocs103:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", "Mykafkatest.SimplePartitioner");
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);

        Producer<String, String> producer = new Producer<String, String>(config);

        // build one message keyed by an IP-style string
        String topic = "mykafka8";
        String key = "192.168.0.102";
        String msg = "test-kafka258";
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, key, msg);

        try {
            // send the same message ten times
            int i = 0;
            while (i < 10) {
                System.err.println("producer.send[" + i + "]");
                i++;
                producer.send(data);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        producer.close();
        System.err.println("hello");
    }
}
package Mykafkatest; // must match the partitioner.class property used above

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class SimplePartitioner implements Partitioner {

    public SimplePartitioner(VerifiableProperties props) {
    }

    // Take the last dot-separated field of the key (e.g. the last octet of an IP)
    // and map it onto one of the topic's partitions.
    public int partition(Object key, int a_numPartitions) {
        int partition = 0;
        String stringKey = (String) key;
        int offset = stringKey.lastIndexOf('.');
        if (offset > 0) {
            partition = Integer.parseInt(stringKey.substring(offset + 1)) % a_numPartitions;
        }
        return partition;
    }
}
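For example, with the key "192.168.0.102" used above and a topic that has, say, 3 partitions, SimplePartitioner takes the last octet and computes 102 % 3 = 0, so every message carrying that key lands in partition 0.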