Native approach
Whether you are writing a producer or a consumer, the dependency to import is kafka-clients. The Maven coordinates are:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.1.0</version>
</dependency>
Producer
- Creating the producer
The Kafka producer class is KafkaProducer; it is constructed as follows:
Properties props = new Properties();
// Kafka cluster address (bootstrap servers)
props.put("bootstrap.servers", "10.0.55.229:9092");
// serializer for the message key
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// serializer for the message value
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(props);
- Creating the message
Once the KafkaProducer is constructed, you need to build the message to send. The Kafka message class is ProducerRecord, and as the source shows, it offers several constructors:
public class ProducerRecord<K, V> {
    /**
     * All the other constructors delegate to this one, so understanding its parameters is enough.
     * Creates a record with a specified timestamp to be sent to a specified topic and partition
     * @param topic - The topic the record will be appended to
     * @param partition - The partition to which the record should be sent. If not specified,
     *                    Kafka computes the target partition using the configured Partitioner
     * @param timestamp - The timestamp of the record, in milliseconds since epoch. If null, the producer
     *                    will assign the timestamp using System.currentTimeMillis()
     * @param key - The key that will be included in the record; Kafka computes the partition from this key
     * @param value - The record contents
     * @param headers - The headers that will be included in the record
     */
    public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value, Iterable<Header> headers) {
        // topic is the only mandatory argument
        if (topic == null)
            throw new IllegalArgumentException("Topic cannot be null.");
        // the timestamp must not be negative
        if (timestamp != null && timestamp < 0)
            throw new IllegalArgumentException(
                    String.format("Invalid timestamp: %d. Timestamp should always be non-negative or null.", timestamp));
        // the partition must not be negative
        if (partition != null && partition < 0)
            throw new IllegalArgumentException(
                    String.format("Invalid partition: %d. Partition number should always be non-negative or null.", partition));
        this.topic = topic;
        this.partition = partition;
        this.key = key;
        this.value = value;
        this.timestamp = timestamp;
        this.headers = new RecordHeaders(headers);
    }

    public ProducerRecord(String topic, Integer partition, Long timestamp, K key, V value) {
        this(topic, partition, timestamp, key, value, null);
    }

    public ProducerRecord(String topic, Integer partition, K key, V value, Iterable<Header> headers) {
        this(topic, partition, null, key, value, headers);
    }

    public ProducerRecord(String topic, Integer partition, K key, V value) {
        this(topic, partition, null, key, value, null);
    }

    public ProducerRecord(String topic, K key, V value) {
        this(topic, null, null, key, value, null);
    }

    public ProducerRecord(String topic, V value) {
        this(topic, null, null, null, value, null);
    }

    // ...
}
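As the javadoc above notes, when no partition is given, Kafka derives one from the key via the configured Partitioner. The default partitioner hashes the serialized key (using murmur2) modulo the partition count, which is what guarantees that records with the same key land in the same partition. The following self-contained sketch illustrates that idea with a plain array hash in place of murmur2; it is a simplification for clarity, not Kafka's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PartitionSketch {

    // Simplified stand-in for Kafka's default partitioner: hash the key bytes,
    // then take the result modulo the number of partitions.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        // Kafka actually uses murmur2 here; Arrays.hashCode is only for illustration
        int hash = Arrays.hashCode(keyBytes);
        // mask off the sign bit so the modulo result is never negative
        return (hash & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-1001", 3);
        int p2 = partitionFor("order-1001", 3);
        // the same key always maps to the same partition
        System.out.println(p1 == p2);          // true
        System.out.println(p1 >= 0 && p1 < 3); // true
    }
}
```

Because the partition is a pure function of the key and the partition count, ordering is preserved per key as long as the number of partitions does not change.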
Below we construct the most common form of ProducerRecord, specifying only the topic and value and letting Kafka decide the partition:
// ProducerRecord is the message to send: topic name, key (optional), and value (the payload)
// The key is optional metadata used to decide which partition the message is written to;
// messages with the same key are written to the same partition
ProducerRecord<String, String> record = new ProducerRecord<>("ORDER-DETAIL",
        JSON.toJSONString(new Order(201806260001L, new Date(), 98000, "desc", "165120001")));
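With the record built, sending it is a single call on the producer. Note that send() is asynchronous: it returns a Future<RecordMetadata> immediately, and you can pass a Callback to be notified of the outcome. A minimal sketch, reusing the kafkaProducer and record constructed above:

```java
// send() is asynchronous: it returns immediately and the record is batched in the background
kafkaProducer.send(record, (metadata, exception) -> {
    if (exception != null) {
        // delivery failed (after any configured retries)
        exception.printStackTrace();
    } else {
        System.out.println("sent to partition " + metadata.partition()
                + " at offset " + metadata.offset());
    }
});

// For a synchronous send, block on the returned Future instead:
// RecordMetadata metadata = kafkaProducer.send(record).get();

// flush pending batches and release resources when the producer is no longer needed
kafkaProducer.close();
```

Blocking on the Future trades throughput for an immediate delivery result; the callback form keeps sends pipelined and is the usual choice in production code.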
Consumer
- Creating the consumer
The Kafka consumer class is KafkaConsumer; it is constructed as follows:
Properties props = new Properties();
// Kafka cluster address (bootstrap servers)
props.put("bootstrap.servers", "10.0.55.229:9092");
// the consumer group name
props.put("group.id", "afei");
// deserializer for the message key
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// deserializer for the message value
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
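Construction alone does not receive anything: the consumer must subscribe to one or more topics and then poll in a loop. A minimal sketch, assuming the ORDER-DETAIL topic from the producer example and the default auto-commit settings (poll(long) is the API in kafka-clients 1.1.0):

```java
kafkaConsumer.subscribe(Collections.singletonList("ORDER-DETAIL"));
try {
    while (true) {
        // poll blocks up to the given timeout (ms) waiting for records
        ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("partition=%d, offset=%d, key=%s, value=%s%n",
                    record.partition(), record.offset(), record.key(), record.value());
        }
    }
} finally {
    // leave the consumer group cleanly and release resources
    kafkaConsumer.close();
}
```

With enable.auto.commit left at its default of true, offsets are committed periodically in the background; for at-least-once processing with explicit control, disable it and call commitSync() after handling each batch.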