I recently used Kafka in a project, so I am writing down some notes here.
What Kafka is and why you would use it is not covered here; please look it up yourself if needed.
Project overview
A quick word on why we use it: the project simulates an exchange where securities are traded. During order matching we add entrusts (orders), update entrusts, add transactions (executions), and add or update positions, which means very frequent database operations. If the database cannot keep up with this load, operations fail, exceptions are thrown, and data is lost. We also plan to use Kafka as a data bus for the project later on, so at this stage the database operations go through Kafka directly; only minor changes will be needed later.
For installing and deploying Kafka, plus a demo, see: http://blog.youkuaiyun.com/u010343544/article/details/78308881
Project example
Overall idea:
1. When a request that would normally trigger a database operation comes in, we do not touch the database; instead we publish a message to Kafka.
2. A Kafka consumer then picks the message up and performs the actual database operation (insert or update); if that fails, for now we just log the error. A minimal sketch of this flow follows.
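To make the idea concrete, here is a rough sketch of what a call site in the matching code looks like under this approach. The EntrustWriter class and saveEntrust method are made up for illustration; KafkaProduce, ReadKafkaPropertiesUtil, and DayEntrustDomain are the project classes shown later in this post (imports omitted).
public class EntrustWriter {
    public void saveEntrust(DayEntrustDomain entrust) {
        // before: aIDayEntrustDao.insertOne(entrust); -- direct database write
        // now: publish the request to Kafka; the consumer performs the insert later
        String topic = ReadKafkaPropertiesUtil.getTopic();    // "test" in kafka.properties
        String key = ReadKafkaPropertiesUtil.getKey989847();  // 989847 = insert entrust
        KafkaProduce.sendMsg(topic, key, JSON.toJSONString(entrust));
    }
}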
Add the Kafka dependency to pom.xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.11.0.1</version>
</dependency>
Loading the Kafka configuration
Configuration file: kafka.properties
##produce
bootstrap.servers=10.20.135.20:9092
producer.type=sync
request.required.acks=1
serializer.class=kafka.serializer.DefaultEncoder
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
bak.partitioner.class=kafka.producer.DefaultPartitioner
bak.key.serializer=org.apache.kafka.common.serialization.StringSerializer
bak.value.serializer=org.apache.kafka.common.serialization.StringSerializer
##consume
zookeeper.connect=10.20.135.20:2181
group.id=test-consumer-group
zookeeper.session.timeout.ms=4000
zookeeper.sync.time.ms=200
#enable.auto.commit=false
auto.commit.interval.ms=1000
auto.offset.reset=smallest
serializer.class=kafka.serializer.StringEncoder
# kafka message configuration (topic and operation keys)
kafka.consumer.topic=test
kafka.consumer.key.989847=989847
kafka.consumer.key.989848=989848
kafka.consumer.key.989849=989849
kafka.consumer.key.989850=989850
Utility class that loads the configuration:
import java.io.File;
import java.io.FileInputStream;
import java.util.Properties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Loads the configured kafka.properties file
*
*/
public class ReadKafkaPropertiesUtil {
/**
* Logger
*/
private static Logger LOGGER = LoggerFactory.getLogger(ReadKafkaPropertiesUtil.class);
/**
* Properties loaded from kafka.properties
*/
private static Properties properties;
/**
* Read kafka.properties
*/
static {
// path of kafka.properties
LOGGER.debug(" read kafka.properties ");
properties = new Properties();
String path = ReadKafkaPropertiesUtil.class.getResource("/").getFile().toString() + "kafka.properties";
LOGGER.debug(" read kafka.properties path:" + path);
try (FileInputStream fis = new FileInputStream(new File(path))) {
properties.load(fis);
} catch (Exception e) {
LOGGER.error(" Kafka Produce init kafka properties " + e);
}
}
/**
* Get the Kafka configuration properties
*
* @return
*/
public static Properties getProperties() {
return properties;
}
/**
* Get the Kafka topic
*
*
* @return
*/
public static String getTopic() {
return properties.getProperty("kafka.consumer.topic");
}
/**
* Get kafka.consumer.key.989847
*
* @return
*/
public static String getKey989847() {
return properties.getProperty("kafka.consumer.key.989847");
}
/**
* Get kafka.consumer.key.989848
*
* @return
*/
public static String getKey989848() {
return properties.getProperty("kafka.consumer.key.989848");
}
/**
* Get kafka.consumer.key.989849
*
* @return
*/
public static String getKey989849() {
return properties.getProperty("kafka.consumer.key.989849");
}
/**
* Get kafka.consumer.key.989850
*
* @return
*/
public static String getKey989850() {
return properties.getProperty("kafka.consumer.key.989850");
}
/**
* Private constructor
*/
private ReadKafkaPropertiesUtil() {
}
}
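A quick usage sketch (illustrative only): the configuration is loaded once by the static initializer and shared by the producer and the consumer.
// illustrative usage of ReadKafkaPropertiesUtil
Properties props = ReadKafkaPropertiesUtil.getProperties();        // passed to the producer/consumer
String topic = ReadKafkaPropertiesUtil.getTopic();                 // "test"
String insertEntrustKey = ReadKafkaPropertiesUtil.getKey989847();  // "989847"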
Kafka producer
The Kafka producer simply sends messages to Kafka; each message carries a topic, a key, and a value. Code:
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.fastjson.JSON;
import com.hundsun.ftenant.common.exception.TengException;
/**
* Kafka producer
*
*/
public class KafkaProduce {
/**
* Logger
*/
private static Logger LOGGER = LoggerFactory.getLogger(KafkaProduce.class);
private static final String SEND_MESSAGE_FAILED_NUM = "12000002";
private static final String SEND_MESSAGE_FAILED_MESSAGE = " send message to kafka error :";
/**
* Send a message
*
* @param topic
* @param key
* @param value
*/
public static void sendMsg(String topic, String key, String value) {
Properties properties = ReadKafkaPropertiesUtil.getProperties();
// instantiate the producer
KafkaProducer<String, String> kp = new KafkaProducer<String, String>(properties);
// wrap the message in a ProducerRecord
ProducerRecord<String, String> pr = new ProducerRecord<String, String>(topic, key, value);
// send the record
kp.send(pr, new Callback() {
// callback, invoked on the producer's I/O thread; note that an exception thrown here does not propagate to the caller of sendMsg
@Override
public void onCompletion(RecordMetadata metadata, Exception exception) {
if (null != exception) {
LOGGER.error(" Kafka Produce send message error " + exception);
LOGGER.error(" Kafka Produce send message info: metadata: " + JSON.toJSONString(metadata));
throw new TengException(SEND_MESSAGE_FAILED_NUM, SEND_MESSAGE_FAILED_MESSAGE + exception.getMessage());
}
}
});
// close the producer (blocks until pending sends complete)
kp.close();
}
}
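A usage sketch, assuming a populated DayTransactionDomain is at hand (key 989849 is the "insert transaction" key from kafka.properties; buildTransaction() is a hypothetical helper):
// illustrative only: publish an "insert transaction" request
DayTransactionDomain transaction = buildTransaction(); // hypothetical helper
KafkaProduce.sendMsg(ReadKafkaPropertiesUtil.getTopic(),
        ReadKafkaPropertiesUtil.getKey989849(),
        JSON.toJSONString(transaction));
Note that sendMsg creates and closes a new KafkaProducer on every call. KafkaProducer is thread-safe and is normally created once and reused, so a single long-lived producer instance would likely be cheaper; the per-call instance here just keeps the method self-contained.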
Kafka consumer
Approach:
1. When the application starts, the Kafka listener class KafkaConsumeLinstener is started
2. The listener starts the Kafka consumer thread class KafkaConsumeRunnable
3. The thread's run method starts the Kafka consumer and delegates message handling to the IKafkaDataConsumer interface
4. The concrete handler KafkaDataConsumer implements the IKafkaDataConsumer interface
Code:
1. Configure the Kafka listener in web.xml
Example:
<!-- The ContextLoaderListener must be registered before the Kafka listener,
because the Kafka listener uses the ServletContextEvent to load the DAO classes.
When our listener starts, Spring injection is not available to it, so annotations
such as @Service cannot be relied on there; the classes needed later are
initialized manually through the ServletContextEvent.
-->
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<!-- kafka -->
<listener>
<listener-class>com.hundsun.cloudtrade.match.kafka.KafkaConsumeLinstener</listener-class>
</listener>
2. The Kafka listener
Example:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Kafka consumer listener
*
*/
public class KafkaConsumeLinstener implements ServletContextListener {
/**
* Logger
*/
private static Logger LOGGER = LoggerFactory.getLogger(KafkaConsumeLinstener.class);
@Override
public void contextInitialized(ServletContextEvent sce) {
LOGGER.debug(" init kafka consume thread...... ");
Thread t = new Thread(new KafkaConsumeRunnable(sce));
t.start();
LOGGER.debug(" init kafka consume thread end ");
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
// TODO Auto-generated method stub
}
}
3. The Kafka consumer thread
We implement the Runnable interface. This class uses the old ZooKeeper-based high-level consumer API (kafka.consumer / kafka.javaapi.consumer), which is why kafka.properties contains zookeeper.connect and group.id.
Example:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import javax.servlet.ServletContextEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.context.support.WebApplicationContextUtils;
import com.alibaba.fastjson.JSON;
import com.hundsun.cloudtrade.match.dao.IDayEntrustDao;
import com.hundsun.cloudtrade.match.dao.IDayHoldDao;
import com.hundsun.cloudtrade.match.dao.IDayTransactionDao;
import com.hundsun.ftenant.common.kafka.ReadKafkaPropertiesUtil;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;
/**
* Kafka consumer thread
*
*/
public class KafkaConsumeRunnable implements Runnable {
/**
* Logger
*/
private static Logger LOGGER = LoggerFactory.getLogger(KafkaConsumeRunnable.class);
// entrust (order) DAO
private final IDayEntrustDao aIDayEntrustDao;
// transaction (execution) DAO
private final IDayTransactionDao aIDayTransactionDao;
// position DAO
private final IDayHoldDao aIDayHoldDao;
/**
* Kafka message handling interface
*/
private final IKafkaDataConsumer kafkaDataConsumer;
/**
* Spring injection is not available in this class, so manually look up the DAO beans needed later
*
* @param sce
*/
public KafkaConsumeRunnable(ServletContextEvent sce) {
LOGGER.debug(" kafka consumer init dao class ");
aIDayHoldDao = WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext()).getBean(IDayHoldDao.class);
aIDayEntrustDao = WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext()).getBean(IDayEntrustDao.class);
aIDayTransactionDao = WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext()).getBean(IDayTransactionDao.class);
kafkaDataConsumer = new KafkaDataConsumer(aIDayHoldDao, aIDayEntrustDao, aIDayTransactionDao);
}
/*
* Read and dispatch Kafka messages
*
*/
@Override
public void run() {
// load the Kafka configuration
Properties properties = ReadKafkaPropertiesUtil.getProperties();
// get the topic from the configuration
String TOPIC = ReadKafkaPropertiesUtil.getTopic();
LOGGER.info(" kafka consumer topic : " + TOPIC);
LOGGER.info(" kafka consumer properties : " + JSON.toJSONString(properties));
ConsumerConfig config = new ConsumerConfig(properties);
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(TOPIC, new Integer(1));
StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
ConsumerConnector consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
Map<String, List<KafkaStream<String, String>>> consumerMap = consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
KafkaStream<String, String> stream = consumerMap.get(TOPIC).get(0);
ConsumerIterator<String, String> it = stream.iterator();
while (it.hasNext()) {
// message received from Kafka
MessageAndMetadata<String, String> keyValue = it.next();
LOGGER.debug(" kafka get message , key : " + keyValue.key() + " ; value : " + keyValue.message());
// process the Kafka message
kafkaDataConsumer.dealKafkaMessage(keyValue.key(), keyValue.message());
}
}
}
4. The Kafka message handler
Interface and implementation
Note: wrap the body of dealKafkaMessage in its own try/catch and handle exceptions there; do not let them propagate. If an exception escapes, the consumer thread dies: Kafka has received the message, but it never gets consumed. The logs show that Kafka knows there are messages pending, yet the offset pointer never moves. (I am not sure whether this is related to the thread being a Runnable, which cannot throw checked exceptions, whereas a Callable can.) A small defensive sketch for the call site follows.
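In addition to the try/catch inside dealKafkaMessage, the call site in the consumer loop can be guarded as well, so that even an unexpected exception cannot kill the thread. A minimal sketch, as an addition to the run() loop shown above (not part of the original code):
// illustrative only: guard the handler call so the consumer thread survives a bad message
while (it.hasNext()) {
    MessageAndMetadata<String, String> keyValue = it.next();
    try {
        kafkaDataConsumer.dealKafkaMessage(keyValue.key(), keyValue.message());
    } catch (Exception e) {
        // log and move on; letting this propagate would end run() and stop consumption
        LOGGER.error(" kafka deal message error, key: " + keyValue.key() + " ; value: " + keyValue.message(), e);
    }
}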
Interface:
/**
* Kafka message handling interface
*
*/
public interface IKafkaDataConsumer {
/**
* Handle a Kafka message
*
* @param key
* @param message
*/
public void dealKafkaMessage(String key, String message);
}
Implementation:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.fastjson.JSON;
import com.hundsun.cloudtrade.match.dao.IDayEntrustDao;
import com.hundsun.cloudtrade.match.dao.IDayHoldDao;
import com.hundsun.cloudtrade.match.dao.IDayTransactionDao;
import com.hundsun.cloudtrade.match.domain.DayEntrustDomain;
import com.hundsun.cloudtrade.match.domain.DayHoldDomain;
import com.hundsun.cloudtrade.match.domain.DayTransactionDomain;
import com.hundsun.ftenant.common.kafka.ReadKafkaPropertiesUtil;
/**
* Concrete Kafka message handling
*
*/
public class KafkaDataConsumer implements IKafkaDataConsumer {
/**
* Logger
*/
private static Logger LOGGER = LoggerFactory.getLogger(KafkaDataConsumer.class);
// entrust (order) DAO
private final IDayEntrustDao aIDayEntrustDao;
// transaction (execution) DAO
private final IDayTransactionDao aIDayTransactionDao;
// position DAO
private final IDayHoldDao aIDayHoldDao;
/**
* @param aIDayHoldDao
* @param aIDayEntrustDao2
* @param aIDayTransactionDao
*/
public KafkaDataConsumer(IDayHoldDao aIDayHoldDao, IDayEntrustDao aIDayEntrustDao, IDayTransactionDao aIDayTransactionDao) {
this.aIDayEntrustDao = aIDayEntrustDao;
this.aIDayTransactionDao = aIDayTransactionDao;
this.aIDayHoldDao = aIDayHoldDao;
}
/*
* Process the data
*
* @see com.hundsun.ftenant.common.kafka.IKafkaDataConsumer#dealKafkaMessage(java.lang.String, java.lang.String)
*/
@Override
public void dealKafkaMessage(String key, String value) {
LOGGER.debug(" kafka get message , key : " + key + " ; value : " + value);
// records whether the database operation succeeded (rows affected)
int result = 0;
try {
if (ReadKafkaPropertiesUtil.getKey989847().equals(key)) {
// insert an entrust (order)
LOGGER.debug(" kafka 989847 ");
DayEntrustDomain domain = JSON.parseObject(value, DayEntrustDomain.class);
result = aIDayEntrustDao.insertOne(domain);
} else if (ReadKafkaPropertiesUtil.getKey989848().equals(key)) {
// update an entrust
LOGGER.debug(" kafka 989848 ");
DayEntrustDomain domain = JSON.parseObject(value, DayEntrustDomain.class);
result = aIDayEntrustDao.updateOne(domain);
} else if (ReadKafkaPropertiesUtil.getKey989849().equals(key)) {
// insert a transaction (execution)
LOGGER.debug(" kafka 989849 ");
DayTransactionDomain domain = JSON.parseObject(value, DayTransactionDomain.class);
result = aIDayTransactionDao.insertOne(domain);
} else if (ReadKafkaPropertiesUtil.getKey989850().equals(key)) {
// insert or update a position
LOGGER.debug(" kafka 989850 ");
DayHoldDomain domain = JSON.parseObject(value, DayHoldDomain.class);
result = aIDayHoldDao.addOrUpdateOne_addAmount(domain);
}
} catch (Exception e) {
LOGGER.error(" insert or update db error. key: " + key + "; value:" + value);
LOGGER.error(" kafka deal data error " + e);
}
LOGGER.debug(" kafka insert or update database result : " + result);
}
}
That is all; feel free to leave a comment to discuss.