1. Install the ZooKeeper cluster
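A minimal zoo.cfg sketch for a three-node ensemble, assuming the same node-1/2/3.xiaoniu.com hosts used below (the dataDir path is an assumption):

```properties
# conf/zoo.cfg (identical on all three nodes)
tickTime=2000
initLimit=10
syncLimit=5
# assumed data directory
dataDir=/bigdata/zookeeper/data
clientPort=2181
server.1=node-1.xiaoniu.com:2888:3888
server.2=node-2.xiaoniu.com:2888:3888
server.3=node-3.xiaoniu.com:2888:3888
```

Each node also needs a myid file under dataDir containing just its server number (1, 2, or 3) so it can identify itself in the ensemble.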
2. Edit config/server.properties
Set a unique broker.id per broker: broker.id=1
Bind Kafka to the node's hostname: host.name=node-1.xiaoniu.com
Set the Kafka data directory: log.dirs=/bigdata/kafka_2.11-0.8.2.2/data
Set the ZooKeeper connection string: zookeeper.connect=node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181
Copy the configured Kafka directory to the other machines (changing broker.id on each)
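Taken together, the edits in step 2 amount to a server.properties fragment like this (shown for broker 1; only broker.id and host.name change per node):

```properties
# config/server.properties on node-1 (Kafka 0.8.x)
broker.id=1
host.name=node-1.xiaoniu.com
log.dirs=/bigdata/kafka_2.11-0.8.2.2/data
zookeeper.connect=node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181
```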
3. Start the broker
bin/kafka-server-start.sh -daemon config/server.properties
4. Create a topic
/bigdata/kafka_2.11-0.8.2.2/bin/kafka-topics.sh --create --zookeeper node-1:2181,node-2:2181,node-3:2181 --replication-factor 3 --partitions 3 --topic test
5. List all topics
/bigdata/kafka_2.11-0.8.2.2/bin/kafka-topics.sh --list --zookeeper localhost:2181
6. Write data to a topic
/bigdata/kafka_2.11-0.8.2.2/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic xiaoniu
7. Consume data
/bigdata/kafka_2.11-0.8.2.2/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic xiaoniu --from-beginning
8. Describe a specific topic
/bigdata/kafka_2.11-0.8.2.2/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test (Kafka 0.8)
/bigdata/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --bootstrap-server node-1.xiaoniu.com:9092,node-2.xiaoniu.com:9092,node-3.xiaoniu.com:9092 --topic my-topic --from-beginning (Kafka 0.10: the console consumer connects to the brokers via --bootstrap-server instead of ZooKeeper)
Kafka Connect:
https://kafka.apache.org/documentation/#connect
http://docs.confluent.io/2.0.0/connect/connect-jdbc/docs/index.html
Kafka Streams:
https://kafka.apache.org/documentation/streams
https://spark.apache.org/docs/1.6.1/streaming-kafka-integration.html
Kafka monitoring:
https://kafka.apache.org/documentation/#monitoring
https://github.com/quantifind/KafkaOffsetMonitor
https://github.com/yahoo/kafka-manager
Kafka ecosystem:
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem