1. Start command (run on each broker node):
nohup kafka-server-start.sh \
/home/cry/apps/kafka_2.11-1.1.0/config/server.properties \
1>/home/cry/apps/logs/kafka-logs/kafka_std.log \
2>/home/cry/apps/logs/kafka-logs/kafka_err.log &
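kafka-server-start.sh also accepts a -daemon flag that backgrounds the broker itself and writes its output to the logs/ directory under the Kafka installation, so the nohup/redirection pattern above can be shortened to:
kafka-server-start.sh -daemon \
/home/cry/apps/kafka_2.11-1.1.0/config/server.properties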
2. Create topics
kafka-topics.sh \
--create \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--replication-factor 3 \
--partitions 10 \
--topic kafka_test
kafka-topics.sh \
--create \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--replication-factor 1 \
--partitions 1 \
--topic weblog
Parameter explanation:
--create    create a Kafka topic
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181    the ZooKeeper ensemble used by the Kafka cluster
--replication-factor    number of replicas for each partition
--partitions    number of partitions
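kafka-topics.sh --create also accepts per-topic configuration overrides through --config; in the sketch below the topic name kafka_test_retained and the one-day retention value are illustrative only:
kafka-topics.sh \
--create \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--replication-factor 3 \
--partitions 10 \
--topic kafka_test_retained \
--config retention.ms=86400000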
3. List all existing Kafka topics (can be run on any node)
kafka-topics.sh \
--list \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181
4. Show the details of a specific Kafka topic:
kafka-topics.sh \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--describe \
--topic kafka_test
Sample output:
Topic:kafka_test PartitionCount:10 ReplicationFactor:3 Configs:
Topic: kafka_test Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafka_test Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafka_test Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: kafka_test Partition: 3 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: kafka_test Partition: 4 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: kafka_test Partition: 5 Leader: 1 Replicas: 1,3,2 Isr: 1,3,2
Topic: kafka_test Partition: 6 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafka_test Partition: 7 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafka_test Partition: 8 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: kafka_test Partition: 9 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
The first line is a summary of all partitions; each subsequent line describes one partition.
Topic: the topic name
Partition: the partition number within the topic
Leader: the broker that handles all reads and writes for this partition; it is elected from the partition's replica set.
Replicas: all brokers holding a replica of this partition, whether or not they are currently in service.
Isr: the in-sync replicas, i.e. the replicas that are alive and caught up with the leader.
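For troubleshooting, --describe can also be narrowed to problem partitions; the command below prints only the partitions whose ISR is smaller than the replica set:
kafka-topics.sh \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--describe \
--under-replicated-partitions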
5. Start a console producer to generate test data:
kafka-console-producer.sh \
--broker-list hadoop1:9092,hadoop2:9092,hadoop3:9092 \
--topic kafka_test
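The console producer reads one message per line from stdin, so for a scripted test the input can be piped in instead of typed interactively, for example:
echo "hello kafka" | kafka-console-producer.sh \
--broker-list hadoop1:9092,hadoop2:9092,hadoop3:9092 \
--topic kafka_test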
6. Start a console consumer to read the test data:
kafka-console-consumer.sh \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--from-beginning \
--topic kafka_test
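The --zookeeper form runs the old consumer, which is deprecated in Kafka 1.1.0; the same test works with the new consumer by pointing at the brokers instead:
kafka-console-consumer.sh \
--bootstrap-server hadoop1:9092,hadoop2:9092,hadoop3:9092 \
--from-beginning \
--topic kafka_test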
7. Check the maximum and minimum offsets of a topic partition (the command below returns the maximum; see the variant after it for the minimum)
kafka-run-class.sh \
kafka.tools.GetOffsetShell \
--topic kafka_test \
--time -1 \
--broker-list hadoop1:9092,hadoop2:9092,hadoop3:9092 \
--partitions 1
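--time -1 asks for the latest offset of each queried partition (the maximum), and --partitions 1 restricts the query to partition 1 (omit it to query every partition); passing --time -2 instead returns the earliest offset (the minimum):
kafka-run-class.sh \
kafka.tools.GetOffsetShell \
--topic kafka_test \
--time -2 \
--broker-list hadoop1:9092,hadoop2:9092,hadoop3:9092 \
--partitions 1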
8. Increase the number of partitions of a topic (the partition count can only be increased, never decreased)
kafka-topics.sh \
--alter \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--topic kafka_test \
--partitions 20
Note: kafka-topics.sh --alter cannot change a topic's replication factor, so a command such as --alter --replication-factor 2 is rejected. Changing the replication factor requires a partition reassignment with kafka-reassign-partitions.sh, as sketched below.
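A minimal sketch of that reassignment, assuming brokers 1, 2 and 3 and a target of two replicas per partition; the file name reassign.json and the partitions listed are illustrative, and the JSON must cover every partition whose replica set should change:
cat > reassign.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"kafka_test","partition":0,"replicas":[1,2]},
  {"topic":"kafka_test","partition":1,"replicas":[2,3]}
]}
EOF
kafka-reassign-partitions.sh \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--reassignment-json-file reassign.json \
--execute
Running the same command with --verify instead of --execute reports when the reassignment has completed.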
9. Delete topics
kafka-topics.sh \
--delete \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--topic kafka_test
kafka-topics.sh \
--delete \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181 \
--topic weblog
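Deletion only takes full effect when the brokers run with delete.topic.enable=true (the default in Kafka 1.0 and later); otherwise the topic is merely marked for deletion. Listing the topics again is a quick way to confirm they are gone:
kafka-topics.sh \
--list \
--zookeeper hadoop1:2181,hadoop2:2181,hadoop3:2181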